content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name
---|---|---|---|---|---|---|---|---
Q:
How can I provide a safe template for users to modify with Python?
I am building a multi-user web application where each user can have their own site under my application, and I am considering how to let users modify their templates without creating security problems. I have evaluated some Python template engines. Genshi, for example, is a pretty wonderful template engine, but it might be dangerous to allow users to modify Genshi templates. It has syntax like this:
<?python
?>
This syntax lets you run whatever Python can do. I notice that it can apparently be disabled by passing some parameter, but there are still a lot of potential problems. For example, users can access built-in functions and the methods of passed variables. If I pass an ORM object to the template, it might contain methods and attributes that I don't want users to touch, like this:
site.metadata.connection.execute("drop table xxx")
So my question is: how can I allow users to modify the templates of their sites without security problems? Any Python template engine can be used.
Thanks.
A:
Jinja2 is a Django-ish templating system that has a sandboxing feature. I've never attempted to use the sandboxing, but I quite like Jinja2 as an alternative to Django's templates. It still promotes separation of template from business logic, but has more Pythonic calling conventions, namespacing, etc.
Jinja2 Sandbox
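A minimal sketch of how that sandbox is typically used (the template strings are made up for illustration; SandboxedEnvironment and SecurityError are the documented Jinja2 names):
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

env = SandboxedEnvironment()
# Ordinary rendering works as usual:
print env.from_string("Hello {{ user.name }}!").render(user={"name": "World"})
# Unsafe attribute access (underscore names, etc.) is rejected at render time:
try:
    env.from_string("{{ obj.__class__ }}").render(obj=object())
except SecurityError, e:
    print "blocked:", e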
A:
Look at the Django template engine. It does not support execution of arbitrary Python code, and all accessible variables must be passed into the template explicitly. This should be a pretty good foundation for building user-customizable pages. Beware that you'll still need to handle occasional syntax errors from your users.
A:
In Rails there's something called Liquid; you might take a look at that to get some ideas. Another idea: at the very least, you could convert your objects into simple dictionaries - something like a JSON representation - and then pass those to your template.
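As a hedged sketch of that last idea (the attribute names here are hypothetical), you would hand the template only plain values rather than the live ORM object:
def site_to_dict(site):
    # Expose plain data only -- no connection handles, no dangerous methods.
    return {
        "title": site.title,        # hypothetical attribute
        "owner": site.owner_name,   # hypothetical attribute
    }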
| How can I provide a safe template for users to modify with Python? | I am building a multi-user web application where each user can have their own site under my application, and I am considering how to let users modify their templates without creating security problems. I have evaluated some Python template engines. Genshi, for example, is a pretty wonderful template engine, but it might be dangerous to allow users to modify Genshi templates. It has syntax like this:
<?python
?>
This syntax lets you run whatever Python can do. I notice that it can apparently be disabled by passing some parameter, but there are still a lot of potential problems. For example, users can access built-in functions and the methods of passed variables. If I pass an ORM object to the template, it might contain methods and attributes that I don't want users to touch, like this:
site.metadata.connection.execute("drop table xxx")
So my question is: how can I allow users to modify the templates of their sites without security problems? Any Python template engine can be used.
Thanks.
| [
"Jinja2 is a Django-ish templating system that has a sandboxing feature. I've never attempted to use the sandboxing, but I quite like Jinja2 as an alternative to Django's templates. It still promotes separation of template from business logic, but has more Pythonic calling conventions, namespacing, etc. \nJinja2 Sandbox\n",
"Look at Django templte engine. It does not support execution of arbitrary python code and all accessible variables must be passed into template explicity. This should be pretty good foundation for building user-customizable pages. Beware that you'll still need to handle occasional syntax errors from your users.\n",
"In rails there's something called liquid. You might take a look at that to get some ideas. Another idea: at the very least, one thing you could do is to convert your objects into simple dictionary - something like a json representation, and then pass to your template.\n"
] | [
5,
1,
0
] | [
"The short answer is probably \"you can't\".\nThe best you can probably do is to trap the individual users in virtual machines or sandboxes.\n"
] | [
-1
] | [
"python",
"templates",
"web"
] | stackoverflow_0000550337_python_templates_web.txt |
Q:
python, dictionary and int error
I have a very frustrating python problem. In this code
fixedKeyStringInAVar = "SomeKey"
def myFunc(a, b):
global sleepTime
global fixedKeyStringInAVar
varMe=int("15")
sleepTime[fixedKeyStringInAVar] = varMe*60*1000
#more code
Now this works. BUT sometimes when I run this function I get
TypeError: 'int' object does not support item assignment
It is extremely annoying, since I tried several test cases and could not reproduce the error, yet it happens very often when I run the full code. The code reads data from a DB, accesses sites, etc., so it's hard for me to go through the data, since it reads from several sources and depends on third-party input that changes (websites).
What could this error be?
A:
Don't use the global keyword in a function unless you'd like to change the binding of a global name.
Search for 'sleepTime =' in your code. You are binding an int object to the sleepTime name at some point in your program.
A:
Your sleepTime is a global variable. It could be changed to be an int at some point in your program.
The item assignment is the "foo[bar] = baz" construction in your function.
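A minimal reproduction of what both answers describe -- the item assignment itself is fine, and the error only appears after some other code path rebinds the global name to an int:
sleepTime = {}                 # intended: a dict

def rebind():
    global sleepTime
    sleepTime = 0              # somewhere, the name is rebound to an int

def myFunc():
    sleepTime["SomeKey"] = 15 * 60 * 1000

myFunc()    # works: sleepTime is still a dict
rebind()
myFunc()    # TypeError: 'int' object does not support item assignment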
| python, dictionary and int error | I have a very frustrating python problem. In this code
fixedKeyStringInAVar = "SomeKey"
def myFunc(a, b):
global sleepTime
global fixedKeyStringInAVar
varMe=int("15")
sleepTime[fixedKeyStringInAVar] = varMe*60*1000
#more code
Now this works. BUT sometimes when I run this function I get
TypeError: 'int' object does not support item assignment
It is extremely annoying, since I tried several test cases and could not reproduce the error, yet it happens very often when I run the full code. The code reads data from a DB, accesses sites, etc., so it's hard for me to go through the data, since it reads from several sources and depends on third-party input that changes (websites).
What could this error be?
| [
"\nDon't use global keyword in a function unless you'd like to change binding of a global name.\nSearch for 'sleepTime =' in your code. You are binding an int object to the sleepTime name at some point in your program.\n\n",
"Your sleepTime is a global variable. It could be changed to be an int at some point in your program.\nThe item assignment is the \"foo[bar] = baz\" construction in your function.\n"
] | [
5,
1
] | [] | [] | [
"dictionary",
"global",
"python"
] | stackoverflow_0000550673_dictionary_global_python.txt |
Q:
Java Servlet Filter Equivalent in Ruby [on Rails] and PHP?
Not sure if the terminology is correct, but are there rough equivalents to Java Servlet Filters in Ruby and PHP? Are they actual concrete classes?
I assume there are also a number of common web app libraries/frameworks in Python. Is there an equivalent there?
Thanks.
=== ADDENDUM ===
On the good advice of Kevin Davis, I just want to quickly elaborate on what Java Servlet Filters are. A filter is basically an HTTP request interceptor. A chain of filters can be configured between the raw receipt of the request and the request's final destination. The request parameters (and cookies, headers, etc.) are passed to the first filter in the chain, and each filter does something with them (or not) and then passes them up the chain (or not; e.g., a caching filter may simply return a result, bypassing the rest of the chain and the endpoint).
One of the advantages is the ability to modify or enhance the web app without touching the original endpoint code.
Cheers.
A:
I assume there are also a number of
common web app libraries/frameworks in
Python. Is there an equivalent there?
Django provides a framework of middleware hooks that can be used to alter input/output in request/response processing. See the Middleware documentation page for more details.
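As a hedged illustration (the class name and cache are made up; process_request/process_response are the middleware hook names documented by Django at the time), a filter-like caching middleware might look like:
_CACHE = {}

class SimpleCacheMiddleware(object):
    def process_request(self, request):
        # Returning a response here short-circuits later middleware and the
        # view, much like a caching servlet filter ending the chain early.
        return _CACHE.get(request.path)

    def process_response(self, request, response):
        _CACHE[request.path] = response
        return response

It would be activated by adding its dotted path to MIDDLEWARE_CLASSES in settings.py.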
A:
In a typical Apache/PHP scenario, the answer is generally: no, there are no custom filters. However, there are some solutions for problems solved with Java Servlet Filters:
There are multiple collaborating access control modules
Compression can be set in the php configuration
You can create a .htaccess file to set those properties for a directory and its subdirectories.
A:
Ruby on Rails has filters that serve this purpose
A new feature is Rack Middleware, which is similar to Django middleware
A:
In the PHP world, Zend Framework provides a plugin API for its front controller object that allows hooking of plugin objects between the pre-routing and post-dispatching phases. Although I didn't have a chance to work with Java servlets, I assume this would match the description in your addendum. Anyway, this is not built into PHP; it's framework-dependent, as with RoR or Django.
| Java Servlet Filter Equivalent in Ruby [on Rails] and PHP? | Not sure if the terminology is correct, but are there rough equivalents to Java Servlet Filters in Ruby and PHP? Are they actual concrete classes?
I assume there are also a number of common web app libraries/frameworks in Python. Is there an equivalent there?
Thanks.
=== ADDENDUM ===
On the good advice of Kevin Davis, I just want to quickly elaborate on what Java Servlet Filters are. A filter is basically an HTTP request interceptor. A chain of filters can be configured between the raw receipt of the request and the request's final destination. The request parameters (and cookies, headers, etc.) are passed to the first filter in the chain, and each filter does something with them (or not) and then passes them up the chain (or not; e.g., a caching filter may simply return a result, bypassing the rest of the chain and the endpoint).
One of the advantages is the ability to modify or enhance the web app without touching the original endpoint code.
Cheers.
| [
"\nI assume there is also a number of\n common web app libraries/frameworks in\n Python. Is there an equivalent there ?\n\nDjango provides a framework of middleware hooks that can be used to alter input/output in request/response processing. See the Middleware documentation page for more details.\n",
"In a typical Apache/PHP scenario, the answer is generally: No, there are no custom filters. However, there some solutions for problems solved with Java Servlet Filters:\n\nThere are multiple collaborating access control modules\nCompression can be set in the php configuration\n\nYou can create a .htaccess file to set those properties for a directory and its subdirectories.\n",
"Ruby on Rails has filters that serve this purpose\nA new feature is Rack Middleware, which is similar to Django middleware\n",
"In the PHP world, Zend Framework provides a plugin API for its front controller object that allows hooking of plugin objects between the pre-routing and post-dispatching phases. Although I didn't have a chance to work with Java servlets I assume this would match the descripton in your addendum. Anyway, this is not built into PHP, its framework dependent as with RoR or Django.\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"java",
"php",
"python",
"ruby_on_rails",
"servlets"
] | stackoverflow_0000417158_java_php_python_ruby_on_rails_servlets.txt |
Q:
"Private" (implementation) class in Python
I am coding a small Python module composed of two parts:
some functions defining a public interface,
an implementation class used by the above functions, but which is not meaningful outside the module.
At first, I decided to "hide" this implementation class by defining it inside the function using it, but this hampers readability and cannot be used if multiple functions reuse the same class.
So, in addition to comments and docstrings, is there a mechanism to mark a class as "private" or "internal"? I am aware of the underscore mechanism, but as I understand it, it only applies to variable, function, and method names.
A:
Use a single underscore prefix:
class _Internal:
...
This is the official Python convention for 'internal' symbols; "from module import *" does not import underscore-prefixed objects.
Reference to the single underscore convention.
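A quick illustration of that behaviour (the module name is hypothetical):
# shapes.py
class Circle(object):
    pass

class _Internal(object):
    pass

# client code:
# from shapes import *
# Circle      -> imported and usable
# _Internal   -> NameError: skipped by the star import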
A:
In short:
You cannot enforce privacy. There are no private classes/methods/functions in Python. At least, not strict privacy as in other languages, such as Java.
You can only indicate/suggest privacy. This follows a convention. The Python convention for marking a class/function/method as private is to preface it with an _ (underscore). For example, def _myfunc() or class _MyClass:. You can also create pseudo-privacy by prefacing the method with two underscores (for example, __foo). You cannot access the method directly, but you can still call it through a special prefix using the classname (for example, _classname__foo). So the best you can do is indicate/suggest privacy, not enforce it.
Python is like Perl in this respect. To paraphrase a famous line about privacy from the Perl book, the philosophy is that you should stay out of the living room because you weren't invited, not because it is defended with a shotgun.
For more information:
Private variables Python Documentation
Why are Python’s ‘private’ methods not actually private? Stack Overflow question 70528
A:
Define __all__, a list of names that you want to be exported (see documentation).
__all__ = ['public_class'] # don't add here the 'implementation_class'
A:
A pattern that I sometimes use is this:
Define a class:
class x(object):
def doThis(self):
...
def doThat(self):
...
Create an instance of the class, overwriting the class name:
x = x()
Define symbols that expose the functionality:
doThis = x.doThis
doThat = x.doThat
Delete the instance itself:
del x
Now you have a module that only exposes your public functions.
A:
The convention is to prepend "_" to internal classes, functions, and variables.
A:
To address the issue of design conventions, and as chroder said, there's really no such thing as "private" in Python. This may sound twisted for someone coming from a C/C++ background (like me a while back), but eventually you'll probably realize that following conventions is plenty.
Seeing something having an underscore in front should be a good enough hint not to use it directly. If you're concerned with cluttering help(MyClass) output (which is what everyone looks at when searching on how to use a class), the underscored attributes/classes are not included there, so you'll end up just having your "public" interface described.
Plus, having everything public has its own awesome perks, like for instance, you can unit test pretty much anything from outside (which you can't really do with C/C++ private constructs).
A:
Use two underscores to prefix names of "private" identifiers. For classes in a module, use a single leading underscore and they will not be imported using "from module import *".
class _MyInternalClass:
def __my_private_method(self):
pass
(There is no such thing as true "private" in Python. For example, Python just automatically mangles the names of class members with double underscores to _ClassName__mymember. So really, if you know the mangled name, you can use the "private" entity anyway. See here. And of course you can choose to manually import "internal" classes if you want to).
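A minimal demonstration of the mangling described above:
class _MyInternalClass(object):
    def __my_private_method(self):
        return "still reachable"

obj = _MyInternalClass()
# obj.__my_private_method() would raise AttributeError from outside the
# class, but the mangled name gets through anyway:
print obj._MyInternalClass__my_private_method()   # "still reachable"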
| "Private" (implementation) class in Python | I am coding a small Python module composed of two parts:
some functions defining a public interface,
an implementation class used by the above functions, but which is not meaningful outside the module.
At first, I decided to "hide" this implementation class by defining it inside the function using it, but this hampers readability and cannot be used if multiple functions reuse the same class.
So, in addition to comments and docstrings, is there a mechanism to mark a class as "private" or "internal"? I am aware of the underscore mechanism, but as I understand it, it only applies to variable, function, and method names.
| [
"Use a single underscore prefix:\nclass _Internal:\n ...\n\nThis is the official Python convention for 'internal' symbols; \"from module import *\" does not import underscore-prefixed objects.\nReference to the single underscore convention.\n",
"In short:\n\nYou cannot enforce privacy. There are no private classes/methods/functions in Python. At least, not strict privacy as in other languages, such as Java.\n\nYou can only indicate/suggest privacy. This follows a convention. The Python convention for marking a class/function/method as private is to preface it with an _ (underscore). For example, def _myfunc() or class _MyClass:. You can also create pseudo-privacy by prefacing the method with two underscores (for example, __foo). You cannot access the method directly, but you can still call it through a special prefix using the classname (for example, _classname__foo). So the best you can do is indicate/suggest privacy, not enforce it.\n\n\nPython is like Perl in this respect. To paraphrase a famous line about privacy from the Perl book, the philosophy is that you should stay out of the living room because you weren't invited, not because it is defended with a shotgun.\nFor more information:\n\nPrivate variables Python Documentation\nWhy are Python’s ‘private’ methods not actually private? Stack Overflow question 70528\n\n",
"Define __all__, a list of names that you want to be exported (see documentation).\n__all__ = ['public_class'] # don't add here the 'implementation_class'\n\n",
"A pattern that I sometimes use is this:\nDefine a class:\nclass x(object):\n def doThis(self):\n ...\n def doThat(self):\n ...\n\nCreate an instance of the class, overwriting the class name:\nx = x()\n\nDefine symbols that expose the functionality:\ndoThis = x.doThis\ndoThat = x.doThat\n\nDelete the instance itself:\ndel x\n\nNow you have a module that only exposes your public functions.\n",
"The convention is prepend \"_\" to internal classes, functions, and variables.\n",
"To address the issue of design conventions, and as chroder said, there's really no such thing as \"private\" in Python. This may sound twisted for someone coming from C/C++ background (like me a while back), but eventually, you'll probably realize following conventions is plenty enough.\nSeeing something having an underscore in front should be a good enough hint not to use it directly. If you're concerned with cluttering help(MyClass) output (which is what everyone looks at when searching on how to use a class), the underscored attributes/classes are not included there, so you'll end up just having your \"public\" interface described.\nPlus, having everything public has its own awesome perks, like for instance, you can unit test pretty much anything from outside (which you can't really do with C/C++ private constructs).\n",
"Use two underscores to prefix names of \"private\" identifiers. For classes in a module, use a single leading underscore and they will not be imported using \"from module import *\".\nclass _MyInternalClass:\n def __my_private_method:\n pass\n\n(There is no such thing as true \"private\" in Python. For example, Python just automatically mangles the names of class members with double underscores to be __clssname_mymember. So really, if you know the mangled name you can use the \"private\" entity anyway. See here. And of course you can choose to manually import \"internal\" classes if you wanted to).\n"
] | [
235,
97,
46,
19,
13,
7,
6
] | [] | [] | [
"access_modifiers",
"python"
] | stackoverflow_0000551038_access_modifiers_python.txt |
Q:
python runtime error, can dump a file?
I am using libcurl to download a webpage, then I scan it for data and do something with one of the links. However, once in a while the page is different than I expect, so I extract bad data and pycurl throws an exception. I tried finding the exception name for pycurl but had no luck.
Is there a way I can get the traceback to execute a function so I can dump the file, look at the file input, and see where my code went wrong?
A:
sys.excepthook may help you here, where you can set a global exception handler. I am not sure how pycurl exceptions are handled, it being a binding library, but it will probably work to reassign it to a generic function. Something like:
>>> import sys
>>>
>>> def my_global_exception_handler(type, value, traceback):
... print traceback
... sys.exit()
...
>>> sys.excepthook = my_global_exception_handler
>>> raise
<traceback object at 0xb7cfcaa4>
This exception hook function could easily be an instance method that has access to the file that needs dumping.
A:
You can use a generic exception handler.
logging.basicConfig( filename="someFile.log", level=logging.DEBUG )
logger= logging.getLogger( __name__ )
try:
curl = pycurl.Curl()
curl.setopt(pycurl.URL, url)
# etc.
curl.perform()
curl.close()
logger.info( "Read %s", url )
except Exception, e:
logger.exception( e )
print e, repr(e), e.message, e.args
raise
logging.shutdown()
This will write a nice log that has the exception information you're looking for.
A:
Can you catch all exceptions somewhere in the main block and use sys.exc_info() for traceback information, and log that to your file? exc_info() returns not just the exception type but also the call traceback, so there should be information about what went wrong.
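A minimal sketch of that approach (download and extract_links are hypothetical stand-ins for the pycurl fetch and the parsing step that sometimes fails):
import sys
import traceback

page_data = download(url)        # hypothetical: the pycurl fetch
try:
    extract_links(page_data)     # hypothetical: the parsing step
except Exception:
    etype, value, tb = sys.exc_info()
    dump = open("bad_page.html", "w")   # keep the input that broke the parser
    dump.write(page_data)
    dump.close()
    traceback.print_exception(etype, value, tb)
    raise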
| python runtime error, can dump a file? | I am using libcurl to download a webpage, then I scan it for data and do something with one of the links. However, once in a while the page is different than I expect, so I extract bad data and pycurl throws an exception. I tried finding the exception name for pycurl but had no luck.
Is there a way I can get the traceback to execute a function so I can dump the file, look at the file input, and see where my code went wrong?
| [
"sys.excepthook may help you here, where you can set a global exception handler. I am not sure how pycurl exceptions are handled, it being a binding library, but it will probably work to reassign it to a generic function. Something like:\n>>> import sys\n>>> \n>>> def my_global_exception_handler(type, value, traceback):\n... print traceback\n... sys.exit()\n... \n>>> sys.excepthook = my_global_exception_handler\n>>> raise\n<traceback object at 0xb7cfcaa4>\n\nThis exception hook function could easily be an instance method that has access to the file that needs dumping.\n",
"You can use a generic exception handler.\nlogging.basicConfig( file=\"someFile.log\", level=logging.DEBUG )\nlogger= logging.getLogger( __name__ )\ntry:\n curl = pycurl.Curl()\n curl.setopt(pycurl.URL, url)\n # etc.\n curl.perform()\n curl.close\n logger.info( \"Read %s\", url )\nexcept Exception, e:\n logger.exception( e )\n print e, repr(e), e.message, e.args\n raise\nlogging.shutdown()\n\nThis will write a nice log that has the exception information you're looking for.\n",
"Can you catch all exceptions somewhere in the main block and use sys.exc_info() for callback information and log that to your file. exc_info() returns not just exception type, but also call traceback so there should information what went wrong.\n"
] | [
3,
3,
2
] | [] | [] | [
"error_handling",
"pycurl",
"python"
] | stackoverflow_0000550804_error_handling_pycurl_python.txt |
Q:
Deploying application with Python or another embedded scripting language
I'm thinking about using Python as an embedded scripting language in a hobby project written in C++. I would not like to depend on a separately installed Python distribution. The Python documentation seems to be quite clear about general usage, but I couldn't find a clear answer to this.
Is it feasible to deploy a Python interpreter + standard library with my application? Would some other language like Lua, Javascript (Spidermonkey), Ruby, etc. be better for this use?
Here's the criteria I'm weighing the different languages against:
No/Few dependencies on externally installed packages
Standard library with good feature set
Nice language :)
Doesn't result in a huge install package
edit:
I guess the question should be:
How do I deploy my own python library + standard library with the installer of my program, so that it doesn't matter whether the platform already has python installed or not?
edit2:
One more clarification. I don't need info about specifics of linking C and Python code.
A:
Link your application to the python library (pythonXX.lib on Windows) and add the following to your main() function.
Py_NoSiteFlag = 1; // Disable importing site.py
Py_Initialize(); // Create a python interpreter
Put the Python standard library bits you need into a zip file (called pythonXX.zip) and place this and pythonXX.dll beside the executable you distribute. Have a look at PyZipFile in the zipfile module.
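A sketch of building that zip with the standard library (the paths are hypothetical; PyZipFile compiles and adds the modules for you):
import zipfile

zf = zipfile.PyZipFile("python25.zip", "w")
zf.writepy("C:/Python25/Lib")    # hypothetical stdlib location; adds top-level modules
# subpackages (e.g. encodings) need their own writepy() calls
zf.close()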
A:
The embedding process is fully documented: Embedding Python in Another Application.
The document suggests a few levels at which embedding can be done; choose whatever best fits your requirements.
A simple demo of embedding Python can be found in the directory Demo/embed/ of the source distribution.
The demo is here, should be able to build from the distro.
Very High Level Embedding
Beyond Very High Level Embedding: An overview
Pure Embedding
Extending Embedded Python
Embedding Python in C++
From the standard library you can select the components that do not carry too many dependencies.
A:
To extend the answer by gimel, there is nothing to stop you from shipping python.dll, using it, and setting a correct PYTHONPATH in order to use your own installation of the python standard library. They are just libraries and files, and your install process can just deal with them as such.
A:
If you're looking for a way to simply deploy a Python app, you could use Py2Exe. That would allow your app all the dependencies it needs without worrying about what is or isn't already installed on the user's machine.
[Edit] If Py2Exe is a no-go for embedded applications, could you simply get your installer to check for the relevant files and download / install anything that's missing?[/Edit]
[Edit2] Could Elmer be what you're looking for? [/Edit2]
| Deploying application with Python or another embedded scripting language | I'm thinking about using Python as an embedded scripting language in a hobby project written in C++. I would not like to depend on a separately installed Python distribution. The Python documentation seems to be quite clear about general usage, but I couldn't find a clear answer to this.
Is it feasible to deploy a Python interpreter + standard library with my application? Would some other language like Lua, Javascript (Spidermonkey), Ruby, etc. be better for this use?
Here's the criteria I'm weighing the different languages against:
No/Few dependencies on externally installed packages
Standard library with good feature set
Nice language :)
Doesn't result in a huge install package
edit:
I guess the question should be:
How do I deploy my own python library + standard library with the installer of my program, so that it doesn't matter whether the platform already has python installed or not?
edit2:
One more clarification. I don't need info about specifics of linking C and Python code.
| [
"Link your application to the python library (pythonXX.lib on Windows) and add the following to your main() function.\nPy_NoSiteFlag = 1; // Disable importing site.py\nPy_Initialize(); // Create a python interpreter\n\nPut the python standard library bits you need into a zip file (called pythonXX.zip) and place this and pythonXX.dll beside the executable you distribute. Have a look at PyZipFile in the the zipfile module.\n",
"The embedding process is fully documented : Embedding Python in Another Application.\nThe documents suggests a few levels at which embedding is done, choose whatever best fits your requirements. \n\nA simple demo of embedding Python can be found in the directory Demo/embed/ of the source distribution.\n\nThe demo is here, should be able to build from the distro.\n\n\nVery High Level Embedding\nBeyond Very High Level Embedding: An overview\nPure Embedding\nExtending Embedded Python\nEmbedding Python in C++\n\n\nFrom the standard library you can select the components that do not carry too much dependencies.\n",
"To extend the answer by gimel, there is nothing to stop you from shipping python.dll, using it, and setting a correct PYTHONPATH in order to use your own installation of the python standard library. They are just libraries and files, and your install process can just deal with them as such.\n",
"If you're looking for a way to simply deploy a Python app, you could use Py2Exe. That would allow your app all the dependencies it needs without worrying what is or isn't already installed on the users machine.\n[Edit] If Py2Exe is a no-go for embedded applications, could you simply get your installer to check for the relevent files and download / install anything that's missing?[/Edit]\n[Edit2] Could Elmer be what you're looking for? [/Edit2]\n"
] | [
18,
8,
5,
0
] | [] | [] | [
"c++",
"deployment",
"embedded_language",
"python",
"scripting_language"
] | stackoverflow_0000551227_c++_deployment_embedded_language_python_scripting_language.txt |
Q:
What's the best way to make a time from "Today" or "Yesterday" and a time in Python?
Python has pretty good date parsing, but is the only way to recognize a datetime such as "Today 3:20 PM" or "Yesterday 11:06 AM" to create a new date for today and do subtraction?
A:
A library that I like a lot, and I'm seeing more and more people use, is python-dateutil but unfortunately neither it nor the other traditional big datetime parser, mxDateTime from Egenix can parse the word "tomorrow" in spite of both libraries having very strong "fuzzy" parsers.
The only library I've seen that can do this is magicdate. Examples:
>>> import magicdate
>>> magicdate.magicdate('today')
datetime.date(2009, 2, 15)
>>> magicdate.magicdate('tomorrow')
datetime.date(2009, 2, 16)
>>> magicdate.magicdate('yesterday')
datetime.date(2009, 2, 14)
Unfortunately this only returns datetime.date objects, and so won't include time parts and can't handle your example of "Today 3:20 PM".
So, you need mxDateTime for that. Examples:
>>> import mx.DateTime
>>> mx.DateTime.Parser.DateTimeFromString("Today 3:20 PM")
<mx.DateTime.DateTime object for '2009-02-15 15:20:00.00' at 28faa28>
>>> mx.DateTime.Parser.DateTimeFromString("Tomorrow 5:50 PM")
<mx.DateTime.DateTime object for '2009-02-15 17:50:00.00' at 2a86088>
EDIT: mxDateTime.Parser is only parsing the time in these examples and ignoring the words "today" and "tomorrow". So for this particular case you need to use a combo of magicdate to get the date and mxDateTime to get the time. My recommendation is to just use python-dateutil or mxDateTime and only accept the string formats they can parse.
EDIT 2: As noted in the comments, it looks like python-dateutil can now handle fuzzy parsing. I've also since discovered the parsedatetime module that was developed for use in Chandler, and it works with the queries in this question:
>>> import parsedatetime.parsedatetime as pdt
>>> import parsedatetime.parsedatetime_consts as pdc
>>> c=pdc.Constants()
>>> p=pdt.Calendar(c)
>>> p.parse('Today 3:20 PM')
((2010, 3, 12, 15, 20, 0, 4, 71, -1), 3)
>>> p.parse('Yesterday 11:06 AM')
((2010, 3, 11, 11, 6, 0, 3, 70, -1), 3)
and for reference here is the current time:
>>> import datetime
>>> datetime.datetime.now()
datetime.datetime(2010, 3, 12, 15, 23, 35, 951652)
| What's the best way to make a time from "Today" or "Yesterday" and a time in Python? | Python has pretty good date parsing, but is the only way to recognize a datetime such as "Today 3:20 PM" or "Yesterday 11:06 AM" to create a new date for today and do subtraction?
| [
"A library that I like a lot, and I'm seeing more and more people use, is python-dateutil but unfortunately neither it nor the other traditional big datetime parser, mxDateTime from Egenix can parse the word \"tomorrow\" in spite of both libraries having very strong \"fuzzy\" parsers.\nThe only library I've seen that can do this is magicdate. Examples:\n>>> import magicdate\n>>> magicdate.magicdate('today')\ndatetime.date(2009, 2, 15)\n>>> magicdate.magicdate('tomorrow')\ndatetime.date(2009, 2, 16)\n>>> magicdate.magicdate('yesterday')\ndatetime.date(2009, 2, 14)\n\nUnfortunately this only returns datetime.date objects, and so won't include time parts and can't handle your example of \"Today 3:20 PM\".\nSo, you need mxDateTime for that. Examples:\n>>> import mx.DateTime\n\n>>> mx.DateTime.Parser.DateTimeFromString(\"Today 3:20 PM\")\n<mx.DateTime.DateTime object for '2009-02-15 15:20:00.00' at 28faa28>\n\n>>> mx.DateTime.Parser.DateTimeFromString(\"Tomorrow 5:50 PM\")\n<mx.DateTime.DateTime object for '2009-02-15 17:50:00.00' at 2a86088>\n\nEDIT: mxDateTime.Parser is only parsing the time in these examples and ignoring the words \"today\" and \"tomorrow\". So for this particular case you need to use a combo of magicdate to get the date and mxDateTime to get the time. My recommendation is to just use python-dateutils or mxDateTime and only accept the string formats they can parse.\n\nEDIT 2: As noted in the comments it looks python-dateutil can now handle fuzzy parsing. I've also since discovered the parsedatetime module that was developed for use in Chandler and it works with the queries in this question:\n>>> import parsedatetime.parsedatetime as pdt\n>>> import parsedatetime.parsedatetime_consts as pdc\n>>> c=pdc.Constants()\n>>> p=pdt.Calendar(c)\n>>> p.parse('Today 3:20 PM')\n((2010, 3, 12, 15, 20, 0, 4, 71, -1), 3)\n>>> p.parse('Yesterday 11:06 AM')\n((2010, 3, 11, 11, 6, 0, 3, 70, -1), 3)\n\nand for reference here is the current time:\n>>> import datetime\n>>> datetime.datetime.now()\ndatetime.datetime(2010, 3, 12, 15, 23, 35, 951652)\n\n"
] | [
19
] | [
"I am not yet completely up to speed on Python yet, but your question interested me, so I dug around a bit.\nDate subtraction using timedelta is by far the most common solution I found.\nSince your question asks if that's the only way to do it, I checked out the strftime format codes to see if you could define your own. Unfortunately not. From Python's strftime documentation:\n\n... The full set of format codes supported varies across platforms, because Python calls the platform C library’s strftime() function, and platform variations are common.\nThe following is a list of all the format codes that the C standard (1989 version) requires ...\n\nAnyhow, this isn't a definitive answer, but maybe It'll save others time barking up the wrong tree.\n"
] | [
-3
] | [
"datetime",
"parsing",
"python"
] | stackoverflow_0000552073_datetime_parsing_python.txt |
Q:
Missing first line when downloading .rar file using urllib2.urlopen()
Okay, this is really strange. I have this script which basically downloads a bunch of archive files and extracts them. Usually those files are .zip files. Today I sat down and decided to make it work with RAR files, and I got stuck. At first I thought that the problem was in my unrar code, but it wasn't there. So I did:
f = urllib2.urlopen(file_location)
data = StringIO(f.read())
print data.getvalue()
heck I even did:
f = urllib2.urlopen(file_location)
print f.read()
because I just wanted to see the first chunk, and the result is the same - I'm missing the first line of the .rar file.
If I use web browser to download the very same file everything is fine, it's not corrupt.
Can anyone please explain what the hell is going on here? And what does it have to do with the file type?
A:
When trying to determine the content of binary data string, use repr() or hex(). For example,
>>> print repr(data)
'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t'
>>> print [hex(ord(c)) for c in data]
['0x0', '0x1', '0x2', '0x3', '0x4', '0x5', '0x6', '0x7', '0x8', '0x9']
>>>
A:
Does the data maybe contain a "carriage return" character ("\r") so that the first chunk is overwritten with subsequent data when you try to display it? This would explain why you don't see the first chunk in your output, but not why you aren't able to decode it later on.
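You can see that effect directly -- printed raw binary can look truncated even though every byte is there:
data = "Rar!\x1a\x07\x00 first header \r tail of the chunk"
print data        # the '\r' moves the cursor home; the tail overprints the start
print repr(data)  # repr() shows every byte, so nothing is hidden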
| Missing first line when downloading .rar file using urllib2.urlopen() | Okay, this is really strange. I have this script which basically downloads a bunch of archive files and extracts them. Usually those files are .zip files. Today I sat down and decided to make it work with RAR files, and I got stuck. At first I thought that the problem was in my unrar code, but it wasn't there. So I did:
f = urllib2.urlopen(file_location)
data = StringIO(f.read())
print data.getvalue()
heck I even did:
f = urllib2.urlopen(file_location)
print f.read()
because I just wanted to see the first chunk, and the result is the same - I'm missing the first line of the .rar file.
If I use web browser to download the very same file everything is fine, it's not corrupt.
Can anyone please explain what the hell is going on here? And what does it have to do with the file type?
| [
"When trying to determine the content of binary data string, use repr() or hex(). For example,\n>>> print repr(data)\n'\\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\t'\n>>> print [hex(ord(c)) for c in data]\n['0x0', '0x1', '0x2', '0x3', '0x4', '0x5', '0x6', '0x7', '0x8', '0x9']\n>>>\n\n",
"Does the data maybe contain a \"carriage return\" character (\"\\r\") so that the first chunk is overwritten with subsequent data when you try to display it? This would explain why you don't see the first chunk in your output, but not why you aren't able to decode it later on.\n"
] | [
3,
2
] | [] | [] | [
"python",
"urllib2"
] | stackoverflow_0000552328_python_urllib2.txt |
Q:
How do python classes work?
I have a code file from the boto framework pasted below, all of the print statements are mine, and the one commented out line is also mine, all else belongs to the attributed author.
My question is: what is the order in which instantiations and allocations occur in Python when instantiating a class? The author's code below is written under the assumption that 'DefaultDomainName' will exist when an instance of the class is created (i.e., __init__() is called), but this does not seem to be the case, at least in my testing with Python 2.5 on OS X.
In the class Manager __init__() method, my print statements show as 'None'. And the print statements in the global function set_domain() further down shows 'None' prior to setting Manager.DefaultDomainName, and shows the expected value of 'test_domain' after the assignment. But when creating an instance of Manager again after calling set_domain(), the __init__() method still shows 'None'.
Can anyone help me out, and explain what is going on here. It would be greatly appreciated. Thank you.
# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import boto
from boto.utils import find_class
class Manager(object):
DefaultDomainName = boto.config.get('Persist', 'default_domain', None)
def __init__(self, domain_name=None, aws_access_key_id=None, aws_secret_access_key=None, debug=0):
self.domain_name = domain_name
self.aws_access_key_id = aws_access_key_id
self.aws_secret_access_key = aws_secret_access_key
self.domain = None
self.sdb = None
self.s3 = None
if not self.domain_name:
print "1: %s" % self.DefaultDomainName
print "2: %s" % Manager.DefaultDomainName
self.domain_name = self.DefaultDomainName
#self.domain_name = 'test_domain'
if self.domain_name:
boto.log.info('No SimpleDB domain set, using default_domain: %s' % self.domain_name)
else:
boto.log.warning('No SimpleDB domain set, persistance is disabled')
if self.domain_name:
self.sdb = boto.connect_sdb(aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
debug=debug)
self.domain = self.sdb.lookup(self.domain_name)
if not self.domain:
self.domain = self.sdb.create_domain(self.domain_name)
def get_s3_connection(self):
if not self.s3:
self.s3 = boto.connect_s3(self.aws_access_key_id, self.aws_secret_access_key)
return self.s3
def get_manager(domain_name=None, aws_access_key_id=None, aws_secret_access_key=None, debug=0):
return Manager(domain_name, aws_access_key_id, aws_secret_access_key, debug=debug)
def set_domain(domain_name):
print "3: %s" % Manager.DefaultDomainName
Manager.DefaultDomainName = domain_name
print "4: %s" % Manager.DefaultDomainName
def get_domain():
return Manager.DefaultDomainName
def revive_object_from_id(id, manager):
if not manager.domain:
return None
attrs = manager.domain.get_attributes(id, ['__module__', '__type__', '__lineage__'])
try:
cls = find_class(attrs['__module__'], attrs['__type__'])
return cls(id, manager=manager)
except ImportError:
return None
def object_lister(cls, query_lister, manager):
for item in query_lister:
if cls:
yield cls(item.name)
else:
o = revive_object_from_id(item.name, manager)
if o:
yield o
A:
A few python notes
When python executes the class block, it creates all of the "attributes" of that class as it encounters them. They are usually class variables as well as functions (methods), and the like.
So the value for "Manager.DefaultDomainName" is set when it is encountered in the class definition. This code is only ever run once - never again. The reason for that is that it is just "defining" the class object called "Manager".
When an object of class "Manager" is instantiated, it is an instance of the "Manager" class. (that may sound repetitive). To be perfectly clear, the value:
self.DefaultDomainName
does not exist. Following the rules of classes, python says "hmm, that does not exist on this object instance, I'll look at the class object(s)". So python actually finds the value at:
Manager.DefaultDomainName
# also referenced by
self.__class__.DefaultDomainName
All of that to exemplify the point that the class attribute "Manager.DefaultDomainName" is only created once, can only exist once, and can only hold one value at once.
In the example above, run the builtin function id() on each of the values:
print "1: %s" % id(self.DefaultDomainName)
print "2: %s" % id(Manager.DefaultDomainName)
You should see that they are referring to exactly the same memory location.
Now, in (non)answer to the original question... I don't know from perusing the code above. I would suggest that you try a couple of techniques to find it out:
# Debug with pdb. Follow every step of the process to ensure that you are
# setting valeus as you thought, and that the code you thought would be
# called is actually being called. I've had many problems like this where
# the error was in procedure, not in the actual code at hand.
import pdb; pdb.set_trace()
# check to see if id(Manager) is the same as id(self.__class__)
# in the set_domain() function:
# check to see what attributes you can see on Manager,
# and if they match the attributes on Manager and self.__class__ in __init__
Please update here when you figure it out.
A:
What gahooa said. Besides that, I assume that boto.config.get(...) returns None, presumably because the default_domain key is not defined in the Persist section of your config file.
boto.config is defined in boto/__init__.py as config = boto.pyami.config.Config() (essentially). boto.pyami.config.Config is a subclass of the standard ConfigParser.SafeConfigParser, and it looks for config files in the locations specified by boto.pyami.BotoConfigLocations, which defaults to a list containing /etc/boto.cfg and $HOME/.boto. If you don't have a config in either of those locations, you won't have a default domain name.
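For reference, a minimal config that would make that default resolve (the domain value here is made up):
# /etc/boto.cfg or $HOME/.boto
[Persist]
default_domain = test_domain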
A:
Thank you all for your help. I figured out what I was missing:
The class definitions of the boto classes I am using contain class variables for Manager, which in turn serve as a default value if no Manager is passed to the __init__() of these classes. I didn't even think about the fact that these class variables would be evaluated with the import statement when importing the modules containing these classes.
So these class-variable Managers had their self.domain_name values set from DefaultDomainName before I even called set_domain(), and since I do not have the config files set up, as ruds pointed out, that value was None.
So, I have to rework my code a little, but thank you all for helping out a python newcomer.
A:
When you load the module, Python executes the code line by line. Since the code is executed, class variables should all be set at load time. Most likely your boto.config.get call just returned None. In other words, yes, all class variables are allocated before instance variables.
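A tiny demonstration of that load-time evaluation:
import time

class Demo(object):
    STAMP = time.time()    # evaluated once, when the class body runs

a = Demo()
time.sleep(1)
b = Demo()
print a.STAMP == b.STAMP == Demo.STAMP   # True: never re-evaluated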
| How do python classes work? | I have a code file from the boto framework pasted below, all of the print statements are mine, and the one commented out line is also mine, all else belongs to the attributed author.
My question is: what is the order in which instantiations and allocations occur in Python when instantiating a class? The author's code below is written under the assumption that 'DefaultDomainName' will exist when an instance of the class is created (i.e., __init__() is called), but this does not seem to be the case, at least in my testing with Python 2.5 on OS X.
In the class Manager __init__() method, my print statements show as 'None'. And the print statements in the global function set_domain() further down shows 'None' prior to setting Manager.DefaultDomainName, and shows the expected value of 'test_domain' after the assignment. But when creating an instance of Manager again after calling set_domain(), the __init__() method still shows 'None'.
Can anyone help me out, and explain what is going on here. It would be greatly appreciated. Thank you.
# Copyright (c) 2006,2007,2008 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import boto
from boto.utils import find_class
class Manager(object):
DefaultDomainName = boto.config.get('Persist', 'default_domain', None)
def __init__(self, domain_name=None, aws_access_key_id=None, aws_secret_access_key=None, debug=0):
self.domain_name = domain_name
self.aws_access_key_id = aws_access_key_id
self.aws_secret_access_key = aws_secret_access_key
self.domain = None
self.sdb = None
self.s3 = None
if not self.domain_name:
print "1: %s" % self.DefaultDomainName
print "2: %s" % Manager.DefaultDomainName
self.domain_name = self.DefaultDomainName
#self.domain_name = 'test_domain'
if self.domain_name:
boto.log.info('No SimpleDB domain set, using default_domain: %s' % self.domain_name)
else:
boto.log.warning('No SimpleDB domain set, persistance is disabled')
if self.domain_name:
self.sdb = boto.connect_sdb(aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
debug=debug)
self.domain = self.sdb.lookup(self.domain_name)
if not self.domain:
self.domain = self.sdb.create_domain(self.domain_name)
def get_s3_connection(self):
if not self.s3:
self.s3 = boto.connect_s3(self.aws_access_key_id, self.aws_secret_access_key)
return self.s3
def get_manager(domain_name=None, aws_access_key_id=None, aws_secret_access_key=None, debug=0):
return Manager(domain_name, aws_access_key_id, aws_secret_access_key, debug=debug)
def set_domain(domain_name):
print "3: %s" % Manager.DefaultDomainName
Manager.DefaultDomainName = domain_name
print "4: %s" % Manager.DefaultDomainName
def get_domain():
return Manager.DefaultDomainName
def revive_object_from_id(id, manager):
if not manager.domain:
return None
attrs = manager.domain.get_attributes(id, ['__module__', '__type__', '__lineage__'])
try:
cls = find_class(attrs['__module__'], attrs['__type__'])
return cls(id, manager=manager)
except ImportError:
return None
def object_lister(cls, query_lister, manager):
for item in query_lister:
if cls:
yield cls(item.name)
else:
o = revive_object_from_id(item.name, manager)
if o:
yield o
| [
"A few python notes\nWhen python executes the class block, it creates all of the \"attributes\" of that class as it encounters them. They are usually class variables as well as functions (methods), and the like.\nSo the value for \"Manager.DefaultDomainName\" is set when it is encountered in the class definition. This code is only ever run once - never again. The reason for that is that it is just \"defining\" the class object called \"Manager\".\nWhen an object of class \"Manager\" is instantiated, it is an instance of the \"Manager\" class. (that may sound repetitive). To be perfectly clear, the value:\nself.DefaultDomainName\n\ndoes not exist. Following the rules of classes, python says \"hmm, that does not exist on this object instance, I'll look at the class object(s)\". So python actually finds the value at:\nManager.DefaultDomainName\n\n# also referenced by\nself.__class__.DefaultDomainName\n\nAll of that to exemplify the point that the class attribute \"Manager.DefaultDomainName\" is only created once, can only exist once, and can only hold one value at once.\n\nIn the example above, run the builtin function id() on each of the values:\nprint \"1: %s\" % id(self.DefaultDomainName)\nprint \"2: %s\" % id(Manager.DefaultDomainName)\n\nYou should see that they are referring to exactly the same memory location.\n\nNow, in (non)answer to the original question... I don't know from perusing the code above. I would suggest that you try a couple of techniques to find it out:\n# Debug with pdb. Follow every step of the process to ensure that you are \n# setting valeus as you thought, and that the code you thought would be \n# called is actually being called. I've had many problems like this where \n# the error was in procedure, not in the actual code at hand.\nimport pdb; pdb.set_trace()\n\n# check to see if id(Manager) is the same as id(self.__class__)\n\n# in the set_domain() function:\n# check to see what attributes you can see on Manager, \n# and if they match the attributes on Manager and self.__class__ in __init__\n\nPlease update here when you figure it out.\n",
"What gahooa said. Besides that, I assume that boto.config.get(...) returns None, presumably because the default_domain key is not defined in the Persist section of your config file.\nboto.config is defined in boto/__init__.py as config = boto.pyami.config.Config() (essentially). boto.pyami.config.Config is a subclass of the standard ConfigParser.SafeConfigParser, and it looks for config files in the locations specified by boto.pyami.BotoConfigLocations, which defaults to a list containing /etc/boto.cfg and $HOME/.boto. If you don't have a config in either of those locations, you won't have a default domain name.\n",
"Thank you all for your help. I figured out what I was missing:\nThe class definitions of the boto classes I am using contain class variables for Manager, which in turn serve as a default value if no Manager is passed to the __init__() of these classes. I didn't even think about the fact that these class variables would be evaluated with the import statement when importing the modules containing these classes.\nSo, these class variable Managers' self.domain_name values were set from DefaultDomainName before I even called set_domain(), and since I do not have the config files setup as ruds pointed out, that value was None.\nSo, I have to rework my code a little, but thank you all for helping out a python newcomer.\n",
"when you load the module, python executes each of the code line by line. Since code is executed, class variables should all be set at load time. Most likely your boto.config.get function was just returned None. In another word, yes, all class variables are allocated before instance variables.\n"
] | [
9,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000552329_python.txt |
Q:
python ide vs cmd line detection?
When programming in the two IDEs I used, bad things happen when I use raw_input(). However, on the command line it works EXACTLY how I expect it to. Typically this app is run on the command line, but I like to edit and debug it in an IDE. Is there a way to detect whether I executed the app in an IDE or not?
A:
if sys.stdin.isatty():
# command line (not a pipe, no stdin redirection)
else:
# something else, could be IDE
A:
I would strongly advise you (and you have been advised on this before) to use a good IDE and a good debugger, instead of hacking around your code to fix something that shouldn't be broken in the first place.
I deserve to be down-voted for not answering the question, but please consider this advice for your future sanity.
I would personally recommend Winpdb debugger and PIDA IDE
| python ide vs cmd line detection? | When programming in the two IDEs I used, bad things happen when I use raw_input(). However, on the command line it works EXACTLY how I expect it to. Typically this app is run on the command line, but I like to edit and debug it in an IDE. Is there a way to detect whether I executed the app in an IDE or not?
| [
"if sys.stdin.isatty():\n # command line (not a pipe, no stdin redirection)\nelse:\n # something else, could be IDE\n\n",
"I would strongly advise (and you have been previously advised on this) to use a good IDE, and a good debugger instead of hacking around your code to fix something that shouldn't be broken in the first place.\nI deserve to be down-voted for not answering the question, but please consider this advice for your future sanity.\nI would personally recommend Winpdb debugger and PIDA IDE\n"
] | [
5,
1
] | [] | [] | [
"python"
] | stackoverflow_0000552627_python.txt |
Q:
Windows Server 2008 or Vista?
What is an easy (to implement) way to check whether I am on Windows Vista or Windows Server 2008 from a Python script?
platform.uname() gives the same result for both versions.
A:
As mentioned in the other question the foolproof (I think) way is to use win32api.GetVersionEx(1). The combination of the version number and the product type will give you the current windows platform you're running on. Eg. the combination of version number "6.*" and product type VER_NT_SERVER is Windows Server 2008.
You can find information about the different combinations you can get at msdn
| Windows Server 2008 or Vista? | What is an easy (to implement) way to check whether I am on Windows Vista or Windows Server 2008 from a Python script?
platform.uname() gives the same result for both versions.
| [
"As mentioned in the other question the foolproof (I think) way is to use win32api.GetVersionEx(1). The combination of the version number and the product type will give you the current windows platform you're running on. Eg. the combination of version number \"6.*\" and product type VER_NT_SERVER is Windows Server 2008.\nYou can find information about the different combinations you can get at msdn\n"
] | [
2
] | [] | [] | [
"python",
"windows_server_2008",
"windows_vista",
"windowsversion"
] | stackoverflow_0000553372_python_windows_server_2008_windows_vista_windowsversion.txt |
Q:
python exit a blocking thread?
In my code I loop through raw_input() to see if the user has requested to quit. My app can quit before the user does, but my problem is that the app stays alive until I press a key to return from the blocking raw_input() call. Can I force raw_input() to return, perhaps by sending it fake input? Could I terminate the thread it's on? (The only data it holds is a single variable called wantQuit.)
A:
Why don't you just mark the thread as daemonic?
From the docs:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon attribute.
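For illustration, a minimal sketch (watch_for_quit is a made-up name; wantQuit is the questioner's flag, represented here as a one-element list so the thread can mutate it):
import threading

wantQuit = [False]  # placeholder shared flag

def watch_for_quit():
    # raw_input() still blocks, but because this thread is daemonic the
    # interpreter can exit even while the thread is waiting for input
    while not wantQuit[0]:
        if raw_input() == "q":
            wantQuit[0] = True

watcher = threading.Thread(target=watch_for_quit)
watcher.setDaemon(True)  # Python 2 spelling; watcher.daemon = True on 2.6+
watcher.start()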
A:
You might use a non-blocking function to read user input.
This solution is windows-specific:
import msvcrt
import time
while True:
# test if there are keypresses in the input buffer
while msvcrt.kbhit():
# read a character
print msvcrt.getch()
# no keypresses, sleep for a while...
time.sleep(1)
To do something similar in Unix, which reads a line at a time, unlike the windows version reading char by char (thanks to Aaron Digulla for providing the link to the python user forum):
import sys
import select
i = 0
while i < 10:
i = i + 1
r,w,x = select.select([sys.stdin.fileno()],[],[],2)
if len(r) != 0:
print sys.stdin.readline()
See also: http://code.activestate.com/recipes/134892/
A:
You can use this timeout function that wraps your function. Here's the recipe from: http://code.activestate.com/recipes/473878/
def timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):
    '''This function will spawn a thread and run the given function using the args, kwargs and
return the given default value if the timeout_duration is exceeded
'''
import threading
class InterruptableThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.result = default
def run(self):
try:
self.result = func(*args, **kwargs)
except:
self.result = default
it = InterruptableThread()
it.start()
it.join(timeout_duration)
    # it.result still holds the default if the call timed out or raised an
    # exception; otherwise it holds the function's return value
    return it.result
A:
There is a post on the Python mailing list which explains how to do this for Unix:
# this works on some platforms:
import signal, sys
def alarm_handler(*args):
raise Exception("timeout")
def function_xyz(prompt, timeout):
signal.signal(signal.SIGALRM, alarm_handler)
signal.alarm(timeout)
sys.stdout.write(prompt)
sys.stdout.flush()
try:
text = sys.stdin.readline()
except:
text = ""
signal.alarm(0)
return text
| python exit a blocking thread? | In my code I loop though raw_input() to see if the user has requested to quit. My app can quit before the user quits, but my problem is the app is still alive until I enter a key to return from the blocking function raw_input(). Can I do to force raw_input() to return by maybe sending it a fake input? Could I terminate the thread that it's on? (the only data it has is a single variable called wantQuit).
| [
"Why don't you just mark the thread as daemonic?\nFrom the docs:\n\nA thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon attribute.\n\n",
"You might use a non-blocking function to read user input.\nThis solution is windows-specific:\nimport msvcrt\nimport time\n\nwhile True:\n # test if there are keypresses in the input buffer\n while msvcrt.kbhit(): \n # read a character\n print msvcrt.getch()\n # no keypresses, sleep for a while...\n time.sleep(1)\n\nTo do something similar in Unix, which reads a line at a time, unlike the windows version reading char by char (thanks to Aaron Digulla for providing the link to the python user forum):\nimport sys\nimport select\n\ni = 0\nwhile i < 10:\n i = i + 1\n r,w,x = select.select([sys.stdin.fileno()],[],[],2)\n if len(r) != 0:\n print sys.stdin.readline()\n\nSee also: http://code.activestate.com/recipes/134892/\n",
"You can use this time out function that wraps your function. Here's the recipe from: http://code.activestate.com/recipes/473878/\ndef timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):\n '''This function will spwan a thread and run the given function using the args, kwargs and \n return the given default value if the timeout_duration is exceeded \n ''' \n import threading\n class InterruptableThread(threading.Thread):\n def __init__(self):\n threading.Thread.__init__(self)\n self.result = default\n def run(self):\n try:\n self.result = func(*args, **kwargs)\n except:\n self.result = default\n it = InterruptableThread()\n it.start()\n it.join(timeout_duration)\n if it.isAlive():\n return it.result\n else:\n return it.result\n\n",
"There is a post on the Python mailing list which explains how to do this for Unix:\n# this works on some platforms:\n\nimport signal, sys\n\ndef alarm_handler(*args):\n raise Exception(\"timeout\")\n\ndef function_xyz(prompt, timeout):\n signal.signal(signal.SIGALRM, alarm_handler)\n signal.alarm(timeout)\n sys.stdout.write(prompt)\n sys.stdout.flush()\n try:\n text = sys.stdin.readline()\n except:\n text = \"\"\n signal.alarm(0)\n return text\n\n"
] | [
6,
2,
2,
1
] | [] | [] | [
"multithreading",
"python",
"raw_input"
] | stackoverflow_0000552996_multithreading_python_raw_input.txt |
Q:
Python super class reflection
If I have Python code
class A():
pass
class B():
pass
class C(A, B):
pass
and I have class C, is there a way to iterate through its superclasses (A and B)? Something like this pseudocode:
>>> magicGetSuperClasses(C)
(<type 'A'>, <type 'B'>)
One solution seems to be the inspect module and its getclasstree function.
def magicGetSuperClasses(cls):
return [o[0] for o in inspect.getclasstree([cls]) if type(o[0]) == type]
but is this a "Pythonian" way to achieve the goal?
A:
C.__bases__ is a tuple of the superclasses, so you could implement your hypothetical function like so:
def magicGetSuperClasses(cls):
return cls.__bases__
But I imagine it would be easier to just reference cls.__bases__ directly in most cases.
A:
@John: Your snippet doesn't work -- you are returning the class of the base classes (which are also known as metaclasses). You really just want cls.__bases__:
class A: pass
class B: pass
class C(A, B): pass
c = C() # Instance
assert C.__bases__ == (A, B) # Works
assert c.__class__.__bases__ == (A, B) # Works
def magicGetSuperClasses(clz):
return tuple([base.__class__ for base in clz.__bases__])
assert magicGetSuperClasses(C) == (A, B) # Fails
Also, if you're using Python 2.4+ you can use generator expressions instead of creating a list (via []), then turning it into a tuple (via tuple). For example:
def get_base_metaclasses(cls):
    """Returns the metaclasses of all the base classes of cls."""
    return tuple(base.__class__ for base in cls.__bases__)
That's a somewhat confusing example, but genexps are generally easy and cool. :)
A:
The inspect module was a good start, use the getmro function:
Return a tuple of class cls’s base classes, including cls, in method resolution order. No class appears more than once in this tuple. ...
>>> class A: pass
>>> class B: pass
>>> class C(A, B): pass
>>> import inspect
>>> inspect.getmro(C)[1:]
(<class __main__.A at 0x8c59f2c>, <class __main__.B at 0x8c59f5c>)
The first element of the returned tuple is C, you can just disregard it.
A:
If you need to know the order in which super() would call the classes, you can use C.__mro__ and therefore don't need inspect.
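For illustration (note that __mro__ only exists on new-style classes, i.e. ones that ultimately derive from object):
class A(object): pass
class B(object): pass
class C(A, B): pass

print C.__mro__
# (<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <type 'object'>)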
| Python super class reflection | If I have Python code
class A():
pass
class B():
pass
class C(A, B):
pass
and I have class C, is there a way to iterate through it's super classed (A and B)? Something like pseudocode:
>>> magicGetSuperClasses(C)
(<type 'A'>, <type 'B'>)
One solution seems to be inspect module and getclasstree function.
def magicGetSuperClasses(cls):
return [o[0] for o in inspect.getclasstree([cls]) if type(o[0]) == type]
but is this a "Pythonian" way to achieve the goal?
| [
"C.__bases__ is an array of the super classes, so you could implement your hypothetical function like so:\ndef magicGetSuperClasses(cls):\n return cls.__bases__\n\nBut I imagine it would be easier to just reference cls.__bases__ directly in most cases.\n",
"@John: Your snippet doesn't work -- you are returning the class of the base classes (which are also known as metaclasses). You really just want cls.__bases__:\nclass A: pass\nclass B: pass\nclass C(A, B): pass\n\nc = C() # Instance\n\nassert C.__bases__ == (A, B) # Works\nassert c.__class__.__bases__ == (A, B) # Works\n\ndef magicGetSuperClasses(clz):\n return tuple([base.__class__ for base in clz.__bases__])\n\nassert magicGetSuperClasses(C) == (A, B) # Fails\n\nAlso, if you're using Python 2.4+ you can use generator expressions instead of creating a list (via []), then turning it into a tuple (via tuple). For example:\ndef get_base_metaclasses(cls):\n \"\"\"Returns the metaclass of all the base classes of cls.\"\"\"\n return tuple(base.__class__ for base in clz.__bases__)\n\nThat's a somewhat confusing example, but genexps are generally easy and cool. :)\n",
"The inspect module was a good start, use the getmro function:\n\nReturn a tuple of class cls’s base classes, including cls, in method resolution order. No class appears more than once in this tuple. ...\n\n>>> class A: pass\n>>> class B: pass\n>>> class C(A, B): pass\n>>> import inspect\n>>> inspect.getmro(C)[1:]\n(<class __main__.A at 0x8c59f2c>, <class __main__.B at 0x8c59f5c>)\n\nThe first element of the returned tuple is C, you can just disregard it.\n",
"if you need to know to order in which super() would call the classes you can use C.__mro__ and don't need inspect therefore.\n"
] | [
37,
11,
4,
2
] | [] | [] | [
"python",
"reflection"
] | stackoverflow_0000025807_python_reflection.txt |
Q:
Can anyone provide a more pythonic way of generating the morris sequence?
I'm trying to generate the Morris sequence in Python. My current solution is below, but I feel like I just wrote C in Python. Can anyone provide a more Pythonic solution?
def morris(x):
a = ['1', '11']
yield a[0]
yield a[1]
while len(a) <= x:
s = ''
count = 1
al = a[-1]
for i in range(0,len(al)):
if i+1 < len(al) and al[i] == al[i+1]:
count += 1
else:
s += '%s%s' % (count, al[i])
count = 1
a.append(s)
yield s
a = [i for i in morris(30)]
A:
itertools.groupby seems to fit perfectly! Just define a next_morris function as follows:
def next_morris(number):
return ''.join('%s%s' % (len(list(group)), digit)
for digit, group in itertools.groupby(str(number)))
That's all!!! Look:
print next_morris(1)
11
print next_morris(111221)
312211
I could use that to make a generator:
def morris_generator(maxlen, start=1):
num = str(start)
while len(num) < maxlen:
yield int(num)
num = next_morris(num)
Usage:
for n in morris_generator(10):
print n
results:
1
11
21
1211
111221
312211
13112221
A:
from itertools import groupby, islice
def morris():
morris = '1'
yield morris
while True:
morris = groupby(morris)
morris = ((len(list(group)), key) for key, group in morris)
morris = ((str(l), k) for l, k in morris)
morris = ''.join(''.join(t) for t in morris)
yield morris
print list(islice(morris(), 10))
First of all I'd make the iterator infinite and let the consumer decide how much of it they want. That way they could either get every Morris number that is shorter than x, or the first x numbers, etc.
Then there is obviously no need to store all the previous Morris numbers in a list, since the recursion is only n := f(n-1) anyway.
Lastly, using itertools to give it a functional touch is always worth a geek point or two ;) I split the generator expression into several lines to make it a bit easier on the eye.
The main ugliness in this solution comes from the fact that len() can't be called on an iterator and gives us an int where we need a str. The other hiccup is the nested str.join() needed to flatten the whole thing into a str again.
If you want to start the sequence from arbitrary numbers, define the function like this:
def morris(morris=None):
if morris is None:
morris = '1'
[...]
If you want to collapse that generator's body into a single expression, you can write it like this:
def morris():
morris = '1'
yield morris
while True:
print morris
morris = ''.join(''.join(t)
for t in ((str(len(list(group))), key)
for key, group in groupby(morris)))
yield morris
I'm not sure I like splitting it into two functions, but this seems to be the most readable solution:
def m_groupby(s):
for key, group in groupby(s):
yield str(len(list(group)))
yield key
def morris():
morris = '1'
yield morris
while True:
morris = ''.join(m_groupby(morris))
yield morris
Hope you like it!
| Can anyone provide a more pythonic way of generating the morris sequence? | I'm trying to generate the morris sequence in python. My current solution is below, but I feel like I just wrote c in python. Can anyone provide a more pythonic solution?
def morris(x):
a = ['1', '11']
yield a[0]
yield a[1]
while len(a) <= x:
s = ''
count = 1
al = a[-1]
for i in range(0,len(al)):
if i+1 < len(al) and al[i] == al[i+1]:
count += 1
else:
s += '%s%s' % (count, al[i])
count = 1
a.append(s)
yield s
a = [i for i in morris(30)]
| [
"itertools.groupby seems to fit perfectly! Just define a next_morris function as follows:\ndef next_morris(number):\n return ''.join('%s%s' % (len(list(group)), digit)\n for digit, group in itertools.groupby(str(number)))\n\nThat's all!!! Look:\nprint next_morris(1)\n11\nprint next_morris(111221)\n312211\n\n\nI could use that to make a generator:\ndef morris_generator(maxlen, start=1):\n num = str(start)\n while len(num) < maxlen:\n yield int(num)\n num = next_morris(num)\n\nUsage:\nfor n in morris_generator(10):\n print n\n\nresults:\n1\n11\n21\n1211\n111221\n312211\n13112221\n\n",
"from itertools import groupby, islice\n\ndef morris():\n morris = '1'\n yield morris\n while True:\n morris = groupby(morris)\n morris = ((len(list(group)), key) for key, group in morris)\n morris = ((str(l), k) for l, k in morris)\n morris = ''.join(''.join(t) for t in morris)\n yield morris\n\nprint list(islice(morris(), 10))\n\nFirst of all I'd make the iterator infinite and let the consumer decide, how much of it he wants. That way he could either get every morris number that is shorter than x or the first x numbers, etc.\nThen there is obviously no need to store the whole list of previous morris numbers in a list, since the recursion is only n := f(n-1) anyway.\nLastly, using itertools to give it a functional touch is always worth a geek point or two ;) I split the generator expression into several lines to make it a bit easier on the eye.\nThe main ugliness in this solution comes from the fact that len() can't be called on an iterator and gives us an int where we need a str. The other hickup is the nested str.join) to flatten the whole thing into a str again.\nIf you want to start the sequence from arbitrary numbers, define the function like this:\ndef morris(morris=None):\n if morris is None:\n morris = '1'\n[...]\n\nIf you want to turn around that generator, you can write it like this:\ndef morris():\n morris = '1'\n yield morris\n while True:\n print morris\n morris = ''.join(''.join(t) \n for t in ((str(len(list(group))), key) \n for key, group in groupby(morris)))\n yield morris\n\nI'm not sure i like the splitting into two functions, but this seems to be the most readable solution:\ndef m_groupby(s):\n for key, group in groupby(s):\n yield str(len(list(group)))\n yield key\n\ndef morris():\n morris = '1'\n yield morris\n while True:\n morris = ''.join(m_groupby(morris))\n yield morris\n\nHope you like it!\n"
] | [
24,
6
] | [] | [] | [
"python",
"python_itertools",
"sequences"
] | stackoverflow_0000553871_python_python_itertools_sequences.txt |
Q:
What's a way to create flash animations with Python?
I have a set of Python scripts that process photos. What I would like is to be able to create some kind of Flash presentation out of those images.
Is there any package or 'framework' that would help to do this?
A:
I don't know of any Python-specific solutions but there are multiple tools to handle this:
You can create a flash file with dummy pictures which you then replace using mtasc, swfmill, SWF Tools or similar. This way means lots of trouble but allows you to create a dynamic flash file.
If you don't need dynamic content, though, you're better off creating a video with ffmpeg. It can create videos out of multiple images, so if you're somehow able to render the frames you want in the presentation, you could use ffmpeg to make a video out of it.
If you only want charts, use SWF Charts.
You could use external languages that have a library for creating flash files.
And finally there was another scripting language that could be compiled into several other languages, where SWF was one of the targets, but I can't remember its name right now.
A:
You should generate a formatted list with the data for your photos: paths and whatever else you need in your presentation.
That data you then load into a SWF, where your presentation happens.
That way you let Python do what it does best and Flash what Flash does best.
You might find already-made solutions for Flash galleries / slideshows. http://airtightinteractive.com/simpleviewer/ is a famous one. You can load your custom XML into it.
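For illustration, a minimal sketch of the Python side (the element and attribute names here are made up, not SimpleViewer's actual schema — check its documentation for the real format):
photos = ["img/cat1.jpg", "img/cat2.jpg"]  # placeholder paths

f = open("gallery.xml", "w")
f.write("<gallery>\n")
for path in photos:
    f.write('    <image url="%s"/>\n' % path)
f.write("</gallery>\n")
f.close()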
A:
Check out Ming, it seems to have Python bindings.
A:
Ming is powerful but you might not find it pythonic to work with.
I prefer Haxe for Flash work. (It's the successor of MTASC)
| What's a way to create flash animations with Python? | I'm having a set of Python scripts that process the photos. What I would like is to be able to create some kind of flash-presentation out of those images.
Is there any package or 'framework' that would help to do this?
| [
"I don't know of any Python-specific solutions but there are multiple tools to handle this:\nYou can create a flash file with dummy pictures which you then replace using mtasc, swfmill, SWF Tools or similar. This way means lots of trouble but allows you to create a dynamic flash file.\nIf you don't need dynamic content, though, you're better off creating a video with ffmpeg. It can create videos out of multiple images, so if you're somehow able to render the frames you want in the presentation, you could use ffmpeg to make a video out of it.\nIf you only want charts, use SWF Charts.\nYou could use external languages that have a library for creating flash files.\nAnd finally there was another script language that could be compiled into several other languages, where swf waas one of the targets, but I can't remember its name right now.\n",
"You should generate a formated list with the data to your photos, path and what else you need in your presentation.\nThat data you load into a SWF, where your presentation happens.\nLike that you can let python do what it does and flash what flash does best.\nYou might find allready made solutions for flash galleries / slideshows. http://airtightinteractive.com/simpleviewer/ is a famous one. You can load your custom xml in it.\n",
"Check out Ming, it seems to have Python bindings.\n",
"Ming is powerful but you might not find it pythonic to work with.\nI prefer Haxe for Flash work. (It's the successor of MTASC)\n"
] | [
3,
2,
1,
1
] | [] | [] | [
"flash",
"python"
] | stackoverflow_0000531377_flash_python.txt |
Q:
Creating an inheritable Python type with PyCxx
A friend and I have been toying around with various Python C++ wrappers lately, trying to find one that meets the needs of both some professional and hobby projects. We've both homed in on PyCxx as a good balance between being lightweight and easy to interface with, while hiding away some of the ugliest bits of the Python C API. PyCxx is not terribly robust when it comes to exposing types, however (i.e. it instructs you to create type factories rather than implement constructors), and we have been working on filling in the gaps in order to expose our types in a more functional manner. In order to fill these gaps we turn to the C API.
This has left us with some questions, however, that the API documentation doesn't seem to cover in much depth (and when it does, the answers are occasionally contradictory). The basic overarching question is simply this: What must be defined for a Python type to function as a base type? We've found that for the PyCxx class to function as a type we need to define tp_new and tp_dealloc explicitly and set the type as a module attribute, and that we need to have Py_TPFLAGS_BASETYPE set on [our type]->tp_flags, but beyond that we're still groping in the dark.
Here is our code thus far:
class kitty : public Py::PythonExtension<kitty> {
public:
kitty() : Py::PythonExtension<kitty>() {}
virtual ~kitty() {}
static void init_type() {
behaviors().name("kitty");
add_varargs_method("speak", &kitty::speak);
}
static PyObject* tp_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) {
return static_cast<PyObject*>(new kitty());
}
static void tp_dealloc(PyObject *obj) {
kitty* k = static_cast<kitty*>(obj);
delete k;
}
private:
Py::Object speak(const Py::Tuple &args) {
cout << "Meow!" << endl;
return Py::None();
}
};
// cat Module
class cat_module : public Py::ExtensionModule<cat_module> {
public:
cat_module() : Py::ExtensionModule<cat_module>("cat") {
kitty::init_type();
// Set up additional properties on the kitty type object
PyTypeObject* kittyType = kitty::type_object();
kittyType->tp_new = &kitty::tp_new;
kittyType->tp_dealloc = &kitty::tp_dealloc;
kittyType->tp_flags |= Py_TPFLAGS_BASETYPE;
// Expose the kitty type through the module
module().setAttr("kitty", Py::Object((PyObject*)kittyType));
initialize();
}
virtual ~cat_module() {}
};
extern "C" void initcat() {
static cat_module* cat = new cat_module();
}
And our Python test code looks like this:
import cat
class meanKitty(cat.kitty):
def scratch(self):
print "hiss! *scratch*"
myKitty = cat.kitty()
myKitty.speak()
meanKitty = meanKitty()
meanKitty.speak()
meanKitty.scratch()
The curious bit is that if you comment all the meanKitty bits out, the script runs and the cat meows just fine, but if you uncomment the meanKitty class suddenly Python gives us this:
AttributeError: 'kitty' object has no attribute 'speak'
Which confuses the crap out of me. It's as if inheriting from it hides the base class entirely! If anyone could provide some insight into what we are missing, it would be appreciated! Thanks!
EDIT: Okay, so about five seconds after posting this I recalled something that we had wanted to try earlier. I added the following code to kitty -
virtual Py::Object getattr( const char *name ) {
return getattr_methods( name );
}
And now we're meowing on both kitties in Python! Still not fully there, however, because now I get this:
Traceback (most recent call last):
File "d:\Development\Junk Projects\PythonCxx\Toji.py", line 12, in <module>
meanKitty.scratch()
AttributeError: scratch
So still looking for some help! Thanks!
A:
You must declare kitty as class new_style_class: public Py::PythonClass< new_style_class >. See simple.cxx and the Python test case at http://cxx.svn.sourceforge.net/viewvc/cxx/trunk/CXX/Demo/Python3/.
Python 2.2 introduced new-style classes which among other things allow the user to subclass built-in types (like your new built-in type). Inheritance didn't work in your example because it defines an old-style class.
A:
I've only done a tiny bit of work with PyCxx, and I'm not at a compiler, but I suspect what you're seeing is similar to the following situation, as expressed in pure Python:
>>> class C(object):
... def __getattribute__(self, key):
... print 'C', key
...
>>> class D(C):
... def __init__(self):
... self.foo = 1
...
>>> D().foo
C foo
>>>
My best guess is that the C++ getattr method should check this.ob_type->tp_dict (which will of course be the subclass' dict, if this is an instance of the subclass) and only call getattr_methods if you fail to find name in there (see the PyDict_ API functions).
Also, I don't think you should set tp_dealloc yourself: I don't see how your implementation improves on PyCxx's default extension_object_deallocator.
| Creating an inheritable Python type with PyCxx | A friend and I have been toying around with various Python C++ wrappers lately, trying to find one that meets the needs of both some professional and hobby projects. We've both honed in on PyCxx as a good balance between being lightweight and easy to interface with while hiding away some of the ugliest bits of the Python C api. PyCxx is not terribly robust when it comes to exposing types, however (ie: it instructs you to create type factories rather than implement constructors), and we have been working on filling in the gaps in order to expose our types in a more functional manner. In order to fill these gaps we turn to the C api.
This has left us with some questions, however, that the api documentation doesn't seem to cover in much depth (and when it does, the answers are occasionally contradictory). The basic overarching question is simply this: What must be defined for a Python type to function as a base type? We've found that for the PyCxx class to function as a type we need to define tp_new and tp_dealloc explicitly and set the type as a module attribute, and that we need to have Py_TPFLAGS_BASETYPE set on [our type]->tp_flags, but beyond that we're still groping in the dark.
Here is our code thus far:
class kitty : public Py::PythonExtension<kitty> {
public:
kitty() : Py::PythonExtension<kitty>() {}
virtual ~kitty() {}
static void init_type() {
behaviors().name("kitty");
add_varargs_method("speak", &kitty::speak);
}
static PyObject* tp_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) {
return static_cast<PyObject*>(new kitty());
}
static void tp_dealloc(PyObject *obj) {
kitty* k = static_cast<kitty*>(obj);
delete k;
}
private:
Py::Object speak(const Py::Tuple &args) {
cout << "Meow!" << endl;
return Py::None();
}
};
// cat Module
class cat_module : public Py::ExtensionModule<cat_module> {
public:
cat_module() : Py::ExtensionModule<cat_module>("cat") {
kitty::init_type();
// Set up additional properties on the kitty type object
PyTypeObject* kittyType = kitty::type_object();
kittyType->tp_new = &kitty::tp_new;
kittyType->tp_dealloc = &kitty::tp_dealloc;
kittyType->tp_flags |= Py_TPFLAGS_BASETYPE;
// Expose the kitty type through the module
module().setAttr("kitty", Py::Object((PyObject*)kittyType));
initialize();
}
virtual ~cat_module() {}
};
extern "C" void initcat() {
static cat_module* cat = new cat_module();
}
And our Python test code looks like this:
import cat
class meanKitty(cat.kitty):
def scratch(self):
print "hiss! *scratch*"
myKitty = cat.kitty()
myKitty.speak()
meanKitty = meanKitty()
meanKitty.speak()
meanKitty.scratch()
The curious bit is that if you comment all the meanKitty bits out, the script runs and the cat meows just fine, but if you uncomment the meanKitty class suddenly Python gives us this:
AttributeError: 'kitty' object has no attribute 'speak'
Which confuses the crap out of me. It's as if inheriting from it hides the base class entirely! If anyone could provide some insight into what we are missing, it would be appreciated! Thanks!
EDIT: Okay, so about five seconds after posting this I recalled something that we had wanted to try earlier. I added the following code to kitty -
virtual Py::Object getattr( const char *name ) {
return getattr_methods( name );
}
And now we're meowing on both kitties in Python! still not fully there, however, because now I get this:
Traceback (most recent call last):
File "d:\Development\Junk Projects\PythonCxx\Toji.py", line 12, in <module>
meanKitty.scratch()
AttributeError: scratch
So still looking for some help! Thanks!
| [
"You must declare kitty as class new_style_class: public Py::PythonClass< new_style_class >. See simple.cxx and the Python test case at http://cxx.svn.sourceforge.net/viewvc/cxx/trunk/CXX/Demo/Python3/.\nPython 2.2 introduced new-style classes which among other things allow the user to subclass built-in types (like your new built-in type). Inheritance didn't work in your example because it defines an old-style class.\n",
"I've only done a tiny bit of work with PyCxx, and I'm not at a compiler, but I suspect what you're seeing is similar to the following situation, as expressed in pure Python:\n>>> class C(object):\n... def __getattribute__(self, key):\n... print 'C', key\n... \n>>> class D(C):\n... def __init__(self):\n... self.foo = 1\n... \n>>> D().foo\nC foo\n>>> \n\nMy best guess is that the C++ getattr method should check this.ob_type->tp_dict (which will of course be the subclass' dict, if this an instance of the subclass) and only call getattr_methods if you fail to find name in there (see the PyDict_ API functions).\nAlso, I don't think you should set tp_dealloc yourself: I don't see how your implementation improves on PyCxx's default extension_object_deallocator.\n"
] | [
3,
1
] | [] | [] | [
"pycxx",
"python",
"python_c_api"
] | stackoverflow_0000548442_pycxx_python_python_c_api.txt |
Q:
Stackless python network performance degrading over time?
So I'm toying around with Stackless Python, writing a very simple webserver to teach myself programming with microthreads/tasklets. But now to my problem: when I run something like ab -n 100000 -c 50 http://192.168.0.192/ (100k requests, 50 concurrency) in Apache Bench, I get something like 6k req/s; the second time I run it I get 5.5k, the third time 5k, the fourth time 4.5k, and so on, all the way down to about 100 req/s.
The problem goes away when I restart the Python script, though.
Now my question is: why? Am I forgetting to delete tasklets? I've checked stackless.getruncount() (and it always seems to return 1, for some reason), so it doesn't seem like there are any dead tasklets hanging around. I've tried calling .kill() on all tasklets that are done; it didn't help. I just can't figure this one out.
import socket
import select
import stackless
import time
class socket_wrapper(object):
def __init__(self, sock, sockets):
super(socket_wrapper, self).__init__()
self.sock = sock
self.fileno = sock.fileno
self.sockets_list = sockets
self.channel = stackless.channel()
self.writable = False
self.error = False
def remove(self):
self.sock.close()
self.sockets_list.remove(self)
def send(self, data):
self.sock.send(data)
def push(self, bytes):
self.channel.send(self.sock.recv(bytes))
def stackless_accept(accept, handler, recv_size=1024, timeout=0):
sockets = [accept]
while True:
read, write, error = select.select(sockets, sockets, sockets, timeout)
for sock in read:
if sock is accept:
# Accept socket and create wrapper
sock = socket_wrapper(sock.accept()[0], sockets)
# Create tasklett for this connection
tasklet = stackless.tasklet(handler)
tasklet.setup(sock)
# Store socket
sockets.append(sock)
else:
# Send data to handler
sock.push(recv_size)
# Tag all writable sockets
for sock in write:
if sock is not accept:
sock.writable = True
# Tag all faulty sockets
for sock in error:
if sock is not accept:
sock.error = True
else:
pass # should do something here if the main socket is faulty
timeout = 0 if socket else 1
stackless.schedule()
def simple_handler(tsock):
data = ""
while data[-4:] != "\r\n\r\n":
data += tsock.channel.receive()
while not tsock.writable and not tsock.error:
stackless.schedule()
if not tsock.error:
tsock.send("HTTP/1.1 200 OK\r\nContent-length: 8\r\n\r\nHi there")
tsock.remove()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("192.168.0.192", 8000))
sock.listen(5)
stackless.tasklet(stackless_accept)(sock, simple_handler)
stackless.run()
A:
Two things.
First, please make class names start with an upper-case letter. It's more conventional and easier to read.
More importantly, in the stackless_accept function you accumulate a list of socket-wrapper objects, named sockets. This list appears to grow endlessly. Yes, you have a remove method, but it isn't always invoked. If a socket gets an error, it appears that it will be left in the collection forever.
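For illustration, a sketch of one way to plug that leak, replacing the error-tagging loop inside the questioner's stackless_accept (whether it is safe to drop the socket immediately depends on what the handler tasklet still needs):
# Tag all faulty sockets and drop them so the list can't grow forever
for sock in error:
    if sock is not accept:
        sock.error = True
        sock.remove()  # closes the socket and removes it from the sockets list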
| Stackless python network performance degrading over time? | So i'm toying around with stackless python, writing a very simple webserver to teach myself programming with microthreads/tasklets. But now to my problem, when I run something like ab -n 100000 -c 50 http://192.168.0.192/ (100k requests, 50 concurrency) in apache bench I get something like 6k req/s, the second time I run it I get 5.5k, third time 5k, fourth time, 4.5k, etc. all the way down to 100req/s or something.
The problem goes away when I restart the python script, though.
Now my question is why? Am i forgetting to delete tasklets? I've checked the stackless.getruncount() (and it always seems to return 1, for some reason) so it doesn't seem like there would be any dead tasklets hanging around? I've tried calling .kill() on all tasklets that are done, didn't help. I just can't figure this one out.
import socket
import select
import stackless
import time
class socket_wrapper(object):
def __init__(self, sock, sockets):
super(socket_wrapper, self).__init__()
self.sock = sock
self.fileno = sock.fileno
self.sockets_list = sockets
self.channel = stackless.channel()
self.writable = False
self.error = False
def remove(self):
self.sock.close()
self.sockets_list.remove(self)
def send(self, data):
self.sock.send(data)
def push(self, bytes):
self.channel.send(self.sock.recv(bytes))
def stackless_accept(accept, handler, recv_size=1024, timeout=0):
sockets = [accept]
while True:
read, write, error = select.select(sockets, sockets, sockets, timeout)
for sock in read:
if sock is accept:
# Accept socket and create wrapper
sock = socket_wrapper(sock.accept()[0], sockets)
# Create tasklett for this connection
tasklet = stackless.tasklet(handler)
tasklet.setup(sock)
# Store socket
sockets.append(sock)
else:
# Send data to handler
sock.push(recv_size)
# Tag all writable sockets
for sock in write:
if sock is not accept:
sock.writable = True
# Tag all faulty sockets
for sock in error:
if sock is not accept:
sock.error = True
else:
pass # should do something here if the main socket is faulty
timeout = 0 if socket else 1
stackless.schedule()
def simple_handler(tsock):
data = ""
while data[-4:] != "\r\n\r\n":
data += tsock.channel.receive()
while not tsock.writable and not tsock.error:
stackless.schedule()
if not tsock.error:
tsock.send("HTTP/1.1 200 OK\r\nContent-length: 8\r\n\r\nHi there")
tsock.remove()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("192.168.0.192", 8000))
sock.listen(5)
stackless.tasklet(stackless_accept)(sock, simple_handler)
stackless.run()
| [
"Two things.\nFirst, please make Class Name start with an Upper Case Letter. It's more conventional and easier to read.\nMore importantly, in the stackless_accept function you accumulate a list of Sock objects, named sockets. This list appears to grow endlessly. Yes, you have a remove, but it isn't always invoked. If the socket gets an error, then it appears that it will be left in the collection forever.\n"
] | [
14
] | [] | [] | [
"io",
"networking",
"performance",
"python",
"python_stackless"
] | stackoverflow_0000554805_io_networking_performance_python_python_stackless.txt |
Q:
Parameterised regular expression in Python
In Python, is there a better way to parameterise strings into regular expressions than doing it manually like this:
import re

test = 'flobalob'
names = ['a', 'b', 'c']
for name in names:
regexp = "%s" % (name)
print regexp, re.search(regexp, test)
This noddy example tries to match each name in turn. I know there are better ways of doing that, but it's a simple example purely to illustrate the point.
The answer appears to be no, there's no real alternative. The best way to parameterise regular expressions in Python is as above or with derivatives such as str.format(). I tried to write a generic question, rather than 'fix ma codez, kthxbye'. For those still interested, I've fleshed out an example closer to my needs here:
import os, re

for diskfilename in os.listdir('.'):
filenames = ['bob.txt', 'fred.txt', 'paul.txt']
for filename in filenames:
name, ext = filename.split('.')
regexp = "%s.*\.%s" % (name, ext)
m = re.search(regexp, diskfilename)
if m:
print diskfilename, regexp, re.search(regexp, diskfilename)
# ...
I'm trying to figure out the 'type' of a file based on its filename, of the form <filename>_<date>.<extension>. In my real code, the filenames array is a dict, containing a function to call once a match is found.
Other ways I've considered doing it:
Have a regular expression in the array. I already have an array of filenames without any regular expression magic, so I am loathe to do this. I have done this elsewhere in my code and its a mess (though necessary there).
Match only on the start of the filename. This would work, but would break with .bak copies of files, etc. At some point I'll probably want to extract the date from the filename so would need to use a regular expression anyway.
Thanks for the responses suggesting alternatives to regular expressions to achieve the same end result. I was more interested in parameterising regular expressions for now and for the future. I had never come across fnmatch before, so it's all useful in the long run.
A:
Well, as you build a regexp from a string, I see no other way. But you could parameterise the string itself with a dictionary:
d = {'bar': 'a', 'foo': 'b'}
regexp = '%(foo)s|%(bar)s' % d
Or, depending on the problem, you could use list comprehensions:
vlist = ['a', 'b', 'c']
regexp = '|'.join(vlist)
EDIT: Mat clarified his question, this makes things different and the above mentioned is totally irrelevant.
I'd probably go with an approach like this:
import re

filename = 'bob_20090216.txt'
regexps = {'bob': 'bob_[0-9]+.txt',
'fred': 'fred_[0-9]+.txt',
'paul': 'paul_[0-9]+.txt'}
for filetype, regexp in regexps.items():
m = re.match(regexp, filename)
    if m is not None:
print '%s is of type %s' % (filename, filetype)
A:
import fnmatch, os
filenames = ['bob.txt', 'fred.txt', 'paul.txt']
# 'b.txt.b' -> 'b.txt*.b'
filepatterns = ((f, '*'.join(os.path.splitext(f))) for f in filenames)
diskfilenames = filter(os.path.isfile, os.listdir('.'))
pattern2filenames = dict((fn, fnmatch.filter(diskfilenames, pat))
for fn, pat in filepatterns)
print pattern2filenames
Output:
{'bob.txt': ['bob20090217.txt'], 'paul.txt': [], 'fred.txt': []}
Answers to previous revisions of your question follow:
I don't understand your updated question but filename.startswith(prefix) might be sufficient in your specific case.
After you've updated your question the old answer below is less relevant.
Use re.escape(name) if you'd like to match a name literally.
Any tool available for string parametrization is applicable here. For example:
import string
print string.Template("$a $b").substitute(a=1, b="B")
# 1 B
Or using str.format() in Python 2.6+:
print "{0.imag}".format(1j+2)
# 1.0
A:
Maybe the glob and fnmatch modules can be of some help to you?
| Parameterised regular expression in Python | In Python, is there a better way to parameterise strings into regular expressions than doing it manually like this:
test = 'flobalob'
names = ['a', 'b', 'c']
for name in names:
regexp = "%s" % (name)
print regexp, re.search(regexp, test)
This noddy example tries to match each name in turn. I know there's better ways of doing that, but its a simple example purely to illustrate the point.
The answer appears to be no, there's no real alternative. The best way to paramaterise regular expressions in python is as above or with derivatives such as str.format(). I tried to write a generic question, rather than 'fix ma codez, kthxbye'. For those still interested, I've fleshed out an example closer to my needs here:
for diskfilename in os.listdir(''):
filenames = ['bob.txt', 'fred.txt', 'paul.txt']
for filename in filenames:
name, ext = filename.split('.')
regexp = "%s.*\.%s" % (name, ext)
m = re.search(regexp, diskfilename)
if m:
print diskfilename, regexp, re.search(regexp, diskfilename)
# ...
I'm trying to figure out the 'type' of a file based on its filename, of the form <filename>_<date>.<extension>. In my real code, the filenames array is a dict, containing a function to call once a match is found.
Other ways I've considered doing it:
Have a regular expression in the array. I already have an array of filenames without any regular expression magic, so I am loathe to do this. I have done this elsewhere in my code and its a mess (though necessary there).
Match only on the start of the filename. This would work, but would break with .bak copies of files, etc. At some point I'll probably want to extract the date from the filename so would need to use a regular expression anyway.
Thanks for the responses suggesting alternatives to regular expressions to achieve the same end result. I was more interested in parameterising regular expressions for now and for the future. I never come across fnmatch, so its all useful in the long run.
| [
"Well, as you build a regexp from a string, I see no other way. But you could parameterise the string itself with a dictionary:\nd = {'bar': 'a', 'foo': 'b'}\nregexp = '%(foo)s|%(bar)s' % d\n\nOr, depending on the problem, you could use list comprehensions:\nvlist = ['a', 'b', 'c']\nregexp = '|'.join([s for s in vlist])\n\nEDIT: Mat clarified his question, this makes things different and the above mentioned is totally irrelevant.\nI'd probably go with an approach like this:\nfilename = 'bob_20090216.txt'\n\nregexps = {'bob': 'bob_[0-9]+.txt',\n 'fred': 'fred_[0-9]+.txt',\n 'paul': 'paul_[0-9]+.txt'}\n\nfor filetype, regexp in regexps.items():\n m = re.match(regexp, filename)\n if m != None:\n print '%s is of type %s' % (filename, filetype)\n\n",
"import fnmatch, os\n\nfilenames = ['bob.txt', 'fred.txt', 'paul.txt']\n\n # 'b.txt.b' -> 'b.txt*.b'\nfilepatterns = ((f, '*'.join(os.path.splitext(f))) for f in filenames) \ndiskfilenames = filter(os.path.isfile, os.listdir(''))\npattern2filenames = dict((fn, fnmatch.filter(diskfilenames, pat))\n for fn, pat in filepatterns)\n\nprint pattern2filenames\n\nOutput:\n{'bob.txt': ['bob20090217.txt'], 'paul.txt': [], 'fred.txt': []}\n\nAnswers to previous revisions of your question follow:\n\nI don't understand your updated question but filename.startswith(prefix) might be sufficient in your specific case.\nAfter you've updated your question the old answer below is less relevant.\n\n\nUse re.escape(name) if you'd like to match a name literally.\nAny tool available for string parametrization is applicable here. For example:\nimport string\nprint string.Template(\"$a $b\").substitute(a=1, b=\"B\")\n# 1 B\n\nOr using str.format() in Python 2.6+:\nprint \"{0.imag}\".format(1j+2)\n# 1.0\n\n\n",
"may be glob and fnmatch modules can be of some help for you?\n"
] | [
6,
2,
2
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000554957_python_regex.txt |
Q:
Python or Ruby for a .NET developer?
I'm a C# .NET developer and I work mostly on ASP.NET projects.
I want to learn a new programming language,
to improve my programming skills by experiencing a new language,
to see something different than the Microsoft environment,
and maybe to think in a different way.
I'm focusing on two languages for my goal: Python and Ruby.
Which one would you suggest for me?
What are the pros and cons of each compared to the other?
Are they worth learning?
EDIT: Sorry, I edited my post but didn't mention it here;
Ruby on Rails has been replaced with Ruby.
A:
Both languages are powerful and fun. Either would be a useful addition to your tool box.
Python has a larger community and probably more mature documentation and libraries. Its object-orientation is a little inconsistent and feels (to me, IMHO) like something that was bolted on to the language. You can alter class behaviour at runtime (monkey-patching) but not for the precompiled classes and it's generally frowned-upon.
Ruby might be a little more different to your current experience: it has some flavour of Smalltalk (method-calling is more correctly message-sending for example). Its object-orientation is built-in from scratch, all classes are open to modification and it's an accepted - if slightly scary - practise. The community is smaller, the libraries less mature and documentation coverage is less.
Both languages will have some level of broken backward compatibility in their next major releases, both have .Net implementations (IronPython is production, IronRuby getting there). Both have web frameworks that reflect their strengths (search SO for the Django/Rails debate).
If I'd never seen Ruby, I'd be very happy working in Python, and have done so without suffering when necessary. I always found myself wishing I could do the work in Ruby. But that's my opinion, YMMV.
Edit: Come to think of it, and even though it pains me, if you're seeking to leverage your knowledge of the .Net framework, you might be best off looking at IronPython, as it's more mature than the Ruby equivalent.
A:
First... good for you for wanting to broaden your knowledge! Second, you are comparing a language (Python) with a web framework (Ruby on Rails).
I think your best option is to try a few different frameworks in both Python and Ruby, do the same fairly simple task in each, and only then pick which one you'd like to learn more about. Rails is nice for Ruby, but it's not the only one out there. For Python I like Pylons and Django.
Pros and cons: Ruby is a little cleaner, language-wise, than Python. Python has a much larger set of modules.
Is it worth learning? Yes, to both Python and Ruby.
A:
If you're a beginner, I would recommend you try Django if you decide to start learning Python. Of course if you decide Ruby is your choice of flavor, Rails is the obvious way to go. Whichever language you choose, I can assure you it will be a good choice.
Having said that, my personal choice is Python. I like the language, I like the community, and I use Python for almost every occasion. I use it for command-line apps, GUI apps, and I use it for web apps (Django). Oh and I use it for system administration scripts on Windows and Linux as well.
Having said that as well, I would recommend you learn a language like Haskell or Lisp as well. That will really open your eyes to a new perspective to programming. Furthermore, since you say you are mostly familiar with the .Net framework, I would really recommend you start with F# since you'll already be familiar with the libraries and it will make the transition much more smoother. Either way, good luck.
A:
It's always valuable to learn a new programming language. And both Python and Ruby are good ones to know. It's important to note that while Python is a language, Ruby on Rails is a framework. IMHO, you should learn Ruby before you learn Rails.
Go try ruby! to see if you like it. If you do, then try Rails. Otherwise, try Python. Both are similarly useful. To me, Ruby is more "fun". If you like Lisp, you'll probably like Ruby. If you like C, you might prefer Python. Try them both!
A:
Rule of thumb - Python if you like strict rules and Ruby if you hate them.
Another one: if you adore JavaScript - Ruby is your choice :)
A:
What? No mention of IronPython?
IronPython is the flagship language of the DLR. It allows you to use all the familiar .NET libraries, but through Python.
I would definitely try Python and IronPython. You'll learn a lot and might even sneak it into your current projects (you can embed an IronPython engine in a .NET application).
A:
If you're looking to learn Ruby on Rails, the guides site has a great guide for getting started and the further guides for improving your rails-fu.
Also, Tore Darell has written a Survivor's Guide for Ruby on Rails which could prove useful to you too.
A:
I'd get in on Ruby. Seems to have a larger (or at least more active) community, the pace of new projects & continued development is second-to-none, and the learning resources seem to outnumber & outpace those of Python. I could be wrong, but these are my impressions.
| Python or Ruby for a .NET developer? | I'm a C# .NET developer and I work on mostly ASP.NET projects.
I want to learn a new programming language,
to improve my programming skills by experiencing a new language,
to see something different then microsoft environment,
and maybe to think in a different way.
I focus on two languages for my goal. Python and Ruby.
Which one do you offer for me ?
Pros and cons of them on each other?
Is it worth learning them ?
EDIT : Sorry I editted my post but not inform here,
Ruby on Rails replaced with Ruby.
| [
"Both languages are powerful and fun. Either would be a useful addition to your tool box.\nPython has a larger community and probably more mature documentation and libraries. Its object-orientation is a little inconsistent and feels (to me, IMHO) like something that was bolted on to the language. You can alter class behaviour at runtime (monkey-patching) but not for the precompiled classes and it's generally frowned-upon.\nRuby might be a little more different to your current experience: it has some flavour of Smalltalk (method-calling is more correctly message-sending for example). Its object-orientation is built-in from scratch, all classes are open to modification and it's an accepted - if slightly scary - practise. The community is smaller, the libraries less mature and documentation coverage is less.\nBoth languages will have some level of broken backward compatibility in their next majopr releases, both have .Net implementations (IronPython is production, IronRuby getting there). Both have web frameworks that reflect their strengths (search SO for the Django/Rails debate).\nIf I'd never seen Ruby, I'd be very happy working in Python, and have done so without suffering when necessary. I always found myself wishing I could do the work in Ruby. But that's my opinion, YMMV.\nEdit: Come to think of it, and even though it pains me, if you're seeking to leverage your knowledge of the .Net framework, you might be best off looking at IronPython, as it's more mature than the Ruby equivalent.\n",
"First... good for you for wanting to broaden your knowledge! Second, you are comparing a language (Python) with a web framework (Ruby on Rails).\nI think your best option is to try a few different frameworks in both Python and Ruby, do the same fairly simple task in each, and only then pick which one you'd like to learn more about. Rails is nice for Ruby, but it's not the only one out there. For Python I like Pylons and Django.\nPros and cons: Ruby is a little cleaner, language-wise, than Python. Python has a much larger set of modules.\nIs it worth learning? Yes, to both Python and Ruby.\n",
"If you're a beginner, I would recommend you try Django if you decide to start learning Python. Of course if you decide Ruby is your choice of flavor, Rails is the obvious way to go. Whichever language you choose, I can assure you it will be a good choice.\nHaving said that, my personal choice is Python. I like the language, I like the community, and I use Python for almost every occasion. I use it for command-line apps, GUI apps, and I use it for web apps (Django). Oh and I use it for system administration scripts on Windows and Linux as well.\nHaving said that as well, I would recommend you learn a language like Haskell or Lisp as well. That will really open your eyes to a new perspective to programming. Furthermore, since you say you are mostly familiar with the .Net framework, I would really recommend you start with F# since you'll already be familiar with the libraries and it will make the transition much more smoother. Either way, good luck.\n",
"It's always valuable to learn a new programming language. And both Python and Ruby are good ones to know. It's important to note that while Python is a language, Ruby on Rails is a framework. IMHO, you should learn Ruby before you learn Rails.\nGo try ruby! to see if you like it. If you do, then try Rails. Otherwise, try Python. Both are similarly useful. To me, Ruby is more \"fun\". If you like Lisp, you'll probably like Ruby. If you like C, you might prefer Python. Try them both!\n",
"Rule of thumb - Python if you like strict rules and Ruby if you hate them. \nAnother one: if you adore JavaScript - Ruby is your choice :)\n",
"What? No mention of IronPython?\nIronPython is the flagship language of the DLR. It allows you to use all the familiar .NET libraries, but through Python.\nI would definitely try Python and IronPython. You'll learn a lot and might even sneak it into your current projects (you can embed an IronPython engine in a .NET application).\n",
"If you're looking to learn Ruby on Rails, the guides site has a great guide for getting started and the further guides for improving your rails-fu.\nAlso, Tore Darell has written a Survivor's Guide for Ruby on Rails which could prove useful to you too.\n",
"I'd get in on Ruby. Seems to have a larger (or at least more active) community, the pace of new projects & continued development is second-to-none, and the learning resources seem to outnumber & outpace those of Python. I could be wrong, but these are my impressions.\n"
] | [
16,
6,
3,
2,
2,
2,
1,
0
] | [] | [] | [
"comparison",
"python",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000551465_comparison_python_ruby_ruby_on_rails.txt |
Q:
Multiple output files
edit: Initially I was trying to be general but it came out vague. I've included more detail below.
I'm writing a script that pulls in data from two large CSV files, one of people's schedules and the other of information about their schedules. The data is mined and combined to eventually create Pajek-format graphs of people's connections for Monday-Saturday, with a seventh graph representing all connections over the week, using a string of 1's and 0's to indicate on which days of the week the connections are made. This last graph is a break from the Pajek format and is used by a separate program written by another researcher.
Pajek format has a large header, and then lists connections as (vertex1 vertex2) unordered pairs. It's difficult to store these pairs in a dictionary, because there are often multiple connections on the same day between the same pair of vertices.
I'm wondering what the best way to output these graphs is. Should I make one large graph and have a second script deconstruct it into several smaller graphs? Should I keep seven streams open and write to them as I determine each connection, or should I keep some other data structure for each and output them when I can (like a queue)?
A:
I would open seven file streams, as accumulating the data might be quite memory-intensive if there's a lot of it. Of course that is only an option if you can sort them live and don't first need to read all the data to do the sorting.
A:
"...pulls in data from two large CSV files, one of people's schedules and the other of information about their schedules." Vague, but I think I get it.
"The data is mined and combined to eventually create pajek format graphs for Monday-Sat of peoples connections," Mined and combined. Cool. Where? In this script? In another application? By some 3rd-party module? By some web service?
Is this row-at-a-time algorithm? Does one row of input produce one connection that gets sent to one or more daily graphs?
Is this an algorithm that has to see an entire schedule before it can produce anything? [If so, it's probably wrong, but I don't really know and your question is pretty vague on this central detail.]
"... a seventh graph representing all connections over the week with a string of 1's and 0's to indicate which days of the week the connections are made." Incomplete, but probably good enough.
def makeKey2( row2 ):
return ( row2[1], row2[2] ) # Whatever the lookup key is for source2
def makeKey1( row1 ):
return ( row1[3], row1[0] ) # Whatever the lookup key is for source1
dayFile = [ open("day%d.pajek","w") for i in range(6) ]
combined = open("combined.dat","w")
source1 = open( schedules, "r" )
rdr1= csv.reader( source1 )
source2 = open( aboutSchedules, "r" )
rdr2= csv.reader( source2 )
# "Combine" usually means a relational join between source 1 and source 2.
# We'll assume that source2 is a small-ish dimension and the
# source1 is largish facts
aboutDim = dict( (makeKey2(row),row) for row in rdr2 )
for row in rdr1:
connection, dayList = mine_and_combine( row, aboutDim[ makeKey1(row) ] )
for d in dayList:
dayFile[d].write( connection )
    flags = [ 1 if d in dayList else 0 for d in range(6) ]
    combined.write( "%s %s\n" % (connection, flags) )
Something like that.
The points are:
One pass through each data source. No nested loops. O(n) processing.
Keep as little in memory as you need to create a useful result.
| Multiple output files | edit: Initially I was trying to be general but it came out vague. I've included more detail below.
I'm writing a script that pulls in data from two large CSV files, one of people's schedules and the other of information about their schedules. The data is mined and combined to eventually create pajek format graphs for Monday-Sat of people's connections, with a seventh graph representing all connections over the week with a string of 1's and 0's to indicate which days of the week the connections are made. This last graph is a break from the pajek format and is used by a separate program written by another researcher.
Pajek format has a large header, and then lists connections as (vertex1 vertex2) unordered pairs. It's difficult to store these pairs in a dictionary, because there are often multiple connections on the same day between the same pair.
I'm wondering what the best way to output these graphs is. Should I make the large single graph and have a second script deconstruct it into several smaller graphs? Should I keep seven streams open and write to them as I determine each connection, or should I keep some other data structure for each and output them when I can (like a queue)?
| [
"I would open seven file streams as accumulating them might be quite memory extensive if it's a lot of data. Of course that is only an option if you can sort them live and don't first need all data read to do the sorting.\n",
"\"...pulls in data from two large CSV files, one of people's schedules and the other of information about their schedules.\" Vague, but I think I get it.\n\"The data is mined and combined to eventually create pajek format graphs for Monday-Sat of peoples connections,\" Mined and combined. Cool. Where? In this script? In another application? By some 3rd-party module? By some web service?\nIs this row-at-a-time algorithm? Does one row of input produce one connection that gets sent to one or more daily graphs? \nIs this an algorithm that has to see an entire schedule before it can produce anything? [If so, it's probably wrong, but I don't really know and your question is pretty vague on this central detail.]\n\"... a seventh graph representing all connections over the week with a string of 1's and 0's to indicate which days of the week the connections are made.\" Incomplete, but probably good enough.\ndef makeKey2( row2 ):\n return ( row2[1], row2[2] ) # Whatever the lookup key is for source2\n\ndef makeKey1( row1 ):\n return ( row1[3], row1[0] ) # Whatever the lookup key is for source1\n\ndayFile = [ open(\"day%d.pajek\",\"w\") for i in range(6) ]\ncombined = open(\"combined.dat\",\"w\")\nsource1 = open( schedules, \"r\" )\nrdr1= csv.reader( source1 )\nsource2 = open( aboutSchedules, \"r\" )\nrdr2= csv.reader( source2 )\n\n# \"Combine\" usually means a relational join between source 1 and source 2.\n# We'll assume that source2 is a small-ish dimension and the\n# source1 is largish facts\n\naboutDim = dict( (makeKey2(row),row) for row in rdr2 )\n\nfor row in rdr1:\n connection, dayList = mine_and_combine( row, aboutDim[ makeKey1(row) ] )\n for d in dayList:\n dayFile[d].write( connection )\n flags = [ 1 if d is in dayList else 0 for d in range(6) ]\n combined.write( connection, flags )\n\nSomething like that.\nThe points are:\n\nOne pass through each data source. No nested loops. O(n) processing.\nKeep as little in memory as you need to create a useful result.\n\n"
] | [
2,
2
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0000555146_file_io_python.txt |
Q:
Match series of (non-nested) balanced parentheses at end of string
How can I match one or more parenthetical expressions appearing at the end of string?
Input:
'hello (i) (m:foo)'
Desired output:
['i', 'm:foo']
Intended for a python script. Paren marks cannot appear inside of each other (no nesting), and the parenthetical expressions may be separated by whitespace.
It's harder than it might seem at first glance, at least so it seems to me.
A:
paren_pattern = re.compile(r"\(([^()]*)\)(?=(?:\s*\([^()]*\))*\s*$)")
def getParens(s):
return paren_pattern.findall(s)
or even shorter:
getParens = re.compile(r"\(([^()]*)\)(?=(?:\s*\([^()]*\))*\s*$)").findall
explanation:
\( # opening paren
([^()]*) # content, captured into group 1
\) # closing paren
(?= # look ahead for...
(?:\s*\([^()]*\))* # a series of parens, separated by whitespace
\s* # possibly more whitespace after
$ # end of string
) # end of look ahead
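As a quick sanity check, here is a minimal usage sketch of that pattern (the second sample string is made up to show that non-trailing parens are ignored):
import re

paren_pattern = re.compile(r"\(([^()]*)\)(?=(?:\s*\([^()]*\))*\s*$)")

print paren_pattern.findall('hello (i) (m:foo)')  # ['i', 'm:foo']
print paren_pattern.findall('a (b) c (d) (e)')    # ['d', 'e'] -- (b) is not at the end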
A:
You don't need to use regex:
def splitter(input):
return [ s.rstrip(" \t)") for s in input.split("(") ][1:]
print splitter('hello (i) (m:foo)')
Note: this solution only works if your input is already known to be valid. See MizardX's solution that will work on any input.
| Match series of (non-nested) balanced parentheses at end of string | How can I match one or more parenthetical expressions appearing at the end of string?
Input:
'hello (i) (m:foo)'
Desired output:
['i', 'm:foo']
Intended for a python script. Paren marks cannot appear inside of each other (no nesting), and the parenthetical expressions may be separated by whitespace.
It's harder than it might seem at first glance, at least so it seems to me.
| [
"paren_pattern = re.compile(r\"\\(([^()]*)\\)(?=(?:\\s*\\([^()]*\\))*\\s*$)\")\n\ndef getParens(s):\n return paren_pattern.findall(s)\n\nor even shorter:\ngetParens = re.compile(r\"\\(([^()]*)\\)(?=(?:\\s*\\([^()]*\\))*\\s*$)\").findall\n\nexplaination:\n\\( # opening paren\n([^()]*) # content, captured into group 1\n\\) # closing paren\n(?= # look ahead for...\n (?:\\s*\\([^()]*\\))* # a series of parens, separated by whitespace\n \\s* # possibly more whitespace after\n $ # end of string\n) # end of look ahead\n\n",
"You don't need to use regex:\ndef splitter(input):\n return [ s.rstrip(\" \\t)\") for s in input.split(\"(\") ][1:]\nprint splitter('hello (i) (m:foo)')\n\nNote: this solution only works if your input is already known to be valid. See MizardX's solution that will work on any input.\n"
] | [
6,
5
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000555344_python_regex.txt |
Q:
SQLAlchemy/Elixir validation rules?
I just found out how to validate my database input before saving it, but I'm kinda bummed to find there are no premade rules (like validate email, length, etc) that are found in some web based frameworks. Are there any validation libraries laying around anywhere or somewhere that some premade validation lists are hiding that I haven't found yet?
A:
Yes. There are. But keep your validation separate from your data layer. (As all the web frameworks do.)
Now the libraries you can use for validation are the exact form libraries from the web frameworks. Start with:
Formencode
And a lot of others have sprung up recently, but most of them also deal with some degree of form generation. My personal favourite is WTForms.
On an interesting note, Formencode actually came from being the validation library for the (now) lesser-used SQLObject library, so it certainly has some traction and usage in this exact domain.
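As a rough sketch of what stand-alone validation with FormEncode looks like (the schema and field names here are made up for illustration):
from formencode import Schema, validators, Invalid

class UserSchema(Schema):
    # hypothetical fields, just to show the premade rules
    email = validators.Email(not_empty=True)
    name = validators.String(min=1, max=50)

try:
    clean = UserSchema().to_python({'email': 'bob@example.com', 'name': 'Bob'})
except Invalid, e:
    print e.unpack_errors()
You would run this in your form/service layer and only hand validated data to your Elixir entities, which keeps the validation separate from the data layer as suggested above.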
| SQLAlchemy/Elixir validation rules? | I just found out how to validate my database input before saving it, but I'm kinda bummed to find there are no premade rules (like validate email, length, etc) that are found in some web based frameworks. Are there any validation libraries laying around anywhere or somewhere that some premade validation lists are hiding that I haven't found yet?
| [
"Yes. There are. But keep your validation separate from your data layer. (As all the web frameworks do.)\nNow the libraries you can use for validation are the exact form libraries from the web frameworks. Start with:\n\nFormencode\n\nAnd a lot of others have sprung up recently, but most of them also deal with some degree of form generation. My personal favourite is WTForms.\nOn an interesting note, Formencode actually came from being the validation library for the (now) lesser-used SQLObject library, so it certainly has some traction and usage in this exact domain.\n"
] | [
3
] | [] | [] | [
"python",
"python_elixir",
"sqlalchemy",
"validation"
] | stackoverflow_0000555578_python_python_elixir_sqlalchemy_validation.txt |
Q:
Getting a list of all modules in the current package
Here's what I want to do: I want to build a test suite that's organized into packages like tests.ui, tests.text, tests.fileio, etc. In each __init__.py in these packages, I want to make a test suite consisting of all the tests in all the modules in that package. Of course, getting all the tests can be done with unittest.TestLoader, but it seems that I have to add each module individually. So supposing that test.ui has editor_window_test.py and preview_window_test.py, I want the __init__.py to import these two files and get a list of the two module objects. The idea is that I want to automate making the test suites so that I can't forget to include something in the test suite.
What's the best way to do this? It seems like it would be an easy thing to do, but I'm not finding anything.
I'm using Python 2.5 btw.
A:
Solution to exactly this problem from our django project:
"""Test loader for all module tests
"""
import unittest
import re, os, imp, sys
import myapp.tests  # the project package whose test modules get collected
def find_modules(package):
files = [re.sub('\.py$', '', f) for f in os.listdir(os.path.dirname(package.__file__))
if f.endswith(".py")]
return [imp.load_module(file, *imp.find_module(file, package.__path__)) for file in files]
def suite(package=None):
"""Assemble test suite for Django default test loader"""
if not package: package = myapp.tests # Default argument required for Django test runner
return unittest.TestSuite([unittest.TestLoader().loadTestsFromModule(m)
for m in find_modules(package)])
if __name__ == '__main__':
unittest.TextTestRunner().run(suite(myapp.tests))
EDIT: The benefit compared to bialix's solution is that you can place this loader anywhere in the project tree; there's no need to modify __init__.py in every test directory.
A:
Good answers here, but the best thing to do would be to use a 3rd party test discovery and runner like:
Nose (my favourite)
Trial (pretty nice, especially when testing async stuff)
py.test (less good, in my opinion)
They are all compatible with plain unittest.TestCase and you won't have to modify your tests in any way, neither would you have to use the advanced features in any of them. Just use as a suite discovery.
Is there a specific reason you want to reinvent the nasty stuff in these libs?
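For instance, nose needs nothing more than this to discover and run every test module under the current directory (a minimal sketch; nose's default naming conventions are assumed):
# run_tests.py -- nose walks the packages and finds TestCase classes itself
import nose

if __name__ == '__main__':
    nose.main()
No per-package __init__.py bookkeeping is needed, which removes the "forgot to include a module" failure mode entirely.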
A:
You can use os.listdir to find all files in the test.* directory and then filter out .py files:
# Place this code to your __init__.py in test.* directory
import os
modules = []
for name in os.listdir(os.path.dirname(os.path.abspath(__file__))):
    m, ext = os.path.splitext(name)
    if ext == '.py' and m != '__init__':  # skip this __init__.py itself
        modules.append(__import__(m))
__all__ = modules
The magic variable __file__ contains the file path of the current module. Try
print __file__
to check.
| Getting a list of all modules in the current package | Here's what I want to do: I want to build a test suite that's organized into packages like tests.ui, tests.text, tests.fileio, etc. In each __init__.py in these packages, I want to make a test suite consisting of all the tests in all the modules in that package. Of course, getting all the tests can be done with unittest.TestLoader, but it seems that I have to add each module individually. So supposing that test.ui has editor_window_test.py and preview_window_test.py, I want the __init__.py to import these two files and get a list of the two module objects. The idea is that I want to automate making the test suites so that I can't forget to include something in the test suite.
What's the best way to do this? It seems like it would be an easy thing to do, but I'm not finding anything.
I'm using Python 2.5 btw.
| [
"Solution to exactly this problem from our django project:\n\"\"\"Test loader for all module tests\n\"\"\"\nimport unittest\nimport re, os, imp, sys\n\ndef find_modules(package):\n files = [re.sub('\\.py$', '', f) for f in os.listdir(os.path.dirname(package.__file__))\n if f.endswith(\".py\")]\n return [imp.load_module(file, *imp.find_module(file, package.__path__)) for file in files]\n\ndef suite(package=None):\n \"\"\"Assemble test suite for Django default test loader\"\"\"\n if not package: package = myapp.tests # Default argument required for Django test runner\n return unittest.TestSuite([unittest.TestLoader().loadTestsFromModule(m)\n for m in find_modules(package)])\n\nif __name__ == '__main__':\n unittest.TextTestRunner().run(suite(myapp.tests))\n\nEDIT: The benefit compared to bialix's solution is that you can place this loader anytwhere in the project tree, there's no need to modify init.py in every test directory.\n",
"Good answers here, but the best thing to do would be to use a 3rd party test discovery and runner like:\n\nNose (my favourite)\nTrial (pretty nice, especially when testing async stuff)\npy.test (less good, in my opinion)\n\nThey are all compatible with plain unittest.TestCase and you won't have to modify your tests in any way, neither would you have to use the advanced features in any of them. Just use as a suite discovery.\nIs there a specific reason you want to reinvent the nasty stuff in these libs?\n",
"You can use os.listdir to find all files in the test.* directory and then filter out .py files:\n# Place this code to your __init__.py in test.* directory\nimport os\nmodules = []\nfor name in os.listdir(os.path.dirname(os.path.abspath(__file__))):\n m, ext = os.path.splitext()\n if ext == '.py':\n modules.append(__import__(m))\n__all__ = modules\n\nThe magic variable __file__ contains filepath of the current module. Try\nprint __file__\n\nto check.\n"
] | [
2,
2,
1
] | [] | [] | [
"module",
"package",
"python",
"python_2.5",
"unit_testing"
] | stackoverflow_0000555571_module_package_python_python_2.5_unit_testing.txt |
Q:
Python code to find if x is following y on twitter. More Pythonic way please
I wrote a twitter application in Python. Following is the code I used for a module where I find if x is following y. This code can be obviously improved upon. A pythonic way to do that?
import urllib2
import sys
import re
import base64
from urlparse import urlparse
import simplejson
def is_follows(follower, following):
theurl = 'http://twitter.com/friendships/exists.json?user_a='+follower+'&user_b='+following
username = 'uname1'
password = 'pwd1'
handle = urllib2.Request(theurl)
base64string = base64.encodestring('%s:%s' % (username, password))
authheader = "Basic %s" % base64string
handle.add_header("Authorization", authheader)
fol=True
try:
fol = simplejson.load(urllib2.urlopen(handle))
except IOError, e:
# here we shouldn't fail if the username/password is right
print "It looks like the username or password is wrong."
return fol
Update: Indentation fixed.
A:
Three things:
Fix the indentation (but then I guess this was not done on purpose).
Use formatting instead of concatenation in constructing theurl.
Remove the fol variable. Rather, do the following:
try:
return simplejson.load(urllib2.urlopen(handle))
except IOError, e:
# here we shouldn't fail if the username/password is right
print "It looks like the username or password is wrong."
return False
A:
Konrad gave you a good answer with changes you can make to make your code more Pythonic. All I want to add is if you are interested in seeing some advanced code to do this same thing check out The Minimalist Twitter API for Python.
It can show you a Pythonic way to write an API that doesn't repeat itself (in other words follows DRY [don't repeat yourself] principles) by using dynamic class method construction using __getattr__() and __call__(). Your example would be something like:
fol = twitter.friendships.exists(user_a="X", user_b="Y")
even though the twitter class doesn't have "friendships" or "exists" methods/properties.
(Warning: I didn't test the code above so it might not be quite right, but should be pretty close)
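The underlying trick is roughly the following (a simplified sketch for illustration, not the actual library code): __getattr__ accumulates attribute names into a URL path, and __call__ fires the request:
class APICall(object):
    # sketch of dynamic method construction; real code would add auth and urlopen
    def __init__(self, path=()):
        self.path = path
    def __getattr__(self, name):
        # each attribute access extends the URL path: api.friendships.exists
        return APICall(self.path + (name,))
    def __call__(self, **kwargs):
        url = 'http://twitter.com/%s.json' % '/'.join(self.path)
        # ...urlencode kwargs, add the Authorization header, urlopen(url)...
        return url

api = APICall()
print api.friendships.exists(user_a='X', user_b='Y')
# -> http://twitter.com/friendships/exists.json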
A:
From your code it looks like you are trying to do a Basic HTTP Authentication. Is this right? Then you shouldn't create the HTTP headers by hand. Instead use the urllib2.HTTPBasicAuthHandler. An example from the docs:
import urllib2
# Create an OpenerDirector with support for Basic HTTP Authentication...
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm='PDQ Application',
uri='https://mahler:8092/site-updates.py',
user='klem',
passwd='kadidd!ehopper')
opener = urllib2.build_opener(auth_handler)
# ...and install it globally so it can be used with urlopen.
urllib2.install_opener(opener)
urllib2.urlopen('http://www.example.com/login.html')
A:
Google threw this library with a Twitter API for Python at me.
I think such a library would simplify this task a lot.
It has a GetFollowers() method that lets you look up if someone is following you.
Edit: It looks like you can't look up followers of arbitrary users.
| Python code to find if x is following y on twitter. More Pythonic way please | I wrote a twitter application in Python. Following is the code I used for a module where I find if x is following y. This code can be obviously improved upon. A pythonic way to do that?
import urllib2
import sys
import re
import base64
from urlparse import urlparse
import simplejson
def is_follows(follower, following):
theurl = 'http://twitter.com/friendships/exists.json?user_a='+follower+'&user_b='+following
username = 'uname1'
password = 'pwd1'
handle = urllib2.Request(theurl)
base64string = base64.encodestring('%s:%s' % (username, password))
authheader = "Basic %s" % base64string
handle.add_header("Authorization", authheader)
fol=True
try:
fol = simplejson.load(urllib2.urlopen(handle))
except IOError, e:
# here we shouldn't fail if the username/password is right
print "It looks like the username or password is wrong."
return fol
Update: Indentation fixed.
| [
"Three things:\n\nFix the indentation (but then I guess this was not done on purpose).\nUse formatting instead of concatenation in constructing theurl.\nRemove the fol variable. Rather, do the following:\n\n\ntry:\n return simplejson.load(urllib2.urlopen(handle))\nexcept IOError, e:\n # here we shouldn't fail if the username/password is right\n print \"It looks like the username or password is wrong.\"\n return False\n\n",
"Konrad gave you a good answer with changes you can make to make your code more Pythonic. All I want to add is if you are interested in seeing some advanced code to do this same thing check out The Minimalist Twitter API for Python.\nIt can show you a Pythonic way to write an API that doesn't repeat itself (in other words follows DRY [don't repeat yourself] principles) by using dynamic class method construction using __getattr__() and __call__(). Your example would be something like:\nfol = twitter.friendships.exists(user_a=\"X\", user_b=\"Y\")\n\neven though the twitter class doesn't have \"friendships\" or \"exists\" methods/properties.\n(Warning: I didn't test the code above so it might not be quite right, but should be pretty close)\n",
"From your code it looks like you are trying to do a Basic HTTP Authentication. Is this right? Then you shouldn't create the HTTP headers by hand. Instead use the urllib2.HTTPBasicAuthHandler. An example from the docs:\nimport urllib2\n# Create an OpenerDirector with support for Basic HTTP Authentication...\nauth_handler = urllib2.HTTPBasicAuthHandler()\nauth_handler.add_password(realm='PDQ Application',\n uri='https://mahler:8092/site-updates.py',\n user='klem',\n passwd='kadidd!ehopper')\nopener = urllib2.build_opener(auth_handler)\n# ...and install it globally so it can be used with urlopen.\nurllib2.install_opener(opener)\nurllib2.urlopen('http://www.example.com/login.html')\n\n",
"Google threw this library with an Twitter API for Python at me.\nI think such an library would simplify this task alot.\nIt has a GetFollowers() method that lets you look up if someone is following you.\nEdit: It looks like you can't look up followers of arbitrary users.\n"
] | [
2,
2,
2,
0
] | [] | [] | [
"authentication",
"python",
"twitter"
] | stackoverflow_0000556342_authentication_python_twitter.txt |
Q:
WingIDE no autocompletion for my modules
For anyone using WingIDE for their Python development needs:
For some reason I get auto-completion on all the standard python libraries (like datetime....) but not on my own modules.
So if I create my own class and then import it from another class, I get no auto-completion on it.
Does anyone know why this is?
A:
Just found an answer, it has to do with pythonpath:
http://www.wingware.com/doc/edit/how-analysis-works
| WingIDE no autocompletion for my modules | For anyone using WingIDE for their Python development needs:
For some reason I get auto-completion on all the standard python libraries (like datetime....) but not on my own modules.
So if I create my own class and then import it from another class, I get no auto-completion on it.
Does anyone know why this is?
| [
"Just found an answer, it has to do with pythonpath:\nhttp://www.wingware.com/doc/edit/how-analysis-works\n"
] | [
0
] | [] | [] | [
"ide",
"python"
] | stackoverflow_0000556785_ide_python.txt |
Q:
How to use storeHtmlSource in python code (Selenium RC)
I found the storeHtmlSource method description in the Selenium reference, but can't figure out how to use it in the python code I generated by exporting a recording of my actions from the Selenium IDE.
I need to pass the html source code of the current page into a function for processing. How do I do that? Can anyone show example code for calling this method? Can it be called from python at all?
A:
I can't speak for Python, but check out the getHtmlSource method for the Java API of the Selenium interface. It explains what it does pretty clearly.
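For what it's worth, the Python Selenium RC client exposes the same call as get_html_source(); a minimal sketch (the host, port and URL are placeholders, and process() is a hypothetical function):
from selenium import selenium

s = selenium("localhost", 4444, "*firefox", "http://www.example.com/")
s.start()
s.open("/")
html = s.get_html_source()  # full source of the current page as a string
process(html)               # your hypothetical processing function
s.stop()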
| How to use storeHtmlSource in python code (Selenium RC) | I found the storeHtmlSource method description in the Selenium reference, but can't figure out how to use it in the python code I generated by exporting a recording of my actions from the Selenium IDE.
I need to pass the html source code of the current page into a function for processing. How do I do that? Can anyone show example code for calling this method? Can it be called from python at all?
| [
"I can't speak for Python, but check out the getHtmlSource method for the Java API of the Selenium interface. It explains what it does pretty clearly.\n"
] | [
1
] | [] | [] | [
"html",
"python",
"selenium"
] | stackoverflow_0000522229_html_python_selenium.txt |
Q:
Which Version of TurboGears should I use for a new project?
I'm planning a new project and I want to use TurboGears. The problem is: I'm not sure which version to choose. There are three choices:
Turbogears 1.0.8 (stable)
Turbogears 1.1 (beta 3)
Turbogears 2.0 (beta 4)
As this is a new project I don't want to choose the wrong framework. So where are the differences, and how "beta" is 2.0?
Thanks for any help!
A:
I personally would go with TG2 (but would also look at other frameworks such as Pylons or repoze.bfg) esp. if it's a new project. Remember that you might need to upgrade at some point (or want to at least). TG2 also offers full WSGI support, which is gaining more and more traction and is IMHO also something you really want.
For me the most important thing these days is always how good the code of that framework looks, because you never know when you need to dig into it to try to understand some misbehaviour or even fix a bug (regardless of what version).
Another important aspect is IMHO the community, so hanging around in IRC channels or on mailing lists for a bit might help you to make a better decision.
Maybe these 10 reasons to use TG2 also help.
A:
Yes, TG2. TG1.0.x is legacy, and TG1.1 is meant as a stepping stone to TG2 for legacy apps.
I'd look very seriously at Pylons and Django too.
| Which Version of TurboGears should I use for a new project? | I'm planning a new project and I want to use TurboGears. The problem is: I'm not sure which version to choose. There are three choices:
Turbogears 1.0.8 (stable)
Turbogears 1.1 (beta 3)
Turbogears 2.0 (beta 4)
As this is a new project I don't want to choose the wrong framework. So where are the differences, and how "beta" is 2.0?
Thanks for any help!
| [
"I personally would go with TG2 (but would also look at other frameworks such as Pylons or repoze.bfg) esp. if it's a new project. Remember that you might need to upgrade at some point (or want to at least). TG2 also is offers full WSGI support which gains more and more traction and is IMHO also something you really want. \nFor me the most important thing these days is always how good the code of that framework looks like because you never know when you need to dig into it to try to understand some misbehaviour or even fix a bug (regardless of what version).\nAnother important aspect is IMHO the community, so hanging around in IRC channels or on mailing lists for a bit might help you to make better decision. \nMaybe these 10 reasons to use TG2 also help. \n",
"Yes, TG2. TG1.0.x is legacy, and TG1.1 is meant as a stepping stone to TG2 for legacy apps.\nI'd look very seriously at Pylons and Django too.\n"
] | [
5,
1
] | [] | [] | [
"project",
"python",
"turbogears"
] | stackoverflow_0000557101_project_python_turbogears.txt |
Q:
Django - having middleware communicate with views/templates
Alright, this is probably a really silly question but I am new to Python/Django so I can't really wrap my head around its scoping concepts just yet. Right now I am writing a middleware class to handle some stuff, and I want to set 'global' variables that my views and templates can access. What is the "right" way of doing this? I considered doing something like this:
middleware.py
from django.conf import settings
class BeforeFilter(object):
def process_request(self, request):
settings.my_var = 'Hello World'
return None
views.py
from django.conf import settings
from django.http import HttpResponse
def myview(request):
return HttpResponse(settings.my_var)
Although this works, I am not sure if it is the "Django way" or the "Python way" of doing this.
So, my questions are:
1. Is this the right way?
2. If it is the right way, what is the right way of adding variables that can be used in the actual template from the middleware? Say I want to evaluate something and I want to set a variable headername as 'My Site Name' in the middleware, and I want to be able to do {{ headername }} in all templates. Doing it the way I have it now I'd have to add headername to the context inside every view. Is there any way to bypass this? I am thinking something along the lines of CakePHP's $this->set('headername','My Site Name');
3. I am using the middleware class as an equivalent of CakePHP's beforeFilter that runs before every view (or controller in CakePHP) is called. Is this the right way of doing this?
4. Completely unrelated but it is a small question, what is a nice way of printing out the contents of a variable to the browser ala print_r? Say I want to see all the stuff inside the request that is passed into the view? Is pprint the answer?
A:
It's not the best way. You could set my_var on the request rather than on the settings. Settings are global and apply to the whole site. You don't want to modify it for every request. There could be concurrency issues with multiple requests updating/reading the variable at the same time.
To access request.my_var in your templates you could do {{ request.my_var }}. To get access to the request variable in your template you will have to add django.core.context_processors.request to your TEMPLATE_CONTEXT_PROCESSORS setting.
Yes. Other terminology to describe request middleware would be request pre-processor/filter/interceptor.
Also, if you want to use a common Site name for the header in your templates, you might want to check out the Django Sites application which provides a site name variable for your use.
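A minimal sketch of that approach (the attribute name is arbitrary):
# middleware.py -- attach per-request data to the request object itself
class BeforeFilter(object):
    def process_request(self, request):
        request.headername = 'My Site Name'
        return None
With the request context processor enabled and RequestContext in use, every template can then read {{ request.headername }} without each view having to add it to the context.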
A:
Here's what we do. We use a context processor like this...
def context_myApp_settings(request):
"""Insert some additional information into the template context
from the settings.
Specifically, the LOGOUT_URL, MEDIA_URL and BADGES settings.
"""
from django.conf import settings
additions = {
'MEDIA_URL': settings.MEDIA_URL,
'LOGOUT_URL': settings.LOGOUT_URL,
'BADGES': settings.BADGES,
'DJANGO_ROOT': request.META['SCRIPT_NAME'],
}
return additions
Here the setting that activates this.
TEMPLATE_CONTEXT_PROCESSORS = (
"django.core.context_processors.auth",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.request",
"myapp. context_myApp_settings",
)
This provides "global" information in the context of each template that gets rendered. This is the standard Django solution. See http://docs.djangoproject.com/en/dev/ref/templates/api/#ref-templates-api for more information on context processors.
"what is a nice way of printing out the contents of a variable to the browser ala print_r?"
In the view? You can provide a pprint.pformat string to a template to be rendered for debugging purposes.
In the log? You have to use Python's logging module and send stuff to a separate log file. The use of simple print statements to write stuff to the log doesn't work wonderfully consistently for all Django implementations (mod_python, for example, loses all the stdout and stderr stuff.)
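Concretely, that view-side route can look like this (a sketch; the template name and context key are made up):
from pprint import pformat
from django.shortcuts import render_to_response
from django.template import RequestContext

def debug_view(request):
    # dump the request metadata into the page for inspection
    return render_to_response('debug.html',
        {'request_dump': pformat(request.META)},
        context_instance=RequestContext(request))
and the template just wraps it in a <pre>{{ request_dump }}</pre> block.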
A:
1) If you modify 'settings', this is truly going to be global even across requests. In other words, concurrent requests are going to stomp each other if you need each request to have its own value. It's safer to modify the request object itself, which is what some of the common Django middleware does (e.g. django.contrib.auth.middleware.AuthenticationMiddleware adds a reference to 'user' on the request object)
2) (EDIT 2) See #4, getting a common set of variables into each template is probably better suited for a custom context processor
3) I'm not familiar with CakePHP, but adding a process_request middleware is definitely a good Django way of preprocessing each request.
4) Take a look at the doc for template context processors. If you use RequestContext each template will have a context variable called 'request' that you can dump to your template. You can also use the debug context processor and do something like this so it only dumps when settings.DEBUG=True:
{% if debug %}
<!-- {{ request.REQUEST }} -->
{% endif %}
This will work for both GET and POST, but you can modify accordingly if you only need one or the other.
EDIT
Also, I just looked closer at your views.py. Not sure I fully understand what you are trying to do there by just returning the variable in the response. If you really have that use case, you'll probably also want to set the mimetype like this:
return HttpResponse (..., mimetype='text/plain')
That's just to be explicit that you're not returning HTML, XML or some other structured content type.
EDIT 2
Just saw the question was updated with a new subquestion, renumbered answers
| Django - having middleware communicate with views/templates | Alright, this is probably a really silly question but I am new to Python/Django so I can't really wrap my head around its scoping concepts just yet. Right now I am writing a middleware class to handle some stuff, and I want to set 'global' variables that my views and templates can access. What is the "right" way of doing this? I considered doing something like this:
middleware.py
from django.conf import settings
class BeforeFilter(object):
def process_request(self, request):
settings.my_var = 'Hello World'
return None
views.py
from django.conf import settings
from django.http import HttpResponse
def myview(request):
return HttpResponse(settings.my_var)
Although this works, I am not sure if it is the "Django way" or the "Python way" of doing this.
So, my questions are:
1. Is this the right way?
2. If it is the right way, what is the right way of adding variables that can be used in the actual template from the middleware? Say I want to evaluate something and I want to set a variable headername as 'My Site Name' in the middleware, and I want to be able to do {{ headername }} in all templates. Doing it the way I have it now I'd have to add headername to the context inside every view. Is there any way to bypass this? I am thinking something along the lines of CakePHP's $this->set('headername','My Site Name');
3. I am using the middleware class as an equivalent of CakePHP's beforeFilter that runs before every view (or controller in CakePHP) is called. Is this the right way of doing this?
4. Completely unrelated but it is a small question, what is a nice way of printing out the contents of a variable to the browser ala print_r? Say I want to see all the stuff inside the request that is passed into the view? Is pprint the answer?
| [
"\nIt's not the best way. You could set my_var on the request rather than on the settings. Settings are global and apply to the whole site. You don't want to modify it for every request. There could be concurrency issues with multiple request updating/reading the variable at the same time.\nTo access request.my_var in your templates you could do {{ request.my_var }}. To get access to the request variable in your template you will have to add django.core.context_processors.request to your TEMPLATE_CONTEXT_PROCESSORS setting.\nYes. Other terminology to describe request middleware would be request pre-processor/filter/interceptor. \n\nAlso, if you want to use a common Site name for the header in your templates, you might want to check out the Django Sites application which provides a site name variable for your use.\n",
"Here's what we do. We use a context processor like this...\ndef context_myApp_settings(request):\n \"\"\"Insert some additional information into the template context\n from the settings.\n Specifically, the LOGOUT_URL, MEDIA_URL and BADGES settings.\n \"\"\"\n from django.conf import settings\n additions = {\n 'MEDIA_URL': settings.MEDIA_URL,\n 'LOGOUT_URL': settings.LOGOUT_URL,\n 'BADGES': settings.BADGES,\n 'DJANGO_ROOT': request.META['SCRIPT_NAME'],\n }\n return additions\n\nHere the setting that activates this.\nTEMPLATE_CONTEXT_PROCESSORS = (\n \"django.core.context_processors.auth\",\n \"django.core.context_processors.debug\",\n \"django.core.context_processors.i18n\",\n \"django.core.context_processors.media\",\n \"django.core.context_processors.request\",\n \"myapp. context_myApp_settings\",\n )\n\nThis provides \"global\" information in the context of each template that gets rendered. This is the standard Django solution. See http://docs.djangoproject.com/en/dev/ref/templates/api/#ref-templates-api for more information on context processors.\n\n\"what is a nice way of printing out the contents of a variable to the browser ala print_r?\"\nIn the view? You can provide a pprint.pformat string to a template to be rendered for debugging purposes.\nIn the log? You have to use Python's logging module and send stuff to a separate log file. The use of simple print statements to write stuff to the log doesn't work wonderfully consistently for all Django implementations (mod_python, for example, loses all the stdout and stderr stuff.)\n",
"1) If you modify 'settings', this is truly going to be global even across requests. In other words, concurrent requests are going to stomp each other if you need each request to have its own value. It's safer to modify the request object itself, which is what some of the common Django middleware does (e.g. django.contrib.auth.middleware.AuthenticationMiddleware adds a reference to 'user' on the request object)\n2) (EDIT 2) See #4, getting a common set of variables into each template is probably better suited for a custom context processor\n3) I'm not familiar with CakePHP, but adding a process_request middleware is definitely a good Django way of preprocessing each request.\n4) Take a look at the doc for template context processors. If you use RequestContext each template will have a context variable called 'request' that you can dump to your template. You can also use the debug context processor and do something like this so it only dumps when settings.DEBUG=True:\n{% if debug %}\n <!-- {{ request.REQUEST }} -->\n{% endif %}\n\nThis will work for both GET and POST, but you can modify accordingly if you only need one or the other.\nEDIT\nAlso, I just looked closer at your views.py. Not sure I fully understand what you are trying to do there by just returning the variable in the response. If you really have that use case, you'll probably also want to set the mimetype like this:\nreturn HttpResponse (..., mimetype='text/plain')\n\nThat's just to be explicit that you're not returning HTML, XML or some other structured content type.\nEDIT 2 \nJust saw the question was updated with a new subquestion, renumbered answers\n"
] | [
19,
12,
4
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000557460_django_python.txt |
Q:
Python for Autohotkey style key-combination sniffing, automation?
I want to automate several tasks (eg. simulate eclipse style ctrl-shift-R open dialog for other editors). The general pattern is: the user will press some key combination, my program will detect it and potentially pop up a dialog to get user input, and then run a corresponding command, typically by running an executable.
My target environment is windows, although cross-platform would be nice. My program would be started once, read a configuration file, and sit in the background till triggered by a key combination or other event.
Basically autohotkey.
Why not just use autohotkey? I actually have quite a few autohotkey macros, but I'd prefer to use a saner language.
My question is: is there a good way to have a background python process detect key combinations?
Update: found the answer using pyHook and the win32 extensions:
import pyHook
import pythoncom
def OnKeyboardEvent(event):
print event.Ascii
hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
while True:
pythoncom.PumpMessages()
A:
You may want to look at AutoIt. It does everything that AutoHotKey can do, but the language syntax doesn't make you want to pull your hair out. Additionally, it has COM bindings so you can use most of its abilities easily in python if you so desired. I've posted about how to do it here before.
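Driving the AutoItX COM control from Python looks roughly like this (a sketch; it assumes the pywin32 extensions are installed and the AutoItX3 control is registered):
import win32com.client

autoit = win32com.client.Dispatch("AutoItX3.Control")
autoit.Send("Hello from Python{ENTER}")   # simulate keystrokes
autoit.WinActivate("Untitled - Notepad")  # focus a window by its title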
A:
Found the answer using pyHook and the win32 extensions:
import pyHook
import pythoncom
def OnKeyboardEvent(event):
print event.Ascii
hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
while True:
pythoncom.PumpMessages()
| Python for Autohotkey style key-combination sniffing, automation? | I want to automate several tasks (eg. simulate eclipse style ctrl-shift-R open dialog for other editors). The general pattern is: the user will press some key combination, my program will detect it and potentially pop up a dialog to get user input, and then run a corresponding command, typically by running an executable.
My target environment is windows, although cross-platform would be nice. My program would be started once, read a configuration file, and sit in the background till triggered by a key combination or other event.
Basically autohotkey.
Why not just use autohotkey? I actually have quite a few autohotkey macros, but I'd prefer to use a saner language.
My question is: is there a good way to have a background python process detect key combinations?
Update: found the answer using pyHook and the win32 extensions:
import pyHook
import pythoncom
def OnKeyboardEvent(event):
print event.Ascii
hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
while True:
pythoncom.PumpMessages()
| [
"You may want to look at AutoIt. It does everything that AutoHotKey can do, but the language syntax doesn't make you want to pull your hair out. Additonally, it has COM bindings so you can use most of it's abilities easily in python if you so desired. I've posted about how to do it here before.\n",
"Found the answer using pyHook and the win32 extensions:\nimport pyHook\nimport pythoncom\n\ndef OnKeyboardEvent(event):\n print event.Ascii\n\nhm = pyHook.HookManager()\nhm.KeyDown = OnKeyboardEvent\nhm.HookKeyboard()\n\nwhile True:\n pythoncom.PumpMessages()\n\n"
] | [
7,
5
] | [] | [] | [
"autohotkey",
"python"
] | stackoverflow_0000294285_autohotkey_python.txt |
Q:
How to lookup custom ip address field stored as integer in Django-admin?
In my Django model I've created a custom MyIPAddressField which is stored as an integer in the mysql backend. To do that I've implemented the to_python, get_db_prep_value, get_internal_type (returns PositiveIntegerField) and formfield methods (uses the stock IPAddressField as form_class).
The only problem is field lookup in cases like builtin search in ModelAdmin. So, the question is how to implement get_db_prep_lookup to perform string-based lookup types like 'contains', 'regex', 'startswith', 'endswith'?
MySQL has a special function inet_ntoa() but I don't know how to instruct the ORM to use it in admin search queries. For example: SELECT inet_ntoa(ip_address) as ip FROM table WHERE ip LIKE '%search_term%'. Why do custom fields perform type casting on the Python side but not on the database side?
EDIT1: probably there is another way to solve the search problem - don't transform the integer column to a string, but instead split the search argument into subnet/mask and perform some binary math to compare them against the integer IP value.
EDIT2: this is my code so far:
models.py:
from socket import inet_aton, inet_ntoa
from struct import pack, unpack
from django.db import models
from django.forms import IPAddressField

class MyIPField(models.Field):
empty_strings_allowed = False
__metaclass__ = models.SubfieldBase
def get_db_prep_value(self, value):
if value is None: return None
return unpack('!L', inet_aton(value))[0]
def get_internal_type(self):
return "PositiveIntegerField"
def to_python(self, value):
if type(value).__name__ in ('NoneType', 'unicode'): return value
return inet_ntoa(pack('!L', value))
def formfield(self, **kwargs):
defaults = {'form_class': IPAddressField}
defaults.update(kwargs)
return super(MyIPField, self).formfield(**defaults)
class AddressManager(models.Manager):
    def get_query_set(self):
        return super(AddressManager, self).get_query_set().extra(select={'fakeip': "inet_ntoa(ip)"})
class Address(models.Model):
# ... other fields are skipped (Note: there was several foreign keys)
ip = MyIPField(u"IP address", unique=True)
objects = AddressManager()
def __unicode__(self):
return self.ip
admin.py:
class AddressAdmin(admin.ModelAdmin):
list_display = ('address',) # ... some fields are skipped from this example
list_display_links = ('address',)
search_fields = ('fakeip', )
admin.site.register(Address, AddressAdmin)
But when I use admin changelist search box, I get error "Can not resolve keyword 'fakeip' into field. Choices are: ip, id". Is it possible to fool Django and make it think that fakeip is a real field?
Using the standard IPAddressField (string based) is not appropriate for my needs, nor is switching to Postgres, where it would be stored in a proper format.
Also, I've looked at the Django admin internals (options.py and views/main.py), and I see no easy way to customize the ChangeList class or the search mechanics without massive copy/pasting. I thought that the Django admin was more powerful than it is.
A:
You could instruct the ORM to add an extra field to your SQL queries, like so:
IPAddressModel.objects.extra(select={'ip': "inet_ntoa(ip_address)"})
This adds SELECT inet_ntoa(ip_address) as ip to the query and a field ip to your objects. You can use the new synthesized field in your WHERE clause.
Are you sure you don't really want something like WHERE (ip_address & 0xffff0000) = inet_aton('192.168.0.0')? Or do you really want to find all ip addresses in your log that contain the number 119 somewhere?
If suffix is the /24 from CIDR notation:
mask = 0xffffffff ^ 0xffffffff >> suffix
Add to the WHERE clause:
(ip_address & mask) = (inet_aton(prefix) & mask)
| How to lookup custom ip address field stored as integer in Django-admin? | In my Django model I've created a custom MyIPAddressField which is stored as an integer in the mysql backend. To do that I've implemented the to_python, get_db_prep_value, get_internal_type (returns PositiveIntegerField) and formfield methods (uses the stock IPAddressField as form_class).
The only problem is field lookup in cases like builtin search in ModelAdmin. So, the question is how to implement get_db_prep_lookup to perform string-based lookup types like 'contains', 'regex', 'startswith', 'endswith'?
MySQL has a special function inet_ntoa() but I don't know how to instruct the ORM to use it in admin search queries. For example: SELECT inet_ntoa(ip_address) as ip FROM table WHERE ip LIKE '%search_term%'. Why do custom fields perform type casting on the Python side but not on the database side?
EDIT1: probably there is another way to solve the search problem - don't transform the integer column to a string, but instead split the search argument into subnet/mask and perform some binary math to compare them against the integer IP value.
EDIT2: this is my code so far:
models.py:
from socket import inet_aton, inet_ntoa
from struct import pack, unpack
from django.db import models
from django.forms import IPAddressField

class MyIPField(models.Field):
empty_strings_allowed = False
__metaclass__ = models.SubfieldBase
def get_db_prep_value(self, value):
if value is None: return None
return unpack('!L', inet_aton(value))[0]
def get_internal_type(self):
return "PositiveIntegerField"
def to_python(self, value):
if type(value).__name__ in ('NoneType', 'unicode'): return value
return inet_ntoa(pack('!L', value))
def formfield(self, **kwargs):
defaults = {'form_class': IPAddressField}
defaults.update(kwargs)
return super(MyIPField, self).formfield(**defaults)
class AddressManager(models.Manager):
    def get_query_set(self):
        return super(AddressManager, self).get_query_set().extra(select={'fakeip': "inet_ntoa(ip)"})
class Address(models.Model):
# ... other fields are skipped (Note: there was several foreign keys)
ip = MyIPField(u"IP address", unique=True)
objects = AddressManager()
def __unicode__(self):
return self.ip
admin.py:
class AddressAdmin(admin.ModelAdmin):
list_display = ('address',) # ... some fields are skipped from this example
list_display_links = ('address',)
search_fields = ('fakeip', )
admin.site.register(Address, AddressAdmin)
But when I use admin changelist search box, I get error "Can not resolve keyword 'fakeip' into field. Choices are: ip, id". Is it possible to fool Django and make it think that fakeip is a real field?
Using the standard IPAddressField (string based) is not appropriate for my needs, nor is switching to Postgres, where it would be stored in a proper format.
Also, I've looked at the Django admin internals (options.py and views/main.py), and I see no easy way to customize the ChangeList class or the search mechanics without massive copy/pasting. I thought that the Django admin was more powerful than it is.
| [
"You could instruct the ORM to add an extra field to your SQL queries, like so:\nIPAddressModel.objects.extra(select={'ip': \"inet_ntoa(ip_address)\"})\n\nThis adds SELECT inet_ntoa(ip_address) as ip to the query and a field ip to your objects. You can use the new synthesized field in your WHERE clause.\nAre you sure you don't really want something like WHERE (ip_address & 0xffff0000) = inet_aton('192.168.0.0')? Or do you really want to find all ip addresses in your log that contain the number 119 somewhere?\nIf suffix is the /24 from CIDR notation:\nmask = 0xffffffff ^ 0xffffffff >> suffix\n\nAdd to the WHERE clause:\n(ip_address & mask) = (inet_aton(prefix) & mask)\n\n"
] | [
2
] | [] | [] | [
"django",
"mysql",
"python"
] | stackoverflow_0000541115_django_mysql_python.txt |
Q:
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.
So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).
Then I plan to read the data in, and have a set of functions which provide access to and operations on the data.
My question is this:
is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects?
I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.
Alternatively, is there a good book I should read that would point me in the right direction on this?
A:
If the data is a natural fit for database tables ("rectangular data"), why not convert it to sqlite? It's portable -- just one file to move the db around, and sqlite is available anywhere you have python (2.5 and above anyway).
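A minimal sketch of that route with the stdlib sqlite3 module (the table layout is made up to echo the fruit example in the question):
import sqlite3

conn = sqlite3.connect('fruit.db')  # one portable file holds the whole database
conn.execute('CREATE TABLE IF NOT EXISTS fruit (id INTEGER PRIMARY KEY, name TEXT)')
conn.execute('INSERT INTO fruit (name) VALUES (?)', ('apple',))
conn.commit()
for row in conn.execute('SELECT id, name FROM fruit'):
    print row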
A:
Generally you want your Objects to absolutely match your "real world entities".
Since you're starting from a database, it's not always the case that the database has any real-world fidelity, either. Some database designs are simply awful.
If your database has reasonable models for Fruit, that's where you start. Get that right first.
A "collection" may -- or may not -- be an artificial construct that's part of the solution algorithm, not really a proper part of the problem. Usually collections are part of the problem, and you should design those classes, also.
Other times, however, the collection is an artifact of having used a database, and a simple Python list is all you need.
Still other times, the collection is actually a proper mapping from some unique key value to an entity, in which case, it's a Python dictionary.
And sometimes, the collection is a proper mapping from some non-unique key value to some collection of entities, in which case it's a Python collections.defaultdict(list).
Start with the fundamental, real-world-like entities. Those get class definitions.
Collections may use built-in Python collections or may require their own classes.
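For that last mapping case, a collections.defaultdict makes the grouping one line per row (a sketch with made-up rows):
from collections import defaultdict

rows = [('red', 'apple'), ('red', 'cherry'), ('yellow', 'banana')]
by_color = defaultdict(list)
for color, name in rows:
    by_color[color].append(name)  # non-unique key -> list of entities
print dict(by_color)  # {'red': ['apple', 'cherry'], 'yellow': ['banana']} (key order may vary)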
A:
There's no "one size fits all" answer for this -- it'll depend a lot on the data and how it's used in the application. If the data and usage are simple enough you might want to store your fruit in a dict with id as key and the rest of the data as tuples. Or not. It totally depends. If there's a guiding principle out there then it's to extract the underlying requirements of the app and then write code against those requirements.
A:
You could have a Fruit class with id and name instance variables, a function to read/write the information from a file, and maybe a class variable to keep track of the number of fruits (objects) created.
A:
In the simple case namedtuples let you get started:
>>> from collections import namedtuple
>>> Fruit = namedtuple("Fruit", "name weight color")
>>> fruits = [Fruit(*row) for row in cursor.execute('select * from fruits')]
Fruit is equivalent to the following class:
>>> Fruit = namedtuple("Fruit", "name weight color", verbose=True)
class Fruit(tuple):
'Fruit(name, weight, color)'
__slots__ = ()
_fields = ('name', 'weight', 'color')
def __new__(cls, name, weight, color):
return tuple.__new__(cls, (name, weight, color))
@classmethod
def _make(cls, iterable, new=tuple.__new__, len=len):
'Make a new Fruit object from a sequence or iterable'
result = new(cls, iterable)
if len(result) != 3:
raise TypeError('Expected 3 arguments, got %d' % len(result))
return result
def __repr__(self):
return 'Fruit(name=%r, weight=%r, color=%r)' % self
def _asdict(t):
'Return a new dict which maps field names to their values'
return {'name': t[0], 'weight': t[1], 'color': t[2]}
def _replace(self, **kwds):
'Return a new Fruit object replacing specified fields with new values'
result = self._make(map(kwds.pop, ('name', 'weight', 'color'), self))
if kwds:
raise ValueError('Got unexpected field names: %r' % kwds.keys())
return result
def __getnewargs__(self):
return tuple(self)
name = property(itemgetter(0))
weight = property(itemgetter(1))
color = property(itemgetter(2))
A:
Another way would be to use the ZODB to directly store objects persistently. The only thing you have to do is to derive your classes from Persistent and everything from the root object up is then automatically stored in that database as an object. The root object comes from the ZODB connection. There are many backends available and the default is simply a file.
A class could then look like this:
import persistent

class Collection(persistent.Persistent):
    def __init__(self, fruit = []):
        self.fruit = fruit

class Fruit(persistent.Persistent):
    def __init__(self, name):
        self.name = name
Assuming you have the root object you can then do:
fruit = Fruit("apple")
root.collection = Collection([fruit])
and it's stored in the database automatically. You can find it again by simply accessing 'collection' on the root object:
print root.collection.fruit
You can also derive subclasses from e.g. Fruit as usual.
Useful links with more information:
The new ZODB homepage
a ZODB tutorial
That way you still are able to use the full power of Python objects and there is no need to serialize something e.g. via an ORM but you still have an easy way to store your data.
A:
Here are a couple points for you to consider. If your data is large, reading it all into memory may be wasteful. If you need random access and not just sequential access to your data then you'll either have to scan at most the entire file each time or read that table into an indexed memory structure like a dictionary. A list will still require some kind of scan (straight iteration or binary search if sorted). With that said, if you don't require some of the features of a DB then don't use one, but if you just think MySQL is too heavy then +1 on the Sqlite suggestion from earlier. It gives you most of the features you'd want while using a database without the concurrency overhead.
A:
Abstract persistence from the object class. Put all of the persistence logic in an adapter class, and assign the adapter to the object class. Something like:
class Fruit(object):
    @classmethod
    def get(cls, id):
        return cls.adapter.get(id)
    def put(self):
        self.adapter.put(self)
    def __init__(self, id, name, weight, color):
        self.id = id
        self.name = name
        self.weight = weight
        self.color = color

class FruitAdapter(object):
    def get(self, id):
        # retrieve name, weight, color from persistent storage here
        return Fruit(id, name, weight, color)
    def put(self, fruit):
        # insert/update fruit in persistent storage here
        pass
Fruit.adapter = FruitAdapter()
f = Fruit.get(1)
f.name = "lemon"
f.put()
# and so on...
Now you can build different FruitAdapter objects that interoperate with whatever persistence format you settle on (database, flat file, in-memory collection, whatever) and the basic Fruit class will be completely unaffected.
| Converting a database-driven (non-OO) python script into a non-database driven, OO-script | I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all.
So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns).
Then I plan to read the data in, and have a set of functions which provide access to and operations on the data.
My question is this:
is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects?
I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python.
Alternatively, is there a good book I should read that would point me in the right direction on this?
| [
"If the data is a natural fit for database tables (\"rectangular data\"), why not convert it to sqlite? It's portable -- just one file to move the db around, and sqlite is available anywhere you have python (2.5 and above anyway).\n",
"Generally you want your Objects to absolutely match your \"real world entities\".\nSince you're starting from a database, it's not always the case that the database has any real-world fidelity, either. Some database designs are simply awful.\nIf your database has reasonable models for Fruit, that's where you start. Get that right first.\nA \"collection\" may -- or may not -- be an artificial construct that's part of the solution algorithm, not really a proper part of the problem. Usually collections are part of the problem, and you should design those classes, also.\nOther times, however, the collection is an artifact of having used a database, and a simple Python list is all you need.\nStill other times, the collection is actually a proper mapping from some unique key value to an entity, in which case, it's a Python dictionary.\nAnd sometimes, the collection is a proper mapping from some non-unique key value to some collection of entities, in which case it's a Python collections.defaultdict(list).\nStart with the fundamental, real-world-like entities. Those get class definitions.\nCollections may use built-in Python collections or may require their own classes.\n",
"There's no \"one size fits all\" answer for this -- it'll depend a lot on the data and how it's used in the application. If the data and usage are simple enough you might want to store your fruit in a dict with id as key and the rest of the data as tuples. Or not. It totally depends. If there's a guiding principle out there then it's to extract the underlying requirements of the app and then write code against those requirements.\n",
"you could have a fruit class with id and name instance variables. and a function to read/write the information from a file, and maybe a class variable to keep track of the number of fruits (objects) created\n",
"In the simple case namedtuples let get you started:\n>>> from collections import namedtuple\n>>> Fruit = namedtuple(\"Fruit\", \"name weight color\")\n>>> fruits = [Fruit(*row) for row in cursor.execute('select * from fruits')]\n\nFruit is equivalent to the following class:\n>>> Fruit = namedtuple(\"Fruit\", \"name weight color\", verbose=True)\nclass Fruit(tuple):\n 'Fruit(name, weight, color)'\n\n __slots__ = ()\n\n _fields = ('name', 'weight', 'color')\n\n def __new__(cls, name, weight, color):\n return tuple.__new__(cls, (name, weight, color))\n\n @classmethod\n def _make(cls, iterable, new=tuple.__new__, len=len):\n 'Make a new Fruit object from a sequence or iterable'\n result = new(cls, iterable)\n if len(result) != 3:\n raise TypeError('Expected 3 arguments, got %d' % len(result))\n return result\n\n def __repr__(self):\n return 'Fruit(name=%r, weight=%r, color=%r)' % self\n\n def _asdict(t):\n 'Return a new dict which maps field names to their values'\n return {'name': t[0], 'weight': t[1], 'color': t[2]}\n\n def _replace(self, **kwds):\n 'Return a new Fruit object replacing specified fields with new values'\n result = self._make(map(kwds.pop, ('name', 'weight', 'color'), self))\n if kwds:\n raise ValueError('Got unexpected field names: %r' % kwds.keys())\n\n return result\n\n def __getnewargs__(self):\n return tuple(self)\n\n name = property(itemgetter(0))\n weight = property(itemgetter(1))\n color = property(itemgetter(2))\n\n",
"Another way would be to use the ZODB to directly store objects persistently. The only thing you have to do is to derive your classes from Peristent and everything from the root object up is then automatically stored in that database as an object. The root object comes from the ZODB connection. There are many backends available and the default is simple a file.\nA class could then look like this:\nclass Collection(persistent.Persistent):\n\n def __init__(self, fruit = []):\n self.fruit = fruit\n\nclass Fruit(peristent.Persistent):\n\n def __init__(self, name):\n self.name = name\n\nAssuming you have the root object you can then do:\nfruit = Fruit(\"apple\")\nroot.collection = Collection([fruit])\n\nand it's stored in the database automatically. You can find it again by simply looking accessing 'collection' from the root object:\nprint root.collection.fruit\n\nYou can also derive subclasses from e.g. Fruit as usual.\nUseful links with more information:\n\nThe new ZODB homepage \na ZODB tutorial\n\nThat way you still are able to use the full power of Python objects and there is no need to serialize something e.g. via an ORM but you still have an easy way to store your data.\n",
"Here are a couple points for you to consider. If your data is large reading it all into memory may be wasteful. If you need random access and not just sequential access to your data then you'll either have to scan the at most the entire file each time or read that table into an indexed memory structure like a dictionary. A list will still require some kind of scan (straight iteration or binary search if sorted). With that said, if you don't require some of the features of a DB then don't use one but if you just think MySQL is too heavy then +1 on the Sqlite suggestion from earlier. It gives you most of the features you'd want while using a database without the concurrency overhead.\n",
"Abstract persistence from the object class. Put all of the persistence logic in an adapter class, and assign the adapter to the object class. Something like:\nclass Fruit(Object):\n\n @classmethod\n def get(cls, id):\n return cls.adapter.get(id)\n\n def put(self):\n cls.adapter.put(self)\n\n def __init__(self, id, name, weight, color):\n self.id = id\n self.name = name\n self.weight = weight\n self.color = color\n\n\n class FruitAdapter(Object):\n\n def get(id):\n # retrieve attributes from persistent storage here\n return Fruit(id, name, weight, color)\n\n def put(fruit):\n # insert/update fruit in persistent storage here\n\n Fruit.adapter = FruitAdapter()\n f = Fruit.get(1)\n f.name = \"lemon\"\n f.put()\n # and so on...\n\nNow you can build different FruitAdapter objects that interoperate with whatever persistence format you settle on (database, flat file, in-memory collection, whatever) and the basic Fruit class will be completely unaffected.\n"
] | [
5,
2,
1,
1,
1,
1,
1,
1
] | [] | [] | [
"object",
"python"
] | stackoverflow_0000557199_object_python.txt |
Q:
What is wrong with my nested loops in Python?
How do I make nested loops in Python (version 3.0)?
I am trying to get the following loops to show me the products of two numbers:
def PrintProductsBelowNumber(number):
number1 = 1
number2 = 1
while number1 <= number:
while number2 <= number:
print(number1, "*", number2, "=", number1 * number2)
number2 += 1
number1 += 1
PrintProductsBelowNumber(2)
As a result I get:
1 * 1 = 1
1 * 2 = 2
So it seems the outer loop over number1 does not run.
How do I get the loop over number1 to run, and thus obtain:
1 * 1 = 1
1 * 2 = 2
2 * 1 = 2
2 * 2 = 4
A:
number2 only gets initialized once, you need to re-initialize it for each iteration of the inner loop. However, this code is very C-like and not very Pythonic. The better way to do it would be to use the for number in range(n) construct:
def PrintProductsBelowNumber(number):
for number1 in range(1, number+1):
for number2 in range(1, number+1):
print(number1, "*", number2, "=", number1 * number2)
A:
Because you aren't setting number2 back to 1 after the inner loop completes the first time. number1 then increments, but since number2 is still too high the inner loop doesn't run again.
def PrintProductsBelowNumber(number):
number1 = 1
while number1 <= number:
number2 = 1
while number2 <= number:
print(number1, "*", number2, "=", number1 * number2)
number2 += 1
number1 += 1
PrintProductsBelowNumber(2)
EDIT: Adam's solution is much better in general, but this is to show why yours wasn't working the way you thought it should in the first place.
A:
You could modify Adam's solution with a list comprehension:
def PrintProductsBelowNumber(number):
results = [(i, j, i * j) for i in range(1, number + 1)
for j in range(1, number + 1)]
for number1, number2, result in results:
print(number1, "*", number2, "=", result)
or some variation thereof.
| What is wrong with my nested loops in Python? | How do I make nested loops in Python (version 3.0)?
I am trying to get the following loops to show me the products of two numbers:
def PrintProductsBelowNumber(number):
number1 = 1
number2 = 1
while number1 <= number:
while number2 <= number:
print(number1, "*", number2, "=", number1 * number2)
number2 += 1
number1 += 1
PrintProductsBelowNumber(2)
As a result I get:
1 * 1 = 1
1 * 2 = 2
So it seems the outer loop over number1 does not run.
How do I get the loop over number1 to run, and thus obtain:
1 * 1 = 1
1 * 2 = 2
2 * 1 = 2
2 * 2 = 4
| [
"number2 only gets initialized once, you need to re-initialize it for each iteration of the inner loop. However, this code is very C-like and not very Pythonic. The better way to do it would be to use the for number in range(n) construct:\ndef PrintProductsBelowNumber(number):\n for number1 in range(1, number+1):\n for number2 in range(1, number+1):\n print(number1, \"*\", number2, \"=\", number1 * number2)\n\n",
"Because you aren't setting number2 back to 1 after the inner loop completes the first time. number1 then increments, but since number2 is still too high the inner loop doesn't run again. \ndef PrintProductsBelowNumber(number):\n number1 = 1\n while number1 <= number:\n number2 = 1\n while number2 <= number:\n print(number1, \"*\", number2, \"=\", number1 * number2)\n number2 += 1\n number1 += 1\n\nPrintProductsBelowNumber(2)\n\nEDIT: Adam's solution is much better in general, but this is to show why your's wasn't working the way you thought it should in the first place.\n",
"You could modify Adam's solution with a list comprehension:\ndef PrintProductsBelowNumber(number):\n\n results = [(i, j, i * j) for i in range(1, number + 1) \n for j in range(1, number + 1)]\n\n for number1, number2, result in results:\n print(number1, \"*\", number2, \"=\", result)\n\nor some variation thereof.\n"
] | [
14,
8,
0
] | [] | [] | [
"loops",
"nested",
"python"
] | stackoverflow_0000558539_loops_nested_python.txt |
Q:
Make pyunit show output for every assertion
How can I make python's unittest module show output for every assertion, rather than failing at the first one per test case? It would be much easier to debug if I could see the complete pattern of failures rather than just the first one.
In my case the assertions are based on a couple loops over an array containing an object plus some function names and the expected output (see below), so there isn't an obvious way (at least to me) to just separate every assertion into a separate TestCase:
import unittest
import get_nodes
class mytest2(unittest.TestCase):
def testfoo(self):
root = get_nodes.mmnode_plus.factory('mytree.xml')
tests = [
(root, {'skip_traversal': False, 'skip_as_child': True, 'skip_as_parent': False, 'is_leaf': False}),
(root[0], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False, 'is_leaf': False}),
(root[1], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),
(root[1][0], {'skip_traversal': True}),
(root[0][0], {'is_leaf': False, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False}),
(root[0][0][0], {'is_leaf': True, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),
(root[0][4], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),
(root[0][7], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),
]
for (node, states) in tests:
for test_state, exp_result in states.iteritems():
self.assertEqual(node.__getattribute__(test_state)(), exp_result, "unexpected %s for state %s of node %s %s" % (not exp_result, test_state, repr(node), repr(node.__dict__)))
unittest.main()
obj.__getattribute__('hello') returns obj.hello so node.__getattribute__(test_state)() is my way of calling the member function of node whose name is stored in the test_state variable.
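As an aside, the built-in getattr is the more idiomatic spelling of the same dynamic lookup:
result = getattr(node, test_state)()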
A:
import unittest
import get_nodes
class TestSuper(unittest.TestCase):
    def setUp(self):
        self.root = get_nodes.mmnode_plus.factory('mytree.xml')

    def condition(self, aNode, skip_traversal=None, skip_as_child=None,
                  skip_as_parent=None, is_leaf=None):
        # Only assert on the states that were actually supplied;
        # the states are methods on the node, so call them.
        if skip_traversal is not None:
            self.assertEquals(skip_traversal, aNode.skip_traversal())
        if skip_as_child is not None:
            self.assertEquals(skip_as_child, aNode.skip_as_child())
        if skip_as_parent is not None:
            self.assertEquals(skip_as_parent, aNode.skip_as_parent())
        if is_leaf is not None:
            self.assertEquals(is_leaf, aNode.is_leaf())
class TestRoot( TestSuper ):
def testRoot( self ):
self.condition( self.root, **{'skip_traversal': False, 'skip_as_child': True, 'skip_as_parent': False, 'is_leaf': False} )
class TestRoot0( TestSuper ):
def testRoot0( self ):
self.condition( self.root[0], **{'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False, 'is_leaf': False} )
class TestRoot1( TestSuper ):
def testRoot1( self ):
self.condition( self.root[1], **{'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True})
class TestRoot10( TestSuper ):
def testRoot10( self ):
self.condition( self.root[1][0], **{'skip_traversal': True})
class TestRoot00( TestSuper ):
def testRoot00( self ):
self.condition( self.root[0][0], **{'is_leaf': False, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False})
class TestRoot000( TestSuper ):
    def testRoot000( self ):
        self.condition( self.root[0][0][0], **{'is_leaf': True, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True})
class TestRoot04( TestSuper ):
def testRoot04( self ):
self.condition( self.root[0][4], **{'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True})
class TestRoot07( TestSuper ):
def testRoot07( self ):
self.condition( self.root[0][7], **{'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True})
unittest.main()
A:
I was able to do it by making new TestCase classes dynamically using the builtin type() factory:
root = get_nodes.mmnode_plus.factory('somenodes.xml')
tests = [
(root, {'skip_traversal': False, 'skip_as_child': True, 'skip_as_parent': False, 'is_leaf': False}),
(root[0], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False, 'is_leaf': False}),
(root[1], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),
(root[1][0], {'skip_traversal': True}),
(root[0][0], {'is_leaf': False, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False}),
(root[0][0][0], {'is_leaf': True, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),
(root[0][4], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),
(root[0][7], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),
]
i = 0
for (node, states) in tests:
for test_state, exp_result in states.iteritems():
input = node.__getattribute__(test_state)()
        errstr = "expected %s, not %s for state %s of node %s" % (exp_result, input, test_state, repr(node))
locals()['foo' + str(i)] = type('foo' + str(i), (unittest.TestCase,),
{'input': input, 'exp_result': exp_result, 'errstr': errstr, 'testme': lambda self: self.assertEqual(self.input, self.exp_result, self.errstr)})
i += 1
| Make pyunit show output for every assertion | How can I make python's unittest module show output for every assertion, rather than failing at the first one per test case? It would be much easier to debug if I could see the complete pattern of failures rather than just the first one.
In my case the assertions are based on a couple loops over an array containing an object plus some function names and the expected output (see below), so there isn't an obvious way (at least to me) to just separate every assertion into a separate TestCase:
import unittest
import get_nodes
class mytest2(unittest.TestCase):
def testfoo(self):
root = get_nodes.mmnode_plus.factory('mytree.xml')
tests = [
(root, {'skip_traversal': False, 'skip_as_child': True, 'skip_as_parent': False, 'is_leaf': False}),
(root[0], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False, 'is_leaf': False}),
(root[1], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),
(root[1][0], {'skip_traversal': True}),
(root[0][0], {'is_leaf': False, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False}),
(root[0][0][0], {'is_leaf': True, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),
(root[0][4], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),
(root[0][7], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),
]
for (node, states) in tests:
for test_state, exp_result in states.iteritems():
self.assertEqual(node.__getattribute__(test_state)(), exp_result, "unexpected %s for state %s of node %s %s" % (not exp_result, test_state, repr(node), repr(node.__dict__)))
unittest.main()
obj.__getattribute__('hello') returns obj.hello so node.__getattribute__(test_state)() is my way of calling the member function of node whose name is stored in the test_state variable.
| [
"import unittest\nimport get_nodes\n\nclass TestSuper(unittest.TestCase):\n def setUp( self ):\n self.root = get_nodes.mmnode_plus.factory('mytree.xml')\n def condition( self, aNode, skip_traversal, skip_as_child, skip_as_parent, is_leaf ):\n self.assertEquals( skip_traversal, aNode.skip_traversal )\n self.assertEquals( skip_as_child, aNode. skip_as_child)\n self.assertEquals( skip_as_parent, aNode. skip_as_parent)\n self.assertEquals( is_leaf , aNode. is_leaf )\n\nclass TestRoot( TestSuper ):\n def testRoot( self ):\n self.condition( self.root, **{'skip_traversal': False, 'skip_as_child': True, 'skip_as_parent': False, 'is_leaf': False} )\n\nclass TestRoot0( TestSuper ):\n def testRoot0( self ):\n self.condition( self.root[0], **{'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False, 'is_leaf': False} )\n\nclass TestRoot1( TestSuper ):\n def testRoot1( self ):\n self.condition( self.root[1], **{'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True})\n\nclass TestRoot10( TestSuper ):\n def testRoot10( self ):\n self.condition( self.root[1][0], **{'skip_traversal': True})\n\nclass TestRoot00( TestSuper ):\n def testRoot00( self ):\n self.condition( self.root[0][0], **{'is_leaf': False, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False})\n\nclass TestRoot0( TestSuper ):\n def testRoot000( self ):\n self.condition( root[0][0][0], **{'is_leaf': True, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True})\n\nclass TestRoot04( TestSuper ):\n def testRoot04( self ):\n self.condition( self.root[0][4], **{'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True})\n\nclass TestRoot07( TestSuper ):\n def testRoot07( self ):\n self.condition( self.root[0][7], **{'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True})\n\nunittest.main()\n\n",
"I was able to do it by making new TestCase classes dynamically using the builtin type() factory:\nroot = get_nodes.mmnode_plus.factory('somenodes.xml')\n\ntests = [\n (root, {'skip_traversal': False, 'skip_as_child': True, 'skip_as_parent': False, 'is_leaf': False}),\n (root[0], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False, 'is_leaf': False}),\n (root[1], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),\n (root[1][0], {'skip_traversal': True}),\n (root[0][0], {'is_leaf': False, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': False}),\n (root[0][0][0], {'is_leaf': True, 'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),\n (root[0][4], {'skip_traversal': True, 'skip_as_child': True, 'skip_as_parent': True}),\n (root[0][7], {'skip_traversal': False, 'skip_as_child': False, 'skip_as_parent': True}),\n]\n\ni = 0\nfor (node, states) in tests:\n for test_state, exp_result in states.iteritems():\n\n input = node.__getattribute__(test_state)()\n errstr = \"expected %s, not %s for state %s of node %s\" % (input, exp_result, test_state, repr(node))\n\n locals()['foo' + str(i)] = type('foo' + str(i), (unittest.TestCase,),\n {'input': input, 'exp_result': exp_result, 'errstr': errstr, 'testme': lambda self: self.assertEqual(self.input, self.exp_result, self.errstr)})\n i += 1\n\n"
] | [
2,
1
] | [] | [] | [
"python",
"python_unittest",
"unit_testing"
] | stackoverflow_0000557213_python_python_unittest_unit_testing.txt |
Q:
DOM Aware Browser Python GUI Widget
I'm looking for a python browser widget (along the lines of pyQT4's QTextBrowser class or wxpython's HTML module) that has events for interaction with the DOM. For example, if I highlight an h1 node, the widget class should have a method that notifies me something was highlighted and what dom properties that node had (<h1>, contents of the tag, sibling and parent tags, etc). Ideally the widget module/class would give access to the DOM tree object itself so I can traverse it, modify it, and re-render the new tree.
Does something like this exist? I've tried looking but I'm unfortunately not able to find it. Thanks in advance!
A:
It may not be ideal for your purposes, but you might want to take a look at the Python bindings to KHTML that are part of PyKDE. One place to start looking is the KHTMLPart class:
http://api.kde.org/pykde-4.2-api/khtml/KHTMLPart.html
Since the API for this class is based on the signals and slots paradigm used in Qt, you will need to connect various signals to slots in your own code to find out when parts of a document have been changed. There's also a DOM API, so it should also be possible to access DOM nodes for selected parts of the document.
More information can be found here:
http://api.kde.org/pykde-4.2-api/khtml/index.html
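A rough sketch of what the signal wiring might look like with PyQt4-style connections; the signal name here is a guess based on the KParts docs, so verify it against the API reference above:
from PyQt4.QtCore import SIGNAL

# Hypothetical: run a callback once the document has finished loading.
part.connect(part, SIGNAL("completed()"), on_document_loaded)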
A:
I would also love such a thing. I suspect one with Python bindings does not exist, but would be really happy to be wrong about this.
One option I recently looked at (but never tried) is the Webkit browser. It has some bindings for Python, built against different toolkits (I use GTK). However, while there are APIs for the entire JavaScript machine in C++, there are no Python bindings for them, and I don't see any reason why these can't be bound for Python. It's a fairly huge task, I know, but it would be a universally useful project, so maybe worth the investment.
A:
If you don't mind being limited to Windows, you can use the IE browser control. From wxPython, it's in wx.lib.iewin.IEHtmlWindow (there's a demo in the wxPython demo). This gives you full access to the DOM and ability to sink events, e.g.
ie.document.body.innerHTML = u"<p>Hello, world</p>"
| DOM Aware Browser Python GUI Widget | I'm looking for a python browser widget (along the lines of pyQT4's QTextBrowser class or wxpython's HTML module) that has events for interaction with the DOM. For example, if I highlight an h1 node, the widget class should have a method that notifies me something was highlighted and what dom properties that node had (<h1>, contents of the tag, sibling and parent tags, etc). Ideally the widget module/class would give access to the DOM tree object itself so I can traverse it, modify it, and re-render the new tree.
Does something like this exist? I've tried looking but I'm unfortunately not able to find it. Thanks in advance!
| [
"It may not be ideal for your purposes, but you might want to take a look at the Python bindings to KHTML that are part of PyKDE. One place to start looking is the KHTMLPart class:\nhttp://api.kde.org/pykde-4.2-api/khtml/KHTMLPart.html\nSince the API for this class is based on the signals and slots paradigm used in Qt, you will need to connect various signals to slots in your own code to find out when parts of a document have been changed. There's also a DOM API, so it should also be possible to access DOM nodes for selected parts of the document.\nMore information can be found here:\nhttp://api.kde.org/pykde-4.2-api/khtml/index.html\n",
"I would also love such a thing. I suspect one with Python bindings does not exist, but would be really happy to be wrong about this.\nOne option I recently looked at (but never tried) is the Webkit browser. Now this has some bindings for Python, and built against different toolkits (I use GTK). However there are available API for the entire Javascript machine for C++, but no Python bindings and I don't see any reason why these can't be bound for Python. It's a fairly huge task, I know, but it would be a universally useful project, so maybe worth the investment.\n",
"If you don't mind being limited to Windows, you can use the IE browser control. From wxPython, it's in wx.lib.iewin.IEHtmlWindow (there's a demo in the wxPython demo). This gives you full access to the DOM and ability to sink events, e.g.\nie.document.body.innerHTML = u\"<p>Hello, world</p>\"\n\n"
] | [
2,
1,
1
] | [] | [] | [
"browser",
"python",
"widget"
] | stackoverflow_0000531487_browser_python_widget.txt |
Q:
Shared folder sessions in Python
I'm trying to get a list of currently-open sessions in Python via WMI.
What I'm after is the exact information displayed in the Computer Management thingy, when you go to System Tools -> Shared Folders -> Sessions (ie username, computer name, connected time, that sort of thing).
I know (or at least believe) it has something to do with Win32_ConnectionShare...
If it makes a difference, I'm using Tim Golden's wmi module.
Of course, if there's another (non-WMI) way of getting this information then that's welcome too...
A:
Never mind -- I found it:
>>> import wmi
>>> c = wmi.WMI()
>>> for x in c.Win32_ConnectionShare():
print "%s: %s" % (x.Dependent.Username, x.Dependent.ComputerName)
| Shared folder sessions in Python | I'm trying to get a list of currently-open sessions in Python via WMI.
What I'm after is the exact information displayed in the Computer Management thingy, when you go to System Tools -> Shared Folders -> Sessions (ie username, computer name, connected time, that sort of thing).
I know (or at least believe) it has something to do with Win32_ConnectionShare...
If it makes a difference, I'm using Tim Golden's wmi module.
Of course, if there's another (non-WMI) way of getting this information then that's welcome too...
| [
"Never mind -- I found it:\n>>> import wmi\n>>> c = wmi.WMI()\n>>> for x in c.Win32_ConnectionShare():\n print \"%s: %s\" % (x.Dependent.Username, x.Dependent.ComputerName)\n\n"
] | [
1
] | [] | [] | [
"python",
"shared_directory",
"winapi",
"wmi"
] | stackoverflow_0000559662_python_shared_directory_winapi_wmi.txt |
Q:
Django - queries made repeat/inefficient
Alright, I have a Django view, like this:
@render_to('home/main.html')
def login(request):
# also tried Client.objects.select_related().all()
clients = Client.objects.all()
return {'clients':clients}
And I have a template, main.html, like this:
<ul>
{% for client in clients %}
<li>{{ client.full_name }}</li>
<ul>
{% for pet in client.pets.all %}
<li>{{ pet.full_name }}</li>
{% endfor %}
</ul>
{% endfor %}
</ul>
I also print out all the queries in sql_queries at the bottom of my base template. When I run this view, the following queries are made:
SELECT `home_client`.`id`, ... FROM `home_client`;
SELECT `home_pet`.`id`, ... FROM `home_pet` WHERE `home_pet`.`client_id` = 1;
SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 1;
SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 1;
SELECT `home_pet`.`id`, ... FROM `home_pet` WHERE `home_pet`.`client_id` = 2;
SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 2;
My question is, why are all these queries being made? Shouldn't it just be 1 query to retrieve all the clients and a query per client to retrieve all the pets from each client? I have 2 clients in the home_client table, so it should be 3 queries total. Most troubling of all is that queries 3 and 4 are 100% identical. I don't want to "prematurely optimize" or anything but I do want to make sure Django isn't being wildly inefficient. Any help on this would be appreciated. Thanks.
A:
Django uses a cache. The RDBMS uses a cache. Don't prematurely optimize the queries.
You can play with bulk queries in your view function instead of one-at-a-time queries in your template.
import collections

@render_to('home/main.html')
def login(request):
    # Query all clients
    clients = Client.objects.all()
    # Assemble an in-memory table of pets, keyed by client
    pets = collections.defaultdict(list)
    for p in Pet.objects.all():
        pets[p.client].append(p)
    # Create (client, pets) tuples
    clientsPetTuples = [(c, pets[c]) for c in clients]
    return {'clientPets': clientsPetTuples}
However, you don't seem to have any evidence that your template is the slowest part of your application.
Further, this trades off giant memory use against SQL use. Until you have measurements that prove that your template queries are actually slow, you shouldn't be over thinking the SQL.
Don't worry about the SQL until you have evidence.
A:
try using Client.objects.all().select_related()
This will automagically also cache related models in a single database query.
A:
Does Client 1 have 2 Pets and Client 2 have 1 Pet?
If so, that would indicate to me that Pet.full_name or something else you're doing in the Pet display loop is trying to access its related Client's details. Django's ORM doesn't use an identity map, so accessing the Client foreign key from any of your Pet objects would require hitting the database again to retrieve that Client.
P.S. select_related won't have any effect on the data you're using in this scenario as it only follows foreign-key relationships, but the pet-to-client relationship is many-to-one.
Update: if you want to avoid having to change the logic in Pet.full_name or having to perform said logic in the template instead for this case, you could alter the way you get a handle on each Client's Pets in order to prefill the ForeignKey cache for each Pet with its Client:
class Client(models.Model):
# ...
def get_pets(self):
for pet in self.pets.all():
setattr(pet, '_client_cache', self)
yield pet
...where the 'client' part of '_client_cache' is whatever attribute name is used in the Pet class for the ForeignKey to the Pet's Client. This takes advantage of the way Django implements access to ForeignKey-related objects using its SingleRelatedObjectDescriptor class, which looks for this cache attribute before querying the database.
Resulting template usage:
{% for pet in client.get_pets %}
...
{% endfor %}
| Django - queries made repeat/inefficient | Alright, I have a Django view, like this:
@render_to('home/main.html')
def login(request):
# also tried Client.objects.select_related().all()
clients = Client.objects.all()
return {'clients':clients}
And I have a template, main.html, like this:
<ul>
{% for client in clients %}
<li>{{ client.full_name }}</li>
<ul>
{% for pet in client.pets.all %}
<li>{{ pet.full_name }}</li>
{% endfor %}
</ul>
{% endfor %}
</ul>
I also print out all the queries in sql_queries at the bottom of my base template. When I run this view, the following queries are made:
SELECT `home_client`.`id`, ... FROM `home_client`;
SELECT `home_pet`.`id`, ... FROM `home_pet` WHERE `home_pet`.`client_id` = 1;
SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 1;
SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 1;
SELECT `home_pet`.`id`, ... FROM `home_pet` WHERE `home_pet`.`client_id` = 2;
SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 2;
My question is, why are all these queries being made? Shouldn't it just be 1 query to retrieve all the clients and a query per client to retrieve all the pets from each client? I have 2 clients in the home_client table, so it should be 3 queries total. Most troubling of all is that queries 3 and 4 are 100% identical. I don't want to "prematurely optimize" or anything but I do want to make sure Django isn't being wildly inefficient. Any help on this would be appreciated. Thanks.
| [
"Django uses a cache. The RDBMS uses a cache. Don't prematurely optimize the queries. \nYou can play with bulk queries in your view function instead of one-at-a-time queries in your template. \n@render_to('home/main.html')\ndef login(request):\n # Query all clients \n clients = Client.objects.all()\n # Assemble an in-memory table of pets\n pets = collections.defaultdict(list)\n for p in Pet.objects.all():\n pets[pet.client].append(p)\n # Create clients and pets tuples\n clientsPetTuples = [ (c,pets[c]) for c in clients ]\n return {'clientPets': clientsPetTuples}\n\nHowever, you don't seem to have any evidence that your template is the slowest part of your application.\nFurther, this trades off giant memory use against SQL use. Until you have measurements that prove that your template queries are actually slow, you shouldn't be over thinking the SQL.\nDon't worry about the SQL until you have evidence.\n",
"try using Client.objects.all().select_related()\nThis will automagically also cache related models in a single database query.\n",
"Does Client 1 have 2 Pets and Client 2 have 1 Pet?\nIf so, that would indicate to me that Pet.full_name or something else you're doing in the Pet display loop is trying to access its related Client's details. Django's ORM doesn't use an identity map, so accessing the Client foreign key from any of your Pet objects would require hitting the database again to retrieve that Client.\nP.S. select_related won't have any effect on the data you're using in this scenario as it only follows foreign-key relationships, but the pet-to-client relationship is many-to-one.\nUpdate: if you want to avoid having to change the logic in Pet.full_name or having to perform said logic in the template instead for this case, you could alter the way you get a handle on each Client's Pets in order to prefill the ForeignKey cache with for each Pet with its Client:\nclass Client(models.Model):\n # ...\n def get_pets(self):\n for pet in self.pets.all():\n setattr(pet, '_client_cache', self)\n yield pet\n\n...where the 'client' part of '_client_cache' is whatever attribute name is used in the Pet class for the ForeignKey to the Pet's Client. This takes advantage of the way Django implements access to ForeignKey-related objects using its SingleRelatedObjectDescriptor class, which looks for this cache attribute before querying the database.\nResulting template usage:\n{% for pet in client.get_pets %}\n...\n{% endfor %}\n\n"
] | [
7,
4,
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000559701_django_python.txt |
Q:
Debugging web apps
I've gotten pretty used to step-through debuggers over the years, both in builder, and using the pydev debugger in Eclipse.
Currently, I'm making something in Python and running it on Google App Engine, and I should add that I'm pretty new to developing any real web app; I've never really done much beyond editing HTML code.
So, I'm running Google's dev_appserver and viewing my work at http://localhost, dig, and right now, the only tool I'm using to identify issues is PMD (poor man's debugger). . .basically writing things to the html pages to see the value of local variables.
Is there a better technique for dealing with this?
A:
The dev_appserver is just a python script, you can simply use the pydev debugger on that script with the proper arguments as far as I know.
Here is a very detailed guide on how to do that:
http://www.ibm.com/developerworks/opensource/library/os-eclipse-mashup-google-pt1/index.html
A:
I would suggest using logging statements instead of prints, though, as you can control them better. Python has a quite good logging library included.
For logging from Google App Engine to e.g. Firebug there is also some handy tool called FirePython. This allows to log to the firebug console from within your Django or WSGI app (it's middleware).
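A minimal sketch of the logging library mentioned above (the basicConfig call is an assumption for plain scripts; on App Engine the root logger is already wired to the console, and my_variable stands in for whatever you were printing):
import logging

logging.basicConfig(level=logging.DEBUG)  # not needed on App Engine
logging.debug("some local state: %r", my_variable)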
A:
"Is there a better technique for dealing with this?" Not really.
"step-through debuggers" are their own problem. They're a kind of mental crutch that make it easy to get something that looks like it works.
First, look at http://code.google.com/appengine/docs/python/tools/devserver.html#The_Development_Console for something that might be helpful.
Second, note that --debug Prints verbose debugging messages to the console while running.
Finally, note that you'll need plenty of Python experience and Google AppEngine experience to write things like web applications. To get that experience, the print statement is really quite good. It shows you what's going on, and it encourages you to really understand what you expect or intend to happen.
Debuggers are passive. It devolves to writing random code, seeing what happens, making changes until it works. I've watched people do this.
Print statement is active. You have to plan what should happen, write code, and consider the results carefully to see if the plans worked out. If it doesn't do what you intended, you have to hypothesize and test your hypothesis. If it works, then you "understood" what was going on. Once you get the semantics of Python and the Google AppEngine, your understanding grows and this becomes really easy.
A:
My debugging toolbox for GAE:
standard Python logging as a substitute for print statements
Werkzeug debugger if I do not want to go to the console log on each error (not everything works, most notably interactive interpreter session)
interactive console at http://localhost:8080/_ah/admin/interactive (not as good as Django's python manage.py shell but still...)
Symbolic debuggers are not so valued as elsewhere, possibly because of Python's superior introspection and reflection mechanisms.
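For reference, wrapping a WSGI app in the Werkzeug debugger mentioned in the list above is roughly this (parameter names from memory, so check the werkzeug docs for your version):
from werkzeug.debug import DebuggedApplication

application = DebuggedApplication(application, evalex=True)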
| Debugging web apps | I've gotten pretty used to step-through debuggers over the years, both in builder, and using the pydev debugger in Eclipse.
Currently, I'm making something in Python and running it on Google App Engine, and I should add that I'm pretty new to developing any real web app; I've never really done much beyond editing HTML code.
So, I'm running Google's dev_appserver and viewing my work at http://localhost, dig, and right now, the only tool I'm using to identify issues is PMD (poor man's debugger). . .basically writing things to the html pages to see the value of local variables.
Is there a better technique for dealing with this?
| [
"The dev_appserver is just a python script, you can simply use the pydev debugger on that script with the proper arguments as far as I know.\nHere is a very detailed guide on how to do that:\nhttp://www.ibm.com/developerworks/opensource/library/os-eclipse-mashup-google-pt1/index.html\n",
"I would suggest to use logging statements instead of prints though as you can control them better. Python has a quite good logging library included.\nFor logging from Google App Engine to e.g. Firebug there is also some handy tool called FirePython. This allows to log to the firebug console from within your Django or WSGI app (it's middleware).\n",
"\"Is there a better technique for dealing with this?\" Not really.\n\"step-through debuggers\" are their own problem. They're a kind of mental crutch that make it easy to get something that looks like it works.\nFirst, look at http://code.google.com/appengine/docs/python/tools/devserver.html#The_Development_Console for something that might be helpful.\nSecond, note that --debug Prints verbose debugging messages to the console while running.\nFinally, note that you'll need plenty of Python experience and Google AppEngine experience to write things like web applications. To get that experience, the print statement is really quite good. It shows you what's going on, and it encourages you to really understand what you expect or intend to happen. \nDebuggers are passive. It devolves to writing random code, seeing what happens, making changes until it works. I've watched people do this.\nPrint statement is active. You have to plan what should happen, write code, and consider the results carefully to see if the plans worked out. If it doesn't do what you intended, you have to hypothesize and test your hypothesis. If it works, then you \"understood\" what was going on. Once you get the semantics of Python and the Google AppEngine, your understanding grows and this becomes really easy.\n",
"My debugging toolbox for GAE:\n\nstandard Python logging as a substitute for print statements\nWerkzeug debugger if I do not want to go to the console log on each error (not everything works, most notably interactive interpreter session)\ninteractive console at http://localhost:8080/_ah/admin/interactive (not as good as Django's python manage.py shell but still...)\n\nSymbolic debuggers are not so valued as elsewhere, possibly because Python's superior introspection and reflection mechanisms.\n"
] | [
7,
4,
2,
2
] | [] | [] | [
"debugging",
"eclipse",
"google_app_engine",
"python"
] | stackoverflow_0000557927_debugging_eclipse_google_app_engine_python.txt |
Q:
Issue with PyAMF, Django, and Python's "property" feature
So far, I've had great success using PyAMF to communicate between my Flex front-end and my Django back-end. However, I believe I've encountered a bug. The following example (emphasis on the word "example") demonstrates the (potential) bug:
My Flex app contains the following VO:
package myproject.model.vo
{
[Bindable]
[RemoteClass(alias="myproject.models.Book")]
public class BookVO
{
public var id:int;
public var title:String;
public var numberOfOddPages:int;
}
}
My Django app contains the following model:
class Book(models.Models):
title = models.CharField(max_length=20)
def _get_number_of_odd_pages(self):
#some code that calculates odd pages
return odd_page_total
numberOfOddPages = property(_get_number_of_odd_pages)
When I attempt to retrieve the book objects to display in a DataGrid, the books display in the grid as expected. However, "numberOfOddPages" is always set to 0. I have even attempted to explicitly set this attribute with a default value (i.e., "numberOfOddPages=100") to see if my "_get_number_of_odd_pages()" method had an error in it. Unfortunately, it yields the same result: the value in the VO remains at 0.
Does anyone have any insight into what I may be doing wrong?
A:
I just received the following response from PyAMF's lead developer. It's definitely a bug:
This is a bug in the way the Django
adapter handles non models.fields.*
properties.
If I do:
import pyamf
class Book(object):
    def _get_number_of_odd_pages(self):
        return 52

    numberOfOddPages = property(_get_number_of_odd_pages)
pyamf.register_class(Book, 'Book')
encoded = pyamf.encode(Book()).getvalue()
print pyamf.decode(encoded).next().numberOfOddPages
Then i get the correct values of 52.
I have created a ticket for this
and will look into getting a patch a
little later.
Cheers,
Nick
UPDATE: Nick has fixed this bug and it will be released in PyAMF 0.4.1 (which should be released this weekend).
| Issue with PyAMF, Django, and Python's "property" feature | So far, I've had great success using PyAMF to communicate between my Flex front-end and my Django back-end. However, I believe I've encountered a bug. The following example (emphasis on the word "example") demonstrates the (potential) bug:
My Flex app contains the following VO:
package myproject.model.vo
{
[Bindable]
[RemoteClass(alias="myproject.models.Book")]
public class BookVO
{
public var id:int;
public var title:String;
public var numberOfOddPages:int;
}
}
My Django app contains the following model:
class Book(models.Models):
title = models.CharField(max_length=20)
def _get_number_of_odd_pages(self):
#some code that calculates odd pages
return odd_page_total
numberOfOddPages = property(_get_number_of_odd_pages)
When I attempt to retrieve the book objects to display in a DataGrid, the books display in the grid as expected. However, "numberOfOddPages" is always set to 0. I have even attempted to explicitly set this attribute with a default value (i.e., "numberOfOddPages=100") to see if my "_get_number_of_odd_pages()" method had an error in it. Unfortunately, it yields the same result: the value in the VO remains at 0.
Does anyone have any insight into what I may be doing wrong?
| [
"I just received the following response from PyAMF's lead developer. It's definitely a bug:\n\nThis is a bug in the way the Django\n adapter handles non models.fields.*\n properties.\nIf I do:\n\nimport pyamf\n\nclass Book(object): \ndef _get_number_of_odd_pages(self):\n return 52\n\nnumberOfOddPages = property(_get_number_of_odd_pages)\n\npyamf.register_class(Book, 'Book')\n\nencoded = pyamf.encode(Book()).getvalue() \nprint pyamf.decode(encoded).next().numberOfOddPages\n\n\nThen i get the correct values of 52.\nI have created a ticket for this \n and will look into getting a patch a\n little later.\nCheers,\nNick\n\nUPDATE: Nick has fixed this bug and it will be released in PyAMF 0.4.1 (which should be released this weekend).\n"
] | [
1
] | [
"Not at all.\nDo you think the error is from Django or from Flex? You could first of all trace the AMF Object in Flex. If the value there is allready 0 then have a good look what PyAMF does.\n"
] | [
-1
] | [
"apache_flex",
"django",
"flex3",
"pyamf",
"python"
] | stackoverflow_0000558926_apache_flex_django_flex3_pyamf_python.txt |
Q:
How do i extract my required data from HTML file?
This is the HTML I have:
p_tags = '''<p class="foo-body">
<font class="test-proof">Full name</font> Foobar<br />
<font class="test-proof">Born</font> July 7, 1923, foo, bar<br />
<font class="test-proof">Current age</font> 27 years 226 days<br />
<font class="test-proof">Major teams</font> <span style="white-space: nowrap">Japan,</span> <span style="white-space: nowrap">Jakarta,</span> <span style="white-space: nowrap">bazz,</span> <span style="white-space: nowrap">foo,</span> <span style="white-space: nowrap">foobazz</span><br />
<font class="test-proof">Also</font> bar<br />
<font class="test-proof">foo style</font> hand <br />
<font class="test-proof">bar style</font> ball<br />
<font class="test-proof">foo position</font> bak<br />
<br class="bar" />
</p>'''
This is my Python code, using Beautiful Soup:
def get_info(p_tags):
"""Returns brief information."""
head_list = []
detail_list = []
# This works fine
for head in p_tags.findAll('font', 'test-proof'):
head_list.append(head.contents[0])
# Some problem with this?
for index in xrange(2, 30, 4):
detail_list.append(p_tags.contents[index])
return dict([(l, detail_list[head_list.index(l)]) for l in head_list])
I get the proper head_list from the HTML but the detail_list is not working.
head_list = [u'Full name',
u'Born',
u'Current age',
u'Major teams',
u'Also',
u'foo style',
u'bar style',
u'foo position']
I wanted something like this
{
'Full name': 'Foobar',
'Born': 'July 7, 1923, foo, bar',
'Current age': '78 years 226 days',
'Major teams': 'Japan, Jakarta, bazz, foo, foobazz',
'Also': 'bar',
'foo style': 'hand',
'bar style': 'ball',
'foo position': 'bak'
}
Any help would be appreciable. Thanks in advance.
A:
The issue is that your HTML is not very well thought out -- you have a "mixed content model" where your labels and your data are interleaved. Your labels are wrapped in <font> Tags, but your data is in NavigableString nodes.
You need to iterate over the contents of p_tag. There will be two kinds of nodes: Tag nodes (which have your <font> tags) and NavigableString nodes which have the other bits of text.
from BeautifulSoup import Tag, NavigableString

label_value_pairs = []
for n in p_tag.contents:
    if isinstance(n, Tag) and n.name == "font":
        label = n.string
    elif isinstance(n, NavigableString) and n.strip():
        value = n.strip()
        label_value_pairs.append((label, value))
    else:
        # Generally <br/> tags and bare whitespace
        pass
print dict(label_value_pairs)
Something approximately like that.
A:
I started answering this before I realised you were using 'beautiful soup' but here's a parser that I think works with your example string written using the HTMLParser library
from HTMLParser import HTMLParser
results = {}
class myParse(HTMLParser):
def __init__(self):
self.state = ""
HTMLParser.__init__(self)
def handle_starttag(self, tag, attrs):
attrs = dict(attrs)
if tag == "font" and attrs.has_key("class") and attrs['class'] == "test-proof":
self.state = "getKey"
def handle_endtag(self, tag):
if self.state == "getKey" and tag == "font":
self.state = "getValue"
def handle_data(self, data):
data = data.strip()
if not data:
return
if self.state == "getKey":
self.resultsKey = data
elif self.state == "getValue":
if results.has_key(self.resultsKey):
results[self.resultsKey] += " " + data
else:
results[self.resultsKey] = data
if __name__ == "__main__":
p_tags = """<p class="foo-body"> <font class="test-proof">Full name</font> Foobar<br /> <font class="test-proof">Born</font> July 7, 1923, foo, bar<br /> <font class="test-proof">Current age</font> 27 years 226 days<br /> <font class="test-proof">Major teams</font> <span style="white-space: nowrap">Japan,</span> <span style="white-space: nowrap">Jakarta,</span> <span style="white-space: nowrap">bazz,</span> <span style="white-space: nowrap">foo,</span> <span style="white-space: nowrap">foobazz</span><br /> <font class="test-proof">Also</font> bar<br /> <font class="test-proof">foo style</font> hand <br /> <font class="test-proof">bar style</font> ball<br /> <font class="test-proof">foo position</font> bak<br /> <br class="bar" /></p>"""
parser = myParse()
parser.feed(p_tags)
print results
Gives the result:
{'foo position': 'bak',
'Major teams': 'Japan, Jakarta, bazz, foo, foobazz',
'Also': 'bar',
'Current age': '27 years 226 days',
'Born': 'July 7, 1923, foo, bar' ,
'foo style': 'hand',
'bar style': 'ball',
'Full name': 'Foobar'}
A:
Sorry for the unnecessarily complex code, I badly need a big dose of caffeine ;)
import re
str = """<p class="foo-body">
<font class="test-proof">Full name</font> Foobar<br />
<font class="test-proof">Born</font> July 7, 1923, foo, bar<br />
<font class="test-proof">Current age</font> 27 years 226 days<br />
<font class="test-proof">Major teams</font> <span style="white-space: nowrap">Japan,</span> <span style="white-space: nowrap">Jakarta,</span> <span style="white-space: nowrap">bazz,</span> <span style="white-space: nowrap">foo,</span> <span style="white-space: nowrap">foobazz</span><br />
<font class="test-proof">Also</font> bar<br />
<font class="test-proof">foo style</font> hand <br />
<font class="test-proof">bar style</font> ball<br />
<font class="test-proof">foo position</font> bak<br />
<br class="bar" />
</p>"""
R_EXTRACT_DATA = re.compile("<font\s[^>]*>[\s]*(.*?)[\s]*</font>[\s]*(.*?)[\s]*<br />", re.IGNORECASE)
R_STRIP_TAGS = re.compile("<span\s[^>]*>|</span>", re.IGNORECASE)
def strip_tags(str):
"""Strip un-necessary <span> tags
"""
return R_STRIP_TAGS.sub("", str)
def get_info(str):
"""Extract useful info from the given string
"""
data = R_EXTRACT_DATA.findall(str)
data_dict = {}
for x in [(x[0], strip_tags(x[1])) for x in data]:
data_dict[x[0]] = x[1]
return data_dict
print get_info(str)
A:
You want to find the strings preceded by > and followed by <, ignoring trailing or leading whitespace. You can do this quite easily with a loop looking at each character in the string, or regular expressions could help. Something like >[ \t]*[^<]+[ \t]*<.
You could also use re.split and a regex representing the tag contents, something like <[^>]*> as the splitter, you will get some empty entries in the array, but these are easily deleted.
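A quick sketch of the first idea using re.findall (this just yields the raw text tokens; pairing them into labels and values is left to the caller, and the <span>-wrapped team names come back as several separate tokens):
import re

# Pull out every stretch of text that sits between a '>' and a '<'.
tokens = [t.strip() for t in re.findall(r">([^<>]+)<", p_tags) if t.strip()]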
| How do i extract my required data from HTML file? | This is the HTML I have:
p_tags = '''<p class="foo-body">
<font class="test-proof">Full name</font> Foobar<br />
<font class="test-proof">Born</font> July 7, 1923, foo, bar<br />
<font class="test-proof">Current age</font> 27 years 226 days<br />
<font class="test-proof">Major teams</font> <span style="white-space: nowrap">Japan,</span> <span style="white-space: nowrap">Jakarta,</span> <span style="white-space: nowrap">bazz,</span> <span style="white-space: nowrap">foo,</span> <span style="white-space: nowrap">foobazz</span><br />
<font class="test-proof">Also</font> bar<br />
<font class="test-proof">foo style</font> hand <br />
<font class="test-proof">bar style</font> ball<br />
<font class="test-proof">foo position</font> bak<br />
<br class="bar" />
</p>'''
This is my Python code, using Beautiful Soup:
def get_info(p_tags):
"""Returns brief information."""
head_list = []
detail_list = []
# This works fine
for head in p_tags.findAll('font', 'test-proof'):
head_list.append(head.contents[0])
# Some problem with this?
for index in xrange(2, 30, 4):
detail_list.append(p_tags.contents[index])
return dict([(l, detail_list[head_list.index(l)]) for l in head_list])
I get the proper head_list from the HTML but the detail_list is not working.
head_list = [u'Full name',
u'Born',
u'Current age',
u'Major teams',
u'Also',
u'foo style',
u'bar style',
u'foo position']
I wanted something like this
{
'Full name': 'Foobar',
'Born': 'July 7, 1923, foo, bar',
'Current age': '78 years 226 days',
'Major teams': 'Japan, Jakarta, bazz, foo, foobazz',
'Also': 'bar',
'foo style': 'hand',
'bar style': 'ball',
'foo position': 'bak'
}
Any help would be appreciable. Thanks in advance.
| [
"The issue is that your HTML is not very well thought out -- you have a \"mixed content model\" where your labels and your data are interleaved. Your labels are wrapped in <font> Tags, but your data is in NavigableString nodes.\nYou need to iterate over the contents of p_tag. There will be two kinds of nodes: Tag nodes (which have your <font> tags) and NavigableString nodes which have the other bits of text.\nfrom beautifulsoup import *\nlabel_value_pairs = []\nfor n in p_tag.contents:\n if isinstance(n,Tag) and tag == \"font\"\n label= n.string\n elif isinstance(n, NavigableString):\n value= n.string\n label_value_pairs.append( label, value )\n else:\n # Generally tag == \"br\"\n pass\nprint dict( label_value_pairs )\n\nSomething approximately like that.\n",
"I started answering this before I realised you were using 'beautiful soup' but here's a parser that I think works with your example string written using the HTMLParser library\nfrom HTMLParser import HTMLParser\n\nresults = {}\nclass myParse(HTMLParser):\n\n def __init__(self):\n self.state = \"\"\n HTMLParser.__init__(self)\n\n def handle_starttag(self, tag, attrs):\n attrs = dict(attrs)\n if tag == \"font\" and attrs.has_key(\"class\") and attrs['class'] == \"test-proof\":\n self.state = \"getKey\"\n\n def handle_endtag(self, tag):\n if self.state == \"getKey\" and tag == \"font\":\n self.state = \"getValue\"\n\n def handle_data(self, data):\n data = data.strip()\n if not data:\n return\n if self.state == \"getKey\":\n self.resultsKey = data\n elif self.state == \"getValue\":\n if results.has_key(self.resultsKey):\n results[self.resultsKey] += \" \" + data \n else: \n results[self.resultsKey] = data\n\n\nif __name__ == \"__main__\":\n p_tags = \"\"\"<p class=\"foo-body\"> <font class=\"test-proof\">Full name</font> Foobar<br /> <font class=\"test-proof\">Born</font> July 7, 1923, foo, bar<br /> <font class=\"test-proof\">Current age</font> 27 years 226 days<br /> <font class=\"test-proof\">Major teams</font> <span style=\"white-space: nowrap\">Japan,</span> <span style=\"white-space: nowrap\">Jakarta,</span> <span style=\"white-space: nowrap\">bazz,</span> <span style=\"white-space: nowrap\">foo,</span> <span style=\"white-space: nowrap\">foobazz</span><br /> <font class=\"test-proof\">Also</font> bar<br /> <font class=\"test-proof\">foo style</font> hand <br /> <font class=\"test-proof\">bar style</font> ball<br /> <font class=\"test-proof\">foo position</font> bak<br /> <br class=\"bar\" /></p>\"\"\"\n parser = myParse()\n parser.feed(p_tags)\n print results\n\nGives the result:\n{'foo position': 'bak', \n'Major teams': 'Japan, Jakarta, bazz, foo, foobazz', \n'Also': 'bar', \n'Current age': '27 years 226 days', \n'Born': 'July 7, 1923, foo, bar' , \n'foo style': 'hand', \n'bar style': 'ball', \n'Full name': 'Foobar'}\n\n",
"Sorry for the unnecessarily complex code, I badly need a big dose of caffeine ;)\nimport re\n\nstr = \"\"\"<p class=\"foo-body\">\n <font class=\"test-proof\">Full name</font> Foobar<br />\n <font class=\"test-proof\">Born</font> July 7, 1923, foo, bar<br />\n <font class=\"test-proof\">Current age</font> 27 years 226 days<br />\n <font class=\"test-proof\">Major teams</font> <span style=\"white-space: nowrap\">Japan,</span> <span style=\"white-space: nowrap\">Jakarta,</span> <span style=\"white-space: nowrap\">bazz,</span> <span style=\"white-space: nowrap\">foo,</span> <span style=\"white-space: nowrap\">foobazz</span><br />\n <font class=\"test-proof\">Also</font> bar<br />\n <font class=\"test-proof\">foo style</font> hand <br />\n <font class=\"test-proof\">bar style</font> ball<br />\n <font class=\"test-proof\">foo position</font> bak<br />\n <br class=\"bar\" />\n</p>\"\"\"\n\nR_EXTRACT_DATA = re.compile(\"<font\\s[^>]*>[\\s]*(.*?)[\\s]*</font>[\\s]*(.*?)[\\s]*<br />\", re.IGNORECASE)\nR_STRIP_TAGS = re.compile(\"<span\\s[^>]*>|</span>\", re.IGNORECASE)\n\ndef strip_tags(str):\n \"\"\"Strip un-necessary <span> tags\n \"\"\"\n return R_STRIP_TAGS.sub(\"\", str)\n\ndef get_info(str):\n \"\"\"Extract useful info from the given string\n \"\"\"\n data = R_EXTRACT_DATA.findall(str)\n data_dict = {}\n\n for x in [(x[0], strip_tags(x[1])) for x in data]:\n data_dict[x[0]] = x[1]\n\n return data_dict\n\nprint get_info(str)\n\n",
"You want to find the strings preceded by > and followed by <, ignoring trailing or leading whitespace. You can do this quite easily with a loop looking at each character in the string, or regular expressions could help. Something like >[ \\t]*[^<]+[ \\t]*<.\nYou could also use re.split and a regex representing the tag contents, something like <[^>]*> as the splitter, you will get some empty entries in the array, but these are easily deleted.\n"
] | [
4,
4,
2,
0
] | [] | [] | [
"beautifulsoup",
"python",
"screen_scraping"
] | stackoverflow_0000560936_beautifulsoup_python_screen_scraping.txt |
Q:
Want procmail to run a custom python script, everytime a new mail shows up
I have a pretty usual requirement with procmail but I am unable to get the results somehow. I have procmailrc file with this content:
:0
* ^To.*@myhost
| /usr/bin/python /work/scripts/privilege_emails_forward.py
Wherein my custom python script (privilege_emails_forward.py) will scan through the email just received and do some operations on the mail content. But I am unable to get the script to execute in the first place (let alone scan through the mail content).
Is this a correct way of invoking an external program(python) as soon as new mail arrives?
And how will my python program (privilege_emails_forward.py) receive the mail as input? I mean as sys.argv or stdin?
A:
That is just fine, just put fw after :0 (:0 fw). Your python program will receive the mail on stdin. You have to 'echo' the possibly transformed mail on stdout.
fw means:
f Consider the pipe as a filter.
w Wait for the filter or program to finish and check its exitcode (normally ignored); if the filter is unsuccessful, then the text will not have been filtered.
My SPAM checker (bogofilter) just works like that. It adds headers and later procmail-rules do something depending on these headers.
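A minimal skeleton for such a filter script could look like this (just a sketch; the transformation in the middle is whatever your script needs to do):
import sys

# procmail pipes the complete message to the script's stdin
message = sys.stdin.read()

# ... inspect or transform the message here ...

# with the 'f' flag, whatever is written to stdout replaces the message
sys.stdout.write(message)
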
A:
The log excerpt clearly states that your script is executed, even if it doesn't show the desired effect. I'd expect procmail to log an error if the execution failed.
Anyway, make sure that the user (uid) that procmail is executed with has the correct permissions to execute your script. Wire the script into procmail only if you succeeded testing with something like this (replace 'procmail' with the correct uid):
# sudo -u procmail /bin/sh -c '/bin/cat /work/scripts/mail.txt | /usr/bin/python /work/scripts/privilege_emails_forward.py'
Depending on your sudo configuration, you'd have to run this as root. Oh, and make sure you use absolute file paths.
| Want procmail to run a custom python script, everytime a new mail shows up | I have a pretty common requirement with procmail, but I am unable to get it working somehow. I have a procmailrc file with this content:
:0
* ^To.*@myhost
| /usr/bin/python /work/scripts/privilege_emails_forward.py
Wherein my custom python script (privilege_emails_forward.py) will scan through the email just received and do some operations on the mail content. But I am unable to get the script to execute in the first place (let alone scan through the mail content).
Is this a correct way of invoking an external program(python) as soon as new mail arrives?
And how will my python program (privilege_emails_forward.py) receive the mail as input? I mean as sys.argv or stdin?
| [
"That is just fine, just put fw after :0 (:0 fw). Your python program will receive the mail on stdin. You have to 'echo' the possibly transformed mail on stdout.\nfw means:\n\nf Consider the pipe as a filter.\nw Wait for the filter or program to finish and check its exitcode (normally ignored); if the filter is unsuccessful, then the text will not have been filtered.\n\nMy SPAM checker (bogofilter) just works like that. It adds headers and later procmail-rules do something depending on these headers.\n",
"The log excerpt clearly states that your script is executed, even if it doesn't show the desired effect. I'd expect procmail to log an error if the execution failed.\nAnyway, make sure that the user (uid) that procmail is executed with has the correct permissions to execute your script. Wire the script into procmail only if you succeeded testing with something like this (replace 'procmail' with the correct uid):\n\n# sudo -u procmail /bin/sh -c '/bin/cat /work/scripts/mail.txt | /usr/bin/python /work/scripts/privilege_emails_forward.py'\n\nDepending on your sudo configuration, you'd have to run this as root. Oh, and make sure you use absolute file paths.\n"
] | [
11,
5
] | [] | [] | [
"email",
"procmail",
"python"
] | stackoverflow_0000557906_email_procmail_python.txt |
Q:
python MySQL module class file name
I am confused about how directory names, file names and class names all work together.
This is what I have at the moment
app.py
database/
client.py
staff.py
order.py
Inside client.py I have a single class called client, which acts as the database model (MVC). The same with my other files: staff.py has a class called staff, order.py has order.
Then in app.py I do:
from database import client as model
c = model.client()
And then I get confused. In an ideal world this is what I want to do:
Keep my database model classes in separate files in their own directory.
Use them like this:
c = model.client()
o = model.order()
s = model.staff()
The only way I can see to do this is to put all my classes in a single file called model.py and save this at the root.
I'm sure I am missing something very basic here.
A:
Python has two basic ways of importing content. Modules and Packages.
A module is simply a python file on the include path: order.py
If order.py defines a class named foo, then access to that class could be had by:
import order
o = order.foo()
In order to use the syntax from the original question, you would need to ensure that your model.py file has the following attributes: [client, staff, order]
However, that typically means placing them in a single file. Which is what you are trying to avoid.
A package is a directory with an __init__.py inside of it. The __init__.py initializes the package (i.e. it is run on first import), and you can have either modules or sub-packages within that directory.
model
__init__.py
client.py
staff.py
order.py
That way, to access any of the sub modules, you would simply say:
import model.client
However, that is simply importing the module. It is not importing any of the attributes of the module. So in order to access a class inside the module, you would need to specify it:
import model.client
o = model.client.clientclass()
This is a bit tedious, but very well organized.
Best of both (where performance isn't a big deal):
If you type the following code in __init__.py:
from .client import clientclass as client
from .staff import staffclass as staff
from .order import orderclass as order
Then you have auto-loaded all of your classes, and they can be accessed as:
import model
c = model.client()
s = model.staff()
o = model.order()
In the end, it may be more simple to stick with the non-magical way to do it:
import model.client
o = model.client.clientclass()
--Gahooa
| python MySQL module class file name | I am confused about how directory names, file names and class names all work together.
This is what I have at the moment
app.py
database/
client.py
staff.py
order.py
Inside client.py I have a single class called client, which acts as the database model (MVC). The same with my other files: staff.py has a class called staff, order.py has order.
Then in app.py I do:
from database import client as model
c = model.client()
And then I get confused. In an ideal world this is what I want to do:
Keep my database model classes in separate files in their own directory.
Use them like this:
c = model.client()
o = model.order()
s = model.staff()
The only way I can see to do this is to put all my classes in a single file called model.py and save this at the root.
I'm sure I am missing something very basic here.
| [
"Python has two basic ways of importing content. Modules and Packages.\n\nA module is simply a python file on the include path: order.py\nIf order.py defines a class named foo, then access to that class could be had by:\nimport order\no = order.foo()\n\nIn order to use the syntax from the orignial question, you would need to ensure that your model.py file has the following attributes: [client, staff, order]\nHowever, that typically means placing them in a single file. Which is what you are trying to avoid.\n\nA package is a directory with an __init__.py inside of it. The init.py initializes the package (ie. it is run on first import), and you can have either modules or sub-packages within that directory.\nmodel\n __init__.py\n client.py\n staff.py\n order.py\n\nThat way, to access any of the sub modules, you would simply say:\nimport model.client\n\nHowever, that is simply importing the module. It is not importing any of the attributes of the module. So in order to access a class inside the module, you would need to specify it:\nimport model.client\no = model.client.clientclass() \n\nThis is a bit tedious, but very well organized.\n\nBest of both (where performance isn't a big deal):\nIf you type the following code in __init__.py:\nfrom .client import clientclass as client\nfrom .staff import staffclass as staff\nfrom .order import orderclass as order\n\nThen you have auto-loaded all of your classes, and they can be accessed as:\nimport model\nc = model.client()\ns = model.staff()\no = model.order()\n\n\nIn the end, it may be more simple to stick with the non-magical way to do it: \nimport model.client\no = model.client.clientclass() \n\n--Gahooa\n"
] | [
4
] | [] | [] | [
"class",
"directory",
"file",
"module",
"python"
] | stackoverflow_0000561791_class_directory_file_module_python.txt |
Q:
Where should I post my python code?
Today I needed to parse some data out from an xlsx file (Office open XML Spreadsheet). I could have just opened the files in openoffice and exported to csv. However I will need to reimport data from this spreadsheet later, and I wanted to eliminate the manual operation.
I searched on the net for xlsx parser, and all I found was a stackoverflow question asking the same thing: Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx)
So I rolled my own.
It's 134 lines of code for the parsing and accessing of a spreadsheet, and 54 lines of code of unit tests. This of course is only tested on the 1 file I needed it for, and aside from how it's used in the unit tests there is no documentation as of now. It uses zipfile, minidom, re and unittest, so it is perfectly portable and platform independent.
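In essence it just treats the .xlsx file as a zip archive of XML parts, roughly like this sketch (the file name is made up; the internal paths follow the standard Office Open XML layout):
import zipfile
from xml.dom import minidom

archive = zipfile.ZipFile('workbook.xlsx')
# worksheet XML lives inside the archive under xl/worksheets/
dom = minidom.parseString(archive.read('xl/worksheets/sheet1.xml'))
# <c> elements are cells; the 'r' attribute is the cell reference
for cell in dom.getElementsByTagName('c'):
    print cell.getAttribute('r') # e.g. 'A1'
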
Since I don't blog, and I don't have any desire to turn this into a python library for OfficeOpen XML, I am stuck wondering where I should post this code. I have solved a problem that I am sure others will get in the future. So I want to post my code under public domain somewhere for anyone to copy and paste into their app and adjust to fix their problem.
The implementation is simple, and here is a quick overview of the features:
workbook = Workbook(filename) # open a file
for sheet in workbook: pass # iterate over the worksheets
workbook["sheetname"] # access a sheet by name, also possible to do by index from 0
sheet["A1"] # Access cell
sheet["A"] # Access column
sheet["1"] # Access row
cell.value # Cell value - only tested with ints and strings.
Thanks for all the replies. I was going to host it on activestate, but the page kept crashing when sending me the activation mail. So I can't activate my code to post it.
My second choice was codeproject, and I wrote up a nice article about the file. Sadly that page crashes when I try to submit my post.
So I put it on github for any to see and branch off:
http://github.com/staale/python-xlsx/tree/master
I don't want to do all the work for the python project hosting, so that's out.
Accepting the git answer, as that was the only thing that worked for me. And git rocks.
Edit: Gah, lost my entire post at codeproject, and I did such a nice writeup. Screw it, I have spent more time trying to share this than it took coding it. So I am calling it done for my part as of now. Unless I decide to tweak it more later.
A:
GitHub would also be a great place to post this. Especially as that would allow others to quickly fork their own copies and make any improvements or modifications they need. These changes would then also be available to anyone else who wants them.
A:
You should post it here. There are plenty of recipes here and yours would fit in perfectly.
Stack Overflow is meant to be a wiki, where people search for questions and find answers. That being said, if you want to post it here, what you would do is open a question relevant to your answer, then respond to your own question with your answer.
A:
If your code is short enough to copy and paste, you might want to post it as a Python recipe. That site is a great resource for learning Python techniques and its best contents have been compiled into a book.
If your code is reusable as-is then you should post your Python code in the Python Package Index (pypi). Organize your source code, read this tutorial on how to write a setup.py for your package. Once you have your free pypi account and have written setup.py, run python setup.py register to claim your package's name and post its metadata to the index. setup.py can also upload your package's source or binaries to pypi, for example python setup.py sdist upload would build and upload the source distribution.
Once your package is a part of the Python Package Index, other Python programmers can download and install it automatically with a number of tools including easy_install your_package.
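A minimal setup.py for a single-module distribution might be as small as this (the module and metadata names are just placeholders):
from distutils.core import setup

setup(name='python-xlsx', # placeholder package name
      version='0.1',
      py_modules=['xlsx']) # the module file xlsx.py
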
A:
Err, with a more descriptive title, I am sure many people would have found it here itself (but I wasn't aware of the not-a-question tag).
Hence, you can get a free blog, and just put in the bits you want to share, that way you would also have a steady online reference whenever you need it.
A:
In general, CodeProject is a great place to post code as long as you're willing to write a small article about the code. (One of the good things about CodeProject is that they do require some verbiage about the code.) The site gets an extraordinary amount of traffic, so anything you post there will be seen.
A:
It seems to me that Python Package Index is the right place for you
A:
May I humbly suggest my site: http://utilitymill.com. It lets you not only post your Python code, but also makes it into a runnable web utility.
Users can collaborate on the code and write-ups, and it even gives you an automatic RESTful API for the utility for free.
| Where should I post my python code? | Today I needed to parse some data out from an xlsx file (Office open XML Spreadsheet). I could have just opened the files in openoffice and exported to csv. However I will need to reimport data from this spreadsheet later, and I wanted to eliminate the manual operation.
I searched on the net for xlsx parser, and all I found was a stackoverflow question asking the same thing: Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx)
So I rolled my own.
It's 134 lines of code for the parsing and accessing of a spreadsheet, and 54 lines of code of unit tests. This of course is only tested on the 1 file I needed it for, and aside from how it's used in the unit tests there is no documentation as of now. It uses zipfile, minidom, re and unittest, so it is perfectly portable and platform independent.
Since I don't blog, and I don't have any desire to turn this into a python library for OfficeOpen XML, I am stuck wondering where I should post this code. I have solved a problem that I am sure others will get in the future. So I want to post my code under public domain somewhere for anyone to copy and paste into their app and adjust to fix their problem.
The implementation is simple, and here is a quick overview of the features:
workbook = Workbook(filename) # open a file
for sheet in workbook: pass # iterate over the worksheets
workbook["sheetname"] # access a sheet by name, also possible to do by index from 0
sheet["A1"] # Access cell
sheet["A"] # Access column
sheet["1"] # Access row
cell.value # Cell value - only tested with ints and strings.
Thanks for all the replies. I was going to host it on activestate, but the page kept crashing when sending me the activation mail. So I can't activate my code to post it.
My second choice was codeproject, and I wrote up a nice article about the file. Sadly that page crashes when I try to submit my post.
So I put it on github for any to see and branch off:
http://github.com/staale/python-xlsx/tree/master
I don't want to do all the work for the python project hosting, so that's out.
Accepting the git answer, as that was the only thing that worked for me. And git rocks.
Edit: Gah, lost my entire post at codeproject, and I did such a nice writeup. Screw it, I have spent more time trying to share this than it took coding it. So I am calling it done for my part as of now. Unless I decide to tweak it more later.
| [
"GitHub would also be a great place to post this. Especially as that would allow others to quickly fork their own copies and make any improvements or modifications they need. These changes would then also be available to anyone else who wants them.\n",
"You should post it here. There are plenty of recipes here and yours would fit in perfectly.\nStack Overflow is meant to be a wiki, where people search for questions and find answers. That being said, if you want to post it here, what you would do is open a question relevant to your answer, then respond to your own question with your answer.\n",
"If your code is short enough to copy and paste, you might want to post it as a Python recipe. That site is a great resource for learning Python techniques and its best contents have been compiled into a book.\nIf your code is reusable as-is then you should post your Python code in the Python Package Index (pypi). Organize your source code, read this tutorial on how to write a setup.py for your package. Once you have your free pypi account and have written setup.py, run python setup.py register to claim your package's name and post its metadata to the index. setup.py can also upload your package's source or binaries to pypi, for example python setup.py sdist upload would build and upload the source distribution.\nOnce your package is a part of the Python Package Index, other Python programmers can download and install it automatically with a number of tools including easy_install your_package.\n",
"err, with a more descriptive title, i am sure many people would have found it here itself.(but i wasn't aware of the not-a-question tag).\nHence, you can get a free blog, and just put in the bits you want to share, that way you would also have a steady online reference whenever you need it.\n",
"In general, CodeProject is a great place to post code as long as you're willing to write a small article about the code. (One of the good things about CodeProject is that they do require some verbiage about the code.) The site gets an extraordinary amount of traffic, so anything you post there will be seen.\n",
"It seems to me that Python Package Index is the right place for you\n",
"May I humbly suggest my site; http://utilitymill.com. It lets you not only post your Python code, but also makes it into a runnable web utility. \nUsers can collaberate on the code and write-ups, and it even gives you an automatic RESTful API for the utility for free.\n"
] | [
6,
5,
2,
1,
1,
0,
0
] | [] | [] | [
"excel_2007",
"python"
] | stackoverflow_0000556967_excel_2007_python.txt |
Q:
I want to load all of the unit-tests in a tree, can it be done?
I have a hierarchical folder full of Python unit-tests. They are all importable ".py" files which define TestCase objects. This folder contains thousands of files in many nested subdirectories and was written by somebody else. I do not have permission to change it, I just have to run it.
I want to generate a single TestSuite object which contains all of the TestCases in the folder. Is there an easy and elegant way to do this?
Thanks
A:
The nose application may be useful for you, either directly, or to show how to implement this.
http://code.google.com/p/python-nose/ seems to be the home page.
Basically, what you want to do is walk the source tree (os.walk), use imp.load_module
to load the module, use unittest.defaultTestLoader to load the tests from the module into a TestSuite, and then use that in whatever way you need to use it.
Or at least that's approximately what I do in my custom TestRunner implementation
(bzr get http://code.liw.fi/coverage-test-runner/bzr/trunk).
A:
Look at the unittest.TestLoader (https://docs.python.org/library/unittest.html#loading-and-running-tests)
And the os.walk (https://docs.python.org/library/os.html#files-and-directories)
You should be able to traverse your package tree using the TestLoader to build a suite which you can then run.
Something along the lines of this.
import os
import imp
import unittest

runner = unittest.TextTestRunner()
superSuite = unittest.TestSuite()
for path, dirs, files in os.walk('path/to/tree'):
    # if a CVS dir or whatever: continue
    for f in files:
        # if not a python file: continue
        # import the module from its file, then load the tests it defines
        module = imp.load_source(os.path.splitext(f)[0], os.path.join(path, f))
        suite = unittest.defaultTestLoader.loadTestsFromModule(module)
        superSuite.addTests(suite)  # OR runner.run(suite)
runner.run(superSuite)
You can either walk through the tree simply running each test (runner.run(suite)) or you can accumulate a superSuite of all individual suites and run the whole mass as a single test (runner.run( superSuite )).
You don't need to do both, but I included both sets of suggestions in the above (untested) code.
A:
The test directory of the Python Library source shows the way.
The README file describes how to write Python Regression Tests for library modules.
The regrtest.py module starts with:
"""Regression test.
This will find all modules whose name is "test_*" in the test
directory, and run them.
| I want to load all of the unit-tests in a tree, can it be done? | I have a heirarchical folder full of Python unit-tests. They are all importable ".py" files which define TestCase objects. This folder contains thousands of files in many nested subdirectories and was written by somebody else. I do not have permission to change it, I just have to run it.
I want to generate a single TestSuite object which contains all of the TestCases in the folder. Is there an easy and elegant way to do this?
Thanks
| [
"The nose application may be useful for you, either directly, or to show how to implement this.\nhttp://code.google.com/p/python-nose/ seems to be the home page.\nBasically, what you want to do is walk the source tree (os.walk), use imp.load_module\nto load the module, use unittest.defaultTestLoader to load the tests from the module into a TestSuite, and then use that in whatever way you need to use it.\nOr at least that's approximately what I do in my custom TestRunner implementation\n(bzr get http://code.liw.fi/coverage-test-runner/bzr/trunk).\n",
"Look at the unittest.TestLoader (https://docs.python.org/library/unittest.html#loading-and-running-tests)\nAnd the os.walk (https://docs.python.org/library/os.html#files-and-directories)\nYou should be able to traverse your package tree using the TestLoader to build a suite which you can then run.\nSomething along the lines of this.\nrunner = unittest.TextTestRunner()\nsuperSuite = unittest.TestSuite()\nfor path, dirs, files in os.walk( 'path/to/tree' ):\n # if a CVS dir or whatever: continue\n for f in files:\n # if not a python file: continue\n suite= unittest.defaultTestLoader.loadTestsFromModule( os.path.join(path,f)\n superSuite .addTests(suite ) # OR runner.run( suite)\nrunner.run( superSuite )\n\nYou can either walk through the tree simply running each test (runner.run(suite)) or you can accumulate a superSuite of all individual suites and run the whole mass as a single test (runner.run( superSuite )).\nYou don't need to do both, but I included both sets of suggestions in the above (untested) code.\n",
"The test directory of the Python Library source shows the way.\nThe README file describes how to write Python Regression Tests for library modules.\nThe regrtest.py module starts with:\n\"\"\"Regression test.\n\nThis will find all modules whose name is \"test_*\" in the test\ndirectory, and run them.\n\n"
] | [
4,
2,
1
] | [] | [] | [
"python",
"unit_testing"
] | stackoverflow_0000562349_python_unit_testing.txt |
Q:
Python Imaging Library save function syntax
Simple one I think but essentially I need to know what the syntax is for the save function on the PIL. The help is really vague and I can't find anything online. Any help'd be great, thanks :).
A:
From the PIL Handbook:
im.save(outfile, options...)
im.save(outfile, format, options...)
Simplest case:
im.save('my_image.png')
or whatever. In this case, the type of the image will be determined from the extension. Is there a particular problem you're having? Or specific saving option that you'd like to use but aren't sure how to do so?
You may be able to find additional information in the documentation on each filetype. The PIL Handbook Appendixes list the different file types that are supported. In some cases, options are given for save. For example, on the JPEG file format page, we're told that save supports
quality
optimize, and
progressive
with notes about each option.
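For example, assuming im is an already-opened Image, the JPEG options are passed as extra keyword arguments:
im.save('photo.jpg', 'JPEG', quality=85) # optimize and progressive work the same way
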
A:
Image.save(filename[, format[, options]]). You can usually just use Image.save(filename) since it automatically figures out the file type for you from the extension.
| Python Imaging Library save function syntax | Simple one I think but essentially I need to know what the syntax is for the save function on the PIL. The help is really vague and I can't find anything online. Any help'd be great, thanks :).
| [
"From the PIL Handbook:\nim.save(outfile, options...)\n\nim.save(outfile, format, options...)\n\nSimplest case:\nim.save('my_image.png')\n\nor whatever. In this case, the type of the image will be determined from the extension. Is there a particular problem you're having? Or specific saving option that you'd like to use but aren't sure how to do so?\nYou may be able to find additional information in the documentation on each filetype. The PIL Handbox Appendixes list the different file types that are supported. In some cases, options are given for save. For example, on the JPEG file format page, we're told that save supports\n\nquality\noptimize, and \nprogressive\n\nwith notes about each option.\n",
"Image.save(filename[, format[, options]]). You can usually just use Image.save(filename) since it automatically figures out the file type for you from the extension.\n"
] | [
18,
1
] | [] | [] | [
"python",
"python_imaging_library"
] | stackoverflow_0000562519_python_python_imaging_library.txt |
Q:
python upload - where are tmp/FILES?
I'm running python 2.4 from cgi and I'm trying to upload to a cloud service using a python api. In php, the $_FILE array contains a "tmp" element which is where the file lives until you place it where you want it. What's the equivalent in python?
if I do this
fileitem = form['file']
fileitem.filename is the name of the file
If I print fileitem, the array simply contains the file name and what looks to be the file itself.
I am trying to stream things and it requires the tmp location when using the php api.
A:
The file is a real file, but the cgi.FieldStorage unlinked it as soon as it was created so that it would exist only as long as you keep it open, and no longer has a real path on the file system.
You can, however, change this...
You can extend the cgi.FieldStorage and replace the make_file method to place the file wherever you want:
import os
import cgi
class MyFieldStorage(cgi.FieldStorage):
def make_file(self, binary=None):
return open(os.path.join('/tmp', self.filename), 'wb')
You must also keep in mind that the FieldStorage object only creates a real file if it receives more than 1000B (otherwise it is a cStringIO.StringIO)
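You then instantiate the subclass exactly where you would otherwise create a cgi.FieldStorage (a sketch; in a plain CGI script the defaults read from stdin and the environment):
form = MyFieldStorage()
fileitem = form['file']
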
EDIT: The cgi module actually makes the file with the tempfile module, so check that out if you want lots of gooey details.
A:
Here's a code snippet taken from my site:
h = open("user_uploaded_file", "wb")
while 1:
data = form["file"].file.read(4096)
if not data:
break
h.write(data)
h.close()
Hope this helps.
| python upload - where are tmp/FILES? | I'm running python 2.4 from cgi and I'm trying to upload to a cloud service using a python api. In php, the $_FILE array contains a "tmp" element which is where the file lives until you place it where you want it. What's the equivalent in python?
if I do this
fileitem = form['file']
fileitem.filename is the name of the file
If I print fileitem, the array simply contains the file name and what looks to be the file itself.
I am trying to stream things and it requires the tmp location when using the php api.
| [
"The file is a real file, but the cgi.FieldStorage unlinked it as soon as it was created so that it would exist only as long as you keep it open, and no longer has a real path on the file system.\nYou can, however, change this...\nYou can extend the cgi.FieldStorage and replace the make_file method to place the file wherever you want:\nimport os\nimport cgi\n\nclass MyFieldStorage(cgi.FieldStorage):\n def make_file(self, binary=None):\n return open(os.path.join('/tmp', self.filename), 'wb')\n\nYou must also keep in mind that the FieldStorage object only creates a real file if it recieves more than 1000B (otherwise it is a cStringIO.StringIO)\nEDIT: The cgi module actually makes the file with the tempfile module, so check that out if you want lots of gooey details.\n",
"Here's a code snippet taken from my site:\nh = open(\"user_uploaded_file\", \"wb\")\nwhile 1:\n data = form[\"file\"].file.read(4096)\n if not data:\n break\n h.write(data)\nh.close()\n\nHope this helps.\n"
] | [
2,
1
] | [] | [] | [
"cgi",
"mosso",
"python",
"upload"
] | stackoverflow_0000562278_cgi_mosso_python_upload.txt |
Q:
Context processor using Werkzeug and Jinja2
My application is running on App Engine and is implemented using Werkzeug and Jinja2. I'd like to have something functionally equivalent to Django's own context processors: a callable that takes a request and adds something to the template context. I already have "context processors" that add something to the template context, but how do I get the request part working? I implemented context processors as callables that just return a dictionary that is later used to update the context.
For example, I'd like to add something that is contained in request.environ.
A:
One way of achieving this is through late-bound template globals using the thread-local proxy in Werkzeug.
A simple example that puts the request into the template globals:
from werkzeug import Local, LocalManager
local = Local()
local_manager = LocalManager([local])
from jinja2 import Environment, FileSystemLoader
# Create a global dict using the local's proxy to the request attribute
global_dict = {'request': local('request')}
jinja2_env = Environment(loader=FileSystemLoader('/'))
jinja2_env.globals.update(global_dict)
def application(environ, start_response):
"""A WSGI Application"""
# later, bind the actual attribute to the local object
local.request = request = Request(environ)
# continue to view handling code
# ...
application = local_manager.make_middleware(application)
Now in any of your templates, the current request will appear bound to the variable "request". Of course that could be anything else in environ. The trick is to use the local proxy, then set the value before you render any template.
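For instance, rendering a template through that environment picks up the proxied request at render time (request.path is a standard Werkzeug Request attribute):
template = jinja2_env.from_string('Current path: {{ request.path }}')
html = template.render() # 'request' resolves through the local proxy
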
I should probably also add that a framework like Glashammer (Werkzeug+Jinja2) streamlines this process for you by using events. Many functions can connect to the events during the process of the WSGI call (for example, when a request is created) and they can put stuff in the template namespace at that point.
A:
Well, using what Ali wrote I came to a solution that is specific to App Engine (because of its import cache). Unfortunately, Ali's code does not work with App Engine, because the code that sets the Jinja globals is imported only once (making the globals effectively static).
I had to write my own render() function and update the context there. For completeness sake, below is the code I came to:
def render(template, **kwargs):
response_code = kwargs.pop('response_code', 200)
mimetype = kwargs.pop('mimetype', 'text/html')
for item in getattr(settings, 'CONTEXT_PROCESSORS', []):
try:
processor = import_string(item)
kwargs.update(processor(local.request))
except (ImportError, AttributeError), e:
logging.error(e)
return Response(jinja_env.get_template(template).render(**kwargs),
status=response_code, mimetype=mimetype)
This is App Engine specific. In other environments Ali's code works as expected (and that's why I am retagging my question).
| Context processor using Werkzeug and Jinja2 | My application is running on App Engine and is implemented using Werkzeug and Jinja2. I'd like to have something functionally equivalent to Django's own context processors: a callable that takes a request and adds something to the template context. I already have "context processors" that add something to the template context, but how do I get the request part working? I implemented context processors as callables that just return a dictionary that is later used to update the context.
For example, I'd like to add something that is contained in request.environ.
| [
"One way of achieving this is through late-bound template globals using the thread-local proxy in Werkzeug.\nA simple example that puts the request into the the template globals:\nfrom werkzeug import Local, LocalManager\nlocal = Local()\nlocal_manager = LocalManager([local])\n\nfrom jinja2 import Environment, FileSystemLoader\n\n# Create a global dict using the local's proxy to the request attribute\nglobal_dict = {'request': local('request')}\njinja2_env = Environment(loader=FileSystemLoader('/'))\njinja2_env.globals.update(global_dict)\n\ndef application(environ, start_response):\n \"\"\"A WSGI Application\"\"\"\n # later, bind the actual attribute to the local object\n local.request = request = Request(environ)\n\n # continue to view handling code\n # ...\n\napplication = local_manager.make_middleware(application)\n\nNow in any of your templates, the current request will appear bound to the variable \"request\". Of course that could be anything else in environ. The trick is to use the local proxy, then set the value before you render any template.\nI should probably also add that a framework like Glashammer (Werkzeug+Jinja2) streamlines this process for you by using events. Many functions can connect to the events during the process of the WSGI call (for example, when a request is created) and they can put stuff in the template namespace at that point.\n",
"Well, using what Ali wrote I came to the solution that is specific to App Engine (because of its import cache). Unfortunately, Ali's code does not work with App Engine, because the code that sets Jinja globals are imported only once (making the globals effectively static).\nI had to write my own render() function and update the context there. For completeness sake, below is the code I came to:\ndef render(template, **kwargs):\n response_code = kwargs.pop('response_code', 200)\n mimetype = kwargs.pop('mimetype', 'text/html')\n for item in getattr(settings, 'CONTEXT_PROCESSORS', []):\n try:\n processor = import_string(item)\n kwargs.update(processor(local.request))\n except (ImportError, AttributeError), e:\n logging.error(e)\n return Response(jinja_env.get_template(template).render(**kwargs),\n status=response_code, mimetype=mimetype)\n\nThis is App Engine specific. In other environments Ali's code works as expected (and that's why I am retagging my question).\n"
] | [
4,
3
] | [] | [] | [
"django",
"google_app_engine",
"jinja2",
"python",
"werkzeug"
] | stackoverflow_0000539116_django_google_app_engine_jinja2_python_werkzeug.txt |
Q:
Can you recommend a Python SOAP client that can accept WS-Attachments?
I've read mixed reviews of both Suds and ZSI -- two Python SOAP libraries. However, I'm unclear whether either of them can support WS-Attachments. I'd prefer to use Suds (appears to be more straightforward), but I'll defer to whichever library suits my needs.
A:
For your requirements I'd have to recommend ZSI. From its documentation,
It can also be used to build applications using SOAP Messages with Attachments.
Their website is not as pretty as Suds but the package includes promising documentation.
SOAPpy has support for attachments on its TODO list. Suds does not mention the word "attachments" anywhere. If you need attachments and don't want to implement them yourself, then ZSI is your choice.
A:
I believe soaplib can handle attachments. I'm just not sure exactly how compliant it is with WS-Attachments because they don't trumpet it.
Here's a sample client that, their words, allows "multi-part mime payloads":
helloworld_attach.py
A:
In my experience Suds was the only Python package that actually works. I did not use attachments.
| Can you recommend a Python SOAP client that can accept WS-Attachments? | I've read mixed reviews of both Suds and ZSI -- two Python SOAP libraries. However, I'm unclear whether either of them can support WS-Attachments. I'd prefer to use Suds (appears to be more straightforward), but I'll defer to whichever library suits my needs.
| [
"For your requirements I'd have to recommend ZSI. From its documentation,\n\nIt can also be used to build applications using SOAP Messages with Attachments.\n\nTheir website is not as pretty as Suds but the package includes promising documentation.\nSOAPpy has support for attachments on its TODO list. Suds does not mention the word \"attachments\" anywhere. If you need attachments and don't want to implement them yourself, then ZSI is your choice.\n",
"I believe soaplib can handle attachments. I'm just not sure exactly how compliant it is with WS-Attachments because they don't trumpet it.\nHere's a sample client that, their words, allows \"multi-part mime payloads\":\nhelloworld_attach.py\n",
"In my experience Suds was the only Python package that actually works. I did not use attachments.\n"
] | [
1,
1,
0
] | [] | [] | [
"python",
"soap"
] | stackoverflow_0000491404_python_soap.txt |
Q:
String separation in required format, Pythonic way? (with or w/o Regex)
I have a string in the format:
t='@abc @def Hello this part is text'
I want to get this:
l=["abc", "def"]
s='Hello this part is text'
I did this:
a=t[t.find(' ',t.rfind('@')):].strip()
s=t[:t.find(' ',t.rfind('@'))].strip()
b=a.split('@')
l=[i.strip() for i in b][1:]
It works for the most part, but it fails when the text part has the '@'.
Eg, when:
t='@abc @def My email is red@hjk.com'
it fails. The @names are there in the beginning and there can be text after @names, which may possibly contain @.
Clearly I can initially append a space and find out the first word without '@'. But that doesn't seem an elegant solution.
What is a pythonic way of solving this?
A:
Building unashamedly on MrTopf's effort:
import re
rx = re.compile("((?:@\w+ +)+)(.*)")
t='@abc @def @xyz Hello this part is text and my email is foo@ba.r'
a,s = rx.match(t).groups()
l = re.split('[@ ]+',a)[1:-1]
print l
print s
prints:
['abc', 'def', 'xyz']
Hello this part is text and my email is foo@ba.r
Justly called to account by hasen j, let me clarify how this works:
/@\w+ +/
matches a single tag - @ followed by at least one alphanumeric or _ followed by at least one space character. + is greedy, so if there is more than one space, it will grab them all.
To match any number of these tags, we need to add a plus (one or more things) to the pattern for tag; so we need to group it with parentheses:
/(@\w+ +)+/
which matches one-or-more tags, and, being greedy, matches all of them. However, those parentheses now fiddle around with our capture groups, so we undo that by making them into an anonymous group:
/(?:@\w+ +)+/
Finally, we make that into a capture group and add another to sweep up the rest:
/((?:@\w+ +)+)(.*)/
A last breakdown to sum up:
((?:@\w+ +)+)(.*)
(?:@\w+ +)+
( @\w+ +)
@\w+ +
Note that in reviewing this, I've improved it - \w didn't need to be in a set, and it now allows for multiple spaces between tags. Thanks, hasen-j!
A:
t='@abc @def Hello this part is text'
words = t.split(' ')
names = []
while words:
w = words.pop(0)
if w.startswith('@'):
names.append(w[1:])
    else:
        words.insert(0, w)  # put back the first non-tag word
        break
text = ' '.join(words)
print names
print text
A:
How about this:
Splitting by space.
For each word, check:
2.1. if the word starts with @ then push it to the first list
2.2. otherwise just join the remaining words by spaces.
A:
[i.strip('@') for i in t.split(' ', 2)[:2]] # for a fixed number of @def
a = [i.strip('@') for i in t.split(' ') if i.startswith('@')]
 s = ' '.join(i for i in t.split(' ') if not i.startswith('@'))
A:
You might also use regular expressions:
import re
rx = re.compile("@([\w]+) @([\w]+) (.*)")
t='@abc @def Hello this part is text and my email is foo@ba.r'
a,b,s = rx.match(t).groups()
But this all depends on what your data can look like, so you might need to adjust it. What it does is basically create groups via () and check what's allowed in them.
A:
[edit: this is implementing what was suggested by Osama above]
This will create L based on the @ variables from the beginning of the string, and then once a non @ var is found, just grab the rest of the string.
t = '@one @two @three some text afterward with @ symbols@ meow@meow'
words = t.split(' ') # split into list of words based on spaces
L = []
s = ''
for i in range(len(words)): # go through each word
word = words[i]
if word[0] == '@': # grab @'s from beginning of string
L.append(word[1:])
continue
s = ' '.join(words[i:]) # put spaces back in
break # you can ignore the rest of the words
You can refactor this to be less code, but I'm trying to make what is going on obvious.
A:
Here's just another variation that uses split() and no regexpes:
t='@abc @def My email is red@hjk.com'
tags = []
words = iter(t.split())
# iterate over words until first non-tag word
for w in words:
if not w.startswith("@"):
# join this word and all the following
s = w + " " + (" ".join(words))
break
tags.append(w[1:])
else:
s = "" # handle string with only tags
print tags, s
Here's a shorter but perhaps a bit cryptic version that uses a regexp to find the first space followed by a non-@ character:
import re
t = '@abc @def My email is red@hjk.com @extra bye'
m = re.search(r"\s([^@].*)$", t)
tags = [tag[1:] for tag in t[:m.start()].split()]
s = m.group(1)
print tags, s # ['abc', 'def'] My email is red@hjk.com @extra bye
This doesn't work properly if there are no tags or no text. The format is underspecified. You'll need to provide more test cases to validate.
| String separation in required format, Pythonic way? (with or w/o Regex) | I have a string in the format:
t='@abc @def Hello this part is text'
I want to get this:
l=["abc", "def"]
s='Hello this part is text'
I did this:
a=t[t.find(' ',t.rfind('@')):].strip()
s=t[:t.find(' ',t.rfind('@'))].strip()
b=a.split('@')
l=[i.strip() for i in b][1:]
It works for the most part, but it fails when the text part has the '@'.
Eg, when:
t='@abc @def My email is red@hjk.com'
it fails. The @names are there in the beginning and there can be text after @names, which may possibly contain @.
Clearly I can initially append a space and find out the first word without '@'. But that doesn't seem an elegant solution.
What is a pythonic way of solving this?
| [
"Building unashamedly on MrTopf's effort:\nimport re\nrx = re.compile(\"((?:@\\w+ +)+)(.*)\")\nt='@abc @def @xyz Hello this part is text and my email is foo@ba.r'\na,s = rx.match(t).groups()\nl = re.split('[@ ]+',a)[1:-1]\nprint l\nprint s\n\nprints:\n\n['abc', 'def', 'xyz']\n Hello this part is text and my email is foo@ba.r\n\n\nJustly called to account by hasen j, let me clarify how this works:\n/@\\w+ +/\n\nmatches a single tag - @ followed by at least one alphanumeric or _ followed by at least one space character. + is greedy, so if there is more than one space, it will grab them all.\nTo match any number of these tags, we need to add a plus (one or more things) to the pattern for tag; so we need to group it with parentheses:\n/(@\\w+ +)+/\n\nwhich matches one-or-more tags, and, being greedy, matches all of them. However, those parentheses now fiddle around with our capture groups, so we undo that by making them into an anonymous group:\n/(?:@\\w+ +)+/\n\nFinally, we make that into a capture group and add another to sweep up the rest:\n/((?:@\\w+ +)+)(.*)/\n\nA last breakdown to sum up:\n((?:@\\w+ +)+)(.*)\n (?:@\\w+ +)+\n ( @\\w+ +)\n @\\w+ +\n\n\nNote that in reviewing this, I've improved it - \\w didn't need to be in a set, and it now allows for multiple spaces between tags. Thanks, hasen-j!\n",
"t='@abc @def Hello this part is text'\n\nwords = t.split(' ')\n\nnames = []\nwhile words:\n w = words.pop(0)\n if w.startswith('@'):\n names.append(w[1:])\n else:\n break\n\ntext = ' '.join(words)\n\nprint names\nprint text\n\n",
"How about this:\n\nSplitting by space.\nforeach word, check \n2.1. if word starts with @ then Push to first list\n2.2. otherwise just join the remaining words by spaces.\n\n",
" [i.strip('@') for i in t.split(' ', 2)[:2]] # for a fixed number of @def\n a = [i.strip('@') for i in t.split(' ') if i.startswith('@')]\n s = ' '.join(i for i in t.split(' ') if not i.startwith('@'))\n\n",
"You might also use regular expressions:\nimport re\nrx = re.compile(\"@([\\w]+) @([\\w]+) (.*)\")\nt='@abc @def Hello this part is text and my email is foo@ba.r'\na,b,s = rx.match(t).groups()\n\nBut this all depends on how your data can look like. So you might need to adjust it. What it does is basically creating group via () and checking for what's allowed in them.\n",
"[edit: this is implementing what was suggested by Osama above]\nThis will create L based on the @ variables from the beginning of the string, and then once a non @ var is found, just grab the rest of the string.\nt = '@one @two @three some text afterward with @ symbols@ meow@meow'\n\nwords = t.split(' ') # split into list of words based on spaces\nL = []\ns = ''\nfor i in range(len(words)): # go through each word\n word = words[i]\n if word[0] == '@': # grab @'s from beginning of string\n L.append(word[1:])\n continue\n s = ' '.join(words[i:]) # put spaces back in\n break # you can ignore the rest of the words\n\nYou can refactor this to be less code, but I'm trying to make what is going on obvious.\n",
"Here's just another variation that uses split() and no regexpes:\nt='@abc @def My email is red@hjk.com'\ntags = []\nwords = iter(t.split())\n\n# iterate over words until first non-tag word\nfor w in words:\n if not w.startswith(\"@\"):\n # join this word and all the following\n s = w + \" \" + (\" \".join(words))\n break\n tags.append(w[1:])\nelse:\n s = \"\" # handle string with only tags\n\nprint tags, s\n\nHere's a shorter but perhaps a bit cryptic version that uses a regexp to find the first space followed by a non-@ character:\nimport re\nt = '@abc @def My email is red@hjk.com @extra bye'\nm = re.search(r\"\\s([^@].*)$\", t)\ntags = [tag[1:] for tag in t[:m.start()].split()]\ns = m.group(1)\nprint tags, s # ['abc', 'def'] My email is red@hjk.com @extra bye\n\nThis doesn't work properly if there are no tags or no text. The format is underspecified. You'll need to provide more test cases to validate.\n"
] | [
13,
7,
5,
3,
3,
3,
1
] | [] | [] | [
"format",
"python",
"regex",
"string"
] | stackoverflow_0000558105_format_python_regex_string.txt |
Q:
Import python functions into a .NET language?
I am a C# .NET programmer and am learning Python. I have downloaded IronPython, and know that it can call into .NET libraries.
I'm wondering whether there is a way to do the reverse, that is to call into some existing "classic" Python libraries in my C# code, maybe using .NET Interop.
I'd like to be able to access functions in libraries such as pygame.
A:
Ironpython 2.0 is CPython 2.5 compatible, so pure Python that uses <=2.5 APIs should work fine under Ironpython. I believe Ironpython code can then be compiled into a DLL.
For C-extensions like Pygame, you might want to take a look at Ironclad. It's a project to allow for C-extensions to be used within Ironpython. This may also give you the native code bridge you're looking for.
A:
You can use Python for .Net, which allows you to 'use CLR services and continue to use existing Python code and C-based extensions while maintaining native execution speeds for Python code.'
Further, 'A key goal for this project has been that Python for .NET should "work just the way you'd expect in Python", except for cases that are .NET specific (in which case the goal is to work "just the way you'd expect in C#"). In addition, with the IronPython project gaining traction, it is my goal that code written for IronPython run without modification under Python for .NET.'
Hope this helps
| Import python functions into a .NET language? | I am a C# .NET programmer and am learning Python. I have downloaded IronPython, and know that it can call into .NET libraries.
I'm wondering whether there is a way to do the reverse, that is to call into some existing "classic" Python libraries in my C# code, maybe using .NET Interop.
I'd like to be able to access functions in libraries such as pygame.
| [
"Ironpython 2.0 is CPython 2.5 compatible, so pure Python that uses <=2.5 APIs should work fine under Ironpython. I believe Ironpython code can then be compiled into a DLL.\nFor C-extensions like Pygame, you might want to take a look at Ironclad. It's a project to allow for C-extensions to be used within Ironpython. This may also give you the native code bridge you're looking for.\n",
"You can use Python for .Net, which allows you to 'use CLR services and continue to use existing Python code and C-based extensions while maintaining native execution speeds for Python code.' \nFurther, 'A key goal for this project has been that Python for .NET should \"work just the way you'd expect in Python\", except for cases that are .NET specific (in which case the goal is to work \"just the way you'd expect in C#\"). In addition, with the IronPython project gaining traction, it is my goal that code written for IronPython run without modification under Python for .NET.'\nHope this helps\n"
] | [
5,
3
] | [] | [] | [
"c#",
"ironpython",
"python"
] | stackoverflow_0000561626_c#_ironpython_python.txt |
Q:
Implementing chat in an application?
I'm making a game and I am using Python for the server side.
It would be fairly trivial to implement chat myself using Python - that's not my question.
My question is
I was just wondering if there were any pre-made chat servers or some kind of service that I would be able to implement inside of my game instead of rolling my own chat server?
Maybe like a different process I could run next to my game server process?
A:
I recommend using XMPP/Jabber. There are a lot of libraries for clients and servers in different languages. It's free/open source.
http://en.wikipedia.org/wiki/XMPP
A:
Maybe you could use IRC as a chat service. I know of irclib for python; it's more of a client, but in theory you could use it to proxy another IRC server from the game server.
It's a little hackish, I just thought I'd mention it.
A:
Honestly, I think it'd be best for you to roll your own and get it tightly integrated with your program. I know there's no sense in reinventing the wheel, but there are several advantages to doing so in your case: integration, learning, security, and simplicity.
| Implementing chat in an application? | I'm making a game and I am using Python for the server side.
It would be fairly trivial to implement chat myself using Python - that's not my question.
My question is
I was just wondering if there were any pre-made chat servers or some kind of service that I would be able to implement inside of my game instead of rolling my own chat server?
Maybe like a different process I could run next to my game server process?
| [
"I recommend using XMPP/Jabber. There are a lot of libraries for clients and servers in different languages. It's free/open source.\nhttp://en.wikipedia.org/wiki/XMPP\n",
"Maybe you could use IRC as a chat service, I know of irclib for python, its more of a client but in theory, you could use it to proxy another IRC server from the game server.\nIt's a little hackish, I just thought I'd mention it.\n",
"Honestly, I think it'd be best for you to roll your own and get it tightly integrated with your program. I know there's no sense in reinventing the wheel, but there are several advantages to doing so in your case: integration, learning, security, and simplicity.\n"
] | [
10,
1,
1
] | [] | [] | [
"chat",
"python"
] | stackoverflow_0000561301_chat_python.txt |
Q:
Anybody tried mosso CloudFiles with Google AppEngine?
I'm wondering if anybody has tried to integrate mosso CloudFiles with an application running on Google AppEngine (mosso does not provide a testing sandbox, so I can't check for myself without registering)? Looking at the code it seems that this will not work due to httplib and urllib limitations in the AppEngine environment, but maybe somebody has patched cloudfiles?
A:
It appears to implement a simple RESTful API, so there's no reason you couldn't use it from App Engine. Previously, you'd have had to write your own library to do so, using App Engine's urlfetch API, but with the release of SDK 1.1.9, you can now use urllib and httplib instead.
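So a plain urllib2 call against the REST endpoint should now work from a handler (a sketch; the URL is made up and real requests need the CloudFiles auth headers):
import urllib2

response = urllib2.urlopen('https://cloudfiles.example.com/v1/containers')
data = response.read()
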
| Anybody tried mosso CloudFiles with Google AppEngine? | I'm wondering if anybody has tried to integrate mosso CloudFiles with an application running on Google AppEngine (mosso does not provide a testing sandbox, so I can't check for myself without registering)? Looking at the code it seems that this will not work due to httplib and urllib limitations in the AppEngine environment, but maybe somebody has patched cloudfiles?
| [
"It appears to implement a simple RESTful API, so there's no reason you couldn't use it from App Engine. Previously, you'd have had to write your own library to do so, using App Engine's urlfetch API, but with the release of SDK 1.1.9, you can now use urllib and httplib instead.\n"
] | [
1
] | [] | [] | [
"cloud",
"google_app_engine",
"mosso",
"python",
"storage"
] | stackoverflow_0000564460_cloud_google_app_engine_mosso_python_storage.txt |
Q:
can my programs access more than 4GB of memory?
If I run Python on a 64-bit machine with a 64-bit operating system, will my programs be able to access the full range of memory? I.e. could I build a list with 10 billion entries, assuming I had enough RAM? If not, are there other programming languages that would allow this?
A:
You'll need to be sure that Python has been built as a 64 bit application. For example, on Win64 you'll be able to run the 32bit build of Python.exe but it won't get the benefits of the 64 bit environment as Windows will run it in a 32bit sandbox.
A:
The language python itself has no such restrictions, but perhaps your operating system or your python runtime (pypy, cpython, jython) could have such restrictions.
What combination of python runtime and OS do you want to use?
| can my programs access more than 4GB of memory? | if I run python on a 64bit machine with a 64bit operating system, will my programs be able to access the full range of memory? I.e. Could I build a list with 10billion entries, assuming I had enough RAM? If not, are there other programming languages that would allow this?
| [
"You'll need to be sure that Python has been built as a 64 bit application. For example, on Win64 you'll be able to run the 32bit build of Python.exe but it won't get the benefits of the 64 bit environment as Windows will run it in a 32bit sandbox.\n",
"The language python itself has no such restrictions, but perhaps your operating system or your python runtime (pypy, cpython, jython) could have such restrictions.\nWhat combination of python runtime and OS do you want to use?\n"
] | [
7,
3
] | [] | [] | [
"64_bit",
"python"
] | stackoverflow_0000565030_64_bit_python.txt |
Q:
Using jep.invoke() method
I need to call a function from a python script and pass in parameters into it. I have a test python script which I can call and run from java using Jepp - this then adds the person.
Eg Test.py
import Finding
from Finding import *
f = Finding()
f.addFinding("John", "Doe", 27)
Within my Finding class I have addFinding(firstname, lastName, age)
However, I wish to be able to do this from within java. Should I be using the jep.invoke() method? Does anyone have a hello world example of such a thing being done, or can someone forward me to some good examples?
Does anyone have any suggestions please?
Thanks in advance
A:
Easier way to run python code in java is to use jython.
EDIT: Found an article with examples on the jython website.
| Using jep.invoke() method | I need to call a function from a python script and pass in parameters into it. I have a test python script which I can call and run from java using Jepp - this then adds the person.
Eg Test.py
import Finding
from Finding import *
f = Finding()
f.addFinding("John", "Doe", 27)
Within my Finding class I have addFinding(firstname, lastName, age)
However, I wish to be able to do this from within java. Should I be using the jep.invoke() method? Does anyone have a hello world example of such a thing being done, or can someone forward me to some good examples?
Does anyone have any suggestions please?
Thanks in advance
| [
"Easier way to run python code in java is to use jython.\nEDIT: Found an article with examples in the jython website.\n"
] | [
0
] | [] | [] | [
"java",
"python"
] | stackoverflow_0000565060_java_python.txt |
Q:
Pydev and Pylons inside virtual environment, auto completion won’t work
I have Pydev installed and running without problems with Python 2.6. I installed Pylons 0.9.7 RC 4 into a virtual environment, then configured a new interpreter to point into the virtual environment, and this one is used for the pylons project. My problem is that code auto completion does not work for classes from the base library (ones that are installed with the base python installation), while it works without any problem for classes from the virtual environment.
TIA
A:
perhaps this or this would help
BTW: I guess that this is the correct behavior; this interpreter uses only the packages that are installed within the virtualenv (this is the whole intent and purpose of the virtualenv, isn't it?)
| Pydev and Pylons inside virtual environment, auto completion won’t work | I have Pydev installed and running without problem with Python 2.6. I installed Pylons 0.9.7 RC 4 into virtual environment, then configured new interpreter to pint into virtual environment and this one is used for pylons project. My problem is that code auto completion does not work for a classes from base library (one that are installed with base python installation), and it works without any problem with classes from virtual environment.
TIA
| [
"perhaps this or this would help\nBTW: I guess that this is the correct behavior, this interpreter uses only the packages that are installed withing the virtualenv (this is the whole intent and purpose of the virtualenv isn't it?)\n"
] | [
4
] | [] | [] | [
"pydev",
"pylons",
"python",
"virtualenv"
] | stackoverflow_0000540538_pydev_pylons_python_virtualenv.txt |
Q:
Setting values to the output of a formset in Django
This question is somewhat linked to a question I asked previously:
Generating and submitting a dynamic number of objects in a form with Django
I'm wondering, if I've got separate default values for each form within a formset, am I able to pre-populate the fields? For instance, a form requiring extra customer information to be pre-populated with the users' names? This comes up in cases like adding an email field to an already existing table and updating many records at once.
Does Django provide an easy way to do this?
A:
Pass in a list of dicts which contain the default values you want to set for each form:
http://docs.djangoproject.com/en/dev/topics/forms/formsets/#using-initial-data-with-a-formset
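For illustration, a minimal sketch of that pattern (the form and its fields are made up; only the initial= usage matters):
from django import forms
from django.forms.formsets import formset_factory

class CustomerForm(forms.Form):
    name = forms.CharField()
    email = forms.EmailField(required=False)

CustomerFormSet = formset_factory(CustomerForm, extra=0)

# one dict per form; keys must match the form's field names
initial = [{'name': 'John Doe'}, {'name': 'Jane Roe'}]
formset = CustomerFormSet(initial=initial)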
| Setting values to the output of a formset in Django | This question is somewhat linked to a question I asked previously:
Generating and submitting a dynamic number of objects in a form with Django
I'm wondering, if I've got separate default values for each form within a formset, am I able to pre-populate the fields? For instance, a form requiring extra customer information to be pre-populated with the users names? In cases like adding an email field to an already existing table, and updating many of them at once.
Does Django provide an easy way to do this?
| [
"Pass in a list of dicts which contain the default values you want to set for each form:\nhttp://docs.djangoproject.com/en/dev/topics/forms/formsets/#using-initial-data-with-a-formset\n"
] | [
1
] | [] | [] | [
"django",
"django_forms",
"formset",
"python"
] | stackoverflow_0000565034_django_django_forms_formset_python.txt |
Q:
python code for django view
MODEL:
class Pathology(models.Model):
pathology = models.CharField(max_length=100)
class Publication(models.Model):
pubtitle = models.TextField()
class Pathpubcombo(models.Model):
pathology = models.ForeignKey(Pathology)
publication = models.ForeignKey(Publication)
List of pathologies sent to the HTML template as a drop-down menu
VIEW:
def search(request):
pathology_list = Pathology.objects.select_related().order_by('pathology')
The user selects one pathology name from the drop-down menu, and the id is retrieved by
VIEW:
def pathology(request):
pathology_id = request.POST['pathology_id']
p = get_object_or_404(Pathology, pk=pathology_id)
Where I'm stuck. I need the python/django syntax to write the following:
The pathology_id must now retrieve the publication_id from the table Pathpubcombo (the intermediary many-to-many table). Once the publication_id is retrieved, it must be used to retrieve all the attributes from the publication table, and those attributes are sent to another html template for display to the user.
A:
you should be using many-to-many relations as described here:
http://www.djangoproject.com/documentation/models/many_to_many/
Like:
class Pathology(models.Model):
pathology = models.CharField(max_length=100)
    publications = models.ManyToManyField('Publication')  # string reference, since Publication is defined below
class Publication(models.Model):
pubtitle = models.TextField()
Then
from django.shortcuts import get_object_or_404, render_to_response
from django.template import RequestContext

def pathology(request):
pathology_id = request.POST['pathology_id']
p = get_object_or_404(Pathology, pk=pathology_id)
publications = p.publications.all()
return render_to_response('my_template.html',
{'publications':publications},
context_instance=RequestContext(request))
Hope this works, haven't tested it, but you get the idea.
edit:
You can also use select_related() if there is no possibility to rename tables and use django's buildin support.
http://docs.djangoproject.com/en/dev/ref/models/querysets/#id4
| python code for django view | MODEL:
class Pathology(models.Model):
pathology = models.CharField(max_length=100)
class Publication(models.Model):
pubtitle = models.TextField()
class Pathpubcombo(models.Model):
pathology = models.ForeignKey(Pathology)
publication = models.ForeignKey(Publication)
List of pathology sent to HTML template as drop down menu
VIEW:
def search(request):
pathology_list = Pathology.objects.select_related().order_by('pathology')
User selects one pathology name from drop down menu and id retrieved by
VIEW:
def pathology(request):
pathology_id = request.POST['pathology_id']
p = get_object_or_404(Pathology, pk=pathology_id)
Where I'm stuck. I need the python/django syntax to write the following:
The pathology_id must now retrieve the publication_id from the table Pathpubcombo (the intermediary manytomany table). Once the publication_id is retrieved then it must be used to retrieve all the attributes from the publication table and those attributes are sent to another html template for display to the user.
| [
"you should be using many-to-many relations as described here:\nhttp://www.djangoproject.com/documentation/models/many_to_many/\nLike:\nclass Pathology(models.Model):\n pathology = models.CharField(max_length=100)\n publications = models.ManyToManyField(Publication)\n\nclass Publication(models.Model):\n pubtitle = models.TextField()\n\nThen\ndef pathology(request):\n pathology_id = request.POST['pathology_id'] \n p = get_object_or_404(Pathology, pk=pathology_id)\n publications = p.publications.all()\n return render_to_response('my_template.html',\n {'publications':publications},\n context_instance=RequestContext(request))\n\nHope this works, haven't tested it, but you get the idea.\nedit: \nYou can also use select_related() if there is no possibility to rename tables and use django's buildin support.\nhttp://docs.djangoproject.com/en/dev/ref/models/querysets/#id4\n"
] | [
5
] | [] | [] | [
"django",
"many_to_many",
"python",
"syntax"
] | stackoverflow_0000566083_django_many_to_many_python_syntax.txt |
Q:
How can I make this one-liner work in DOS?
python -c "for x in range(1,10) print x"
I enjoy python one liners with -c, but it is limited when indentation is needed.
Any ideas?
A:
python -c "for x in range(1,10): print x"
Just add the colon.
To address the question in the comments:
How can I make this work though? python -c "import calendar;print calendar.prcal(2009);for x in range(1,10): print x"
python -c "for x in range(1,10): x==1 and __import__('calendar').prcal(2009); print x;"
As you can see it's pretty gross. We can't import before the loop. To get around this we check if x is at the first iteration in the loop, if so we do the import.
More examples here.
A:
Not a python script, but might help:
for /L %i in (1, 1, 10) do echo %i
A:
python -c "for x in range(1,10): print x"
Remember the ":" !!
A:
Don't you just want this?
python -c "for x in range(1,10): print x"
A:
Here's a solution that doesn't require putting a statement after the colon, which is not considered good style.
python2 -c "print '\n'.join([str(x) for x in range(1,10)])"
What's more pythonic than a list comprehension!
A:
python -c 'print "\n".join(map(str, range(1,10)))'
but what's wrong in a "real" python script? (you know, a foo.py launched via "python foo.py")
If you really like one-liners, I suggest perl :)
| How can I make this one-liner work in DOS? | python -c "for x in range(1,10) print x"
I enjoy python one liners with -c, but it is limited when indentation is needed.
Any ideas?
| [
"python -c \"for x in range(1,10): print x\"\n\nJust add the colon.\nTo address the question in the comments:\n\nHow can I make this work though? python -c \"import calendar;print calendar.prcal(2009);for x in range(1,10): print x\"\n\npython -c \"for x in range(1,10): x==1 and __import__('calendar').prcal(2009); print x;\"\n\nAs you can see it's pretty gross. We can't import before the loop. To get around this we check if x is at the first iteration in the loop, if so we do the import.\nMore examples here.\n",
"Not a python script, but might help:\nfor /L %i in (1, 1, 10) do echo %i\n\n",
"python -c \"for x in range(1,10): print x\"\n\nRemember the \":\" !!\n",
"Don't you just want this?\npython -c “for x in range(1,10): print x”\n",
"Here's a solution that doesn't require putting a statement after the colon, which is not considered very highly.\npython2 -c \"print '\\n'.join([str(x) for x in range(1,10)])\"\n\nWhat's more pythonic than a list comprehension!\n",
"python -c 'print \"\\n\".join(map(str, range(1,10)))'\n\nbut what's wrong in a \"real\" python script? (you know, a foo.py launched via \"python foo.py\")\nIf you really like one-liners, I suggest perl :)\n"
] | [
12,
3,
3,
1,
1,
0
] | [] | [] | [
"command_line",
"python"
] | stackoverflow_0000566559_command_line_python.txt |
Q:
Building a "complete" number range w/out overlaps
I need to build a full "number range" set given a series of numbers. I start with a list such as :
ID START
* 0
a 4
b 70
c 700
d 701
e 85
where "def" is the default range & should "fill-in" the gaps
"overlaps" are value (70, 700, 701) in starting data
And need the following result:
ID START END
* 0 - 39
a 4 - 49
* 5 - 69
c 700 - 7009
d 701 - 7019
b 702 - 709
* 71 - 849
e 85 - 859
* 86 - 9
What I am trying to figure out is if there is some sort of algorithm out there or design pattern to tackle this. I have some ideas but I thought I'd run it by the "experts" first. I am using Python.
Any ideas / direction would be greatly appreciated. Some initial ideas I have:
Build a "range" list w/ the start & end values padded to the full length. So default would be 0000 to 9999
Build a "splits" list that is built on the fly
Loop through "range" list comparing each value to the values in the splits list.
In the event that an overlap is found, remove the value in the splits list and add the new range(s).
A:
import itertools
ranges = {
'4' : 'a',
'70' : 'b',
'700': 'c',
'701': 'd',
'85' : 'e',
'87' : 'a',
}
def id_for_value(value):
possible = '*'
for idvalue, id in sorted(ranges.iteritems()):
if value.startswith(idvalue):
possible = id
elif idvalue > value:
break
return possible
That is enough to know the id of a certain value. Testing:
assert id_for_value('10') == '*'
assert id_for_value('499') == 'a'
assert id_for_value('703') == 'b'
assert id_for_value('7007') == 'c'
assert id_for_value('7017') == 'd'
assert id_for_value('76') == id_for_value('83') == '*'
assert id_for_value('857') == 'e'
assert id_for_value('8716') == 'a'
If you really want the range, you can use itertools.groupby to calculate it:
def firstlast(iterator):
""" Returns the first and last value of an iterator"""
first = last = iterator.next()
for value in iterator:
last = value
return first, last
maxlen = max(len(x) for x in ranges) + 1
test_range = ('%0*d' % (maxlen, i) for i in xrange(10 ** maxlen))
result = dict((firstlast(gr), id)
for id, gr in itertools.groupby(test_range, key=id_for_value))
Gives:
{('0000', '3999'): '*',
('4000', '4999'): 'a',
('5000', '6999'): '*',
('7000', '7009'): 'c',
('7010', '7019'): 'd',
('7020', '7099'): 'b',
('7100', '8499'): '*',
('8500', '8599'): 'e',
('8600', '8699'): '*',
('8700', '8799'): 'a',
('8800', '9999'): '*'}
| Building a "complete" number range w/out overlaps | I need to build a full "number range" set given a series of numbers. I start with a list such as :
ID START
* 0
a 4
b 70
c 700
d 701
e 85
where "def" is the default range & should "fill-in" the gaps
"overlaps" are value (70, 700, 701) in starting data
And need the following result:
ID START END
* 0 - 39
a 4 - 49
* 5 - 69
c 700 - 7009
d 701 - 7019
b 702 - 709
* 71 - 849
e 85 - 859
* 86 - 9
What I am trying to figure out is if there is some sort of algorithm out there or design pattern to tackle this. I have some ideas but I thought I'd run it by the "experts" first. I am using Python.
Any ideas / direction would be greatly appreciated. Some initial ideas I have:
Build a "range" list w/ the start & end values padded to the full length. So default would be 0000 to 9999
Build a "splits" list that is built on the fly
Loop through "range" list comparing each value to the values in the splits list.
In the event that an overlap is found, remove the value in the splits list and add the new range(s).
| [
"import operator\n\nranges = {\n '4' : 'a',\n '70' : 'b',\n '700': 'c',\n '701': 'd',\n '85' : 'e',\n '87' : 'a',\n}\n\ndef id_for_value(value):\n possible = '*'\n for idvalue, id in sorted(ranges.iteritems()):\n if value.startswith(idvalue):\n possible = id\n elif idvalue > value:\n break\n return possible\n\nThat is enough to know the id of a certain value. Testing:\nassert id_for_value('10') == '*'\nassert id_for_value('499') == 'a'\nassert id_for_value('703') == 'b'\nassert id_for_value('7007') == 'c'\nassert id_for_value('7017') == 'd'\nassert id_for_value('76') == id_for_value('83') == '*'\nassert id_for_value('857') == 'e'\nassert id_for_value('8716') == 'a'\n\nIf you really want the range, you can use itertools.groupby to calculate it:\ndef firstlast(iterator):\n \"\"\" Returns the first and last value of an iterator\"\"\"\n first = last = iterator.next()\n for value in iterator:\n last = value\n return first, last\n\nmaxlen = max(len(x) for x in ranges) + 1\ntest_range = ('%0*d' % (maxlen, i) for i in xrange(10 ** maxlen))\nresult = dict((firstlast(gr), id) \n for id, gr in itertools.groupby(test_range, key=id_for_value))\n\nGives:\n{('0000', '3999'): '*',\n ('4000', '4999'): 'a',\n ('5000', '6999'): '*',\n ('7000', '7009'): 'c',\n ('7010', '7019'): 'd',\n ('7020', '7099'): 'b',\n ('7100', '8499'): '*',\n ('8500', '8599'): 'e',\n ('8600', '8699'): '*',\n ('8700', '8799'): 'a',\n ('8800', '9999'): '*'}\n\n"
] | [
0
] | [] | [] | [
"numbers",
"overlap",
"python",
"range"
] | stackoverflow_0000566574_numbers_overlap_python_range.txt |
Q:
How can I anonymise XML data for selected tags?
My question is as follows:
I have to read a big XML file, 50 MB, and anonymise some tags/fields that relate to private information, like name, surname, address, email, phone number, etc...
I know exactly which tags in XML are to be anonymised.
s|<a>alpha</a>|MD5ed(alpha)|e;
s|<h>beta</h>|MD5ed(beta)|e;
where alpha and beta refer to any characters within, which will also be hashed, using probably an algorithm like MD5.
I will only convert the tag value, not the tags themselves.
I hope I am clear enough about my problem. How do I achieve this?
A:
You have to do something like the following in Python.
import xml.etree.ElementTree as xml # or lxml or whatever
import hashlib
theDoc= xml.parse( "sample.xml" )
for alphaTag in theDoc.findall( "xpath/to/tag" ):
print alphaTag, alphaTag.text
alphaTag.text = hashlib.md5(alphaTag.text).hexdigest()
xml.dump(theDoc)
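To persist the result instead of dumping it to stdout, the tree can be written back out (a small follow-up on the same theDoc object; the output file name is hypothetical):
theDoc.write("anonymised.xml")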
A:
Bottom line: don't parse XML using regex.
Use your language's DOM parsing libraries instead, and if you know the elements you need to anonymize, grab them using XPath and hash their contents by setting their innerText/innerHTML properties (or whatever your language calls them).
A:
Using regexps is indeed dangerous unless you know exactly the format of the file, it's easy to parse with regexps, and you are sure that it will not change in the future.
Otherwise you could indeed use XML::Twig,as below. An alternative would be to use XML::LibXML, although the file might be a bit big to load it entirely in memory (then again, maybe not, memory is cheap these days) so you might have to use the pull mode, which I don't know much about.
Compact XML::Twig code:
#!/usr/bin/perl
use strict;
use warnings;
use XML::Twig;
use Digest::MD5 'md5_base64';
my @tags_to_anonymize= qw( name surname address email phone);
# the handler for each element ($_) sets its content with the md5 and then flushes
my %handlers= map { $_ => sub { $_->set_text( md5_base64( $_->text))->flush } } @tags_to_anonymize;
XML::Twig->new( twig_roots => \%handlers, twig_print_outside_roots => 1)
->parsefile( "my_big_file.xml")
->flush;
A:
As Welbog said, don't try to parse XML with a regex. You'll regret it eventually.
Probably the easiest way to do this is using XML::Twig. It can process XML in chunks, which lets you handle very large files.
Another possibility would be using SAX, especially with XML::SAX::Machines. I've never really used that myself, but it's a stream-oriented system, so it should be able to handle large files. The downside is that you'll probably have to write more code to collect the text inside each tag that you care about (where XML::Twig will collect that text for you).
| How can I anonymise XML data for selected tags? | My question is as follows:
I have to read a big XML file, 50 MB; and anonymise some tags/fields that relate to private issues, like name surname address, email, phone number, etc...
I know exactly which tags in XML are to be anonymised.
s|<a>alpha</a>|MD5ed(alpha)|e;
s|<h>beta</h>|MD5ed(beta)|e;
where alpha and beta refer to any characters within, which will also be hashed, using probably an algorithm like MD5.
I will only convert the tag value, not the tags themselves.
I hope, I am clear enough about my problem. How do I achieve this?
| [
"You have to do something like the following in Python.\nimport xml.etree.ElementTree as xml # or lxml or whatever\nimport hashlib\ntheDoc= xml.parse( \"sample.xml\" )\nfor alphaTag in theDoc.findall( \"xpath/to/tag\" ):\n print alphaTag, alphaTag.text\n alphaTag.text = hashlib.md5(alphaTag.text).hexdigest()\nxml.dump(theDoc)\n\n",
"Bottom line: don't parse XML using regex.\nUse your language's DOM parsing libraries instead, and if you know the elements you need to anonymize, grab them using XPath and hash their contents by setting their innerText/innerHTML properties (or whatever your language calls them).\n",
"Using regexps is indeed dangerous, unless you know exactly the format of the file, it's easy to parse with regexps, and you are sure that it will not change in the future.\nOtherwise you could indeed use XML::Twig,as below. An alternative would be to use XML::LibXML, although the file might be a bit big to load it entirely in memory (then again, maybe not, memory is cheap these days) so you might have to use the pull mode, which I don't know much about.\nCompact XML::Twig code:\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\n\nuse XML::Twig;\nuse Digest::MD5 'md5_base64';\n\nmy @tags_to_anonymize= qw( name surname address email phone);\n\n# the handler for each element ($_) sets its content with the md5 and then flushes\nmy %handlers= map { $_ => sub { $_->set_text( md5_base64( $_->text))->flush } } @tags_to_anonymize;\n\nXML::Twig->new( twig_roots => \\%handlers, twig_print_outside_roots => 1)\n ->parsefile( \"my_big_file.xml\")\n ->flush;\n\n",
"As Welbog said, don't try to parse XML with a regex. You'll regret it eventually.\nProbably the easiest way to do this is using XML::Twig. It can process XML in chunks, which lets you handle very large files.\nAnother possibility would be using SAX, especially with XML::SAX::Machines. I've never really used that myself, but it's a stream-oriented system, so it should be able to handle large files. The downside is that you'll probably have to write more code to collect the text inside each tag that you care about (where XML::Twig will collect that text for you).\n"
] | [
6,
4,
4,
3
] | [] | [] | [
"anonymize",
"perl",
"python",
"xml"
] | stackoverflow_0000565823_anonymize_perl_python_xml.txt |
Q:
How to stop Tkinter Frame from shrinking to fit its contents?
This is the code that's giving me trouble.
f = Frame(root, width=1000, bg="blue")
f.pack(fill=X, expand=True)
l = Label(f, text="hi", width=10, bg="red", fg="white")
l.pack()
If I comment out the lines with the Label, the Frame displays with the right width. However, adding the Label seems to shrink the Frame down to the Label's size. Is there a way to prevent that from happening?
A:
By default, both pack and grid shrink or grow a widget to fit its contents, which is what you want 99.9% of the time. The term that describes this feature is geometry propagation. There is a command to turn geometry propagation on or off when using pack (pack_propagate) and grid (grid_propagate).
Since you are using pack the syntax would be:
f.pack_propagate(0)
or maybe root.pack_propagate(0), depending on which widgets you actually want to affect. However, because you haven't given the frame height, its default height is one pixel so you still may not see the interior widgets. To get the full effect of what you want, you need to give the containing frame both a width and a height.
That being said, the vast majority of the time you should let Tkinter compute the size. When you turn geometry propagation off your GUI won't respond well to changes in resolution, changes in fonts, etc. Tkinter's geometry managers (pack, place and grid) are remarkably powerful. You should learn to take advantage of that power by using the right tool for the job.
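Applied to the code from the question, a minimal sketch (the height of 100 is an arbitrary choice):
from Tkinter import *

root = Tk()

f = Frame(root, width=1000, height=100, bg="blue")
f.pack_propagate(0)  # don't let the children dictate the frame's size
f.pack(fill=X, expand=True)

l = Label(f, text="hi", width=10, bg="red", fg="white")
l.pack()

root.mainloop()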
| How to stop Tkinter Frame from shrinking to fit its contents? | This is the code that's giving me trouble.
f = Frame(root, width=1000, bg="blue")
f.pack(fill=X, expand=True)
l = Label(f, text="hi", width=10, bg="red", fg="white")
l.pack()
If I comment out the lines with the Label, the Frame displays with the right width. However, adding the Label seems to shrink the Frame down to the Label's size. Is there a way to prevent that from happening?
| [
"By default, both pack and grid shrink or grow a widget to fit its contents, which is what you want 99.9% of the time. The term that describes this feature is geometry propagation. There is a command to turn geometry propagation on or off when using pack (pack_propagate) and grid (grid_propagate).\nSince you are using pack the syntax would be:\nf.pack_propagate(0)\n\nor maybe root.pack_propagate(0), depending on which widgets you actually want to affect. However, because you haven't given the frame height, its default height is one pixel so you still may not see the interior widgets. To get the full effect of what you want, you need to give the containing frame both a width and a height.\nThat being said, the vast majority of the time you should let Tkinter compute the size. When you turn geometry propagation off your GUI won't respond well to changes in resolution, changes in fonts, etc. Tkinter's geometry managers (pack, place and grid) are remarkably powerful. You should learn to take advantage of that power by using the right tool for the job.\n"
] | [
76
] | [] | [] | [
"frame",
"label",
"python",
"tkinter"
] | stackoverflow_0000563827_frame_label_python_tkinter.txt |
Q:
PyDev debugger different from command line django runserver command
I am trying to debug a problem with a django view. When I run it on the command line,
I don't get any of these messages. However, when I run it in the PyDev debugger I get these error messages. I am running with the --noreload option.
What do these error messages mean?
Why do I not get them when I run it on the command line?
/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py:697: RuntimeWarning: tp_compare didn't return -1 or -2 for exception
return _active[_get_ident()]
Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x786c10> ignored
Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x7904e0> ignored
A:
I seem to recall having similar issues debugging in PyDev related to the auto-reload mechanism of Django's test server. You can turn reloading off by passing --noreload to your runserver command. From there you just have to train yourself to restart your test server after making a code change while debugging.
EDIT
It's been a while since I used PyDev together with Django, but I do recall there being some warning messages spit out to the console that didn't affect my ability to debug. There are quite a few message board posts related to that message in debugging other Python libraries, but I didn't find any that have a resolution.
I guess it is benign as long as you can ignore it and still debug your code. I don't think you need to be concerned that it's a problem with your application code, but something deep down in PyDev or the Python debugging facilities.
| PyDev debugger different from command line django runserver command | I am trying to debug a problem with a django view. When I run it on the command line.
I don't get any of these messages. However when I run the it in the PyDev debugger i get these error messages. I am running with the --noreload option.
What do these error messages mean?
Why do I not get them when I run it on the command line?
/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py:697: RuntimeWarning: tp_compare didn't return -1 or -2 for exception
return _active[_get_ident()]
Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x786c10> ignored
Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x7904e0> ignored
| [
"I seem to recall having similar issues debugging in PyDev related to the auto-reload mechanism of Django's test server. You can turn reloading off by passing --noreload to your runserver command. From there you just have to train yourself to restart your test server after making a code change while debugging.\nEDIT\nIt's been a while since I used PyDev together with Django, but I do recall there being some warning messages spit out to the console that didn't affect my ability to debug. There are quite a few message board posts related to that message in debugging other Python libraries, but I didn't find any that have a resolution. \nI guess it is benign as long as you can ignore it and still debug your code. I don't think you need to be concerned that it's a problem with your application code, but something deep down in PyDev or the Python debugging facilities.\n"
] | [
1
] | [] | [] | [
"django",
"eclipse",
"pydev",
"python"
] | stackoverflow_0000566819_django_eclipse_pydev_python.txt |
Q:
How can I process command line arguments in Python?
What would be an easy expression to process command line arguments if I'm expecting anything like 001 or 999 (let's limit expectations to 001...999 range for this time), and few other arguments passed, and would like to ignore any unexpected?
I understand if for example I need to find out if "debug" was passed among parameters it'll be something like that:
if 'debug' in argv[1:]:
print 'Will be running in debug mode.'
How to find out if 009 or 575 was passed?
All those are expected calls:
python script.py
python script.py 011
python script.py 256 debug
python script.py 391 xls
python script.py 999 debug pdf
At this point I don't care about calls like that:
python script.py 001 002 245 568
python script.py some unexpected argument
python script.py 0001
python script.py 02
...first one - because of more than one "numeric" argument; second - because of... well, unexpected arguments; third and fourth - because of non-3-digits arguments.
A:
As others answered, optparse is the best option, but if you just want quick code try something like this:
import sys, re
first_re = re.compile(r'^\d{3}$')
if len(sys.argv) > 1:
if first_re.match(sys.argv[1]):
print "Primary argument is : ", sys.argv[1]
else:
raise ValueError("First argument should be ...")
args = sys.argv[2:]
else:
args = ()
# ... anywhere in code ...
if 'debug' in args:
print 'debug flag'
if 'xls' in args:
print 'xls flag'
EDIT: Here's an optparse example because so many people are answering optparse without really explaining why, or explaining what you have to change to make it work.
The primary reason to use optparse is it gives you more flexibility for expansion later, and gives you more flexibility on the command line. In other words, your options can appear in any order and usage messages are generated automatically. However to make it work with optparse you need to change your specifications to put '-' or '--' in front of the optional arguments and you need to allow all the arguments to be in any order.
So here's an example using optparse:
import sys, re, optparse
first_re = re.compile(r'^\d{3}$')
parser = optparse.OptionParser()
parser.set_defaults(debug=False,xls=False)
parser.add_option('--debug', action='store_true', dest='debug')
parser.add_option('--xls', action='store_true', dest='xls')
(options, args) = parser.parse_args()
if len(args) == 1:
if first_re.match(args[0]):
print "Primary argument is : ", args[0]
else:
raise ValueError("First argument should be ...")
elif len(args) > 1:
raise ValueError("Too many command line arguments")
if options.debug:
print 'debug flag'
if options.xls:
print 'xls flag'
The differences here with optparse and your spec is that now you can have command lines like:
python script.py --debug --xls 001
and you can easily add new options by calling parser.add_option()
A:
Have a look at the optparse module. Dealing with sys.argv yourself is fine for really simple stuff, but it gets out of hand quickly.
Note that you may find optparse easier to use if you can change your argument format a little; e.g. replace debug with --debug and xls with --xls or --output=xls.
A:
optparse is your best friend for parsing the command line. Also look into argparse; it's not in the standard library, though.
A:
If you want to implement actual command line switches, give getopt a look. It's incredibly simple to use, too.
A:
Van Gale is largely correct in using the regular expression against the argument. However, it is NOT absolutely necessary to make everything an option when using optparse, which splits sys.argv into options and arguments, based on whether a "-" or "--" is in front or not. Some example code to go through just the arguments:
import sys
import optparse
claParser = optparse.OptionParser()
# illustrative option: the original add_option call was truncated here
claParser.add_option("-d", "--debug", action="store_true", dest="debug")
(opts, args) = claParser.parse_args()
if (len(args) >= 1):
print "Arguments:"
for arg in args:
print " " + arg
else:
print "No arguments"
sys.exit(0)
Yes, the args array is parsed much the same way as sys.argv would be, but the ability to easily add options if needed has been added. For more about optparse, check out the relevant Python doc.
| How can I process command line arguments in Python? | What would be an easy expression to process command line arguments if I'm expecting anything like 001 or 999 (let's limit expectations to 001...999 range for this time), and few other arguments passed, and would like to ignore any unexpected?
I understand if for example I need to find out if "debug" was passed among parameters it'll be something like that:
if 'debug' in argv[1:]:
print 'Will be running in debug mode.'
How to find out if 009 or 575 was passed?
All those are expected calls:
python script.py
python script.py 011
python script.py 256 debug
python script.py 391 xls
python script.py 999 debug pdf
At this point I don't care about calls like that:
python script.py 001 002 245 568
python script.py some unexpected argument
python script.py 0001
python script.py 02
...first one - because of more than one "numeric" argument; second - because of... well, unexpected arguments; third and fourth - because of non-3-digits arguments.
| [
"As others answered, optparse is the best option, but if you just want quick code try something like this:\nimport sys, re\n\nfirst_re = re.compile(r'^\\d{3}$')\n\nif len(sys.argv) > 1:\n\n if first_re.match(sys.argv[1]):\n print \"Primary argument is : \", sys.argv[1]\n else:\n raise ValueError(\"First argument should be ...\")\n\n args = sys.argv[2:]\n\nelse:\n\n args = ()\n\n# ... anywhere in code ...\n\nif 'debug' in args:\n print 'debug flag'\n\nif 'xls' in args:\n print 'xls flag'\n\nEDIT: Here's an optparse example because so many people are answering optparse without really explaining why, or explaining what you have to change to make it work.\nThe primary reason to use optparse is it gives you more flexibility for expansion later, and gives you more flexibility on the command line. In other words, your options can appear in any order and usage messages are generated automatically. However to make it work with optparse you need to change your specifications to put '-' or '--' in front of the optional arguments and you need to allow all the arguments to be in any order.\nSo here's an example using optparse:\nimport sys, re, optparse\n\nfirst_re = re.compile(r'^\\d{3}$')\n\nparser = optparse.OptionParser()\nparser.set_defaults(debug=False,xls=False)\nparser.add_option('--debug', action='store_true', dest='debug')\nparser.add_option('--xls', action='store_true', dest='xls')\n(options, args) = parser.parse_args()\n\nif len(args) == 1:\n if first_re.match(args[0]):\n print \"Primary argument is : \", args[0]\n else:\n raise ValueError(\"First argument should be ...\")\nelif len(args) > 1:\n raise ValueError(\"Too many command line arguments\")\n\nif options.debug:\n print 'debug flag'\n\nif options.xls:\n print 'xls flag'\n\nThe differences here with optparse and your spec is that now you can have command lines like:\npython script.py --debug --xls 001\n\nand you can easily add new options by calling parser.add_option()\n",
"Have a look at the optparse module. Dealing with sys.argv yourself is fine for really simple stuff, but it gets out of hand quickly.\nNote that you may find optparse easier to use if you can change your argument format a little; e.g. replace debug with --debug and xls with --xls or --output=xls.\n",
"optparse is your best friend for parsing the command line. Also look into argparse; it's not in the standard library, though.\n",
"If you want to implement actual command line switches, give getopt a look. It's incredibly simple to use, too.\n",
"Van Gale is largely correct in using the regular expression against the argument. However, it is NOT absolutely necessary to make everything an option when using optparse, which splits sys.argv into options and arguments, based on whether a \"-\" or \"--\" is in front or not. Some example code to go through just the arguments:\nimport sys\nimport optparse\n\nclaParser = optparse.OptionParser()\nclaParser.add_option(\n(opts, args) = claParser.parse_args()\nif (len(args) >= 1):\n print \"Arguments:\"\n for arg in args:\n print \" \" + arg\nelse:\n print \"No arguments\"\nsys.exit(0)\n\nYes, the args array is parsed much the same way as sys.argv would be, but the ability to easily add options if needed has been added. For more about optparse, check out the relevant Python doc.\n"
] | [
32,
16,
2,
2,
0
] | [] | [] | [
"command_line",
"command_line_arguments",
"python"
] | stackoverflow_0000567879_command_line_command_line_arguments_python.txt |
Q:
How to programmatically insert comments into a Microsoft Word document?
Looking for a way to programmatically insert comments (using the comments feature in Word) into a specific location in a MS Word document. I would prefer an approach that is usable across recent versions of MS Word standard formats and implementable in a non-Windows environment (ideally using Python and/or Common Lisp). I have been looking at the OpenXML SDK but can't seem to find a solution there.
A:
Here is what I did:
Create a simple document with word (i.e. a very small one)
Add a comment in Word
Save as docx.
Use the zip module of python to access the archive (docx files are ZIP archives).
Dump the content of the entry "word/document.xml" in the archive. This is the XML of the document itself.
This should give you an idea what you need to do. After that, you can use one of the XML libraries in Python to parse the document, change it and add it back to a new ZIP archive with the extension ".docx". Simply copy every other entry from the original ZIP and you have a new, valid Word document.
There is also a library which might help: openxmllib
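For example, a minimal sketch of rewriting word/document.xml inside a copy of the archive (the file names are hypothetical, and note that a real Word comment also needs entries in word/comments.xml and the relationship files):
import zipfile

zin = zipfile.ZipFile('original.docx', 'r')
xml = zin.read('word/document.xml')  # the document body as raw XML

# ... modify `xml` with your XML library of choice here ...

zout = zipfile.ZipFile('new.docx', 'w', zipfile.ZIP_DEFLATED)
for item in zin.infolist():
    if item.filename == 'word/document.xml':
        zout.writestr(item, xml)   # write the modified document body
    else:
        zout.writestr(item, zin.read(item.filename))  # copy everything else
zout.close()
zin.close()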
A:
If this is server side (non-interactive) use of the Word application itself is unsupported (but I see this is not applicable). So either take that route or use the OpenXML SDK to learn the markup needed to create a comment. With that knowledge it is all about manipulating data.
The .docx format is a ZIP of XML files with a defined structure, so mostly once you get into the ZIP and get the right XML file it becomes a matter of modifying an XML DOM.
The best route might be to take a docx, copy it, add a comment (using Word) to one, and compare. A diff will show you the kind of elements/structures you need to be looking up in the SDK (or ISO/Ecma standard).
| How to programmatically insert comments into a Microsoft Word document? | Looking for a way to programmatically insert comments (using the comments feature in Word) into a specific location in a MS Word document. I would prefer an approach that is usable across recent versions of MS Word standard formats and implementable in a non-Windows environment (ideally using Python and/or Common Lisp). I have been looking at the OpenXML SDK but can't seem to find a solution there.
| [
"Here is what I did:\n\nCreate a simple document with word (i.e. a very small one)\nAdd a comment in Word\nSave as docx.\nUse the zip module of python to access the archive (docx files are ZIP archives).\nDump the content of the entry \"word/document.xml\" in the archive. This is the XML of the document itself.\n\nThis should give you an idea what you need to do. After that, you can use one of the XML libraries in Python to parse the document, change it and add it back to a new ZIP archive with the extension \".docx\". Simply copy every other entry from the original ZIP and you have a new, valid Word document.\nThere is also a library which might help: openxmllib\n",
"If this is server side (non-interactive) use of the Word application itself is unsupported (but I see this is not applicable). So either take that route or use the OpenXML SDK to learn the markup needed to create a comment. With that knowledge it is all about manipulating data.\nThe .docx format is a ZIP of XML files with a defines structure, so mostly once you get into the ZIP and get the right XML file it becomes a matter of modifying an XML DOM.\nThe best route might be to take a docx, copy it, add a comment (using Word) to one, and compare. A diff will show you the kind of elements/structures you need to be looking up in the SDK (or ISO/Ecma standard).\n"
] | [
7,
2
] | [] | [] | [
"common_lisp",
"ms_word",
"openxml",
"python"
] | stackoverflow_0000568972_common_lisp_ms_word_openxml_python.txt |
Q:
How to extract frequency information from an input audio stream (using PortAudio)?
I want to record sound (voice) using PortAudio (PyAudio) and output the corresponding sound wave on the screen. Hopeless as I am, I am unable to extract the frequency information from the audio stream so that I can draw it in Hz/time form.
Here's an example code snippet that records and plays recorded audio for five seconds, in case it helps any:
p = pyaudio.PyAudio()
chunk = 1024
seconds = 5
stream = p.open(format=pyaudio.paInt16,
channels=1,
rate=44100,
input=True,
output=True)
for i in range(0, 44100 / chunk * seconds):
data = stream.read(chunk)
stream.write(data, chunk)
I wish to extract the needed information from the above variable "data". (Or use some other high-level approach with PortAudio or another library with Python bindings.)
I'd be very grateful for any help! Even vaguely related tidbits of audio-analyzing wisdom are appreciated. :)
A:
What you want is probably the Fourier transform of the audio data. There are several packages that can calculate that for you; scipy and numpy are two of them. It is often named "Fast Fourier Transform" (FFT), but that is just the name of the algorithm.
Here is an example of its usage: https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py
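As a minimal sketch tied to the code in the question (assuming `data` is one chunk read from the stream):
import numpy

rate = 44100
samples = numpy.fromstring(data, dtype=numpy.int16)  # 16-bit mono samples

spectrum = numpy.fft.fft(samples)
freqs = numpy.fft.fftfreq(len(samples), d=1.0 / rate)  # bin frequencies in Hz

half = len(samples) // 2                 # keep the positive-frequency half
magnitudes = numpy.abs(spectrum[:half])  # magnitude per frequency bin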
A:
The Fourier Transform will not help you a lot if you want the analysis to be conducted in both the frequency and time domain. You might want to have a look at "Wavelet Transforms". There is a package called pywavelets...
http://www.pybytes.com/pywavelets/#discrete-wavelet-transform-dwt
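A minimal sketch with PyWavelets (assuming `samples` is a numpy array of audio samples):
import pywt

# single-level discrete wavelet transform:
# cA holds the approximation coefficients, cD the detail coefficients
cA, cD = pywt.dwt(samples, 'db1')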
| How to extract frequency information from an input audio stream (using PortAudio)? | I want to record sound (voice) using PortAudio (PyAudio) and output the corresponding sound wave on the screen. Hopeless as I am, I am unable to extract the frequency information from the audio stream so that I can draw it in Hz/time form.
Here's an example code snippet that records and plays recorded audio for five seconds, in case it helps any:
p = pyaudio.PyAudio()
chunk = 1024
seconds = 5
stream = p.open(format=pyaudio.paInt16,
channels=1,
rate=44100,
input=True,
output=True)
for i in range(0, 44100 / chunk * seconds):
data = stream.read(chunk)
stream.write(data, chunk)
I wish to extract the needed information from the above variable "data". (Or use some other high-level approach with PortAudio or another library with Python bindings.)
I'd be very grateful for any help! Even vaguely related tidbits of audio-analyzing wisdom are appreciated. :)
| [
"What you want is probably the Fourier transform of the audio data. There is several packages that can calculate that for you. scipy and numpy is two of them. It is often named \"Fast Fourier Transform\" (FFT), but that is just the name of the algorithm.\nHere is an example of it's usage: https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py\n",
"The Fourier Transform will not help you a lot if you want the analysis to be conducted in both the frequency and time domain. You might want to have a look at \"Wavelet Transforms\". There is a package called pywavelets...\nhttp://www.pybytes.com/pywavelets/#discrete-wavelet-transform-dwt\n"
] | [
4,
1
] | [] | [] | [
"frequency",
"portaudio",
"python",
"voice"
] | stackoverflow_0000259451_frequency_portaudio_python_voice.txt |
Q:
Logging output of external program with (wx)python
I'm writing a GUI for using the oracle exp/imp commands and starting sql-scripts through sqlplus. The subprocess class makes it easy to launch the commands, but I need some additional functionality. I want to get rid of the command prompt when using my wxPython GUI, but I still need a way to show the output of the exp/imp commands.
I already tried these two methods:
command = "exp userid=user/pwd@nsn file=dump.dmp"
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output = process.communicate()[0]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
process.wait()
output = process.stdout.read()
Through one of these methods (forgot which one) I really got the output of exp/imp, but only after the command finished, which is quite worthless to me, as I need frequent updates during these potentially long-running operations. And sqlplus caused even more problems, as sqlplus mostly wants some input when an error occurs. When this happens python waits for the process to finish but the user can't see the prompt, so you don't know how long to wait or what to do...
What I'd like to have is a wrapper that outputs everything I can see on the standard commandline. I want to log this to a file and show it inside a wxPython control.
I also tried the code from this page: http://code.activestate.com/recipes/440554/
but this can't read the output either.
The OutputWrapper from this answer doesn't work either: How can I capture all exceptions from a wxPython application?
Any help would be appreciated!
EDIT:
The subprocesses don't seem to flush their output. I already tried it with .readline().
My Tool has to run on windows and unix, so pexpect is no solution if there's no windows version. And using cx_oracle would be extreme overkill as I would have to rebuild the whole functionality of exp, imp and sqlplus.
A:
The solution is to use a list for your command
command = ["exp", "userid=user/pwd@nsn", "file=dump.dmp"]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
then you read process.stdout in a line-by-line basis:
line = process.stdout.readline()
that way you can update the GUI without waiting, provided the subprocess you are running (exp) flushes its output. It is possible that the output is buffered; then you won't see anything until the output buffer is full. If that is the case then you are probably out of luck.
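Putting that together, a minimal sketch that also merges stderr into the same pipe and logs each line (the log file name is arbitrary):
import subprocess

command = ["exp", "userid=user/pwd@nsn", "file=dump.dmp"]
process = subprocess.Popen(command,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)  # capture stderr too

logfile = open("export.log", "w")
while True:
    line = process.stdout.readline()
    if not line:  # empty string means the pipe was closed
        break
    logfile.write(line)
    # update the wx control here, e.g. via wx.CallAfter
logfile.close()
process.wait()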
A:
If you're on Linux, check out pexpect. It does exactly what you want.
If you need to work on Windows, maybe you should bite the bullet and use Python bindings to Oracle, such as cx_Oracle, instead of running CL stuff via subprocess.
A:
Are these solutions able to capture stderr as well? I see you have the stdout= option above. How do you make sure to get stderr as well? Another question: is there a way to use import logging / import logging.handlers to capture the command's stdout/stderr? It would be interesting to be able to use the logger with its built-in formatters, rotators, etc.
| Logging output of external program with (wx)python | I'm writing a GUI for using the oracle exp/imp commands and starting sql-scripts through sqlplus. The subprocess class makes it easy to launch the commands, but I need some additional functionality. I want to get rid of the command prompt when using my wxPython GUI, but I still need a way to show the output of the exp/imp commands.
I already tried these two methods:
command = "exp userid=user/pwd@nsn file=dump.dmp"
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output = process.communicate()[0]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
process.wait()
output = process.stdout.read()
Through one of these methods (forgot which one) I really got the output of exp/imp, but only after the command finishes, which is quite worthless to me, as I need a frequent update during these potentially long running operations. And sqlplus made even more problems, as sqlplus mostly wants some input when an error occurs. When this happens python waits for the process to finish but the user can't see the prompt, so you don't know how long to wait or what to do...
What I'd like to have is a wrapper that outputs everything I can see on the standard commandline. I want to log this to a file and show it inside a wxPython control.
I also tried the code from this page: http://code.activestate.com/recipes/440554/
but this can't read the output either.
The OutputWrapper from this answer doesn't work either: How can I capture all exceptions from a wxPython application?
Any help would be appreciated!
EDIT:
The subprocesses don't seem to flush their output. I already tried it with .readline().
My Tool has to run on windows and unix, so pexpect is no solution if there's no windows version. And using cx_oracle would be extreme overkill as I would have to rebuild the whole functionality of exp, imp and sqlplus.
| [
"The solution is to use a list for your command\ncommand = [\"exp\", \"userid=user/pwd@nsn\", \"file=dump.dmp\"]\nprocess = subprocess.Popen(command, stdout=subprocess.PIPE)\n\nthen you read process.stdout in a line-by-line basis:\nline = process.stdout.readline()\n\nthat way you can update the GUI without waiting. IF the subprocess you are running (exp) flushes output. It is possible that the output is buffered, then you won't see anything until the output buffer is full. If that is the case then you are probably out of luck.\n",
"If you're on Linux, check out pexpect. It does exactly what you want.\nIf you need to work on Windows, maybe you should bite the bullet and use Python bindings to Oracle, such as cx_Oracle, instead of running CL stuff via subprocess.\n",
"Are these solutions able to capture stderr as well? I see you have stdout= option above. How do you make sure to get stderr as well? Another question is is there a way to use import logging/import logging.handlers to capture command stdout/stderr. It would be interesting to be able to use the logger with its buildt in formatters/rotaters,etc. \n"
] | [
1,
1,
0
] | [
"Try this:\nimport subprocess\n\ncommand = \"ping google.com\"\n\nprocess = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)\noutput = process.stdout\nwhile 1:\n print output.readline(),\n\n"
] | [
-1
] | [
"oracle",
"python",
"sqlplus",
"wxpython"
] | stackoverflow_0000531708_oracle_python_sqlplus_wxpython.txt |
Q:
Getting Forms on Page in Python
I'm working on a web vulnerability scanner. I have completed 30% of the program, in that it can scan only HTTP GET methods. But I've hit a snag now: I have no idea how to make the program pentest the POST method.
I had the idea to make it extract the form data/names from all the pages on the website, but I have no idea how I should do that. Any ideas?
A:
Use BeautifulSoup for screen scraping.
For heavier scripting, use twill :
twill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through Web sites that use forms, cookies, and most standard Web features.
With twill, you can easily fill forms and POST them back to a server. Twill has a Python API. A form-filling example:
from twill.commands import go, showforms, formclear, fv, submit
go('http://issola.caltech.edu/~t/qwsgi/qwsgi-demo.cgi/')
go('./widgets')
showforms()
formclear('1')
fv("1", "name", "test")
fv("1", "password", "testpass")
fv("1", "confirm", "yes")
showforms()
submit('0')
A:
Are you asking how to use urllib2 to execute a POST method?
You might want to look at the examples.
After trying some of that, you might want to post code with a more specific question.
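For a plain form POST, a minimal sketch (the URL and field names are made up):
import urllib
import urllib2

data = urllib.urlencode({'username': 'test', 'password': 'secret'})
request = urllib2.Request('http://example.com/login', data)  # supplying data makes it a POST
response = urllib2.urlopen(request)
print response.read()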
A:
If you know how to collect the data/names from the form, you just need a way to deal with the HTTP POST method. I guess you will need a solution for sending multipart form-data.
You should look at the MultipartPostHandler:
http://odin.himinbi.org/MultipartPostHandler.py
And if you need to support unicode file names , see a fix at:
http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html
| Getting Forms on Page in Python | I'm working on a web vulnerability scanner. I have completed 30% of the program, in that it can scan only HTTP GET methods. But I've hit a snag now: I have no idea how I shall make the program pentest the POST method.
I had the idea to make it extract the form data/names from all the pages on the website, but I have no idea how I should do that. Any ideas?
| [
"Use BeautifulSoup for screen scraping.\nFor heavier scripting, use twill :\n\ntwill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through Web sites that use forms, cookies, and most standard Web features.\n\nWith twill, you can easily fill forms and POST them back to a server. Twill has a Python API. A from-filling example:\nfrom twill.commands import go, showforms, formclear, fv, submit\n\ngo('http://issola.caltech.edu/~t/qwsgi/qwsgi-demo.cgi/')\ngo('./widgets')\nshowforms()\n\nformclear('1')\nfv(\"1\", \"name\", \"test\")\nfv(\"1\", \"password\", \"testpass\")\nfv(\"1\", \"confirm\", \"yes\")\nshowforms()\n\nsubmit('0')\n\n",
"Are you asking how to use urllib2 to execute a POST method?\nYou might want to look at the examples.\nAfter trying some of that, you might want to post code with a more specific question.\n",
"If you know how to collect the data/names from the form, you just need a way to deal with http POST method. I guess you will need a solution for sending multipart form-data.\nYou should look at the MultipartPostHandler:\nhttp://odin.himinbi.org/MultipartPostHandler.py\nAnd if you need to support unicode file names , see a fix at:\nhttp://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html\n"
] | [
3,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000569477_python.txt |
Q:
Statistics with numpy
I am working on some plots and statistics for work and I am not sure how I can do some statistics using numpy: I have a list of prices and another one of basePrices, and I want to know how many prices are more than X percent above the basePrice, and how many are more than Y percent above the basePrice.
Is there a simple way to do that using numpy?
A:
Say you have
>>> prices = array([100, 200, 150, 145, 300])
>>> base_prices = array([90, 220, 100, 350, 350])
Then the number of prices that are more than 10% above the base price are
>>> sum(prices > 1.10 * base_prices)
2
A:
Just for amusement, here's a slightly different take on dF's answer:
>>> prices = array([100, 200, 150, 145, 300])
>>> base_prices = array([90, 220, 100, 350, 350])
>>> ratio = prices.astype(float) / base_prices  # cast to float to avoid integer division
Then you can extract the number that are 5% above, 10% above, etc. with
>>> sum(ratio > 1.05)
2
>>> sum(ratio > 1.10)
2
>>> sum(ratio > 1.15)
1
A:
In addition to df's answer, if you want to know the specific prices that are above the base prices, you can do:
prices[prices > (1.10 * base_prices)]
A:
I don't think you need numpy ...
prices = [40.0, 150.0, 35.0, 65.0, 90.0]
baseprices = [45.0, 130.0, 40.0, 80.0, 100.0]
x = .1
y = .5
# how many are within 10%
len([p for p,bp in zip(prices,baseprices) if p <= (1+x)*bp]) # 4
# how many are within 50%
len([p for p,bp in zip(prices,baseprices) if p <= (1+y)*bp]) # 5
| Statistics with numpy | I am working at some plots and statistics for work and I am not sure how I can do some statistics using numpy: I have a list of prices and another one of basePrices. And I want to know how many prices are with X percent above basePrice, how many are with Y percent above basePrice.
Is there a simple way to do that using numpy?
| [
"Say you have\n>>> prices = array([100, 200, 150, 145, 300])\n>>> base_prices = array([90, 220, 100, 350, 350])\n\nThen the number of prices that are more than 10% above the base price are\n>>> sum(prices > 1.10 * base_prices)\n2\n\n",
"Just for amusement, here's a slightly different take on dF's answer:\n>>> prices = array([100, 200, 150, 145, 300])\n>>> base_prices = array([90, 220, 100, 350, 350])\n>>> ratio = prices / base_prices\n\nThen you can extract the number that are 5% above, 10% above, etc. with\n>>> sum(ratio > 1.05)\n2\n>>> sum(ratio > 1.10)\n2\n>>> sum(ratio > 1.15)\n1\n\n",
"In addition to df's answer, if you want to know the specific prices that are above the base prices, you can do:\nprices[prices > (1.10 * base_prices)]\n",
"I don't think you need numpy ...\nprices = [40.0, 150.0, 35.0, 65.0, 90.0]\nbaseprices = [45.0, 130.0, 40.0, 80.0, 100.0]\nx = .1\ny = .5\n\n# how many are within 10%\nlen([p for p,bp in zip(prices,baseprices) if p <= (1+x)*bp]) # 1\n\n# how many are within 50%\nlen([p for p,bp in zip(prices,baseprices) if p <= (1+y)*bp]) # 5\n\n"
] | [
8,
2,
1,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0000570137_numpy_python.txt |
Q:
With Twisted, how can 'connectionMade' fire a specific Deferred?
This is part of a larger program; I'll explain only the relevant parts. Basically, my code wants to create a new connection to a remote host. This should return a Deferred, which fires once the connection is established, so I can send something on it.
I'm creating the connection with twisted.internet.interfaces.IReactorSSL.connectSSL. That calls buildProtocol on my ClientFactory instance to get a new connection (twisted.internet.protocol.Protocol) object, and returns a twisted.internet.interfaces.IConnector. When the connection is started, Twisted calls startedConnecting on the factory, giving it the IConnector. When the connection is actually made, the protocol's connectionMade callback is called, with no arguments.
Now, if I only needed one connection per host/port, the rest would be easy. Before calling connectSSL, I would create a Deferred and put it in a dictionary keyed on (host, port). Then, in the protocol's connectionMade, I could use self.transport.getPeer() to retrieve the host/port, use it to look up the Deferred, and fire its callbacks. But this obviously breaks down if I want to create more than one connection.
The problem is that I can't see any other way to associate a Deferred I created before calling connectSSL with the connectionMade later on.
A:
Looking at this some more, I think I've come up with a solution, although hopefully there is a better way; this seems kind of weird.
Twisted has a class, ClientCreator that is used for producing simple single-use connections. It in theory does what I want; connects and returns a Deferred that fires when the connection is established. I didn't think I could use this, though, since I'd lose the ability to pass arguments to the protocol constructor, and therefore have no way to share state between connections.
However, I just realized that the ClientCreator constructor does accept *args to pass to the protocol constructor. Or at least it looks like it; there is virtually no documentation for this. In that case, I can give it a reference to my factory (or whatever else, if the factory is no longer necessary). And I get back the Deferred that fires when the connection is established.
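For illustration, here is roughly what that looks like (MyProtocol and the shared-state dict are placeholders, not names from the original code):
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator, Protocol
from twisted.internet.ssl import ClientContextFactory

class MyProtocol(Protocol):          # hypothetical protocol taking shared state
    def __init__(self, shared):
        self.shared = shared

def connected(proto):
    proto.transport.write("hello")   # fires once the connection is established

creator = ClientCreator(reactor, MyProtocol, {"state": 1})  # extra args reach MyProtocol()
d = creator.connectSSL("example.com", 443, ClientContextFactory())
d.addCallback(connected)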
| With Twisted, how can 'connectionMade' fire a specific Deferred? | This is part of a larger program; I'll explain only the relevant parts. Basically, my code wants to create a new connection to a remote host. This should return a Deferred, which fires once the connection is established, so I can send something on it.
I'm creating the connection with twisted.internet.interfaces.IReactorSSL.connectSSL. That calls buildProtocol on my ClientFactory instance to get a new connection (twisted.internet.protocol.Protocol) object, and returns a twisted.internet.interfaces.IConnector. When the connection is started, Twisted calls startedConnecting on the factory, giving it the IConnector. When the connection is actually made, the protocol's connectionMade callback is called, with no arguments.
Now, if I only needed one connection per host/port, the rest would be easy. Before calling connectSSL, I would create a Deferred and put it in a dictionary keyed on (host, port). Then, in the protocol's connectionMade, I could use self.transport.getPeer() to retrieve the host/port, use it to look up the Deferred, and fire its callbacks. But this obviously breaks down if I want to create more than one connection.
The problem is that I can't see any other way to associate a Deferred I created before calling connectSSL with the connectionMade later on.
| [
"Looking at this some more, I think I've come up with a solution, although hopefully there is a better way; this seems kind of weird.\nTwisted has a class, ClientCreator that is used for producing simple single-use connections. It in theory does what I want; connects and returns a Deferred that fires when the connection is established. I didn't think I could use this, though, since I'd lose the ability to pass arguments to the protocol constructor, and therefore have no way to share state between connections.\nHowever, I just realized that the ClientFactory constructor does accept *args to pass to the protocol constructor. Or at least it looks like it; there is virtually no documentation for this. In that case, I can give it a reference to my factory (or whatever else, if the factory is no longer necessary). And I get back the Deferred that fires when the connection is established.\n"
] | [
0
] | [] | [] | [
"connection",
"python",
"reactor",
"twisted"
] | stackoverflow_0000570397_connection_python_reactor_twisted.txt |
Q:
Detect when a Python module unloads
I have a module that uses ctypes to wrap some functionality from a static library into a class. When the module loads, it calls an initialize function in the static library. When the module is unloaded (presumably when the interpreter exits), there's an unload function in the library that I'd like to be called. How can I create this hook?
A:
Use the atexit module:
import mymodule
import atexit
# call mymodule.unload('param1', 'param2') when the interpreter exits:
atexit.register(mymodule.unload, 'param1', 'param2')
Another simple example from the docs, using register as a decorator:
import atexit
@atexit.register
def goodbye():
print "You are now leaving the Python sector."
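Applied to the ctypes scenario in the question, the hook might look like this (the library and function names are invented for the sketch):
import atexit
import ctypes

_lib = ctypes.CDLL("libmystatic.so")  # hypothetical wrapped library
_lib.initialize()                     # called when the module loads

# make sure the library's cleanup runs when the interpreter exits
atexit.register(_lib.unload)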
| Detect when a Python module unloads | I have a module that uses ctypes to wrap some functionality from a static library into a class. When the module loads, it calls an initialize function in the static library. When the module is unloaded (presumably when the interpreter exits), there's an unload function in the library that I'd like to be called. How can I create this hook?
| [
"Use the atexit module:\nimport mymodule\nimport atexit\n\n# call mymodule.unload('param1', 'param2') when the interpreter exits:\natexit.register(mymodule.unload, 'param1', 'param2')\n\nAnother simple example from the docs, using register as a decorator:\nimport atexit\n\n@atexit.register\ndef goodbye():\n print \"You are now leaving the Python sector.\"\n\n"
] | [
16
] | [] | [] | [
"python"
] | stackoverflow_0000570636_python.txt |
Q:
Python c-api and unicode strings
I need to convert between python objects and c strings of various encodings. Going from a c string to a unicode object was fairly simple using PyUnicode_Decode; however, I'm not sure how to go the other way
//char* can be a wchar_t or any other element size, just make sure it is correctly terminated for its encoding
Unicode(const char *str, size_t bytes, const char *encoding="utf-16", const char *errors="strict")
:Object(PyUnicode_Decode(str, bytes, encoding, errors))
{
//check for any python exceptions
ExceptionCheck();
}
I want to create another function that takes the python Unicode string and puts it in a buffer using a given encoding, e.g.:
//fills buffer with a null terminated string in encoding
void AsCString(char *buffer, size_t bufferBytes,
const char *encoding="utf-16", const char *errors="strict")
{
...
}
I suspect it has something to do with PyUnicode_AsEncodedString; however, that returns a PyObject, so I'm not sure how to put that into my buffer...
Note: both methods above are members of a c++ Unicode class that wraps the python api
I'm using Python 3.0
A:
I suspect it has something to do with PyUnicode_AsEncodedString; however, that returns a PyObject, so I'm not sure how to put that into my buffer...
The PyObject returned is a PyStringObject, so you just need to use PyString_Size and PyString_AsString to get a pointer to the string's buffer and memcpy it to your own buffer. (On Python 3, which the question targets, the encoded result is a bytes object, so the equivalents are PyBytes_Size and PyBytes_AsString.)
If you're looking for a way to go directly from a PyUnicode object into your own char buffer, I don't think that you can do that.
| Python c-api and unicode strings | I need to convert between python objects and c strings of various encodings. Going from a c string to a unicode object was fairly simple using PyUnicode_Decode; however, I'm not sure how to go the other way
//char* can be a wchar_t or any other element size, just make sure it is correctly terminated for its encoding
Unicode(const char *str, size_t bytes, const char *encoding="utf-16", const char *errors="strict")
:Object(PyUnicode_Decode(str, bytes, encoding, errors))
{
//check for any python exceptions
ExceptionCheck();
}
I want to create another function that takes the python Unicode string and puts it in a buffer using a given encoding, e.g.:
//fills buffer with a null terminated string in encoding
void AsCString(char *buffer, size_t bufferBytes,
const char *encoding="utf-16", const char *errors="strict")
{
...
}
I suspect it has something to do with PyUnicode_AsEncodedString; however, that returns a PyObject, so I'm not sure how to put that into my buffer...
Note: both methods above are members of a c++ Unicode class that wraps the python api
I'm using Python 3.0
| [
"\nI suspect it has somthing to do with PyUnicode_AsEncodedString however that returns a PyObject so I'm not sure how to put that into my buffer...\n\nThe PyObject returned is a PyStringObject, so you just need to use PyString_Size and PyString_AsString to get a pointer to the string's buffer and memcpy it to your own buffer.\nIf you're looking for a way to go directly from a PyUnicode object into your own char buffer, I don't think that you can do that.\n"
] | [
3
] | [] | [] | [
"c",
"python",
"python_c_api"
] | stackoverflow_0000570781_c_python_python_c_api.txt |
Q:
Django "SuspiciousOperation" Error While Deleting Uploaded File
I'm developing in Django on Windows XP using the manage.py runserver command to serve files. Apache isn't involved. When I login to the administration and try to delete a file I get a "SuspiciousOperation" error.
Here's the traceback:
http://dpaste.com/123112/
Here's my full model:
http://dpaste.com/hold/123110/
How can I get rid of this "SuspiciousOperation" error?
EDIT: Here are my media settings:
MEDIA_ROOT = 'C:/Server/Projects/postnzb/static/'
MEDIA_URL = '/static/'
A:
What is your MEDIA_ROOT in settings.py? From the back-trace, it seems you have set your MEDIA_ROOT to /static/.
This error is coming since Django is trying to access /static/ to which it has no access. Put an absolute pathname for MEDIA_ROOT like C:/Documents/static/ and give full permissions to Django to access that directory.
That should solve your problem.
Addendum: Since your MEDIA_ROOT seems to be OK, I am guessing that you are using MEDIA_URL for deleting the file instead of MEDIA_ROOT. Indeed, from the error it seems that Django was trying to access the /static/files/8.nzb and was denied access. Clearly, /static/ is your MEDIA_URL and not your MEDIA_ROOT. The model methods should never try accessing the files using the MEDIA_URL. I am sure a review of your code will spot the error.
Update: I skimmed your code and it seems you are setting File.nzb to %(1)sfiles/%(2)s.nzb' % {'1': settings.MEDIA_URL, '2': self.pk} which uses its MEDIA_URL and then in the delete() method you are calling the delete() method of the super-class of File as super(File, self).delete() which is obviously wrong as it will try deleting File.nzb and will try accessing the file through the MEDIA_URL. Fixing that will get rid of the error. I will leave the exact solution as an exercise to you :)
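For concreteness, a rough sketch of the kind of fix being suggested (field and path details are guesses based on the traceback, not the actual model):
import os
from django.conf import settings
from django.db import models

class File(models.Model):
    # ... fields as in the original model ...
    def delete(self):
        # build a filesystem path from MEDIA_ROOT, never from MEDIA_URL
        path = os.path.join(settings.MEDIA_ROOT, 'files', '%s.nzb' % self.pk)
        if os.path.exists(path):
            os.remove(path)
        super(File, self).delete()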
| Django "SuspiciousOperation" Error While Deleting Uploaded File | I'm developing in Django on Windows XP using the manage.py runserver command to serve files. Apache isn't involved. When I login to the administration and try to delete a file I get a "SuspiciousOperation" error.
Here's the traceback:
http://dpaste.com/123112/
Here's my full model:
http://dpaste.com/hold/123110/
How can I get rid of this "SuspiciousOperation" error?
EDIT: Here are my media settings:
MEDIA_ROOT = 'C:/Server/Projects/postnzb/static/'
MEDIA_URL = '/static/'
| [
"What is your MEDIA_ROOT in settings.py? From the back-trace, it seems you have set your MEDIA_ROOT to /static/.\nThis error is coming since Django is trying to access /static/ to which it has no access. Put an absolute pathname for MEDIA_ROOT like C:/Documents/static/ and give full permissions to Django to access that directory.\nThat should solve your problem.\nAddendum: Since your MEDIA_ROOT seems to be OK, I am guessing that you are using MEDIA_URL for deleting the file instead of MEDIA_ROOT. Indeed, from the error it seems that Django was trying to access the /static/files/8.nzb and was denied access. Clearly, /static/ is your MEDIA_URL and not your MEDIA_ROOT. The model methods should never try accessing the files using the MEDIA_URL. I am sure a review of your code will spot the error.\nUpdate: I skimmed your code and it seems you are setting File.nzb to %(1)sfiles/%(2)s.nzb' % {'1': settings.MEDIA_URL, '2': self.pk} which uses its MEDIA_URL and then in the delete() method you are calling the delete() method of the super-class of File as super(File, self).delete() which is obviously wrong as it will try deleting File.nzb and will try accessing the file through the MEDIA_URL. Fixing that will get rid of the error. I will leave the exact solution as an exercise to you :)\n"
] | [
5
] | [] | [] | [
"django",
"python",
"windows"
] | stackoverflow_0000570952_django_python_windows.txt |
Q:
In Python, how might one log in, answer a web form via HTTP POST (not url-encoded), and fetch a returned XML file?
I am basically trying to export a configuration file, once a week. While the product in question allows you to log in manually via a web client, enter some information, and get an XML file back when you submit, there's no facility for automating this. I can get away with using Python 2.5 (have used for a while) or 2.6 (unfamiliar) to do this.
I think I need to have some way to authenticate against the product. Although I can view the cookie in Firefox, when I looked through the actual cookie.txt file, it was not present. Didn't show up after clearing my private data and re-authenticating. Odd. Should I be shooting for the Cookie module, or could this be some arcane method of authentication that only looks like cookies? How would I know?
I think I need the httplib module to do the HTTP POST, but I don't know how to do the multipart/form-data encoding. Have I overlooked a handy option, or is this something where I must roll my own?
I assume I can get the XML file back in the HTTPResponse from httplib.
I've fetched web things via Python before, but not with POST, multipart encoding, and authentication in the mix.
A:
urllib2 should cover all of this.
Here's a Basic Authentication example.
Here's a Post with multipart/form-data.
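To make that concrete, a minimal urllib2 sketch combining basic auth with a POST (URL and credentials are placeholders; the multipart/form-data encoding itself is what the second link covers):
import urllib
import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'https://example.com/', 'user', 'secret')
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

data = urllib.urlencode({'action': 'export'})  # plain POST body for brevity
response = opener.open('https://example.com/export', data)
xml = response.read()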
A:
Try mechanize module.
A:
You should look at the MultipartPostHandler:
http://odin.himinbi.org/MultipartPostHandler.py
And if you need to support unicode file names , see a fix at:
http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html
| In Python, how might one log in, answer a web form via HTTP POST (not url-encoded), and fetch a returned XML file? | I am basically trying to export a configuration file, once a week. While the product in question allows you to log in manually via a web client, enter some information, and get an XML file back when you submit, there's no facility for automating this. I can get away with using Python 2.5 (have used for a while) or 2.6 (unfamiliar) to do this.
I think I need to have some way to authenticate against the product. Although I can view the cookie in Firefox, when I looked through the actual cookie.txt file, it was not present. Didn't show up after clearing my private data and re-authenticating. Odd. Should I be shooting for the Cookie module, or could this be some arcane method of authentication that only looks like cookies? How would I know?
I think I need the httplib module to do the HTTP POST, but I don't know how to do the multipart/form-data encoding. Have I overlooked a handy option, or is this something where I must roll my own?
I assume I can get the XML file back in the HTTPResponse from httplib.
I've fetched web things via Python before, but not with POST, multipart encoding, and authentication in the mix.
| [
"urllib2 should cover all of this.\nHere's a Basic Authentication example.\nHere's a Post with multipart/form-data.\n",
"Try mechanize module.\n",
"You should look at the MultipartPostHandler:\nhttp://odin.himinbi.org/MultipartPostHandler.py\nAnd if you need to support unicode file names , see a fix at:\nhttp://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html\n"
] | [
3,
2,
0
] | [] | [] | [
"authentication",
"cookies",
"post",
"python"
] | stackoverflow_0000571083_authentication_cookies_post_python.txt |
Q:
How to program a schedule
I have to build a program that schedules based on certain rules. I'm not sure how to explain it, so let me give you an example..
You have Five People A,B,C,D,E. And you Have another set of people S1 S2 S3 S4 S5 S6 S7.
If A B C D and E are available every hour from 9 to 5, and S1 S2 S3 S4 S5 S6 and S7 have a list of 3 people they want to see from {A,B,C,D,E}
That's my problem and I'm not sure where to begin...
Thanks for your help!
A:
Here's some python code that will do the trick. You will want to update VISITOR_PEOPLE. And if some people get to schedule before others, you'll need to reorder VISITOR_IDS.
Edit: I added some more code to account for the fact that people can't be in a different place at the same time. You might want to make that more efficient (i.e. don't try to schedule a time that will not work). I'll let you figure that out though ;)
import sys
HOURS = ['9:00AM', '10:00AM', '11:00AM', '12:00PM', '1:00PM', '2:00PM', '3:00PM', '4:00PM']
PEOPLE_IDS = ['A', 'B', 'C', 'D', 'E']
VISITOR_IDS = ['S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7']
VISITOR_PEOPLE = {'S1': ['A', 'B', 'C'],
'S2': ['A', 'D', 'E'],
'S3': ['B', 'E', 'D'],
'S4': ['D', 'E', 'A'],
'S5': ['C', 'D', 'E'],
'S6': ['A', 'D', 'C'],
'S7': ['B', 'C', 'D']
}
def main():
people = {}
for id in PEOPLE_IDS:
people[id] = Person(id)
visitors = {}
for id in VISITOR_IDS:
visitors[id] = Visitor(id, VISITOR_PEOPLE[id], people)
for v in visitors.values():
v.printSchedule()
class Person:
def __init__(self, id):
self.id = id
self.schedule = [False]*8 # False = free, True = busy
def scheduleTime(self):
# schedules next available hour and returns that hour
for i in range(len(self.schedule)):
if not self.schedule[i]:
self.schedule[i] = True
return HOURS[i]
return 'unavailable'
def unscheduleTime(self, index):
self.schedule[index] = False
class Visitor:
def __init__(self, id, people_requests, people):
self.id = id
self.schedule = {} # {person_id: hour}
for p in people_requests:
bad_times = set() # times that Visitor is busy
time = people[p].scheduleTime()
while time in self.schedule.values(): # keep scheduling a time until you get one that works for both the Visitor and Person
bad_times.add(time)
time = people[p].scheduleTime()
self.schedule[p] = time
for t in bad_times: # unschedule bad_times from Person
people[p].unscheduleTime(HOURS.index(t))
def printSchedule(self):
print 'Schedule for %s [Person (time)]:' % self.id
for p,t in self.schedule.items():
print ' %s (%s)' % (p,t)
if __name__ == '__main__':
sys.exit(main())
A:
Here's one approach:
Start with S1, assign him the three people he wants at 9am, then go to S2 and try to schedule his meeting at 9am. Continue until you have a conflict, then move that meeting to 10am. Return to 9am for the next one. If there is a conflict at 10 also, move to 11, etc.
Once the program tries to schedule a meeting after hours, you'll know you've hit a case where all the meetings aren't possible in a single day.
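A compact sketch of that greedy idea (hour indices 0-7 stand in for 9am-4pm; purely illustrative):
def schedule(requests, n_hours=8):
    # requests: {'S1': ['A', 'B', 'C'], ...}
    busy = set()     # (name, hour) slots already taken
    result = {}
    for visitor, people in requests.items():
        for person in people:
            for hour in range(n_hours):
                if (person, hour) not in busy and (visitor, hour) not in busy:
                    busy.add((person, hour))
                    busy.add((visitor, hour))
                    result[(visitor, person)] = hour
                    break
            else:
                raise ValueError('meetings do not fit in a single day')
    return result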
A:
There's quite a bit to be said about scheduling algorithms, so for your own sake you might want to crack open your text (or at least do some wiki-research) and get a feel for the most common techniques.
Personally, I'd start by considering the functions you need to have available to create this schedule.
For example, you need a function along the lines of def testAppointment(meetingSubject, meetingTime): that tests whether or not a given appointment is valid.
You might also want a function def listAvailableTimes(meetingSubject): that returns a list of every time that a particular meeting subject has available.
Do you see where I'm going with this? Build up the functions that give you the information needed to solve the problem, then go about your "main loop," so to speak.
| How to program a schedule | I have to build a program that schedules based on certain rules. I'm not sure how to explain it, so let me give you an example..
You have Five People A,B,C,D,E. And you Have another set of people S1 S2 S3 S4 S5 S6 S7.
If A B C D and E are available every hour from 9 to 5, and S1 S2 S3 S4 S5 S6 and S7 have a list of 3 people they want to see from {A,B,C,D,E}
That's my problem and I'm not sure where to begin...
Thanks for your help!
| [
"Here's some python code that will do the trick. You will want to update VISITOR_PEOPLE. And if some people get to schedule before others, you'll need to reorder VISITOR_IDS.\nEdit: I added some more code to account for the fact that people can't be in a different place at the same time. You might want to make that more efficient (i.e. don't try to schedule a time that will not work). I'll let you figure that out though ;)\nimport sys\n\nHOURS = ['9:00AM', '10:00AM', '11:00AM', '12:00PM', '1:00PM', '2:00PM', '3:00PM', '4:00PM']\nPEOPLE_IDS = ['A', 'B', 'C', 'D', 'E']\nVISITOR_IDS = ['S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7']\nVISITOR_PEOPLE = {'S1': ['A', 'B', 'C'], \n 'S2': ['A', 'D', 'E'], \n 'S3': ['B', 'E', 'D'], \n 'S4': ['D', 'E', 'A'], \n 'S5': ['C', 'D', 'E'], \n 'S6': ['A', 'D', 'C'], \n 'S7': ['B', 'C', 'D']\n }\n\ndef main():\n people = {}\n for id in PEOPLE_IDS:\n people[id] = Person(id)\n visitors = {}\n for id in VISITOR_IDS:\n visitors[id] = Visitor(id, VISITOR_PEOPLE[id], people)\n for v in visitors.values():\n v.printSchedule()\n\nclass Person:\n def __init__(self, id):\n self.id = id\n self.schedule = [False]*8 # False = free, True = busy\n def scheduleTime(self):\n # schedules next available hour and returns that hour\n for i in range(len(self.schedule)):\n if not self.schedule[i]:\n self.schedule[i] = True\n return HOURS[i]\n return 'unavailable'\n def unscheduleTime(self, index):\n self.schedule[index] = False\n\nclass Visitor:\n def __init__(self, id, people_requests, people):\n self.id = id\n self.schedule = {} # {person_id: hour}\n for p in people_requests:\n bad_times = set() # times that Visitor is busy\n time = people[p].scheduleTime()\n while time in self.schedule.values(): # keep scheduling a time until you get one that works for both the Visitor and Person\n bad_times.add(time)\n time = people[p].scheduleTime()\n self.schedule[p] = time\n for t in bad_times: # unschedule bad_times from Person\n people[p].unscheduleTime(HOURS.index(t))\n def printSchedule(self):\n print 'Schedule for %s [Person (time)]:' % self.id\n for p,t in self.schedule.items():\n print ' %s (%s)' % (p,t)\n\nif __name__ == '__main__':\n sys.exit(main())\n\n",
"Here's one approach:\nStart with S1, assign him the three people he wants at 9am, then go to S2 and try to schedule his meeting at 9am. Continue until you have a conflict, then move that meeting to 10am. Return to 9am for the next one. If there is a conflict at 10 also, move to 11, etc.\nOnce the program tries to schedule a meeting after hours, you'll know you've hit a case where all the meetings aren't possible in a single day. \n",
"There's quite a bit to be said about scheduling algorithms, so for your own sake you might want to crack open your text (or at least do some wiki-research) and get a feel for the most common techniques. \nPersonally, I'd start by considering the functions you need to have available to create this schedule. \nFor example, you need a a function along the lines of def testAppointment(meetingSubject, meetingTime): that tests whether or not a given appointment is valid. \nYou might also want a function def listAvailableTimes(meetingSubject)): that returns a list of every time that a particular meeting subject has available. \nDo you see where I'm going with this? Build up the functions that give you the information needed to solve the problem, then go about your \"main loop,\" so to speak. \n"
] | [
3,
2,
1
] | [] | [] | [
"python",
"scheduling"
] | stackoverflow_0000570912_python_scheduling.txt |
Q:
Split tags in python
I have a file that contains this:
<html>
<head>
<title> Hello! - {{ today }}</title>
</head>
<body>
{{ runner_up }}
avasd
{{ blabla }}
sdvas
{{ oooo }}
</body>
</html>
What is the best or most Pythonic way to extract the {{today}}, {{runner_up}}, etc.?
I know it can be done with splits/regular expressions, but I wondered if there were another way.
PS: consider the data loaded in a variable called thedata.
Edit: I think that the HTML example was bad, because it directed some commenters to BeautifulSoup. So, here is a new input data:
Fix grammatical or {{spelling}} errors.
Clarify meaning without changing it.
Correct minor {{mistakes}}.
Add related resources or links.
Always respect the original {{author}}.
Output:
spelling
mistakes
author
A:
Mmkay, well here's a generator solution that seems to work well for me. You can also provide different open and close tags if you like.
def get_tags(s, open_delim='{{', close_delim='}}'):
while True:
# Search for the next two delimiters in the source text
start = s.find(open_delim)
end = s.find(close_delim)
# We found a non-empty match
if -1 < start < end:
# Skip the length of the open delimiter
start += len(open_delim)
# Spit out the tag
yield s[start:end].strip()
# Truncate string to start from last match
s = s[end+len(close_delim):]
else:
return
Run against your target input like so:
# prints: today, runner_up, blabla, oooo
for tag in get_tags(html):
print tag
Edit: it also works against your new example :). In my obviously quick testing, it also seemed to handle malformed tags in a reasonable way, though I make no guarantees of its robustness!
A:
Try templatemaker, a reverse-template maker. It can actually learn templates automatically from examples!
A:
I know you said no regex/split, but I couldn't help but try for a one-liner solution:
import re
for s in re.findall("\{\{.*\}\}",thedata):
print s.replace("{","").replace("}","")
EDIT: JFS
Compare:
>>> re.findall('\{\{.*\}\}', '{{a}}b{{c}}')
['{{a}}b{{c}}']
>>> re.findall('{{(.+?)}}', '{{a}}b{{c}}')
['a', 'c']
A:
If the data is that straightforward, a simple regex would do the trick.
A:
J.F. Sebastian wrote this in a comment but I thought it was good enough to deserve its own answer:
re.findall(r'{{(.+?)}}', thestring)
I know the OP was asking for a way that didn't involve splits or regexes - so maybe this doesn't quite answer the question as stated. But this one line of code definitely gets my vote as the most Pythonic way to accomplish the task.
| Split tags in python | I have a file that contains this:
<html>
<head>
<title> Hello! - {{ today }}</title>
</head>
<body>
{{ runner_up }}
avasd
{{ blabla }}
sdvas
{{ oooo }}
</body>
</html>
What is the best or most Pythonic way to extract the {{today}}, {{runner_up}}, etc.?
I know it can be done with splits/regular expressions, but I wondered if there were another way.
PS: consider the data loaded in a variable called thedata.
Edit: I think that the HTML example was bad, because it directed some commenters to BeautifulSoup. So, here is a new input data:
Fix grammatical or {{spelling}} errors.
Clarify meaning without changing it.
Correct minor {{mistakes}}.
Add related resources or links.
Always respect the original {{author}}.
Output:
spelling
mistakes
author
| [
"Mmkay, well here's a generator solution that seems to work well for me. You can also provide different open and close tags if you like.\ndef get_tags(s, open_delim ='{{', \n close_delim ='}}' ):\n\n while True:\n\n # Search for the next two delimiters in the source text\n start = s.find(open_delim)\n end = s.find(close_delim)\n\n # We found a non-empty match\n if -1 < start < end:\n\n # Skip the length of the open delimiter\n start += len(open_delim)\n\n # Spit out the tag\n yield s[start:end].strip()\n\n # Truncate string to start from last match\n s = s[end+len(close_delim):]\n\n else:\n return\n\nRun against your target input like so:\n# prints: today, runner_up, blabla, oooo\nfor tag in get_tags(html):\n print tag\n\nEdit: it also works against your new example :). In my obviously quick testing, it also seemed to handle malformed tags in a reasonable way, though I make no guarantees of its robustness!\n",
"try templatemaker, a reverse-template maker. it can actually learn them automatically from examples!\n",
"I know you said no regex/split, but I couldn't help but try for a one-liner solution:\nimport re\nfor s in re.findall(\"\\{\\{.*\\}\\}\",thedata):\n print s.replace(\"{\",\"\").replace(\"}\",\"\")\n\nEDIT: JFS\nCompare:\n>>> re.findall('\\{\\{.*\\}\\}', '{{a}}b{{c}}')\n['{{a}}b{{c}}']\n>>> re.findall('{{(.+?)}}', '{{a}}b{{c}}')\n['a', 'c']\n\n",
"If the data is that straightforward, a simple regex would do the trick.\n",
"J.F. Sebastian wrote this in a comment but I thought it was good enough to deserve its own answer:\nre.findall(r'{{(.+?)}}', thestring)\n\nI know the OP was asking for a way that didn't involve splits or regexes - so maybe this doesn't quite answer the question as stated. But this one line of code definitely gets my vote as the most Pythonic way to accomplish the task.\n"
] | [
8,
3,
2,
1,
1
] | [] | [] | [
"python",
"split",
"template_engine"
] | stackoverflow_0000571186_python_split_template_engine.txt |
Q:
How can I get the newest file from an FTP server?
I am using Python to connect to an FTP server that contains a new list of data once every hour. I am only connecting once a day, and I only want to download the newest file in the directory. Is there a way to do this?
A:
Seems like any system that is automatically generating a file once an hour is likely to be using an automated naming scheme. Are you overthinking the problem by asking the server for the newest file instead of more easily parsing the file names?
This wouldn't work in all cases, and if the directory got large it might become time consuming to get the file listing. But it seems likely to work in most cases.
A:
Look at ftplib in your current version of Python. It lets you supply a callback to handle the result of the LIST command that you would issue to list the directory. If you know the last time the script ran successfully, you can parse the result of LIST and act on the new files in the directory. See the ftplib docs for more info on how to do it. The retrlines function is what I would expect to use.
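A rough ftplib sketch of the download step (host and credentials are placeholders; MDTM is a common server extension for modification times, but it is not guaranteed to be available):
from ftplib import FTP

ftp = FTP('ftp.example.com')
ftp.login('user', 'secret')
names = ftp.nlst()
# sort by the server-reported modification time: '213 YYYYMMDDHHMMSS'
newest = max(names, key=lambda n: ftp.sendcmd('MDTM ' + n).split()[1])
f = open(newest, 'wb')
ftp.retrbinary('RETR ' + newest, f.write)
f.close()
ftp.quit()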
| How can I get the newest file from an FTP server? | I am using Python to connect to an FTP server that contains a new list of data once every hour. I am only connecting once a day, and I only want to download the newest file in the directory. Is there a way to do this?
| [
"Seems like any system that is automatically generating a file once an hour is likely to be using an automated naming scheme. Are you over thinking the problem by asking the server for the newest file instead of more easily parsing the file names? \nThis wouldn't work in all cases, and if the directory got large it might become time consuming to get the file listing. But it seems likely to work in most cases.\n",
"Look at ftplib in your current version of python. You can see a function to handle the result of the LIST command that you would issue to do a dir, if you know a last time that you run a successful script then you can parse the result from the LIST and act on the new files on the directory. See the ftplib for more info on how to do it. The retrlines function is what I would expect to use.\n"
] | [
1,
-1
] | [] | [] | [
"ftp",
"python"
] | stackoverflow_0000570433_ftp_python.txt |
Q:
Python pysqlite not accepting my qmark parameterization
I think I am being a bonehead, maybe not importing the right package, but when I do...
from pysqlite2 import dbapi2 as sqlite
import types
import re
import sys
...
def create_asgn(self):
stmt = "CREATE TABLE ? (login CHAR(8) PRIMARY KEY NOT NULL, grade INTEGER NOT NULL)"
stmt2 = "insert into asgn values ('?', ?)"
self.cursor.execute(stmt, (sys.argv[2],))
self.cursor.execute(stmt2, [sys.argv[2], sys.argv[3]])
...
I get the error pysqlite2.dbapi2.OperationalError: near "?": syntax error
This makes very little sense to me, as the docs show that pysqlite is qmark parametrized. I am new to python and db-api though, help me out! THANKS
A:
That's because parameters can only be passed to VALUES. The table name can't be parametrized.
Also you have quotes around a parametrized argument on the second query. Remove the quotes; escaping is handled by the underlying library automatically for you.
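Putting both points together, the original method might be rewritten like this (a sketch; interpolating the table name is only safe because it comes from your own command line):
def create_asgn(self):
    table = sys.argv[2]   # table names cannot be bound as ? parameters
    stmt = ("CREATE TABLE %s (login CHAR(8) PRIMARY KEY NOT NULL, "
            "grade INTEGER NOT NULL)" % table)
    stmt2 = "insert into asgn values (?, ?)"  # no quotes around the placeholders
    self.cursor.execute(stmt)
    self.cursor.execute(stmt2, (sys.argv[2], sys.argv[3]))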
A:
Try removing the quotes in the line that assigns to stmt2:
stmt2 = "insert into asgn values (?, ?)"
Also, as nosklo says, you can't use question-mark parameterisation with CREATE TABLE statements. Stick the table name into the SQL directly.
A:
If you really want to do it, try something like this:
def read(db="projects"):
    sql = "select * from %s"
    sql = sql % db
    c.execute(sql)
| Python pysqlite not accepting my qmark parameterization | I think I am being a bonehead, maybe not importing the right package, but when I do...
from pysqlite2 import dbapi2 as sqlite
import types
import re
import sys
...
def create_asgn(self):
stmt = "CREATE TABLE ? (login CHAR(8) PRIMARY KEY NOT NULL, grade INTEGER NOT NULL)"
stmt2 = "insert into asgn values ('?', ?)"
self.cursor.execute(stmt, (sys.argv[2],))
self.cursor.execute(stmt2, [sys.argv[2], sys.argv[3]])
...
I get the error pysqlite2.dbapi2.OperationalError: near "?": syntax error
This makes very little sense to me, as the docs show that pysqlite is qmark parametrized. I am new to python and db-api though, help me out! THANKS
| [
"That's because parameters can only be passed to VALUES. The table name can't be parametrized.\nAlso you have quotes around a parametrized argument on the second query. Remove the quotes, escaping is handled by the underlining library automatically for you.\n",
"Try removing the quotes in the line that assigns to stmt2:\n stmt2 = \"insert into asgn values (?, ?)\"\n\nAlso, as nosklo says, you can't use question-mark parameterisation with CREATE TABLE statements. Stick the table name into the SQL directly.\n",
"If you really want to do it, try something like this:\ndef read(db=\"projects\"):\nsql = \"select * from %s\"\nsql = sql % db\nc.execute(sql)\n\n"
] | [
7,
2,
1
] | [] | [] | [
"pysqlite",
"python",
"python_db_api",
"sqlite"
] | stackoverflow_0000474261_pysqlite_python_python_db_api_sqlite.txt |
Q:
The `%` operator
I want to find all the numbers divisible by all the numbers between 1 and 5. How do I write the program so that, if the remainder of 'start' divided by each of the numbers that x goes through is equal to 0, it will print start? Is there any syntax that will calculate what I'm looking for? Thanks.
import math
def main():
one = 1
start = 1
while one == 1:
for x in range(1, 5):
if start % x == 0:
print start
start += 1
A:
First of all, you seem to ask for all multiples of 60. Those can be rendered easily like this (beware, this is an infinite loop):
from itertools import count
for i in count():
print i*60
If you just oversimplified your example, this is a more pythonic (and correct) solution of what you wrote (again an infinite loop):
from itertools import count
# put any test you like in this function
def test(number):
return all((number % i) == 0 for i in range(1,6))
my_numbers = (number for number in count() if test(number))
for number in my_numbers:
print number
You had a grave bug in your original code: range(1,5) equals [1, 2, 3, 4], so it would not test whether a number is divisible by 5!
PS: You have used that insane one = 1 construct before, and we showed you how to code that in a better way. Please learn from our answers!
A:
if I understood correctly you want something like this:
start = 1
while (True):
flgEvenlyDiv = True
for x in range(1, 5):
if (start % x != 0):
flgEvenlyDiv = False
break
if (flgEvenlyDiv == True):
print start
start += 1
| The `%` operator | I want to find all the numbers divisible by all the numbers between 1 and 5. How do I write the program so that, if the remainder of 'start' divided by each of the numbers that x goes through is equal to 0, it will print start? Is there any syntax that will calculate what I'm looking for? Thanks.
import math
def main():
one = 1
start = 1
while one == 1:
for x in range(1, 5):
if start % x == 0:
print start
start += 1
| [
"First of all, you seem to ask for all multiples of 60. Those can be rendered easily like this (beware, this is an infinite loop):\nfrom itertools import count\n\nfor i in count():\n print i*60\n\nIf you just oversimplified your example, this is a more pythonic (and correct) solution of what you wrote (again an infinite loop):\nfrom itertools import count\n\n# put any test you like in this function\ndef test(number):\n return all((number % i) == 0 for i in range(1,6))\n\nmy_numbers = (number for number in count() if test(number))\n\nfor number in my_numbers:\n print number\n\nYou had a grave bug in your original code: range(1,5) equals [1, 2, 3, 4], so it would not test whether a number is divisble by 5!\nPS: You have used that insane one = 1 construct before, and we showd you how to code that in a better way. Please learn from our answers!\n",
"if I understood correctly you want something like this:\nstart = 1\nwhile (True):\n flgEvenlyDiv = True\n for x in range(1, 5):\n if (start % x != 0):\n flgEvenlyDiv = False\n break\n\n if (flgEvenlyDiv == True):\n print start\n start += 1\n\n"
] | [
3,
0
] | [] | [] | [
"math",
"python"
] | stackoverflow_0000571538_math_python.txt |
Q:
What's the reason of providing some of the default methods in the global scope in Python?
What's the reason of providing some of the default methods in the global scope, like the len function, instead of providing them on an instance level, like:
list.len()
instead of:
len (list)
I find methods like len to be harder to discover than instance methods.
Is there any reason behind this?
A:
This question is very similar to this one. And the answer is the same:
Because Guido van Rossum, the creator of Python, thinks that prefix notation is more readable in some cases. Here is the complete answer. I'm going to quote some parts:
(a) For some operations, prefix notation just reads better than
postfix -- prefix (and infix!) operations have a long tradition in
mathematics which likes notations where the visuals help the
mathematician thinking about a problem. Compare the ease with which we
rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of
doing the same thing using a raw OO notation.
(b) When I read code that says len(x) I know that it is asking for
the length of something. This tells me two things: the result is an
integer, and the argument is some kind of container. To the contrary,
when I read x.len(), I have to already know that x is some kind of
container implementing an interface or inheriting from a class that
has a standard len(). Witness the confusion we occasionally have when
a class that is not implementing a mapping has a get() or keys()
method, or something that isn't a file has a write() method.
Saying the same thing in another way, I see 'len' as a built-in
operation. I'd hate to lose that. I can't say for sure whether you
meant that or not, but 'def len(self): ...' certainly sounds like you
want to demote it to an ordinary method. I'm strongly -1 on that.
A:
This FAQ entry might answer your question:
4.7 Why does Python use methods for some functionality (e.g. list.index())
but functions for other (e.g.
len(list))?
The major reason is history. Functions
were used for those operations that
were generic for a group of types and
which were intended to work even for
objects that didn't have methods at
all (e.g. tuples). It is also
convenient to have a function that can
readily be applied to an amorphous
collection of objects when you use the
functional features of Python (map(),
apply() et al).
In fact, implementing len(), max(),
min() as a built-in function is
actually less code than implementing
them as methods for each type. One can
quibble about individual cases but
it's a part of Python, and it's too
late to make such fundamental changes
now. The functions have to remain to
avoid massive code breakage.
Note that for string operations Python
has moved from external functions (the
string module) to methods. However,
len() is still a function.
There is a list of all those functions (not too many) here in the docs.
A:
Once you learn the "len" method, you know you can apply it to any sequence. You don't have to read the doc for each sequence you encounter to find out whether or not it has a len method. There's an expectation that it does.
Built-in functions are not harder to discover because there's a list of built-in methods in the python documentation. You can learn them all in one sitting from this page instead of having to hunt for them all over the class library Java style.
A:
Besides the FAQ answer provided by MrTopf, lists (and other objects) do have a length method __len__()
The len() function is just a shorthand for calling that method.
A:
They are fairly discoverable:
>>> import __builtin__
>>> dir(__builtin__)
['ArithmeticError', 'AssertionError', ...
...
or better
>>> help(__builtin__)
Help on built-in module __builtin__:
NAME
__builtin__ - Built-in functions, exceptions, and other objects.
...
| What's the reason of providing some of the default methods in the global scope in Python? | What's the reason of providing some of the default methods in the global scope, like the len function, instead of providing them on an instance level, like:
list.len()
instead of:
len (list)
I find methods like len to be harder to discover than instance methods.
Is there any reason behind this?
| [
"This question is very similar to this one. And the answers is the same:\nBecause Guido van Rossum, the creator of Python, thinks that prefix notation is more readable in some cases. Here is the complete answer. I'm going to quote some parts:\n\n(a) For some operations, prefix notation just reads better than\n postfix -- prefix (and infix!) operations have a long tradition in\n mathematics which likes notations where the visuals help the\n mathematician thinking about a problem. Compare the easy with which we\n rewrite a formula like x*(a+b) into xa + xb to the clumsiness of\n doing the same thing using a raw OO notation.\n(b) When I read code that says len(x) I know that it is asking for\n the length of something. This tells me two things: the result is an\n integer, and the argument is some kind of container. To the contrary,\n when I read x.len(), I have to already know that x is some kind of\n container implementing an interface or inheriting from a class that\n has a standard len(). Witness the confusion we occasionally have when\n a class that is not implementing a mapping has a get() or keys()\n method, or something that isn't a file has a write() method.\nSaying the same thing in another way, I see 'len' as a built-in\n operation. I'd hate to lose that. I can't say for sure whether you\n meant that or not, but 'def len(self): ...' certainly sounds like you\n want to demote it to an ordinary method. I'm strongly -1 on that.\n\n",
"This FAQ entry might answer your question:\n\n4.7 Why does Python use methods for some functionality (e.g. list.index())\n but functions for other (e.g.\n len(list))?\nThe major reason is history. Functions\n were used for those operations that\n were generic for a group of types and\n which were intended to work even for\n objects that didn't have methods at\n all (e.g. tuples). It is also\n convenient to have a function that can\n readily be applied to an amorphous\n collection of objects when you use the\n functional features of Python (map(),\n apply() et al).\nIn fact, implementing len(), max(),\n min() as a built-in function is\n actually less code than implementing\n them as methods for each type. One can\n quibble about individual cases but\n it's a part of Python, and it's too\n late to make such fundamental changes\n now. The functions have to remain to\n avoid massive code breakage.\nNote that for string operations Python\n has moved from external functions (the\n string module) to methods. However,\n len() is still a function.\n\nThere is a list of all those functions (not too many) here in the docs.\n",
"Once you learn the \"len\" method, you know you can apply it to any sequence. You don't have to read the doc for each sequence you encounter to find out whether or not it has a len method. There's an expectation that it does.\nBuilt-in functions are not harder to discover because there's a list of built-in methods in the python documentation. You can learn them all in one sitting from this page instead of having to hunt for them all over the class library Java style.\n",
"Besides the FAQ answer provided by MrTopf, lists (and other objects) do have a length method __len__()\nThe len() function is just a shorthand for calling that method.\n",
"They are fairly discoverable:\n>>> import __builtin__\n>>> dir(__builtin__)\n['ArithmeticError', 'AssertionError', ...\n...\n\nor better\n>>> help(__builtin__)\n\nHelp on built-in module __builtin__:\n\nNAME\n __builtin__ - Built-in functions, exceptions, and other objects.\n...\n\n"
] | [
6,
4,
2,
2,
0
] | [
"You can write\nre.match(r\"\\w+\", \"dog\")\n\ninstead of\npattern = re.compile(r\"\\w+\")\npattern.match(\"dog\") \n\nLike You noticed, sometimes it doesn't make sense.\n"
] | [
-1
] | [
"function",
"python"
] | stackoverflow_0000571522_function_python.txt |
Q:
Adding elements to python generators
Is it possible to append elements to a python generator?
I'm currently trying to get all images from a set of disorganized folders and write them to a new directory. To get the files, I'm using os.walk() which returns a list of image files in a single directory. While I can make a generator out of this single list, I don't know how to combine all these lists into one single generator. Any help would be much appreciated.
Related:
Flattening a shallow list in python
A:
You are looking for itertools.chain. It will combine multiple iterables into a single one, like this:
>>> import itertools
>>> for i in itertools.chain([1,2,3], [4,5,6]):
... print(i)
...
1
2
3
4
5
6
A:
This should do it, where directories is your list of directories:
import os
import itertools
generators = [os.walk(d) for d in directories]
for root, dirs, files in itertools.chain(*generators):
print root, dirs, files
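If the directory list is long, itertools.chain.from_iterable (available since Python 2.6) does the same thing lazily, without building the intermediate list or unpacking it:
import os
import itertools

walks = (os.walk(d) for d in directories)
for root, dirs, files in itertools.chain.from_iterable(walks):
    print root, dirs, files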
A:
def files_gen(topdir='.'):
for root, dirs, files in os.walk(topdir):
# ... do some stuff with files
for f in files:
yield os.path.join(root, f)
# ... do other stuff
for f in files_gen():
print f
| Adding elements to python generators | Is it possible to append elements to a python generator?
I'm currently trying to get all images from a set of disorganized folders and write them to a new directory. To get the files, I'm using os.walk() which returns a list of image files in a single directory. While I can make a generator out of this single list, I don't know how to combine all these lists into one single generator. Any help would be much appreciated.
Related:
Flattening a shallow list in python
| [
"You are looking for itertools.chain. It will combine multiple iterables into a single one, like this:\n>>> import itertools \n>>> for i in itertools.chain([1,2,3], [4,5,6]):\n... print(i)\n... \n1\n2\n3\n4\n5\n6\n\n",
"This should do it, where directories is your list of directories:\nimport os\nimport itertools\n\ngenerators = [os.walk(d) for d in directories]\nfor root, dirs, files in itertools.chain(*generators):\n print root, dirs, files\n\n",
"def files_gen(topdir='.'):\n for root, dirs, files in os.walk(topdir):\n # ... do some stuff with files\n for f in files:\n yield os.path.join(root, f)\n # ... do other stuff\n\nfor f in files_gen():\n print f\n\n"
] | [
29,
19,
4
] | [
"Like this.\ndef threeGens( i, j, k ):\n for x in range(i):\n yield x\n for x in range(j):\n yield x\n for x in range(k):\n yield x\n\nWorks well. \n"
] | [
-1
] | [
"append",
"generator",
"python"
] | stackoverflow_0000571850_append_generator_python.txt |
Q:
Solr search with escaping solr reserved keywords
How do I query fields that contain solr reserved keywords such as ":" in solr?
For instance,
q = 'uri:http://www.example.com'
throws up an error for "http://www.example.com" containing reserved word ":"
A:
I just tested this and it seems that simply escaping ":" as "\:" does the trick:
q = 'uri:http\://www.example.com'
For the index of my own site I tend to simply store the path of the URL though, as I know the domain myself, so that wasn't an issue for me before. But if you index external URLs then of course you need the full URL.
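If you build queries like this often, a small helper along these lines can do the escaping (the character set is roughly Lucene's reserved list; adjust it for your Solr version):
import re

def solr_escape(value):
    # backslash-escape characters Solr/Lucene treat specially
    return re.sub(r'([+\-!(){}\[\]^"~*?:\\&|])', r'\\\1', value)

q = 'uri:%s' % solr_escape('http://www.example.com')
# -> uri:http\://www.example.com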
A:
Escape/replace Lucene reserved characters during indexing and store original value in separate field (stored="true" indexed="false" in schema). If you replace reserved characters with space, you'll get http www.example.com in indexed field and http://www.example.com in stored. Depending on the type of indexed field, you'd be able to query for exact value (if it is plain string) or for tokens (if it has analyzer).
| Solr search with escaping solr reserved keywords | How do I query fields that contain solr reserved keywords such as ":" in solr?
For instance,
q = 'uri:http://www.example.com'
throws up an error for "http://www.example.com" containing reserved word ":"
| [
"I just tested this and it seem that simply escaping \":\" like \":\" does the trick:\nq = 'uri:http\\://www.example.com'\n\nFor my the index of my own site I tend to simply store the path of the URL though as I know the domain myself so that wasn't an issue for me before. But if you index external URLs then of course you need the full URL.\n",
"Escape/replace Lucene reserved characters during indexing and store original value in separate field (stored=\"true\" indexed=\"false\" in schema). If you replace reserved characters with space, you'll get http www.example.com in indexed field and http://www.example.com in stored. Depending on the type of indexed field, you'd be able to query for exact value (if it is plain string) or for tokens (if it has analyzer).\n"
] | [
5,
1
] | [] | [] | [
"pysolr",
"python",
"solr"
] | stackoverflow_0000572599_pysolr_python_solr.txt |
Q:
Drawing a chart with proportional X axis in Python
Is there an easy way to draw a date/value chart in Python, if the "dates" axis had non-equidistant values?
For example, given these:
2009-02-01: 10
2009-02-02: 13
2009-02-07: 25
2009-03-01: 80
I'd like the chart to show that between the 2nd and 3nd value there's a longer gap than between the 1st and the 2nd.
I tried a few chart libraries but they all seem to assume that the X axis has non-scalar values..
(Side note: the chart should be exportable to PNG/GIF/whatever)
Thanks for your time!
A:
you should be able to do this with matplotlib barchart. you can use xticks to give the x-axis date values, and the 'left' sizes don't have to be homogeneous. see the documentation for barchart for a full list of parameters.
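A minimal sketch of what that might look like, using a date plot as a slight variation on the bar idea above (data hard-coded from the example in the question):
import datetime
import matplotlib
matplotlib.use('Agg')            # render straight to a file, no display needed
import matplotlib.pyplot as plt

dates = [datetime.date(2009, 2, 1), datetime.date(2009, 2, 2),
         datetime.date(2009, 2, 7), datetime.date(2009, 3, 1)]
values = [10, 13, 25, 80]

plt.plot_date(dates, values, '-o')   # x spacing stays proportional to the date gaps
plt.savefig('chart.png')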
A:
If it is enough for you to get a PNG with the chart, you can use the Google Chart API, which will give you a PNG image you can save. You could also build the URL from your data using Python.
EDIT: Image URL for your example (very basic).
| Drawing a chart with proportional X axis in Python | Is there an easy way to draw a date/value chart in Python, if the "dates" axis had non-equidistant values?
For example, given these:
2009-02-01: 10
2009-02-02: 13
2009-02-07: 25
2009-03-01: 80
I'd like the chart to show that between the 2nd and 3nd value there's a longer gap than between the 1st and the 2nd.
I tried a few chart libraries but they all seem to assume that the X axis has non-scalar values..
(Side note: the chart should be exportable to PNG/GIF/whatever)
Thanks for your time!
| [
"you should be able to do this with matplotlib barchart. you can use xticks to give the x-axis date values, and the 'left' sizes don't have to be homogeneous. see the documentation for barchart for a full list of parameters.\n",
"If it is enough for you to get a PNG with the chart, you can use Google Chart API which will give you a PNG image you can save. You could also use your data to parse the URL using Python.\nEDIT: Image URL for your example (very basic).\n"
] | [
2,
1
] | [] | [] | [
"charts",
"python"
] | stackoverflow_0000572808_charts_python.txt |
Q:
Python C-API Object Allocation
I want to use the new and delete operators for creating and destroying my objects.
The problem is python seems to break it into several stages. tp_new, tp_init and tp_alloc for creation and tp_del, tp_free and tp_dealloc for destruction. However c++ just has new which allocates and fully constructs the object and delete which destructs and deallocates the object.
Which of the python tp_* methods do I need to provide and what must they do?
Also I want to be able to create the object directly in c++ eg "PyObject *obj = new MyExtensionObject(args);" Will I also need to overload the new operator in some way to support this?
I also would like to be able to subclass my extension types in python, is there anything special I need to do to support this?
I'm using python 3.0.1.
EDIT:
ok, tp_init seems to make objects a bit too mutable for what I'm doing (e.g. take a Texture object: changing the contents after creation is fine, but changing fundamental aspects of it such as size, bit depth, etc. will break lots of existing c++ stuff that assumes those sorts of things are fixed). If I don't implement it, will it simply stop people calling __init__ AFTER it's constructed (or at least ignore the call, like tuple does)? Or should I have some flag that throws an exception or something if tp_init is called more than once on the same object?
Apart from that I think I've got most of the rest sorted.
extern "C"
{
//creation + destruction
PyObject* global_alloc(PyTypeObject *type, Py_ssize_t items)
{
return (PyObject*)new char[type->tp_basicsize + items*type->tp_itemsize];
}
void global_free(void *mem)
{
delete[] (char*)mem;
}
}
template<class T> class ExtensionType
{
PyTypeObject *t;
ExtensionType()
{
t = new PyTypeObject();//not sure on this one, what is the "correct" way to create an empty type object
memset((void*)t, 0, sizeof(PyTypeObject));
static PyVarObject init = {PyObject_HEAD_INIT, 0};
*((PyObject*)t) = init;
t->tp_basicsize = sizeof(T);
t->tp_itemsize = 0;
t->tp_name = "unknown";
t->tp_alloc = (allocfunc) global_alloc;
t->tp_free = (freefunc) global_free;
t->tp_new = (newfunc) T::obj_new;
t->tp_dealloc = (destructor)T::obj_dealloc;
...
}
...bunch of methods for changing stuff...
PyObject *Finalise()
{
...
}
};
template <class T> class PyObjectExtension : public PyObject
{
...
extern "C" static PyObject* obj_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
{
void *mem = (void*)subtype->tp_alloc(subtype, 0);
return (PyObject*)new(mem) T(args, kwds);
}
extern "C" static void obj_dealloc(PyObject *obj)
{
((T*)obj)->~T();
obj->ob_type->tp_free(obj);//most of the time this is global_free(obj)
}
...
};
class MyObject : PyObjectExtension<MyObject>
{
public:
static PyObject* InitType()
{
ExtensionType<MyObject> extType();
...sets other stuff...
return extType.Finalise();
}
...
};
A:
The documentation for these is at http://docs.python.org/3.0/c-api/typeobj.html and
http://docs.python.org/3.0/extending/newtypes.html describes how to make your own type.
tp_alloc does the low-level memory allocation for the instance. This is equivalent to malloc(), plus initializing the refcnt to 1. Python has its own allocator, PyType_GenericAlloc, but a type can implement a specialized allocator.
tp_new is the same as Python's __new__. It's usually used for immutable objects where the data is stored in the instance itself, as compared to a pointer to data. For example, strings and tuples store their data in the instance, instead of using a char * or a PyTuple *.
For this case, tp_new figures out how much memory is needed, based on the input parameters, and calls tp_alloc to get the memory, then initializes the essential fields. tp_new does not need to call tp_alloc. It can for example return a cached object.
tp_init is the same as Python's __init__. Most of your initialization should be in this function.
The distinction between __new__ and __init__ is called two-stage initialization, or two-phase initialization.
You say "c++ just has new" but that's not correct. tp_alloc corresponds to a custom arena allocator in C++, __new__ corresponds to a custom type allocator (a factory function), and __init__ is more like the constructor. That last link discusses more about the parallels between C++ and Python style.
Also read http://www.python.org/download/releases/2.2/descrintro/ for details about how __new__ and __init__ interact.
You write that you want to "create the object directly in c++". That's rather difficult because at the least you'll have to convert any Python exceptions that occurred during object instantiation into a C++ exception. You might try looking at Boost::Python for some help with this task. Or you can use a two-phase initialization. ;)
A:
I don't know the python APIs at all, but if python splits up allocation and initialization, you should be able to use placement new.
e.g.:
// tp_alloc
void *buffer = new char[sizeof(MyExtensionObject)];
// tp_init or tp_new (not sure what the distinction is there)
new (buffer) MyExtensionObject(args);
return static_cast<MyExtensionObject*>(buffer);
...
// tp_del
myExtensionObject->~MyExtensionObject(); // call dtor
// tp_dealloc (or tp_free? again I don't know the python apis)
delete [] (static_cast<char*>(static_cast<void*>(myExtensionObject)));
| Python C-API Object Allocation | I want to use the new and delete operators for creating and destroying my objects.
The problem is python seems to break it into several stages. tp_new, tp_init and tp_alloc for creation and tp_del, tp_free and tp_dealloc for destruction. However c++ just has new which allocates and fully constructs the object and delete which destructs and deallocates the object.
Which of the python tp_* methods do I need to provide and what must they do?
Also I want to be able to create the object directly in c++ eg "PyObject *obj = new MyExtensionObject(args);" Will I also need to overload the new operator in some way to support this?
I also would like to be able to subclass my extension types in python, is there anything special I need to do to support this?
I'm using python 3.0.1.
EDIT:
ok, tp_init seems to make objects a bit too mutable for what I'm doing (e.g. take a Texture object: changing the contents after creation is fine, but changing fundamental aspects of it such as size, bit depth, etc will break lots of existing c++ stuff that assumes those sorts of things are fixed). If I don't implement it, will it simply stop people calling __init__ AFTER it's constructed (or at least ignore the call, like tuple does)? Or should I have some flag that throws an exception or something if tp_init is called more than once on the same object?
Apart from that I think I've got most of the rest sorted.
extern "C"
{
//creation + destruction
PyObject* global_alloc(PyTypeObject *type, Py_ssize_t items)
{
return (PyObject*)new char[type->tp_basicsize + items*type->tp_itemsize];
}
void global_free(void *mem)
{
delete[] (char*)mem;
}
}
template<class T> class ExtensionType
{
PyTypeObject *t;
ExtensionType()
{
t = new PyTypeObject();//not sure on this one, what is the "correct" way to create an empty type object
memset((void*)t, 0, sizeof(PyTypeObject));
static PyVarObject init = {PyObject_HEAD_INIT, 0};
*((PyObject*)t) = init;
t->tp_basicsize = sizeof(T);
t->tp_itemsize = 0;
t->tp_name = "unknown";
t->tp_alloc = (allocfunc) global_alloc;
t->tp_free = (freefunc) global_free;
t->tp_new = (newfunc) T::obj_new;
t->tp_dealloc = (destructor)T::obj_dealloc;
...
}
...bunch of methods for changing stuff...
PyObject *Finalise()
{
...
}
};
template <class T> class PyObjectExtension : public PyObject
{
...
extern "C" static PyObject* obj_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
{
void *mem = (void*)subtype->tp_alloc(subtype, 0);
        return (PyObject*)new(mem) T(args, kwds);
}
extern "C" static void obj_dealloc(PyObject *obj)
{
        ((T*)obj)->~T(); //explicitly run the destructor on the instance
obj->ob_type->tp_free(obj);//most of the time this is global_free(obj)
}
...
};
class MyObject : public PyObjectExtension<MyObject>
{
public:
static PyObject* InitType()
{
        ExtensionType<MyObject> extType;
...sets other stuff...
return extType.Finalise();
}
...
};
| [
"The documentation for these is at http://docs.python.org/3.0/c-api/typeobj.html and \nhttp://docs.python.org/3.0/extending/newtypes.html describes how to make your own type.\ntp_alloc does the low-level memory allocation for the instance. This is equivalent to malloc(), plus initialize the refcnt to 1. Python has it's own allocator, PyType_GenericAlloc, but a type can implement a specialized allocator.\ntp_new is the same as Python's __new__. It's usually used for immutable objects where the data is stored in the instance itself, as compared to a pointer to data. For example, strings and tuples store their data in the instance, instead of using a char * or a PyTuple *.\nFor this case, tp_new figures out how much memory is needed, based on the input parameters, and calls tp_alloc to get the memory, then initializes the essential fields. tp_new does not need to call tp_alloc. It can for example return a cached object.\ntp_init is the same as Python's __init__. Most of your initialization should be in this function.\nThe distinction between __new__ and __init__ is called two-stage initialization, or two-phase initialization. \nYou say \"c++ just has new\" but that's not correct. tp_alloc corresponds a custom arena allocator in C++, __new__ corresponds to a custom type allocator (a factory function), and __init__ is more like the constructor. That last link discusses more about the parallels between C++ and Python style.\nAlso read http://www.python.org/download/releases/2.2/descrintro/ for details about how __new__ and __init__ interact.\nYou write that you want to \"create the object directly in c++\". That's rather difficult because at the least you'll have to convert any Python exceptions that occurred during object instantiation into a C++ exception. You might try looking at Boost::Python for some help with this task. Or you can use a two-phase initialization. ;)\n",
"I don't know the python APIs at all, but if python splits up allocation and initialization, you should be able to use placement new.\ne.g.:\n // tp_alloc\n void *buffer = new char[sizeof(MyExtensionObject)];\n // tp_init or tp_new (not sure what the distinction is there)\n new (buffer) MyExtensionObject(args);\n return static_cast<MyExtensionObject*>(buffer);\n\n ...\n // tp_del\n myExtensionObject->~MyExtensionObject(); // call dtor\n // tp_dealloc (or tp_free? again I don't know the python apis)\n delete [] (static_cast<char*>(static_cast<void*>(myExtensionObject)));\n\n"
] | [
11,
0
] | [] | [] | [
"c",
"c++",
"python",
"python_3.x",
"python_c_api"
] | stackoverflow_0000573275_c_c++_python_python_3.x_python_c_api.txt |
Q:
CPython internal structures
GAE has various limitations, one of which is the size of the biggest allocatable block of memory, amounting to 1Mb (now 10 times more, but that doesn't change the question). The limitation means that one cannot put more than some number of items in list(), as CPython would try to allocate a contiguous memory block for the element pointers. Having huge list()s can be considered bad programming practice, but even if no huge structure is created in the program itself, CPython maintains some behind the scenes.
It appears that CPython is maintaining a single global list of objects or something; i.e., an application that has many small objects tends to allocate bigger and bigger single blocks of memory.
My first idea was the gc module, and disabling it changes the application's behavior a bit, but some structures are still maintained.
The simplest short application that experiences the issue is:
a = b = []
number_of_lists = 8000000
for i in xrange(number_of_lists):
b.append([])
b = b[0]
Can anyone enlighten me how to prevent CPython from allocating huge internal structures when having many objects in application?
A:
On a 32-bit system, each of the 8000000 lists you create will allocate 20 bytes for the list object itself, plus 16 bytes for a vector of list elements. So you are trying to allocate at least (20+16) * 8000000 = 288000000 bytes, about 288 MB. And that's in the best case, if the system malloc only allocates exactly as much memory as requested.
I calculated the size of the list object as follows:
2 Pointers in the PyListObject structure itself (see listobject.h)
1 Pointer and one Py_ssize_t for the PyObject_HEAD part of the list object (see object.h)
one Py_ssize_t for the PyObject_VAR_HEAD (also in object.h)
The vector of list elements is slightly overallocated to avoid having to resize it at each append - see list_resize in listobject.c. The sizes are 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... Thus, your one-element lists will allocate room for 4 elements.
Your data structure is a somewhat pathological example, paying the price of a variable-sized list object without utilizing it - all your lists have only a single element. You could avoid the 12 bytes overallocation by using tuples instead of lists, but to further reduce the memory consumption, you will have to use a different data structure that uses fewer objects. It's hard to be more specific, as I don't know what you are trying to accomplish.
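If you are on Python 2.6+, sys.getsizeof makes the per-object difference easy to check; a rough sketch (exact numbers vary by platform and build):
import sys  # sys.getsizeof requires Python 2.6+

# a one-element list carries the list header plus a separately
# overallocated pointer vector; a one-element tuple stores its
# single pointer inline
print sys.getsizeof([None])
print sys.getsizeof((None,))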
A:
I'm a bit confused as to what you're asking. In that code example, nothing should be garbage collected, as you're never actually killing off any references. You're holding a reference to the top level list in a and you're adding nested lists (held in b at each iteration) inside of that. If you remove the 'a =', then you've got unreferenced objects.
Edit: In response to the first part, yes, Python holds a list of objects so it can know what to cull. Is that the whole question? If not, comment/edit your question and I'll do my best to help fill in the gaps.
A:
What are you trying to accomplish with the
a = b = []
and
b = b[0]
statements? It's certainly odd to see statements like that in Python, because they don't do what you might naively expect: in that example, a and b are two names for the same list (think pointers in C). If you're doing a lot of manipulation like that, it's easy to confuse the garbage collector (and yourself!) because you've got a lot of strange references floating around that haven't been properly cleared.
It's hard to diagnose what's wrong with that code without knowing why you want to do what it appears to be doing. Sure, it exposes a bit of interpreter weirdness... but I'm guessing you're approaching your problem in an odd way, and a more Pythonic approach might yield better results.
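To illustrate the aliasing, a quick interpreter session (sketch):
>>> a = b = []
>>> a is b        # one list object, two names
True
>>> b.append([])
>>> a             # the append is visible through both names
[[]]
>>> b = b[0]      # rebind b to the inner list; a still names the outer one
>>> a is b
False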
A:
So that you're aware of it, Python has its own allocator. You can disable it using --without-pymalloc during the configure step.
However, the largest arena is 256KB so that shouldn't be the problem. You can also compile Python with debugging enabled, using --with-pydebug. This would give you more information about memory use.
I suspect your hunch is right, and I'm sure oefe's diagnosis is correct. A list uses contiguous memory, so if your list gets too large for a system arena then you're out of luck. If you're really adventurous you can reimplement PyList to use multiple blocks, but that's going to be a lot of work since various bits of Python expect contiguous data.
| CPython internal structures | GAE has various limitations, one of which is size of biggest allocatable block of memory amounting to 1Mb (now 10 times more, but that doesn't change the question). The limitation means that one cannot put more then some number of items in list() as CPython would try to allocate contiguous memory block for element pointers. Having huge list()s can be considered bad programming practice, but even if no huge structure is created in program itself, CPython maintains some behind the scenes.
It appears that CPython is maintaining single global list of objects or something. I.e. application that has many small objects tend to allocate bigger and bigger single blocks of memory.
First idea was gc, and disabling it changes application behavior a bit but still some structures are maintained.
A simplest short application that experience the issue is:
a = b = []
number_of_lists = 8000000
for i in xrange(number_of_lists):
b.append([])
b = b[0]
Can anyone enlighten me how to prevent CPython from allocating huge internal structures when having many objects in application?
| [
"On a 32-bit system, each of the 8000000 lists you create will allocate 20 bytes for the list object itself, plus 16 bytes for a vector of list elements. So you are trying to allocate at least (20+16) * 8000000 = 20168000000 bytes, about 20 GB. And that's in the best case, if the system malloc only allocates exactly as much memory as requested.\nI calculated the size of the list object as follows:\n\n2 Pointers in the PyListObject structure itself (see listobject.h)\n1 Pointer and one Py_ssize_t for the PyObject_HEAD part of the list object (see object.h)\none Py_ssize_t for the PyObject_VAR_HEAD (also in object.h)\n\nThe vector of list elements is slightly overallocated to avoid having to resize it at each append - see list_resize in listobject.c. The sizes are 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... Thus, your one-element lists will allocate room for 4 elements.\nYour data structure is a somewhat pathological example, paying the price of a variable-sized list object without utilizing it - all your lists have only a single element. You could avoid the 12 bytes overallocation by using tuples instead of lists, but to further reduce the memory consumption, you will have to use a different data structure that uses fewer objects. It's hard to be more specific, as I don't know what you are trying to accomplish.\n",
"I'm a bit confused as to what you're asking. In that code example, nothing should be garbage collected, as you're never actually killing off any references. You're holding a reference to the top level list in a and you're adding nested lists (held in b at each iteration) inside of that. If you remove the 'a =', then you've got unreferenced objects.\nEdit: In response to the first part, yes, Python holds a list of objects so it can know what to cull. Is that the whole question? If not, comment/edit your question and I'll do my best to help fill in the gaps.\n",
"What are you trying to accomplish with the\na = b = []\n\nand\nb = b[0]\n\nstatements? It's certainly odd to see statements like that in Python, because they don't do what you might naively expect: in that example, a and b are two names for the same list (think pointers in C). If you're doing a lot of manipulation like that, it's easy to confuse the garbage collector (and yourself!) because you've got a lot of strange references floating around that haven't been properly cleared.\nIt's hard to diagnose what's wrong with that code without knowing why you want to do what it appears to be doing. Sure, it exposes a bit of interpreter weirdness... but I'm guessing you're approaching your problem in an odd way, and a more Pythonic approach might yield better results.\n",
"So that you're aware of it, Python has its own allocator. You can disable it using --without-pyalloc during the configure step.\nHowever, the largest arena is 256KB so that shouldn't be the problem. You can also compile Python with debugging enabled, using --with-pydebug. This would give you more information about memory use.\nI suspect your hunch and am sure that oefe's diagnosis are correct. A list uses contiguous memory, so if your list gets too large for a system arena then you're out of luck. If you're really adventurous you can reimplement PyList to use multiple blocks, but that's going to be a lot of work since various bits of Python expect contiguous data.\n"
] | [
8,
0,
0,
0
] | [] | [] | [
"cpython",
"data_structures",
"google_app_engine",
"internals",
"python"
] | stackoverflow_0000572780_cpython_data_structures_google_app_engine_internals_python.txt |
Q:
Any way to create a NumPy matrix with C API?
I read the documentation on NumPy C API I could find, but still wasn't able to find out whether there is a possibility to construct a matrix object with C API — not a two-dimensional array. The function is intended for work with math matrices, and I don't want strange results if the user calls matrix multiplication forgetting to convert this value from an array to a matrix (multiplication and exponentiation being the only difference that matrix subclass has).
A:
You can call any python callable with the PyObject_Call* functions.
PyObject *numpy = PyImport_ImportModule("numpy");
PyObject *numpy_matrix = PyObject_GetAttrString(numpy, "matrix");
PyObject *my_matrix = PyObject_CallFunction(numpy_matrix, "(s)", "0 0; 0 0");
This will create a matrix my_matrix of size 2x2.
EDIT: Changed references to numpy.zeros/numpy.ndarray to numpy.matrix instead.
I also found a good tutorial on the subject: http://starship.python.net/crew/hinsen/NumPyExtensions.html
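For reference, the three C calls above are equivalent to this pure-Python snippet:
import numpy
my_matrix = numpy.matrix("0 0; 0 0")  # the same 2x2 matrix the C code builds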
A:
numpy.matrix is an ordinary class defined in numpy/core/defmatrix.py. You can construct it using the C API just like any other instance of a user-defined Python class.
| Any way to create a NumPy matrix with C API? | I read the documentation on NumPy C API I could find, but still wasn't able to find out whether there is a possibility to construct a matrix object with C API — not a two-dimensional array. The function is intended for work with math matrices, and I don't want strange results if the user calls matrix multiplication forgetting to convert this value from an array to a matrix (multiplication and exponentiation being the only difference that matrix subclass has).
| [
"You can call any python callable with the PyObject_Call* functions.\nPyObject *numpy = PyImport_ImportModule(\"numpy\");\nPyObject *numpy_matrix = PyObject_GetAttrString(numpy, \"matrix\");\nPyObject *my_matrix = PyObject_CallFunction(numpy_matrix, \"(s)\", \"0 0; 0 0\");\n\nThis will create a matrix my_matrix of size 2x2.\nEDIT: Changed references to numpy.zeros/numpy.ndarray to numpy.matrix instead.\nI also found a good tutorial on the subject: http://starship.python.net/crew/hinsen/NumPyExtensions.html\n",
"numpy.matrix is an ordinary class defined in numpy/core/defmatrix.py. You can construct it using C API as any other instance of user-defined class in Python.\n"
] | [
6,
3
] | [] | [] | [
"numpy",
"python",
"python_c_api"
] | stackoverflow_0000573487_numpy_python_python_c_api.txt |
Q:
Using wget in python (Error Code Help me)
Heres my code.
import os, sys
if len(sys.argv) != 2:
sys.exit(1)
h = os.popen("wget -r %s") % sys.argv[1]
fil = open("links.txt","w")
dir = os.listdir("%s") % sys.argv[1]
for file in dir:
print file.replace("@","?")
fil.write("%s/"+file.replace("@","?")) % sys.argv[1]
fil.write("\n")
h.close()
Running it like this: python project.py http://google.com
gives me this error:
1.py:5 RuntimeWarning: tp_compare didnt return -1 or -2 for exception
h = os.popen("wget -r %s") % sys.argv[1]
Traceback (most recent call last):
File "1.py" line 5, in <module>
h = os.popen("wget -r %s") % sys.argv[1]
TypeError: unsupported operand type(s) for %: 'file' and 'str'
What am I doing wrong? (Still learning Python.) Any solution/tip?
I won't explain the code; I think you understand what I'm trying to do.
A:
h = os.popen("wget -r %s" % sys.argv[1])
use the subprocess module, os.popen is obsolete (see the sketch below)
python has urllib, you can consider using that to have pure python code
there is pycurl
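A rough sketch of the subprocess suggestion, mirroring the original wget call (the urllib line is just a single-file alternative):
import subprocess
import sys
import urllib

# run "wget -r URL" without going through a shell
subprocess.call(["wget", "-r", sys.argv[1]])

# or, for a single page in pure Python:
# urllib.urlretrieve(sys.argv[1], "page.html")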
A:
I think you want:
h = os.popen("wget -r %s" % sys.argv[1])
A:
You're putting the % operator in the wrong place: you need to put it directly after the format string:
h = os.popen("wget -r %s" % sys.argv[1])
...
dir = os.listdir("%s" % sys.argv[1])
...
fil.write(("%s/"+file.replace("@","?")) % sys.argv[1])
Alternatively, since you're just using %s, just do plain and simple string concatenation:
h = os.popen("wget -r " + sys.argv[1])
...
dir = os.listdir(sys.argv[1])
...
fil.write(sys.argv[1] + "/" + file.replace("@","?"))
| Using wget in python (Error Code Help me) | Heres my code.
import os, sys
if len(sys.argv) != 2:
sys.exit(1)
h = os.popen("wget -r %s") % sys.argv[1]
fil = open("links.txt","w")
dir = os.listdir("%s") % sys.argv[1]
for file in dir:
print file.replace("@","?")
fil.write("%s/"+file.replace("@","?")) % sys.argv[1]
fil.write("\n")
h.close()
running the it, like this python project.py http://google.com
gives me error code.
1.py:5 RuntimeWarning: tp_compare didnt return -1 or -2 for exception
h = os.popen("wget -r %s") % sys.argv[1]
Traceback (most recent call last):
File "1.py" line 5, in <module>
h = os.popen("wget -r %s") % sys.argv[1]
TypeError: unsupported operand type<s> for %: 'file' and 'str'
What are im going wrong. (Still learning python) Any solution/ tip?
I dont explain the code, i think you understand what im trying to do
| [
"\nh = os.popen(\"wget -r %s\" % sys.argv[1])\nuse the subprocess module, os.popen is obsolete\npython has urllib, you can consider using that to have pure python code\nthere is pycurl\n\n",
"I think you want:\nh = os.popen(\"wget -r %s\" % sys.argv[1])\n\n",
"You're putting the % operator in the wrong place: you need to put it directly after the format string:\nh = os.popen(\"wget -r %s\" % sys.argv[1])\n...\ndir = os.listdir(\"%s\" % sys.argv[1])\n...\nfil.write((\"%s/\"+file.replace(\"@\",\"?\")) % sys.argv[1])\n\nAlternatively, since you're just using %s, just do plain and simple string concatenation:\nh = os.popen(\"wget -r \" + sys.argv[1])\n...\ndir = os.listdir(sys.argv[1])\n...\nfil.write(sys.argv[1] + \"/\" + file.replace(\"@\",\"?\"))\n\n"
] | [
9,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000573914_python.txt |
Q:
Has anyone used SciPy with IronPython?
I've been able to use the standard Python modules from IronPython, but I haven't gotten SciPy to work yet. Has anyone been able to use SciPy from IronPython? What did you have to do to make it work?
Update: See Numerical computing in IronPython with Ironclad
Update: Microsoft is partnering with Enthought to make SciPy for .NET.
A:
Some of my workmates are working on Ironclad, a project that will make extension modules for CPython work in IronPython. It's still in development, but parts of numpy, scipy and some other modules already work. You should try it out to see whether the parts of scipy you need are supported.
It's an open-source project, so if you're interested you could even help. In any case, some feedback about what you're trying to do and what parts we should look at next is helpful too.
A:
Anything with components written in C (for example NumPy, which is a component of SciPy) will not work on IronPython as the external language interface works differently. Any C language component will probably not work unless it has been explicitly ported to work with IronPython.
You might have to dig into the individual modules and check to see which ones work or are pure python and find out which if any of the C-based ones have been ported yet.
| Has anyone used SciPy with IronPython? | I've been able to use the standard Python modules from IronPython, but I haven't gotten SciPy to work yet. Has anyone been able to use SciPy from IronPython? What did you have to do to make it work?
Update: See Numerical computing in IronPython with Ironclad
Update: Microsoft is partnering with Enthought to make SciPy for .NET.
| [
"Some of my workmates are working on Ironclad, a project that will make extension modules for CPython work in IronPython. It's still in development, but parts of numpy, scipy and some other modules already work. You should try it out to see whether the parts of scipy you need are supported. \nIt's an open-source project, so if you're interested you could even help. In any case, some feedback about what you're trying to do and what parts we should look at next is helpful too.\n",
"Anything with components written in C (for example NumPy, which is a component of SciPy) will not work on IronPython as the external language interface works differently. Any C language component will probably not work unless it has been explicitly ported to work with IronPython.\nYou might have to dig into the individual modules and check to see which ones work or are pure python and find out which if any of the C-based ones have been ported yet.\n"
] | [
12,
8
] | [] | [] | [
"ironpython",
"python",
"python.net",
"scipy"
] | stackoverflow_0000574604_ironpython_python_python.net_scipy.txt |
Q:
Include html part in a mail with python libgmail
I have a question about libgmail's usage: I need to send an HTML-formatted mail. I prepare my message with
ga = libgmail.GmailAccount(USERNAME,PASSWORD)
msg = MIMEMultipart('alternative')
msg.attach(part1)
msg.attach(part2)
...
ga.sendMessage(msg.as_string())
This way doesn't work; it seems it can't send msg with the sendMessage method.
What is the right way? :D
A:
If you refer to libgmail from sourceforge, you need to compose your messages with the email module.
Generate the HTML message as a MIME document, and include it as a part of a multipart MIME message. When you have a fully constructed multipart MIME message, pass it along as a string to libgmail's sendMessage, using the .as_string() method.
An example in the doc contains the code for a similar requirement:
# Imports needed for the snippet below:
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Create message container - the correct MIME type is multipart/alternative.
msg = MIMEMultipart('alternative')
msg['Subject'] = "Link"
msg['From'] = me
msg['To'] = you
...
# Record the MIME types of both parts - text/plain and text/html.
# ... text and html are strings with appropriate content.
part1 = MIMEText(text, 'plain')
part2 = MIMEText(html, 'html')
# Attach parts into message container.
# According to RFC 2046, the last part of a multipart message, in this case
# the HTML message, is best and preferred.
msg.attach(part1)
msg.attach(part2)
| Include html part in a mail with python libgmail | I've a question about its usage: i need to send an html formatted mail. I prepare my message with
ga = libgmail.GmailAccount(USERNAME,PASSWORD)
msg = MIMEMultipart('alternative')
msg.attach(part1)
msg.attach(part2)
...
ga.sendMessage(msg.as_string())
This way doesn't works, it seems can't send msg with sendMessage method.
What is the right way? : D
| [
"If you refer to libgmail from sourceforge, you need to compose your messages with the email module.\nGenerate the HTML message as a MIME document, and include it as a part of a multipart MIME message. When you have a fully constructed multipart MIME, pass it along as a string to the libgmail constructor, using to .as_string() method.\nAn example in the doc contains the code for a similar requirement:\n# Create message container - the correct MIME type is multipart/alternative.\nmsg = MIMEMultipart('alternative')\nmsg['Subject'] = \"Link\"\nmsg['From'] = me\nmsg['To'] = you\n...\n# Record the MIME types of both parts - text/plain and text/html.\n# ... text and html are strings with appropriate content.\npart1 = MIMEText(text, 'plain')\npart2 = MIMEText(html, 'html')\n\n# Attach parts into message container.\n# According to RFC 2046, the last part of a multipart message, in this case\n# the HTML message, is best and preferred.\nmsg.attach(part1)\nmsg.attach(part2)\n\n"
] | [
1
] | [] | [] | [
"gmail",
"html",
"libgmail",
"mime",
"python"
] | stackoverflow_0000574861_gmail_html_libgmail_mime_python.txt |
Q:
Docs for the internals of CPython Implementation
I am currently in the process of making an embedded system port of the CPython 3.0 Python interpreter and I'm particularly interested in any references or documentation that provides details about the design and structure of code for Release 3.0 or even about any of the 2.x releases.
One useful document I have found so far is this informational PEP on the implementation - which is a good overview - but is still pretty high level. Hoping to come across something that gives [much] more detail on more of the modules or perhaps even covers something about porting considerations.
A:
There's the documentation for the C API, which is essentially the API for the internals of Python. It won't cover porting details, though. The code itself is fairly well documented. You might try reading in and around the area you'll need to modify.
A:
Most of the documentation is stored in the minds of various core developers. :) A good resource for you would be the #python-dev IRC channel on freenode where many of them hang out.
There's also some scattered information on the Python wiki.
| Docs for the internals of CPython Implementation | I am currently in the process of making an embedded system port of the CPython 3.0 Python interpreter and I'm particularly interested in any references or documentation that provides details about the design and structure of code for Release 3.0 or even about any of the 2.x releases.
One useful document I have found so far is this informational PEP on the implementation - which is a good overview - but is still pretty high level. Hoping to come across something that gives [much] more detail on more of the modules or perhaps even covers something about porting considerations.
| [
"There's the documentation for the C API, which is essentially the API for the internals of Python. It won't cover porting details, though. The code itself is fairly well documented. You might try reading in and around the area you'll need to modify.\n",
"Most of the documentation is stored in the minds of various core developers. :) A good resource for you would be the #python-dev IRC channel on freenode where many of them hang out.\nThere's also some scattered information on the Python wiki.\n"
] | [
8,
1
] | [] | [] | [
"cpython",
"python"
] | stackoverflow_0000574004_cpython_python.txt |
Q:
Django and Sqlite Concurrency issue
I've done a bit of reading related to the concurrency issues with sqlite, but I don't see how they'd apply to Django since it's inherently single threaded. I'm not using any multiprocess modules either. I have absolutely no experience with concurrent programming either, so if someone can identify WHY the following code is causing an OperationalError: 'database is locked' I'd be grateful.
views.py
def screening(request, ovramt=None):
errors = []
if request.method == "POST":
form = ScreeningForm(request.POST)
if form.is_valid():
print "Woo valid!!"
return HttpResponse()
else: # GET
if ovramt is None:
o = Ovramt.objects.select_related(depth=1).latest("date_completed")
print "found?"
print o.id
else:
try:
o = Ovramt.objects.select_related(depth=1).get(id=ovramt)
except:
errors.append("OVRAMT NOT FOUND")
if o.residents.count() <= 0:
o.add_active_residents()
residents = list(o.residents)
models.py
def add_active_residents(self):
ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True)
for r in ssa_res:
self.residents.add(r.resident) # Fails Here
self.save()
The add_active_residents method works fine until it is called from the views module. Is there an open database connection in the view that prevents writing from the model? Does someone have an explanation for why this code errors?
A:
In the following method function
def add_active_residents(self):
ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True)
for r in ssa_res:
self.residents.add(r.resident) # Fails Here
self.save()
Why is there a select_related? You only really need the FK's of ssa_res items. Why do additional queries for related items?
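For instance, a leaner version of the method could look like this (a sketch based on the models shown in the question):
def add_active_residents(self):
    # only the FK targets are needed, so skip select_related entirely
    for r in SSA_Resident.objects.filter(ssa=self.ssa, active=True):
        self.residents.add(r.resident)
    self.save()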
A:
Are you using Python 2.6?
If so, this is (apparently) a known issue that can be mitigated by adding:
DATABASE_OPTIONS = {'timeout': 30}
to your settings.py
See http://code.djangoproject.com/ticket/9409
A:
My understanding is that only write operations will result in a db-locked condition.
http://www.sqlite.org/lockingv3.html
It's hard to say what the problem is without knowing how django is handling sqlite internally.
Speaking from using sqlite with standard cgi, I've noticed that in some cases it can take a 'long' time to release the lock. You may want to increase the timeout value mentioned by Matthew Christensen.
A:
Sounds like you are actually running a multithreaded application, despite what you say. I am a bit clueless about Django, but I would assume that even though it might be single-threaded, whatever debugging server, or production server you run your application in won't be "inherently single threaded".
| Django and Sqlite Concurrency issue | I've done a bit of reading related to the concurrency issues with sqlite, but I don't see how they'd apply to Django since it's inherently single threaded. I'm not using any multiprocess modules either. I have absolutely no experience with concurrent programming either, so if someone can identify WHY the following code is causing an OperationalError: 'database is locked' I'd be grateful.
views.py
def screening(request, ovramt=None):
errors = []
if request.method == "POST":
form = ScreeningForm(request.POST)
if form.is_valid():
print "Woo valid!!"
return HttpResponse()
else: # GET
if ovramt is None:
o = Ovramt.objects.select_related(depth=1).latest("date_completed")
print "found?"
print o.id
else:
try:
o = Ovramt.objects.select_related(depth=1).get(id=ovramt)
except:
errors.append("OVRAMT NOT FOUND")
if o.residents.count() <= 0:
o.add_active_residents()
residents = list(o.residents)
models.py
def add_active_residents(self):
ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True)
for r in ssa_res:
self.residents.add(r.resident) # Fails Here
self.save()
The add_active_residents method works fine, until it is called from the views module. Is there an open connection to the database open in the view which prevents writing from the model? Does someone have an explanation why this code will error?
| [
"In the following method function\ndef add_active_residents(self):\n ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True)\n for r in ssa_res:\n self.residents.add(r.resident) # Fails Here\n self.save()\n\nWhy is there a select_related? You only really need the FK's of ssa_res items. Why do additional queries for related items?\n",
"Are you using Python 2.6?\nIf so, this is (apparently) a known issue that can be mitigated by adding:\nDATABASE_OPTIONS = {'timeout': 30}\n\nto your settings.py\nSee http://code.djangoproject.com/ticket/9409\n",
"My understanding is that only write operations will result in a db-locked condition. \nhttp://www.sqlite.org/lockingv3.html\nIt's hard to say what the problem is without knowing how django is handling sqlite internally.\nSpeaking from using sqlite with standard cgi, I've noticed that in some cases it can take a 'long' time to release the lock. You may want to increase the timeout value mentioned by Matthew Christensen.\n",
"Sounds like you are actually running a multithreaded application, despite what you say. I am a bit clueless about Django, but I would assume that even though it might be single-threaded, whatever debugging server, or production server you run your application in won't be \"inherently single threaded\".\n"
] | [
4,
2,
2,
1
] | [] | [] | [
"concurrency",
"django",
"python",
"sqlite"
] | stackoverflow_0000572009_concurrency_django_python_sqlite.txt |
Q:
How does python close files that have been gc'ed?
I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:
>>> f = open('somefile.txt')
>>> del f
Just out of sheer curiosity, how does this work? I notice that file doesn't include a __del__ method.
A:
In CPython, at least, files are closed when the file object is deallocated. See the file_dealloc function in Objects/fileobject.c in the CPython source. Dealloc methods are sort-of like __del__ for C types, except without some of the problems inherent to __del__.
A:
Hence the with statement.
For Python 2.5, use
from __future__ import with_statement
(For Python 2.6 or 3.x, do nothing)
with open( "someFile", "rU" ) as aFile:
# process the file
pass
# At this point, the file was closed by the with statement.
# Bonus, it's also out of scope of the with statement,
# and eligible for GC.
A:
Python uses reference counting and deterministic destruction in addition to garbage collection. When there are no more references to an object, the object is released immediately. Releasing a file closes it.
This is different from, e.g., Java, where there is only nondeterministic garbage collection. This means you cannot know when the object is released, so you have to close the file manually.
Note that reference counting is not perfect. You can have objects with circular references, which are not reachable from the program. That's why Python has garbage collection in addition to reference counting.
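A small sketch of that deterministic behaviour in CPython:
f = open('somefile.txt')
g = f            # a second reference keeps the file object alive
del f            # refcount drops to 1; the file stays open
print g.closed   # False
del g            # refcount hits 0: the file object is freed and closed now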
A:
Best guess is that because the file type is a built-in type, the interpreter itself handles closing the file on garbage collection.
Alternatively, you are only checking after the python interpreter has exited, and all "leaked" file handles are closed anyways.
| How does python close files that have been gc'ed? | I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:
>>> f = open('somefile.txt')
>>> del f
Just out of sheer curiosity, how does this work? I notice that file doesn't include a __del__ method.
| [
"In CPython, at least, files are closed when the file object is deallocated. See the file_dealloc function in Objects/fileobject.c in the CPython source. Dealloc methods are sort-of like __del__ for C types, except without some of the problems inherent to __del__.\n",
"Hence the with statement.\nFor Python 2.5, use\nfrom __future__ import with_statement\n\n(For Python 2.6 or 3.x, do nothing)\nwith open( \"someFile\", \"rU\" ) as aFile:\n # process the file\n pass\n# At this point, the file was closed by the with statement.\n# Bonus, it's also out of scope of the with statement,\n# and eligible for GC.\n\n",
"Python uses reference counting and deterministic destruction in addition to garbage collection. When there is no more references to an object, the object is released immediately. Releasing a file closes it.\nThis is different than e.g. Java where there is only nondeterministic garbage collection. This means you connot know when the object is released, so you will have to close the file manually.\nNote that reference counting is not perfect. You can have objects with circular references, which is not reachable from the progam. Thats why Python has garbage collection in addition to reference counting.\n",
"Best guess is that because the file type is a built-in type, the interpreter itself handles closing the file on garbage collection.\nAlternatively, you are only checking after the python interpreter has exited, and all \"leaked\" file handles are closed anyways.\n"
] | [
20,
4,
2,
0
] | [] | [] | [
"del",
"file",
"garbage_collection",
"python"
] | stackoverflow_0000575278_del_file_garbage_collection_python.txt |
Q:
How can you extract all 6 letter Latin words to a list?
I need to have all 6 letter Latin words in a list.
I would also like to have words which follow the pattern Xyzzyx in a list.
I have used little Python.
A:
Regular expressions are your friend, my friend! Is this homework?
Here's an example that's close to what you want:
egrep "^\w{6}$" /usr/share/dict/words | egrep "(.)(.)(.)\3\2\1"
I'll leave it as an exercise for the reader to create a Latin word list and deal with the uppercase X in the second regex, but the general idea should be evident.
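Since the question mentions Python, here is a rough Python equivalent of those two greps (the word-list path is a placeholder; substitute an actual Latin word list):
import re

words = (line.strip() for line in open('/usr/share/dict/words'))
six = [w for w in words if re.match(r'\w{6}$', w)]
# Xyzzyx: the last three letters mirror the first three (case ignored)
mirrored = [w for w in six if re.match(r'(.)(.)(.)\3\2\1$', w, re.I)]
print mirrored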
A:
Note that unless your list contains all of the nouns' declensions and verbs' conjugations, your program won't produce anything like all the six-letter words in Latin.
For instance, your list probably contains only the nominative case of the nouns. First-declension nouns whose nominative case is five letters long (e.g. mensa) have a six-letter genitive case (e.g. mensae). All of the declensions contain cases where the noun's length is different from its nominative case.
The same's even more true of verbs, each of which have (at least) four principal parts, which can be of varying length, and whose conjugations can be of varying lengths as well. So the first-person singular present tense of lego is four letters long, but its infinitive legere is six; porto is five in the first-person singular but six in the the second-person singular, portas.
I suppose it's possible in principle to build an engine that programmatically declines and conjugates Latin words given enough metainformation about each word. Python would actually be a pretty good language to do that in. But that's a much bigger task than just writing a regular expression.
| How can you extract all 6 letter Latin words to a list? | I need to have all 6 letter Latin words in a list.
I would also like to have words which follow the pattern Xyzzyx in a list.
I have used little Python.
| [
"Regular expressions are your friend, my friend! Is this homework?\nHere's an example that's close to what you want:\negrep \"^\\w{6}$\" /usr/share/dict/words | egrep \"(.)(.)(.)\\3\\2\\1\"\n\nI'll leave it as an exercise for the reader to create a latin word list and deal with the uppercase X in the second regex, but the general idea should be evident.\n",
"Note that unless your list contains all of the nouns' declensions and verbs' conjugations, your program won't produce anything like all the six-letter words in Latin.\nFor instance, your list probably contains only the nominative case of the nouns. First-declension nouns whose nominative case is five letters long (e.g. mensa) have a six-letter genitive case (e.g. mensae). All of the declensions contain cases where the noun's length is different from its nominative case.\nThe same's even more true of verbs, each of which have (at least) four principal parts, which can be of varying length, and whose conjugations can be of varying lengths as well. So the first-person singular present tense of lego is four letters long, but its infinitive legere is six; porto is five in the first-person singular but six in the the second-person singular, portas.\nI suppose it's possible in principle to build an engine that programmatically declines and conjugates Latin words given enough metainformation about each word. Python would actually be a pretty good language to do that in. But that's a much bigger task than just writing a regular expression.\n"
] | [
5,
0
] | [] | [] | [
"data_mining",
"python",
"regex"
] | stackoverflow_0000574952_data_mining_python_regex.txt |
Q:
Sorting dictionary keys in python
I have a dict where each key references an int value. What's the best way to sort the keys into a list depending on the values?
A:
I like this one:
sorted(d, key=d.get)
A:
>>> mydict = {'a':1,'b':3,'c':2}
>>> sorted(mydict, key=lambda key: mydict[key])
['a', 'c', 'b']
A:
my_list = sorted(mydict.items(), key=lambda x: x[1])  # yields (key, value) pairs sorted by value
A:
[v[0] for v in sorted(foo.items(), key=lambda(k,v): (v,k))]
| Sorting dictionary keys in python | I have a dict where each key references an int value. What's the best way to sort the keys into a list depending on the values?
| [
"I like this one:\nsorted(d, key=d.get)\n\n",
">>> mydict = {'a':1,'b':3,'c':2}\n>>> sorted(mydict, key=lambda key: mydict[key])\n['a', 'c', 'b']\n\n",
"my_list = sorted(dict.items(), key=lambda x: x[1])\n\n",
"[v[0] for v in sorted(foo.items(), key=lambda(k,v): (v,k))]\n\n"
] | [
357,
107,
17,
4
] | [] | [] | [
"python",
"sorting"
] | stackoverflow_0000575819_python_sorting.txt |
Q:
Succesive calls to cProfile/pstats no updating properly
I'm trying to make successive calls to some profiler code; however, on the second call to the function the update time of the profile file changes but the actual profiler stats stay the same. This isn't the code I'm running, but it's as simplified an example as I can come up with that shows the same behaviour.
On running, the first time Ctrl+C is pressed it shows stats; the second time it does the same thing, but rather than being fully updated as expected only the file's timestamp is; and the third time the program actually quits. If trying it, ideally wait a few seconds between Ctrl+C presses.
Adding profiler.enable() after the 8th line does give full updates between calls; however, it adds a lot of extra profiler data for things I don't want to be profiling.
Any suggestions for a happy medium where I get full updates but without the extra fluff?
import signal, sys, time, cProfile, pstats
call = 0
def sigint_handler(signal, frame):
global call
if call < 2:
profiler.dump_stats("profile.prof")
stats = pstats.Stats("profile.prof")
stats.strip_dirs().sort_stats('cumulative').print_stats()
call += 1
else:
sys.exit()
def wait():
time.sleep(1)
def main_io_loop():
signal.signal(signal.SIGINT, sigint_handler)
while 1:
wait()
profiler = cProfile.Profile()
profiler.runctx("main_io_loop()", globals(), locals())
A:
Calling profiler.dump_stats (implemented in cProfile.py) calls profiler.create_stats, which in turn calls profiler.disable().
You need to call profiler.enable() to make it work again. No, this is not documented.
The following seems to do what you want. Note that I got rid of the intermediate data file since pstats.Stats knows how to get the data from the profiler directly.
import signal, sys, time, pstats, cProfile
call = 0
def sigint_handler(signal, frame):
global call
if call < 2:
stats = pstats.Stats(profiler)
stats.strip_dirs().sort_stats('cumulative').print_stats()
profiler.enable()
call += 1
else:
sys.exit()
def wait():
time.sleep(1)
def main_io_loop():
signal.signal(signal.SIGINT, sigint_handler)
while 1:
wait()
profiler = cProfile.Profile()
profiler.runctx("main_io_loop()", globals(), locals())
| Succesive calls to cProfile/pstats no updating properly | I'm trying to make successive calls of some profiler code however on the second call to the function the update time of the profile file changes but the actual profiler stats stay the same. This isn't the code I'm running but it's as simplified an example I can come up with that shows the same behaviour.
On running, the first time ctrl+c is pressed it shows stats, second time same thing but rather than being fully updated as expected only the time is, and third time program actually quits. If trying, ideally wait at a few seconds between ctrl+c presses.
Adding profiler.enable() after the 8th lines does give full updates between calls however it adds a lot of extra profiler data for things I don't want to be profiling.
Any suggestions for a happy medium where I get full updates but without the extra fluff?
import signal, sys, time, cProfile, pstats
call = 0
def sigint_handler(signal, frame):
global call
if call < 2:
profiler.dump_stats("profile.prof")
stats = pstats.Stats("profile.prof")
stats.strip_dirs().sort_stats('cumulative').print_stats()
call += 1
else:
sys.exit()
def wait():
time.sleep(1)
def main_io_loop():
signal.signal(signal.SIGINT, sigint_handler)
while 1:
wait()
profiler = cProfile.Profile()
profiler.runctx("main_io_loop()", globals(), locals())
| [
"Calling profiler.dump_stats (implemented in cProfile.py) calls profiler.create_stats, which in turns calls profiler.disable().\nYou need to call profiler.enable() to make it work again. No, this is not documented.\nThe following seems to do what you want. Note that I got rid of the intermediate data file since pstats.Stats knows how to get the data from the profiler directly.\nimport signal, sys, time, pstats, cProfile\n\ncall = 0\n\ndef sigint_handler(signal, frame):\n global call\n if call < 2:\n stats = pstats.Stats(profiler)\n stats.strip_dirs().sort_stats('cumulative').print_stats()\n profiler.enable()\n call += 1\n else:\n sys.exit()\n\ndef wait():\n time.sleep(1)\n\ndef main_io_loop():\n signal.signal(signal.SIGINT, sigint_handler)\n while 1:\n wait()\n\nprofiler = cProfile.Profile()\nprofiler.runctx(\"main_io_loop()\", globals(), locals())\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0000575325_python.txt |
Q:
Is there a special trick to downloading a zip file and writing it to disk with Python?
I am FTPing a zip file from a remote FTP site using Python's ftplib. I then attempt to write it to disk. The file write works, however most attempts to open the zip using WinZip or WinRar fail; both apps claim the file is corrupted. Oddly however, when right clicking and attempting to extract the file using WinRar, the file will extract.
So to be clear, the file write will work, but will not open inside the popular zip apps, but will decompress using those same apps. Note that the Python zipfile module never fails to extract the zips.
Here is the code that I'm using to get the zip file from the FTP site (please ignore the bad tabbing, that's not the issue).
filedata = None
def appender(chunk):
global filedata
filedata += chunk
def getfile(filename):
try:
ftp = None
try:
ftp = FTP(address)
ftp.login('user', 'password')
except Exception, e:
print e
command = 'RETR ' + filename
idx = filename.rfind('/')
path = filename[0:idx]
ftp.cwd(path)
fileonly = filename[idx+1:len(filename)]
ftp.retrbinary('RETR ' + filename, appender)
global filedata
data = filedata
ftp.close()
filedata = ''
return data
except Exception, e:
print e
data = getfile('/archives/myfile.zip')
file = open(pathtoNTFileShare, 'wb')
file.write(data)
file.close()
A:
Pass file.write directly to the retrbinary function instead of passing appender. This will work, and it will also not use that much RAM when you are downloading a big file.
If you'd like the data stored inside a variable though, you can also have a variable named:
blocks = []
Then pass to retrbinary instead of appender:
blocks.append
Your current appender function is wrong: filedata starts out as None, so += raises a TypeError on the first chunk instead of appending the binary data.
As mentioned by @Lee B you can also use urllib2 or Curl. But your current code is almost correct if you make the small modifications I mentioned above.
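Putting those modifications together, a minimal sketch of the streaming version (reusing ftp, filename, and pathtoNTFileShare from the question):
outfile = open(pathtoNTFileShare, 'wb')
ftp.retrbinary('RETR ' + filename, outfile.write)  # each chunk goes straight to disk
outfile.close()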
A:
I've never used that library, but urllib2 works fine, and is more straightforward. Curl is even better.
Looking at your code, I can see a couple of things wrong. Your exception catching only prints the exception, then continues. For fatal errors like not getting an FTP connection, they need to print the message and then exit. Also, your filedata starts off as None, then your appender uses += to add to that, so you're trying to append a string + None, which gives a TypeError when I try it here. I'm surprised it's working at all; I would have guessed that the appender would throw an exception, and so the FTP copy would abort.
While re-reading, I just noticed another answer about the use of += on filedata. That could well be it; since filedata starts off as None, the very first append raises a TypeError rather than building up the data. Your best bet there is to have the file open (let's call it outfile), and use your appender to just outfile.write(chunk).
| Is there a special trick to downloading a zip file and writing it to disk with Python? | I am FTPing a zip file from a remote FTP site using Python's ftplib. I then attempt to write it to disk. The file write works, however most attempts to open the zip using WinZip or WinRar fail; both apps claim the file is corrupted. Oddly however, when right clicking and attempting to extract the file using WinRar, the file will extract.
So to be clear, the file write will work, but will not open inside the popular zip apps, but will decompress using those same apps. Note that the Python zipfile module never fails to extract the zips.
Here is the code that I'm using to get the zip file from the FTP site (please ignore the bad tabbing, that's not the issue).
filedata = None
def appender(chunk):
global filedata
filedata += chunk
def getfile(filename):
try:
ftp = None
try:
ftp = FTP(address)
ftp.login('user', 'password')
except Exception, e:
print e
command = 'RETR ' + filename
idx = filename.rfind('/')
path = filename[0:idx]
ftp.cwd(path)
fileonly = filename[idx+1:len(filename)]
ftp.retrbinary('RETR ' + filename, appender)
global filedata
data = filedata
ftp.close()
filedata = ''
return data
except Exception, e:
print e
data = getfile('/archives/myfile.zip')
file = open(pathtoNTFileShare, 'wb')
file.write(data)
file.close()
| [
"Pass file.write directly inside the retrbinary function instead of passing appender. This will work and it will also not use that much RAM when you are downloading a big file. \nIf you'd like the data stored inside a variable though, you can also have a variable named: \nblocks = []\n\nThen pass to retrbinary instead of appender: \nblocks.append\n\nYour current appender function is wrong. += will not work correctly when there is binary data because it will try to do a string append and stop at the first NULL it sees. \nAs mentioned by @Lee B you can also use urllib2 or Curl. But your current code is almost correct if you make the small modifications I mentioned above. \n",
"I've never used that library, but urllib2 works fine, and is more straightforward. Curl is even better.\nLooking at your code, I can see a couple of things wrong. Your exception catching only prints the exception, then continues. For fatal errors like not getting an FTP connection, they need to print the message and then exit. Also, your filedata starts off as None, then your appender uses += to add to that, so you're trying to append a string + None, which gives a TypeError when I try it here. I'm surprised it's working at all; I would have guessed that the appender would throw an exception, and so the FTP copy would abort.\nWhile re-reading, I just noticed another answer about use of += on binary data. That could well be it; python tries to be smart sometimes, and could be \"helping\" when you join strings with whitespace or NULs in them, or something like that. Your best bet there is to have the file open (let's call it outfile), and use your appender to just outfile.write(chunk).\n"
] | [
2,
1
] | [] | [] | [
"ftp",
"ftplib",
"python"
] | stackoverflow_0000576238_ftp_ftplib_python.txt |
Q:
Django Project structure, recommended structure to share an extended auth "User" model across apps?
I'm wondering what the common project/application structure is when the user model is extended/sub-classed and the resulting User model is shared and used across multiple apps.
I'd like to reference the same user model in multiple apps.
I haven't built the login interface yet, so I'm not sure how it should fit together.
The following comes to mind:
project.loginapp.app1
project.loginapp.app2
Is there a common pattern for this situation?
Would login best be handled by a 'login app'?
Similar to this question but more specific.
django application configuration
UPDATE
Clarified my use-case above.
I'd like to add fields (extend or subclass?) to the existing auth user model. And then reference that model in multiple apps.
A:
Why are you extending User? Please clarify.
If you're adding more information about the users, you don't need to roll your own user and auth system. Django's version of that is quite solid. The user management is located in django.contrib.auth.
If you need to customize the information stored with users, first define a model such as
class Profile(models.Model):
...
    user = models.ForeignKey("auth.User", unique=True)
and then set
AUTH_PROFILE_MODULE = "appname.profile"
in your settings.py
The advantage of setting this allows you to use code like this in your views:
def my_view(request):
profile = request.user.get_profile()
etc...
If you're trying to provide more ways for users to authenticate, you can add an auth backend. Extend or re-implement django.contrib.auth.backends.ModelBackend and set it as
your AUTHENTICATION_BACKENDS in settings.py.
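For example, the settings entry could look like this (the custom backend path is hypothetical):
AUTHENTICATION_BACKENDS = (
    'myproject.backends.MyCustomBackend',  # hypothetical custom backend
    'django.contrib.auth.backends.ModelBackend',  # keep the default as a fallback
)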
If you want to make use of a different permissions or groups concept than is provided by django, there's nothing that will stop you. Django makes use of those two concepts only in django.contrib.admin (That I know of), and you are free to use some other concept for those topics as you see fit.
A:
You should check first if the contrib.auth module satisfies your needs, so you don't have to reinvent the wheel:
http://docs.djangoproject.com/en/dev/topics/auth/#topics-auth
edit:
Check this snippet that creates a UserProfile after the creation of a new User.
from django.db.models.signals import post_save
from django.contrib.auth.models import User
# UserProfile is the profile model defined in your own app's models

def create_user_profile_handler(sender, instance, created, **kwargs):
if not created: return
user_profile = UserProfile.objects.create(user=instance)
user_profile.save()
post_save.connect(create_user_profile_handler, sender=User)
A:
I think the 'project/app' names are badly chosen; it's more like 'site/module'. An app can be very useful without having views, for example.
Check the 2008 DjangoCon talks on YouTube, especially the one about reusable apps; it will make you think totally differently about how to structure your project.
| Django Project structure, recommended structure to share an extended auth "User" model across apps? | I'm wondering what the common project/application structure is when the user model extended/sub-classed and this Resulting User model is shared and used across multiple apps.
I'd like to reference the same user model in multiple apps.
I haven't built the login interface yet, so I'm not sure how it should fit together.
The following comes to mind:
project.loginapp.app1
project.loginapp.app2
Is there a common pattern for this situation?
Would login best be handled by a 'login app'?
Similar to this question but more specific.
django application configuration
UPDATE
Clarified my use-case above.
I'd like to add fields (extend or subclass?) to the existing auth user model. And then reference that model in multiple apps.
| [
"Why are you extending User? Please clarify.\nIf you're adding more information about the users, you don't need to roll your own user and auth system. Django's version of that is quite solid. The user management is located in django.contrib.auth.\nIf you need to customize the information stored with users, first define a model such as \nclass Profile(models.Model):\n ...\n user = models.ForeignKey(\"django.contrib.auth.models.User\", unique=True)\n\nand then set \nAUTH_PROFILE_MODULE = \"appname.profile\"\n\nin your settings.py\nThe advantage of setting this allows you to use code like this in your views:\ndef my_view(request):\n profile = request.user.get_profile()\n etc...\n\nIf you're trying to provide more ways for users to authenticate, you can add an auth backend. Extend or re-implement django.contrib.auth.backends.ModelBackend and set it as\nyour AUTHENTICATION_BACKENDS in settings.py. \nIf you want to make use of a different permissions or groups concept than is provided by django, there's nothing that will stop you. Django makes use of those two concepts only in django.contrib.admin (That I know of), and you are free to use some other concept for those topics as you see fit.\n",
"You should check first if the contrib.auth module satisfies your needs, so you don't have to reinvent the wheel:\nhttp://docs.djangoproject.com/en/dev/topics/auth/#topics-auth\nedit:\nCheck this snippet that creates a UserProfile after the creation of a new User.\ndef create_user_profile_handler(sender, instance, created, **kwargs):\n if not created: return\n\n user_profile = UserProfile.objects.create(user=instance)\n user_profile.save()\n\npost_save.connect(create_user_profile_handler, sender=User) \n\n",
"i think the 'project/app' names are badly chosen. it's more like 'site/module'. an app can be very useful without having views, for example.\ncheck the 2008 DjangoCon talks on YouTube, especially the one about reusable apps, it will make you think totally different about how to structure your project.\n"
] | [
7,
3,
2
] | [] | [] | [
"django",
"django_models",
"django_project_architect",
"python"
] | stackoverflow_0000576345_django_django_models_django_project_architect_python.txt |
Q:
How to convert rational and decimal number strings to floats in python?
How can I convert strings which can denote decimal or rational numbers to floats
>>> ["0.1234", "1/2"]
['0.1234', '1/2']
I'd want [0.1234, 0.5].
eval is what I was thinking but no luck:
>>> eval("1/2")
0
A:
I'd parse the string if conversion fails:
>>> def convert(s):
try:
return float(s)
except ValueError:
num, denom = s.split('/')
return float(num) / float(denom)
...
>>> convert("0.1234")
0.1234
>>> convert("1/2")
0.5
Generally using eval is a bad idea, since it's a security risk. Especially if the string being evaluated came from outside the system.
A:
As others have pointed out, using eval is potentially a security risk, and certainly a bad habit to get into.
(if you don't think it's as risky as exec, imagine evaling something like: __import__('os').system('rm -rf /'))
However, if you have python 2.6 or up, you can use ast.literal_eval, for which the string provided:
may only consist of the following
Python literal structures: strings,
numbers, tuples, lists, dicts,
booleans, and None.
Thus it should be quite safe :-)
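For example (a quick illustration -- note that literal_eval accepts only literals, so it handles the decimal string but not the "1/2" form, which still needs manual parsing):
import ast

print ast.literal_eval("0.1234")   # 0.1234 -- a plain literal is fine
try:
    ast.literal_eval("1/2")        # division is an expression, not a literal
except (ValueError, SyntaxError):
    print "fractions still need to be split on '/' by hand"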
A:
Another option (also only for 2.6 and up) is the fractions module.
>>> from fractions import Fraction
>>> Fraction("0.1234")
Fraction(617, 5000)
>>> Fraction("1/2")
Fraction(1, 2)
>>> float(Fraction("0.1234"))
0.1234
>>> float(Fraction("1/2"))
0.5
A:
Use from __future__ import division to get the behavior you want. Then, in a pinch, you can do something like
from __future__ import division
strings = ["0.1234", "1/2", "2/3"]
numbers = map(eval, strings)
to get a list of floats out of your strings. If you want to do this the "right" way, don't use eval(), but instead write a function that accepts a string and calls float() on it if it contains no slash, or parses the string and divides the numerator and denominator if there's a slash in it.
One way to do it:
def parse_float_string(x):
parts = x.split('/', 1)
if len(parts) == 1:
return float(x)
elif len(parts) == 2:
return float(parts[0])/float(parts[1])
else:
raise ValueError
Then just map(parse_float_string, strings) will get you your list.
A:
The / operator does integer division. Try:
>>> eval("1.0*" + "1/2")
0.5
Because eval() is potentially dangerous, you should always check precisely what you are passing into it:
>>> import re
>>> s = "1/2"
>>> if re.match(r"\d+/\d+$", s):
... eval("1.0*" + s)
...
0.5
However, if you go to the trouble of matching the input against a regex in the first place, you might as well use r"(\d+)/(\d+)$" to extract the numerator and denominator, do the division yourself, and entirely avoid eval():
>>> m = re.match(r"(\d+)/(\d+)$", s)
>>> if m:
... float(m.group(1)) / float(m.group(2))
...
0.5
A:
The problem with eval is that, in Python (2.x), the quotient of two integers is an integer. So, you have several choices.
The first is simply to make integer division return floats:
from __future__ import division
The other is to split the rational number:
reduce(lambda x, y: x / y, map(float, rat_str.split("/")))
Where rat_str is the string with a rational number.
A:
In Python 3, this should work.
>>> x = ["0.1234", "1/2"]
>>> [eval(i) for i in x]
[0.1234, 0.5]
A:
sympy can help you out here:
import sympy
half = sympy.Rational('1/2')
p1234 = sympy.Rational('0.1234')
print '%f, %f" % (half, p1234)
A:
That's because 1 and 2 are interpreted by Python as integers and not floats. It needs to be 1.0/2.0 or some mix of that.
A:
The suggestions with from __future__ import division combined with eval will certainly work.
It's probably worth pointing out that the suggestions that don't use eval but rather parse the string do so because eval is dangerous: if there is some way for an arbitrary string to get sent to eval, then your system is vulnerable. So it's a bad habit. (But if this is just quick and dirty code, it's probably not that big a deal!)
| How to convert rational and decimal number strings to floats in python? | How can I convert strings which can denote decimal or rational numbers to floats
>>> ["0.1234", "1/2"]
['0.1234', '1/2']
I'd want [0.1234, 0.5].
eval is what I was thinking but no luck:
>>> eval("1/2")
0
| [
"I'd parse the string if conversion fails:\n>>> def convert(s):\n try:\n return float(s)\n except ValueError:\n num, denom = s.split('/')\n return float(num) / float(denom)\n...\n\n>>> convert(\"0.1234\")\n0.1234\n\n>>> convert(\"1/2\")\n0.5\n\nGenerally using eval is a bad idea, since it's a security risk. Especially if the string being evaluated came from outside the system.\n",
"As others have pointed out, using eval is potentially a security risk, and certainly a bad habit to get into.\n(if you don't think it's as risky as exec, imagine evaling something like: __import__('os').system('rm -rf /'))\nHowever, if you have python 2.6 or up, you can use ast.literal_eval, for which the string provided:\n\nmay only consist of the following\n Python literal structures: strings,\n numbers, tuples, lists, dicts,\n booleans, and None.\n\nThus it should be quite safe :-)\n",
"Another option (also only for 2.6 and up) is the fractions module.\n>>> from fractions import Fraction\n>>> Fraction(\"0.1234\")\nFraction(617, 5000)\n>>> Fraction(\"1/2\")\nFraction(1, 2)\n>>> float(Fraction(\"0.1234\"))\n0.1234\n>>> float(Fraction(\"1/2\"))\n0.5\n\n",
"Use from __future__ import division to get the behavior you want. Then, in a pinch, you can do something like \nfrom __future__ import division\nstrings = [\"0.1234\", \"1/2\", \"2/3\"]\nnumbers = map(eval, strings)\n\nto get a list of floats out of your strings. If you want to do this the \"right\" way, don't use eval(), but instead write a function that accepts a string and calls float() on it if it contains no slash, or parses the string and divides the numerator and denominator if there's a slash in it.\nOne way to do it:\ndef parse_float_string(x)\n parts = x.split('/', 1)\n if len(parts) == 1:\n return float(x)\n elif len(parts) == 2:\n return float(parts[0])/float(parts[1])\n else:\n raise ValueError\n\nThen just map(parse_float_string, strings) will get you your list.\n",
"The / operator does integer division. Try:\n>>> eval(\"1.0*\" + \"1/2\")\n0.5\n\nBecause eval() is potentially dangerous, you should always check precisely what you are passing into it:\n>>> import re\n>>> s = \"1/2\"\n>>> if re.match(r\"\\d+/\\d+$\", s):\n... eval(\"1.0*\" + s)\n...\n0.5\n\nHowever, if you go to the trouble of matching the input against a regex in the first place, you might as well use r\"(\\d+)/(\\d+)$\" to extract the numerator and denominator, do the division yourself, and entirely avoid eval():\n>>> m = re.match(r\"(\\d+)/(\\d+)$\", s)\n>>> if m:\n... float(m.group(1)) / float(m.group(2))\n...\n0.5\n\n",
"The problem with eval is that, as in python, the quotient of integers is an integer. So, you have several choices.\nThe first is simply to make integer division return floats:\nfrom __future__ import division\n\nThe other is to split the rational number:\nreduce(lambda x, y: x*y, map(int, rat_str.split(\"/\")), 1)\n\nWhere rat_str is the string with a rational number.\n",
"In Python 3, this should work.\n>>> x = [\"0.1234\", \"1/2\"]\n>>> [eval(i) for i in x]\n[0.1234, 0.5]\n\n",
"sympy can help you out here:\nimport sympy\n\nhalf = sympy.Rational('1/2')\np1234 = sympy.Rational('0.1234')\nprint '%f, %f\" % (half, p1234)\n\n",
"That's because 1 and 2 are interpreted by Python as integers and not floats. It needs to be 1.0/2.0 or some mix of that.\n",
"The suggestions with from __future__ import division combined with eval will certainly work.\nIt's probably worth pointing out that the suggestions that don't use eval but rather parse the string do so because eval is dangerous: if there is some way for an arbitrary string to get sent to eval, then your system is vulnerable. So it's a bad habit. (But if this is just quick and dirty code, it's probably not that big a deal!)\n"
] | [
19,
7,
7,
4,
3,
2,
1,
1,
0,
0
] | [] | [] | [
"python",
"rational_numbers"
] | stackoverflow_0000575925_python_rational_numbers.txt |
Q:
Are Python threads buggy?
A reliable coder friend told me that Python's current multi-threading implementation is seriously buggy - enough to avoid using it altogether. What can be said about this rumor?
A:
Python threads are good for concurrent I/O programming. Threads are swapped out of the CPU as soon as they block waiting for input from file, network, etc. This allows other Python threads to use the CPU while others wait. This would allow you to write a multi-threaded web server or web crawler, for example.
However, Python threads are serialized by the GIL when they enter the interpreter core. This means that if two threads are crunching numbers, only one can run at any given moment. It also means that you can't take advantage of multi-core or multi-processor architectures.
There are solutions like running multiple Python interpreters concurrently, using a C-based threading library. This is not for the faint of heart and the benefits might not be worth the trouble. Let's hope for an all-Python solution in a future release.
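As a rough sketch of the I/O-bound case (hypothetical URLs, error handling omitted):
import threading
import urllib2

def fetch(url):
    # The GIL is released while the socket read blocks,
    # so the other threads keep running in the meantime.
    data = urllib2.urlopen(url).read()
    print "%s: %d bytes" % (url, len(data))

threads = [threading.Thread(target=fetch, args=(url,))
           for url in ["http://example.com", "http://example.org"]]
for t in threads:
    t.start()
for t in threads:
    t.join()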
A:
The standard implementation of Python (generally known as CPython as it is written in C) uses OS threads, but since there is the Global Interpreter Lock, only one thread at a time is allowed to run Python code. But within those limitations, the threading libraries are robust and widely used.
If you want to be able to use multiple CPU cores, there are a few options. One is to use multiple python interpreters concurrently, as mentioned by others. Another option is to use a different implementation of Python that does not use a GIL. The two main options are Jython and IronPython.
Jython is written in Java, and is now fairly mature, though some incompatibilities remain. For example, the web framework Django does not run perfectly yet, but is getting closer all the time. Jython is great for thread safety, comes out better in benchmarks and has a cheeky message for those wanting the GIL.
IronPython uses the .NET framework and is written in C#. Compatibility is reaching the stage where Django can run on IronPython (at least as a demo) and there are guides to using threads in IronPython.
A:
The GIL (Global Interpreter Lock) might be a problem, but the API is quite OK. Try out the excellent processing module, which implements the Threading API for separate processes. I am using that right now (albeit on OS X, have yet to do some testing on Windows) and am really impressed. The Queue class is really saving my bacon in terms of managing complexity!
EDIT: it seems the processing module is being included in the standard library as of version 2.6 (import multiprocessing). Joy!
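A minimal sketch of the 2.6 incarnation (the same idea applies under the older processing name):
from multiprocessing import Process, Queue

def worker(q):
    # Runs in a separate process, so it sidesteps the GIL entirely.
    q.put(sum(i * i for i in xrange(10 ** 6)))

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print q.get()   # blocks until the worker puts its result
    p.join()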
A:
As far as I know there are no real bugs, but the performance when threading in CPython is really bad (compared to most other threading implementations, but usually good enough if most of what the threads do is block) due to the GIL (Global Interpreter Lock), so really it is implementation specific rather than language specific. Jython, for example, does not suffer from this due to using the Java thread model.
See this post on why it is not really feasible to remove the GIL from the CPython implementation, and this for some practical elaboration and workarounds.
Do a quick Google search for "Python GIL" for more information.
A:
If you want to code in python and get great threading support, you might want to check out IronPython or Jython. Since the python code in IronPython and Jython run on the .NET CLR and Java VM respectively, they enjoy the great threading support built into those libraries. In addition to that, IronPython doesn't have the GIL, an issue that prevents CPython threads from taking full advantage of multi-core architectures.
| Are Python threads buggy? | A reliable coder friend told me that Python's current multi-threading implementation is seriously buggy - enough to avoid using it altogether. What can be said about this rumor?
| [
"Python threads are good for concurrent I/O programming. Threads are swapped out of the CPU as soon as they block waiting for input from file, network, etc. This allows other Python threads to use the CPU while others wait. This would allow you to write a multi-threaded web server or web crawler, for example.\nHowever, Python threads are serialized by the GIL when they enter interpreter core. This means that if two threads are crunching numbers, only one can run at any given moment. It also means that you can't take advantage of multi-core or multi-processor architectures.\nThere are solutions like running multiple Python interpreters concurrently, using a C based threading library. This is not for the faint of heart and the benefits might not be worth the trouble. Let's hope for an all Python solution in a future release.\n",
"The standard implementation of Python (generally known as CPython as it is written in C) uses OS threads, but since there is the Global Interpreter Lock, only one thread at a time is allowed to run Python code. But within those limitations, the threading libraries are robust and widely used.\nIf you want to be able to use multiple CPU cores, there are a few options. One is to use multiple python interpreters concurrently, as mentioned by others. Another option is to use a different implementation of Python that does not use a GIL. The two main options are Jython and IronPython.\nJython is written in Java, and is now fairly mature, though some incompatibilities remain. For example, the web framework Django does not run perfectly yet, but is getting closer all the time. Jython is great for thread safety, comes out better in benchmarks and has a cheeky message for those wanting the GIL.\nIronPython uses the .NET framework and is written in C#. Compatibility is reaching the stage where Django can run on IronPython (at least as a demo) and there are guides to using threads in IronPython.\n",
"The GIL (Global Interpreter Lock) might be a problem, but the API is quite OK. Try out the excellent processing module, which implements the Threading API for separate processes. I am using that right now (albeit on OS X, have yet to do some testing on Windows) and am really impressed. The Queue class is really saving my bacon in terms of managing complexity!\nEDIT: it seemes the processing module is being included in the standard library as of version 2.6 (import multiprocessing). Joy!\n",
"As far as I know there are no real bugs, but the performance when threading in cPython is really bad (compared to most other threading implementations, but usually good enough if all most of the threads do is block) due to the GIL (Global Interpreter Lock), so really it is implementation specific rather than language specific. Jython, for example, does not suffer from this due to using the Java thread model.\nSee this post on why it is not really feasible to remove the GIL from the cPython implementation, and this for some practical elaboration and workarounds.\nDo a quick google for \"Python GIL\" for more information.\n",
"If you want to code in python and get great threading support, you might want to check out IronPython or Jython. Since the python code in IronPython and Jython run on the .NET CLR and Java VM respectively, they enjoy the great threading support built into those libraries. In addition to that, IronPython doesn't have the GIL, an issue that prevents CPython threads from taking full advantage of multi-core architectures.\n"
] | [
58,
16,
9,
5,
2
] | [
"I've used it in several applications and have never had nor heard of threading being anything other than 100% reliable, as long as you know its limits. You can't spawn 1000 threads at the same time and expect your program to run properly on Windows, however you can easily write a worker pool and just feed it 1000 operations, and keep everything nice and under control.\n"
] | [
-2
] | [
"multithreading",
"python"
] | stackoverflow_0000034020_multithreading_python.txt |
Q:
How to exit a module before it has finished parsing?
I have a module that imports a module, but in some cases the module being imported may not exist. After the module is imported, there is a class that inherits from a class in the imported module. If I were to catch the ImportError exception in the case where the module doesn't exist, how can I stop Python from parsing the rest of the module? I'm open to other solutions if that's not possible.
Here is a basic example (selfaware.py):
try:
from skynet import SkyNet
except ImportError:
class SelfAwareSkyNet():
pass
exit_module_parsing_here()
class SelfAwareSkyNet(SkyNet):
pass
The only ways I can think to do this are:
Before importing the selfaware.py module, check if the skynet module is available, and simply pass or create a stub class. This violates DRY if selfaware.py is imported multiple times.
Within selfaware.py, have the class defined within the try block. e.g.:
try:
from skynet import SkyNet
class SelfAwareSkyNet(SkyNet):
pass
except ImportError:
class SelfAwareSkyNet():
pass
A:
try: supports an else: clause
try:
from skynet import SkyNet
except ImportError:
class SelfAwareSkyNet():
pass
else:
class SelfAwareSkyNet(SkyNet):
pass
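Either way, client code imports and uses the class identically (assuming the module above is importable as selfaware):
from selfaware import SelfAwareSkyNet

net = SelfAwareSkyNet()  # plain stub or SkyNet subclass, whichever was defined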
A:
You could use:
try:
from skynet import SkyNet
inherit_from = SkyNet
except ImportError:
inherit_from = object
class SelfAwareSkyNet(inherit_from):
pass
This works only if the implementations do not differ.
Edit: New solution after comment.
| How to exit a module before it has finished parsing? | I have a module that imports a module, but in some cases the module being imported may not exist. After the module is imported, there is a class that inherits from a class in the imported module. If I were to catch the ImportError exception in the case where the module doesn't exist, how can I stop Python from parsing the rest of the module? I'm open to other solutions if that's not possible.
Here is a basic example (selfaware.py):
try:
from skynet import SkyNet
except ImportError:
class SelfAwareSkyNet():
pass
exit_module_parsing_here()
class SelfAwareSkyNet(SkyNet):
pass
The only ways I can think to do this are:
Before importing the selfaware.py module, check if the skynet module is available, and simply pass or create a stub class. This violates DRY if selfaware.py is imported multiple times.
Within selfaware.py, have the class defined within the try block. e.g.:
try:
from skynet import SkyNet
class SelfAwareSkyNet(SkyNet):
pass
except ImportError:
class SelfAwareSkyNet():
pass
| [
"try: supports an else: clause\ntry:\n from skynet import SkyNet\n\nexcept ImportError:\n class SelfAwareSkyNet():\n pass\n\nelse:\n class SelfAwareSkyNet(SkyNet):\n pass\n\n",
"You could use:\ntry:\n from skynet import SkyNet\n inherit_from = SkyNet\nexcept ImportError:\n inherit_from = object\n\nclass SelfAwareSkyeNet(inherit_from):\n pass\n\nThis works only if the implementation do not differ.\nEdit: New solution after comment.\n"
] | [
7,
2
] | [] | [] | [
"import",
"module",
"python"
] | stackoverflow_0000577119_import_module_python.txt |
Q:
How do I design an SMS service?
I want to design a website that can send and receive SMS.
How should I approach the problem?
What are the resources available?
I know PHP and Python; what else do I need, or are there better options available?
How can I experiment using my PC only? [something like localhost]
What are some good hosting services for this? [edit this]
[Add more questions you can think of?]
A:
You can take a look at Kannel. It's so simple to create SMS services using it. Just define a keyword, then put in the URL to which the incoming SMS request will be routed (you'll get the info such as mobile number and SMS text in query string parameters), then whatever output your web script generates (you can use any web scripting/language/platform) will be sent back to the sender.
It's simple to test. You can use your own PC and just use the fakesmsc "SMS center" and just send it HTTP requests. If you have a GSM modem you can use that too, utilising the modem's AT command set.
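As a rough sketch, the URL Kannel calls could point at something as small as this CGI script (the from/text parameter names are illustrative -- they depend on the %-substitutions you configure in your sms-service group):
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
sender = form.getfirst("from", "")
text = form.getfirst("text", "")

print "Content-Type: text/plain"
print
# Whatever the script prints becomes the reply SMS.
print "You sent: %s" % text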
A:
First, you need to sign up for an account with an SMS gateway; most of them also give you example code showing how to send and receive SMS messages using their API. Then you wrap the SMS functionality around your site's logic.
e.g. http://www.clickatell.com/developers/php.php
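Most of these HTTP APIs boil down to a single GET request, so the Python side is only a few lines (the host and parameter names below are placeholders -- use the ones from your provider's documentation):
import urllib

params = urllib.urlencode({
    "user": "YOURUNAME",        # placeholder credentials
    "password": "YOURPWORD",
    "to": "441234554443",
    "text": "Hello, test message!",
})
# Placeholder endpoint -- substitute your gateway's real URL.
response = urllib.urlopen("http://sms.example.com/sendmsg?" + params).read()
print response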
A:
I've copied this from an answer I gave in relation to this question. However, in addition to the text below, take a look at Wadja's SMS Gateway deals (API link)... they appear to be a really good option at the moment, though I've not used them, personally.
Your main option for sending SMS messages is using an existing SMS provider. In my experience (which is extensive with SMS messaging web applications), you will often find that negotiating with different providers is the best way to get the best deal for your application.
Different providers often offer different services, and different features. My favourite provider, and indeed, the one that has happily negotiated with me for lower rates in the past, is TM4B (http://www.tm4b.com). These guys have excellent rates, cover a huge proportion of the globe, and have excellent customer service.
Below is some code extracted (and some parts obfuscated) from one of my live web applications, for sending a simple message via their API:
require_once("tm4b.lib.php");
$smsEngine = new tm4b();
// Prepare the array for sending
$smsRequest["username"] = "YOURUNAME";
$smsRequest["password"] = "YOURPWORD";
$smsRequest["to"] = "+441234554443";
$smsRequest["from"] = "ME!";
$smsRequest["msg"] = "Hello, test message!";
// Do the actual sending
$smsResult = $smsEngine->ClientAPI($smsRequest);
// Check the result
if( $smsResult['status'] == "ok" ) {
print "Message sent!";
} else {
print "Message not sent.";
}
Many other providers that I've used in the past, have very similar interfaces, and all are really competitive when it comes to pricing. You simply have to look around for a provider that suits your needs.
In regard to cost, you're looking at prices ranging from a few pence/cents for most Western countries (prices are a little bit higher for most third-world countries, though, so beware). Most providers you will have to pay in bulk, if you want decent rates from them, but they'll often negotiate with you for 'smaller-than-usual' batches. Most providers do offer a post-pay option, but only when you've successfully completed a few transactions with them... others offer it from the start, but the prices are extortionate.
Hope it helps!
A:
You need an SMS server. This should get you started.
A:
Since my company does this sometimes (text promotions etc, though our main focus is much much lower level stuff), I figured I should pitch in.
By far the simplest way is to use a service such as Clickatell, which provides an HTTP API, as well as FTP and SMPP amongst others. I don't know how Clickatell deals with receiving messages, however, as we use direct SMPP binds to our local mobile operators for this.
If you are willing to pay for it, you should be able to get an SMPP bind to your local mobile operator, but it's often expensive. This would also allow you to purchase your own shortcode.
You may also want to give mBlox or Nextcell a look. A quick Google search will turn up more.
You could also buy a GSM modem, which would allow you to send and receive messages as you normally would with a phone, except through a PC. This usually means you will pay whatever you would with a phone. (In Ireland, anyway)
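If you go the GSM modem route, the AT command exchange looks roughly like this with pyserial (port name and delays are illustrative; production code should read back the modem's prompts instead of sleeping):
import time
import serial  # pyserial

modem = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
modem.write("AT+CMGF=1\r")                  # switch the modem to text mode
time.sleep(0.5)
modem.write('AT+CMGS="+441234554443"\r')    # recipient number
time.sleep(0.5)
modem.write("Hello from Python!")
modem.write(chr(26))                        # Ctrl-Z terminates and sends
print modem.read(200)                       # modem's response, if any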
| How do I design an SMS service? | I want to design a website that can send and receive SMS.
How should I approach the problem?
What are the resources available?
I know PHP and Python; what else do I need, or are there better options available?
How can I experiment using my PC only? [something like localhost]
What are some good hosting services for this? [edit this]
[Add more questions you can think of?]
| [
"You can take a look at Kannel. It's so simple to create SMS services using it. Just define a keyword, then put in the URL to which the incoming SMS request will be routed (you'll get the info such as mobile number and SMS text in query string parameters), then whatever output your web script generates (you can use any web scripting/language/platform) will be sent back to the sender.\nIt's simple to test. You can use your own PC and just use the fakesmsc \"SMS center\" and just send it HTTP requests. If you have a GSM modem you can use that too, utilising the modem's AT command set.\n",
"First thing, You need to sign up for an account (SMS gateway), most of them also give you example code how to send and receive sms using their API. Then you will wrap the the sms functionality around your sites logic. \ne.g http://www.clickatell.com/developers/php.php\n",
"I've copied this from an answer I gave in relation to this question. However, in addition to the text below, take a look at Wadja's SMS Gateway deals (API link)... they appear to be a really good option at the moment, though I've not used them, personally.\n\nYour main option for sending SMS messages is using an existing SMS provider. In my experience (which is extensive with SMS messaging web applications), you will often find that negotiating with different providers is the best way to get the best deal for your application.\nDifferent providers often offer different services, and different features. My favourite provider, and indeed, the one that has happily negotiated with me for lower rates in the past, is TM4B (http://www.tm4b.com). These guys have excellent rates, cover a huge proportion of the globe, and have excellent customer service.\nBelow is some code extracted (and some parts obfuscated) from one of my live web applications, for sending a simple message via their API:\n\nrequire_once(\"tm4b.lib.php\");\n$smsEngine = new tm4b();\n\n// Prepare the array for sending\n$smsRequest[\"username\"] = \"YOURUNAME\";\n$smsRequest[\"password\"] = \"YOURPWORD\";\n$smsRequest[\"to\"] = \"+441234554443\";\n$smsRequest[\"from\"] = \"ME!\";\n$smsRequest[\"msg\"] = \"Hello, test message!\";\n\n// Do the actual sending\n$smsResult = $smsEngine->ClientAPI($smsRequest);\n\n// Check the result\nif( $smsResult['status'] == \"ok\" ) {\n print \"Message sent!\";\n} else {\n print \"Message not sent.\";\n}\n\n\nMany other providers that I've used in the past, have very similar interfaces, and all are really competitive when it comes to pricing. You simply have to look around for a provider that suits your needs.\nIn regard to cost, you're looking at prices ranging from a few pence/cents for most Western countries (prices are a little bit higher for most third-world countries, though, so beware). Most providers you will have to pay in bulk, if you want decent rates from them, but they'll often negotiate with you for 'smaller-than-usual' batches. Most providers do offer a post-pay option, but only when you've successfully completed a few transactions with them... others offer it from the start, but the prices are extortionate.\n\nHope it helps!\n",
"You need a SMS server. This should get you started.\n",
"Since my company does this sometimes (text promotions etc, though our main focus is much much lower level stuff), I figured I should pitch in.\nBy far the simplest way is to use a service such as Clickatell, which provides a HTTP API, as well as FTP and SMPP amongst others. I don't know how Clickatell deals with receiving messages, however, as we use direct SMPP binds to our local mobile operators for this.\nIf you are willing to pay for it, you should be able to get an SMPP bind to your local mobile operator, but its often expensive. This would also allow you to purchase your own shortcode.\nYou may also want to give mBlox or Nextcell a look. A quick Google search will turn up more.\nyou could also buy a GSM modem, which would allow you to send and receive messages as you normally would with a phone, except through a PC. This usually means you will pay whatever you would with a phone. (In Ireland anyway)\n"
] | [
3,
2,
2,
0,
0
] | [] | [] | [
"bulksms",
"mobile_phones",
"php",
"python",
"sms"
] | stackoverflow_0000576940_bulksms_mobile_phones_php_python_sms.txt |
Q:
gtk TextView widget doesn't update during function
I'm new to GUI programming with python and gtk, so this is a bit of a beginners question.
I have a function that is called when a button is pressed which does various tasks, and a TextView widget which I write to after each task is completed. The problem is that the TextView widget doesn't update until the entire function has finished. I need it to update after each task.
A:
After each update to the TextView call
while gtk.events_pending():
gtk.main_iteration()
You can do your update through a custom function:
def my_insert(self, widget, report, text):
report.insert_at_cursor(text)
while gtk.events_pending():
gtk.main_iteration()
From the PyGTK FAQ:
How can I force updates to the application windows during a long callback or other internal operation?
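Putting it together, a self-contained toy version might look like this (the sleep stands in for your real tasks):
import time
import gtk

def on_clicked(button, buf):
    for i in range(3):
        time.sleep(1)                         # pretend to do a task
        buf.insert_at_cursor("task %d done\n" % (i + 1))
        while gtk.events_pending():           # flush pending redraws
            gtk.main_iteration()

win = gtk.Window()
win.connect("destroy", gtk.main_quit)
view = gtk.TextView()
button = gtk.Button("Run tasks")
button.connect("clicked", on_clicked, view.get_buffer())
box = gtk.VBox()
box.pack_start(view)
box.pack_start(button, expand=False)
win.add(box)
win.show_all()
gtk.main()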
| gtk TextView widget doesn't update during function | I'm new to GUI programming with python and gtk, so this is a bit of a beginners question.
I have a function that is called when a button is pressed which does various tasks, and a TextView widget which I write to after each task is completed. The problem is that the TextView widget doesn't update until the entire function has finished. I need it to update after each task.
| [
"After each update to the TextView call\nwhile gtk.events_pending():\n gtk.main_iteration()\n\nYou can do your update through a custom function:\ndef my_insert(self, widget, report, text):\n\n report.insert_at_cursor(text)\n while gtk.events_pending():\n gtk.main_iteration()\n\nFrom the PyGTK FAQ:\nHow can I force updates to the application windows during a long callback or other internal operation?\n"
] | [
4
] | [] | [] | [
"gtk",
"pygtk",
"python"
] | stackoverflow_0000577302_gtk_pygtk_python.txt |