Q: case-insensitive alphabetical sorting of nested lists

I'm trying to sort this nested list by the inner list's first element:

    ak = [ ['a',1],['E',2],['C',13],['A',11],['b',9] ]
    ak.sort(cmp=lambda x, y: cmp(x[0], y[0]))
    for i in ak:
        print i

By default Python sorts uppercase letters before lowercase ones ('A' < 'a' by character code), hence the output I get is:

    ['A', 11]
    ['C', 13]
    ['E', 2]
    ['a', 1]
    ['b', 9]

I've tried converting all the values to the same case during comparison by adding x[0].lower etc., but to no avail. How do I force Python (I'm on version 2.4) to do a case-insensitive alphabetical sort?

Edited: Jarret and Brian, your assumptions about the parentheses were right! I did forget to put them in! And Jarret, your suggestion about not using "cmp" works like a charm! Thanks a lot, guys!

A: Try:

    ak.sort(key=lambda x: x[0].lower())

I would recommend that you avoid using cmp, as it has been deprecated in Python 2.6 and removed in 3.0. I know you're using 2.4, but the reason cmp has fallen into disfavour is that it is a very slow way to sort. I'm not sure why your effort with lower failed, though... perhaps you wrote the function name instead of the function call? (i.e. cmp(x[0].lower(), y[0].lower()) versus cmp(x[0].lower, y[0].lower))

A:

    ak.sort(cmp=lambda x, y: cmp(x[0].lower(), y[0].lower()))

Did you forget the parens in x[0].lower()?
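For what it's worth, a minimal runnable sketch of the key-based approach from the first answer (key= is available from Python 2.4 onward, so it fits the asker's version); the expected order is asserted at the end:

    ak = [['a', 1], ['E', 2], ['C', 13], ['A', 11], ['b', 9]]
    # key= computes a lowercase sort key once per element; no deprecated cmp= needed
    ak.sort(key=lambda pair: pair[0].lower())
    # the sort is stable, so ['a', 1] keeps its place ahead of ['A', 11]
    assert ak == [['a', 1], ['A', 11], ['b', 9], ['C', 13], ['E', 2]]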
Q: Python 3.0 Windows/COM

How do I access a COM object from a Python file using Python 3.0? And, yes, I know that not a lot of people are using Python 3.0. Switching back to 2.6 is a huge hassle for me, so I don't want to unless I absolutely have to. I appreciate your time, and any assistance!

A: Install pywin32 and then create the object using its ProgID:

    import win32com.client
    object = win32com.client.Dispatch("Outlook.Application")

See also the Python and COM tutorial.

EDIT: Hmm... it looks like they may not have a Python 3.0 version yet.
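A hedged usage sketch building on the answer above -- it assumes pywin32 is installed and Outlook is actually present on the machine; the constant 0 stands for Outlook's olMailItem item type:

    import win32com.client

    outlook = win32com.client.Dispatch("Outlook.Application")  # late-bound COM object
    mail = outlook.CreateItem(0)        # 0 = olMailItem
    mail.Subject = "Hello from Python"
    mail.Display()                      # opens the draft window in Outlook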
Q: How do I parse XML from a Google App Engine app?

How do I parse XML from a Google App Engine app? Any examples?

A: Since the question was asked, Google has whitelisted pyexpat, which includes minidom, so you can use the following code without having to upload any libraries:

    from xml.dom import minidom

    dom = minidom.parseString('<eg>example text</eg>')

More information: http://docs.python.org/library/xml.dom.minidom.html

A: Take a look at existing answers on XML and Python. Something like this could work:

    from cStringIO import StringIO
    from xml.etree import cElementTree as etree

    xml = "<a>aaa<b>bbb</b></a>"

    for event, elem in etree.iterparse(StringIO(xml)):
        print elem.text

It prints:

    bbb
    aaa

A: AFAIK Google App Engine provides a fairly complete Python environment for you to use. Since Python comes with "batteries included" you may want to evaluate the different APIs which vanilla Python offers you: http://docs.python.org/library/markup.html
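A slightly fuller minidom sketch in the spirit of the first answer, pulling an attribute and text out of nested elements; the element names here are invented for illustration:

    from xml.dom import minidom

    doc = minidom.parseString(
        '<items><item id="1">alpha</item><item id="2">beta</item></items>')
    for node in doc.getElementsByTagName('item'):
        # getAttribute returns '' when the attribute is missing
        print node.getAttribute('id'), node.firstChild.data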
Q: Why do languages like Java use hierarchical package names, while Python does not?

I haven't done enterprise work in Java, but I often see the reverse-domain-name package naming convention. For example, for a Stack Overflow Java package you'd put your code underneath package com.stackoverflow. I just ran across a Python package that uses the Java-like convention, and I wasn't sure what the arguments for and against it are, or whether they apply to Python in the same way as Java. What are the reasons you'd prefer one over the other? Do those reasons apply across the languages?

A: Python doesn't do this because you end up with a problem -- who owns the "com" package that almost everything else is a subpackage of? Python's method of establishing package hierarchy (through the filesystem hierarchy) does not play well with this convention at all. Java can get away with it because package hierarchy is defined by the structure of the string literals fed to the 'package' statement, so there doesn't need to be an explicit "com" package anywhere.

There's also the question of what to do if you want to publicly release a package but don't own a domain name that's suitable for bodging into the package name, or if you end up changing (or losing) your domain name for some reason. (Do later updates need a different package name? How do you know that com.nifty_consultants.nifty_utility is a newer version of com.joe_blow_software.nifty_utility? Or, conversely, how do you know that it's not a newer version? If you miss your domain renewal and the name gets snatched by a domain camper, and someone else buys the name from them, and they want to publicly release software packages, should they then use the same name that you had already used?)

Domain names and software package names, it seems to me, address two entirely different problems, and have entirely different complicating factors. I personally dislike Java's convention because (IMHO) it violates separation of concerns. Avoiding namespace collisions is nice and all, but I hate the thought of my software's namespace being defined by (and dependent on) the marketing department's interaction with some third-party bureaucracy.

To clarify my point further, in response to JeeBee's comment: In Python, a package is a directory containing an __init__.py file (and presumably one or more module files). A package hierarchy requires that each higher-level package be a full, legitimate package. If two packages (especially from different vendors, but even not-directly-related packages from the same vendor) share a top-level package name, whether that name is 'com' or 'web' or 'utils' or whatever, each one must provide an __init__.py for that top-level package. We must also assume that these packages are likely to be installed in the same place in the directory tree, i.e. site-packages/[pkg]/[subpkg]. The filesystem thus enforces that there is only one [pkg]/__init__.py -- so which one wins? There is not (and cannot be) a general-case correct answer to that question. Nor can we reasonably merge the two files together. Since we can't know what another package might need to do in that __init__.py, subpackages sharing a top-level package cannot be assumed to work when both are installed unless they are specifically written to be compatible with each other (at least in this one file). This would be a distribution nightmare and would pretty much invalidate the entire point of nesting packages.

This is not specific to reverse-domain-name package hierarchies, though they provide the most obvious bad example and (IMO) are philosophically questionable -- it's really the practical issue of shared top-level packages, rather than the philosophical questions, that is my main concern here.

(On the other hand, a single large package using subpackages to better organize itself is a great idea, since those subpackages are specifically designed to work and live together. This is not so common in Python, though, because a single conceptual package doesn't tend to require a large enough number of files to need the extra layer of organization.)

A: If Guido himself announced that the reverse domain convention ought to be followed, it wouldn't be adopted unless there were significant changes to the implementation of import in Python.

Consider: Python searches an import path at run-time with a fail-fast algorithm; Java searches a path with an exhaustive algorithm both at compile-time and run-time. Go ahead, try arranging your directories like this:

    folder_on_path/
        com/
            __init__.py
            domain1/
                module.py
                __init__.py

    other_folder_on_path/
        com/
            __init__.py
            domain2/
                module.py
                __init__.py

Then try:

    from com.domain1 import module
    from com.domain2 import module

Exactly one of those statements will succeed. Why? Because either folder_on_path or other_folder_on_path comes higher on the search path. When Python sees from com. it grabs the first com package it can. If that happens to contain domain1, then the first import will succeed; if not, it throws an ImportError and gives up. Why? Because import must occur at runtime, potentially at any point in the flow of the code (although most often at the beginning). Nobody wants an exhaustive tree-walk at that point to verify that there's no possible match. It assumes that if it finds a package named com, it is the com package.

Moreover, Python doesn't distinguish between the following statements:

    from com import domain1
    from com.domain1 import module
    from com.domain1.module import variable

The concept of verifying that com is the com is going to be different in each case. In Java, you really only have to deal with the second case, and that can be accomplished by walking through the file system (I guess an advantage of naming classes and files the same). In Python, if you tried to accomplish import with nothing but file system assistance, the first case could (almost) be transparently the same (__init__.py wouldn't run), and the second case could be accomplished, though you would lose the initial running of module.py; but the third case is entirely unattainable: the code has to execute for variable to be available. And this is another main point: import does more than resolve namespaces, it executes code.

Now, you could get away with this if every Python package ever distributed required an installation process that searched for the com folder, and then the domain, and so on and so on, but this makes packaging considerably harder, destroys drag-and-drop capability, and makes packaging an all-out nuisance.

A: "What are the reasons you'd prefer one over the other?"

Python's style is simpler. Java's style allows same-name products from different organizations.

"Do those reasons apply across the languages?"

Yes. You can easily have top-level Python packages named "com", "org", "mil", "net", "edu" and "gov" and put your packages as subpackages in these.

Edit. You have some complexity when you do this, because everyone has to cooperate and not pollute these top-level packages with their own cruft. Python didn't start doing that because namespace collisions -- as a practical matter -- turn out to be rather rare.

Java started out doing that because the folks who developed Java foresaw lots of people cluelessly choosing the same name for their packages and needing to sort out the collisions and ownership issues. The Java folks didn't foresee the Open Source community picking weird, off-the-wall unique names to avoid name collisions. Everyone who writes an XML parser, interestingly, doesn't call it "parser". They seem to call it "Saxon" or "Xalan" or something completely strange.

A: Somewhere on Joel on Software, Joel has a comparison between two methods of growing a company: the Ben & Jerry's method, which starts small and grows organically, and the Amazon method of raising a whole lot of money and staking very wide claims from the start.

When Sun introduced Java, it was with fanfare and hype. Java was supposed to take over. Most future relevant software development would be on web-delivered Java applets. There would be brass bands and even ponies. In this context, it was sensible to establish, up front, a naming convention that was internet-based, corporation-friendly, and on a planetary scale.

OK, it didn't turn out quite as Sun hoped, but they planned as if they would succeed. Personally, I despise projects that can be undermined by success.

Python was a project by Guido van Rossum initially, and it was quite some time before the community was confident it would survive if van Rossum were hit by a bus. There were, as far as I know, no initial plans to take over the world, and it was not intended as a web applet language.

Therefore, during the formative stages of the language, there was no reason to want a vast hierarchy for a naming scheme. In that more informal community, one selected a more or less whimsical project name and checked to see if somebody else was already using it. (Naming a computer language after a British comedy show might be considered whimsical just to start.) There was no perceived need to cater to a big but unimaginative and clumsy naming scheme.

A: It's a great way of preventing name collisions, and it takes full advantage of the existing domain name system, so it requires no additional bureaucracy or registration. It is simple and brilliant.

By reversing the domain name it also gives it a hierarchical structure, which is handy, so you can have sub-packages on the end. The only downside is the length of the name, but to me that is not a downside at all. I think it is a pretty good idea for any language that would support it.

Why don't JavaScript libraries do it, for example? Their global namespace is a big problem, yet JavaScript libraries use simple global identifiers like '$' which clash with other JavaScript libraries.

A: The idea is to keep namespaces conflict-free. Instead of unreadable UUIDs or the like, a reverse domain name is unlikely to get in someone else's way. Very simple, but pragmatic. Moreover, when using third-party libs it might give you a clue as to where they came from (for updates, support, etc.).

A: Python does have it, it's just a much flatter hierarchy. Look at os.path, for example. And there's nothing stopping library designers from making much deeper ones, e.g. Django.

Fundamentally, I think Python is designed on the idea that you want to get stuff done without having to specify or type too much in advance. This greatly helps with scripting and command-line use. There are several parts of 'The Zen of Python' that address the rationale for this:

    Simple is better than complex.
    Flat is better than nested.
    Beautiful is better than ugly. (The Java system looks ugly to me.)

On the other hand, there's:

    Namespaces are one honking great idea -- let's do more of those!

A: Java is able to do it like this since it is a recommended Java standard practice, pretty much universally accepted by the Java community. Python does not have this convention.
Q: How to tell the difference between an iterator and an iterable?

In Python the interface of an iterable is a subset of the iterator interface. This has the advantage that in many cases they can be treated in the same way. However, there is an important semantic difference between the two, since for an iterable __iter__ returns a new iterator object and not just self. How can I test that an iterable is really an iterable and not an iterator?

Conceptually I understand iterables to be collections, while an iterator only manages the iteration (i.e. keeps track of the position) but is not a collection itself. The difference is, for example, important when one wants to loop multiple times. If an iterator is given, then the second loop will not work, since the iterator was already used up and directly raises StopIteration.

It is tempting to test for a next method, but this seems dangerous and somehow wrong. Should I just check that the second loop was empty? Is there any way to do such a test in a more Pythonic way? I know that this sounds like a classic case of LBYL against EAFP, so maybe I should just give up? Or am I missing something?

Edit: S.Lott says in his answer below that this is primarily a problem of wanting to do multiple passes over the iterator, and that one should not do this in the first place. However, in my case the data is very large and, depending on the situation, has to be passed over multiple times for data processing (there is absolutely no way around this). The iterable is also provided by the user, and for situations where a single pass is enough it will work with an iterator (e.g. created by a generator for simplicity). But it would be nice to safeguard against the case where a user provides only an iterator when multiple passes are needed.

Edit 2: Actually this is a very nice example for Abstract Base Classes. The __iter__ methods in an iterator and an iterable have the same name but are semantically different! So hasattr is useless, but isinstance provides a clean solution.

A:

    'iterator' if obj is iter(obj) else 'iterable'

A: "However, there is an important semantic difference between the two..."

Not really semantic or important. They're both iterable -- they both work with a for statement.

"The difference is for example important when one wants to loop multiple times."

When does this ever come up? You'll have to be more specific. In the rare cases when you need to make two passes through an iterable collection, there are often better algorithms.

For example, let's say you're processing a list. You can iterate through a list all you want. Why did you get tangled up with an iterator instead of the iterable? Okay, that didn't work.

Okay, here's one. You're reading a file in two passes, and you need to know how to reset the iterable. In this case, it's a file, and seek is required; or a close and a reopen. That feels icky. You can readlines to get a list which allows two passes with no complexity. So that's not necessary.

Wait, what if we have a file so big we can't read it all into memory? And, for obscure reasons, we can't seek, either. What then?

Now we're down to the nitty-gritty of two passes. On the first pass, we accumulated something. An index or a summary or something. An index has all the file's data. A summary, often, is a restructuring of the data. With a small change from "summary" to "restructure", we've preserved the file's data in the new structure. In both cases, we don't need the file -- we can use the index or the summary.

All "two-pass" algorithms can be changed to one pass of the original iterator or iterable and a second pass of a different data structure. This is neither LBYL nor EAFP. This is algorithm design. You don't need to reset an iterator -- YAGNI.

Edit

Here's an example of an iterator/iterable issue. It's simply a poorly-designed algorithm.

    it = iter(xrange(3))
    for i in it: print i,  # prints 0 1 2
    for i in it: print i,  # prints nothing

This is trivially fixed.

    it = range(3)
    for i in it: print i
    for i in it: print i

The "multiple times in parallel" is trivially fixed. Write an API that requires an iterable. And when someone refuses to read the API documentation, or refuses to follow it after having read it, their stuff breaks. As it should.

The "nice to safeguard against the case where a user provides only an iterator when multiple passes are needed" are both examples of insane people writing code that breaks our simple API. If someone is insane enough to read most (but not all) of the API doc and provide an iterator when an iterable was required, you need to find this person and teach them (1) how to read all the API documentation and (2) to follow the API documentation.

The "safeguard" issue isn't very realistic. These crazy programmers are remarkably rare. And in the few cases when it does arise, you know who they are and can help them.

Edit 2

The "we have to read the same structure multiple times" algorithms are a fundamental problem. Do not do this.

    for element in someBigIterable:
        function1( element )
    for element in someBigIterable:
        function2( element )
    ...

Do this, instead.

    for element in someBigIterable:
        function1( element )
        function2( element )
        ...

Or, consider something like this.

    for element in someBigIterable:
        for f in ( function1, function2, function3, ... ):
            f( element )

In most cases, this kind of "pivot" of your algorithms results in a program that might be easier to optimize and might be a net improvement in performance.

A:

    import itertools

    def process(iterable):
        work_iter, backup_iter = itertools.tee(iterable)

        for item in work_iter:
            # bla bla
            if need_to_startover():
                for another_item in backup_iter:
                    pass  # (body left open in the original answer)

That damn time machine that Raymond borrowed from Guido…

A: Because of Python's duck typing, any object is an iterator if it defines next() and an __iter__() that returns itself. If the object itself doesn't have a next() method, it is still iterable as long as its __iter__() returns some object that has a next() method. You could refer to this question to see iterability in Python.
Q: generating plural forms into a .pot file

I'm internationalizing a Python program and can't get plural forms into the .pot file. I have marked strings that require plural translations with _pl(), e.g.:

    self.write_info(_pl("%(num)d track checked",
                        "%(num)d tracks checked",
                        song_obj.song_count) % {"num": song_obj.song_count})

Then I'm running:

    xgettext --language=Python --keyword=_pl --output=output.pot *.py

Only the first (singular) string is generated in the .pot file.

A: I haven't used this with Python, and can't test at the moment, but try --keyword=_pl:1,2 instead. From the GNU gettext docs:

"--keyword[=keywordspec]: Additional keyword to be looked for (without keywordspec means not to use default keywords). If keywordspec is a C identifier id, xgettext looks for strings in the first argument of each call to the function or macro id. If keywordspec is of the form 'id:argnum', xgettext looks for strings in the argnumth argument of the call. If keywordspec is of the form 'id:argnum1,argnum2', xgettext looks for strings in the argnum1st argument and in the argnum2nd argument of the call, and treats them as singular/plural variants for a message with plural handling."
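For concreteness, the original command with the suggested keywordspec applied, followed by the kind of entry standard gettext tooling emits for a plural message (shown as an illustration of the format, not verified against this project):

    xgettext --language=Python --keyword=_pl:1,2 --output=output.pot *.py

    # expected entry in output.pot:
    msgid "%(num)d track checked"
    msgid_plural "%(num)d tracks checked"
    msgstr[0] ""
    msgstr[1] ""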
Q: Is there a good python module that does HTML encoding/escaping in C?

There is cgi.escape, but that appears to be implemented in pure Python. It seems like most frameworks, like Django, also just run some regular expressions. This is something we do a lot, so it would be good to have it be as fast as possible. Maybe C implementations wouldn't be much faster than a series of regexes for this?

A: See lxml, which is based on libxml2. While it's primarily an XML library, HTML support is available.
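For reference, the behaviour any faster replacement would need to match -- the pure-Python cgi.escape being discussed (quote=True additionally escapes double quotes):

    import cgi

    print cgi.escape('<a href="x">&</a>', quote=True)
    # -> &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;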
Q: Backslashes being added into my cookie in Python

I am working with Python's SimpleCookie and I ran into this problem; I am not sure if it is something with my syntax or what. Also, this is classwork for my Python class, so it is meant to teach about Python and is far from the way I would do this in the real world.

Anyway, basically I am keeping information input into a form in a cookie. I am attempting to append to the previous cookie with the new information entered. But for some reason, on the third entry of data the cookie suddenly gets "\" in it, and I am not sure where they are coming from. This is the type of output I am getting:

    "\"\\"\\\\"test:more\\\\":rttre\\":more\":and more"

    #!/usr/local/bin/python
    import cgi,os,time,Cookie

    #error checking
    import cgitb
    cgitb.enable()

    if 'HTTP_COOKIE' in os.environ:
        cookies = os.environ['HTTP_COOKIE']
        cookies = cookies.split('; ')
        for myCookie in cookies:
            myCookie = myCookie.split('=')
            name = myCookie[0]
            value = myCookie[1]
            if name == 'critter':
                hideMe = value

    #import critterClass
    #get info from form
    form = cgi.FieldStorage()
    critterName = form.getvalue('input')
    input2 = form.getvalue('input2')
    hiddenCookie = form.getvalue('hiddenCookie')
    hiddenVar = form.getvalue('hiddenVar')

    #make cookie
    cookie = Cookie.SimpleCookie()

    #set critter Cookie
    if critterName is not None:
        cookie['critter'] = critterName
    #If already named
    else:
        #if action asked, append cookie
        if input2 is not None:
            cookie['critter'] = hideMe + ":" + input2
        else:
            cookie['critter'] = "default"

    print cookie
    print "Content-type: text/html\n\n"

    if ((critterName is None) and (input2 is None)):
        print """
        <form name="critter" id="critter" method="post" action="critter.py">
        <label for="name">Name your pet: <input type="text" name="input" id="input" /></label>
        <input type="submit" name="submit" id="submit" value="Submit" />
        </form>
        """
    else:
        formTwo = """
        <form name="critter2" id="critter2" method="post" action="critter.py">
        <label for="name">%s wants to: <input type="text" name="input2" id="input2" /></label>
        <input type="hidden" name="hiddenVar" id="hiddenVar" value="%s" />
        <input type="submit" name="submit" id="submit" value="Submit" />
        </form>
        [name,play,feed,mood,levels,end]
        """
        print formTwo % (critterName, critterName)

        if 'HTTP_COOKIE' in os.environ:
            cookies = os.environ['HTTP_COOKIE']
            cookies = cookies.split('; ')
            for myCookie in cookies:
                myCookie = myCookie.split('=')
                name = myCookie[0]
                value = myCookie[1]
                if name == 'critter':
                    print "name" + name
                    print "value" + value

A: As explained by others, the backslashes are escaping double-quote characters you insert into the cookie value. The (hidden) mechanism in action here is the SimpleCookie class. The BaseCookie.output() method returns a string representation suitable to be sent as HTTP headers. It will insert escape characters (backslashes) before double-quote characters and before backslash characters. The

    print cookie

statement activates BaseCookie.output(). On each trip your string makes through the cookie's output() method, backslashes are multiplied (starting with the first pair of quotes).

    >>> c1 = Cookie.SimpleCookie()
    >>> c1['name'] = 'A:0'
    >>> print c1
    Set-Cookie: name="A:0"
    >>> c1['name'] = r'"A:0"'
    >>> print c1
    Set-Cookie: name="\"A:0\""
    >>> c1['name'] = r'"\"A:0\""'
    >>> print c1
    Set-Cookie: name="\"\\\"A:0\\\"\""

A: I'm not sure, but it looks like regular Python string escaping. If you have a string containing a backslash or a double quote, for instance, Python will often print it in escaped form, to make the printed string a valid string literal. The following snippet illustrates:

    >>> a = 'hell\'s bells, \"my\" \\'
    >>> a
    'hell\'s bells, "my" \\'
    >>> print a
    hell's bells, "my" \

Not sure if this is relevant; perhaps someone with more domain knowledge can chime in.

A: The slashes result from escaping the double quotes. Apparently, the first time through, your code is seeing the double quote and escaping it by adding a backslash. Then it reads the escaped backslash, and escapes the backslash by prepending it with -- a backslash. Then it reads.... The problem is happening when you append.

A: As others have already said, you are experiencing string-escaping issues as soon as you add "and more" onto the end of the cookie. Until that point, the cookie header is being returned from the SimpleCookie without enclosing quotes. (If there are no spaces in the cookie value, then enclosing quotes are not needed.)

    # HTTP cookie header with no spaces in value
    Set-Cookie: cookie=value

    # HTTP cookie header with spaces in value
    Set-Cookie: cookie="value with spaces"

I would suggest using the same SimpleCookie class to parse the cookie header initially, saving you from doing it by hand, and also handling unescaping the strings properly.

    cookies = Cookie.SimpleCookie(os.environ.get('HTTP_COOKIE', ''))
    print cookies['critter'].value

edit: This whole deal with the spaces does not apply to this question (although it can, in certain circumstances, come and bite you when you are not expecting it). But my suggestion to use SimpleCookie to parse still stands.

A: Backslashes are used for "escaping" characters in strings that would otherwise have special meaning, in effect depriving them of their special meaning. The classic case is the way you can include quotes in quoted strings, such as:

    Bob said "Hey!"

which can be written as a string this way:

    "Bob said \"Hey!\""

Of course, you may want to have a regular backslash in there, so "\\" just means a single backslash.

EDIT: In response to your comment on another answer (about using a regexp to remove the slashes), I think you're picking up the wrong end of the stick. The slashes aren't the problem; they are a symptom. The problem is that you're doing round trips treating strings representing quoted strings as if they were plain old strings. Imagine two friends, Bob and Sam, having a conversation:

    Bob: Hey!
    Sam: Did you say "Hey!"?
    Bob: Did you say "Did you say \"Hey!\"?"?

That's why they don't show up until the third time.

A: Others have already pointed out that this is a result of backslash-escapes of quotes and backslashes. I just wanted to point out that if you look carefully at the structure of the output you cite, you can see how the structure is being built here. The cookie value that you're getting from SimpleCookie is wrapped in quotes -- the (unprocessed) cookie has, e.g.,

    '[...], critter="value1", [...]'

After you split on '; ' and '=', you have a string that contains "value1". You then append a new value to that string, so that the string contains "value1":value2. The next time through, you get that string back, but with another set of quotes wrapping it -- conceptually, ""value1":value2". But in order to make it so that a web browser will not see two quote characters at the beginning and think that's all there is, the inner set of quotes is being escaped, so it's actually returned as "\"value1\":value2". You then append yet another chunk, make another pass back and forth between server and client, and the next time (because those backslashes need escaping now too) you get "\"\\"value1\\":value2\":value3". And so on.

The correct solution, as has already been pointed out, is to let SimpleCookie do the parsing instead of chopping up the strings yourself.
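A compact round-trip sketch of the recommended fix, letting SimpleCookie handle both the unquoting on input and the quoting on output (Python 2 module names assumed; the cookie string is a made-up example):

    import Cookie

    raw = 'critter="one:two"; other=x'   # as received in HTTP_COOKIE
    jar = Cookie.SimpleCookie(raw)
    print jar['critter'].value           # one:two -- unquoted, no backslashes

    # append, then let the Morsel re-encode the value exactly once
    jar['critter'] = jar['critter'].value + ':three'
    print jar['critter'].OutputString()  # a correctly encoded header fragment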
Q: How to work around needing to update a dictionary I need to delete a k/v pair from a dictionary in a loop. After getting RuntimeError: dictionary changed size during iteration I pickled the dictionary after deleting the k/v and in one of the outer loops I try to reopen the newly pickled/updated dictionary. However, as many of you will probably know-I get the same error-I think when it reaches the top of the loop. I do not use my dictionary in the outermost loop. So my question is-does anyone know how to get around this problem? I want to delete a k/V pair from a dictionary and use that resized dictionary on the next iteration of the loop. to focus the problem and use the solution from Cygil list=[27,29,23,30,3,5,40] testDict={} for x in range(25): tempDict={} tempDict['xsquared']=x*x tempDict['xinverse']=1.0/(x+1.0) testDict[(x,x+1)]=tempDict for item in list: print 'the Dictionary now has',len(testDict.keys()), ' keys' for key in testDict.keys(): if key[0]==item: del testDict[key] I am doing this because I have to have some research assistants compare some observations from two data sets that could not be matched because of name variants. The idea is to throw up a name from one data set (say set A) and then based on a key match find all the names attached to that key in the other dataset (set B). One a match has been identified I don't want to show the value from B again to speed things up for them. Because there are 6,000 observations I also don't want them to have to start at the beginning of A each time they get back to work. However, I can fix that by letting them chose to enter the last key from A they worked with. But I really need to reduce B once the match has been identified A: Without code, I'm assuming you're writing something like: for key in dict: if check_condition(dict[key]): del dict[key] If so, you can write for key in list(dict.keys()): if key in dict and check_condition(dict[key]): del dict[key] list(dict.keys()) returns a copy of the keys, not a view, which makes it possible to delete from the dictionary (you are iterating through a copy of the keys, not the keys in the dictionary itself, in this case.) A: Delete all keys whose value is > 15: for k in mydict.keys(): # makes a list of the keys and iterate # over the list, not over the dict. if mydict[k] > 15: del mydict[k] A: Change: for ansSeries in notmatched: To: for ansSeries in notmatched.copy():
How to work around needing to update a dictionary
I need to delete a k/v pair from a dictionary in a loop. After getting RuntimeError: dictionary changed size during iteration, I pickled the dictionary after deleting the k/v pair, and in one of the outer loops I try to reopen the newly pickled/updated dictionary. However, as many of you will probably know, I get the same error, I think when it reaches the top of the loop. I do not use my dictionary in the outermost loop. So my question is: does anyone know how to get around this problem? I want to delete a k/v pair from a dictionary and use that resized dictionary on the next iteration of the loop. To focus the problem, here is an example using the solution from Cygil: list=[27,29,23,30,3,5,40] testDict={} for x in range(25): tempDict={} tempDict['xsquared']=x*x tempDict['xinverse']=1.0/(x+1.0) testDict[(x,x+1)]=tempDict for item in list: print 'the Dictionary now has',len(testDict.keys()), ' keys' for key in testDict.keys(): if key[0]==item: del testDict[key] I am doing this because I have to have some research assistants compare observations from two data sets that could not be matched because of name variants. The idea is to throw up a name from one data set (say set A) and then, based on a key match, find all the names attached to that key in the other data set (set B). Once a match has been identified, I don't want to show the value from B again, to speed things up for them. Because there are 6,000 observations, I also don't want them to have to start at the beginning of A each time they get back to work. However, I can fix that by letting them choose to enter the last key from A they worked with. But I really need to reduce B once a match has been identified.
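For reference, a minimal runnable sketch of the pattern the answers below recommend: iterate over a snapshot of the keys, so deleting from the dictionary is safe. The matched_items list is a hypothetical stand-in for the matches the assistants confirm:

matched_items = [3, 5, 23]          # hypothetical confirmed matches

testDict = {}
for x in range(25):
    testDict[(x, x + 1)] = {'xsquared': x * x, 'xinverse': 1.0 / (x + 1.0)}

for item in matched_items:
    # list(...) takes a snapshot of the keys, so deleting entries from
    # testDict inside the loop cannot raise "dictionary changed size
    # during iteration":
    for key in list(testDict.keys()):
        if key[0] == item:
            del testDict[key]
    print 'the dictionary now has', len(testDict), 'keys'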
[ "Without code, I'm assuming you're writing something like:\nfor key in dict:\n if check_condition(dict[key]):\n del dict[key]\n\nIf so, you can write\nfor key in list(dict.keys()):\n if key in dict and check_condition(dict[key]):\n del dict[key]\n\nlist(dict.keys()) returns a copy of the keys, not a view, which makes it possible to delete from the dictionary (you are iterating through a copy of the keys, not the keys in the dictionary itself, in this case.)\n", "Delete all keys whose value is > 15:\nfor k in mydict.keys(): # makes a list of the keys and iterate\n # over the list, not over the dict.\n if mydict[k] > 15:\n del mydict[k]\n\n", "Change:\nfor ansSeries in notmatched:\n\nTo:\nfor ansSeries in notmatched.copy():\n\n" ]
[ 6, 4, 1 ]
[]
[]
[ "dictionary", "python", "runtime_error" ]
stackoverflow_0000712225_dictionary_python_runtime_error.txt
Q: Is it possible to get a timezone in Python given a UTC timestamp and a UTC offset? I have data that is the UTC offset and the UTC time. Given that, is it possible in Python to get the user's local timezone (mainly to figure if it is DST etc. probably using pytz), similar to the function in PHP timezone_name_from_abbr? For example: If my epoch time is 1238720309, I can get the UTC time as: >>> d = datetime.utcfromtimestamp(1238720309) >>> print d + dt.timedelta(0,-28800) #offset for pacific I think 2009-04-02 17:04:41.712143 This is correct except it is PDT right now, so it should be: 2009-04-02 18:04:41.712413 I need to get the timezone to use in pytz to figure out if it is daylight saving, I think? A: Since, in general, there is more than one possible time zone for a given time zone offset, the general answer is "No, not without more information". The more information is typically the location to which the time applies - which country, or state, or city. A: No. Time zones are too complicated and there are too many that are X hours from UTC. http://en.wikipedia.org/wiki/List_of_time_zones For example, -5 from UTC could be Canada, New York, Cuba, Jamaica, Ecuador, etc. The equator zones probably don't use DST since their day is roughly 12 hours year long. The south american ones, if they use some form of DST are probably on the opposite schedule of the north american ones because their summer/winter (i.e. short days/long days) schedules are also opposite. A: No. http://en.wikipedia.org/wiki/List_of_U.S._states_by_time_zone
Is it possible to get a timezone in Python given a UTC timestamp and a UTC offset?
I have data that consists of a UTC offset and a UTC time. Given that, is it possible in Python to get the user's local timezone (mainly to figure out whether DST is in effect, probably using pytz), similar to PHP's timezone_name_from_abbr function? For example: If my epoch time is 1238720309, I can get the UTC time as: >>> d = datetime.utcfromtimestamp(1238720309) >>> print d + dt.timedelta(0,-28800) #offset for pacific I think 2009-04-02 17:04:41.712143 This is correct except that it is PDT right now, so it should be: 2009-04-02 18:04:41.712413 I need to get the timezone to use in pytz to figure out whether daylight saving time applies, I think?
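As the answers below explain, the offset alone cannot name the zone. But if the zone is already known (say, US/Pacific), pytz does the DST bookkeeping for you. A small sketch, assuming pytz is installed:

from datetime import datetime
import pytz

utc_dt = datetime.utcfromtimestamp(1238720309).replace(tzinfo=pytz.utc)
pacific = pytz.timezone('US/Pacific')   # must be known a priori; -08:00
                                        # alone could be several zones
local_dt = utc_dt.astimezone(pacific)
print local_dt                # local wall-clock time with DST applied
print bool(local_dt.dst())    # True while PDT is in effect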
[ "Since, in general, there is more than one possible time zone for a given time zone offset, the general answer is \"No, not without more information\". The more information is typically the location to which the time applies - which country, or state, or city.\n", "No. Time zones are too complicated and there are too many that are X hours from UTC.\nhttp://en.wikipedia.org/wiki/List_of_time_zones\nFor example, -5 from UTC could be Canada, New York, Cuba, Jamaica, Ecuador, etc.\nThe equator zones probably don't use DST since their day is roughly 12 hours year long. The south american ones, if they use some form of DST are probably on the opposite schedule of the north american ones because their summer/winter (i.e. short days/long days) schedules are also opposite.\n", "No. http://en.wikipedia.org/wiki/List_of_U.S._states_by_time_zone\n" ]
[ 5, 1, 0 ]
[]
[]
[ "dst", "python", "pytz", "timezone", "utc" ]
stackoverflow_0000712322_dst_python_pytz_timezone_utc.txt
Q: How to deliver instance of object to instance of SocketServer.BaseRequestHandler? This is problem. My primary work is : deliver "s" object to "handle" method in TestRequestHandler class. My first step was : deliver "s" object through "point" method to TestServer class, but here im stuck. How to deliver "s" object to TestRequestHandler? Some suggestions? import threading import SocketServer from socket import * class TestRequestHandler(SocketServer.BaseRequestHandler): def __init__(self, request, client_address, server): SocketServer.BaseRequestHandler.__init__(self, request, client_address, server) return def setup(self): return SocketServer.BaseRequestHandler.setup(self) def handle(self): data = self.request.recv(1024) if (data): self.request.send(data) print data def finish(self): return SocketServer.BaseRequestHandler.finish(self) class TestServer(SocketServer.TCPServer): def __init__(self, server_address, handler_class=TestRequestHandler): print "__init__" SocketServer.TCPServer.__init__(self, server_address, handler_class) return def point(self,obj): self.obj = obj print "point" def server_activate(self): SocketServer.TCPServer.server_activate(self) return def serve_forever(self): print "serve_forever" while True: self.handle_request() return def handle_request(self): return SocketServer.TCPServer.handle_request(self) if __name__ == '__main__': s = socket(AF_INET, SOCK_STREAM) address = ('localhost', 6666) server = TestServer(address, TestRequestHandler) server.point(s) t = threading.Thread(target=server.serve_forever()) t.setDaemon(True) t.start() A: If I understand correctly, I think you perhaps are misunderstanding how the module works. You are already specifying an address of 'localhost:6666' for the server to bind on. When you start the server via your call to serve_forever(), this is going to cause the server to start listening to a socket on localhost:6666. According to the documentation, that socket is passed to your RequestHandler as the 'request' object. When data is received on the socket, your 'handle' method should be able to recv/send from/to that object using the documented socket API. If you want a further abstraction, it looks like your RequestHandler can extend from StreamRequestHandler and read/write to the socket using file-like objects instead. The point is, there is no need for you to create an additional socket and then try to force your server to use the new one instead. Part of the value of the SocketServer module is that it manages the lifecycle of the socket for you. On the flip side, if you want to test your server from a client's perspective, then you would want to create a socket that you can read/write your client requests on. But you would never pass this socket to your server, per se. You would probably do this in a completely separate process and test your server via IPC over the socket. Edit based on new information To get server A to open a socket to server B when server A receives data one solution is to simply open a socket from inside your RequestHandler. That said, there are likely some other design concerns that you will need to address based on the requirements of your service. For example, you may want to use a simple connection pool that say opens a few sockets to server B that server A can use like a resource. There may already be some libraries in Python that help with this. 
Given your current design, your RequestHandler has access to the server as a member variable so you could do something like this: class TestServer(SocketServer.TCPServer): def point (self, socketB): self.socketB = socketB # hold serverB socket class TestRequestHandler(SocketServer.BaseRequestHandler): def handle(self): data = self.request.recv(1024) if (data): self.request.send(data) print data self.server.socketB ... # Do whatever with the socketB But like I said, it may be better for you to have some sort of connection pool or other object that manages your server B socket such that your server A handler can just acquire/release the socket as incoming requests are handled. This way you can better deal with conditions where server B breaks the socket. Your current design wouldn't be able to handle broken sockets very easily. Just some thoughts... A: If the value of s is set once, and not reinitialized - you could make it a class variable as opposed to an instance variable of TestServer, and then have the handler retrieve it via a class method of TestServer in the handler's constructor. eg: TestServer._mySocket = s A: Ok, my main task is this. Construction of the listening server (A-server - localhost, 6666) which during start will open "hard" connection to the different server (B-server - localhost, 7777). When the customer send data to the A-server this (A-server) sends data (having that hard connection to the B-server) to B-server, the answer receives from the B-server to A-server and answer sends to the customer. Then again : the customer sends data, A-server receives them, then sends to the B-server, the answer receives data from the B-server and A-server send data to the customer. And so round and round. The connection to the B-server is closes just when the server A will stop. All above is the test of making this.
How to deliver instance of object to instance of SocketServer.BaseRequestHandler?
Here is the problem. My primary task is to deliver the "s" object to the "handle" method in the TestRequestHandler class. My first step was to deliver the "s" object through the "point" method to the TestServer class, but here I'm stuck. How do I deliver the "s" object to TestRequestHandler? Any suggestions? import threading import SocketServer from socket import * class TestRequestHandler(SocketServer.BaseRequestHandler): def __init__(self, request, client_address, server): SocketServer.BaseRequestHandler.__init__(self, request, client_address, server) return def setup(self): return SocketServer.BaseRequestHandler.setup(self) def handle(self): data = self.request.recv(1024) if (data): self.request.send(data) print data def finish(self): return SocketServer.BaseRequestHandler.finish(self) class TestServer(SocketServer.TCPServer): def __init__(self, server_address, handler_class=TestRequestHandler): print "__init__" SocketServer.TCPServer.__init__(self, server_address, handler_class) return def point(self,obj): self.obj = obj print "point" def server_activate(self): SocketServer.TCPServer.server_activate(self) return def serve_forever(self): print "serve_forever" while True: self.handle_request() return def handle_request(self): return SocketServer.TCPServer.handle_request(self) if __name__ == '__main__': s = socket(AF_INET, SOCK_STREAM) address = ('localhost', 6666) server = TestServer(address, TestRequestHandler) server.point(s) t = threading.Thread(target=server.serve_forever) t.setDaemon(True) t.start()
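A condensed, runnable sketch of the self.server approach described in the first answer below, assuming server B is already listening on localhost:7777; it has no error handling for a broken B connection:

import SocketServer
from socket import socket, AF_INET, SOCK_STREAM

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if data:
            # self.server is the TestServer instance, so anything you
            # attach to it in point() is visible here:
            self.server.socketB.send(data)
            reply = self.server.socketB.recv(1024)
            self.request.send(reply)

class TestServer(SocketServer.TCPServer):
    def point(self, socketB):
        self.socketB = socketB              # the long-lived B connection

if __name__ == '__main__':
    s = socket(AF_INET, SOCK_STREAM)
    s.connect(('localhost', 7777))          # assumes server B is up
    server = TestServer(('localhost', 6666), TestRequestHandler)
    server.point(s)
    server.serve_forever()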
[ "If I understand correctly, I think you perhaps are misunderstanding how the module works. You are already specifying an address of 'localhost:6666' for the server to bind on. \nWhen you start the server via your call to serve_forever(), this is going to cause the server to start listening to a socket on localhost:6666. \nAccording to the documentation, that socket is passed to your RequestHandler as the 'request' object. When data is received on the socket, your 'handle' method should be able to recv/send from/to that object using the documented socket API.\nIf you want a further abstraction, it looks like your RequestHandler can extend from StreamRequestHandler and read/write to the socket using file-like objects instead.\nThe point is, there is no need for you to create an additional socket and then try to force your server to use the new one instead. Part of the value of the SocketServer module is that it manages the lifecycle of the socket for you.\nOn the flip side, if you want to test your server from a client's perspective, then you would want to create a socket that you can read/write your client requests on. But you would never pass this socket to your server, per se. You would probably do this in a completely separate process and test your server via IPC over the socket.\nEdit based on new information\nTo get server A to open a socket to server B when server A receives data one solution is to simply open a socket from inside your RequestHandler. That said, there are likely some other design concerns that you will need to address based on the requirements of your service. \nFor example, you may want to use a simple connection pool that say opens a few sockets to server B that server A can use like a resource. There may already be some libraries in Python that help with this. \nGiven your current design, your RequestHandler has access to the server as a member variable so you could do something like this:\nclass TestServer(SocketServer.TCPServer):\n def point (self, socketB):\n self.socketB = socketB # hold serverB socket\n\nclass TestRequestHandler(SocketServer.BaseRequestHandler):\n\n def handle(self):\n data = self.request.recv(1024)\n\n if (data): \n self.request.send(data)\n print data\n\n self.server.socketB ... # Do whatever with the socketB\n\nBut like I said, it may be better for you to have some sort of connection pool or other object that manages your server B socket such that your server A handler can just acquire/release the socket as incoming requests are handled.\nThis way you can better deal with conditions where server B breaks the socket. Your current design wouldn't be able to handle broken sockets very easily. Just some thoughts...\n", "If the value of s is set once, and not reinitialized - you could make it a class variable as opposed to an instance variable of TestServer, and then have the handler retrieve it via a class method of TestServer in the handler's constructor.\neg: TestServer._mySocket = s\n", "Ok, my main task is this. 
Construction of the listening server (A-server - localhost, 6666) which during start will open \"hard\" connection to the different server (B-server - localhost, 7777).\nWhen the customer send data to the A-server this (A-server) sends data (having that hard connection to the B-server) to B-server, the answer receives from the B-server to A-server and answer sends to the customer.\nThen again : the customer sends data, A-server receives them, then sends to the B-server, the answer receives data from the B-server and A-server send data to the customer.\nAnd so round and round. The connection to the B-server is closes just when the server A will stop.\nAll above is the test of making this.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "python", "python_2.7", "sockets", "socketserver", "tcp" ]
stackoverflow_0000711002_python_python_2.7_sockets_socketserver_tcp.txt
Q: How to disable v1 tag in a Web service request with SoapPy? I'm trying to use SOAPpy to write a web service client. However after defining WSDL object, a call to a web-service method is wrapped in a <v1> .. actual parameters .. </v1> How can I disable this v1 tag? A: You can give the name of tag by providing name in the parameter call list, i.e: server.GetList(GetListRequest = { "order" : "asc" }) then v1 is replaced by GetListRequest as I originally wanted.
How to disable v1 tag in a Web service request with SoapPy?
I'm trying to use SOAPpy to write a web service client. However, after defining the WSDL object, a call to a web-service method is wrapped in a <v1> .. actual parameters .. </v1> element. How can I disable this v1 tag?
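In context, a sketch of what the fix in the answer looks like with a WSDL proxy; the service URL and method name here are hypothetical placeholders:

from SOAPpy import WSDL

server = WSDL.Proxy('http://example.com/service?wsdl')  # hypothetical URL
# A positional argument gets wrapped in <v1>..</v1>; naming the
# parameter makes SOAPpy use that name as the tag instead:
result = server.GetList(GetListRequest={'order': 'asc'})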
[ "You can give the name of tag by providing name in the parameter call list, i.e:\nserver.GetList(GetListRequest = { \"order\" : \"asc\" })\n\nthen v1 is replaced by GetListRequest as I originally wanted.\n" ]
[ 0 ]
[]
[]
[ "python", "soappy", "web_services" ]
stackoverflow_0000713522_python_soappy_web_services.txt
Q: How do I schedule a process' termination? I need to run a process, wait a few hours, kill it, and start it again. Is there an easy way that I can accomplish this with Python or Bash? I can run it in the background but how do I identify it to use kill on it? A: This is in Perl, but you should be able to translate it to Python. #!/usr/bin/perl use strict; use warnings; #set times to 0 for infinite times my ($times, $wait, $program, @args) = @ARGV; $times = -1 unless $times; while ($times--) { $times = -1 if $times < 0; #catch -2 and turn it back into -1 die "could not fork" unless defined(my $pid = fork); #replace child with the program we want to launch unless ($pid) { exec $program, @args; } #parent waits and kills the child if it isn't done yet sleep $wait; kill $pid; waitpid $pid, 0; #clean up child } Because I am trying to teach myself Python, here it is in Python (I do not trust this code): #!/usr/bin/python import os import sys import time times = int(sys.argv[1]) wait = int(sys.argv[2]) program = sys.argv[3] args = [] if len(sys.argv) >= 4: args = sys.argv[3:] if times == 0: times = -1 while times: times = times - 1 if times < 0: times = -1 pid = os.fork() if not pid: os.execvp(program, args) time.sleep(wait) os.kill(pid, 15) os.waitpid(pid, 0) A: With bash: while true ; do run_proc & PID=$! sleep 3600 kill $PID sleep 30 done The $! bash variable expands to the PID of the most recently started background process. The sleep just waits an hour, then the kill shuts down that process. The while loop just keeps doing it over and over. A: In python: import subprocess import time while True: p = subprocess.Popen(['/path/to/program', 'param1', 'param2']) time.sleep(2 * 60 * 60) # wait time in seconds - 2 hours p.kill() p.kill() is python >= 2.6. On python <= 2.5 you can use this instead: os.kill(p.pid, signal.SIGTERM) A: One idea: Save the process's PID (returned by fork() in your child process) to a file, then either schedule a cron job to kill it or kill it manually, reading the PID from the file. Another option: Create a shell script wrapper that automatically kills and restarts the process. Same as above, but you can keep the PID in memory, sleep for as long as you need, kill the process, then loop. A: Take a look at the start-stop-daemon utility. A: You could always write a script to search for those processes and kill them if found. Then add a cronjob to execute the script. Find process ID of a process with known name Kill processes with a known ID In python os.kill() can be used to kill a process given the id. A: Not an ideal method but if you know the name of the program and you know it's the only process of that name running on the system you can use this in cron: 0 */2 * * * kill `ps -ax | grep programName | grep -v grep | awk '{ print $1 }'` && ./scriptToStartProcess This will run every two hours on the hour and kill programName then start the process again.
How do I schedule a process' termination?
I need to run a process, wait a few hours, kill it, and start it again. Is there an easy way that I can accomplish this with Python or Bash? I can run it in the background but how do I identify it to use kill on it?
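A consolidated sketch of the subprocess approach from the answers below, assuming Python 2.6+ for Popen.terminate(); the program path is a placeholder:

import subprocess
import time

while True:
    p = subprocess.Popen(['/path/to/program', 'arg1'])
    time.sleep(3 * 60 * 60)   # let it run for three hours
    p.terminate()             # on <= 2.5: os.kill(p.pid, signal.SIGTERM)
    p.wait()                  # reap the child before restarting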
[ "This is in Perl, but you should be able to translate it to Python.\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\n\n#set times to 0 for infinite times\nmy ($times, $wait, $program, @args) = @ARGV;\n\n$times = -1 unless $times;\nwhile ($times--) {\n $times = -1 if $times < 0; #catch -2 and turn it back into -1\n die \"could not fork\" unless defined(my $pid = fork);\n\n #replace child with the program we want to launch\n unless ($pid) {\n exec $program, @args;\n }\n\n #parent waits and kills the child if it isn't done yet\n sleep $wait;\n\n kill $pid;\n waitpid $pid, 0; #clean up child\n}\n\nBecause I am trying to teach myself Python, here it is in Python (I do not trust this code):\n#!/usr/bin/python\n\nimport os\nimport sys\nimport time\n\ntimes = int(sys.argv[1])\nwait = int(sys.argv[2])\nprogram = sys.argv[3]\nargs = []\nif len(sys.argv) >= 4:\n args = sys.argv[3:]\n\nif times == 0:\n times = -1\n\nwhile times:\n times = times - 1\n if times < 0:\n times = -1\n\n pid = os.fork()\n\n if not pid:\n os.execvp(program, args)\n\n time.sleep(wait)\n\n os.kill(pid, 15)\n os.waitpid(pid, 0)\n\n", "With bash:\nwhile true ; do\n run_proc &\n PID=$!\n sleep 3600\n kill $PID\n sleep 30\ndone\n\nThe $! bash variable expands to the PID of the most recently started background process. The sleep just waits an hour, then the kill shuts down that process.\nThe while loop just keeps doing it over and over.\n", "In python:\nimport subprocess\nimport time\n\nwhile True: \n p = subprocess.Popen(['/path/to/program', 'param1', 'param2'])\n time.sleep(2 * 60 * 60) # wait time in seconds - 2 hours\n p.kill()\n\np.kill() is python >= 2.6.\nOn python <= 2.5 you can use this instead:\nos.kill(p.pid, signal.SIGTERM)\n\n", "One idea: Save the process's PID (returned by fork() in your child process) to a file, then either schedule a cron job to kill it or kill it manually, reading the PID from the file.\nAnother option: Create a shell script wrapper that automatically kills and restarts the process. Same as above, but you can keep the PID in memory, sleep for as long as you need, kill the process, then loop.\n", "Take a look at the start-stop-daemon utility. \n", "You could always write a script to search for those processes and kill them if found.\nThen add a cronjob to execute the script.\nFind process ID of a process with known name\nKill processes with a known ID\nIn python os.kill() can be used to kill a process given the id.\n", "Not an ideal method but if you know the name of the program and you know it's the only process of that name running on the system you can use this in cron:\n0 */2 * * * kill `ps -ax | grep programName | grep -v grep | awk '{ print $1 }'` && ./scriptToStartProcess\n\nThis will run every two hours on the hour and kill programName then start the process again.\n" ]
[ 3, 3, 2, 0, 0, 0, 0 ]
[]
[]
[ "bash", "kill", "process", "python", "unix" ]
stackoverflow_0000704203_bash_kill_process_python_unix.txt
Q: Preserving the Java-type of an object when passing it from Java to Jython I wonder if it possible to not have jython automagicaly transform java objects to python types when you put them in a Java ArrayList. Example copied from a jython-console: >>> b = java.lang.Boolean("True"); >>> type(b) <type 'javainstance'> >>> isinstance(b, java.lang.Boolean); 1 So far, everything is fine but if I put the object in an ArrayList >>> l = java.util.ArrayList(); >>> l.add(b) 1 >>> type(l.get(0)) <type 'int'> the object is transformed into a python-like boolean (i.e. an int) and... >>> isinstance(l.get(0), java.lang.Boolean) 0 which means that I can no longer see that this was once a java.lang.Boolean. Clarification I guess what really want to achieve is to get rid of the implicit conversion from Java-types to Python-types when passing objects from Java to Python. I will give another example for clarification. A Python module: import java import IPythonModule class PythonModule(IPythonModule): def method(self, data): print type(data); And a Java-Class that uses this module: import java.util.ArrayList; import org.python.core.PyList; import org.testng.annotations.*; import static org.testng.AssertJUnit.*; public class Test1 { IPythonModule m; @BeforeClass public void setUp() { JythonFactory jf = JythonFactory.getInstance(); m = (IPythonModule) jf.getJythonObject( "IPythonModule", "/Users/sg/workspace/JythonTests/src/PythonModule.py"); } @Test public void testFirst() { m.method(new Boolean("true")); } } Here I will see the output 'bool' because of the implicit conversion, but what I would really like is to see 'javainstance' or 'java.lang.Boolean'. If you want to run this code you will also need the JythonFactory-class that can be found here. A: You appear to be using an old version of Jython. In current Jython versions, the Python bool type corresponds to a Java Boolean. Jython is not transforming the Java type to a Python type on the way into the ArrayList - on the contrary, it will transform a primitive Python type to a primitive or wrapper Java type when passing it to a Java method, and a Java type to a Python type on the way out. You can observe this by printing the contents of the array. Note that the Python bool is capitalized (True); the Java Boolean is not. >>> from java.lang import Boolean >>> b = Boolean('True') >>> b true >>> from java.util import ArrayList >>> l = ArrayList() >>> l.add(b) True >>> l [true] >>> l.add(True) True >>> l [true, true] >>> list(l) [True, True] If this still doesn't do what you want, consider writing a small Java helper function that examines the array for you without conversion. It's arguably a bug that Jython doesn't automatically convert the Boolean you constructed into a Python bool, and in this case it gives you no advantage over using Boolean.TRUE or the Python True.
Preserving the Java-type of an object when passing it from Java to Jython
I wonder if it is possible to stop Jython from automagically transforming Java objects to Python types when you put them in a Java ArrayList. Example copied from a Jython console: >>> b = java.lang.Boolean("True"); >>> type(b) <type 'javainstance'> >>> isinstance(b, java.lang.Boolean); 1 So far, everything is fine, but if I put the object in an ArrayList >>> l = java.util.ArrayList(); >>> l.add(b) 1 >>> type(l.get(0)) <type 'int'> the object is transformed into a Python-like boolean (i.e. an int) and... >>> isinstance(l.get(0), java.lang.Boolean) 0 which means that I can no longer see that this was once a java.lang.Boolean. Clarification: I guess what I really want to achieve is to get rid of the implicit conversion from Java types to Python types when passing objects from Java to Python. I will give another example for clarification. A Python module: import java import IPythonModule class PythonModule(IPythonModule): def method(self, data): print type(data); And a Java class that uses this module: import java.util.ArrayList; import org.python.core.PyList; import org.testng.annotations.*; import static org.testng.AssertJUnit.*; public class Test1 { IPythonModule m; @BeforeClass public void setUp() { JythonFactory jf = JythonFactory.getInstance(); m = (IPythonModule) jf.getJythonObject( "IPythonModule", "/Users/sg/workspace/JythonTests/src/PythonModule.py"); } @Test public void testFirst() { m.method(new Boolean("true")); } } Here I will see the output 'bool' because of the implicit conversion, but what I would really like to see is 'javainstance' or 'java.lang.Boolean'. If you want to run this code, you will also need the JythonFactory class, which can be found here.
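A small console-style sketch, following the answer's point that the conversion happens on the way out of the list, not on insertion; the exact type names shown vary between Jython 2.2 and 2.5:

from java.lang import Boolean
from java.util import ArrayList

l = ArrayList()
l.add(Boolean.TRUE)   # the canonical Java instance, per the answer
print l               # [true]  -- the Java value is intact inside the list
print list(l)         # [True]  -- converted to a Python bool on retrieval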
[ "You appear to be using an old version of Jython. In current Jython versions, the Python bool type corresponds to a Java Boolean.\nJython is not transforming the Java type to a Python type on the way into the ArrayList - on the contrary, it will transform a primitive Python type to a primitive or wrapper Java type when passing it to a Java method, and a Java type to a Python type on the way out.\nYou can observe this by printing the contents of the array. Note that the Python bool is capitalized (True); the Java Boolean is not. \n>>> from java.lang import Boolean\n>>> b = Boolean('True')\n>>> b \ntrue\n>>> from java.util import ArrayList\n>>> l = ArrayList()\n>>> l.add(b)\nTrue\n>>> l\n[true]\n>>> l.add(True)\nTrue\n>>> l\n[true, true]\n>>> list(l) \n[True, True]\n\nIf this still doesn't do what you want, consider writing a small Java helper function that examines the array for you without conversion. It's arguably a bug that Jython doesn't automatically convert the Boolean you constructed into a Python bool, and in this case it gives you no advantage over using Boolean.TRUE or the Python True.\n" ]
[ 1 ]
[]
[]
[ "java", "jython", "python" ]
stackoverflow_0000713675_java_jython_python.txt
Q: questions re: current state of GUI programming with Python I recently did some work modifying a Python gui app that was using wxPython widgets. I've experimented with Python in fits and starts over last six or seven years, but this was the first time I did any work with a gui. I was pretty disappointed at what seems to be the current state of gui programming with Python. I like the Python language itself a lot, it's a fun change from the Delphi/ObjectPascal programming I'm used to, definitely a big productivity increase for general purpose programming tasks. I'd like to move to Python for everything. But wxPython is a huge step backwards from something like Delphi's VCL or .NET's WinForms. While Python itself offers nice productivity gains from generally programming a higher level of abstraction, wxPython is used at a way lower level of abstraction than the VCL. For example, I wasted a lot fo time trying to get a wxPython list object to behave the way I wanted it to. Just to add sortable columns involved several code-intensive steps, one to create and maintain a shadow-data-structure that provided the actual sort order, another to make it possible to show graphic-sort-direction-triangles in the column header, and there were a couple more I don't remember. All of these error prone steps could be accomplished simply by setting a property value using my Delphi grid component. My conclusion: while Python provides big productivity gains by raising level of abstraction for a lot of general purpose coding, wxPython is several levels of abstraction lower than the gui tools available for Delphi. Net result: gui programming with Delphi is way faster than gui programming with Python, and the resulting ui with Delphi is still more polished and full-featured. It doesn't seem to me like it's exaggerating to say that Delphi gui programming was more advanced back in 1995 than python gui programming with wxPython is in 2009. I did some investigating of other python gui frameworks and it didn't look like any were substantially better than wxPython. I also did some minimal investigation of gui formbuilders for wxPython, which would have made things a little bit better. But by most reports those solutions are buggy and even a great formbuilder wouldn't address my main complaints about wxPython, which are simply that it has fewer features and generally requires you to do gui programming at a much lower level of abstraction than I'm used to with Delphi's VCL. Some quick investigating into suggested python gui-dev solutions ( http://wiki.python.org/moin/GuiProgramming ) is honestly somewhat depressing for someone used to Delphi or .NET. Finally, I've got a couple of questions. First, am I missing something? Is there some gui-development solution for Python that can compare with VCL or WinForms programming? I don't necessarily care if it doesn't quite measure up to Delphi's VCL. I'm just looking for something that's in the same league. Second, could IronPython be the direction to go? I've mostly tried to avoid drinking the .NET koolaid, but maybe IronPython gives me a reason to finally give in. Even then, does IronPython fully integrate with WinForms, or would I need to have the forms themselves be backed by c# or vb.net? It looks to me like that definitely is the case with SharpDevelop and MonoDevelop (i.e, IronPython can't be used to do design-time gui building). Does VS.NET fully integrate IronPython with gui-building? 
It really seems to me like Python could "take over the world" in a way similar to the way that Visual Basic did back in the early 1990's, if some wonderful new gui-building solution came out for Python. Only this time with Python we'd have a whole new paradigm of fast, cross platform, and open source gui programming. Wouldn't corporations eat that up? Yeah, I know, web apps are the main focus of things these days, so a great Python-gui solution wouldn't create same revolution that VB once did. But I don't see gui programming disappearing and I'd like a nice modern, open source, high level solution. A: seems your complains are about wxPython, not about Python itself. try pyQt (or is it qtPython?) but, both wxPython and pyQt are just Python bindings to a C / C++ (respectively) library, it's just as (conceptually) low level as the originals. but, Qt is far superior to wx A: PyQt is a binding to Qt SDK from Nokia, and PyQt itself is delivered by a company called RiverBank. If licence is not important for you you can use PyQt under GPL or you 'll pay some money for commercial licence. PyQt is binding Qt 4.4 right now. Qt is not just GUI, it's a complete C/C++ SDK that help with networking, xml, media, db and other stuff, and PyQt transfer all this to python. With PyQt you'll use Qt Designer and you 'll transfer the .ui file to .py file by a simple command line. You 'll find many resources on the web about PyQt and good support from different communities, and even published books on PyQt. Many suggestions consider that RiverBank has no choice but to release the next version which 'll depend on Qt 4.5 under LGPL, we are waiting :). Another solution is Jython with Java Swing, very easy and elegant to write (specially under JDK 6), but not enough resources on internet. A: You may want to look at Jython (Python on the Java VM). It is very similar to Iron Python, and you can fore go the .Net koolaid. A: dabo puts wxPython programming at a higher level like what you're looking for. A: You're probably going to have to use the .net or java pythons, but check this out first and see if it meets your requirements: Kiwi A: Short answer: Don't try Tkinter - it's got all the problems described above. Long answer: Tkinter is not useful for large programs. Handling the various pieces with it somehow invariably degenerates to juggling (which never happens otherwise) and the resulting output doesn't look native or particularly polished. A: You are right, wxPython can definetely be improved. But i think Robin Dunn has done a great job so far, and still is. Especially the wxPython community is open to improvements, like recent inclusion of the widgets by Andrea, so like many community projects pick the one you like most, and improve it while using it. A: We've been quite happy using Python.Net to build our UIs in WinForms and using CPython for Presenter, Model. IronPython is also a good tool if you want to do python on Windows. A: There is Wax, whose purpose was to create a more pythonic interface to wxWidgets, but it seems its development has stalled.
questions re: current state of GUI programming with Python
I recently did some work modifying a Python gui app that was using wxPython widgets. I've experimented with Python in fits and starts over the last six or seven years, but this was the first time I did any work with a gui. I was pretty disappointed at what seems to be the current state of gui programming with Python. I like the Python language itself a lot, it's a fun change from the Delphi/ObjectPascal programming I'm used to, definitely a big productivity increase for general purpose programming tasks. I'd like to move to Python for everything. But wxPython is a huge step backwards from something like Delphi's VCL or .NET's WinForms. While Python itself offers nice productivity gains from programming at a generally higher level of abstraction, wxPython is used at a way lower level of abstraction than the VCL. For example, I wasted a lot of time trying to get a wxPython list object to behave the way I wanted it to. Just to add sortable columns involved several code-intensive steps, one to create and maintain a shadow-data-structure that provided the actual sort order, another to make it possible to show graphic-sort-direction-triangles in the column header, and there were a couple more I don't remember. All of these error-prone steps could be accomplished simply by setting a property value using my Delphi grid component. My conclusion: while Python provides big productivity gains by raising the level of abstraction for a lot of general purpose coding, wxPython is several levels of abstraction lower than the gui tools available for Delphi. Net result: gui programming with Delphi is way faster than gui programming with Python, and the resulting ui with Delphi is still more polished and full-featured. It doesn't seem to me an exaggeration to say that Delphi gui programming was more advanced back in 1995 than python gui programming with wxPython is in 2009. I did some investigating of other python gui frameworks and it didn't look like any were substantially better than wxPython. I also did some minimal investigation of gui formbuilders for wxPython, which would have made things a little bit better. But by most reports those solutions are buggy and even a great formbuilder wouldn't address my main complaints about wxPython, which are simply that it has fewer features and generally requires you to do gui programming at a much lower level of abstraction than I'm used to with Delphi's VCL. Some quick investigating into suggested python gui-dev solutions ( http://wiki.python.org/moin/GuiProgramming ) is honestly somewhat depressing for someone used to Delphi or .NET. Finally, I've got a couple of questions. First, am I missing something? Is there some gui-development solution for Python that can compare with VCL or WinForms programming? I don't necessarily care if it doesn't quite measure up to Delphi's VCL. I'm just looking for something that's in the same league. Second, could IronPython be the direction to go? I've mostly tried to avoid drinking the .NET koolaid, but maybe IronPython gives me a reason to finally give in. Even then, does IronPython fully integrate with WinForms, or would I need to have the forms themselves be backed by c# or vb.net? It looks to me like that definitely is the case with SharpDevelop and MonoDevelop (i.e., IronPython can't be used to do design-time gui building). Does VS.NET fully integrate IronPython with gui-building? 
It really seems to me like Python could "take over the world" in a way similar to how Visual Basic did back in the early 1990s, if some wonderful new gui-building solution came out for Python. Only this time with Python we'd have a whole new paradigm of fast, cross-platform, and open source gui programming. Wouldn't corporations eat that up? Yeah, I know, web apps are the main focus of things these days, so a great Python-gui solution wouldn't create the same revolution that VB once did. But I don't see gui programming disappearing and I'd like a nice modern, open source, high-level solution.
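On the sortable-columns example above: the shadow-data-structure bookkeeping the question describes is roughly what wx.lib.mixins.listctrl.ColumnSorterMixin packages up. A rough wxPython 2.8-era sketch, which still illustrates the abstraction gap, since the itemDataMap/SetItemData wiring must be done by hand:

import wx
from wx.lib.mixins.listctrl import ColumnSorterMixin

class SortableList(wx.ListCtrl, ColumnSorterMixin):
    def __init__(self, parent, data):
        wx.ListCtrl.__init__(self, parent, style=wx.LC_REPORT)
        self.InsertColumn(0, 'Name')
        self.InsertColumn(1, 'Value')
        self.itemDataMap = data          # required by the mixin
        for key, (name, value) in data.items():
            idx = self.InsertStringItem(self.GetItemCount(), name)
            self.SetStringItem(idx, 1, str(value))
            self.SetItemData(idx, key)   # ties each row to itemDataMap
        ColumnSorterMixin.__init__(self, 2)   # after the list is populated

    def GetListCtrl(self):               # required by the mixin
        return self

app = wx.App(False)
frame = wx.Frame(None, title='Sortable')
SortableList(frame, {1: ('b', 9), 2: ('A', 11), 3: ('E', 2)})
frame.Show()
app.MainLoop()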
[ "seems your complains are about wxPython, not about Python itself. try pyQt (or is it qtPython?)\nbut, both wxPython and pyQt are just Python bindings to a C / C++ (respectively) library, it's just as (conceptually) low level as the originals.\nbut, Qt is far superior to wx\n", "PyQt is a binding to Qt SDK from Nokia, and PyQt itself is delivered by a company called RiverBank.\nIf licence is not important for you you can use PyQt under GPL or you 'll pay some money for commercial licence.\nPyQt is binding Qt 4.4 right now.\nQt is not just GUI, it's a complete C/C++ SDK that help with networking, xml, media, db and other stuff, and PyQt transfer all this to python.\nWith PyQt you'll use Qt Designer and you 'll transfer the .ui file to .py file by a simple command line. \nYou 'll find many resources on the web about PyQt and good support from different communities, and even published books on PyQt.\nMany suggestions consider that RiverBank has no choice but to release the next version which 'll depend on Qt 4.5 under LGPL, we are waiting :).\nAnother solution is Jython with Java Swing, very easy and elegant to write (specially under JDK 6), but not enough resources on internet. \n", "You may want to look at Jython (Python on the Java VM). It is very similar to Iron Python, and you can fore go the .Net koolaid.\n", "dabo puts wxPython programming at a higher level like what you're looking for.\n", "You're probably going to have to use the .net or java pythons, but check this out first and see if it meets your requirements:\nKiwi\n", "Short answer: Don't try Tkinter - it's got all the problems described above.\nLong answer: Tkinter is not useful for large programs. Handling the various pieces with it somehow invariably degenerates to juggling (which never happens otherwise) and the resulting output doesn't look native or particularly polished.\n", "You are right, wxPython can definetely be improved. But i think Robin Dunn has done a great job so far, and still is. \nEspecially the wxPython community is open to improvements, like recent inclusion of the widgets by Andrea, so like many community projects pick the one you like most, and improve it while using it.\n", "We've been quite happy using Python.Net to build our UIs in WinForms and using CPython for Presenter, Model. IronPython is also a good tool if you want to do python on Windows.\n", "There is Wax, whose purpose was to create a more pythonic interface to wxWidgets, but it seems its development has stalled.\n" ]
[ 11, 4, 3, 3, 2, 2, 0, 0, 0 ]
[]
[]
[ "python", "user_interface" ]
stackoverflow_0000707491_python_user_interface.txt
Q: Why Jython behaves inconsistently when tested with PyStone? I've been playing recently with Jython and decided to do some quick and dirty benchmarking with pystone. In order to have a reference, I first tested cPython 2.6, with an increasing numbers of loops (I thought this may be relevant as Jython should start to profit from the JIT only after some time). (richard garibaldi):/usr/local/src/pybench% python ~/tmp/pystone.py Pystone(1.1) time for 50000 passes = 1.04 This machine benchmarks at 48076.9 pystones/second (richard garibaldi):/usr/local/src/pybench% python ~/tmp/pystone.py 500000 Pystone(1.1) time for 500000 passes = 10.33 This machine benchmarks at 48402.7 pystones/second (richard garibaldi):/usr/local/src/pybench% python ~/tmp/pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 19.6 This machine benchmarks at 51020.4 pystones/second As you can see, cPython behaves consistently: the time it takes to complete the test increases linearly to the number of loops. Knowing this, I started testing Jython. (richard garibaldi):/usr/local/src/pybench% jython ~/tmp/pystone.py Pystone(1.1) time for 50000 passes = 2.29807 This machine benchmarks at 21757.4 pystones/second (richard garibaldi):/usr/local/src/pybench% jython ~/tmp/pystone.py 500000 Pystone(1.1) time for 500000 passes = 10.931 This machine benchmarks at 45741.4 pystones/second (richard garibaldi):/usr/local/src/pybench% jython ~/tmp/pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 107.183 This machine benchmarks at 9329.86 pystones/second During the first run Jython runs rather lousily in comparison to its C brother. When increased the number of loops it started feeling better, coming close to cPython, like my initial hypothesis predicted. Note that the number of loops increased 10 times, but it took Jython only about 5 times longer to complete them. So, as you imagine, I was expecting that Jython would really rock in the final test. To my great disappointment, however, it did really bad: more than twice slower than in the initial run. What are your hypotheses: why does Jython behave such an inconsistent manner? Could it be that GC is kicking in at some moment, and taking a lot of time? I've looked at PyStone's code and garbage collection doesn't seem to be turned off, but I would expect Java's GC to be at least as good as Python's... Do you think this slowing down is permanent, or it will go away at some point after increasing the number of loops? How shall Jython behave in a really long running processes? EDIT: unfortunately, I get java.lang.OutOfMemoryError if I increase the number of loops to 2 million... (Of course, Jython is still beta, so it should get better in the final release.) I am using Jython 2.5b1 (trunk:5903:5905, Jan 9 2009, 16:01:29), Java(TM) SE Runtime Environment (build 1.6.0_07-b06-153) and Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_07-b06-57, mixed mode) on MacOS X 10.5. Thanks for your answers. A: This might be a bug in jython 2.5b1. You should consider reporting it back to the jython team. 
I have just run the pystone benchmark on my MacBook with the current stable release of jython (2.2.1) and I get slow but consistent results: mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 50000 Pystone(1.1) time for 50000 passes = 2.365 This machine benchmarks at 21141.6 pystones/second mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 500000 Pystone(1.1) time for 500000 passes = 22.246 This machine benchmarks at 22476 pystones/second mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 43.94 This machine benchmarks at 22758.3 pystones/second mo$ java -version java version "1.5.0_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-275) Java HotSpot(TM) Client VM (build 1.5.0_16-132, mixed mode, sharing) The cPython results for me are more or less the same. I reran eacht test three times and got very similar results all the time. I also tried giving java a bigger initial and maximum Heap (-Xms256m -Xmx512m) without a noteworthy result However, setting the JVM to -server (slower startup, better long running performance, not so good for "interactive" work) turned the picture a bit: mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 50000 Pystone(1.1) time for 50000 passes = 1.848 This machine benchmarks at 27056.3 pystones/second mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 500000 Pystone(1.1) time for 500000 passes = 9.998 This machine benchmarks at 50010 pystones/second mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 19.9 This machine benchmarks at 50251.3 pystones/second I made one final run with (-server -Xms256m -Xmx512m): mo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 5000000 Pystone(1.1) time for 5000000 passes = 108.664 This machine benchmarks at 46013.4 pystones/second My guess would be, that the slow first run is due to VM startup/JIT not yet having really kicked in. The results of the longer runs are more or less consitent and show the effects of hotspot/JIT Maybe you could rerun your last test with a bigger heap? To change the JVM switches, just edit the jython file in your Jython installation. A: The same results from my laptop running Ubuntu Jaunty, with JRE 1.6.0_12-b04: nathell@breeze:/usr/lib/python2.5/test$ python pystone.py 500000 Pystone(1.1) time for 500000 passes = 12.98 This machine benchmarks at 38520.8 pystones/second nathell@breeze:/usr/lib/python2.5/test$ python pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 26.05 This machine benchmarks at 38387.7 pystones/second nathell@breeze:/usr/lib/python2.5/test$ ~/jython/jython pystone.py Pystone(1.1) time for 50000 passes = 2.47788 This machine benchmarks at 20178.6 pystones/second nathell@breeze:/usr/lib/python2.5/test$ ~/jython/jython pystone.py 500000 Pystone(1.1) time for 500000 passes = 19.7294 This machine benchmarks at 25342.9 pystones/second nathell@breeze:/usr/lib/python2.5/test$ ~/jython/jython pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 38.9272 This machine benchmarks at 25689 pystones/second So perhaps this is related to the JRE rather than Jython version, after all. The problems the Armed Bear Common Lisp project has had with early versions of JRE 1.6 might also hint at this. A: Benchmarking a runtime environment as complex as the JVM is hard. Even excluding the JIT and GC, you've got a big heap, memory layout and cache variation between runs. 
One thing that helps with Jython is simply running the benchmark more than once in a single VM session: once to warm up the JIT and one or more times you measure individually. I've done a lot of Jython benchmarking, and unfortunately it often takes 10-50 attempts to achieve a reasonable time You can use some JVM flags to observe GC and JIT behavior to get some idea how long the warmup period should be, though obviously you shouldn't benchmark with the debugging flags turned on. For example: % ./jython -J-XX:+PrintCompilation -J-verbose:gc 1 java.lang.String::hashCode (60 bytes) 2 java.lang.String::charAt (33 bytes) 3 java.lang.String::lastIndexOf (156 bytes) 4 java.lang.String::indexOf (151 bytes) [GC 1984K->286K(7616K), 0.0031513 secs] If you do all this, and use the HotSpot Server VM, you'll find Jython slightly faster than CPython on pystone, but this is in no way representative of Jython performance in general. The Jython developers are paying much more attention to correctness than performance for the 2.5 release; over the next year or so with a 2.6/2.7/3.0 release performance will be more emphasized. You can see a few of the pain points by looking at some microbenchmarks (originally derived from PyPy) I run. A: I'm pretty sure that the results can be improved by tweaking the JVM configuration (JRuby is using quite a few interesting flags for doing it) and I'm also pretty sure that the garbage collection can be tuned. If you are very interested in this benchmark here is a good resource for configuring your VM: Tuning Garbage Collection. I'd also take a look at JRuby configuration. ./alex A: my bench on a XP_Win32_PC : C:\jython\jython2.5b1>bench "50000" C:\jython\jython2.5b1>jython Lib\test\pystone.py "50000" Pystone(1.1) time for 50000 passes = 1.73489 This machine benchmarks at 28820.2 pystones/second C:\jython\jython2.5b1>bench "100000" C:\jython\jython2.5b1>jython Lib\test\pystone.py "100000" Pystone(1.1) time for 100000 passes = 3.36223 This machine benchmarks at 29742.2 pystones/second C:\jython\jython2.5b1>bench "500000" C:\jython\jython2.5b1>jython Lib\test\pystone.py "500000" Pystone(1.1) time for 500000 passes = 15.8116 This machine benchmarks at 31622.3 pystones/second C:\jython\jython2.5b1>bench "1000000" C:\jython\jython2.5b1>jython Lib\test\pystone.py "1000000" Pystone(1.1) time for 1000000 passes = 30.9763 This machine benchmarks at 32282.8 pystones/second C:\jython\jython2.5b1>jython Jython 2.5b1 (trunk:5903:5905, Jan 9 2009, 16:01:29) [Java HotSpot(TM) Client VM (Sun Microsystems Inc.)] on java1.5.0_17 It is not so fast, but ... no "special effects" Is it a java-vm 'problem' ? Add a comment if you want further infos to my benchmarking on this old Win32-PC
Why Jython behaves inconsistently when tested with PyStone?
I've been playing recently with Jython and decided to do some quick and dirty benchmarking with pystone. In order to have a reference, I first tested cPython 2.6, with an increasing number of loops (I thought this might be relevant, as Jython should start to profit from the JIT only after some time). (richard garibaldi):/usr/local/src/pybench% python ~/tmp/pystone.py Pystone(1.1) time for 50000 passes = 1.04 This machine benchmarks at 48076.9 pystones/second (richard garibaldi):/usr/local/src/pybench% python ~/tmp/pystone.py 500000 Pystone(1.1) time for 500000 passes = 10.33 This machine benchmarks at 48402.7 pystones/second (richard garibaldi):/usr/local/src/pybench% python ~/tmp/pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 19.6 This machine benchmarks at 51020.4 pystones/second As you can see, cPython behaves consistently: the time it takes to complete the test increases linearly with the number of loops. Knowing this, I started testing Jython. (richard garibaldi):/usr/local/src/pybench% jython ~/tmp/pystone.py Pystone(1.1) time for 50000 passes = 2.29807 This machine benchmarks at 21757.4 pystones/second (richard garibaldi):/usr/local/src/pybench% jython ~/tmp/pystone.py 500000 Pystone(1.1) time for 500000 passes = 10.931 This machine benchmarks at 45741.4 pystones/second (richard garibaldi):/usr/local/src/pybench% jython ~/tmp/pystone.py 1000000 Pystone(1.1) time for 1000000 passes = 107.183 This machine benchmarks at 9329.86 pystones/second During the first run, Jython runs rather lousily in comparison to its C brother. When I increased the number of loops, it started doing better, coming close to cPython, as my initial hypothesis predicted. Note that the number of loops increased 10 times, but it took Jython only about 5 times longer to complete them. So, as you can imagine, I was expecting that Jython would really rock in the final test. To my great disappointment, however, it did really badly: more than twice as slow as the initial run. What are your hypotheses: why does Jython behave in such an inconsistent manner? Could it be that GC is kicking in at some moment, and taking a lot of time? I've looked at PyStone's code and garbage collection doesn't seem to be turned off, but I would expect Java's GC to be at least as good as Python's... Do you think this slowing down is permanent, or will it go away at some point after increasing the number of loops? How will Jython behave in really long-running processes? EDIT: unfortunately, I get java.lang.OutOfMemoryError if I increase the number of loops to 2 million... (Of course, Jython is still beta, so it should get better in the final release.) I am using Jython 2.5b1 (trunk:5903:5905, Jan 9 2009, 16:01:29), Java(TM) SE Runtime Environment (build 1.6.0_07-b06-153) and Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_07-b06-57, mixed mode) on MacOS X 10.5. Thanks for your answers.
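Following the advice in the answers below, measuring only after JIT warm-up, here is a small sketch that repeats pystone inside one VM session; combine it with the -server and heap flags the answers discuss:

from test import pystone

# Repeat the benchmark in a single VM session; discard the early runs,
# which include JIT warm-up, and keep only the later measurements:
for i in range(5):
    benchtime, stones = pystone.pystones(loops=500000)
    print 'run %d: %.1f pystones/second' % (i, stones)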
[ "This might be a bug in jython 2.5b1. You should consider reporting it back to the jython team. I have just run the pystone benchmark on my MacBook with the current stable release of jython (2.2.1) and I get slow but consistent results:\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 50000\nPystone(1.1) time for 50000 passes = 2.365\nThis machine benchmarks at 21141.6 pystones/second\n\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 500000\nPystone(1.1) time for 500000 passes = 22.246\nThis machine benchmarks at 22476 pystones/second\n\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 1000000\nPystone(1.1) time for 1000000 passes = 43.94\nThis machine benchmarks at 22758.3 pystones/second\n\nmo$ java -version\njava version \"1.5.0_16\"\nJava(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-275)\nJava HotSpot(TM) Client VM (build 1.5.0_16-132, mixed mode, sharing)\n\nThe cPython results for me are more or less the same. I reran eacht test three times and got very similar results all the time.\nI also tried giving java a bigger initial and maximum Heap (-Xms256m -Xmx512m) without a noteworthy result\nHowever, setting the JVM to -server (slower startup, better long running performance, not so good for \"interactive\" work) turned the picture a bit:\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 50000\nPystone(1.1) time for 50000 passes = 1.848\nThis machine benchmarks at 27056.3 pystones/second\n\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 500000\nPystone(1.1) time for 500000 passes = 9.998\nThis machine benchmarks at 50010 pystones/second\n\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 1000000\nPystone(1.1) time for 1000000 passes = 19.9\nThis machine benchmarks at 50251.3 pystones/second\n\nI made one final run with (-server -Xms256m -Xmx512m):\nmo$ ~/Coding/Jython/jython2.2.1/jython pystone.py 5000000\nPystone(1.1) time for 5000000 passes = 108.664\nThis machine benchmarks at 46013.4 pystones/second\n\nMy guess would be, that the slow first run is due to VM startup/JIT not yet having really kicked in. The results of the longer runs are more or less consitent and show the effects of hotspot/JIT\nMaybe you could rerun your last test with a bigger heap? To change the JVM switches, just edit the jython file in your Jython installation.\n", "The same results from my laptop running Ubuntu Jaunty, with JRE 1.6.0_12-b04:\nnathell@breeze:/usr/lib/python2.5/test$ python pystone.py 500000\nPystone(1.1) time for 500000 passes = 12.98\nThis machine benchmarks at 38520.8 pystones/second\n\nnathell@breeze:/usr/lib/python2.5/test$ python pystone.py 1000000\nPystone(1.1) time for 1000000 passes = 26.05\nThis machine benchmarks at 38387.7 pystones/second\n\nnathell@breeze:/usr/lib/python2.5/test$ ~/jython/jython pystone.py\nPystone(1.1) time for 50000 passes = 2.47788\nThis machine benchmarks at 20178.6 pystones/second\n\nnathell@breeze:/usr/lib/python2.5/test$ ~/jython/jython pystone.py 500000\nPystone(1.1) time for 500000 passes = 19.7294\nThis machine benchmarks at 25342.9 pystones/second\n\nnathell@breeze:/usr/lib/python2.5/test$ ~/jython/jython pystone.py 1000000\nPystone(1.1) time for 1000000 passes = 38.9272\nThis machine benchmarks at 25689 pystones/second\n\nSo perhaps this is related to the JRE rather than Jython version, after all. The problems the Armed Bear Common Lisp project has had with early versions of JRE 1.6 might also hint at this.\n", "Benchmarking a runtime environment as complex as the JVM is hard. 
Even excluding the JIT and GC, you've got a big heap, memory layout and cache variation between runs.\nOne thing that helps with Jython is simply running the benchmark more than once in a single VM session: once to warm up the JIT and one or more times you measure individually. I've done a lot of Jython benchmarking, and unfortunately it often takes 10-50 attempts to achieve a reasonable time\nYou can use some JVM flags to observe GC and JIT behavior to get some idea how long the warmup period should be, though obviously you shouldn't benchmark with the debugging flags turned on. For example:\n% ./jython -J-XX:+PrintCompilation -J-verbose:gc\n 1 java.lang.String::hashCode (60 bytes)\n 2 java.lang.String::charAt (33 bytes)\n 3 java.lang.String::lastIndexOf (156 bytes)\n 4 java.lang.String::indexOf (151 bytes)\n[GC 1984K->286K(7616K), 0.0031513 secs]\n\nIf you do all this, and use the HotSpot Server VM, you'll find Jython slightly faster than CPython on pystone, but this is in no way representative of Jython performance in general. The Jython developers are paying much more attention to correctness than performance for the 2.5 release; over the next year or so with a 2.6/2.7/3.0 release performance will be more emphasized. You can see a few of the pain points by looking at some microbenchmarks (originally derived from PyPy) I run.\n", "I'm pretty sure that the results can be improved by tweaking the JVM configuration (JRuby is using quite a few interesting flags for doing it) and I'm also pretty sure that the garbage collection can be tuned.\nIf you are very interested in this benchmark here is a good resource for configuring your VM: Tuning Garbage Collection. I'd also take a look at JRuby configuration.\n./alex\n", "my bench on a XP_Win32_PC :\nC:\\jython\\jython2.5b1>bench \"50000\"\n\nC:\\jython\\jython2.5b1>jython Lib\\test\\pystone.py \"50000\"\nPystone(1.1) time for 50000 passes = 1.73489\nThis machine benchmarks at 28820.2 pystones/second\n\nC:\\jython\\jython2.5b1>bench \"100000\"\n\nC:\\jython\\jython2.5b1>jython Lib\\test\\pystone.py \"100000\"\nPystone(1.1) time for 100000 passes = 3.36223\nThis machine benchmarks at 29742.2 pystones/second\n\nC:\\jython\\jython2.5b1>bench \"500000\"\n\nC:\\jython\\jython2.5b1>jython Lib\\test\\pystone.py \"500000\"\nPystone(1.1) time for 500000 passes = 15.8116\nThis machine benchmarks at 31622.3 pystones/second\n\nC:\\jython\\jython2.5b1>bench \"1000000\"\n\nC:\\jython\\jython2.5b1>jython Lib\\test\\pystone.py \"1000000\"\nPystone(1.1) time for 1000000 passes = 30.9763\nThis machine benchmarks at 32282.8 pystones/second\n\nC:\\jython\\jython2.5b1>jython\nJython 2.5b1 (trunk:5903:5905, Jan 9 2009, 16:01:29)\n[Java HotSpot(TM) Client VM (Sun Microsystems Inc.)] on java1.5.0_17\n\nIt is not so fast, but ...\nno \"special effects\"\nIs it a java-vm 'problem' ?\nAdd a comment if you want further infos to my benchmarking on this old Win32-PC\n" ]
[ 2, 2, 2, 1, 1 ]
[]
[]
[ "benchmarking", "java", "jython", "performance", "python" ]
stackoverflow_0000597483_benchmarking_java_jython_performance_python.txt
Q: calling execfile() in custom namespace executes code in '__builtin__' namespace When I call execfile without passing the globals or locals arguments it creates objects in the current namespace, but if I call execfile and specify a dict for globals (and/or locals), it creates objects in the __builtin__ namespace. Take the following example: # exec.py def myfunc(): print 'myfunc created in %s namespace' % __name__ exec.py is execfile'd from main.py as follows. # main.py print 'execfile in global namespace:' execfile('exec.py') myfunc() print print 'execfile in custom namespace:' d = {} execfile('exec.py', d) d['myfunc']() when I run main.py from the commandline I get the following output. execfile in global namespace: myfunc created in __main__ namespace execfile in custom namespace: myfunc created in __builtin__ namespace Why is it being run in __builtin__ namespace in the second case? Furthermore, if I then try to run myfunc from __builtins__, I get an AttributeError. (This is what I would hope happens, but then why is __name__ set to __builtin__?) >>> __builtins__.myfunc() Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'myfunc' Can anyone explain this behaviour? Thanks A: First off, __name__ is not a namespace - its a reference to the name of the module it belongs to, ie: somemod.py -> somemod.__name__ == 'somemod' The exception to this being if you run a module as an executable from the commandline, then the __name__ is '__main__'. in your example there is a lucky coincidence that your module being run as main is also named main. Execfile executes the contents of the module WITHOUT importing it as a module. As such, the __name__ doesn't get set, because its not a module - its just an executed sequence of code. A: The execfile function is similar to the exec statement. If you look at the documentation for exec you'll see the following paragraph that explains the behavior. As a side effect, an implementation may insert additional keys into the dictionaries given besides those corresponding to variable names set by the executed code. For example, the current implementation may add a reference to the dictionary of the built-in module __builtin__ under the key __builtins__ (!). Edit: I now see that my answer applies to one possible interpretation of the question title. My answer does not apply to the actual question asked. A: As an aside, I prefer using __import__() over execfile: module = __import__(module_name) value = module.__dict__[function_name](arguments) This also works well when adding to the PYTHONPATH, so that modules in other directories can be imported: sys.path.insert(position, directory)
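To make the mechanism in the answers concrete, a small sketch: execfile never sets __name__ itself, so if the supplied globals dict does not carry one, the lookup inside exec.py falls through to the injected __builtins__ and finds the __builtin__ module's own name. Seeding the dict shows this (the file contents mirror the question; the namespace string is invented):

open('exec.py', 'w').write(
    "def myfunc():\n    print 'myfunc created in %s namespace' % __name__\n")

d = {'__name__': 'my_custom_namespace'}
execfile('exec.py', d)
d['myfunc']()   # prints: myfunc created in my_custom_namespace namespace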
calling execfile() in custom namespace executes code in '__builtin__' namespace
When I call execfile without passing the globals or locals arguments it creates objects in the current namespace, but if I call execfile and specify a dict for globals (and/or locals), it creates objects in the __builtin__ namespace. Take the following example: # exec.py def myfunc(): print 'myfunc created in %s namespace' % __name__ exec.py is execfile'd from main.py as follows. # main.py print 'execfile in global namespace:' execfile('exec.py') myfunc() print print 'execfile in custom namespace:' d = {} execfile('exec.py', d) d['myfunc']() When I run main.py from the command line I get the following output. execfile in global namespace: myfunc created in __main__ namespace execfile in custom namespace: myfunc created in __builtin__ namespace Why is it being run in the __builtin__ namespace in the second case? Furthermore, if I then try to run myfunc from __builtins__, I get an AttributeError. (This is what I would hope happens, but then why is __name__ set to __builtin__?) >>> __builtins__.myfunc() Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'myfunc' Can anyone explain this behaviour? Thanks
[ "First off, __name__ is not a namespace - its a reference to the name of the module it belongs to, ie: somemod.py -> somemod.__name__ == 'somemod'\nThe exception to this being if you run a module as an executable from the commandline, then the __name__ is '__main__'.\nin your example there is a lucky coincidence that your module being run as main is also named main.\nExecfile executes the contents of the module WITHOUT importing it as a module. As such, the __name__ doesn't get set, because its not a module - its just an executed sequence of code.\n", "The execfile function is similar to the exec statement. If you look at the documentation for exec you'll see the following paragraph that explains the behavior.\n\nAs a side effect, an implementation may insert additional keys into the dictionaries given besides those corresponding to variable names set by the executed code. For example, the current implementation may add a reference to the dictionary of the built-in module __builtin__ under the key __builtins__ (!).\n\nEdit: I now see that my answer applies to one possible interpretation of the question title. My answer does not apply to the actual question asked.\n", "As an aside, I prefer using __import__() over execfile:\nmodule = __import__(module_name)\nvalue = module.__dict__[function_name](arguments)\n\nThis also works well when adding to the PYTHONPATH, so that modules in other directories can be imported:\nsys.path.insert(position, directory)\n\n" ]
[ 4, 1, 1 ]
[]
[]
[ "namespaces", "python" ]
stackoverflow_0000711066_namespaces_python.txt
Q: Turning ctypes data into python string as quickly as possible I'm trying to write a video application in PyQt4 and I've used Python ctypes to hook into an old legacy video decoder library. The library gives me 32-bit ARGB data and I need to turn that into a QImage. I've got it working as follows: # Copy the rgb image data from the pointer into the buffer memmove(self.rgb_buffer, self.rgb_buffer_ptr, self.buffer_size) # Copy the buffer to a python string imgdata = "" for a in self.rgb_buffer: imgdata = imgdata + a # Create a QImage from the string data img = QImage(imgdata, 720, 288, QImage.Format_ARGB32) The problem is that ctypes outputs the data as type "ctypes.c_char_Array_829440" and I need to turn it into a python string so that I can construct a QImage. My copying mechanism is currently taking almost 300ms per image so it's painfully slow. The decode and display part of the process is only taking about 50ms. Can anyone think of any cunning shortcuts I can take to speed up this process and avoid the need to copy the buffer twice as I'm currently doing? A: The ctypes.c_char_Array_829440 instance has the property .raw which returns a string possibly containing NUL bytes, and the property .value which returns the string up to the first NUL byte if it contains one or more. However, you can also use ctypes to access the string at self.rgb_buffer_ptr, like this: ctypes.string_at(self.rgb_buffer_ptr, self.buffer_size); this would avoid the need for the memmove call.
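A sketch of the one-copy route the answer above points at, wrapped in a function so it stands alone. PyQt4 and the 720x288 frame size come from the question; the function name is invented.

import ctypes
from PyQt4.QtGui import QImage

def frame_to_qimage(rgb_buffer_ptr, buffer_size, width=720, height=288):
    # One C-level copy into a Python string; no memmove and no
    # character-by-character loop.
    imgdata = ctypes.string_at(rgb_buffer_ptr, buffer_size)
    return QImage(imgdata, width, height, QImage.Format_ARGB32)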
Turning ctypes data into python string as quickly as possible
I'm trying to write a video application in PyQt4 and I've used Python ctypes to hook into an old legacy video decoder library. The library gives me 32-bit ARGB data and I need to turn that into a QImage. I've got it working as follows: # Copy the rgb image data from the pointer into the buffer memmove(self.rgb_buffer, self.rgb_buffer_ptr, self.buffer_size) # Copy the buffer to a python string imgdata = "" for a in self.rgb_buffer: imgdata = imgdata + a # Create a QImage from the string data img = QImage(imgdata, 720, 288, QImage.Format_ARGB32) The problem is that ctypes outputs the data as type "ctypes.c_char_Array_829440" and I need to turn it into a python string so that I can construct a QImage. My copying mechanism is currently taking almost 300ms per image so it's painfully slow. The decode and display part of the process is only taking about 50ms. Can anyone think of any cunning shortcuts I can take to speed up this process and avoid the need to copy the buffer twice as I'm currently doing?
[ "The ctypes.c_char_Array_829400 instance has the property .raw which returns a string possibly containing NUL bytes, and the property .value which returns the string up to the first NUL byte if it contains one or more.\nHowever, you can also use ctypes the access the string at self.rgb_buffer_ptr, like this:\nctypes.string_at(self.rgb_buffer_ptr, self.buffer_size); this would avoid the need for the memmove call.\n" ]
[ 6 ]
[]
[]
[ "ctypes", "pyqt4", "python" ]
stackoverflow_0000714367_ctypes_pyqt4_python.txt
Q: WeakValueDictionary for holding any type Is there any way to work around the limitations of WeakValueDictionary to allow it to hold weak references to built-in types like dict or list? Can something be done at the C level in an extension module? I really need a weakref container that can hold (nearly) any type of object. A: According to the Python documentation you can create weak references to subclasses of dict and list... it's not a perfect solution, but if you're able to create a custom subclass of dict and use that instead of a native dict, it should be good enough. (I've never actually done this myself)
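A runnable sketch of the subclassing workaround described in the answer above; the subclass name is arbitrary. Plain dicts refuse weak references, but a trivial subclass accepts them:

import weakref

class RefableDict(dict):
    pass

registry = weakref.WeakValueDictionary()
payload = RefableDict(a=1)
registry['key'] = payload       # fine: subclass instances are weak-referenceable
# registry['bad'] = {'a': 1}    # a plain dict here would raise TypeError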
WeakValueDictionary for holding any type
Is there any way to work around the limitations of WeakValueDictionary to allow it to hold weak references to built-in types like dict or list? Can something be done at the C level in an extension module? I really need a weakref container that can hold (nearly) any type of object.
[ "According to the Python documentation you can create weak references to subclasses of dict and list... it's not a perfect solution, but if you're able to create a custom subclass of dict and use that instead of a native dict, it should be good enough. (I've never actually done this myself)\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0000715125_python.txt
Q: Best way to encode tuples with json In python I have a dictionary that maps tuples to a list of tuples. e.g. {(1,2): [(2,3),(1,7)]} I want to be able to encode this data use it with javascript, so I looked into json but it appears keys must be strings so my tuple does not work as a key. Is the best way to handle this is encode it as "1,2" and then parse it into something I want on the javascript? Or is there a more clever way to handle this. A: You might consider saying {"[1,2]": [(2,3),(1,7)]} and then when you need to get the value out, you can just parse the keys themselves as JSON objects, which all modern browsers can do with the built-in JSON.parse method (I'm using jQuery.each to iterate here but you could use anything): var myjson = JSON.parse('{"[1,2]": [[2,3],[1,7]]}'); $.each(myjson, function(keystr,val){ var key = JSON.parse(keystr); // do something with key and val }); On the other hand, you might want to just structure your object differently, e.g. {1: {2: [(2,3),(1,7)]}} so that instead of saying myjson[1,2] // doesn't work which is invalid Javascript syntax, you could say myjson[1][2] // returns [[2,3],[1,7]] A: If your key tuples are truly integer pairs, then the easiest and probably most straightforward approach would be as you suggest.... encode them to a string. You can do this in a one-liner: >>> simplejson.dumps(dict([("%d,%d" % k, v) for k, v in d.items()])) '{"1,2": [[2, 3], [1, 7]]}' This would get you a javascript data structure whose keys you could then split to get the points back again: '1,2'.split(',') A: My recommendation would be: { "1": [ { "2": [[2,3],[1,7]] } ] } It's still parsing, but depending on how you use it, it may be easier in this form. A: You can't use an array as a key in JSON. The best you can do is encode it. Sorry, but there's really no other sane way to do it. A: Could it simply be a two dimensional array? Then you may use integers as keys
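A round-trip sketch combining the encode and decode halves discussed above. simplejson is assumed, as was usual at the time; note that the inner tuples come back as lists, since JSON has no tuple type.

import simplejson as json

d = {(1, 2): [(2, 3), (1, 7)]}

# Encode: stringify each tuple key so it is a legal JSON object key.
encoded = json.dumps(dict(('%d,%d' % k, v) for k, v in d.items()))

# Decode: split the string keys back into integer tuples.
decoded = dict((tuple(int(n) for n in k.split(',')), v)
               for k, v in json.loads(encoded).items())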
Best way to encode tuples with json
In python I have a dictionary that maps tuples to a list of tuples. e.g. {(1,2): [(2,3),(1,7)]} I want to be able to encode this data to use it with javascript, so I looked into json, but it appears keys must be strings, so my tuple does not work as a key. Is the best way to handle this to encode it as "1,2" and then parse it into something I want on the javascript side? Or is there a more clever way to handle this?
[ "You might consider saying\n{\"[1,2]\": [(2,3),(1,7)]}\n\nand then when you need to get the value out, you can just parse the keys themselves as JSON objects, which all modern browsers can do with the built-in JSON.parse method (I'm using jQuery.each to iterate here but you could use anything):\nvar myjson = JSON.parse('{\"[1,2]\": [[2,3],[1,7]]}');\n$.each(myjson, function(keystr,val){\n var key = JSON.parse(keystr);\n // do something with key and val\n});\n\nOn the other hand, you might want to just structure your object differently, e.g.\n{1: {2: [(2,3),(1,7)]}}\n\nso that instead of saying\nmyjson[1,2] // doesn't work\n\nwhich is invalid Javascript syntax, you could say\nmyjson[1][2] // returns [[2,3],[1,7]]\n\n", "If your key tuples are truly integer pairs, then the easiest and probably most straightforward approach would be as you suggest.... encode them to a string. You can do this in a one-liner:\n>>> simplejson.dumps(dict([(\"%d,%d\" % k, v) for k, v in d.items()]))\n'{\"1,2\": [[2, 3], [1, 7]]}'\n\nThis would get you a javascript data structure whose keys you could then split to get the points back again:\n'1,2'.split(',')\n\n", "My recommendation would be:\n{ \"1\": [\n { \"2\": [[2,3],[1,7]] }\n ]\n}\n\nIt's still parsing, but depending on how you use it, it may be easier in this form.\n", "You can't use an array as a key in JSON. The best you can do is encode it. Sorry, but there's really no other sane way to do it.\n", "Could it simply be a two dimensional array? Then you may use integers as keys\n" ]
[ 28, 10, 3, 2, 1 ]
[]
[]
[ "json", "python" ]
stackoverflow_0000715550_json_python.txt
Q: Python SAX parser says XML file is not well-formed I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt" and "&gt" characters, presumably from html tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening? A: I would suggest putting those tags back in and making sure it still works. Then, if you want to take them out, do it one at a time until it breaks. However, I question the wisdom of taking them out. If it's your XML file, you should understand it better. If it's a third-party XML file, you really shouldn't be fiddling with it (until you understand it better :-). A: Does the sax parser not give you details about where it thinks it's not well-formed? Have you tried loading the file into an XML editor and checking it there? Do other XML parsers accept it? The schema shouldn't change whether or not the XML is well-formed or not; it may well change whether it's valid or not. See the wikipedia entry for XML well-formedness for a little bit more, or the XML specs for a lot more detail :) EDIT: To represent "&" in text, you should escape it as &amp; So: &lt should be &amp;lt (assuming you really want ampersand, l, t). A: I would second recommendation to try to parse it using another XML parser. That should give an indication as to whether it's the document that's wrong, or parser. Also, the actual error message might be useful. One fairly common problem for example is that the xml declaration (if one is used, it's optional) must be the very first thing -- not even whitespace is allowed before it. A: You could load it into Firefox, if you don't have an XML editor. Firefox shows you the error.
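On the edit at the end of the question: '&lt' without its closing semicolon is not a well-formed entity reference, which is one likely reason a strict parser rejects the file. The standard library can re-escape such character data before it is written back out; a minimal sketch:

from xml.sax.saxutils import escape

raw = 'if a &lt b &gt c & d'   # stray ampersands left over from stripped html
print escape(raw)              # if a &amp;lt b &amp;gt c &amp; d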
Python SAX parser says XML file is not well-formed
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt" and "&gt" characters, presumably from html tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
[ "I would suggest putting those tags back in and making sure it still works. Then, if you want to take them out, do it one at a time until it breaks.\nHowever, I question the wisdom of taking them out. If it's your XML file, you should understand it better. If it's a third-party XML file, you really shouldn't be fiddling with it (until you understand it better :-).\n", "Does the sax parser not give you details about where it thinks it's not well-formed?\nHave you tried loading the file into an XML editor and checking it there? Do other XML parsers accept it?\nThe schema shouldn't change whether or not the XML is well-formed or not; it may well change whether it's valid or not. See the wikipedia entry for XML well-formedness for a little bit more, or the XML specs for a lot more detail :)\nEDIT: To represent \"&\" in text, you should escape it as &amp;\nSo:\n&lt\n\nshould be\n&amp;lt\n\n(assuming you really want ampersand, l, t).\n", "I would second recommendation to try to parse it using another XML parser. That should give an indication as to whether it's the document that's wrong, or parser.\nAlso, the actual error message might be useful. One fairly common problem for example is that the xml declaration (if one is used, it's optional) must be the very first thing -- not even whitespace is allowed before it.\n", "You could load it into Firefox, if you don't have an XML editor. Firefox shows you the error.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "python", "sax", "xml" ]
stackoverflow_0000708531_python_sax_xml.txt
Q: Launching a .py python script from within a cgi script I'm trying to launch a .py script from within a cgi script while running a local cgi server. The cgi script simply receives some data from Google Earth and passes it to the .py script which is currently being called using execfile('script.py') placed at the end of the cgi script. The script runs to completion, however script.py contains some print statements that I need to be able to monitor while the process runs. Any errors in the .py print to the localhost console window and the print statements seem to be buffered. Is there a way to send the output from the .py to another python console window while the localhost console is running? It seems like the subprocess module should do what I need but I've only been able to send the output to a variable or a logfile. This is fine except that I need to see the print statements in real-time. Thanks in advance A: You say you're launching a python script from a CGI script, but you don't specify what language the CGI script is written in. Because CGI is simply an interface, it's not clear what language the CGI script is written in. I'm going to assume python, since that makes the most sense. What would work best would be to write your messages out to a log file. In your "script.py" file, instead of using print, you should open a log file for appending, like so: logfile = file("/var/log/my-app-log.txt", "a") Then, wherever you have a print statement like this: print("Starting to do step 2...") Change it to a logfile.write() statement like this: logfile.write("starting to do step 2...\n") Note that you will need to add newlines separately as file.write() does not add one for you. Then, you can watch what's going on in your script with a command like tail: tail -f /var/log/my-app-log.txt This will show data as it gets appended to your log file. A: Use popen2 or subprocess to launch a console while redirecting your output stream to that console.
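Since the question specifically mentions buffered print output, here is the log-file approach from the answer above with an explicit flush added, so tail -f shows each line as soon as it is written. The path and message are placeholders.

logfile = open('/var/log/my-app-log.txt', 'a')

def log(message):
    logfile.write(message + '\n')
    logfile.flush()   # push the line to disk so `tail -f` sees it immediately

log('starting to do step 2...')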
Launching a .py python script from within a cgi script
I'm trying to launch a .py script from within a cgi script while running a local cgi server. The cgi script simply receives some data from Google Earth and passes it to the .py script which is currently being called using execfile('script.py') placed at the end of the cgi script. The script runs to completion, however script.py contains some print statements that I need to be able to monitor while the process runs. Any errors in the .py print to the localhost console window and the print statements seem to be buffered. Is there a way to send the output from the .py to another python console window while the localhost console is running? It seems like the subprocess module should do what I need but I've only been able to send the output to a variable or a logfile. This is fine except that I need to see the print statements in real-time. Thanks in advance
[ "You say you're launching a python script from a CGI script, but you don't specify what language the CGI script is written in. Because CGI is simply an interface, it's not clear what language the CGI script is written in. I'm going to assume python, since that makes the most sense. \nWhat would work best would be to write your messages out to a log file. In your \"script.py\" file, instead of using print, you should open a log file for appending, like so: \nlogfile = file(\"/var/log/my-app-log.txt\", \"a\")\n\nThen, wherever you have a print statement like this:\nprint(\"Starting to do step 2...\")\n\nChange it to a logfile.write() statement like this:\nlogfile.write(\"starting to do step 2...\\n\")\n\nNote that you will need to add newlines separately as file.write() does not add one for you. \nThen, you can watch what's going on in your script with a command like tail:\ntail -f /var/log/my-app-log.txt\n\nThis will show data as it gets appended to your log file. \n", "Use popen2 or subprocess to launch a console while redirecting your output stream to that console.\n" ]
[ 1, 0 ]
[]
[]
[ "cgi", "python", "scripting" ]
stackoverflow_0000715791_cgi_python_scripting.txt
Q: How can I print entity numbers in my xml document instead of entity names using python's lxml? I'm using lxml and python to generate xml documents (just using etree.tostring(root) ) but at the moment the resulting xml displays html entities as with named entities ( &lt ; ) rather than their numeric values ( &#60 ; ). How exactly do I go about changing this so that the result uses the numeric values instead of the names? Thanks A: Ultimately, it looks like the python code will call xmlNodeDumpOutput in the libxml2 library. Unfortunately, it doesn't look like there is any way to configure this to control how such entities are represented. Looking at entities.c in xmlEncodeEntitiesReentrant, the < > and & characters are hardcoded to always use the appropriate XML entity, so there seems no way to force it to use numeric values. If you need this, you'll probably have to perform another pass on the string, and manually perform "outputString.replace("&lt;","&#60;")" for those characters.
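A sketch of the post-processing pass the answer suggests, covering the three entities libxml2 hard-codes; a plain string replace is crude but matches the answer's advice:

from lxml import etree

root = etree.fromstring('<doc>5 &lt; 6 &amp; 7 &gt; 2</doc>')
xml = etree.tostring(root)

for named, numeric in (('&lt;', '&#60;'), ('&gt;', '&#62;'), ('&amp;', '&#38;')):
    xml = xml.replace(named, numeric)

print xml   # <doc>5 &#60; 6 &#38; 7 &#62; 2</doc>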
How can I print entity numbers in my xml document instead of entity names using python's lxml?
I'm using lxml and python to generate xml documents (just using etree.tostring(root) ), but at the moment the resulting xml writes html entities as named entities ( &lt ; ) rather than their numeric values ( &#60 ; ). How exactly do I go about changing this so that the result uses the numeric values instead of the names? Thanks
[ "Ultimately, it looks like the python code will call xmlNodeDumpOutput in the libxml2 library.\nUnfortunately, it doesn't look like there is any way to configure this to control how such entities are represented. Looking at entities.c in xmlEncodeEntitiesReentrant, the < > and & characters are hardcoded to always use the appropriate XML entity, so there seems no way to force it to use numeric values.\nIf you need this, you'll probably have to perform another pass on the string, and manually perform \"outputString.replace(\"&lt;\",\"&#60;\")\" for those characters.\n" ]
[ 2 ]
[]
[]
[ "lxml", "python", "xml" ]
stackoverflow_0000715304_lxml_python_xml.txt
Q: Barchart sizing of text & barwidth with matplotlib - python I'm creating a bar chart with matplotlib-0.91 (for the first time) but the y axis labels are being cut off. If I increase the width of the figure enough they eventually show up completely but then the output is not the correct size. Any way to deal with this? A: I think I ran into a similar problem. See if this helps adjusting the label's font size: import matplotlib.pyplot as plt import matplotlib.font_manager as fm fontsize2use = 10 fig = plt.figure(figsize=(10,5)) plt.xticks(fontsize=fontsize2use) plt.yticks(fontsize=fontsize2use) fontprop = fm.FontProperties(size=fontsize2use) ax = fig.add_subplot(111) ax.set_xlabel('XaxisLabel') ax.set_ylabel('YaxisLabel') . <main plotting code> . ax.legend(loc=0, prop=fontprop) For the bar width, if your using pyplot.bar it looks like you can play with the width attribute. A: Take a look at subplots_adjust, or just use axes([left,bottom,width,height]).
Barchart sizing of text & barwidth with matplotlib - python
I'm creating a bar chart with matplotlib-0.91 (for the first time) but the y axis labels are being cut off. If I increase the width of the figure enough they eventually show up completely but then the output is not the correct size. Any way to deal with this?
[ "I think I ran into a similar problem.\nSee if this helps adjusting the label's font size:\nimport matplotlib.pyplot as plt\nimport matplotlib.font_manager as fm\n\nfontsize2use = 10\n\nfig = plt.figure(figsize=(10,5))\nplt.xticks(fontsize=fontsize2use) \nplt.yticks(fontsize=fontsize2use) \nfontprop = fm.FontProperties(size=fontsize2use)\nax = fig.add_subplot(111)\nax.set_xlabel('XaxisLabel')\nax.set_ylabel('YaxisLabel')\n.\n<main plotting code>\n.\nax.legend(loc=0, prop=fontprop) \n\nFor the bar width, if your using pyplot.bar it looks like you can play with the width attribute.\n", "Take a look at subplots_adjust, or just use axes([left,bottom,width,height]).\n" ]
[ 4, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0000712082_matplotlib_python.txt
Q: Populating form field based on query/slug factor I've seen some similar questions, but nothing that quite pointed me in the direction I was hoping for. I have a situation where I have a standard django form built off of a model. This form has a drop down box where you select an item you want to post a comment on. Now I'd like people to be able to browse by items, and click a link to comment on that particular item. What I'd like is for when a user clicks that link they'll be presented with the same old form, however, the dropbox will be defaulted to the item they wanted to comment on. Is there a sane way to do this with the existing form? Should I create a separate form entirely for this need? As a note, this isn't a true comment system, and isn't intended to be. One idea I had was to construct urls like: comment/?q=item1 Catching the 'item1' section, and then over riding the save function to force that into the form, while hiding the company in the form. From a UI standpoint, I'm not ecstatic with that idea though. Any thoughts or ideas? A: If I'm reading your question right, this is a fairly common use-case and well support by django forms. You can use the same form for both scenarios you describe. Let's say the item to be commented has the primary key 5. You would build a link for the user to click with a URL that looks like this: <a href="/comment/5/">Comment on me</a> (This would work just as well with a slug field, though see the comment below about how the identifier must match the ID in the field's choices: /comment/my_item_1/) Your view would pick up the parameter, and pass it on to the form in the initial parameter: def show_comment_form(request, item_id): form = MyCommentForm(initial={'item_drop_down':item_id}) The form will be displayed with the drop-down pre-selected. For this example to work, of course, the item_id parameter must match whatever the choice identifier is for the item field (if it's built automatically off a model field, as it sounds, that will probably be the primary key of the available items' class). By this I mean that, if the choices were to look like: choices = ( (1, 'Item 1'), (2, 'Item 2') ) Then the item_id should be 1 or 2 as that's what will be in the resulting <select> options (ie: <option value="1">Item 1</option>). Automatically created ModelForm classes will take care of this for you, otherwise just be vigilant. You can find more information here in the django docs: Dynamic Initial Values
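The answer above assumes the item id arrives as a URL parameter; for completeness, a hypothetical Django 1.0-era urls.py wiring for the view it defines (module and pattern names invented):

from django.conf.urls.defaults import patterns

urlpatterns = patterns('myapp.views',
    (r'^comment/(?P<item_id>\d+)/$', 'show_comment_form'),
)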
Populating form field based on query/slug factor
I've seen some similar questions, but nothing that quite pointed me in the direction I was hoping for. I have a situation where I have a standard django form built off of a model. This form has a drop down box where you select an item you want to post a comment on. Now I'd like people to be able to browse by items, and click a link to comment on that particular item. What I'd like is for a user who clicks that link to be presented with the same old form, but with the drop-down box defaulted to the item they wanted to comment on. Is there a sane way to do this with the existing form? Should I create a separate form entirely for this need? As a note, this isn't a true comment system, and isn't intended to be. One idea I had was to construct urls like: comment/?q=item1 Catching the 'item1' section, and then overriding the save function to force that into the form, while hiding the company in the form. From a UI standpoint, I'm not ecstatic with that idea though. Any thoughts or ideas?
[ "If I'm reading your question right, this is a fairly common use-case and well support by django forms. You can use the same form for both scenarios you describe.\nLet's say the item to be commented has the primary key 5. You would build a link for the user to click with a URL that looks like this:\n<a href=\"/comment/5/\">Comment on me</a>\n\n(This would work just as well with a slug field, though see the comment below about how the identifier must match the ID in the field's choices: /comment/my_item_1/)\nYour view would pick up the parameter, and pass it on to the form in the initial parameter:\ndef show_comment_form(request, item_id):\n form = MyCommentForm(initial={'item_drop_down':item_id})\n\nThe form will be displayed with the drop-down pre-selected. For this example to work, of course, the item_id parameter must match whatever the choice identifier is for the item field (if it's built automatically off a model field, as it sounds, that will probably be the primary key of the available items' class).\nBy this I mean that, if the choices were to look like:\nchoices = ( (1, 'Item 1'),\n (2, 'Item 2') )\n\nThen the item_id should be 1 or 2 as that's what will be in the resulting <select> options (ie: <option value=\"1\">Item 1</option>). Automatically created ModelForm classes will take care of this for you, otherwise just be vigilant.\nYou can find more information here in the django docs: Dynamic Initial Values\n" ]
[ 1 ]
[]
[]
[ "django_forms", "python" ]
stackoverflow_0000715889_django_forms_python.txt
Q: OptionParser - supporting any option at the end of the command line I'm writing a small program that's supposed to execute a command on a remote server (let's say a reasonably dumb wrapper around ssh [hostname] [command]). I want to execute it as such: ./floep [command] However, I need to pass certain command lines from time to time: ./floep -v [command] so I decided to use optparse.OptionParser for this. Problem is, I sometimes the command also has argument, which works fine if I do: ./floep -v "uname -a" But I also want it to work when I use: ./floep -v uname -a The idea is, as soon as I come across the first non-option argument, everything after that should be part of my command. This, however, gives me: Usage: floep [options] floep: error: no such option: -a Does OptionParser support this syntax? If so: how? If not: what's the best way to fix this? A: Try using disable_interspersed_args() #!/usr/bin/env python from optparse import OptionParser parser = OptionParser() parser.disable_interspersed_args() parser.add_option("-v", action="store_true", dest="verbose") (options, args) = parser.parse_args() print "Options: %s args: %s" % (options, args) When run: $ ./options.py foo -v bar Options: {'verbose': None} args: ['foo', '-v', 'bar'] $ ./options.py -v foo bar Options: {'verbose': True} args: ['foo', 'bar'] $ ./options.py foo -a bar Options: {'verbose': None} args: ['foo', '-a', 'bar'] A: OptionParser instances can actually be manipulated during the parsing operation for complex cases. In this case, however, I believe the scenario you describe is supported out-of-the-box (which would be good news if true! how often does that happen??). See this section in the docs: Querying and manipulating your option parser. To quote the link above: disable_interspersed_args() Set parsing to stop on the first non-option. Use this if you have a command processor which runs another command which has options of its own and you want to make sure these options don’t get confused. For example, each command might have a different set of options. A: from optparse import OptionParser import subprocess import os import sys parser = OptionParser() parser.add_option("-q", "--quiet", action="store_true", dest="quiet", default=False, help="don't print output") parser.add_option("-s", "--signal", action="store_true", dest="signal", default=False, help="signal end of program and return code") parser.disable_interspersed_args() (options, command) = parser.parse_args() if not command: parser.print_help() sys.exit(1) if options.quiet: ret = subprocess.call(command, stdout=open(os.devnull, 'w'), stderr=subprocess.STDOUT) else: ret = subprocess.call(command) if options.signal: print "END OF PROGRAM!!! Code: %d" % ret
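Tying the disable_interspersed_args() answer back to the ssh-wrapper use case in the question, a compact sketch; the hostname is hard-coded as a placeholder:

import subprocess
from optparse import OptionParser

parser = OptionParser()
parser.disable_interspersed_args()   # stop parsing at the first non-option
parser.add_option('-v', action='store_true', dest='verbose', default=False)
options, command = parser.parse_args()

if options.verbose:
    print 'running:', ' '.join(command)
ret = subprocess.call(['ssh', 'somehost'] + command)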
OptionParser - supporting any option at the end of the command line
I'm writing a small program that's supposed to execute a command on a remote server (let's say a reasonably dumb wrapper around ssh [hostname] [command]). I want to execute it as such: ./floep [command] However, I need to pass certain command-line options from time to time: ./floep -v [command] so I decided to use optparse.OptionParser for this. The problem is, sometimes the command also has arguments, which works fine if I do: ./floep -v "uname -a" But I also want it to work when I use: ./floep -v uname -a The idea is, as soon as I come across the first non-option argument, everything after that should be part of my command. This, however, gives me: Usage: floep [options] floep: error: no such option: -a Does OptionParser support this syntax? If so: how? If not: what's the best way to fix this?
[ "Try using disable_interspersed_args()\n#!/usr/bin/env python\nfrom optparse import OptionParser\n\nparser = OptionParser()\nparser.disable_interspersed_args()\nparser.add_option(\"-v\", action=\"store_true\", dest=\"verbose\")\n(options, args) = parser.parse_args()\n\nprint \"Options: %s args: %s\" % (options, args)\n\nWhen run:\n\n$ ./options.py foo -v bar\nOptions: {'verbose': None} args: ['foo', '-v', 'bar']\n$ ./options.py -v foo bar\nOptions: {'verbose': True} args: ['foo', 'bar']\n$ ./options.py foo -a bar\nOptions: {'verbose': None} args: ['foo', '-a', 'bar']\n\n", "OptionParser instances can actually be manipulated during the parsing operation for complex cases. In this case, however, I believe the scenario you describe is supported out-of-the-box (which would be good news if true! how often does that happen??). See this section in the docs: Querying and manipulating your option parser.\nTo quote the link above:\n\ndisable_interspersed_args()\nSet parsing to stop on the first non-option. Use this if you have a \n command processor which runs another command which has options of its\n own and you want to make sure these options don’t get confused. For example, \n each command might have a different set of options.\n\n", "from optparse import OptionParser\nimport subprocess\nimport os\nimport sys\n\nparser = OptionParser()\nparser.add_option(\"-q\", \"--quiet\",\n action=\"store_true\", dest=\"quiet\", default=False,\n help=\"don't print output\")\nparser.add_option(\"-s\", \"--signal\",\n action=\"store_true\", dest=\"signal\", default=False,\n help=\"signal end of program and return code\")\n\nparser.disable_interspersed_args()\n(options, command) = parser.parse_args()\n\nif not command:\n parser.print_help()\n sys.exit(1)\n\nif options.quiet:\n ret = subprocess.call(command, stdout=open(os.devnull, 'w'), \n stderr=subprocess.STDOUT)\nelse:\n ret = subprocess.call(command)\n\nif options.signal:\n print \"END OF PROGRAM!!! Code: %d\" % ret\n\n" ]
[ 13, 1, 1 ]
[ "You can use a bash script like this:\n#!/bin/bash\nwhile [ \"-\" == \"${1:0:1}\" ] ; do\n if [ \"-v\" == \"${1}\" ] ; then\n # do something\n echo \"-v\"\n elif [ \"-s\" == \"${1}\" ] ; then\n # do something\n echo \"-s\"\n fi\n shift\ndone\n${@}\n\nThe ${@} gives you the rest of the command line that was not consumed by the shift calls.\nTo use ssh you simply change the line from\n ${@}\nto\n ssh ${user}@${host} ${@}\ntest.sh echo bla\nbla \ntest.sh -v echo bla\n-v\nbla \ntest.sh -v -s echo bla\n-v\n-s\nbla \n" ]
[ -1 ]
[ "optparse", "python" ]
stackoverflow_0000716006_optparse_python.txt
Q: How to integrate BIRT with Python Has anyone ever tried that? A: What kind of integration are you talking about? If you want to call some BIRT API the I gues it could be done from Jython as Jython can call any Java API. If you don't need to call the BIRT API then you can just get the birt reports with http requests from the BIRT report server (a tomcat application).
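A rough sketch of the second route in the answer, fetching a rendered report over HTTP. The servlet path and the __report/__format parameters follow the stock BIRT web viewer conventions, but the whole URL should be treated as a placeholder for a particular deployment:

import urllib2

url = ('http://reportserver:8080/birt-viewer/run'
       '?__report=report/sales.rptdesign&__format=pdf')
pdf = urllib2.urlopen(url).read()
open('sales.pdf', 'wb').write(pdf)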
How to integrate BIRT with Python
Has anyone ever tried that?
[ "What kind of integration are you talking about?\nIf you want to call some BIRT API the I gues it could be done from Jython as Jython can call any Java API.\nIf you don't need to call the BIRT API then you can just get the birt reports with http requests from the BIRT report server (a tomcat application).\n" ]
[ 1 ]
[]
[]
[ "birt", "java", "python", "reporting" ]
stackoverflow_0000697594_birt_java_python_reporting.txt
Q: My first python program: can you tell me what I'm doing wrong? I hope this question is considered appropriate for stackoverflow. If not, I'll remove the question right away. I've just written my very first python program. The idea is that you can issue a command, and it gets sent to several servers in parallel. This is just for personal educational purposes. The program works! I really want to get better at python and therefore I'd like to ask the following questions: My style looks messy compared to PHP (what I'm used to). Do you have any suggestions around style improvements? Am I using the correct libraries? Am I using them correctly? Am I using the correct datatypes? Am I using them correctly? I have a good programming background, but it took me quite a while to develop a decent style for PHP (PEAR-coding standards, knowing what tools to use and when). The source (one file, 92 lines of code) http://code.google.com/p/floep/source/browse/trunk/floep A: It is usually preferred that what follows after the end-of-statement : is in a separate line (also don't add a space before it) if options.verbose: print "" instead of if options.verbose : print "" You don't need to check the len of a list if you are going to iterate over it if len(threadlist) > 0 : for server in threadlist : ... is redundant; a more 'readable' version is (python is smart enough to not iterate over an empty list): for server in threadlist: ... Also, a more 'pythonic' way is to use list comprehensions (but this is certainly a debatable opinion) servers = [] for i in grouplist : servers+=getServers(i) can be shortened to servers = [s for i in grouplist for s in getServers(i)] A: Before unloading any criticism, first let me say congratulations on getting your first Python program working. Moving from one language to another can be a chore, constantly fumbling around with syntax issues and hunting through unfamiliar libraries. The most quoted style guideline is PEP-8, but that's only a guide, and at least some part of it is ignored...no, I mean deemed not applicable to some specific situation with all due respect to the guideline authors and contributors :-). I can't compare it to PHP, but compared to other Python applications it is pretty clear that you are following style conventions from other languages. I didn't always agree with many things that other developers said you must do, but over time I recognized why using conventions helps communicate what the application is doing and will help other developers help you. Raise exceptions, not strings. raise 'Server or group ' + sectionname + ' not found in ' + configfile becomes raise RuntimeError('Server or group ' + sectionname + ' not found in ' + configfile) No space before the ':' at the end of an 'if' or 'for', and don't put multiple statements on the same line, and be consistent about putting spaces around operators. Use variable names for objects and stick with i and j for loop index variables (like our masterful FORTRAN forefathers): for i in grouplist : servers+=getServers(i) becomes: for section in grouplist: servers += getServers(section) Containers can be tested for contents without getting their length: while len(threadlist) > 0 : becomes while threadlist: and if command.strip() == "" : becomes if not command.strip(): Splitting a tuple is usually not put in parentheses on the left hand side of a statement, and the command logic is a bit convoluted. If there are no args then the " ".join(...) is going to be an empty string: (options,args) = parser.parse_args() if options.verbose : print "floep 0.1" command = " ".join(args) if command.strip() == "" : parser.error('no command given') becomes options, args = parser.parse_args() if options.verbose: print "floep 0.1" if not args: parser.error('no command given') command = " ".join(args) A python for loop has an unusual 'else' clause which is executed if the loop goes through all of the elements without a 'break': for server in threadlist : foundOne = False if not server.isAlive() : ...snip... foundOne = True if not foundOne : time.sleep(0.010) becomes for server in threadlist: if not server.isAlive(): ...snip... break else: time.sleep(0.010) Getting a list of lines and then joining them back together is a bit long winded: result = proc.readlines() strresult = '' for line in result : strresult+=line self.result = strresult becomes self.result = proc.read() Your library use is good, check out the subprocess module, it's a little more up-to-date. Your datatypes are fine. And you'll get lots of other answers :-) A: String exceptions are deprecated in Python, so this line: if not config.has_section(sectionname): raise 'Server or group ' + sectionname + ' not found in ' + configfile should be reworked into something like this: if not config.has_section(sectionname): raise ConfigNotFoundError( "Server or group" + sectionname + "not found in" + configfile) class ConfigNotFoundError(Exception): pass [Edited to reflect the suggestion of dangph in the comments] It's more lines of code, but it's better for future upgrades. For readability's sake, something like this: parser.add_option('-q','--quiet',action="store_false", help="Display only server output", dest="verbose", default=True) Can be rewritten like this: parser.add_option('-q', '--quiet', action="store_false", help="Display only server output", dest="verbose", default=True) You might prefer another method of splitting the method call up, but the idea is that long lines can be hard to read. You should also read PEP 8 to get a sense of Python style. A: Often, for reuse purposes, we do the following, starting at about line 48 in your program def main(): config = ConfigParser.RawConfigParser() etc. if __name__ == "__main__": main() This is just a starting point. Once you've done this, you realize that main() is really two parts: parsing the command-line interface and doing the work. You then want to refactor things to look like this. def serverWork(group,...): servers = getServers(group) etc. def main(): config = ConfigParser.RawConfigParser() if command.strip() == "": parser.error('no command given') else: serverWork( options.group, options.etc., ... ) Now, you have elevated the real work to a function within this module. Your serverWork function can now be reused easily by other programs or scripts.
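One fix in the first answer deserves spelling out: the plain [getServers(i) for i in grouplist] form would build a list of lists, so the += loop corresponds to a flattening comprehension. A self-contained toy version (group names and members invented):

group_members = {'web': ['w1', 'w2'], 'db': ['db1']}

def getServers(group):
    return group_members[group]

grouplist = ['web', 'db']
servers = [s for g in grouplist for s in getServers(g)]
print servers   # ['w1', 'w2', 'db1']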
My first python program: can you tell me what I'm doing wrong?
I hope this question is considered appropriate for stackoverflow. If not, I'll remove the question right away. I've just written my very first python program. The idea is that you can issue a command, and it gets sent to several servers in parallel. This is just for personal educational purposes. The program works! I really want to get better at python and therefore I'd like to ask the following questions: My style looks messy compared to PHP (what I'm used to). Do you have any suggestions around style improvements? Am I using the correct libraries? Am I using them correctly? Am I using the correct datatypes? Am I using them correctly? I have a good programming background, but it took me quite a while to develop a decent style for PHP (PEAR-coding standards, knowing what tools to use and when). The source (one file, 92 lines of code) http://code.google.com/p/floep/source/browse/trunk/floep
[ "Usually is preferred that what follows after the end of sentence : is in a separate line (also don't add a space before it)\nif options.verbose:\n print \"\"\n\ninstead of\nif options.verbose : print \"\"\n\nYou don't need to check the len of a list if you are going to iterate over it\nif len(threadlist) > 0 : \n for server in threadlist :\n ...\n\nis redundant, a more 'readable' is (python is smart enough to not iterate over an empty list):\nfor server in threadlist:\n ...\n\nAlso a more 'pythonistic' is to use list's comprehensions (but certainly is a debatable opinion)\nserver = []\nfor i in grouplist : servers+=getServers(i)\n\ncan be shortened to\nserver = [getServers(i) for i in grouplist]\n\n", "Before unloading any criticism, first let me say congratulations on getting your first Python program working. Moving from one language to another can be a chore, constantly fumbling around with syntax issues and hunting through unfamiliar libraries.\nThe most quoted style guideline is PEP-8, but that's only a guide, and at least some part of it is ignored...no, I mean deemed not applicable to some specific situation with all due respect to the guideline authors and contributors :-).\nI can't compare it to PHP, but compared to other Python applications it is pretty clear that you are following style conventions from other languages. I didn't always agree with many things that other developers said you must do, but over time I recognized why using conventions helps communicate what the application is doing and will help other developers help you.\n\nRaise exceptions, not strings.\nraise 'Server or group ' + sectionname + ' not found in ' + configfile\nbecomes\nraise RuntimeError('Server or group ' + sectionname + ' not found in ' + configfile)\n\nNo space before the ':' at the end of an 'if' or 'for', and don't put multiple statements on the same line, and be consistent about putting spaces around operators. Use variable names for objects and stick with i and j for loop index variables (like our masterful FORTRAN forefathers):\nfor i in grouplist : servers+=getServers(i)\nbecomes:\nfor section in grouplist:\n servers += getServers(section)\n\nContainers can be tested for contents without getting their length:\nwhile len(threadlist) > 0 :\nbecomes\nwhile threadlist:\nand\nif command.strip() == \"\" :\nbecomes\nif command.strip():\n\nSplitting a tuple is usually not put in parenthesis on the left hand side of a statement, and the command logic is a bit convoluted. If there are no args then the \" \".join(...) 
is going to be an empty string:\n\n(options,args) = parser.parse_args()\n\nif options.verbose : print \"floep 0.1\" \n\ncommand = \" \".join(args)\n\nif command.strip() == \"\" : parser.error('no command given')\n\nbecomes\n\noptions, args = parser.parse_args()\nif options.verbose:\n print \"floep 0.1\" \n\nif not args:\n parser.error('no command given')\ncommand = \" \".join(args)\n\n\nA python for loop has an unusual 'else' clause which is executed if the loop goes through all of the elements without a 'break':\n\n for server in threadlist :\n foundOne = False \n if not server.isAlive() :\n ...snip...\n foundOne = True\n\n if not foundOne :\n time.sleep(0.010)\n\nbecomes\n\n for server in threadlist:\n if not server.isAlive():\n ...snip...\n break\n else:\n time.sleep(0.010)\n\n\nGetting a list of lines and then joining them back together is a bit long winded:\n\n result = proc.readlines()\n strresult = ''\n for line in result : strresult+=line \n self.result = strresult\n\nbecomes\n\n self.result = proc.read()\n\n\nYour library use is good, check out the subprocess module, it's a little more up-to-date.\nYour datatypes are fine.\nAnd you'll get lots of other anwsers :-)\n", "String exceptions are deprecated in Python, so this line:\nif not config.has_section(sectionname): \n raise 'Server or group ' + sectionname + ' not found in ' + configfile\n\nshould be reworked into something like this:\nif not config.has_section(sectionname):\n raise ConfigNotFoundError(\n \"Server or group\" + sectionname + \"not found in\" + configfile)\n\nclass ConfigNotFoundError(Exception):\n pass\n\n[Edited to reflect the suggestion of dangph in the comments]\nIt's more lines of code, but it's better for future upgrades.\nFor readability's sake, something like this:\nparser.add_option('-q','--quiet',action=\"store_false\", help=\"Display only server output\", dest=\"verbose\", default=True)\n\nCan be rewritten like this:\nparser.add_option('-q',\n '--quiet',\n action=\"store_false\",\n help=\"Display only server output\", \n dest=\"verbose\", \n default=True)\n\nYou might prefer another method of splitting the method call up, but the idea is that long lines can be hard to read.\nYou should also read PEP 8 to get a sense of Python style.\n", "Often, for reuse purposes, we do the following, starting at about line 48 in your program\ndef main():\n config = ConfigParser.RawConfigParser()\n etc.\n\nif __name__ == \"__main__\":\n main()\n\nThis is just a starting point. \nOnce you've done this, you realize that main() is really two parts: parsing the command-line interface and doing the work. You then want to refactor things to look like this.\ndef serverWork(group,...):\n servers = getServers(group)\n etc.\n\ndef main():\n config = ConfigParser.RawConfigParser()\n\n if command.strip() == \"\":\n parser.error('no command given')\n else:\n serverWork( options.group, options.etc., ... )\n\nNow, you have elevated the real work to a function within this module. Your serverWork function can now be reused easily by other programs or scripts.\n" ]
[ 10, 7, 5, 3 ]
[]
[]
[ "python" ]
stackoverflow_0000716278_python.txt
Q: Google Data API authentication I am trying to get my Django app (NOT using Google app engine) to retrieve data from Google Contacts using the Google Contacts Data API. I am going through the authentication documentation as well as the Data API Python client docs. The first step (AuthSubRequest), which is getting the single-use token, works fine. The next step (AuthSubSessionToken) is upgrading the single-use token to a session token. The Python API call UpgradeToSessionToken() simply didn't work for me; it gave me a NonAuthSubToken exception: gd_client = gdata.contacts.service.ContactsService() gd_client.auth_token = authsub_token gd_client.UpgradeToSessionToken() As an alternative I want to get it working by "manually" constructing the HTTP request: url = 'https://www.google.com/accounts/AuthSubSessionToken' headers = { 'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'AuthSub token=' + authsub_token, 'User-Agent': 'Python/2.6.1', 'Host': 'https://www.google.com', 'Accept': 'text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2', 'Connection': 'keep-alive', } req = urllib2.Request(url, None, headers) response = urllib2.urlopen(req) this gives me a different error: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop. The last 30x error message was: Moved Temporarily What am I doing wrong here? I'd appreciate help/advice/suggestions with either of the methods I am trying to use: the Python API call (UpgradeToSessionToken) or manually constructing the HTTP request with urllib2. A: According to the 2.0 documentation here there is a python example set... Running the sample code A full working sample client, containing all the sample code shown in this document, is available in the Python client library distribution, under the directory samples/contacts/contacts_example.py. The sample client performs several operations on contacts to demonstrate the use of the Contacts Data API. Hopefully it will point you in the right direction. A: I had a similar issue recently. Mine got fixed by setting "secure" to "true". next = 'http://www.coolcalendarsite.com/welcome.pyc' scope = 'http://www.google.com/calendar/feeds/' secure = True session = True calendar_service = gdata.calendar.service.CalendarService() A: There are four different ways to authenticate. Is it really that important for you to use AuthSub? If you can't get AuthSub to work, then consider the ClientLogin approach. I had no trouble getting that to work.
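A note on the NonAuthSubToken exception above: here is a minimal sketch of the upgrade step, assuming the legacy gdata-python-client where GDataService exposes SetAuthSubToken(). Registering the token through that setter, rather than assigning gd_client.auth_token directly, is what marks it as an AuthSub token so UpgradeToSessionToken() has something to upgrade; treat the exact method names as assumptions to check against your gdata version.
import gdata.contacts.service

gd_client = gdata.contacts.service.ContactsService()
gd_client.SetAuthSubToken(authsub_token)  # authsub_token: the single-use token from the redirect
gd_client.UpgradeToSessionToken()         # exchanges it for a session token in place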
Google Data API authentication
I am trying to get my Django app (NOT using Google app engine) to retrieve data from Google Contacts using the Google Contacts Data API. I am going through the authentication documentation as well as the Data API Python client docs. The first step (AuthSubRequest), which is getting the single-use token, works fine. The next step (AuthSubSessionToken) is upgrading the single-use token to a session token. The Python API call UpgradeToSessionToken() simply didn't work for me; it gave me a NonAuthSubToken exception: gd_client = gdata.contacts.service.ContactsService() gd_client.auth_token = authsub_token gd_client.UpgradeToSessionToken() As an alternative I want to get it working by "manually" constructing the HTTP request: url = 'https://www.google.com/accounts/AuthSubSessionToken' headers = { 'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'AuthSub token=' + authsub_token, 'User-Agent': 'Python/2.6.1', 'Host': 'https://www.google.com', 'Accept': 'text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2', 'Connection': 'keep-alive', } req = urllib2.Request(url, None, headers) response = urllib2.urlopen(req) this gives me a different error: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop. The last 30x error message was: Moved Temporarily What am I doing wrong here? I'd appreciate help/advice/suggestions with either of the methods I am trying to use: the Python API call (UpgradeToSessionToken) or manually constructing the HTTP request with urllib2.
[ "According to the 2.0 documentation here there is a python example set...\n\nRunning the sample code\nA full working sample client, containing all the sample code shown in this document, is available in the Python client library distribution, under the directory samples/contacts/contacts_example.py.\nThe sample client performs several operations on contacts to demonstrate the use of the Contacts Data API.\n\nHopefully it will point you in the right direction.\n", "I had a similar issue recently. Mine got fixed by setting \"secure\" to \"true\".\n next = 'http://www.coolcalendarsite.com/welcome.pyc'\n scope = 'http://www.google.com/calendar/feeds/'\n secure = True\n session = True\n calendar_service = gdata.calendar.service.CalendarService()\n\n", "There are four different ways to authenticate. Is it really that important for you to use AuthSub? If you can't get AuthSub to work, then consider the ClientLogin approach. I had no trouble getting that to work.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "django", "gdata", "gdata_api", "google_api", "python" ]
stackoverflow_0000695703_django_gdata_gdata_api_google_api_python.txt
Q: Why is my PyObjc Cocoa view class forgetting its fields? I was trying to hack up a tool to visualize shaders for my game and I figured I would try using Python and Cocoa. I have run into a brick wall of sorts, though. Maybe it's my somewhat poor understanding of Objective-C, but I cannot seem to get this code for a view I was trying to write working: from objc import YES, NO, IBAction, IBOutlet from Foundation import * from AppKit import * import gv class SceneView(NSOpenGLView): def __init__(self): NSOpenGLView.__init__(self) self.renderer = None def doinit(self): self.renderer = gv.CoreRenderer() def initWithFrame_(self, frame): self = super(SceneView, self).initWithFrame_(frame) if self: self.doinit() print self.__dict__ return self def drawRect_(self, rect): clearColor = [0.0,0.0,0.0,0.0] print self.__dict__ self.renderer.clear(CF_Target|CF_ZBuffer,clearColor) It outputs this when executed: {'renderer': <gv.CoreRenderer; proxy of <Swig Object of type 'GV::CoreRenderer *' at 0x202c7d0> >} {} 2009-04-03 19:13:30.941 geom-view-edit[50154:10b] An exception has occured: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/PyObjCTools/AppHelper.py", line 235, in runEventLoop File "/mnt/gilead/amcharg/projects/geom-view-edit/build/Debug/geom-view-edit.app/Contents/Resources/SceneView.py", line 37, in drawRect_ self.renderer.clear(CF_Target|CF_ZBuffer,clearColor) AttributeError: 'SceneView' object has no attribute 'renderer' It seems to be losing my renderer variable, which is not that surprising considering how funky the initWithFrame_ code is, but this was something Xcode seemed to write, which I suppose makes sense since Objective-C has the init-separate-from-alloc idiom. It is still strange seeing it in Python, however. Is there any way to salvage this, or should I take it out behind the code shed, shoot it, and use Qt or wxPython? I considered using objective-c but I want to test out these nifty swig bindings I just compiled =) A: Depending on what's happening elsewhere in your app, your instance might actually be getting copied. In this case, implement the copyWithZone method to ensure that the new copy gets the renderer as well. (Caveat, while I am a Python developer, and an Objective-C cocoa developer, I haven't used PyObjC myself, so I can't say for certain if you should be implementing copyWithZone or __copy__). In fact, shoving a copyWithZone method into the class with a print will allow you to tell if the method is being called and if that's the reason renderer appears to have vanished. Edit: Based on your feedback, I've pasted your code into a blank xcode python project (just substituting something else for gv.CoreRenderer, since I don't have that), and it works fine with some minor modifications. How are you instantiating your SceneView? In my case I: Created a blank xcode project using the Cocoa-Python template Created a new file called SceneView.py. I pasted in your code. Opened the MainMenu.xib file, and dragged an NSOpenGLView box onto the window. With the NSOpenGLView box selected, I went to the attributes inspector and changed the class of the box to SceneView Back in xcode, I added import SceneView in the imports in main.py so that the class would be available when the xib file is loaded I implemented an awakeFromNib method in SceneView.py to handle setting up self.renderer. Note that __init__ and initWithFrame are not called for nib objects during your program execution... 
they are considered "serialized" into the nib file, and therefore already instantiated. I'm glossing over some details, but this is why awakeFromNib exists. Everything worked on run. The __dict__ had appropriate values in the drawRect_ call, and such. Here's the awakeFromNib function: def awakeFromNib(self): print "Awake from nib" self.renderer = gv.CoreRenderer() So, I'm guessing there are just some crossed wires somewhere in how your object is being instantiated and/or added to the view. Are you using Interface Builder for your object, or are you manually creating it and adding it to a view later? I'm curious to see that you are getting loggin outputs from initWithFrame, which is why I'm asking how you are creating the SceneView. A: Even if they weren't serialized, the __init__-constructor of python isn't supported by the ObjectiveC-bridge. So one needs to overload e.g. initWithFrame: for self-created Views.
Why is my PyObjc Cocoa view class forgetting its fields?
I was trying to hack up a tool to visualize shaders for my game and I figured I would try using Python and Cocoa. I have run into a brick wall of sorts, though. Maybe it's my somewhat poor understanding of Objective-C, but I cannot seem to get this code for a view I was trying to write working: from objc import YES, NO, IBAction, IBOutlet from Foundation import * from AppKit import * import gv class SceneView(NSOpenGLView): def __init__(self): NSOpenGLView.__init__(self) self.renderer = None def doinit(self): self.renderer = gv.CoreRenderer() def initWithFrame_(self, frame): self = super(SceneView, self).initWithFrame_(frame) if self: self.doinit() print self.__dict__ return self def drawRect_(self, rect): clearColor = [0.0,0.0,0.0,0.0] print self.__dict__ self.renderer.clear(CF_Target|CF_ZBuffer,clearColor) It outputs this when executed: {'renderer': <gv.CoreRenderer; proxy of <Swig Object of type 'GV::CoreRenderer *' at 0x202c7d0> >} {} 2009-04-03 19:13:30.941 geom-view-edit[50154:10b] An exception has occured: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/PyObjCTools/AppHelper.py", line 235, in runEventLoop File "/mnt/gilead/amcharg/projects/geom-view-edit/build/Debug/geom-view-edit.app/Contents/Resources/SceneView.py", line 37, in drawRect_ self.renderer.clear(CF_Target|CF_ZBuffer,clearColor) AttributeError: 'SceneView' object has no attribute 'renderer' It seems to be losing my renderer variable, which is not that surprising considering how funky the initWithFrame_ code is, but this was something Xcode seemed to write, which I suppose makes sense since Objective-C has the init-separate-from-alloc idiom. It is still strange seeing it in Python, however. Is there any way to salvage this, or should I take it out behind the code shed, shoot it, and use Qt or wxPython? I considered using objective-c but I want to test out these nifty swig bindings I just compiled =)
[ "Depending on what's happening elsewhere in your app, your instance might actually be getting copied. \nIn this case, implement the copyWithZone method to ensure that the new copy gets the renderer as well. (Caveat, while I am a Python developer, and an Objective-C cocoa developer, I haven't used PyObjC myself, so I can't say for certain if you should be implementing copyWithZone or __copy__).\nIn fact, shoving a copyWithZone method into the class with a print will allow you to tell if the method is being called and if that's the reason renderer appears to have vanished.\n\nEdit: Base on your feedback, I've pasted your code into a blank xcode python project (just substituting something else for gv.CoreRenderer, since I don't have that), and it works fine with some minor modifications. How are you instantiating your SceneView?\nIn my case I:\n\nCreated a blank xcode project using the Cocoa-Python template\nCreated a new file called SceneView.py. I pasted in your code.\nOpened the MainMenu.xib file, and dragged an NSOpenGLView box onto the window.\nWith the NSOpenGLView box selected, I went to the attributes inspector and changed the class of the box to SceneView\nBack in xcode, I added import SceneView in the imports in main.py so that the class would be available when the xib file is loaded\nI implemented an awakeFromNib method in SceneView.py to handle setting up self.renderer. Note that __init__, and initWithFrame are not called for nib objects during your program execution... they are considered \"serialized\" into the nib file, and therefore already instantiated. I'm glossing over some details, but this is why awakeFromNib exists.\nEverything worked on run. The __dict__ had appropriate values in the drawRect_ call, and such.\n\nHere's the awakeFromNib function:\ndef awakeFromNib(self):\n print \"Awake from nib\"\n self.renderer = gv.CoreRenderer()\n\nSo, I'm guessing there are just some crossed wires somewhere in how your object is being instantiated and/or added to the view. Are you using Interface Builder for your object, or are you manually creating it and adding it to a view later? I'm curious to see that you are getting loggin outputs from initWithFrame, which is why I'm asking how you are creating the SceneView.\n", "Even if they weren't serialized, the __init__-constructor of python isn't supported by the ObjectiveC-bridge. So one needs to overload e.g. initWithFrame: for self-created Views.\n" ]
[ 3, 2 ]
[]
[]
[ "macos", "pyobjc", "python", "xcode" ]
stackoverflow_0000716386_macos_pyobjc_python_xcode.txt
Q: Does the Python library httplib2 cache URIs with GET strings? In the following example, what is cached correctly? Is there a Vary-Header I have to set server-side for the GET string? import httplib2 h = httplib2.Http(".cache") resp, content = h.request("http://test.com/list/") resp, content = h.request("http://test.com/list?limit=10") resp, content = h.request("http://test.com/list?limit=50") A: httplib2 uses the full URI for the cache key, so in this case each of the URLs you have in your example will be cached separately by the client. For the chapter and verse from the __init__.py file for httplib2, if you would like proof, have a look at the call to the cache at around line 1000: cachekey = defrag_uri cached_value = self.cache.get(cachekey) The defrag_uri is defined by the function urlnorm (line 170ish) and includes the scheme, authority, path, and query. Of course, as you know, the server may interpret the definition of "resource" quite differently and so may still return cached content. Since it sounds like you're controlling the server in this case, you have full control there, so no issues. Either way, on the client side, there would be no client-cached values used for the first call to each of the 3 URLs in your examples.
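A quick way to see the cache keys in question, assuming httplib2's module-level urlnorm() helper (the function mentioned above): the fourth element of the tuple it returns is the defragmented URI, query string included, so each of the three URLs maps to its own cache entry.
import httplib2

for url in ("http://test.com/list/",
            "http://test.com/list?limit=10",
            "http://test.com/list?limit=50"):
    scheme, authority, request_uri, defrag_uri = httplib2.urlnorm(url)
    print defrag_uri  # each prints a distinct cache key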
Does the Python library httplib2 cache URIs with GET strings?
In the following example, what is cached correctly? Is there a Vary-Header I have to set server-side for the GET string? import httplib2 h = httplib2.Http(".cache") resp, content = h.request("http://test.com/list/") resp, content = h.request("http://test.com/list?limit=10") resp, content = h.request("http://test.com/list?limit=50")
[ "httplib2 uses the full URI for the cache key, so in this case each of the URLs you have in your example will be cached separately by the client.\nFor the chapter and verse from the __init__.py file for httplib2, if you would like proof, have a look at call to the cache on around line 1000:\ncachekey = defrag_uri\ncached_value = self.cache.get(cachekey)\n\nThe defrag_uri is defined by the function urlnorm (line 170ish) and includes the scheme, authority, path, and query.\nOf course, as you know, the server may interpret the definition of \"resource\" quite differently and, so, may still return cached content. Since it sounds like you're controlling the server in this case, you have full control there, so no issues. Either way, on the client side, there would be no client-cached values used for the first call to each of the 3 URLs in your examples.\n" ]
[ 4 ]
[]
[]
[ "caching", "httplib2", "python" ]
stackoverflow_0000717700_caching_httplib2_python.txt
Q: Self-repairing Python threads I've created a web spider that accesses both a US and EU server. The US and EU servers are the same data structure, but have different data inside them, and I want to collate it all. In order to be nice to the server, there's a wait time between each request. As the program is exactly the same, in order to speed up processing, I've threaded the program so it can access the EU and US servers simultaneously. This crawling will take on the order of weeks, not days. There will be exceptions, and while I've tried to handle everything inside the program, it's likely something weird might crop up. To be truly defensive about this, I'd like to catch a thread that's failed, log the error and restart it. Worst case I lose a handful of pages out of thousands, which is better than having a thread fail and lose 50% of speed. However, from what I've read, Python threads die silently. Does anyone have any ideas? class AccessServer(threading.Thread): def __init__(self, site): threading.Thread.__init__(self) self.site = site self.qm = QueueManager.QueueManager(site) def run(self): # Do stuff here def main(): us_thread = AccessServer(u"us") us_thread.start() eu_thread = AccessServer(u"eu") eu_thread.start() A: Just use a try: ... except: ... block in the run method. If something weird happens that causes the thread to fail, it's highly likely that an error will be thrown somewhere in your code (as opposed to in the threading subsystem itself); this way you can catch it, log it, and restart the thread. It's your call whether you want to actually shut down the thread and start a new one, or just enclose the try/except block in a while loop so the same thread keeps running. Another solution, if you suspect that something really weird might happen which you can't detect through Python's error handling mechanism, would be to start a monitor thread that periodically checks to see that the other threads are running properly. A: Can you have e.g. the main thread function as a monitoring thread? E.g. require that the worker thread regularly update some thread-specific timestamp value, and if a thread hasn't updated its timestamp within a suitable time, have the monitoring thread kill it and restart? Or, see this answer
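A minimal sketch of the try/except-and-restart idea from the first answer, assuming the AccessServer shape from the question; do_crawl() is a hypothetical stand-in for the real per-site work.
import logging
import threading

class AccessServer(threading.Thread):
    def __init__(self, site):
        threading.Thread.__init__(self)
        self.site = site

    def run(self):
        while True:
            try:
                do_crawl(self.site)  # hypothetical: one full crawl pass
                return               # finished cleanly, let the thread end
            except Exception:
                # log the traceback, then loop around and restart the work
                logging.exception("worker for %s failed; restarting", self.site)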
Self-repairing Python threads
I've created a web spider that accesses both a US and EU server. The US and EU servers are the same data structure, but have different data inside them, and I want to collate it all. In order to be nice to the server, there's a wait time between each request. As the program is exactly the same, in order to speed up processing, I've threaded the program so it can access the EU and US servers simultaneously. This crawling will take on the order of weeks, not days. There will be exceptions, and while I've tried to handle everything inside the program, it's likely something weird might crop up. To be truly defensive about this, I'd like to catch a thread that's failed, log the error and restart it. Worst case I lose a handful of pages out of thousands, which is better than having a thread fail and lose 50% of speed. However, from what I've read, Python threads die silently. Does anyone have any ideas? class AccessServer(threading.Thread): def __init__(self, site): threading.Thread.__init__(self) self.site = site self.qm = QueueManager.QueueManager(site) def run(self): # Do stuff here def main(): us_thread = AccessServer(u"us") us_thread.start() eu_thread = AccessServer(u"eu") eu_thread.start()
[ "Just use a try: ... except: ... block in the run method. If something weird happens that causes the thread to fail, it's highly likely that an error will be thrown somewhere in your code (as opposed to in the threading subsystem itself); this way you can catch it, log it, and restart the thread. It's your call whether you want to actually shut down the thread and start a new one, or just enclose the try/except block in a while loop so the same thread keeps running.\nAnother solution, if you suspect that something really weird might happen which you can't detect through Python's error handling mechanism, would be to start a monitor thread that periodically checks to see that the other threads are running properly.\n", "Can you have e.g. the main thread function as a monitoring thread? E.g. require that the worker thread regularly update some thread-specific timestamp value, and if a thread hasn't updated it's timestamp within a suitable time, have the monitoring thread kill it and restart?\nOr, see this answer\n" ]
[ 8, 3 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0000717831_multithreading_python.txt
Q: Locating (file/line) the invocation of a constructor in python I'm implementing an event system: Various pieces of code will post events to a central place where they will be distributed to all listeners. The main problem with this approach: When an exception happens during event processing, I can't tell anymore who posted the event. So my question is: Is there an efficient way to figure out who called the constructor and remember that in Python 2.5? More info: The simple way would be to use the traceback module to get a copy of the stack in the constructor and remember that. Alas, I only need this information rarely, so I'm wondering if there was a way to cache this or whether I could just remember the topmost stack frame and work my way back in the rare case that I actually need this data. A: import sys def get_caller(ext=False): """ Get the caller of the caller of this function. If the optional ext parameter is given, returns the line's text as well. """ f=sys._getframe(2) s=(f.f_code.co_filename, f.f_lineno) del f if ext: import linecache s=(s[0], s[1], linecache.getline(s[0], s[1])) return s def post_event(e): caller=get_caller(True) print "Event %r posted from %r"%(e, caller) ## Testing the functions. def q(): post_event("baz") post_event("foo") print "Hello!" q() results in Event 'foo' posted from ('getcaller.py', 20, 'post_event("foo")\n') Hello! Event 'baz' posted from ('getcaller.py', 17, '\tpost_event("baz")\n') A: You could simply store a reference to the caller's frame object, but this is probably a bad idea. This keeps the frames alive, and also holds references to all the local variables used, so it may impact performance if they happen to be using large chunks of memory, and could have even worse effects if they're relying (incorrectly) on finalization to destroy resources like locks and filehandles when they go out of scope. That means you'd need to hold a string representation of the stacktrace instead, which is not ideal for your purposes (need to actually do some processing to get it, even though it's rarely needed). Unfortunately, there doesn't seem to be much way around this, though you could consider disabling it until you set some configuration option. That way you'd get better performance for the common case, but could still enable the setting when trying to diagnose failures. If your calling function alone (or some small number of parent callers) is enough to distinguish the route (ie. the trace is always the same when called via func1(), and there's no func2 -> func1() vs func3() -> func1() to distinguish between ), you could maintain a hash based on filename and line number of the calling frame (or the last two calling frames etc). However this probably doesn't match your situation, and where it doesn't, you'd end up with bogus stack traces. Note that if you do want the caller's frame, using sys._getframe(depth) is probably a better way to get it. A: I'd think that the simplest method would be to add an ID field to the event(s) in question, and to have each event source (by whatever definition of 'event source' is appropriate here) provide a unique identifier when it posts the event. You do get slightly more overhead, but probably not enough to be problematic, and I'd suspect that you'll find other ways that knowing an event's source would be helpful. A: It may be worthwhile to attach a hash of the stack trace to the constructor of your event and to store the actual contents in memcache with the hash as the key.
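A minimal sketch of the cheap middle ground, assuming CPython's sys._getframe: store only the immediate caller's (file, line) pair at construction time, and build a full traceback on demand only in the rare failure case.
import sys

class Event(object):
    def __init__(self, payload):
        self.payload = payload
        f = sys._getframe(1)
        self.origin = (f.f_code.co_filename, f.f_lineno)  # cheap to keep
        del f  # don't hold the frame (and all its locals) alive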
Locating (file/line) the invocation of a constructor in python
I'm implementing an event system: Various pieces of code will post events to a central place where they will be distributed to all listeners. The main problem with this approach: When an exception happens during event processing, I can't tell anymore who posted the event. So my question is: Is there an efficient way to figure out who called the constructor and remember that in Python 2.5? More info: The simple way would be to use the traceback module to get a copy of the stack in the constructor and remember that. Alas, I only need this information rarely, so I'm wondering if there was a way to cache this or whether I could just remember the topmost stack frame and work my way back in the rare case that I actually need this data.
[ "import sys\ndef get_caller(ext=False):\n \"\"\" Get the caller of the caller of this function. If the optional ext parameter is given, returns the line's text as well. \"\"\"\n f=sys._getframe(2)\n s=(f.f_code.co_filename, f.f_lineno)\n del f\n if ext:\n import linecache\n s=(s[0], s[1], linecache.getline(s[0], s[1]))\n\n return s\n\ndef post_event(e):\n caller=get_caller(True)\n print \"Event %r posted from %r\"%(e, caller)\n\n## Testing the functions.\n\ndef q():\n post_event(\"baz\")\n\npost_event(\"foo\")\nprint \"Hello!\"\nq()\n\nresults in \nEvent 'foo' posted from ('getcaller.py', 20, 'post_event(\"foo\")\\n')\nHello!\nEvent 'baz' posted from ('getcaller.py', 17, '\\tpost_event(\"baz\")\\n')\n\n", "You could simply store a reference to the caller's frame object, but this is probably a bad idea. This keeps the frames alive, and also holds references to all the local variables used, so it may impact performance if they happen to be using large chunks of memory, and could have even worse effects if they're relying (incorrectly) on finalization to destroy resources like locks and filehandles when they go out of scope.\nThat means you'd need to hold a string representation of the stacktrace instead, which is not ideal for your purposes (need to actually do some processing to get it, even though it's rarely needed). Unfortunately, there doesn't seem to be much way around this, though you could consider disabling it until you set some configuration option. That way you'd get better performance for the common case, but could still enable the setting when trying to diagnose failures.\nIf your calling function alone (or some small number of parent callers) is enough to distinguish the route (ie. the trace is always the same when called via func1(), and there's no func2 -> func1() vs func3() -> func1() to distinguish between ), you could maintain a hash based on filename and line number of the calling frame (or the last two calling frames etc). However this probably doesn't match your situation, and where it doesn't, you'd end up with bogus stack traces.\nNote that if you do want the caller's frame, using inspect.currentframe(depth) is probably a better way to get it.\n", "I'd think that the simplest method would be to add an ID field to the event(s) in question, and to have each event source (by whatever definition of 'event source' is appropriate here) provide a unique identifier when it posts the event. You do get slightly more overhead, but probably not enough to be problematic, and I'd suspect that you'll find other ways that knowing an event's source would be helpful. \n", "It may be worthwhile to attach a hash of the stack trace to the constructor of your event and to store the actual contents in memcache with the hash as the key.\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "event_handling", "exception", "python", "stack_trace" ]
stackoverflow_0000716795_event_handling_exception_python_stack_trace.txt
Q: Unable to put Python code to Joomla I have Python code from Google App Engine. I need to implement it in Joomla. How can you implement Python code in Joomla? [edit after the 1st answer] It is enough for me that I can put the code in a module position. A: Joomla is PHP based whereas Google App Engine is Python based (and tends to use Django). Your best bet is to either find an alternative to the python code, find someone to translate it, or learn python and manually translate it. There's no straight python to php conversion though. EDIT: but if you really want to be adventurous, you can try the Python in PHP project which is still early phase and looks to be someone's side project: http://www.csh.rit.edu/~jon/projects/pip/
Unable to put Python code to Joomla
I have Python code from Google App Engine. I need to implement it in Joomla. How can you implement Python code in Joomla? [edit after the 1st answer] It is enough for me that I can put the code in a module position.
[ "Joomla is PHP based whereas Google App Engine is Python based (and tends to use Django). Your best bet is to either find an alternative to the python code, find someone to translate it, or learn python and manually translate it. \nThere's no straight python to php conversion though.\nEDIT: but if you really want to be adventurous, you can try the Python in PHP project which is still early phase and looks to be someone's side project: http://www.csh.rit.edu/~jon/projects/pip/\n" ]
[ 2 ]
[]
[]
[ "joomla", "python" ]
stackoverflow_0000718498_joomla_python.txt
Q: How to execute an arbitrary shell script and pass multiple variables via Python? I am building an application plugin in Python which allows users to arbitrarily extend the application with simple scripts (working under Mac OS X). Executing Python scripts is easy, but some users are more comfortable with languages like Ruby. From what I've read, I can easily execute Ruby scripts (or other arbitrary shell scripts) using subprocess and capture their output with a pipe; that's not a problem, and there are lots of examples online. However, I need to provide the script with multiple variables (say a chunk of text along with some simple boolean information about the text the script is modifying) and I'm having trouble figuring out the best way to do this. Does anyone have a suggestion for the best way to accomplish this? My goal is to provide scripts with the information they need with the least code needed to access that information within the script. Thanks in advance! A: See http://docs.python.org/library/subprocess.html#using-the-subprocess-module args should be a string, or a sequence of program arguments. The program to execute is normally the first item in the args sequence or the string if a string is given, but can be explicitly set by using the executable argument. So, your call can look like this p = subprocess.Popen( args=["script.sh", "-p", p_opt, "-v", v_opt, arg1, arg2] ) You've put arbitrary Python values into the args of subprocess.Popen. A: If you are going to be launching multiple scripts and need to pass the same information to each of them, you might consider using the environment (warning, I don't know Python, so the following code most likely sucks): #!/usr/bin/python import os try: #if environment is set if os.environ["child"] == "1": print os.environ["string"] except: #set environment os.environ["child"] = "1" os.environ["string"] = "hello world" #run this program 5 times as a child process for n in range(1, 5): os.system(__file__) A: One approach you could take would be to use json as a protocol between parent and child scripts, since json support is readily available in many languages, and is fairly expressive. You could also use a pipe to send an arbitrary amount of data down to the child process, assuming your requirements allow you to have the child scripts read from standard input. For example, the parent could do something like (Python 2.6 shown): #!/usr/bin/env python import json import subprocess data_for_child = { 'text' : 'Twas brillig...', 'flag1' : False, 'flag2' : True } child = subprocess.Popen(["./childscript"], stdin=subprocess.PIPE) json.dump(data_for_child, child.stdin) And here is a sketch of a child script: #!/usr/bin/env python # Imagine this were written in a different language. import json import sys d = json.load(sys.stdin) print d In this trivial example, the output is: $ ./foo12.py {u'text': u'Twas brillig...', u'flag2': True, u'flag1': False}
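Besides argv and stdin, the process environment is a third channel that works for any scripting language; a minimal sketch, assuming the child (Ruby, shell, ...) reads ENV/getenv, with made-up variable names for illustration.
import os
import subprocess

child_env = os.environ.copy()
child_env["PLUGIN_TEXT"] = "Twas brillig..."   # hypothetical variable names
child_env["PLUGIN_FLAG1"] = "0"                # booleans must be encoded as strings

subprocess.Popen(["./childscript.rb"], env=child_env).wait()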
How to execute an arbitrary shell script and pass multiple variables via Python?
I am building an application plugin in Python which allows users to arbitrarily extend the application with simple scripts (working under Mac OS X). Executing Python scripts is easy, but some users are more comfortable with languages like Ruby. From what I've read, I can easily execute Ruby scripts (or other arbitrary shell scripts) using subprocess and capture their output with a pipe; that's not a problem, and there are lots of examples online. However, I need to provide the script with multiple variables (say a chunk of text along with some simple boolean information about the text the script is modifying) and I'm having trouble figuring out the best way to do this. Does anyone have a suggestion for the best way to accomplish this? My goal is to provide scripts with the information they need with the least code needed to access that information within the script. Thanks in advance!
[ "See http://docs.python.org/library/subprocess.html#using-the-subprocess-module\n\nargs should be a string, or a sequence\n of program arguments. The program to\n execute is normally the first item in\n the args sequence or the string if a\n string is given, but can be explicitly\n set by using the executable argument.\n\nSo, your call can look like this\np = subprocess.Popen( args=[\"script.sh\", \"-p\", p_opt, \"-v\", v_opt, arg1, arg2] )\n\nYou've put arbitrary Python values into the args of subprocess.Popen.\n", "If you are going to be launching multiple scripts and need to pass the same information to each of them, you might consider using the environment (warning, I don't know Python, so the following code most likely sucks):\n#!/usr/bin/python \n\nimport os\n\ntry:\n #if environment is set\n if os.environ[\"child\"] == \"1\":\n print os.environ[\"string\"]\nexcept:\n #set environment\n os.environ[\"child\"] = \"1\"\n os.environ[\"string\"] = \"hello world\"\n\n #run this program 5 times as a child process\n for n in range(1, 5):\n os.system(__file__)\n\n", "One approach you could take would be to use json as a protocol between parent and child scripts, since json support is readily available in many languages, and is fairly expressive. You could also use a pipe to send an arbitrary amount of data down to the child process, assuming your requirements allow you to have the child scripts read from standard input. For example, the parent could do something like (Python 2.6 shown):\n#!/usr/bin/env python\n\nimport json\nimport subprocess\n\ndata_for_child = {\n 'text' : 'Twas brillig...',\n 'flag1' : False,\n 'flag2' : True\n}\n\nchild = subprocess.Popen([\"./childscript\"], stdin=subprocess.PIPE)\njson.dump(data_for_child, child.stdin)\n\nAnd here is a sketch of a child script:\n#!/usr/bin/env python\n# Imagine this were written in a different language.\n\nimport json\nimport sys\n\nd = json.load(sys.stdin)\nprint d\n\nIn this trivial example, the output is:\n\n$ ./foo12.py\n{u'text': u'Twas brillig...', u'flag2': True, u'flag1': False}\n\n" ]
[ 4, 1, 0 ]
[]
[]
[ "environment_variables", "macos", "python", "ruby", "shell" ]
stackoverflow_0000714360_environment_variables_macos_python_ruby_shell.txt
Q: How do I print outputs from calls to subprocess.Popen(...) in a loop? I wrote a script to run a command-line program with different input arguments and grab a certain line from the output. I have the following running in a loop: p1 = subprocess.Popen(["program", args], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False) p2 = subprocess.Popen(["grep", phrase], stdin=p1.stdout, stdout=subprocess.PIPE, shell=False) p1.wait() p2.wait() p = str(p2.stdout.readlines()) print 'p is ', p One problem is that there is only output after the loop is finished running. I want to print something each time a process is finished. How can I do that? Also, I want to have the option of displaying the output of p1. But I can't grab it with p1.stdout.readlines() without breaking p2. How can I do this? I was thinking that I could just not make the call to grep, store the output of p1 and search for the phrase, but there's a lot of output, so this way seems pretty inefficient. Any suggestions would be greatly appreciated. Thanks! A: Here's a quick hack that worked for me on Linux. It might work for you, depending on your requirements. It uses tee as a filter that, if you pass print_all to your script, will duplicate an extra copy to /dev/tty (hey, I said it was a hack): #!/usr/bin/env python import subprocess import sys phrase = "bar" if len(sys.argv) > 1 and sys.argv[1] == 'print_all': tee_args = ['tee', '/dev/tty'] else: tee_args = ['tee'] p1 = subprocess.Popen(["./program"], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False) p2 = subprocess.Popen(tee_args, stdin=p1.stdout, stdout=subprocess.PIPE, shell=False) p3 = subprocess.Popen(["grep", phrase], stdin=p2.stdout, stdout=subprocess.PIPE, shell=False) p1.wait() p2.wait() p3.wait() p = str(p3.stdout.readlines()) print 'p is ', p With the following as contents for program: #!/bin/sh echo foo echo bar echo baz Example output: $ ./foo13.py p is ['bar\n'] $ ./foo13.py print_all foo bar baz p is ['bar\n'] A: Try calling sys.stdout.flush() after each print statement.
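A minimal sketch that drops the grep process entirely: reading p1's stdout line by line lets the script both echo and filter in one pass, and the per-run print happens as soon as each process finishes. The runs list and show_all flag are hypothetical stand-ins.
import subprocess

for args in runs:  # runs: list of argument lists, one per invocation
    p1 = subprocess.Popen(["program"] + args,
                          stderr=subprocess.STDOUT,
                          stdout=subprocess.PIPE)
    matches = []
    for line in p1.stdout:
        if show_all:
            print line,       # optionally display all of p1's output
        if phrase in line:
            matches.append(line)
    p1.wait()
    print 'p is ', matches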
How do I print outputs from calls to subprocess.Popen(...) in a loop?
I wrote a script to run a command-line program with different input arguments and grab a certain line from the output. I have the following running in a loop: p1 = subprocess.Popen(["program", args], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False) p2 = subprocess.Popen(["grep", phrase], stdin=p1.stdout, stdout=subprocess.PIPE, shell=False) p1.wait() p2.wait() p = str(p2.stdout.readlines()) print 'p is ', p One problem is that there is only output after the loop is finished running. I want to print something each time a process is finished. How can I do that? Also, I want to have the option of displaying the output of p1. But I can't grab it with p1.stdout.readlines() without breaking p2. How can I do this? I was thinking that I could just not make the call to grep, store the output of p1 and search for the phrase, but there's a lot of output, so this way seems pretty inefficient. Any suggestions would be greatly appreciated. Thanks!
[ "Here's a quick hack that worked for me on Linux. It might work for you, depending on your requirements. It uses tee as a filter that, if you pass print_all to your script, will duplicate an extra copy to /dev/tty (hey, I said it was a hack):\n#!/usr/bin/env python\n\nimport subprocess\nimport sys\n\nphrase = \"bar\"\nif len(sys.argv) > 1 and sys.argv[1] == 'print_all':\n tee_args = ['tee', '/dev/tty']\nelse:\n tee_args = ['tee']\n\np1 = subprocess.Popen([\"./program\"], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False)\np2 = subprocess.Popen(tee_args, stdin=p1.stdout, stdout=subprocess.PIPE, shell=False)\np3 = subprocess.Popen([\"grep\", phrase], stdin=p2.stdout, stdout=subprocess.PIPE, shell=False)\np1.wait()\np2.wait()\np3.wait()\np = str(p3.stdout.readlines())\nprint 'p is ', p\n\nWith the following as contents for program:\n#!/bin/sh\n\necho foo\necho bar\necho baz\n\nExample output:\n\n$ ./foo13.py\np is ['bar\\n']\n$ ./foo13.py print_all\nfoo\nbar\nbaz\np is ['bar\\n']\n\n", "Try calling sys.stdout.flush() after each print statement.\n" ]
[ 2, 1 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0000714879_python_subprocess.txt
Q: Changing the title of a Tab in wx.Notebook I'm experimenting with wxPython. I have a tabbed interface (notebook), and each tab is basically a file list view (yes, I'm trying to make a file manager). The file list inherits from wx.ListCtrl, and the tabbed interface inherits from wx.Notebook. I'm just starting out, and I have it so double-clicking on a folder will cd into that folder, but I want to also change the title of the tab. How do I do that? I have the object that represents the file list and the title I want to set it to, [ EDIT Notebook.SetPageText() takes a number, so I can't pass the tab object directly to it ] my current approach is to cycle through the tabs until one of them matches my tab: for tab_id in range(self.GetPageCount()): if self.GetPage(tab_id) == tab: self.SetPageText(tab_id, title) break This seems rather naive, though. Isn't there a smarter approach? A: I don't know wxPython, but I assume it wraps all the methods of the C++ classes. There is wxNotebook::GetSelection() which returns wxNOT_FOUND or the index of the selected page, which can then be used to call wxNotebook::SetPageText(). Or use wxNotebook::GetPage() with this index to check whether it is equal to tab. A: I think doing something like this helps : notebook.get_tab_label(notebook.get_nth_page(your_page_number)).set_text("Your text") If you want to have a reference to the current tab always, you must connect the "switch-page" signal, and save the page in a variable.
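Since the double-click necessarily happens in the currently selected tab, a minimal sketch assuming the standard wx.Notebook API avoids the scan entirely:
import wx

def set_current_tab_title(notebook, title):
    page_id = notebook.GetSelection()
    if page_id != wx.NOT_FOUND:
        notebook.SetPageText(page_id, title)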
Changing the title of a Tab in wx.Notebook
I'm experimenting with wxPython. I have a tabbed interface (notebook), and each tab is basically a file list view (yes, I'm trying to make a file manager). The file list inherits from wx.ListCtrl, and the tabbed interface inherits from wx.Notebook. I'm just starting out, and I have it so double-clicking on a folder will cd into that folder, but I want to also change the title of the tab. How do I do that? I have the object that represents the file list and the title I want to set it to, [ EDIT Notebook.SetPageText() takes a number, so I can't pass the tab object directly to it ] my current approach is to cycle through the tabs until one of them matches my tab: for tab_id in range(self.GetPageCount()): if self.GetPage(tab_id) == tab: self.SetPageText(tab_id, title) break This seems rather naive, though. Isn't there a smarter approach?
[ "I don't know wxPython, but I assume it wraps all the methods of the C++ classes.\nThere is wxNotebook::GetSelection() which returns wxNOT_FOUND or the index of the selected page, which can then be used to call wxNotebook::SetPageText().\nOr use wxNotebook::GetPage() with this index to check whether it is equal to tab.\n", "I think doing something like this helps :\n\nnotebook.get_tab_label(notebook.get_nth_page(your_page_number)).set_text(\"Your text\")\n\nIf you want to have a reference to the current tab always, you must connect the \"switch-page\" signal, and save the page in a variable.\n" ]
[ 2, 0 ]
[ "As .GetPage returns a wx.Window, I think tab.Label = title should work.\n" ]
[ -1 ]
[ "python", "tabbed_interface", "wxpython", "wxwidgets" ]
stackoverflow_0000718546_python_tabbed_interface_wxpython_wxwidgets.txt
Q: Btrieve without Pervasive? Is there any library available to query Btrieve databases without buying something from Pervasive? I'm looking to code in C# or Python. A: As far as I know that is not possible. It is not an open source database, so writing drivers for it is really hard. A: If you download one of the trial versions, you can get/install the odbc client and connect that way. In our version of pervasive (older version) on the server where the database is installed, you can also find this client install. A: This depends a lot on the version of Btrieve. I've been working with btrieve for a long time and have found that the best API for the old 6.15 version was in Pascal. That having been said, there was definitely a C API around as well. Pervasive have recently released a 6.15 ultimate patch. Using this and the C API should allow you to work effectively with older btrieve databases. It is possible, for instance, to build new modules for Python using C.
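For the adventurous, a heavily hedged sketch of calling the Btrieve C API from Python via ctypes, assuming the classic wbtrv32.dll with its documented BTRCALL entry point; the operation code, buffer sizes, and argument order here are illustrative only and should be checked against the Pervasive SDK headers.
import ctypes

B_OPEN = 0  # Btrieve "Open" operation code (check your SDK)
btrieve = ctypes.WinDLL("wbtrv32.dll")

pos_block = ctypes.create_string_buffer(128)
data_buf = ctypes.create_string_buffer(255)
data_len = ctypes.c_ushort(255)
key_buf = ctypes.create_string_buffer("C:\\data\\file.btr", 255)

status = btrieve.BTRCALL(B_OPEN, pos_block, data_buf,
                         ctypes.byref(data_len), key_buf, 255, 0)
print "open status:", status  # 0 means success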
Btrieve without Pervasive?
Is there any library available to query Btrieve databases without buying something from Pervasive? I'm looking to code in C# or Python.
[ "As far as I know that is not possible. It is not an open source database, so writing drivers for it is really hard.\n", "If you download one of the trial versions, you can get/install the odbc client and connect that way.\nIn our version of pervasive (older version) on the server where the database is installed, you can also find this client install.\n", "This depends a lot on the version of Btrieve. I've been working with btrieve for a long time and have found that the best API for the old 6.15 version was in pascal. That having been said there was definately a C api around as well.\nPervasive have recently released a 6.15 ultimate patch. Using this and the C api should allow you to work effectively with older btrieve databases. It is possible for instance to build new modules for python using C. \n" ]
[ 2, 2, 0 ]
[]
[]
[ "btrieve", "c#", "python" ]
stackoverflow_0000080215_btrieve_c#_python.txt
Q: can pylons + authkit ignore particular responses with 401 status? I am writing a Pylons app, and I am using AuthKit for authentication/authorization. If a user is not logged in and hits a page that requires authorization, AuthKit swallows the 401 (not authenticated) response and redirects to a login page. This is great for the web interface, but not great for our web services. When an unauthenticated user hits a protected web service, we'd like to set the status of the response to 401. Is there a way to slip some 401 responses past AuthKit in Pylons, but not all? Thanks, Matt A: It looks like the authkit.setup.intercept option is designed to do precisely this.
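For reference, a hedged example of what the intercept option might look like in the Pylons ini file; the option name and value format should be verified against your AuthKit version, and note this setting is global, so mixed behavior (intercept for the web UI, pass-through for web services) may still need a check of AuthKit's docs or a small custom middleware.
# development.ini (assumed option name: authkit.setup.intercept)
# Listing only 403 here means 401 responses are no longer swallowed
# and redirected to the login page, so web-service clients see the raw 401.
authkit.setup.intercept = 403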
can pylons + authkit ignore particular responses with 401 status?
I am writing a Pylons app, and I am using AuthKit for authentication/authorization. If a user is not logged in and hits a page that requires authorization, AuthKit swallows the 401 (not authenticated) response and redirects to a login page. This is great for the web interface, but not great for our web services. When an unauthenticated user hits a protected web service, we'd like to set the status of the response to 401. Is there a way to slip some 401 responses past AuthKit in Pylons, but not all? Thanks, Matt
[ "It looks like the authkit.setup.intercept option is designed to do precisely this.\n" ]
[ 1 ]
[]
[]
[ "authkit", "http", "pylons", "python" ]
stackoverflow_0000717776_authkit_http_pylons_python.txt
Q: How to recover a broken python "cPickle" dump? I am using rss2email for converting a number of RSS feeds into mail for easier consumption. That is, I was using it because it broke in a horrible way today: On every run, it only gives me this backtrace: Traceback (most recent call last): File "/usr/share/rss2email/rss2email.py", line 740, in <module> elif action == "list": list() File "/usr/share/rss2email/rss2email.py", line 681, in list feeds, feedfileObject = load(lock=0) File "/usr/share/rss2email/rss2email.py", line 422, in load feeds = pickle.load(feedfileObject) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) The only helpful fact that I have been able to construct from this backtrace is that the file ~/.rss2email/feeds.dat in which rss2email keeps all its configuration and runtime state is somehow broken. Apparently, rss2email reads its state and dumps it back using cPickle on every run. I have even found the line containing that 'sxOYAAuyzSx0WqN3BVPjE+6pgPU'string mentioned above in the giant (>12MB) feeds.dat file. To my untrained eye, the dump does not appear to be truncated or otherwise damaged. What approaches could I try in order to reconstruct the file? The Python version is 2.5.4 on a Debian/unstable system. EDIT Peter Gibson and J.F. Sebastian have suggested directly loading from the pickle file and I had tried that before. Apparently, a Feed class that is defined in rss2email.py is needed, so here's my script: #!/usr/bin/python import sys # import pickle import cPickle as pickle sys.path.insert(0,"/usr/share/rss2email") from rss2email import Feed feedfile = open("feeds.dat", 'rb') feeds = pickle.load(feedfile) The "plain" pickle variant produces the following traceback: Traceback (most recent call last): File "./r2e-rescue.py", line 8, in <module> feeds = pickle.load(feedfile) File "/usr/lib/python2.5/pickle.py", line 1370, in load return Unpickler(file).load() File "/usr/lib/python2.5/pickle.py", line 858, in load dispatch[key](self) File "/usr/lib/python2.5/pickle.py", line 1133, in load_reduce value = func(*args) TypeError: 'str' object is not callable The cPickle variant produces essentially the same thing as calling r2e itself: Traceback (most recent call last): File "./r2e-rescue.py", line 10, in <module> feeds = pickle.load(feedfile) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) EDIT 2 Following J.F. Sebastian's suggestion around putting "printf debugging" into Feed.__setstate__ into my test script, these are the last few lines before Python bails out. u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html': u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html'}, 'to': None, 'url': 'http://arstechnica.com/'} Traceback (most recent call last): File "./r2e-rescue.py", line 23, in ? feeds = pickle.load(feedfile) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) The same thing happens on a Debian/etch box using python 2.4.4-2. A: How I solved my problem A Perl port of pickle.py Following J.F. Sebastian's comment about how simple the pickle format is, I went out to port parts of pickle.py to Perl. 
A couple of quick regular expressions would have been a faster way to access my data, but I felt that the hack value and an opportunity to learn more about Python would be worth it. Plus, I still feel much more comfortable using (and debugging code in) Perl than Python. Most of the porting effort (simple types, tuples, lists, dictionaries) was very straightforward. Perl's and Python's different notions of classes and objects have been the only issue so far where a bit more than simple translation of idioms was needed. The result is a module called Pickle::Parse which after a bit of polishing will be published on CPAN. A module called Python::Serialise::Pickle existed on CPAN, but I found its parsing capabilities lacking: It spews debugging output all over the place and doesn't seem to support classes/objects. Parsing, transforming data, detecting actual errors in the stream Based upon Pickle::Parse, I tried to parse the feeds.dat file. After a few iterations of fixing trivial bugs in my parsing code, I got an error message that was strikingly similar to pickle.py's original object not callable error message: Can't use string ("sxOYAAuyzSx0WqN3BVPjE+6pgPU") as a subroutine ref while "strict refs" in use at lib/Pickle/Parse.pm line 489, <STDIN> line 187102. Ha! Now we're at a point where it's quite likely that the actual data stream is broken. Plus, we get an idea where it is broken. It turned out that the first line of the following sequence was wrong: g7724 ((I2009 I3 I19 I1 I19 I31 I3 I78 I0 t(dtRp62457 Position 7724 in the "memo" pointed to that string "sxOYAAuyzSx0WqN3BVPjE+6pgPU". From similar records earlier in the stream, it was clear that a time.struct_time object was needed instead. All later records shared this wrong pointer. With a simple search/replace operation, it was trivial to fix this. I find it ironic that I found the source of the error by accident through Perl's feature that tells the user its position in the input data stream when it dies. Conclusion I will move away from rss2email as soon as I find time to automatically transform its pickled configuration/state mess to another tool's format. pickle.py needs more meaningful error messages that tell the user about the position of the data stream (not the position in its own code) where things go wrong. Porting parts of pickle.py to Perl was fun and, in the end, rewarding. A: Have you tried manually loading the feeds.dat file using both cPickle and pickle? If the output differs it might hint at the error. Something like (from your home directory): import cPickle, pickle f = open('.rss2email/feeds.dat', 'r') obj1 = cPickle.load(f) obj2 = pickle.load(f) (you might need to open in binary mode 'rb' if rss2email doesn't pickle in ascii). Pete Edit: The fact that cPickle and pickle give the same error suggests that the feeds.dat file is the problem. Probably a change in the Feed class between versions of rss2email as suggested in the Ubuntu bug J.F. Sebastian links to. A: Sounds like the internals of cPickle are getting tangled up. 
A: 'sxOYAAuyzSx0WqN3BVPjE+6pgPU' is most probably unrelated to the pickle's problem Post an error traceback for (to determine what class defines the attribute that can't be called (the one that leads to the TypeError): python -c "import pickle; pickle.load(open('feeds.dat'))" EDIT: Add the following to your code and run (redirect stderr to file then use 'tail -2' on it to print last 2 lines): from pprint import pprint def setstate(self, dict_): pprint(dict_, stream=sys.stderr, depth=None) self.__dict__.update(dict_) Feed.__setstate__ = setstate If the above doesn't yield an interesting output then use general troubleshooting tactics: Confirm that 'feeds.dat' is the problem: backup ~/.rss2email directory install rss2email into virtualenv/pip sandbox (or use zc.buildout) to isolate the environment (make sure you are using feedparser.py from the trunk). add couple of feeds, add feeds until 'feeds.dat' size is greater than the current. Run some tests. try old 'feeds.dat' try new 'feeds.dat' on existing rss2email installation See r2e bails out with TypeError bug on Ubuntu.
How to recover a broken python "cPickle" dump?
I am using rss2email for converting a number of RSS feeds into mail for easier consumption. That is, I was using it because it broke in a horrible way today: On every run, it only gives me this backtrace: Traceback (most recent call last): File "/usr/share/rss2email/rss2email.py", line 740, in <module> elif action == "list": list() File "/usr/share/rss2email/rss2email.py", line 681, in list feeds, feedfileObject = load(lock=0) File "/usr/share/rss2email/rss2email.py", line 422, in load feeds = pickle.load(feedfileObject) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) The only helpful fact that I have been able to construct from this backtrace is that the file ~/.rss2email/feeds.dat in which rss2email keeps all its configuration and runtime state is somehow broken. Apparently, rss2email reads its state and dumps it back using cPickle on every run. I have even found the line containing that 'sxOYAAuyzSx0WqN3BVPjE+6pgPU'string mentioned above in the giant (>12MB) feeds.dat file. To my untrained eye, the dump does not appear to be truncated or otherwise damaged. What approaches could I try in order to reconstruct the file? The Python version is 2.5.4 on a Debian/unstable system. EDIT Peter Gibson and J.F. Sebastian have suggested directly loading from the pickle file and I had tried that before. Apparently, a Feed class that is defined in rss2email.py is needed, so here's my script: #!/usr/bin/python import sys # import pickle import cPickle as pickle sys.path.insert(0,"/usr/share/rss2email") from rss2email import Feed feedfile = open("feeds.dat", 'rb') feeds = pickle.load(feedfile) The "plain" pickle variant produces the following traceback: Traceback (most recent call last): File "./r2e-rescue.py", line 8, in <module> feeds = pickle.load(feedfile) File "/usr/lib/python2.5/pickle.py", line 1370, in load return Unpickler(file).load() File "/usr/lib/python2.5/pickle.py", line 858, in load dispatch[key](self) File "/usr/lib/python2.5/pickle.py", line 1133, in load_reduce value = func(*args) TypeError: 'str' object is not callable The cPickle variant produces essentially the same thing as calling r2e itself: Traceback (most recent call last): File "./r2e-rescue.py", line 10, in <module> feeds = pickle.load(feedfile) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) EDIT 2 Following J.F. Sebastian's suggestion around putting "printf debugging" into Feed.__setstate__ into my test script, these are the last few lines before Python bails out. u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html': u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html'}, 'to': None, 'url': 'http://arstechnica.com/'} Traceback (most recent call last): File "./r2e-rescue.py", line 23, in ? feeds = pickle.load(feedfile) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) The same thing happens on a Debian/etch box using python 2.4.4-2.
[ "How I solved my problem\nA Perl port of pickle.py\nFollowing J.F. Sebastian's comment about how simple the pickle\nformat is, I went out to port parts of pickle.py to Perl. A couple\nof quick regular expressions would have been a faster way to access my\ndata, but I felt that the hack value and an opportunity to learn more\nabout Python would be be worth it. Plus, I still feel much more\ncomfortable using (and debugging code in) Perl than Python.\nMost of the porting effort (simple types, tuples, lists, dictionaries)\nwent very straightforward. Perl's and Python's different notions of\nclasses and objects has been the only issue so far where a bit more\nthan simple translation of idioms was needed. The result is a module\ncalled Pickle::Parse which after a bit of polishing will be\npublished on CPAN.\nA module called Python::Serialise::Pickle existed on CPAN, but I\nfound its parsing capabilities lacking: It spews debugging output all\nover the place and doesn't seem to support classes/objects.\nParsing, transforming data, detecting actual errors in the stream\nBased upon Pickle::Parse, I tried to parse the feeds.dat file.\nAfter a few iteration of fixing trivial bugs in my parsing code, I got\nan error message that was strikingly similar to pickle.py's original\nobject not callable error message:\nCan't use string (\"sxOYAAuyzSx0WqN3BVPjE+6pgPU\") as a subroutine\nref while \"strict refs\" in use at lib/Pickle/Parse.pm line 489,\n<STDIN> line 187102.\n\nHa! Now we're at a point where it's quite likely that the actual data\nstream is broken. Plus, we get an idea where it is broken.\nIt turned out that the first line of the following sequence was wrong:\ng7724\n((I2009\nI3\nI19\nI1\nI19\nI31\nI3\nI78\nI0\nt(dtRp62457\n\nPosition 7724 in the \"memo\" pointed to that string\n\"sxOYAAuyzSx0WqN3BVPjE+6pgPU\". From similar records earlier in the\nstream, it was clear that a time.struct_time object was needed\ninstead. All later records shared this wrong pointer. With a simple\nsearch/replace operation, it was trivial to fix this.\nI find it ironic that I found the source of the error by accident\nthrough Perl's feature that tells the user its position in the input\ndata stream when it dies.\nConclusion\n\nI will move away from rss2email as soon as I find time to\nautomatically transform its pickled configuration/state mess to\nanother tool's format.\npickle.py needs more meaningful error messages that tell the user\nabout the position of the data stream (not the poision in its own\ncode) where things go wrong.\nPorting parts pickle.py to Perl was fun and, in the end, rewarding.\n\n", "Have you tried manually loading the feeds.dat file using both cPickle and pickle? If the output differs it might hint at the error.\nSomething like (from your home directory):\nimport cPickle, pickle\nf = open('.rss2email/feeds.dat', 'r')\nobj1 = cPickle.load(f)\nobj2 = pickle.load(f)\n\n(you might need to open in binary mode 'rb' if rss2email doesn't pickle in ascii).\nPete\nEdit: The fact that cPickle and pickle give the same error suggests that the feeds.dat file is the problem. Probably a change in the Feed class between versions of rss2email as suggested in the Ubuntu bug J.F. Sebastian links to.\n", "Sounds like the internals of cPickle are getting tangled up. 
This thread (http://bytes.com/groups/python/565085-cpickle-problems) looks like it might have a clue..\n", "\n'sxOYAAuyzSx0WqN3BVPjE+6pgPU' is most probably unrelated to the pickle's problem\nPost an error traceback for (to determine what class defines the attribute that can't be called (the one that leads to the TypeError):\npython -c \"import pickle; pickle.load(open('feeds.dat'))\"\n\n\nEDIT:\nAdd the following to your code and run (redirect stderr to file then use 'tail -2' on it to print last 2 lines): \nfrom pprint import pprint\ndef setstate(self, dict_):\n pprint(dict_, stream=sys.stderr, depth=None)\n self.__dict__.update(dict_)\nFeed.__setstate__ = setstate\n\nIf the above doesn't yield an interesting output then use general troubleshooting tactics:\nConfirm that 'feeds.dat' is the problem:\n\nbackup ~/.rss2email directory\ninstall rss2email into virtualenv/pip sandbox (or use zc.buildout) to isolate the environment (make sure you are using feedparser.py from the trunk). \nadd couple of feeds, add feeds until 'feeds.dat' size is greater than the current. Run some tests.\ntry old 'feeds.dat'\ntry new 'feeds.dat' on existing rss2email installation \n\nSee r2e bails out with TypeError bug on Ubuntu.\n" ]
[ 6, 3, 2, 2 ]
[]
[]
[ "pickle", "python", "rss" ]
stackoverflow_0000664444_pickle_python_rss.txt
Q: Significant figures in the decimal module So I've decided to try to solve my physics homework by writing some python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example this handles significant figures properly: from decimal import Decimal >>> Decimal('1.0') + Decimal('2.0') Decimal("3.0") But this doesn't: >>> Decimal('1.00') / Decimal('3.00') Decimal("0.3333333333333333333333333333") So two questions: Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math? Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity. A: Changing the decimal working precision to 2 digits is not a good idea, unless you absolutely only are going to perform a single operation. You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits. Once you've arrived at the final result, what you probably want is quantize: >>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001")) Decimal("0.333") You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding. You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic? A: Decimals won't throw away decimal places like that. If you really want to limit precision to 2 d.p. then try decimal.getcontext().prec=2 EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 dps). A: Just out of curiosity...is it necessary to use the decimal module? Why not floating point with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (like when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try: def lround(x,leadingDigits=0): """Return x either as 'print' would show it (the default) or rounded to the specified digit as counted from the leftmost non-zero digit of the number, e.g. lround(0.00326,2) --> 0.0033 """ assert leadingDigits>=0 if leadingDigits==0: return float(str(x)) #just give it back like 'print' would give it return float('%.*e' % (int(leadingDigits),x)) #give it back as rounded by the %e format The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange: >>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4)) (0.33000000000000002, '0.33', '0.3333') A: Decimal defaults to 28 places of precision. The only way to limit the number of digits it returns is by altering the precision. A: What's wrong with floating point? >>> "%8.2e"% ( 1.0/3.0 ) '3.33e-01' It was designed for scientific-style calculations with a limited number of significant digits.
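A runnable sketch of the quantize approach above, rounding only the final result to a chosen number of significant figures (the helper name round_sig is made up for illustration; scaleb needs Python 2.6 or newer):

from decimal import Decimal

def round_sig(d, figures):
    # d.adjusted() is the exponent of the leading digit, so the last digit
    # we keep has exponent d.adjusted() - figures + 1; quantize rounds there.
    return d.quantize(Decimal(1).scaleb(d.adjusted() - figures + 1))

result = Decimal('1.00') / Decimal('3.00')   # full 28-digit working precision
print(round_sig(result, 3))                  # 0.333
print(round_sig(Decimal('12345'), 2))        # 1.2E+4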
Significant figures in the decimal module
So I've decided to try to solve my physics homework by writing some python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example this handles significant figures properly: from decimal import Decimal >>> Decimal('1.0') + Decimal('2.0') Decimal("3.0") But this doesn't: >>> Decimal('1.00') / Decimal('3.00') Decimal("0.3333333333333333333333333333") So two questions: Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math? Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
[ "Changing the decimal working precision to 2 digits is not a good idea, unless you absolutely only are going to perform a single operation.\nYou should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits.\nOnce you've arrived at the final result, what you probably want is quantize:\n\n>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal(\"0.001\"))\nDecimal(\"0.333\")\n\nYou have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.\nYou may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?\n", "Decimals won't throw away decimal places like that. If you really want to limit precision to 2 d.p. then try\ndecimal.getcontext().prec=2\n\nEDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 dps).\n", "Just out of curiosity...is it necessary to use the decimal module? Why not floating point with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (like when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try:\ndef lround(x,leadingDigits=0): \n \"\"\"Return x either as 'print' would show it (the default) \n or rounded to the specified digit as counted from the leftmost \n non-zero digit of the number, e.g. lround(0.00326,2) --> 0.0033\n \"\"\" \n assert leadingDigits>=0 \n if leadingDigits==0: \n return float(str(x)) #just give it back like 'print' would give it\n return float('%.*e' % (int(leadingDigits),x)) #give it back as rounded by the %e format \n\nThe numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:\n>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))\n(0.33000000000000002, '0.33', '0.3333')\n\n", "Decimal defaults to 28 places of precision.\nThe only way to limit the number of digits it returns is by altering the precision.\n", "What's wrong with floating point? \n>>> \"%8.2e\"% ( 1.0/3.0 )\n'3.33e-01'\n\nIt was designed for scientific-style calculations with a limited number of significant digits.\n" ]
[ 8, 3, 1, 0, 0 ]
[ "If I undertand Decimal correctly, the \"precision\" is the number of digits after the decimal point in decimal notation.\nYou seem to want something else: the number of significant digits. That is one more than the number of digits after the decimal point in scientific notation.\nI would be interested in learning about a Python module that does significant-digits-aware floating point point computations.\n" ]
[ -1 ]
[ "decimal", "floating_point", "physics", "python", "significance" ]
stackoverflow_0000144218_decimal_floating_point_physics_python_significance.txt
Q: Python 3: formatting zip module arguments correctly (newb) Please tell me why this code fails. I am new and I don't understand why my formatting of my zip arguments is incorrect. Since I am unsure how to communicate best, I will show the code, the error message, and what I believe is happening. #!c:\python30 # Filename: backup_ver5.py import os import time import zipfile source = r'"C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_list"' target_dir = r'C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_dir' today = target_dir + os.sep + time.strftime('%Y%m%d') now = time.strftime('%H%M%S') comment = input('Enter a comment --> ') if len(comment) == 0: target = '"' + today + os.sep + now + '.zip' + '"' else: target = '"' + today + os.sep + now + '_' + \ comment.replace(' ', '_') + '.zip' + '"' if not os.path.exists(today): os.mkdir(today) print('Successfully created directory', today) print(target) print(source) zip_command = zipfile.ZipFile(target, 'w').write(source) if os.system(zip_command) == 0: print('Successful backup to', target) else: print('Backup FAILED') I receive this error message: Enter a comment --> "C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_dir\20090405\134614.zip" "C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_list" Traceback (most recent call last): File "C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_ver5.py", line 32, in <module> zip_command = zipfile.ZipFile(target, 'w').write(source) File "c:\python30\lib\zipfile.py", line 683, in __init__ self.fp = io.open(file, modeDict[mode]) File "C:\Python30\lib\io.py", line 222, in open closefd) File "C:\Python30\lib\io.py", line 615, in __init__ _fileio._FileIO.__init__(self, name, mode, closefd) IOError: [Errno 22] Invalid argument: '"C:\\Documents and Settings\\Benjamin Serrato\\My Documents\\python\\backup_dir\\20090405\\134614.zip"' The two print tests before zip_command is assigned tell me that the two strings are being passed to zipfile.ZipFile() correctly. The traceback tells me I am not calling zipfile.ZipFile() correctly. The error in __init__ makes me more sure of this. Last, the problem seems to be that I am causing my path string to have double backslashes. I can't follow why the IOError shows that. I used this site to figure out how to use zipfile. zipfile is a class, I import it at the start of the program then I use it and its primary method. I pass the file I would like to write to zipfile.ZipFile('file to write', 'mode') and set the program to open an object set to be writable. Then the command writes the file to the destination folder with a sub-method like so, "".zipfile('files to write'). Where am I going wrong? A: Looks like it's because you have an extra pair of double quotes around your pathname. Remove the double quotes, and see if it works. A: To answer your other question: the double backslashes are there because they are escaped.
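A corrected sketch of the script above, keeping the question's paths; the key change is building target without the embedded double quotes (open() receives the name verbatim, so literal quote characters make an invalid Windows filename) and letting ZipFile do the writing instead of os.system():

import os
import time
import zipfile

source = r'C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_list'
target_dir = r'C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_dir'

today = target_dir + os.sep + time.strftime('%Y%m%d')
now = time.strftime('%H%M%S')
if not os.path.exists(today):
    os.mkdir(today)
    print('Successfully created directory', today)

target = today + os.sep + now + '.zip'   # no quote characters in the name
zf = zipfile.ZipFile(target, 'w')
if os.path.isdir(source):
    # ZipFile.write() does not recurse, so walk a directory by hand:
    for dirpath, dirnames, filenames in os.walk(source):
        for name in filenames:
            zf.write(os.path.join(dirpath, name))
else:
    zf.write(source)
zf.close()
print('Successful backup to', target)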
Python 3: formatting zip module arguments correctly (newb)
Please tell me why this code fails. I am new and I don't understand why my formatting of my zip arguments is incorrect. Since I am unsure how to communicate best, I will show the code, the error message, and what I believe is happening. #!c:\python30 # Filename: backup_ver5.py import os import time import zipfile source = r'"C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_list"' target_dir = r'C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_dir' today = target_dir + os.sep + time.strftime('%Y%m%d') now = time.strftime('%H%M%S') comment = input('Enter a comment --> ') if len(comment) == 0: target = '"' + today + os.sep + now + '.zip' + '"' else: target = '"' + today + os.sep + now + '_' + \ comment.replace(' ', '_') + '.zip' + '"' if not os.path.exists(today): os.mkdir(today) print('Successfully created directory', today) print(target) print(source) zip_command = zipfile.ZipFile(target, 'w').write(source) if os.system(zip_command) == 0: print('Successful backup to', target) else: print('Backup FAILED') I receive this error message: Enter a comment --> "C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_dir\20090405\134614.zip" "C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_list" Traceback (most recent call last): File "C:\Documents and Settings\Benjamin Serrato\My Documents\python\backup_ver5.py", line 32, in <module> zip_command = zipfile.ZipFile(target, 'w').write(source) File "c:\python30\lib\zipfile.py", line 683, in __init__ self.fp = io.open(file, modeDict[mode]) File "C:\Python30\lib\io.py", line 222, in open closefd) File "C:\Python30\lib\io.py", line 615, in __init__ _fileio._FileIO.__init__(self, name, mode, closefd) IOError: [Errno 22] Invalid argument: '"C:\\Documents and Settings\\Benjamin Serrato\\My Documents\\python\\backup_dir\\20090405\\134614.zip"' The two print tests before zip_command is assigned tell me that the two strings are being passed to zipfile.ZipFile() correctly. The traceback tells me I am not calling zipfile.ZipFile() correctly. The error in __init__ makes me more sure of this. Last, the problem seems to be that I am causing my path string to have double backslashes. I can't follow why the IOError shows that. I used this site to figure out how to use zipfile. zipfile is a class, I import it at the start of the program then I use it and its primary method. I pass the file I would like to write to zipfile.ZipFile('file to write', 'mode') and set the program to open an object set to be writable. Then the command writes the file to the destination folder with a sub-method like so, "".zipfile('files to write'). Where am I going wrong?
[ "Looks like it's because you have an extra pair of double quotes around your pathname. Remove the double quotes, and see if it works.\n", "To answer your other question: the double backslashes are there because they are escaped.\n" ]
[ 3, 1 ]
[]
[]
[ "python", "zip" ]
stackoverflow_0000719503_python_zip.txt
Q: App Engine - problem trying to set a Model property value I'm pretty new to app engine, and I'm trying to set a bit of text into the app engine database for the first time. Here's my code: def setVenueIntroText(text): venue_obj = db.GqlQuery("SELECT * FROM Venue").get() venue_obj.intro_text = text # Works if I comment out db.put(venue_obj) # These two lines This throws some sort of exception - I can't tell what it is though because of my django 1.02 setup. Ok, I gave the code in the answer below a go, and it worked after deleting my datastores, but I'm still not satisfied. Here's an update: I've modified my code to something that looks like it makes sense to me. The getVenueIntroText doesn't complain when I call it - I haven't got any items in the database btw. When I call setVenueIntroText, it doesn't like what I'm doing for some reason - if someone knows the reason why, I'd really like to know :) Here's my latest attempt: def getVenueIntroText(): venue_info = "" venue_obj = db.GqlQuery("SELECT * FROM Venue").get() if venue_obj is not None: venue_info = venue_obj.intro_text return venue_info def setVenueIntroText(text): venue_obj = db.GqlQuery("SELECT * FROM Venue").get() if venue_obj is None: venue_obj = Venue(intro_text = text) else: venue_obj.intro_text = text db.put(venue_obj) A: I think this should work: def setVenueIntroText(text): query = db.GqlQuery("SELECT * FROM Venue") for result in query: result.intro_text = text db.put(result) A: I think the main problem was that I couldn't see the error messages - really stupid of me, I forgot to put DEBUG = True in my settings.py It turns out I needed a multiline=True in my StringProperty Django is catching my exceptions for me.
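For reference, a sketch of the Venue model implied by the code above, with the multiline fix mentioned in the second answer (this assumes the old google.appengine.ext.db API in use at the time):

from google.appengine.ext import db

class Venue(db.Model):
    # StringProperty rejects values that contain newlines unless
    # multiline=True is passed, which is what made the put() fail here.
    intro_text = db.StringProperty(multiline=True)
    # db.TextProperty() would be the usual choice for longer text:
    # it is not indexed, but it has no 500-byte length limit.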
App Engine - problem trying to set a Model property value
I'm pretty new to app engine, and I'm trying to set a bit of text into the app engine database for the first time. Here's my code: def setVenueIntroText(text): venue_obj = db.GqlQuery("SELECT * FROM Venue").get() venue_obj.intro_text = text # Works if I comment out db.put(venue_obj) # These two lines This throws some sort of exception - I can't tell what it is though because of my django 1.02 setup. Ok, I gave the code in the answer below a go, and it worked after deleting my datastores, but I'm still not satisfied. Here's an update: I've modified my code to something that looks like it makes sense to me. The getVenueIntroText doesn't complain when I call it - I haven't got any items in the database btw. When I call setVenueIntroText, it doesn't like what I'm doing for some reason - if someone knows the reason why, I'd really like to know :) Here's my latest attempt: def getVenueIntroText(): venue_info = "" venue_obj = db.GqlQuery("SELECT * FROM Venue").get() if venue_obj is not None: venue_info = venue_obj.intro_text return venue_info def setVenueIntroText(text): venue_obj = db.GqlQuery("SELECT * FROM Venue").get() if venue_obj is None: venue_obj = Venue(intro_text = text) else: venue_obj.intro_text = text db.put(venue_obj)
[ "I think this should work:\ndef setVenueIntroText(text):\n query = db.GqlQuery(\"SELECT * FROM Venue\")\n for result in query:\n result.intro_text = text\n db.put(result)\n\n", "I think the main problem was that I couldn't see the error messages - really stupid of me, I forgot to put DEBUG = True in my settings.py\nIt turns out I needed a multiline=True in my StringProperty\nDjango is catching my exceptions for me.\n" ]
[ 1, 1 ]
[]
[]
[ "bigtable", "google_app_engine", "python" ]
stackoverflow_0000718553_bigtable_google_app_engine_python.txt
Q: How can you make a vote-up-down button like in Stackoverflow? Problems how to make an Ajax buttons (upward and downward arrows) such that the number can increase or decrease how to save the action af an user to an variable NumberOfVotesOfQuestionID I am not sure whether I should use database or not for the variable. However, I know that there is an easier way too to save the number of votes. How can you solve those problems? [edit] The server-side programming language is Python. A: This is a dirty/untested theoretical implementation using jQuery/Django. We're going to assume the voting up and down is for questions/answers like on this site, but that can obviously be adjusted to your real life use case. The template <div id="answer_595" class="answer"> <img src="vote_up.png" class="vote up"> <div class="score">0</div> <img src="vote_down.png" class="vote down"> Blah blah blah this is my answer. </div> <div id="answer_596" class="answer"> <img src="vote_up.png" class="vote up"> <div class="score">0</div> <img src="vote_down.png" class="vote down"> Blah blah blah this is my other answer. </div> Javascript $(function() { $('div.answer img.vote').click(function() { var id = $(this).parents('div.answer').attr('id').split('_')[1]; var vote_type = $(this).hasClass('up') ? 'up' : 'down'; if($(this).hasClass('selected')) { $.post('/vote/', {id: id, type: vote_type}, function(json) { if(json.success == 'success') { $('#answer_' + id) .find('img.' + vote_type); .attr('src', 'vote_' + vote_type + '_selected.png') .addClass('selected'); $('div.score', '#answer_' + id).html(json.score); } }); } else { $.post('/remove_vote/', {id: id, type: vote_type}, function(json) { if(json.success == 'success') { $('#answer_' + id) .find('img.' + vote_type); .attr('src', 'vote_' + vote_type + '.png') .removeClass('selected'); $('div.score', '#answer_' + id).html(json.score); } }); } }); }); Django views def vote(request): if request.method == 'POST': try: answer = Answer.objects.get(pk=request.POST['id']) except Answer.DoesNotExist: return HttpResponse("{'success': 'false'}") try: vote = Vote.objects.get(answer=answer, user=request.user) except Vote.DoesNotExist: pass else: return HttpResponse("{'success': 'false'}") if request.POST['type'] == 'up': answer.score = answer.score + 1 else: answer.score = answer.score - 1 answer.save() Vote.objects.create(answer=answer, user=request.user, type=request.POST['type']) return HttpResponse("{'success':'true', 'score':" + answer.score + "}") else: raise Http404('What are you doing here?') def remove_vote(request): if request.method == 'POST': try: answer = Answer.objects.get(pk=request.POST['id']) except Answer.DoesNotExist: return HttpResponse("{'success': 'false'}") try: vote = Vote.objects.get(answer=answer, user=request.user) except Vote.DoesNotExist: return HttpResponse("{'success': 'false'}") else: vote.delete() if request.POST['type'] == 'up': answer.score = answer.score - 1 else: answer.score = answer.score + 1 answer.save() return HttpResponse("{'success':'true', 'score':" + answer.score + "}") else: raise Http404('What are you doing here?') Yikes. When I started answering this question I didn't mean to write this much but I got carried away a little bit. You're still missing an initial request to get all the votes when the page is first loaded and such, but I'll leave that as an exercise to the reader. 
Anyhow, if you are in fact using Django and are interested in a more tested/real implementation of the Stackoverflow voting, I suggest you check out the source code for cnprog.com, a Chinese clone of Stackoverflow written in Python/Django. They released their code and it is pretty decent. A: A couple of points no one has mentioned: You don't want to use GET when changing the state of your database. Otherwise I could put an image on my site with src="http://stackoverflow.com/question_555/vote/up/answer_3/". You also need CSRF (Cross Site Request Forgery) protection. You must record who makes each vote to avoid people voting more than once for a particular question. Whether this is by IP address or userid. A: You create the buttons, which can be links or images or whatever. Now hook a JavaScript function up to each button's click event. On clicking, the function fires and Sends a request to the server code that says, more or less, +1 or -1. Server code takes over. This will vary wildly depending on what framework you use (or don't) and a bunch of other things. Code connects to the database and runs a query to +1 or -1 the score. How this happens will vary wildly depending on your database design, but it'll be something like UPDATE posts SET score=score+1 WHERE score_id={{insert id here}};. Depending on what the database says, the server returns a success code or a failure code as the AJAX request response. Response gets sent to AJAX, asynchronously. The JS response function updates the score if it's a success code, displays an error if it's a failure. You can store the code in a variable, but this is complicated and depends on how well you know the semantics of your code's runtime environment. It eventually needs to be pushed to persistent storage anyway, so using the database 100% is a good initial solution. When the time for optimizing performance comes, there is enough software in the world for caching database queries to make you feel woozy, so it's not that big a deal. A: I think the answers for these questions are too long for stackoverflow. I'd recommend storing the votes in a database. You don't mention a server-side programming language. Please give us some more information. This might help you get started
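The Django views in the first answer reference Answer and Vote models that are never shown; a minimal sketch of what they would have to look like, with field names inferred from the view code (treat them as assumptions, written in pre-2.0 ForeignKey style to match the era):

from django.db import models
from django.contrib.auth.models import User

class Answer(models.Model):
    body = models.TextField()               # assumed; the views never read it
    score = models.IntegerField(default=0)  # read and written as answer.score

class Vote(models.Model):
    answer = models.ForeignKey(Answer)      # looked up as Vote.objects.get(answer=..., user=...)
    user = models.ForeignKey(User)
    type = models.CharField(max_length=4)   # 'up' or 'down', from request.POST['type']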
How can you make a vote-up-down button like in Stackoverflow?
Problems: how to make Ajax buttons (upward and downward arrows) such that the number can increase or decrease; how to save the action of a user to a variable NumberOfVotesOfQuestionID. I am not sure whether I should use a database or not for the variable. However, I know that there is also an easier way to save the number of votes. How can you solve these problems? [edit] The server-side programming language is Python.
[ "This is a dirty/untested theoretical implementation using jQuery/Django.\nWe're going to assume the voting up and down is for questions/answers like on this site, but that can obviously be adjusted to your real life use case.\nThe template\n<div id=\"answer_595\" class=\"answer\">\n <img src=\"vote_up.png\" class=\"vote up\">\n <div class=\"score\">0</div>\n <img src=\"vote_down.png\" class=\"vote down\">\n Blah blah blah this is my answer.\n</div>\n\n<div id=\"answer_596\" class=\"answer\">\n <img src=\"vote_up.png\" class=\"vote up\">\n <div class=\"score\">0</div>\n <img src=\"vote_down.png\" class=\"vote down\">\n Blah blah blah this is my other answer.\n</div>\n\nJavascript\n$(function() {\n $('div.answer img.vote').click(function() {\n var id = $(this).parents('div.answer').attr('id').split('_')[1];\n var vote_type = $(this).hasClass('up') ? 'up' : 'down';\n if($(this).hasClass('selected')) {\n $.post('/vote/', {id: id, type: vote_type}, function(json) {\n if(json.success == 'success') {\n $('#answer_' + id)\n .find('img.' + vote_type);\n .attr('src', 'vote_' + vote_type + '_selected.png')\n .addClass('selected');\n $('div.score', '#answer_' + id).html(json.score);\n }\n });\n } else {\n $.post('/remove_vote/', {id: id, type: vote_type}, function(json) {\n if(json.success == 'success') {\n $('#answer_' + id)\n .find('img.' + vote_type);\n .attr('src', 'vote_' + vote_type + '.png')\n .removeClass('selected');\n $('div.score', '#answer_' + id).html(json.score);\n }\n }); \n }\n });\n});\n\nDjango views\ndef vote(request):\n if request.method == 'POST':\n try:\n answer = Answer.objects.get(pk=request.POST['id'])\n except Answer.DoesNotExist:\n return HttpResponse(\"{'success': 'false'}\")\n\n try:\n vote = Vote.objects.get(answer=answer, user=request.user)\n except Vote.DoesNotExist:\n pass\n else:\n return HttpResponse(\"{'success': 'false'}\")\n\n if request.POST['type'] == 'up':\n answer.score = answer.score + 1\n else:\n answer.score = answer.score - 1\n\n answer.save()\n\n Vote.objects.create(answer=answer,\n user=request.user,\n type=request.POST['type'])\n\n return HttpResponse(\"{'success':'true', 'score':\" + answer.score + \"}\")\n else:\n raise Http404('What are you doing here?')\n\ndef remove_vote(request):\n if request.method == 'POST':\n try:\n answer = Answer.objects.get(pk=request.POST['id'])\n except Answer.DoesNotExist:\n return HttpResponse(\"{'success': 'false'}\")\n\n try:\n vote = Vote.objects.get(answer=answer, user=request.user)\n except Vote.DoesNotExist:\n return HttpResponse(\"{'success': 'false'}\")\n else:\n vote.delete()\n\n if request.POST['type'] == 'up':\n answer.score = answer.score - 1\n else:\n answer.score = answer.score + 1\n\n answer.save()\n\n return HttpResponse(\"{'success':'true', 'score':\" + answer.score + \"}\")\n else:\n raise Http404('What are you doing here?')\n\nYikes. When I started answering this question I didn't mean to write this much but I got carried away a little bit. You're still missing an initial request to get all the votes when the page is first loaded and such, but I'll leave that as an exercise to the reader. Anyhow, if you are in fact using Django and are interested in a more tested/real implemention of the Stackoverflow voting, I suggest you check out the source code for cnprog.com, a Chinese clone of Stackoverflow written in Python/Django. They released their code and it is pretty decent.\n", "A couple of points no one has mentioned:\n\nYou don't want to use GET when changing the state of your database. 
Otherwise I could put an image on my site with src=\"http://stackoverflow.com/question_555/vote/up/answer_3/\".\nYou also need CSRF (Cross Site Request Forgery) protection.\nYou must record who makes each vote to avoid people voting more than once for a particular question. Whether this is by IP address or userid.\n\n", "You create the buttons, which can be links or images or whatever. Now hook a JavaScript function up to each button's click event. On clicking, the function fires and\n\nSends a request to the server code that says, more or less, +1 or -1.\nServer code takes over. This will vary wildly depending on what framework you use (or don't) and a bunch of other things.\nCode connects to the database and runs a query to +1 or -1 the score. How this happens will vary wildly depending on your database design, but it'll be something like UPDATE posts SET score=score+1 WHERE score_id={{insert id here}};.\nDepending on what the database says, the server returns a success code or a failure code as the AJAX request response.\nResponse gets sent to AJAX, asynchronously.\nThe JS response function updates the score if it's a success code, displays an error if it's a failure.\n\nYou can store the code in a variable, but this is complicated and depends on how well you know the semantics of your code's runtime environment. It eventually needs to be pushed to persistent storage anyway, so using the database 100% is a good initial solution. When the time for optimizing performance comes, there is enough software in the world for caching database queries to make you feel woozy, so it's not that big a deal.\n", "I think the answers for these questions are too long for stackoverflow.\nI'd recommend storing the votes in a database.\nYou don't mention a server-side programming language.\nPlease give us some more information.\nThis might help you get started\n" ]
[ 60, 8, 3, 0 ]
[]
[]
[ "ajax", "html", "javascript", "python" ]
stackoverflow_0000719194_ajax_html_javascript_python.txt
Q: How do I set Session name with Cherrypy? In PHP I would do it like this: session_name("special_session_name"); So how do I do it with Cherrypy? I just need to find the exact equivalent for it. PHP manual page: http://fi2.php.net/session_name A: Reading the docs and the source, you most probably have to set "tools.sessions.name" in your config file: cherrypy.config.update({'tools.sessions.name': "special_session_name"})
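A slightly fuller sketch of that config in context; sessions also have to be switched on before the name takes effect (the handler below is made up for illustration):

import cherrypy

cherrypy.config.update({
    'tools.sessions.on': True,                      # enable the sessions tool
    'tools.sessions.name': 'special_session_name',  # cookie name, like PHP's session_name()
})

class Root(object):
    def index(self):
        cherrypy.session['userid'] = 42             # stored under the renamed cookie
        return 'session cookie set'
    index.exposed = True

cherrypy.quickstart(Root())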
How do I set Session name with Cherrypy?
In PHP I would do it like this: session_name("special_session_name"); So how do I do it with Cherrypy? I just need to find the exact equivalent for it. PHP manual page: http://fi2.php.net/session_name
[ "Reading the docs and the source most probably you have to set \"tools.sessions.name\" in your config file:\ncherrypy.config.update({'tools.sessions.name': \"special_session_name\"})\n\n" ]
[ 3 ]
[]
[]
[ "cherrypy", "php", "python", "session" ]
stackoverflow_0000719710_cherrypy_php_python_session.txt
Q: How can domain aliases be set up using Django? I am working on creating a website in Django which consists of two parts: the website itself, and the forum. They will both be on separate domains, i.e. example.com and exampleforum.com. How can this be done in Django, when the forum and main site are part of the same instance? A: This is done at the web server level. Django doesn't care about the domain on the incoming request. If you are using Apache just put multiple ServerAlias directives inside your virtual host like this: <VirtualHost *:80> ServerName www.mydomain.com ServerAlias mydomain.com ServerAlias forum.mydomain.com ... other directives as needed ... </VirtualHost> This tells Apache to direct requests for all of those domains into the same instance. For nginx your config file would look something like: server { listen 80; server_name www.mydomain.com mydomain.com forum.mydomain.com; ... other directives as needed ... }
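If the two domains also need different behaviour inside the same Django instance, one common approach (a hedged sketch; the module names are invented) is middleware that picks a URLconf from the Host header, which Django honours via request.urlconf:

# myproject/hostmiddleware.py (hypothetical module; add it to MIDDLEWARE_CLASSES)

class HostMiddleware(object):
    """Serve the forum URLconf on the forum domain, the default elsewhere."""
    def process_request(self, request):
        host = request.get_host().lower()
        if host.startswith('exampleforum'):
            request.urlconf = 'myproject.forum_urls'  # hypothetical URLconf module
        # returning None lets normal processing continue with ROOT_URLCONF
        return None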
How can domain aliases be set up using Django?
I am working on creating a website in Django which consists of two parts: the website itself, and the forum. They will both be on separate domains, i.e. example.com and exampleforum.com. How can this be done in Django, when the forum and main site are part of the same instance?
[ "This is done at the web server level. Django doesn't care about the domain on the incoming request.\nIf you are using Apache just put multiple ServerAlias directives inside your virtual host like this:\n<VirtualHost *:80>\n ServerName www.mydomain.com\n ServerAlias mydomain.com\n ServerAlias forum.mydomain.com\n ... other directives as needed ...\n</VirtualHost>\n\nThis tells Apache to direct requests for all of those domains into the same instance.\nFor nginx your config file would look something like:\nserver {\n listen 80;\n server_name www.mydomain.com mydomain.com forum.mydomain.com;\n ... other directives as needed ...\n}\n\n" ]
[ 4 ]
[]
[]
[ "cross_domain", "django", "dns", "python" ]
stackoverflow_0000719771_cross_domain_django_dns_python.txt
Q: Reversible version of compile() in Python I'm trying to make a function in Python that does the equivalent of compile(), but also lets me get the original string back. Let's call those two functions comp() and decomp(), for disambiguation purposes. That is, a = comp("2 * (3 + x)", "", "eval") eval(a, dict(x=3)) # => 12 decomp(a) # => "2 * (3 + x)" The returned string does not have to be identical ("2*(3+x)" would be acceptable), but it needs to be basically the same ("2 * x + 6" would not be). Here's what I've tried that doesn't work: Setting an attribute on the code object returned by compile. You can't set custom attributes on code objects. Subclassing code so I can add the attribute. code cannot be subclassed. Setting up a WeakKeyDictionary mapping code objects to the original strings. code objects cannot be weakly referenced. Here's what does work, with issues: Passing in the original code string for the filename to compile(). However, I lose the ability to actually keep a filename there, which I'd like to also do. Keeping a real dictionary mapping code objects to strings. This leaks memory, although since compiling is rare, it's acceptable for my current use case. I could probably run the keys through gc.get_referrers periodically and kill off dead ones, if I had to. A: This is kind of a weird problem, and my initial reaction is that you might be better off doing something else entirely to accomplish whatever it is you're trying to do. But it's still an interesting question, so here's my crack at it: I make the original code source an unused constant of the code object. import types def comp(source, *args, **kwargs): """Compile the source string; takes the same arguments as builtin compile(). Modifies the resulting code object so that the original source can be recovered with decomp().""" c = compile(source, *args, **kwargs) return types.CodeType(c.co_argcount, c.co_nlocals, c.co_stacksize, c.co_flags, c.co_code, c.co_consts + (source,), c.co_names, c.co_varnames, c.co_filename, c.co_name, c.co_firstlineno, c.co_lnotab, c.co_freevars, c.co_cellvars) def decomp(code_object): return code_object.co_consts[-1] >>> a = comp('2 * (3 + x)', '', 'eval') >>> eval(a, dict(x=3)) 12 >>> decomp(a) '2 * (3 + x)' A: My approach would be to wrap the code object in another object. Something like this: class CodeObjectEnhanced(object): def __init__(self, *args): self.compiled = compile(*args) self.original = args[0] def comp(*args): return CodeObjectEnhanced(*args) Then whenever you need the code object itself, you use a.compiled, and whenever you need the original, you use a.original. There may be a way to get eval to treat the new class as though it were an ordinary code object, redirecting the function to call eval(self.compiled) instead. One advantage of this is the original string is deleted at the same time as the code object. However you do this, I think storing the original string is probably the best approach, as you end up with the exact string you used, not just an approximation.
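On Python 3.8 and newer (well after this question was asked), code objects grew a replace() method, so the same trick can be written without spelling out every CodeType field; a sketch:

def comp(source, filename, mode):
    c = compile(source, filename, mode)
    # Stash the source as an extra, unused constant; co_consts is a tuple,
    # so replace() builds a fresh code object rather than mutating c.
    return c.replace(co_consts=c.co_consts + (source,))

def decomp(code_object):
    return code_object.co_consts[-1]

a = comp('2 * (3 + x)', '<string>', 'eval')
print(eval(a, {'x': 3}))   # 12
print(decomp(a))           # 2 * (3 + x)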
Reversible version of compile() in Python
I'm trying to make a function in Python that does the equivalent of compile(), but also lets me get the original string back. Let's call those two functions comp() and decomp(), for disambiguation purposes. That is, a = comp("2 * (3 + x)", "", "eval") eval(a, dict(x=3)) # => 12 decomp(a) # => "2 * (3 + x)" The returned string does not have to be identical ("2*(3+x)" would be acceptable), but it needs to be basically the same ("2 * x + 6" would not be). Here's what I've tried that doesn't work: Setting an attribute on the code object returned by compile. You can't set custom attributes on code objects. Subclassing code so I can add the attribute. code cannot be subclassed. Setting up a WeakKeyDictionary mapping code objects to the original strings. code objects cannot be weakly referenced. Here's what does work, with issues: Passing in the original code string for the filename to compile(). However, I lose the ability to actually keep a filename there, which I'd like to also do. Keeping a real dictionary mapping code objects to strings. This leaks memory, although since compiling is rare, it's acceptable for my current use case. I could probably run the keys through gc.get_referrers periodically and kill off dead ones, if I had to.
[ "This is kind of a weird problem, and my initial reaction is that you might be better off doing something else entirely to accomplish whatever it is you're trying to do. But it's still an interesting question, so here's my crack at it: I make the original code source an unused constant of the code object.\nimport types\n\ndef comp(source, *args, **kwargs):\n \"\"\"Compile the source string; takes the same arguments as builtin compile().\n Modifies the resulting code object so that the original source can be\n recovered with decomp().\"\"\"\n c = compile(source, *args, **kwargs)\n return types.CodeType(c.co_argcount, c.co_nlocals, c.co_stacksize, \n c.co_flags, c.co_code, c.co_consts + (source,), c.co_names, \n c.co_varnames, c.co_filename, c.co_name, c.co_firstlineno, \n c.co_lnotab, c.co_freevars, c.co_cellvars)\n\ndef decomp(code_object):\n return code_object.co_consts[-1]\n\n\n>>> a = comp('2 * (3 + x)', '', 'eval')\n>>> eval(a, dict(x=3))\n12\n>>> decomp(a)\n'2 * (3 + x)'\n\n", "My approach would be to wrap the code object in another object. Something like this:\nclass CodeObjectEnhanced(object):\n def __init__(self, *args):\n self.compiled = compile(*args)\n self.original = args[0]\ndef comp(*args):\n return CodeObjectEnhanced(*args)\n\nThen whenever you need the code object itself, you use a.compiled, and whenever you need the original, you use a.original. There may be a way to get eval to treat the new class as though it were an ordinary code object, redirecting the function to call eval(self.compiled) instead.\nOne advantage of this is the original string is deleted at the same time as the code object. However you do this, I think storing the original string is probably the best approach, as you end up with the exact string you used, not just an approximation.\n" ]
[ 6, 4 ]
[]
[]
[ "metaprogramming", "python" ]
stackoverflow_0000718769_metaprogramming_python.txt
Q: Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule. There are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before. There is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week. Now, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well. Any help would be greatly appreciated. A: Have you considered keeping your same "schedule", and just shuffling the teams? Generating a schedule where everyone plays each other the proper number of times is possible, but if you already have such a schedule then it's much easier to just shuffle the teams. You could keep your current table, but replace each team in it with an id (0-23, or A-X, or whatever), then randomly generate into another table where you assign each team to each id (0 = TeamJoe, 1 = TeamBob, etc). Then when it's time to shuffle again next year, just regenerate that mapping table. Not sure if this answers the question the way you want, but is probably what I would go with (and is actually how I do it on my fantasy football website). A: I'm not sure I fully understand the problem, but here is how I would do it: 1. create a complete list of matches that need to happen 2. iterate over the weeks, selecting which match needs to happen in this week. You can use Python lists to represent the matches that still need to happen, and, for each week, the matches that are happening in this week. In step 2, selecting a match to happen would work this way: a. use random.choice to select a random match to happen. b. determine which team has a home round for this match, using random.choice([1,2]) (if it could have been a home round for either team) c. temporarily remove all matches that get blocked by this selection. a match is blocked if one of its teams has already two matches in the week, or if both teams already have a home match in this week, or if both teams already have a road match in this week. d. when there are no available matches anymore for a week, proceed to the next week, readding all the matches that got blocked for the previous week. A: I think I've understood your question correctly, but anyhow, you can make use of Python's set datatype and generator functionality: import random def scheduler(teams): """ schedule generator: only accepts an even number of teams! """ if 0 != len(teams) % 2: return while teams: home_team = random.choice(list(teams)) teams.remove(home_team) away_team = random.choice(list(teams)) teams.remove(away_team) yield(home_team, away_team) # team list from sql select statement teams = set(["TEAM A", "TEAM B", "TEAM C", "TEAM D"]) for team in scheduler(teams): print(team) This keeps SQL processing to a minimum and should be very easy to add to new rules, like the ones I didn't understand ;) Good luck EDIT: Ah, makes more sense now, should have had one less beer! In which case, I'd definitely recommend NumPy. Take a look through the tutorial and look at consuming the 2-dimensional home-away teams array as you are grabbing random fixtures. It would probably be best to feed the home and away teams from the home day into the away day, so you can ensure there that each team plays home and away each week.
Help Me Figure Out A Random Scheduling Algorithm using Python and PostgreSQL
I am trying to do the schedule for the upcoming season for my simulation baseball team. I have an existing Postgresql database that contains the old schedule. There are 648 rows in the database: 27 weeks of series for 24 teams. The problem is that the schedule has gotten predictable and allows teams to know in advance about weak parts of their schedule. What I want to do is take the existing schedule and randomize it. That way teams are still playing each other the proper number of times but not in the same order as before. There is one rule that has been tripping me up: each team can only play one home and one road series PER week. I had been fooling around with SELECT statements based on ORDER BY RANDOM() but I haven't figured out how to make sure a team only has one home and one road series per week. Now, I could do this in PHP (which is the language I am most comfortable with) but I am trying to make the shift to Python so I'm not sure how to get this done in Python. I know that Python doesn't seem to handle two dimensional arrays very well. Any help would be greatly appreciated.
[ "Have you considered keeping your same \"schedule\", and just shuffling the teams? Generating a schedule where everyone plays each other the proper number of times is possible, but if you already have such a schedule then it's much easier to just shuffle the teams.\nYou could keep your current table, but replace each team in it with an id (0-23, or A-X, or whatever), then randomly generate into another table where you assign each team to each id (0 = TeamJoe, 1 = TeamBob, etc). Then when it's time to shuffle again next year, just regenerate that mapping table.\nNot sure if this answers the question the way you want, but is probably what I would go with (and is actually how I do it on my fantasy football website).\n", "I'm not sure I fully understand the problem, but here is how I would do it:\n1. create a complete list of matches that need to happen\n2. iterate over the weeks, selecting which match needs to happen in this week.\nYou can use Python lists to represent the matches that still need to happen, and, for each week, the matches that are happening in this week.\nIn step 2, selecting a match to happen would work this way:\na. use random.choice to select a random match to happen.\nb. determine which team has a home round for this match, using random.choice([1,2]) (if it could have been a home round for either team)\nc. temporarily remove all matches that get blocked by this selection. a match is blocked if one of its teams has already two matches in the week, or if both teams already have a home match in this week, or if both teams already have a road match in this week.\nd. when there are no available matches anymore for a week, proceed to the next week, readding all the matches that got blocked for the previous week.\n", "I think I've understood your question correctly, but anyhow, you can make use of Python's set datatype and generator functionality:\nimport random\n\ndef scheduler(teams):\n \"\"\" schedule generator: only accepts an even number of teams! \"\"\"\n if 0 != len(teams) % 2:\n return\n\n while teams:\n home_team = random.choice(list(teams))\n teams.remove(home_team)\n away_team = random.choice(list(teams))\n teams.remove(away_team)\n yield(home_team, away_team)\n\n# team list from sql select statement\nteams = set([\"TEAM A\", \"TEAM B\", \"TEAM C\", \"TEAM D\"])\n\nfor team in scheduler(teams):\n print(team)\n\nThis keeps SQL processing to a minimum and should be very easy to add to new rules, like the ones I didn't understand ;) Good luck\nEDIT:\nAh, makes more sense now, should have had one less beer! In which case, I'd definitely recommend NumPy. Take a look through the tutorial and look at consuming the 2-dimensional home-away teams array as you are grabbing random fixtures. It would probably be best to feed the home and away teams from the home day into the away day, so you can ensure there that each team plays home and away each week.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "postgresql", "python" ]
stackoverflow_0000719886_postgresql_python.txt
Q: How do I apply Django model Meta options to models that I did not write? I want to apply the "ordering" Meta option to the Django model User from django.contrib.auth.models. Normally I would just put the Meta class in the model's definition, but in this case I did not define the model. So where do I put the Meta class to modify the User model? A: This is how the Django manual recommends you do it: You could also use a proxy model to define a different default ordering on a model. The standard User model has no ordering defined on it (intentionally; sorting is expensive and we don't want to do it all the time when we fetch users). You might want to regularly order by the username attribute when you use the proxy. This is easy: class OrderedUser(User): class Meta: ordering = ["username"] proxy = True Now normal User queries will be unordered and OrderedUser queries will be ordered by username. Note that for this to work you will need to have a trunk checkout of Django as it is fairly new. If you don't have access to it, you will need to get rid of the proxy part and implement it that way, which can get cumbersome. Check out this article on how to accomplish this. A: Paolo's answer is great; I wasn't previously aware of the new proxy support. The only issue with it is that you need to target your code to the OrderedUser model - which is in a sense similar to simply doing a User.objects.filter(....).order_by('username'). In other words, it's less verbose but you need to explicitly write your code to target it. (Of course, as mentioned, you'd also have to be on trunk.) My sense is that you want all User queries to be ordered, including in third party apps that you don't control. In such a circumstance, monkeypatching the base class is relatively easy and very unlikely to cause any problems. In a central location (such as your settings.py), you could do: from django.contrib.auth.models import User User.Meta.ordering = ['username'] UPDATE: Django 1.5 now supports configurable User models. A: You can either subclass User: class OrderedUser(User): class Meta: ordering = ['-id', 'username'] Or you could use the ordering in ModelAdmin: class UserAdmin(admin.ModelAdmin): ordering = ['-id', 'username'] # unregister user since it's already been registered by auth admin.site.unregister(User) admin.site.register(User, UserAdmin) Note: the ModelAdmin method will only change the ordering in the admin, it won't change the ordering of queries. A: Contact the author and ask them to make a change.
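A short usage sketch of the proxy approach (requires a Django version with proxy-model support, which was brand new when this was written):

from django.contrib.auth.models import User

class OrderedUser(User):
    class Meta:
        ordering = ['username']
        proxy = True   # same table, same rows, different Python-side defaults

User.objects.all()         # unordered, exactly as before
OrderedUser.objects.all()  # identical rows, ordered by username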
How do I apply Django model Meta options to models that I did not write?
I want to apply the "ordering" Meta option to the Django model User from django.contrib.auth.models. Normally I would just put the Meta class in the model's definition, but in this case I did not define the model. So where do I put the Meta class to modify the User model?
[ "This is how the Django manual recommends you do it:\n\nYou could also use a proxy model to define a different default ordering on a model. The standard User model has no ordering defined on it (intentionally; sorting is expensive and we don't want to do it all the time when we fetch users). You might want to regularly order by the username attribute when you use the proxy. This is easy:\n\nclass OrderedUser(User):\n class Meta:\n ordering = [\"username\"]\n proxy = True\n\n\nNow normal User queries will be unorderd and OrderedUser queries will be ordered by username.\n\nNote that for this to work you will need to have a trunk checkout of Django as it is fairly new.\nIf you don't have access to it, you will need to get rid of the proxy part and implement it that way, which can get cumbersome. Check out this article on how to accomplish this.\n", "Paolo's answer is great; I wasn't previously aware of the new proxy support. The only issue with it is that you need to target your code to the OrderedUser model - which is in a sense similar to simply doing a User.objects.filter(....).order_by('username'). In other words, it's less verbose but you need to explicitly write your code to target it. (Of course, as mentioned, you'd also have to be on trunk.)\nMy sense is that you want all User queries to be ordered, including in third party apps that you don't control. In such a circumstance, monkeypatching the base class is relatively easy and very unlikely to cause any problems. In a central location (such as your settings.py), you could do:\nfrom django.contrib.auth.models import User\nUser.Meta.ordering = ['username']\n\nUPDATE: Django 1.5 now supports configurable User models.\n", "You can either subclass User:\nclass OrderedUser(User):\n class Meta:\n ordering = ['-id', 'username']\n\nOr you could use the ordering in ModelAdmin:\nclass UserAdmin(admin.ModelAdmin):\n ordering = ['-id', 'username']\n\n# unregister user since its already been registered by auth\nadmin.site.unregister(User)\nadmin.site.register(User, UserAdmin)\n\nNote: the ModelAdmin method will only change the ordering in the admin, it won't change the ordering of queries.\n", "Contact the author and ask them to make a change.\n" ]
[ 9, 6, 3, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000720083_django_python.txt
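A minimal sketch of the proxy-model approach from the accepted answer above, showing how the ordered and unordered variants coexist (requires a Django version with proxy-model support):

from django.contrib.auth.models import User

class OrderedUser(User):
    class Meta:
        ordering = ["username"]   # default ordering for this proxy only
        proxy = True              # same table, no extra columns

unordered = User.objects.all()          # behaves exactly as before
ordered = OrderedUser.objects.all()     # ordered by username by default

Code that should see ordered users has to query OrderedUser explicitly; third-party apps that query User directly are unaffected, which is exactly the trade-off the second answer points out.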
Q: What's the easiest way/best tutorials to get familiar with SQLAlchemy? What are the best resources/tutorials for getting started with SQLAlchemy? Maybe some simple step-by-step stuff like creating a simple table and using it and going up from there. A: Personally, I'd buy this book and cram it into the noggin over the course of a week or so. I've tried tackling SQLAlchemy on the job without learning the details first. I had a hard time with it, because I found the online documentation to be sparse and cryptic ("read the source for more info..."). SA also provides several levels of abstraction at which you can work and I wasn't confident that I was ever working at the correct level. A: Probably the SQLAlchemy ORM Tutorial? I started with it. A: Using SQLAlchemy (IBM developerWorks)
What's the easiest way/best tutorials to get familiar with SQLAlchemy?
What are the best resources/tutorials for getting started with SQLAlchemy? Maybe some simple step-by-step stuff like creating a simple table and using it and going up from there.
[ "Personally, I'd buy this book and cram it into the noggin over the course of a week or so.\nI've tried tackling SQLAlchemy on the job without learning the details first. I had a hard time with it, because I found the online documentation to be sparse and cryptic (\"read the source for more info...\"). SA also provides several levels of abstraction at which you can work and I wasn't confident that I was ever working at the correct level.\n", "Probably the SQLAlchemy ORM Tutorial? I started with it.\n", "Using SQLAlchemy (IBM developerWorks)\n" ]
[ 5, 3, 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0000195771_python_sqlalchemy.txt
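Since the question asks for "creating a simple table and using it", here is a minimal hedged sketch in the classic (pre-declarative) SQLAlchemy style of that era; the table, class and column names are made up for illustration:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import mapper, sessionmaker

engine = create_engine('sqlite:///:memory:')      # throwaway in-memory database
metadata = MetaData()

users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(50)))
metadata.create_all(engine)                       # issues CREATE TABLE

class User(object):                               # plain class, mapped below
    def __init__(self, name):
        self.name = name

mapper(User, users)                               # classic mapping step

Session = sessionmaker(bind=engine)
session = Session()
session.add(User('alice'))
session.commit()
print session.query(User).filter_by(name='alice').first().name

The official ORM tutorial mentioned above walks through exactly these steps in more depth.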
Q: How do I use Django sessions to read/set cookies? I am trying to use Django sessions to read and set my cookies, but when I do the following the program just does not respond! sessionID = request.session["userid"] The program does not pass this point! Any ideas? A: First, Django already creates a user object for you so you don't need to store it in the session. Just access it as: request.user For example, to get the username you would use: request.user.username Next, if you want to store information in the session you don't need to worry about it at the cookie level. Simply write key / value pairs to the request.session dictionary. Here are some examples from the Django documentation. Edit: The reason your program isn't responding is because a KeyError exception is being raised. 'userid' doesn't exist as a key in the session dictionary (unless you have added it yourself). This is why it is better to program dictionary reads like this: id = request.session.get('somekey', False) Which will return False if 'somekey' doesn't exist in the dictionary.
How do I use Django sessions to read/set cookies?
I am trying to use Django sessions to read and set my cookies, but when I do the following the program just does not respond! sessionID = request.session["userid"] The program does not pass this point! Any ideas?
[ "First, Django already creates a user object for you so you don't need to store it in the session. Just access it as:\nrequest.user\n\nFor example, to get the username you would use:\nrequest.user.username\n\nNext, if you want to store information in the session you don't need to worry about it at the cookie level. Simply write key / value pairs to the request.session dictionary.\nHere are some examples from the Django documentation.\nEdit: The reason your program isn't responding is because a KeyError exception is being raised. 'userid' doesn't exist as a key in the session dictionary (unless you have added it yourself).\nThis is why it is better to program dictionary reads like this:\nid = request.session.get('somekey', False)\n\nWhich will return False if 'somekey' doesn't exist in the dictionary.\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000720329_django_python.txt
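To make the answer's advice concrete, a small hedged sketch of a view that reads and writes session values; the view and key names here are illustrative:

from django.http import HttpResponse

def whoami(request):
    # Read with a default so a missing key cannot raise KeyError:
    userid = request.session.get('userid', None)
    if userid is None and request.user.is_authenticated():
        # Write: Django stores this server-side and sets the session
        # cookie for you; no manual cookie handling is needed.
        request.session['userid'] = request.user.id
        userid = request.user.id
    return HttpResponse('userid is %r' % userid)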
Q: Function definition in Python I am new to Python. I was trying to define and run a simple function in a class. Can anybody please tell me what's wrong with my code: class A : def m1(name,age,address) : print('Name -->',name) print('Age -->',age) print('Address -->',address) >>> a = A() >>> a.m1('X',12,'XXXX') Traceback (most recent call last): File "<pyshell#22>", line 1, in <module> a.m1('X',12,'XXXX') I am getting the error below: TypeError: m1() takes exactly 3 positional arguments (4 given) A: Instance methods take instance as first argument: class A : def m1(self, name,age,address) : print('Name -->',name) print('Age -->',age) print('Address -->',address) You can also use @staticmethod decorator to create static function: class A : @staticmethod def m1(name,age,address) : print('Name -->',name) print('Age -->',age) print('Address -->',address) A: By convention, methods in a class instance receive an object reference as the 1st argument, named self. >>> class A: ... def m1(self,name,age,address): ... print('Name -->',name) ... print('Age -->',age) ... print('Address -->',address) ... >>> a=A() >>> a.m1('X',12,'XXXX') ('Name -->', 'X') ('Age -->', 12) ('Address -->', 'XXXX') >>> A: The first parameter is always the object itself. class A : def m1(self, name,age,address) : print('Name -->',name) print('Age -->',age) print('Address -->',address)
Function definition in Python
I am new to Python. I was trying to define and run a simple function in a class. Can anybody please tell me what's wrong with my code: class A : def m1(name,age,address) : print('Name -->',name) print('Age -->',age) print('Address -->',address) >>> a = A() >>> a.m1('X',12,'XXXX') Traceback (most recent call last): File "<pyshell#22>", line 1, in <module> a.m1('X',12,'XXXX') I am getting the error below: TypeError: m1() takes exactly 3 positional arguments (4 given)
[ "Instance methods take instance as first argument:\nclass A :\n def m1(self, name,age,address) :\n print('Name -->',name)\n print('Age -->',age)\n print('Address -->',address)\n\nYou can also use @staticmethod decorator to create static function:\nclass A :\n @staticmethod\n def m1(name,age,address) :\n print('Name -->',name)\n print('Age -->',age)\n print('Address -->',address)\n\n", "By convention, methods in a class instance receive an object reference as the 1st argument, named self.\n>>> class A:\n... def m1(self,name,age,address):\n... print('Name -->',name)\n... print('Age -->',age)\n... print('Address -->',address)\n... \n>>> a=A()\n>>> a.m1('X',12,'XXXX')\n('Name -->', 'X')\n('Age -->', 12)\n('Address -->', 'XXXX')\n>>> \n\n", "The first parameter is always the object itself.\nclass A :\n def m1(self, name,age,address) :\n print('Name -->',name)\n print('Age -->',age)\n print('Address -->',address)\n\n" ]
[ 18, 4, 4 ]
[]
[]
[ "python" ]
stackoverflow_0000720621_python.txt
Q: HTTP Authentication in Python What is the Python urllib equivalent of curl -u username:password status="abcd" http://example.com/update.json I did this: handle = urllib2.Request(url) authheader = "Basic %s" % base64.encodestring('%s:%s' % (username, password)) handle.add_header("Authorization", authheader) Is there a better / simpler way? A: The trick is to create a password manager, and then tell urllib about it. Usually, you won't care about the realm of the authentication, just the host/url part. For example, the following: password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() top_level_url = "http://example.com/" password_mgr.add_password(None, top_level_url, 'user', 'password') handler = urllib2.HTTPBasicAuthHandler(password_mgr) opener = urllib2.build_opener(urllib2.HTTPHandler, handler) request = urllib2.Request(url) Will set the user name and password to every URL starting with top_level_url. Other options are to specify a host name or more complete URL here. A good document describing this and more is at http://www.voidspace.org.uk/python/articles/urllib2.shtml#id6. A: Yes, have a look at the urllib2.HTTP*AuthHandlers. Example from the documentation: import urllib2 # Create an OpenerDirector with support for Basic HTTP Authentication... auth_handler = urllib2.HTTPBasicAuthHandler() auth_handler.add_password(realm='PDQ Application', uri='https://mahler:8092/site-updates.py', user='klem', passwd='kadidd!ehopper') opener = urllib2.build_opener(auth_handler) # ...and install it globally so it can be used with urlopen. urllib2.install_opener(opener) urllib2.urlopen('http://www.example.com/login.html')
HTTP Authentication in Python
What is the Python urllib equivalent of curl -u username:password status="abcd" http://example.com/update.json I did this: handle = urllib2.Request(url) authheader = "Basic %s" % base64.encodestring('%s:%s' % (username, password)) handle.add_header("Authorization", authheader) Is there a better / simpler way?
[ "The trick is to create a password manager, and then tell urllib about it. Usually, you won't care about the realm of the authentication, just the host/url part. For example, the following:\npassword_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()\ntop_level_url = \"http://example.com/\"\npassword_mgr.add_password(None, top_level_url, 'user', 'password')\nhandler = urllib2.HTTPBasicAuthHandler(password_mgr)\nopener = urllib2.build_opener(urllib2.HTTPHandler, handler)\nrequest = urllib2.Request(url)\n\nWill set the user name and password to every URL starting with top_level_url. Other options are to specify a host name or more complete URL here.\nA good document describing this and more is at http://www.voidspace.org.uk/python/articles/urllib2.shtml#id6.\n", "Yes, have a look at the urllib2.HTTP*AuthHandlers.\nExample from the documentation:\nimport urllib2\n# Create an OpenerDirector with support for Basic HTTP Authentication...\nauth_handler = urllib2.HTTPBasicAuthHandler()\nauth_handler.add_password(realm='PDQ Application',\n uri='https://mahler:8092/site-updates.py',\n user='klem',\n passwd='kadidd!ehopper')\nopener = urllib2.build_opener(auth_handler)\n# ...and install it globally so it can be used with urlopen.\nurllib2.install_opener(opener)\nurllib2.urlopen('http://www.example.com/login.html')\n\n" ]
[ 20, 6 ]
[]
[]
[ "authentication", "curl", "http_headers", "python" ]
stackoverflow_0000720867_authentication_curl_http_headers_python.txt
Q: What's a good two-way encryption library implemented in Python? The authentication system for an application we're using right now uses a two-way hash that's basically little more than a glorified Caesar cipher. Without going into too much detail about what's going on with it, I'd like to replace it with a more secure encryption algorithm (and it needs to be done server-side). Unfortunately, it needs to be two-way and the algorithms in hashlib are all one-way. What are some good encryption libraries that will include algorithms for this kind of thing? A: I assume you want an encryption algorithm, not a hash. The PyCrypto library offers a pretty wide range of options. It's in the middle of moving over to a new maintainer, so the docs are a little disorganized, but this is roughly where you want to start looking. I usually use AES for stuff like this. A: If it's two-way, it's not really a "hash". It's encryption (and from the sounds of things this is really more of a 'salt' or 'cypher', not real encryption.) A hash is one-way by definition. So rather than something like MD5 or SHA1 you need to look for something more like PGP. Secondly, can you explain the reasoning behind the 2-way requirement? That's not generally considered good practice for authentication systems any more. A: PyCrypto supports AES, DES, IDEA, RSA, ElGamal, etc. I've found the documentation here.
What's a good two-way encryption library implemented in Python?
The authentication system for an application we're using right now uses a two-way hash that's basically little more than a glorified Caesar cipher. Without going into too much detail about what's going on with it, I'd like to replace it with a more secure encryption algorithm (and it needs to be done server-side). Unfortunately, it needs to be two-way and the algorithms in hashlib are all one-way. What are some good encryption libraries that will include algorithms for this kind of thing?
[ "I assume you want an encryption algorithm, not a hash. The PyCrypto library offers a pretty wide range of options. It's in the middle of moving over to a new maintainer, so the docs are a little disorganized, but this is roughly where you want to start looking. I usually use AES for stuff like this.\n", "If it's two-way, it's not really a \"hash\". It's encryption (and from the sounds of things this is really more of a 'salt' or 'cypher', not real encryption.) A hash is one-way by definition. So rather than something like MD5 or SHA1 you need to look for something more like PGP.\nSecondly, can you explain the reasoning behind the 2-way requirement? That's not generally considered good practice for authentication systems any more.\n", "PyCrypto supports AES, DES, IDEA, RSA, ElGamal, etc.\nI've found the documentation here.\n" ]
[ 21, 8, 6 ]
[]
[]
[ "encryption", "python" ]
stackoverflow_0000721436_encryption_python.txt
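A hedged sketch of symmetric encryption with PyCrypto's AES, as the accepted answer suggests. The key/IV handling and the naive padding here are illustrative assumptions, not security advice:

from Crypto.Cipher import AES

KEY = '0123456789abcdef'   # AES-128 requires exactly 16 bytes
IV = 'fedcba9876543210'    # 16-byte initialisation vector for CBC mode

def _pad(s):
    # Pad with spaces to the 16-byte block size (assumes the plaintext
    # has no trailing spaces; use PKCS#5-style padding in real code).
    return s + ' ' * (16 - len(s) % 16)

def encrypt(plaintext):
    return AES.new(KEY, AES.MODE_CBC, IV).encrypt(_pad(plaintext))

def decrypt(ciphertext):
    return AES.new(KEY, AES.MODE_CBC, IV).decrypt(ciphertext).rstrip(' ')

token = encrypt('secret credential')
assert decrypt(token) == 'secret credential'

A fixed IV as shown leaks patterns across messages; in practice you would generate a random IV per message and store it alongside the ciphertext.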
Q: Monitoring internet activity I'm looking into writing a small app (in Python) that monitors internet activity. The same idea as NetMeter except with a little more customisation (I need to be able to set off-peak time ranges). Anyway, I've been having a little trouble researching these questions: Does Python have an API to monitor this? As far as data collecting goes I'll probably be recording values in bytes/min with the timestamp - is there something more sensible I'm missing here? A: The pylibpcap project may actually give you what you want out of the box, or at least a leg up on implementing one yourself. It's a set of python bindings, as the name suggests, to the libpcap packet capture library.
Monitoring internet activity
I'm looking into writing a small app (in Python) that monitors internet activity. The same idea as NetMeter except with a little more customisation (I need to be able to set off-peak time ranges). Anyway, I've been having a little trouble researching these questions: Does Python have an API to monitor this? As far as data collecting goes I'll probably be recording values in bytes/min with the timestamp - is there something more sensible I'm missing here?
[ "The pylibpcap project may actually give you what you want out of the box, or at least a leg up on implementing one yourself. It's a set of python bindings, as the name suggests, to the libpcap packet capture library.\n" ]
[ 4 ]
[]
[]
[ "bandwidth", "monitoring", "python" ]
stackoverflow_0000722046_bandwidth_monitoring_python.txt
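As a rough illustration of the pylibpcap suggestion, here is a hedged sketch of a bytes-per-interval counter. It assumes pylibpcap's pcapObject interface (open_live plus a dispatch callback receiving packet length, data and timestamp) and the device name eth0; verify these signatures against the library's own docs before relying on them:

import time
import pcap   # pylibpcap

totals = {'bytes': 0}

def on_packet(pktlen, data, timestamp):
    totals['bytes'] += pktlen          # count wire bytes per packet

p = pcap.pcapObject()
p.open_live('eth0', 96, 1, 100)        # device, snaplen, promiscuous, timeout ms

start = time.time()
while time.time() - start < 60:        # sample one minute
    p.dispatch(1, on_packet)

print '%d bytes seen in the last minute' % totals['bytes']

Recording the (timestamp, byte count) pair per minute, as the question proposes, would then just be a matter of appending to a file or database inside the loop.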
Q: pythonic way to compare compound classes? I have a class that acts as an item in a tree: class CItem( list ): pass I have two trees, each with CItem as root, each tree item has some dict members (like item._test = 1). Now I need to compare these trees. One option is to overload a comparison operator for CItem: class CItem( list ): def __eq__( self, other ): # first compare items as lists if not list.__eq__( self, other ): return False # now compare dict members if self.__dict__ != other.__dict__: return False # seems equal return True Now I can compare two trees using '==' or '!='. Is this a 'pythonic' way, or can such a comparison be done more easily? A: My feeling would be something like class CItem(list): def __eq__(self, other): return list.__eq__(self, other) and self.__dict__ == other.__dict__ but it's basically the same code you have, just expressed in shorter notation. I can't think of any more substantial changes to make offhand.
pythonic way to compare compound classes?
I have a class that acts as an item in a tree: class CItem( list ): pass I have two trees, each with CItem as root, each tree item has some dict members (like item._test = 1). Now I need to compare these trees. One option is to overload a comparison operator for CItem: class CItem( list ): def __eq__( self, other ): # first compare items as lists if not list.__eq__( self, other ): return False # now compare dict members if self.__dict__ != other.__dict__: return False # seems equal return True Now I can compare two trees using '==' or '!='. Is this a 'pythonic' way, or can such a comparison be done more easily?
[ "My feeling would be something like\nclass CItem(list):\n def __eq__(self, other):\n return list.__eq__(self, other) and self.__dict__ == other.__dict__\n\nbut it's basically the same code you have, just expressed in shorter notation. I can't think of any more substantial changes to make offhand.\n" ]
[ 3 ]
[]
[]
[ "python" ]
stackoverflow_0000722741_python.txt
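One hedged addition to the pattern above: on Python 2 (implied by the code style here), defining __eq__ does not give you __ne__ automatically, and a list subclass would otherwise fall back to list.__ne__, which ignores the dict members. A sketch:

class CItem(list):
    def __eq__(self, other):
        return list.__eq__(self, other) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        # Without this, '!=' would use list.__ne__ and miss dict differences.
        return not self.__eq__(other)

a, b = CItem([1, 2]), CItem([1, 2])
a._test, b._test = 1, 2
assert a != b      # equal as lists, but the dict members differ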
Q: Env Variables in Python (v3.0) on Windows I'm using Python 3.0. How do I expand an environment variable given the %var_name% syntax? Any help is much appreciated! Thanks! A: It's in a slightly unexpected place: os.path.expandvars(). Admittedly it is quite often used for processing paths: >>> import os.path >>> os.path.expandvars('%APPDATA%\\MyApp') 'C:\\Documents and Settings\\Administrator\\Application Data\\MyApp' but it's a shell function really. A: I'm guessing you mean "How do I get environment variables?": import os username = os.environ['UserName'] Alternatively, you can use: username = os.getenv('UserName') And to add/change your own variables, you can use: os.putenv('MyVar', 'something I want to store')
Env Variables in Python (v3.0) on Windows
I'm using Python 3.0. How do I expand an environment variable given the %var_name% syntax? Any help is much appreciated! Thanks!
[ "It's in a slightly unexpected place: os.path.expandvars(). Admittedly it is quite often used for processing paths:\n>>> import os.path\n>>> os.path.expandvars('%APPDATA%\\\\MyApp')\n'C:\\\\Documents and Settings\\\\Administrator\\\\Application Data\\\\MyApp'\n\nbut it's a shell function really.\n", "I'm guessing you mean \"How do I get environment variables?\":\nimport os\nusername = os.environ['UserName']\n\nAlternatively, you can use:\nusername = os.getenv('UserName')\n\nAnd to add/change your own variables, you can use:\nos.putenv('MyVar', 'something I want to store')\n\n" ]
[ 3, 2 ]
[]
[]
[ "python", "scripting", "shell", "windows" ]
stackoverflow_0000722739_python_scripting_shell_windows.txt
Q: Why doesn't the handle_read method get called with asyncore? I am trying to prototype send/recv via a packet socket using the asyncore dispatcher (code below). Although my handle_write method gets called promptly, the handle_read method doesn't seem to get invoked. The loop() does call the readable method every so often, but I am not able to receive anything. I know there are packets received on eth0 because a simple tcpdump shows incoming packets. Am I missing something? #!/usr/bin/python import asyncore, socket, IN, struct class packet_socket(asyncore.dispatcher): def __init__(self): asyncore.dispatcher.__init__(self) self.create_socket(socket.AF_PACKET, socket.SOCK_RAW) self.buffer = '0180C20034350012545900040060078910' self.socket.setsockopt(socket.SOL_SOCKET,IN.SO_BINDTODEVICE,struct.pack("%ds" % (len("eth0")+1,), "eth0")) def handle_close(self): self.close() def handle_connect(self): pass def handle_read(self): print "handle_read() called" data,addr=self.recvfrom(1024) print data print addr def readable(self): print "Checking read flag" return True def writable(self): return (len(self.buffer) > 0) def handle_write(self): print "Writing buffer data to the socket" sent = self.sendto(self.buffer,("eth0",0xFFFF)) self.buffer = self.buffer[sent:] c = packet_socket() asyncore.loop() Thanks in advance. A: I finally got this to work with some help from a co-worker. This has to do with passing the protocol argument to the create_socket() method. Unfortunately create_socket() of the dispatcher doesn't take a third argument - so I had to modify my packet_socket() constructor to take a pre-created socket with protocol as ETH_P_ALL (or whatever protocol type you desire to receive) as an argument. Edited code below: #!/usr/bin/python import asyncore, socket, IN, struct proto=3 s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(3)) s.bind(("eth0",proto)) class packet_socket(asyncore.dispatcher): def __init__(self,sock): asyncore.dispatcher.__init__(self,sock) #self.create_socket(socket.AF_PACKET, socket.SOCK_RAW,socket.htons(3)) self.buffer = '0180C20034350012545900040060078910' self.socket.setsockopt(socket.SOL_SOCKET,IN.SO_BINDTODEVICE,struct.pack("%ds" % (len("eth0")+1,), "eth0")) def handle_close(self): self.close() def handle_connect(self): pass def handle_read(self): print "handle_read() called" data,addr=self.recvfrom(1024) print data print addr def readable(self): print "Checking read flag" return True def writable(self): return (len(self.buffer) > 0) def handle_write(self): print "Writing buffer data to the socket" sent = self.sendto(self.buffer,("eth0",0xFFFF)) self.buffer = self.buffer[sent:] c = packet_socket(s) asyncore.loop() Thanks,
Why doesn't the handle_read method get called with asyncore?
I am trying to prototype send/recv via a packet socket using the asyncore dispatcher (code below). Although my handle_write method gets called promptly, the handle_read method doesn't seem to get invoked. The loop() does call the readable method every so often, but I am not able to receive anything. I know there are packets received on eth0 because a simple tcpdump shows incoming packets. Am I missing something? #!/usr/bin/python import asyncore, socket, IN, struct class packet_socket(asyncore.dispatcher): def __init__(self): asyncore.dispatcher.__init__(self) self.create_socket(socket.AF_PACKET, socket.SOCK_RAW) self.buffer = '0180C20034350012545900040060078910' self.socket.setsockopt(socket.SOL_SOCKET,IN.SO_BINDTODEVICE,struct.pack("%ds" % (len("eth0")+1,), "eth0")) def handle_close(self): self.close() def handle_connect(self): pass def handle_read(self): print "handle_read() called" data,addr=self.recvfrom(1024) print data print addr def readable(self): print "Checking read flag" return True def writable(self): return (len(self.buffer) > 0) def handle_write(self): print "Writing buffer data to the socket" sent = self.sendto(self.buffer,("eth0",0xFFFF)) self.buffer = self.buffer[sent:] c = packet_socket() asyncore.loop() Thanks in advance.
[ "I finally got this to work with some help from a co-worker. This has to do with passing the protocol argument to the create_socket() method. Unfortunately create_socket() of the dispatcher doesn't take a third argument - so I had to modify my packet_socket() constructor to take a pre-created socket with protocol as ETH_P_ALL (or whatever protocol type you desire to receive) as an argument. Edited code below:\n\n\n#!/usr/bin/python\n\nimport asyncore, socket, IN, struct\n\nproto=3\ns = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(3))\ns.bind((\"eth0\",proto))\n\nclass packet_socket(asyncore.dispatcher):\n\n def __init__(self,sock):\n asyncore.dispatcher.__init__(self,sock)\n #self.create_socket(socket.AF_PACKET, socket.SOCK_RAW,socket.htons(3))\n self.buffer = '0180C20034350012545900040060078910'\n self.socket.setsockopt(socket.SOL_SOCKET,IN.SO_BINDTODEVICE,struct.pack(\"%ds\" % (len(\"eth0\")+1,), \"eth0\"))\n\n def handle_close(self):\n self.close()\n\n def handle_connect(self):\n pass\n\n def handle_read(self):\n print \"handle_read() called\" \n data,addr=self.recvfrom(1024)\n print data\n print addr\n\n def readable(self):\n print \"Checking read flag\" \n return True\n\n def writable(self):\n return (len(self.buffer) > 0)\n\n def handle_write(self):\n print \"Writing buffer data to the socket\" \n sent = self.sendto(self.buffer,(\"eth0\",0xFFFF))\n self.buffer = self.buffer[sent:]\n\nc = packet_socket(s)\n\nasyncore.loop()\n\n\n\nThanks,\n" ]
[ 1 ]
[]
[]
[ "packet", "python", "sockets" ]
stackoverflow_0000722605_packet_python_sockets.txt
Q: Data Modelling Advice for Blog Tagging system on Google App Engine I'm wondering if anyone might provide some conceptual advice on an efficient way to build a data model to accomplish the simple system described below. I'm somewhat new to thinking in a non-relational manner and want to try avoiding any obvious pitfalls. It's my understanding that a basic principle is that "storage is cheap, don't worry about data duplication" as you might in a normalized RDBMS. What I'd like to model is: A blog article which can be given 0-n tags. Many blog articles can share the same tag. When retrieving data I would like to allow retrieval of all articles matching a tag. In many ways very similar to the approach taken here at stackoverflow. My normal mindset would be to create a many-to-many relationship between tags and blog articles. However, I'm thinking in the context of GAE that this would be expensive, although I have seen examples of it being done. Perhaps using a ListProperty containing each tag as part of the article entities, and a second data model to track tags as they're added and deleted? This way there's no need for any relationships and the ListProperty still allows queries where any list element matching will return results. Any suggestions on the most efficient way to approach this on GAE? A: Thanks to both of you for your suggestions. I've implemented (first iteration) as follows. Not sure if it's the best approach, but it's working. Class A = Articles. Has a StringListProperty which can be queried on its list elements Class B = Tags. One entity per tag, also keeps a running count of the total number of articles using each tag. Data modifications to A are accompanied by maintenance work on B. Thinking that counts being pre-computed is a good approach in a read-heavy environment. A: Counts being pre-computed is not only practical, but also necessary because the count() function returns a maximum of 1000. If write-contention might be an issue, make sure to check out the sharded counter example. http://code.google.com/appengine/articles/sharding_counters.html A: Many-to-many sounds reasonable. Perhaps you should try it first to see if it is actually expensive. Good thing about G.A.E. is that it will tell you when you are using too many cycles. Profiling for free! A: One possible way is with Expando, where you'd add a tag like: setattr(entity, 'tag_'+tag_name, True) Then you could query all the entities with a tag like: def get_all_with_tag(model_class, tag): return model_class.all().filter('tag_%s =' % tag, True) Of course you have to clean up your tags to be proper Python identifiers. I haven't tried this, so I'm not sure if it's really a good solution.
Data Modelling Advice for Blog Tagging system on Google App Engine
I'm wondering if anyone might provide some conceptual advice on an efficient way to build a data model to accomplish the simple system described below. I'm somewhat new to thinking in a non-relational manner and want to try avoiding any obvious pitfalls. It's my understanding that a basic principle is that "storage is cheap, don't worry about data duplication" as you might in a normalized RDBMS. What I'd like to model is: A blog article which can be given 0-n tags. Many blog articles can share the same tag. When retrieving data I would like to allow retrieval of all articles matching a tag. In many ways very similar to the approach taken here at stackoverflow. My normal mindset would be to create a many-to-many relationship between tags and blog articles. However, I'm thinking in the context of GAE that this would be expensive, although I have seen examples of it being done. Perhaps using a ListProperty containing each tag as part of the article entities, and a second data model to track tags as they're added and deleted? This way there's no need for any relationships and the ListProperty still allows queries where any list element matching will return results. Any suggestions on the most efficient way to approach this on GAE?
[ "Thanks to both of you for your suggestions. I've implemented (first iteration) as follows. Not sure if it's the best approach, but it's working.\nClass A = Articles. Has a StringListProperty which can be queried on it's list elements\nClass B = Tags. One entity per tag, also keeps a running count of the total number of articles using each tag.\nData modifications to A are accompanied by maintenance work on B. Thinking that counts being pre-computed is a good approach in a read-heavy environment.\n", "counts being pre-computed is not only practical, but also necessary because the count() function returns a maximum of 1000. if write-contention might be an issue, make sure to check out the sharded counter example.\nhttp://code.google.com/appengine/articles/sharding_counters.html\n", "Many-to-many sounds reasonable. Perhaps you should try it first to see if it is actually expensive.\nGood thing about G.A.E. is that it will tell you when you are using too many cycles. Profiling for free!\n", "One possible way is with Expando, where you'd add a tag like:\nsetattr(entity, 'tag_'+tag_name, True)\n\nThen you could query all the entities with a tag like:\ndef get_all_with_tag(model_class, tag):\n return model_class.all().filter('tag_%s =' % tag, True)\n\nOf course you have to clean up your tags to be proper Python identifiers. I haven't tried this, so I'm not sure if it's really a good solution.\n" ]
[ 7, 2, 1, 1 ]
[]
[]
[ "data_modeling", "google_app_engine", "python" ]
stackoverflow_0000304117_data_modeling_google_app_engine_python.txt
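A hedged sketch of the "StringListProperty plus per-tag counter" design described in the accepted answer, using the GAE db API of the time; model and function names are illustrative, and the counter update is shown without the sharding the second answer recommends for write-heavy tags:

from google.appengine.ext import db

class Article(db.Model):
    title = db.StringProperty()
    tags = db.StringListProperty()       # list membership is queryable

class Tag(db.Model):
    name = db.StringProperty(required=True)
    count = db.IntegerProperty(default=0)

def articles_with_tag(tag_name):
    # Matches any Article whose tags list contains tag_name.
    return Article.all().filter('tags =', tag_name)

def add_article(title, tag_names):
    Article(title=title, tags=tag_names).put()
    for name in tag_names:               # maintain the pre-computed counts
        tag = Tag.all().filter('name =', name).get() or Tag(name=name)
        tag.count += 1
        tag.put()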
Q: Google AppEngine: Date Range not returning correct results I'm trying to search for some values within a date range for a specific type, but content for dates that exist in the database is not being returned by the query. Here is an extract of the Python code: deltaDays = timedelta(days= 20) endDate = datetime.date.today() startDate = endDate - deltaDays result = db.GqlQuery( "SELECT * FROM myData WHERE mytype = :1 AND pubdate >= :2 and pubdate <= :3", type, startDate, endDate ) class myData(db.Model): mytype = db.StringProperty(required=True) value = db.FloatProperty(required=True) pubdate = db.DateTimeProperty(required=True) The GQL returns data, but some rows that I am expecting are missing: 2009-03-18 00:00:00 (missing date in results: 2009-03-20 data exists in database) 2009-03-23 00:00:00 2009-03-24 00:00:00 2009-03-25 00:00:00 2009-03-26 00:00:00 (missing date in results: 2009-03-27 data exists in database) 2009-03-30 00:00:00 (missing date in results: 2009-03-31. data exists in database) 2009-04-01 00:00:00 2009-04-02 00:00:00 2009-04-03 00:00:00 2009-04-06 00:00:00 I uploaded the data via the bulkload script. I can only think of the indexes being corrupted or something similar. This same query used to work for another table I had. But I had to replace it with new content from another source, and this new content is not responding to the query in the same way. The table has around 700,000 rows if that makes any difference. I have done more research and it appears that it's a bug in the AppEngine datastore. For more information about the bug check this link: http://code.google.com/p/googleappengine/issues/detail?id=901 I have tried dropping the index and recreating it with no luck. Thanks A: Nothing looks wrong to me. Are you sure that the missing dates also have mytype == type? I have observed some funny behaviour with indexes in the past. I recommend writing a handler to iterate through all of your records and just put() them back in the database. Maybe something with the bulk uploader isn't working properly. Here's the type of handler I use to iterate through all the entities in a model class: class PPIterator(BaseRequestHandler): def get(self): query = Model.gql('ORDER BY __key__') last_key_str = self.request.get('last') if last_key_str: last_key = db.Key(last_key_str) query = Model.gql('WHERE __key__ > :1 ORDER BY __key__', last_key) entities = query.fetch(11) new_last_key_str = None if len(entities) == 11: new_last_key_str = str(entities[9].key()) for e in entities: e.put() if new_last_key_str: self.response.out.write(json.write(new_last_key_str)) else: self.response.out.write(json.write('done')) You can use whatever you want to iterate through the entities. I used to use Javascript in a browser window, but found that was a pig when making hundreds of thousands of requests. These days I find it more convenient to use a Ruby script like this one: require 'net/http' require 'json' last=nil while last != 'done' url = 'your_url' path = '/your_path' path += "?/last=#{last}" if last last = Net::HTTP.get(url,path) puts last end Ben UPDATE: now that remote api is working and reliable, I rarely write this type of handler anymore. The same ideas apply to the code you'd use there to iterate through the entities in the remote api console.
Google AppEngine: Date Range not returning correct results
I'm trying to search for some values within a date range for a specific type, but content for dates that exist in the database is not being returned by the query. Here is an extract of the Python code: deltaDays = timedelta(days= 20) endDate = datetime.date.today() startDate = endDate - deltaDays result = db.GqlQuery( "SELECT * FROM myData WHERE mytype = :1 AND pubdate >= :2 and pubdate <= :3", type, startDate, endDate ) class myData(db.Model): mytype = db.StringProperty(required=True) value = db.FloatProperty(required=True) pubdate = db.DateTimeProperty(required=True) The GQL returns data, but some rows that I am expecting are missing: 2009-03-18 00:00:00 (missing date in results: 2009-03-20 data exists in database) 2009-03-23 00:00:00 2009-03-24 00:00:00 2009-03-25 00:00:00 2009-03-26 00:00:00 (missing date in results: 2009-03-27 data exists in database) 2009-03-30 00:00:00 (missing date in results: 2009-03-31. data exists in database) 2009-04-01 00:00:00 2009-04-02 00:00:00 2009-04-03 00:00:00 2009-04-06 00:00:00 I uploaded the data via the bulkload script. I can only think of the indexes being corrupted or something similar. This same query used to work for another table I had. But I had to replace it with new content from another source, and this new content is not responding to the query in the same way. The table has around 700,000 rows if that makes any difference. I have done more research and it appears that it's a bug in the AppEngine datastore. For more information about the bug check this link: http://code.google.com/p/googleappengine/issues/detail?id=901 I have tried dropping the index and recreating it with no luck. Thanks
[ "nothing looks wrong to me. are you sure that the missing dates also have mytype == type?\ni have observed some funny behaviour with indexes in the past. I recommend writing a handler to iterate through all of your records and just put() them back in the database. maybe something with the bulk uploader isn't working properly.\nHere's the type of handler I use to iterate through all the entities in a model class:\n class PPIterator(BaseRequestHandler):\n def get(self):\n query = Model.gql('ORDER BY __key__')\n last_key_str = self.request.get('last')\n if last_key_str:\n last_key = db.Key(last_key_str)\n query = Model.gql('WHERE __key__ > :1 ORDER BY __key__', last_key)\n entities = query.fetch(11)\n new_last_key_str = None\n if len(entities) == 11:\n new_last_key_str = str(entities[9].key())\n for e in entities:\n e.put()\n if new_last_key_str:\n self.response.out.write(json.write(new_last_key_str))\n else:\n self.response.out.write(json.write('done'))\n\nYou can use whatever you want to iterate through the entities. I used to use Javascript in a browser window, but found that was a pig when making hundreds of thousands of requests. These days I find it more convenient to use a ruby script like this one:\nrequire 'net/http'\nrequire 'json'\nlast=nil\nwhile last != 'done'\n url = 'your_url'\n path = '/your_path'\n path += \"?/last=#{last}\" if last\n last = Net::HTTP.get(url,path)\n puts last\nend\n\nBen\nUPDATE: now that remote api is working and reliable, I rarely write this type of handler anymore. The same ideas apply to the code you'd use there to iterate through the entities in the remote api console.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "gql", "python" ]
stackoverflow_0000722728_google_app_engine_gql_python.txt
Q: Store last created model's row in memory I am working on an ajax game. The abstract: 2+ gamers (browsers) change a variable which is saved to DB through json. All gamers are synchronized by javascript-timer+json - periodically reading that variable from DB. In general, all changes are stored in DB as history, but I want the recent change duplicated in memory. So the problem is: I want one variable to be stored in memory instead of DB. A: You can use the cache system: http://docs.djangoproject.com/en/dev/topics/cache/#topics-cache A: Unfortunately I don't believe you can do this unless you only have one instance of Python running, in which case you can use a global variable. With most web implementations you have a threaded server so this would not work. You would have to do a fetch from the database to get the latest copy of the record. If this is a very high-usage situation, you may want to look into memcached (or similar) as a way of lowering the performance overhead of hitting the database for each request. A: You'd either have to use a cache, or fetch the most recent change on each request (since you can't persist objects between requests in-memory). From what you describe, it sounds as if it's being hit fairly frequently, so the cache is probably the way to go. A: Would something like memcached be suitable?
Store last created model's row in memory
I am working on an ajax game. The abstract: 2+ gamers (browsers) change a variable which is saved to DB through json. All gamers are synchronized by javascript-timer+json - periodically reading that variable from DB. In general, all changes are stored in DB as history, but I want the recent change duplicated in memory. So the problem is: I want one variable to be stored in memory instead of DB.
[ "You can use the cache system:\nhttp://docs.djangoproject.com/en/dev/topics/cache/#topics-cache\n", "Unfortunately I don't believe you can do this unless you only have one instance of Python running, in which case you can use a global variable. With most web implementations you have a threaded server so this would not work. You would have to do a fetch from the database to get the latest copy of the record.\nIf this is a very high-usage situation, you may want to look into memcached (or similar) as a way of lowering the performance overhead of hitting the database for each request.\n", "You'd either have to use a cache, or fetch the most recent change on each request (since you can't persist objects between requests in-memory).\nFrom what you describe, it sounds as if it's being hit fairly frequently, so the cache is probably the way to go.\n", "Would something like memcached be suitable?\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000602030_django_python.txt
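To make the cache suggestion concrete, a hedged sketch using Django's low-level cache API; the key scheme, timeout, and the Game/history attributes are illustrative assumptions:

from django.core.cache import cache

def save_move(game, value):
    game.history.create(value=value)     # keep the full history in the DB
    cache.set('game_%d_latest' % game.id, value, 30)   # 30s in-memory copy

def latest_move(game):
    value = cache.get('game_%d_latest' % game.id)
    if value is None:                    # cache miss: fall back to the DB
        value = game.history.latest('id').value
    return value

With a memcached backend this also works across multiple server processes, which addresses the threading concern raised in the second answer.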
Q: Inserting object with ManyToMany in Django I have a blog-like application with stories and categories: class Category(models.Model): ... class Story(models.Model): categories = models.ManyToManyField(Category) ... Now I know that when you save a new instance of a model with a many-to-many field, problems come up because the object is not yet in the database. This problem usually manifests itself on form submission, which can be neatly worked around with story_form.save(commit=False). What about a situation where there are no forms to speak of? In my case, I want to build an API to accept remote submissions. Since I like JSON, and a whole lot of other messaging in our company is in JSON (including outgoing messages from this server), I'd like to be able to receive the following: { "operation": "INSERT", "values": [ { "datatype": "story", "categories": [4,6,8], "id":50, ... } ] } and implement a factory that converts the values to instances. But I'd like the factory to be as agnostic as possible to the type of operation. So: { "operation": "UPDATE", "values": [ { "datatype": "story", "categories": [4,6,8], "id":50, ... } ] } should also be converted in the same way, except that INSERT ignores id and UPDATE gets the already existing instance and overrides it. (The remote submitter listens to a feed that gives it, among other things, the category objects to cache, so it can, and must, refer to them by id, but it doesn't have any direct communication with the database.) My real question is: what's the easiest, most consistent way to inflate an instance of a Django model object that has a ManyToManyManager involved? As far as I can fathom, any insert of an object with a many-to-many field will require two database hits, just because it is necessary to obtain a new id first. But my current awkward solution is to save the object right away and mark it hidden, so that functions down the line can play with it and save it as something a little more meaningful. It seems like one step up would be overriding save so that objects without ids save once, copy some proxy field to categories, then save again. Best of all would be some robust manager object that saves me the trouble. What do you recommend? A: "As far as I can fathom, any insert of an object with a many-to-many field will require two database hits,..." So what? Micromanaging each individual database access generally isn't worth all the thinking. Do the simplest, most obvious thing so that Django can optimize cache for you. Your application performance is --typically-- dominated by the slow download to the browser, and all the JPEGS, CSS and other static content that is part of your page. Time spent in brain-cramping thinking about how to make two Primary Keys (for a many-to-many relationship) without doing two database accesses is not going to pay out well. Two PK's is usually two database accesses. Edit "...litters the database on error..." Django has transactions. See http://docs.djangoproject.com/en/dev/topics/db/transactions/#managing-database-transactions. Use the @transaction.commit_manually decorator. "forces validation that is meant to occur later" Doesn't make sense -- update your question to explain this. A: I commented on S.Lott's post that I feel his answer is the best. He's right: if the goal is just to avoid two database hits, then you're just in for a world of unnecessary pain.
Reading your reference to ModelForm, however, if you are looking instead for a solution that allows you to defer official saving in some way, you may wish to have a look at the save_instance() function in forms.models. The inner function save_m2m is how the delayed many-to-many save is accomplished for forms. Implementing something for models without forms would basically follow the same principle. Having said that, and coming back to S.Lott's post, the case of a ModelForm and an actual Model are somewhat different. Because forms expose only a "safe" set of data to be edited in a browser ("safe" because it is filtered in some way, or excludes critical fields that a user shouldn't be editing), it is a reasonable design expectation that someone might need to add important information to the form-derived model before saving. This is why django has the commit=False. This expectation falls down for cases where you are directly instantiating models. Here you have programmatic access to the model API, so you will probably find that using that API directly is easier to maintain and less error prone than through generalized indirection. I can understand why you are picturing the factory concept, but in this case you may find the effort to create a bullet-proof generalization for all manner of models is a complication that's just not worth it.
Inserting object with ManyToMany in Django
I have a blog-like application with stories and categories: class Category(models.Model): ... class Story(models.Model): categories = models.ManyToManyField(Category) ... Now I know that when you save a new instance of a model with a many-to-many field, problems come up because the object is not yet in the database. This problem usually manifests itself on form submission, which can be neatly worked around with story_form.save(commit=False). What about a situation where there are no forms to speak of? In my case, I want to build an API to accept remote submissions. Since I like JSON, and a whole lot of other messaging in our company is in JSON (including outgoing messages from this server), I'd like to be able to receive the following: { "operation": "INSERT", "values": [ { "datatype": "story", "categories": [4,6,8], "id":50, ... } ] } and implement a factory that converts the values to instances. But I'd like the factory to be as agnostic as possible to the type of operation. So: { "operation": "UPDATE", "values": [ { "datatype": "story", "categories": [4,6,8], "id":50, ... } ] } should also be converted in the same way, except that INSERT ignores id and UPDATE gets the already existing instance and overrides it. (The remote submitter listens to a feed that gives it, among other things, the category objects to cache, so it can, and must, refer to them by id, but it doesn't have any direct communication with the database.) My real question is: what's the easiest, most consistent way to inflate an instance of a Django model object that has a ManyToManyManager involved? As far as I can fathom, any insert of an object with a many-to-many field will require two database hits, just because it is necessary to obtain a new id first. But my current awkward solution is to save the object right away and mark it hidden, so that functions down the line can play with it and save it as something a little more meaningful. It seems like one step up would be overriding save so that objects without ids save once, copy some proxy field to categories, then save again. Best of all would be some robust manager object that saves me the trouble. What do you recommend?
[ "\"As far as I can fathom, any insert of an object with a many-to-many field will require two database hits,...\"\nSo what?\nMicromanaging each individual database access generally isn't worth all the thinking. Do the simplest, most obvious thing so that Django can optimize cache for you. \nYour application performance is --typically-- dominated by the slow download to the browser, and all the JPEGS, CSS and other static content that is part of your page.\nTime spent in brain-cramping thinking about how to make two Primary Keys (for a many-to-many relationship) without doing two database accesses is not going to pay out well. Two PK's is usually two database accesses.\n\nEdit\n\"...litters the database on error...\"\nDjango has transactions. See http://docs.djangoproject.com/en/dev/topics/db/transactions/#managing-database-transactions. Use the @transaction.commit_manually decorator.\n\"forces validation that is meant to occur later\"\nDoesn't make sense -- update your question to explain this.\n", "I commented on S.Lott's post that I feel his answer is the best. He's right: if the goal is just to avoid two database hits, then you're just in for a world of unnecessary pain.\nReading your reference to ModelForm, however, if you are looking instead for a solution to that allows you to defer official saving in some way, you may wish to have a look at the save_instance() function in forms.models. The inner function save_m2m is how the delayed many-to-many save is accomplished for forms. Implementing something for models without forms would basically follow the same principle.\nHaving said that, and coming back to S.Lott's post, the case of a ModelForm and an actual Model are somewhat different. Because forms expose only a \"safe\" set of data to be edited in a browser (\"safe\" because it is filtered in some way, or excludes critical fields that a user shouldn't be editing), it is a reasonable design expectation that someone might need to add important information to the form-derived model before saving. This is why django has the commit=False.\nThis expectation falls down for cases where you are directly instantiating models. Here you have programmatic access to the model API, so you will probably find that using that API directly is easier to maintain and less error prone than through generalized indirection. I can understand why you are picturing the factory concept, but in this case you may find the effort to create a bullet-proof generalization for all manner of models is a complication that's just not worth it.\n" ]
[ 3, 2 ]
[]
[]
[ "django", "many_to_many", "python" ]
stackoverflow_0000723293_django_many_to_many_python.txt
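A hedged sketch of the plain two-step insert the first answer argues for, wrapped in the manual transaction control it mentions so a failure cannot leave a half-saved story behind; the app path and field handling are illustrative assumptions, and the models are assumed to be the Story/Category pair from the question:

from django.db import transaction
from myapp.models import Story, Category   # hypothetical app name

@transaction.commit_manually
def insert_story(values):
    try:
        story = Story()                   # set scalar fields here as needed
        story.save()                      # first hit: obtain a primary key
        story.categories = Category.objects.filter(
            id__in=values['categories'])  # second hit: write the M2M rows
        transaction.commit()
        return story
    except:
        transaction.rollback()
        raise

For UPDATE, the same body would start from Story.objects.get(id=values['id']) instead of Story().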
Q: Controlling a Windows Console App w/ stdin pipe I am trying to control a console application (JTAG app from Segger) from Python using the subprocess module. The application behaves correctly for stdout, but stdin doesn't seem to be read. If I enable the shell, I can type into the input and control the application, but I need to do this programmatically. The same code works fine for issuing commands to something like cmd.exe. I'm guessing that the keyboard is being read directly instead of stdin. Any ideas how I can send the application input? from subprocess import Popen, PIPE, STDOUT jtag = Popen('"C:/Program Files/SEGGER/JLinkARM_V402e/JLink.exe"', shell=True, universal_newlines=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT) jtag.stdin.write('usb\n') jtag.stdin.flush() print "Stdout:" while True: s = jtag.stdout.readline() if not s: break print s, jtag.terminate() A: As shoosh says, I'd try to verify that the application really is looking for keyboard input. If it is, you can try Win32 message passing, or sending it keyboard input via automation. For the message passing route, you could use the EnumWindows function via ctypes to find the window you're after, then using PostMessage to send it WM_KEYDOWN messages. You can also send keyboard input via pywinauto, or the ActiveX control of AutoIt via win32com. Using AutoIt: from win32com.client import Dispatch auto = Dispatch("AutoItX3.Control") auto.WinActivate("The window's title", "") auto.WinWaitActive("The window's title", "", 10) auto.Send("The input") A: I'm guessing that the keyboard is being read directly instead of stdin This is a pretty strong assumption and before stitching a solution you should try to verify it somehow. There are different levels of doing this. Actually two I can think of right now: Waiting for keyboard events from the main windows loop. If this is the case then you can simulate a keyboard simply by sending the window the right kind of message. These can be either WM_KEYDOWN or WM_CHAR or perhaps some other related variants. Actually polling the hardware, for instance using GetAsyncKeyState(). This is somewhat unlikely and if this is really what's going on, I doubt you can do anything to simulate it programmatically. Another take on this is trying to use the on-screen keyboard and see if it works with the application. If it does, figure out how to simulate what it does. Some tools which might be helpful - Spy++ (comes with Visual Studio) - allows you to see what messages go into a window strace allows you to see what syscalls a process is making.
Controlling a Windows Console App w/ stdin pipe
I am trying to control a console application (JTAG app from Segger) from Python using the subprocess module. The application behaves correctly for stdout, but stdin doesn't seem to be read. If I enable the shell, I can type into the input and control the application, but I need to do this programmatically. The same code works fine for issuing commands to something like cmd.exe. I'm guessing that the keyboard is being read directly instead of stdin. Any ideas how I can send the application input? from subprocess import Popen, PIPE, STDOUT jtag = Popen('"C:/Program Files/SEGGER/JLinkARM_V402e/JLink.exe"', shell=True, universal_newlines=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT) jtag.stdin.write('usb\n') jtag.stdin.flush() print "Stdout:" while True: s = jtag.stdout.readline() if not s: break print s, jtag.terminate()
[ "As shoosh says, I'd try to verify that the application really is looking for keyboard input. If it is, you can try Win32 message passing, or sending it keyboard input via automation.\nFor the message passing route, you could use the EnumWindows function via ctypes to find the window you're after, then using PostMessage to send it WM_KEYDOWN messages.\nYou can also send keyboard input via pywinauto, or the ActiveX control of AutoIt via win32com.\nUsing AutoIt:\nfrom win32com.client import Dispatch\n\nauto = Dispatch(\"AutoItX3.Control\")\nauto.WinActivate(\"The window's title\", \"\")\nauto.WinWaitActive(\"The window's title\", \"\", 10)\n\nauto.Send(\"The input\")\n\n", "I'm guessing that the keyboard is being read directly instead of stdin\n\nThis is a pretty strong assumption and before stitching a solution you should try to verify it somehow. There are different levels of doing this. Actually two I can think of right now:\n\nWaiting for keyboard events from the main windows loop. if this is the case then you can simulate a keyboard simply by sending the window the right kind of message. these can be wither WM_KEYDOWN or WM_CHAR or perhaps some other related variants.\nActually polling the hardware, for instance using GetAsyncKeyState(). This is somewhat unlikely and if this is really what's going on, I doubt you can do anything to simulate it programatically.\n\nAnother take on this is trying to use the on-screen keyboard and see if it works with the application. if it does, figure out how to simulate what it does.\nSome tools which might be helpful - \n\nSpy++ (comes with Visual Studio) - allows you to see what messages go into a window\nstrace allows you to see what syscalls a process is making.\n\n" ]
[ 3, 2 ]
[]
[]
[ "command_line", "jtag", "python", "subprocess", "windows" ]
stackoverflow_0000723424_command_line_jtag_python_subprocess_windows.txt
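To flesh out the ctypes message-passing route from the first answer, a hedged sketch that finds the console window by title and posts WM_CHAR messages to it. The window title is a placeholder, and whether JLink.exe reacts to posted characters at all depends on which of the two input mechanisms from the second answer it actually uses:

import ctypes

user32 = ctypes.windll.user32
WM_CHAR = 0x0102

hwnd = user32.FindWindowW(None, u'J-Link Commander')   # placeholder title
if not hwnd:
    raise RuntimeError('target window not found')

for ch in u'usb\r':                    # '\r' stands in for the Enter key
    user32.PostMessageW(hwnd, WM_CHAR, ord(ch), 0)

If the application polls the keyboard hardware instead (point 2 in the answer above), no posted message will help, and a tool like AutoIt's Send, shown above, is the more reliable route.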
Q: Is there any good Python tutorial/guide to use XML-RPC with Last.fm API? I'm new to XML-RPC and I would like to know if there is any good tutorial to use XML-RPC with the Last.fm API. Is it possible to call the API methods using the xmlrpclib module like in the following example? import xmlrpclib myserver = xmlrpclib.ServerProxy('http://ws.audioscrobbler.com/2.0/') A: Your code looks just fine. You might not know this, but most XML-RPC endpoints (such as Last.fm's) support XML-RPC introspection. For instance, if you want to find out what methods it exposes, do this: import xmlrpclib svc = xmlrpclib.ServerProxy('http://ws.audioscrobbler.com/2.0/') print svc.system.listMethods() And you'll be given a list of the methods exposed by the XML-RPC endpoint. By the way, that bit of code up there demonstrates how to use a ServerProxy object to call a method exposed by the endpoint it's tied to, in this case, the system.listMethods method. If you wanted to call the user.getTopTags (as demonstrated on the API documentation homepage) method exposed by Last.fm, you'd do this: print svc.user.getTopTags({'user': 'foo', 'api_key': 'bar'}) Dead simple! Of course, you'll need an API key from Last.fm before you can use the API. A: Now it's not a good time to work on last.fm's API. They are changing it in a few days I think. A: pylast Last fm library in Python The pylast library is a good choice for this work. The library has a very large set of functionality covering all the major parts of the last.fm API. Functionality This includes: Albums, Artists, Auth, Events, Geo, Libraries, Playlists, Tags, Tasteometer ratings, Users and Venues. Using a library such as this means that a lot of the work is done for you, so you don't spend time reinventing the wheel. (The library itself is 3,000+ lines of code). License Because of the license under which this library is released, it is possible to modify the code yourself. There is also a community of people working to highlight any bugs in the library at http://sourceforge.net/tracker/?group_id=66150&atid=513503 A: You can use this: http://pypi.python.org/pypi/pylast/0.3.1 or if you do it on your own you can check the code... A: Yes, your example of using the xmlrpclib looks fine. Pylast is probably not the best beginner example. From Python, I think the simplest options are to use XML-RPC as you mentioned, or the REST API with the JSON response format and simplejson to decode the output.
Is there any good Python tutorial/guide to use XML-RPC with Last.fm API?
I'm new to XML-RPC and I would like to know if there is any good tutorial to use XML-RPC with the Last.fm API. Is it possible to call the API methods using the xmlrpclib module like in the following example? import xmlrpclib myserver = xmlrpclib.ServerProxy('http://ws.audioscrobbler.com/2.0/')
[ "Your code looks just fine.\nYou might not know this, but most XML-RPC endpoints (such as Last.fm's) support XML-RPC introspection. For instance, if you want to find out what methods it exposes, do this: \nimport xmlrpclib\nsvc = xmlrpclib.ServerProxy('http://ws.audioscrobbler.com/2.0/')\nprint svc.system.listMethods()\n\nAnd you'll be given a list of the methods exposed by the XML-RPC endpoint.\nBy the way, that bit of code up there demonstrates how to use a ServerProxy object to call a method exposed by the endpoint it's tied to, in this case, the system.listMethods method. If you wanted to call the user.getTopTags (as demonstrated on the API documentation homepage) method exposed by Last.fm, you'd do this:\nprint svc.user.getTopTags({'user': 'foo', 'api_key': 'bar'})\n\nDead simple! Of course, you'll need an API key from Last.fm before you can use the API.\n", "Now its not a good time to work on last.fm's api. They are changing it in a few days I think.\n", "pylast\n\nLast fm library in Python\nThe pylast library is a good choice for this work.\nThe library has a very large set of functionality covering all the major parts of the last.fm API.\nFunctionality\nThis includes: Albums, Artists, Auth, Events, Geo, Libraries, Playlists, Tags, Tasteometer ratings, Users and Venues.\nUsing a library such as this means that a lot of the work is done for you, so you dont spend time reinventing the wheel. (The library iteself is 3,000+ lines of code).\nLicense\nBecause of the license which this library is released under, it is possible to modify the code yourself.\nThere is also a community of people working to hightlight any bugs in the library at http://sourceforge.net/tracker/?group_id=66150&atid=513503\n", "You can use this:\nhttp://pypi.python.org/pypi/pylast/0.3.1\nor if u do it by your own you can check the code...\n", "Yes, your example of using the xmlrpclib looks fine.\nPylast is probably not the best beginner example. From Python, I think the simplest options are to use XML-RPC as you mentioned, or the REST API with the JSON response format and simplejson to decode the ouput.\n" ]
[ 7, 1, 1, 0, 0 ]
[]
[]
[ "python", "web_services", "xml_rpc" ]
stackoverflow_0000646578_python_web_services_xml_rpc.txt
Q: Python class inclusion wrong behaviour I have this in my main.py: from modules import controller ctrl = controller help(ctrl) print(ctrl.div(5,2)) and the controller.py is: class controller: def div(self, x, y): return x // y when I run my main I get the error: Traceback (most recent call last): File "...\main.py", line 8, in ? print(ctrl.div(5,2)) AttributeError: 'module' object has no attribute 'div' What is wrong? A: This is very confusing as shown. When you say from modules import controller You're making the claim that you have a module with a filename of modules.py. OR You're making the claim that you have a package named modules. This directory has an __init__.py file and a module with a filename of controller.py You should clarify this to be precise. It looks like you have mis-named your files and modules in the example code posted here. When you say from modules import controller That creates a module (not a class) named controller. When you say ctrl = controller That creates another name for the controller module, ctrl. At no time do you reference the class (controller.controller). At no time did you create an instance of the class (controller.controller()). A: ctrl = controller ‘controller’ is a module, representing your whole ‘controller.py’ file. In Python, unlike in Java, there can be any number of symbols defined inside a module, so there isn't a 1:1 relationship between the imported module and the class defined in it. So the script complains because the ‘controller’ module does not have a ‘div’ function; ‘div’ is defined as a method of the ‘controller’ class inside the ‘controller’ module. If you want an instance of the controller() class you need to say: ctrl= controller.controller() (Note also the () to instantiate the object, or you'll be getting the class itself rather than an instance. If you do really want to define a static method in the class so you can call it without an instance, you can do this using the ‘staticmethod’ decorator and omitting ‘self’.) It's usually best to name your classes with an initial capital to avoid confusion: class Controller(object): ... ctrl= controller.Controller() A: You should create an instance of controller, like this: ctrl = controller() Note the brackets after controller. A: When you execute following code from modules import controller ctrl = controller ctrl variable becomes a pointer to controller class. To create an instance of controller class you need to add parentheses: from modules import controller ctrl = controller()
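A minimal working layout for what the asker seems to be after (the package structure is an assumption; names follow the question, with the class renamed per the second answer's advice):

# modules/__init__.py  (empty file; marks "modules" as a package)

# modules/controller.py
class Controller(object):
    def div(self, x, y):
        return x // y

# main.py
from modules.controller import Controller

ctrl = Controller()          # note the parentheses: an instance, not the class
print(ctrl.div(5, 2))        # -> 2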
Python class inclusion wrong behaviour
I have this in my main.py: from modules import controller ctrl = controller help(ctrl) print(ctrl.div(5,2)) and the controller.py is: class controller: def div(self, x, y): return x // y when I run my main I get the error: Traceback (most recent call last): File "...\main.py", line 8, in ? print(ctrl.div(5,2)) AttributeError: 'module' object has no attribute 'div' What is wrong?
[ "This is very confusing as shown.\nWhen you say\nfrom modules import controller\n\nYou're making the claim that you have a module with a filename of modules.py.\nOR\nYou're making the claim that you have a package named modules. This directory has an __init__.py file and a module with a filename of controller.py\nYou should clarify this to be precise. It looks like you have mis-named your files and modules in the the example code posted here.\nWhen you say\nfrom modules import controller\n\nThat creates a module (not a class) named controller.\nWhen you say\nctrl = controller\n\nThat creates another name for the controller module, ctrl.\nAt no time to you reference the class (controller.controller). At no time did you create an instance of the class (controller.controller()).\n", "\nctrl = controller\n\n‘controller’ is a module, representing your whole ‘controller.py’ file. In Python, unlike in Java, there can be any number of symbols defined inside a module, so there isn't a 1:1 relationship between the imported module and the class defined in it.\nSo the script complains because the ‘controller’ module does not have a ‘div’ function; ‘div’ is defined as a method of the ‘controller’ class inside the ‘controller’ module. If you want an instance of the controller() class you need to say:\nctrl= controller.controller()\n\n(Note also the () to instantiate the object, or you'll be getting the class itself rather than an instance. If you do really want to define a static method in the class so you can call it without an instance, you can do this using the ‘staticmethod’ decorator and omitting ‘self’.)\nIt's usually best to name your classes with an initial capital to avoid confusion:\nclass Controller(object):\n ...\n\nctrl= controller.Controller()\n\n", "You should create an instance of controller, like this:\nctrl = controller()\n\nNote the brackets after controller.\n", "When you execute following code\nfrom modules import controller\nctrl = controller\n\nctrl variable becomes a pointer to controller class.\nTo create an instance of controller class you need to add parenthesis:\nfrom modules import controller\nctrl = controller()\n\n" ]
[ 4, 3, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000722640_python.txt
Q: String inside a string BASE_URL = 'http://foobar.com?foo=%s' variable = 'bar' final_url = BASE_URL % (variable) I get this 'http://foobar.com?foo=bar' # It ignores the inside string. But I wanted something like this 'http://foobar.com?foo='bar'' Thanks for the answer. Can you help me out with almost the same problem: lst = ['foo', 'bar', 'foo bar'] [str(l) for l in lst if ' ' in l] I get ['foo bar'] but I wanted it like [''foo bar''] Thanks in advance. A: Change your BASE_URL to either BASE_URL = "http://foobar.com?foo='%s'" or BASE_URL = 'http://foobar.com?foo=\'%s\'' A: If you're working with URL parameters, it's probably safer to use urllib.urlencode: import urllib BASE_URL = 'http://foobar.com/?%s' print BASE_URL % urllib.urlencode({ 'foo': 'bar', }) Regarding the quotes: Why do you explicitly want them? Normally your HTTP-wrapper would handle all that for you. Regarding your 2nd question: If you absolutely also want to have the quotes in there, you still have to either escape them when appending the contained string, or (probably the safer way of doing it) would be using repr(...) lst = ['foo', 'bar', 'foo bar'] lst2 = [] for l in lst: if ' ' in l: lst2.append(repr(l)) A: It seems you are a bit confused about how string literals work. When you say s = 'this is a string', you are assigning a string to a variable. What string? Well, a string literal that you hardcoded in your program. Python uses the apostrophes to indicate start and end of a string literal - with anything inside being the contents of the string. This is probably one of the first hard problems for beginners in programming: There is a difference between what you write in your program's source code and what actually happens at runtime. You might want to work through a couple of tutorials (I hear "Dive into Python" is pretty good). A: If you want single quotes to appear in the URL you can use BASE_URL = 'http://foobar.com?foo=%s' variable = "'bar'" final_url = BASE_URL % (variable) But this variant is quite insecure, if variable is coming from somewhere (like user input).
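For the second half of the question, the repr() approach from the second answer produces exactly the quoted items the asker describes; as a one-line sketch:

lst = ['foo', 'bar', 'foo bar']
quoted = [repr(l) for l in lst if ' ' in l]
print quoted   # ["'foo bar'"]  (each matching item now carries its own quotes)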
String inside a string
BASE_URL = 'http://foobar.com?foo=%s' variable = 'bar' final_url = BASE_URL % (variable) I get this 'http://foobar.com?foo=bar' # It ignores the inside string. But I wanted something like this 'http://foobar.com?foo='bar'' Thanks for the answer. Can you help me out with almost the same problem: lst = ['foo', 'bar', 'foo bar'] [str(l) for l in lst if ' ' in l] I get ['foo bar'] but I wanted it like [''foo bar''] Thanks in advance.
[ "Change your BASE_URL to either\nBASE_URL = \"http://foobar.com?foo='%s'\"\n\nor\nBASE_URL = 'http://foobar.com?foo=\\'%s\\''\n\n", "If you're working with URL parameters, it's probably safer to use urllib.urlencode:\nimport urllib\n\nBASE_URL = 'http://foobar.com/?%s'\nprint BASE_URL % urllib.urlencode({\n 'foo': 'bar', \n})\n\nRegarding the quotes: Why do you explicitly want them? Normally your HTTP-wrapper would handle all that for you.\nRegarding your 2nd question: If you absolutely also want to have the quotes in there, you still have to either escape them when appending the contained string, or (probably the safer way of doing it) would be using repr(...)\nlst = ['foo', 'bar', 'foo bar']\nlst2 = []\n\nfor l in lst:\n if ' ' in l:\n lst2.append(repr(l))\n\n", "It seems, you are a bit confused about how string literals work.\nWhen you say s = 'this is a string', you are assigning a string to a variable. What string? Well, a string literal that you hardcoded in your program.\nPython uses the apostrophes to indicate start and end of a string literal - with anything inside being the contents of the string.\nThis is probably one of the first hard problems for beginners in programming: There is a difference between what you write in your programs source code and what actually happens at runtime. You might want to work through a couple of tutorials (I hear \"Dive into Python\" is pretty good).\n", "If you want single quotes to appear in URL you can use\nBASE_URL = 'http://foobar.com?foo=%s'\nvariable = \"'bar'\"\nfinal_url = BASE_URL % (variable)\n\nBut this variant is quite insecure, if variable is coming from somewhere (like user input).\n" ]
[ 7, 7, 3, 1 ]
[]
[]
[ "python", "string" ]
stackoverflow_0000720927_python_string.txt
Q: ctypes bindings for Subversion in windows Is there a binary installer or a faq for the new ctypes bindings for Subversion 1.6 in Windows (32 and 64bit)? What library would you use to make an easy to deploy (both win32 and x64) svn client in python for svn version >= 1.5? A: You have the pysvn module which will allow you to do that: Binary installer based on subversion 1.5.5
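A rough sketch of what a pysvn-based client looks like (the repository URL and local path are placeholders; checkout and update mirror the "svn checkout" and "svn update" commands):

import pysvn

client = pysvn.Client()
client.checkout('http://svn.example.com/repo/trunk', './working-copy')
client.update('./working-copy')   # bring the working copy up to date later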
ctypes bindings for Subversion in windows
Is there a binary installer or a faq for the new ctypes bindings for Subversion 1.6 in Windows (32 and 64bit)? What library would you use to make an easy to deploy (both win32 and x64) svn client in python for svn version >= 1.5?
[ "You have the pysvn module which will allow you to do that:\nBinary installer based on subversion 1.5.5\n" ]
[ 1 ]
[]
[]
[ "ctypes", "python", "svn", "windows" ]
stackoverflow_0000724580_ctypes_python_svn_windows.txt
Q: How to handle unicode of an unknown encoding in Django? I want to save some text to the database using the Django ORM wrappers. The problem is, this text is generated by scraping external websites and many times it seems they are listed with the wrong encoding. I would like to store the raw bytes so I can improve my encoding detection as time goes on without redoing the scrapes. But Django seems to want everything to be stored as unicode. Can I get around that somehow? A: You can store data, encoded into base64, for example. Or try to analyze the HTTP headers from the browser; maybe it is simpler to get the proper encoding from there. A: Create a File with the data. Use a Django models.FileField to hold a reference to the file. No, it does not involve a ton of I/O. If your file is small it adds 2 or 3 I/O's (the directory read, the inode read and the data read.)
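A sketch of the base64 route from the first answer (the model and field names here are hypothetical; the point is that base64 turns arbitrary bytes into ASCII text that Django's unicode fields will accept):

import base64

raw = scraped_bytes                      # str of unknown encoding (placeholder)
page = ScrapedPage(                      # hypothetical model with a TextField
    raw_b64=base64.b64encode(raw))
page.save()

# later, once the detection code improves:
raw = base64.b64decode(page.raw_b64)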
How to handle unicode of an unknown encoding in Django?
I want to save some text to the database using the Django ORM wrappers. The problem is, this text is generated by scraping external websites and many times it seems they are listed with the wrong encoding. I would like to store the raw bytes so I can improve my encoding detection as time goes on without redoing the scrapes. But Django seems to want everything to be stored as unicode. Can I get around that somehow?
[ "You can store data, encoded into base64, for example. Or try to analize HTTP headers from browser, may be it is simplier to get proper encoding from there.\n", "Create a File with the data. Use a Django models.FileField to hold a reference to the file.\nNo it does not involve a ton of I/O. If your file is small it adds 2 or 3 I/O's (the directory read, the iNode read and the data read.) \n" ]
[ 1, 1 ]
[]
[]
[ "django", "python", "unicode" ]
stackoverflow_0000724212_django_python_unicode.txt
Q: What's the best way to propagate information from my wx.Process back to my main thread? I'm trying to subclass wx.Process such that I have a customized process launcher that fires events back to the main thread with data collected from the stdout stream. Is this a good way of doing things? class BuildProcess(wx.Process): def __init__(self, cmd, notify=None): wx.Process.__init__(self, notify) print "Constructing a build process" self.Bind(wx.EVT_IDLE, self.on_idle) self.Redirect() self.cmd = cmd self.pid = None def start(self): print "Starting the process" self.pid = wx.Execute(self.cmd, wx.EXEC_ASYNC, self) print "Started." def on_idle(self, evt): print "doing the idle thing..." stream = self.GetInputStream() if stream.CanRead(): text = stream.read() wx.PostEvent(self, BuildEvent(EVT_BUILD_UPDATE, self, data=text)) print text def OnTerminate(self, *args, **kwargs): wx.Process.OnTerminate(self, *args, **kwargs) print "Terminating" BuildEvent here is a custom subclass of wx.PyEvent. The process is starting, running, and terminating correctly, but my on_idle function is never executing, even though I'm sure I've bound it to the idle event. A: The objective is not to call methods of another process, the objective is to redirect the stdout of another process back to the parent process via "update" events fired periodically as the process executes. One solution is to use wx.Timer to periodically poll the output stream of the process, so that we don't rely on EVT_IDLE to do the work for us (I had trouble getting EVT_IDLE to fire) class BuildProcess(wx.Process): def __init__(self, cmd, notify=None): wx.Process.__init__(self, notify) self.Redirect() self.cmd = cmd self.pid = None self.timer = wx.Timer(self) self.Bind(wx.EVT_TIMER, self.on_timer) def start(self): wx.PostEvent(self, BuildEvent(EVT_BUILD_STARTED, self)) self.pid = wx.Execute(self.cmd, wx.EXEC_ASYNC, self) self.timer.Start(100) def on_timer(self, evt): stream = self.GetInputStream() if stream.CanRead(): text = stream.read() wx.PostEvent(self, BuildEvent(EVT_BUILD_UPDATE, self, data=text)) def OnTerminate(self, *args, **kwargs): print "terminating..." stream = self.GetInputStream() if stream.CanRead(): text = stream.read() wx.PostEvent(self, BuildEvent(EVT_BUILD_UPDATE, self, data=text)) if self.timer: self.timer.Stop() wx.PostEvent(self, BuildEvent(EVT_BUILD_FINISHED, self)) By this method, every 100ms the output stream is read, packaged up, and shipped off as a build event. A: From looking at the wxProcess docs, I don't think it works that way: wxProcess will create a new, separate process running as a child of your current process. It is not possible to run methods connected to a message in such a process. Maybe you can connect your idle event to a function or method in your main thread. Or, maybe the wxThread class is what you really want to use.
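Another common pattern for getting data back to the GUI thread, sketched here under the assumption that the child is launched with subprocess.Popen rather than wx.Execute (append_output is a hypothetical method on the frame; wx.CallAfter queues the call onto the main thread's event loop):

import threading
import subprocess
import wx

def watch_build(frame, cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    for line in iter(proc.stdout.readline, ''):
        wx.CallAfter(frame.append_output, line)   # safe cross-thread handoff

threading.Thread(target=watch_build, args=(frame, 'make all')).start()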
What's the best way to propagate information from my wx.Process back to my main thread?
I'm trying to subclass wx.Process such that I have a customized process launcher that fires events back to the main thread with data collected from the stdout stream. Is this a good way of doing things? class BuildProcess(wx.Process): def __init__(self, cmd, notify=None): wx.Process.__init__(self, notify) print "Constructing a build process" self.Bind(wx.EVT_IDLE, self.on_idle) self.Redirect() self.cmd = cmd self.pid = None def start(self): print "Starting the process" self.pid = wx.Execute(self.cmd, wx.EXEC_ASYNC, self) print "Started." def on_idle(self, evt): print "doing the idle thing..." stream = self.GetInputStream() if stream.CanRead(): text = stream.read() wx.PostEvent(self, BuildEvent(EVT_BUILD_UPDATE, self, data=text)) print text def OnTerminate(self, *args, **kwargs): wx.Process.OnTerminate(self, *args, **kwargs) print "Terminating" BuildEvent here is a custom subclass of wx.PyEvent. The process is starting, running, and terminating correctly, but my on_idle function is never executing, even though I'm sure I've bound it to the idle event.
[ "The objective is not to call methods of another process, the objective is to redirect the stdout of another process back to the parent process via \"update\" events fired periodically as the process executes. \nOne solution is to use wx.Timer to periodically poll the output stream of the process, so that we don't rely on EVT_IDLE to do the work for us (I had trouble getting EVT_IDLE to fire)\nclass BuildProcess(wx.Process):\n\n def __init__(self, cmd, notify=None):\n wx.Process.__init__(self, notify)\n self.Redirect()\n self.cmd = cmd\n self.pid = None\n self.timer = wx.Timer(self)\n self.Bind(wx.EVT_TIMER, self.on_timer)\n\n def start(self):\n wx.PostEvent(self, BuildEvent(EVT_BUILD_STARTED, self))\n self.pid = wx.Execute(self.cmd, wx.EXEC_ASYNC, self)\n self.timer.Start(100)\n\n def on_timer(self, evt):\n stream = self.GetInputStream()\n if stream.CanRead():\n text = stream.read()\n wx.PostEvent(self, BuildEvent(EVT_BUILD_UPDATE, self, data=text))\n\n\n def OnTerminate(self, *args, **kwargs):\n print \"terminating...\"\n stream = self.GetInputStream()\n if stream.CanRead():\n text = stream.read()\n wx.PostEvent(self, BuildEvent(EVT_BUILD_UPDATE, self, data=text))\n if self.timer:\n self.timer.Stop()\n wx.PostEvent(self, BuildEvent(EVT_BUILD_FINISHED, self))\n\nBy this method, every 100ms the output stream is read, packaged up, and shipped off as a build event.\n", "From looking the the wxProcess docs, I don't think it works that way: wxProcess will create a new, seperate process running as a child of you current process. It is not possible to run methods connected to a message in such a process.\nMaybe you can connect your idle event to a function or method in you main thread.\nOr, mayby the wxThread class is what you really want to use.\n" ]
[ 1, 0 ]
[]
[]
[ "events", "multithreading", "process", "python", "wxpython" ]
stackoverflow_0000723984_events_multithreading_process_python_wxpython.txt
Q: Programming Design Help - How to Structure a Sudoku Solver program? I'm trying to create a sudoku solver program in Java (maybe Python). I'm just wondering how I should go about structuring this... Do I create a class and make each box an object of that class (9x9=81 objects)? If yes, how do I control all the objects - in other words, how do I make them all call a certain method in the class? Do I just create functions to calculate and just control all the numbers in there with something like a multi-D array? And actually, even if I could just create multiple functions, how would I control all the objects if I were to make each box an object? Thanks. A: Don't over-engineer it. It's a 2-D array or maybe a Board class that represents a 2-D array at best. Have functions that calculate a given row/column and functions that let you access each square. Additional methods can be used to validate that each sub-3x3 and row/column don't violate the required constraints. A: Well, I would use one class for the sudoku itself, with a 9 x 9 array and all the functionality to add numbers and detect errors in the pattern. Another class will be used to solve the puzzle. A: Do you need to do it in Python or Java? I do a lot of programming in Python, but this can be done much more concisely with an integer program using a language like AMPL or GLPK, which I find more elegant (and generally more efficient) for problems like this. Here it is in AMPL, although I haven't verified how this works: http://taha.ineg.uark.edu/Sudoku.txt A: The simplest way to do it is to represent the board by a 2D 9x9 array. You'll want to have references to each row, column and 3x3 box as a separate object, so storing each cell in a String makes more sense (in Java) than using a primitive. With a String you can keep references to the same object in multiple containers. A: just for fun, here is what is supposed to be the shortest program, in Python, that can solve a sudoku grid: def r(a):i=a.find('0') if i<0:print a [m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)]or r(a[:i]+m+a[i+1:])for m in`14**7*9`]r(raw_input()) hmm ok it's quite cryptic and I don't think it matches your question so I apologize for this noise :) Anyway you'll find some explanation of these 173 characters here. There's also an explanation in French here A: Maybe a design that had a box per square, and another class to represent the puzzle itself that would have a collection of boxes, contain all the rules for box interactions, and control the overall game would be a good design. A: First, it looks like there are two kinds of cells. Known cells; those with a fixed value, no choices. Unknown cells; those with a set of candidate values that reduces down to a single final value. Second, there are several groups of cells. Horizontal rows and Vertical columns which must have one cell of each value. That constraint is used to remove values from various cells in the row or column. 3x3 blocks which must have one cell of each value. That constraint is used to remove values from various cells in the block. Finally, there's the overall grid. This has several complementary views. It's 81 cells. The cells are also collected into a 3x3 grid of 3x3 blocks. The cells are also collected into 9 columns. The cells are also collected into 9 rows. And you have a solver strategy object. Each Unknown cell is set to having set( range(1,10) ) as the candidate values. For each row, column and 3x3 block (27 different collections): a. 
For each cell: If it has a definite value (Known cells and Unknown cells implement this differently): remove that value from all other cells in this grouping. The above must be iterated until no changes are found. At this point, you either have it solved (all cells report a definite value), or you have some cells with multiple values. Now you have to engage in a sophisticated back-tracking solver to find a combination of the remaining values that "works". A: A class containing a 1d array of 81 ints (0 is empty) is sufficient for the rule class. The rule class enforces the rules (no duplicate numbers in each row, column or 3x3 square). It also has an array of 81 bools so it knows which cells are fixed and which need to be solved. The public interface to this class has all the methods you need to manipulate the board: int getCell(int x, int y); bool setCell(int x, int y, int value); bool clearCell(int x, int y); int[] getRow(int x); int[] getCol(int y); int[] getSubBox(int x, int y); void resetPuzzle(); void loadPuzzle(InputStream stream); Then your solver uses the public interface to this class to solve the puzzle. The class structure of the solver is, I presume, the purpose of writing the 5 millionth Sudoku solver. If you are looking for hints, I'll edit this post later.
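A sketch of the "don't over-engineer it" Board from the first answer: a flat list of 81 ints with row/column/box accessors and the constraint check (method names are my own):

class Board(object):
    def __init__(self, cells):
        self.cells = list(cells)           # 81 ints, 0 means empty

    def row(self, r):
        return self.cells[9 * r:9 * r + 9]

    def col(self, c):
        return self.cells[c::9]

    def box(self, r, c):
        r, c = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 block
        return [self.cells[9 * (r + i) + c + j]
                for i in range(3) for j in range(3)]

    def can_place(self, r, c, v):
        return (v not in self.row(r) and v not in self.col(c)
                and v not in self.box(r, c))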
Programming Design Help - How to Structure a Sudoku Solver program?
I'm trying to create a sudoku solver program in Java (maybe Python). I'm just wondering how I should go about structuring this... Do I create a class and make each box an object of that class (9x9=81 objects)? If yes, how do I control all the objects - in other words, how do I make them all call a certain method in the class? Do I just create functions to calculate and just control all the numbers in there with something like a multi-D array? And actually, even if I could just create multiple functions, how would I control all the objects if I were to make each box an object? Thanks.
[ "Don't over-engineer it. It's a 2-D array or maybe a Board class that represents a 2-D array at best. Have functions that calculate a given row/column and functions that let you access each square. Additional methods can be used validate that each sub-3x3 and row/column don't violate the required constraints.\n", "Well, I would use one class for the sudoku itself, with a 9 x 9 array and all the functionality to add numbers and detect errors in the pattern.\nAnother class will be used to solve the puzzle.\n", "Do you need to do it in Python or Java? I do a lot of programming in Python, but this can be done much more concisely with integer program using a language like AMPL or GLPK, which I find more elegant (and generally more efficient) for problems like this.\nHere it is in AMPL, although I haven't verified how this works:\nhttp://taha.ineg.uark.edu/Sudoku.txt\n", "The simplest way to do it is to represent the board by a 2D 9x9 array. You'll want to have references to each row, column and 3x3 box as a separate object, so storing each cell in a String makes more sense (in Java) than using a primitive. With a String you can keep references to the same object in multiple containers.\n", "just for fun, here is what is supposed to be the shortest program, in python, that can solve a sudoku grid:\ndef r(a):i=a.find('0') if i<0:print a [m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)]or r(a[:i]+m+a[i+1:])for m in`14**7*9`]r(raw_input())\n\nhmm ok it's quite cryptic and I don't think it matchs your question so I apologize for this noise :) \nAnyway you'll find some explanation of these 173 characters here. \nThere's also an explanation in french here\n", "Maybe a design that had a box per square, and another class to represent the puzzle itself that would have a collection of boxes, contain all the rules for box interactions, and control the overall game would be a good design.\n", "First, it looks like there are two kinds of cells.\n\nKnown calls; those with a fixed value, no choices.\nUnknown cells; those with a set of candidate values that reduces down to a single final value.\n\nSecond, there are several groups of cells.\n\nHorizontal rows and Vertical columns which must have one cell of each value. That constraint is used to remove values from various cells in the row or column.\n3x3 blocks which must have one cell of each value. That constraint is used to remove values from various cells in the block.\n\nFinally, there's the overall grid. This has several complementary views.\n\nIt's 81 cells.\nThe cells are also collected into a 3x3 grid of 3x3 blocks.\nThe cells are also collected into 9 columns.\nThe cells are also collected into 9 rows.\n\nAnd you have a solver strategy object. \n\nEach Unknown cell it set to having set( range(1,10) ) as the candidate values.\nFor each row, column and 3x3 block (27 different collections):\na. For each cell:\n\nIf it has definite value (Known cells and Unknown cells implement this differently): remove that value from all other cells in this grouping.\n\n\nThe above must be iterated until no changes are found.\nAt this point, you either have it solved (all cells report a definite value), or, you have some cells with multiple values. Now you have to engage in a sophisticated back-tracking solver to find a combination of the remaining values that \"works\".\n", "A class containing a 1d array of 81 ints (0 is empty) is sufficient for the rule class. 
The rule class enforces the rules (no duplicate numbers in each row, column or 3x3 square). It also has an array of 81 bools so it knows which cells are fixed and which need to be solved. The public interface to this class has all the methods you need to manipulate the board:\nint getCell(int x, int y);\nbool setCell(int x, int y, int value);\nbool clearCell(int x, int y);\nint[] getRow(int x);\nint[] getCol(int y);\nint[] getSubBox(int x, int y);\nvoid resetPuzzle();\nvoid loadPuzzle(InputStream stream);\n\nThen your solver uses the public interface to this class to solve the puzzle. The class structure of the solver I presume is the purpose of writing the 5 millionth Sudoku solver. If you are looking for hints, I'll edit this post later.\n" ]
[ 12, 2, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "data_structures", "python", "sudoku" ]
stackoverflow_0000431996_data_structures_python_sudoku.txt
Q: String manipulation in Python I am converting some code from another language to python. That code reads a rather large file into a string and then manipulates it by array indexing like: str[i] = 'e' This does not work directly in python due to the strings being immutable. What is the preferred way of doing this in python ? I have seen the string.replace() function, but it returns a copy of the string which does not sound very optimal as the string in this case is an entire file. A: Assuming you're not using a variable-length text encoding such as UTF-8, you can use array.array: >>> import array >>> a = array.array('c', 'foo') >>> a[1] = 'e' >>> a array('c', 'feo') >>> a.tostring() 'feo' But since you're dealing with the contents of a file, mmap should be more efficient: >>> f = open('foo', 'r+') >>> import mmap >>> m = mmap.mmap(f.fileno(), 0) >>> m[:] 'foo\n' >>> m[1] = 'e' >>> m[:] 'feo\n' >>> exit() % cat foo feo Here's a quick benchmarking script (you'll need to replace dd with something else for non-Unix OSes): import os, time, array, mmap def modify(s): for i in xrange(len(s)): s[i] = 'q' def measure(func): start = time.time() func(open('foo', 'r+')) print func.func_name, time.time() - start def do_split(f): l = list(f.read()) modify(l) return ''.join(l) def do_array(f): a = array.array('c', f.read()) modify(a) return a.tostring() def do_mmap(f): m = mmap.mmap(f.fileno(), 0) modify(m) os.system('dd if=/dev/random of=foo bs=1m count=5') measure(do_mmap) measure(do_array) measure(do_split) Output I got on my several-year-old laptop matches my intuition: 5+0 records in 5+0 records out 5242880 bytes transferred in 0.710966 secs (7374304 bytes/sec) do_mmap 1.00865888596 do_array 1.09792494774 do_split 1.20163106918 So mmap is slightly faster but none of the suggested solutions is particularly different. If you're seeing a huge difference, try using cProfile to see what's taking the time. A: l = list(str) l[i] = 'e' str = ''.join(l) A: Others have answered the string manipulation part of your question, but I think you ought to think about whether it would be better to parse the file and modify the data structure the text represents rather than manipulating the text directly. A: Try: sl = list(s) sl[i] = 'e' s = ''.join(sl)
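One more option worth noting (distinct from the answers above, and only available from Python 2.6 onwards): bytearray gives a mutable byte sequence without the array or mmap machinery:

s = open('foo').read()
b = bytearray(s)       # mutable copy of the file's bytes
b[1] = ord('e')        # items are ints, so convert the character
s = str(b)             # back to an ordinary string when done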
String manipulation in Python
I am converting some code from another language to python. That code reads a rather large file into a string and then manipulates it by array indexing like: str[i] = 'e' This does not work directly in python due to the strings being immutable. What is the preferred way of doing this in python ? I have seen the string.replace() function, but it returns a copy of the string which does not sound very optimal as the string in this case is an entire file.
[ "Assuming you're not using a variable-length text encoding such as UTF-8, you can use array.array:\n>>> import array\n>>> a = array.array('c', 'foo')\n>>> a[1] = 'e'\n>>> a\narray('c', 'feo')\n>>> a.tostring()\n'feo'\n\nBut since you're dealing with the contents of a file, mmap should be more efficient:\n>>> f = open('foo', 'r+')\n>>> import mmap\n>>> m = mmap.mmap(f.fileno(), 0)\n>>> m[:]\n'foo\\n'\n>>> m[1] = 'e'\n>>> m[:]\n'feo\\n'\n>>> exit()\n% cat foo\nfeo\n\nHere's a quick benchmarking script (you'll need to replace dd with something else for non-Unix OSes):\nimport os, time, array, mmap\n\ndef modify(s):\n for i in xrange(len(s)):\n s[i] = 'q'\n\ndef measure(func):\n start = time.time()\n func(open('foo', 'r+'))\n print func.func_name, time.time() - start\n\ndef do_split(f):\n l = list(f.read())\n modify(l)\n return ''.join(l)\n\ndef do_array(f):\n a = array.array('c', f.read())\n modify(a)\n return a.tostring()\n\ndef do_mmap(f):\n m = mmap.mmap(f.fileno(), 0)\n modify(m)\n\nos.system('dd if=/dev/random of=foo bs=1m count=5')\n\nmeasure(do_mmap)\nmeasure(do_array)\nmeasure(do_split)\n\nOutput I got on my several-year-old laptop matches my intuition:\n5+0 records in\n5+0 records out\n5242880 bytes transferred in 0.710966 secs (7374304 bytes/sec)\ndo_mmap 1.00865888596\ndo_array 1.09792494774\ndo_split 1.20163106918\n\nSo mmap is slightly faster but none of the suggested solutions is particularly different. If you're seeing a huge difference, try using cProfile to see what's taking the time.\n", "l = list(str)\nl[i] = 'e'\nstr = ''.join(l)\n\n", "Others have answered the string manipulation part of your question, but I think you ought to think about whether it would be better to parse the file and modify the data structure the text represents rather than manipulating the text directly.\n", "Try:\nsl = list(s)\nsl[i] = 'e'\ns = ''.join(sl)\n\n" ]
[ 12, 9, 1, 0 ]
[]
[]
[ "python", "replace", "string" ]
stackoverflow_0000725364_python_replace_string.txt
Q: Dynamic use of a class method defined in a Cython extension module I would like to use the C implementation of a class method (generated from Cython) if it is present, or use its Python equivalent if the C extension is not present. I first tried this: class A(object): try: import c_ext method = c_ext.optimized_method except ImportError: def method(self): return "foo" Where optimized_method is a function defined in a Cython module: def optimized_method(self): return "fasterfoo" But this doesn't work: >>> A().method() exceptions.TypeError: optimized_method() takes exactly one argument (0 given) The only way I found to make this work is: class A(object): def method(self): try: import c_ext return c_ext.optimized_method(self) except ImportError: pass return "foo" But checking for the module's presence at each function call seems quite suboptimal... Why isn't my first approach working? [edit]: added Cython module's contents A: Ok I just found the answer... The problem comes from the way Cython wraps the functions it exports: every method is unbound regardless of where it is referenced. The solution is to explicitly declare a bound method: class A(object): def method(self): return "foo" try: import c_ext import types A.method = types.MethodType(c_ext.optimized_method, None, A) except ImportError: pass
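Putting the fix together with the intended pure-Python fallback (the three-argument types.MethodType call is the Python 2 signature; this mirrors the accepted answer, keeping the fallback when the import fails):

class A(object):
    def method(self):                 # pure-Python fallback
        return "foo"

try:
    import types
    import c_ext
    A.method = types.MethodType(c_ext.optimized_method, None, A)
except ImportError:
    pass                              # keep the Python version

print A().method()   # "fasterfoo" when c_ext is importable, "foo" otherwise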
Dynamic use of a class method defined in a Cython extension module
I would like to use the C implementation of a class method (generated from Cython) if it is present, or use its Python equivalent if the C extension is not present. I first tried this: class A(object): try: import c_ext method = c_ext.optimized_method except ImportError: def method(self): return "foo" Where optimized_method is a function defined in a Cython module: def optimized_method(self): return "fasterfoo" But this doesn't work: >>> A().method() exceptions.TypeError: optimized_method() takes exactly one argument (0 given) The only way I found to make this work is: class A(object): def method(self): try: import c_ext return c_ext.optimized_method(self) except ImportError: pass return "foo" But checking for the module's presence at each function call seems quite suboptimal... Why isn't my first approach working? [edit]: added Cython module's contents
[ "Ok I just found the answer...\nThe problem comes from the way Cython wraps the functions it exports: every method is unbound regardless from where it is referenced.\nThe solution is to explicitly declare a bound method:\nclass A(object):\n def method(self):\n return \"foo\"\n\ntry:\n import c_ext\n import types\n A.method = types.MethodType(c_ext.optimized_method, None, A)\nexcept ImportError:\n pass\n\n" ]
[ 4 ]
[]
[]
[ "cython", "methods", "python" ]
stackoverflow_0000725777_cython_methods_python.txt
Q: If monkey patching is permitted in both Ruby and Python, why is it more controversial in Ruby? In many discussions I have heard about Ruby in which people have expressed their reservations about the language, the issue of monkey patching comes up as one of their primary concerns. However, I rarely hear the same arguments made in the context of Python although it is also permitted in the Python language. Why this distinction? Does Python include different types of safeguards to minimize the risks of this feature? A: It's a technique less practised in Python, in part because "core" classes in Python (those implemented in C) are not really modifiable. In Ruby, on the other hand, because of the way it's implemented internally (not better, just different) just about anything can be modified dynamically. Philosophically, it's something that tends to be frowned on within the Python community, distinctly less so in the Ruby world. I don't know why you assert that it's more controversial (can you link to an authoritative reference?) - my experience has been that monkey-patching is an accepted technique, albeit one where the user should be aware of possible consequences. A: The languages might permit it, but neither community condones the practice. Monkeypatching isn't condoned in either language, but you hear about it more often in Ruby because the form of open class it uses makes it very, very easy to monkeypatch a class and because of this, it's more acceptable in the Ruby community, but still frowned upon. Monkeypatching simply isn't as prevalent or as easy in Python, which is why you won't hear the same arguments against it in that community. Python does nothing that Ruby doesn't do to prevent the practice. The reason you hear/read about it more often in Ruby is that this in Ruby: class MyClass def foo puts "foo" end end class MyClass def bar puts "bar" end end will give you a class that contains two methods, foo and bar, whereas this in Python: class MyClass: def foo(self): print "foo" class MyClass: def bar(self): print "bar" will leave you with a class that only contains the method bar, as redefinition of a class clobbers the previous definition completely. To monkeypatch in Python, you actually have to write this: class MyClass: def foo(self): print "foo" def bar(self): print "bar" MyClass.bar = bar which is harder than the Ruby version. That alone makes Ruby code much easier to monkeypatch than Python code. A: As a Python programmer who has had a taste of Ruby (and likes it), I think there is somewhat of an ironic parallel to when Python was beginning to become popular. C and Java programmers would ‘bash’ Python, stating that it wasn't a real language, and that the dynamic nature of its types would be dangerous, and allow people to create ‘bad’ code. As Python became more popular, and the advantages of its rapid development time became apparent, not to mention the less verbose syntax: // Java Person p = new Person(); # Python p = Person() we began to see some more dynamic features appear in later versions of Java. Autoboxing and -unboxing make it less troublesome to deal with primitives, and Generics allow us to code once and apply it to many types. It was with some amusement that I saw one of the key flexible features of Ruby – Monkey Patching, being touted as dangerous by the Python crowd. Having started teaching Ruby to students this year, I think that being able to ‘fix’ the implementation of an existing class, even one that is part of the system, is very powerful. 
Sure, you can screw up badly and your program can crash. I can segfault in C pretty easily, too. And Java apps can die flaming death. The truth is, I see Monkey Patching as the next step in dynamic and meta-programming. Funny, since it has been around since Smalltalk. A: "Does Python include different types of safeguards to minimize the risks of this feature?" Yes. The community refuses to do it. The safeguard is entirely social. A: Actually in Python it's a bit harder to modify basic types. For example, imagine that you redefine integer. Ruby: class Fixnum def *(n) 5 end end Now 2*2 yields 5. Python: >>> class int(int): def __mul__(self, x): return 5 >>> 2*2 4 >>> int(2)*int(2) 5 A: In Python, any literal ("", {}, 1.0, etc) creates an instance of the standard class, even if you tried to monkeypatch it and redefined the corresponding class in your namespace. It just won't work how you intended: class str(): # define your custom string type ... a = "foo" # still a real Python string a = str("foo") # only this uses your custom class A: I think that monkey patching should only be used as a last resort. Normally Python programmers know how a class or a method behaves. They know that class xxx is doing things in a certain way. When you monkey patch a class or a method, you are changing its behavior. Other Python programmers using this class can be very surprised if that class is behaving differently. The normal way of doing things is subclassing. That way, other programmers know that they are using a different object. They can use the original class or the subclass if they choose to. A: If you want to do some monkey patching in Python, it is relatively easy, as long as you are not modifying a built-in type (int, float, str). class SomeClass: def foo(self): print "foo" def tempfunc(self): print "bar" SomeClass.bar = tempfunc del tempfunc This will add the bar method to SomeClass and even existing instances of that class can use that injected method.
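The seventh answer's "normal way of doing things is subclassing" in code form: a new name rather than a patched built-in, so other programmers can see they are getting different behavior (the class here is invented for illustration):

class LoudStr(str):
    def shout(self):
        return self.upper() + "!"

s = LoudStr("hello")
print s.shout()            # HELLO!
print isinstance(s, str)   # True: still usable anywhere a str is expected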
If monkey patching is permitted in both Ruby and Python, why is it more controversial in Ruby?
In many discussions I have heard about Ruby in which people have expressed their reservations about the language, the issue of monkey patching comes up as one of their primary concerns. However, I rarely hear the same arguments made in the context of Python although it is also permitted in the Python language. Why this distinction? Does Python include different types of safeguards to minimize the risks of this feature?
[ "It's a technique less practised in Python, in part because \"core\" classes in Python (those implemented in C) are not really modifiable. In Ruby, on the other hand, because of the way it's implemented internally (not better, just different) just about anything can be modified dynamically.\nPhilosophically, it's something that tends to be frowned on within the Python community, distinctly less so in the Ruby world. I don't know why you assert that it's more controversial (can you link to an authoritative reference?) - my experience has been that monkey-patching is an accepted technique if one where the user should be aware of possible consequences.\n", "The languages might permit it, but neither community condones the practice. Monkeypatching isn't condoned in either language, but you hear about it more often in Ruby because the form of open class it uses makes it very, very easy to monkeypatch a class and because of this, it's more acceptable in the Ruby community, but still frowned upon. Monkeypatching simply isn't as prevalent or as easy in Python, which is why you won't hear the same arguments against it in that community. Python does nothing that Ruby doesn't do to prevent the practice.\nThe reason you hear/read about it more often in Ruby is that this in Ruby:\nclass MyClass\n def foo\n puts \"foo\"\n end\nend\n\nclass MyClass\n def bar\n puts \"bar\"\n end\nend\n\nwill give you a class that contains two methods, foo and bar, whereas this in Python:\nclass MyClass:\n def foo(self):\n print \"foo\"\n\nclass MyClass:\n def bar(self):\n print \"bar\"\n\nwill leave you with a class that only contains the method bar, as redefinition of a class clobbers the previous definition completely. To monkeypatch in Python, you actually have to write this:\nclass MyClass:\n def foo(self):\n print \"foo\"\n\ndef bar(self):\n print \"bar\"\nMyClass.bar = bar\n\nwhich is harder than the Ruby version. That alone makes Ruby code much easier to monkeypatch than Python code.\n", "As a Python programmer who has had a taste of Ruby (and likes it), I think there is somewhat of an ironic parallel to when Python was beginning to become popular.\nC and Java programmers would ‘bash’ Python, stating that it wasn't a real language, and that the dynamic nature of its types would be dangerous, and allow people to create ‘bad’ code. As Python became more popular, and the advantages of its rapid development time became apparent, not to mention the less verbose syntax:\n// Java\nPerson p = new Person();\n\n# Python\np = Person()\n\nwe began to see some more dynamic features appear in later versions of Java. Autoboxing and -unboxing make it less troublesome to deal with primitives, and Generics allow us to code once and apply it to many types.\nIt was with some amusement that I saw one of the key flexible features of Ruby – Monkey Patching, being touted as dangerous by the Python crowd. Having started teaching Ruby to students this year, I think that being able to ‘fix’ the implementation of an existing class, even one that is part of the system, is very powerful.\nSure, you can screw up badly and your program can crash. I can segfault in C pretty easily, too. And Java apps can die flaming death.\nThe truth is, I see Monkey Patching as the next step in dynamic and meta-programming. Funny, since it has been around since Smalltalk.\n", "\"Does Python include different types of safeguards to minimize the risks of this feature?\" \nYes. The community refuses to do it. 
The safeguard is entirely social.\n", "Actually in Python it's a bit harder to modify basic types. \nFor example imagine, that you redefine integer.\nRuby:\nclass Fixnum \n def *(n)\n 5 \n end \nend\n\nNow 2*2 yields 5.\nPython:\n>>> class int(int):\n def __mul__(self, x):\n return 5\n\n\n>>> 2*2\n4\n>>> int(2)*int(2)\n5\n\n", "In Python, any literal (\"\", {}, 1.0, etc) creates an instance of the standard class, even if you tried to monkeypatch it and redefined the corresponding class in your namespace.\nIt just won't work how you intended:\nclass str():\n # define your custom string type\n ...\n\na = \"foo\" # still a real Python string\na = str(\"foo\") # only this uses your custom class\n\n", "I think that monkey patching should only be used as the last solution.\nNormally Python programmers know how a class or a method behave. They know that class xxx is doing things in a certain way.\nWhen you monkey patch a class or a method, you are changing it's behavior. Other Python programmers using this class can be very surprised if that class is behaving differently.\nThe normal way of doing things is subclassing. That way, other programmers know that they are using a different object. They can use the original class or the subclass if they choose to.\n", "If you want to do some monkey patching in Python, it is relatively easy, as long as you are not modifying a built-in type (int, float, str).\nclass SomeClass:\n def foo(self):\n print \"foo\"\n\ndef tempfunc(self):\n print \"bar\"\nSomeClass.bar = tempfunc\ndel tempfunc\n\nThis will add the bar method to SomeClass and even existing instances of that class can use that injected method.\n" ]
[ 21, 16, 16, 13, 3, 3, 2, 1 ]
[]
[]
[ "language_features", "monkeypatching", "python", "ruby" ]
stackoverflow_0000717506_language_features_monkeypatching_python_ruby.txt
Q: Finding all *rendered* images in an HTML file I need a way to find only rendered IMG tags in an HTML snippet. So, I can't just regex the HTML snippet to find all IMG tags because I'd also get IMG tags that are shown as text in the HTML (not rendered). I'm using Python on AppEngine. Any ideas? Thanks, Ivan A: The source code for a rendered img tag is something like this: <img src="img.jpg"></img> If the img tag is displayed as text (not rendered), the HTML code would be like this: &lt;img src=&quot;styles/BWLogo.jpg&quot;&gt;&lt;/img&gt; &lt; is the "<" character, &gt; is the ">" character To match rendered img tags only, you can use a regex that matches img tags formed by < and >, not &lt; and &gt; Img tags in comments also need to be skipped, by ignoring characters between "<!--" and "-->" A: Use BeautifulSoup. It is an HTML/XML parser for Python that provides simple, idiomatic ways of navigating, searching, and modifying the parse tree. It probably won't be fooled by fake img tags. A: Sounds like a job for BeautifulSoup: >>> from BeautifulSoup import BeautifulSoup >>> doc = """ ... <html> ... <body> ... <img src="test.jpg"> ... &lt;img src="yay.jpg"&gt; ... <!-- <img src="ohnoes.jpg"> --> ... <img src="hurrah.jpg"> ... </body> ... </html> ... """ >>> soup = BeautifulSoup(doc) >>> soup.findAll('img') [<img src="test.jpg" />, <img src="hurrah.jpg" />] As you can see, BeautifulSoup is smart enough to ignore comments and displayed HTML. EDIT: I'm not sure what you mean by the RSS feed escaping ALL images, though. I wouldn't expect BeautifulSoup to figure out which are meant to be shown if they are all escaped. Can you clarify? A: As image tags might be in between some <pre> or <xmp> tag you probably have to walk through the DOM (= convert the HTML to an XML/DOM tree and search through it) and find all the <img> nodes. There is an xml.dom module in the Python standard library: docs.python.org You could do that on the client as well and report it back via ajax (this would mean more load on the server though).
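Building on the BeautifulSoup answers: once findAll('img') has filtered out the escaped and commented-out tags, pulling the image URLs is one more line (BeautifulSoup 3 API, matching the import used above; html is a placeholder for the snippet to scan):

from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(html)
srcs = [img.get('src') for img in soup.findAll('img')]
print srcs   # e.g. ['test.jpg', 'hurrah.jpg']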
Finding all *rendered* images in an HTML file
I need a way to find only rendered IMG tags in an HTML snippet. So, I can't just regex the HTML snippet to find all IMG tags because I'd also get IMG tags that are shown as text in the HTML (not rendered). I'm using Python on AppEngine. Any ideas? Thanks, Ivan
[ "The source code for rendered img tag are something like this:\n<img src=\"img.jpg\"></img>\n\nIf the img tag is displayed as text(not rendered), the html code would be like this:\n &lt;img src=&quot;styles/BWLogo.jpg&quot;&gt;&lt;/img&gt;\n\n&lt; is \"<\" character, &gt; is \">\" character\nTo match rendered img tag only,you can use regex to match img tag formed by < and >, not &lt; and &gt;\nImg tags in comments also need to be ignored by ingnoring characters between \"<!--\" and \"-->\"\n", "Use BeautifulSoup. It is an HTML/XML parser for Python that provides simple, idiomatic ways of navigating, searching, and modifying the parse tree. It probably won't be mistaken by fake img tags.\n", "Sounds like a job for BeautifulSoup:\n>>> from BeautifulSoup import BeautifulSoup\n>>> doc = \"\"\"\n... <html>\n... <body>\n... <img src=\"test.jpg\">\n... &lt;img src=\"yay.jpg\"&gt;\n... <!-- <img src=\"ohnoes.jpg\"> -->\n... <img src=\"hurrah.jpg\">\n... </body>\n... </html>\n... \"\"\"\n>>> soup = BeautifulSoup(doc)\n>>> soup.findAll('img')\n[<img src=\"test.jpg\" />, <img src=\"hurrah.jpg\" />]\n\nAs you can see, BeautifulSoup is smart enough to ignore comments and displayed HTML.\nEDIT: I'm not sure what you mean by the RSS feed escaping ALL images, though. I wouldn't expect BeautifulSoup to figure out which are meant to be shown if they are all escaped. Can you clarify?\n", "As image tags might be in between some <pre> or <xmp> tag you probably have to walk through the dom (= convert the html to a xml/dom tree and search through it) and find all the <img> nodes. There is a xml.dom class in the python standard library: docs.python.org\nYou could do that on the client aswell and report it back via ajax (this would mean more load on the server though).\n" ]
[ 2, 2, 2, 0 ]
[]
[]
[ "html", "parsing", "python", "regex" ]
stackoverflow_0000725756_html_parsing_python_regex.txt
Q: Transferring Python modules Basically for this case, I am using the _winreg module in Python v2.6 but the Python package I have to use is v2.5. When I try to use: _winreg.ExpandEnvironmentStrings it complains about not having this attribute in this module. I have successfully transferred other modules like comtypes from the site-packages folder. But the problem is I don't know which files to copy/replace. Is there a way to do this? Also is site-packages the main place for 3rd-party modules? A: It's a compiled C extension, not pure Python, so you generally can't simply copy the DLL/so file across from one installation to another: the Python binary interface changes on 0.1 version number updates (but not 0.0.1 updates). In any case, _winreg seems to be statically built into Python.exe on the current official Windows builds rather than being dropped into the ‘DLLs’ folder. _winreg.ExpandEnvironmentStrings is not available pre-2.6, but you could usefully fall back to os.path.expandvars, which does more or less the same thing. (It also supports $VAR variables, which under Windows you might not want, but this may not be a practical problem.) You're right: %-syntax for expandvars under Windows was only introduced in 2.6, how useless. Looks like you'll need the below. If the worst comes to the worst it's fairly simple to write by hand: import re, os def expandEnvironmentStrings(s): r= re.compile('%([^%]+)%') return r.sub(lambda m: os.environ.get(m.group(1), m.group(0)), s) Though either way there is always Python 2.x's inability to read Unicode envvars to worry about.
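A quick check of the hand-rolled expander from the answer, showing both the substitution and the deliberate pass-through of unknown names (the variable name is made up):

import os
os.environ['APPDIR'] = r'C:\Apps'

print expandEnvironmentStrings(r'%APPDIR%\config.ini')   # C:\Apps\config.ini
print expandEnvironmentStrings('%NOSUCHVAR%')            # left as-is, like the Win32 call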
Transferring Python modules
Basically for this case, I am using the _winreg module in Python v2.6 but the Python package I have to use is v2.5. When I try to use: _winreg.ExpandEnvironmentStrings it complains about not having this attribute in this module. I have successfully transferred other modules like comtypes from the site-packages folder. But the problem is I don't know which files to copy/replace. Is there a way to do this? Also is site-packages the main place for 3rd-party modules?
[ "It's a compiled C extension, not pure Python, so you generally can't simply copy the DLL/so file across from one installation to another: the Python binary interface changes on 0.1 version number updates (but not 0.0.1 updates). In any case, _winreg seems to be statically build into Python.exe on the current official Windows builds rather than being dropped into the ‘DLLs’ folder.\n_winreg.ExpandEnvironmentStrings is not available pre-2.6, but you could usefully fall back to os.path.expandvars, which does more or less the same thing. (It also supports $VAR variables, which under Windows you might not want, but this may not be a practical problem.) You're right: %-syntax for expandvars under Windows was only introduced in 2.6, how useless. Looks like you'll need the below.\nIf the worst comes to the worst it's fairly simple to write by hand:\nimport re, os\n\ndef expandEnvironmentStrings(s):\n r= re.compile('%([^%]+)%')\n return r.sub(lambda m: os.environ.get(m.group(1), m.group(0)), s)\n\nThough either way there is always Python 2.x's inability to read Unicode envvars to worry about.\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0000727791_python.txt
Q: Is there a way to overload += in python? I know about the __add__ method to override plus, but when I use that to override +=, I end up with one of two problems: (1) if __add__ mutates self, then z = x + y will mutate x when I don't really want x to be mutated there. (2) if __add__ returns a new object, then tmp = z z += x z += y tmp += w return z will return something without w since z and tmp point to different objects after z += x is executed. I can make some sort of .append() method, but I'd prefer to overload += if it is possible. A: Yes. Just override the object's __iadd__ method, which takes the same parameters as add. You can find more information here.
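Since the accepted answer is terse, here is a sketch of the two hooks side by side; __iadd__ mutates in place and returns self, so after z += x both tmp and z still name the same object (the class is invented for illustration):

class Bag(object):
    def __init__(self, items=()):
        self.items = list(items)

    def __add__(self, other):        # z = x + y -> brand-new object, x untouched
        return Bag(self.items + other.items)

    def __iadd__(self, other):       # z += x -> mutate self, return self
        self.items.extend(other.items)
        return self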
Is there a way to overload += in python?
I know about the __add__ method to override plus, but when I use that to override +=, I end up with one of two problems: (1) if __add__ mutates self, then z = x + y will mutate x when I don't really want x to be mutated there. (2) if __add__ returns a new object, then tmp = z z += x z += y tmp += w return z will return something without w since z and tmp point to different objects after z += x is executed. I can make some sort of .append() method, but I'd prefer to overload += if it is possible.
[ "Yes. Just override the object's __iadd__ method, which takes the same parameters as add. You can find more information here.\n" ]
[ 102 ]
[]
[]
[ "operator_overloading", "python" ]
stackoverflow_0000728361_operator_overloading_python.txt
Q: Python/urllib suddenly stops working properly I'm writing a little tool to monitor class openings at my school. I wrote a python script that will fetch the current availablity of classes from each department every few minutes. The script was functioning properly until the uni's site started returning this: SIS Server is not available at this time Uni must have blocked my server right? Well, not really because that is the output I get when I goto the URL directly from other PCs. But if I go through the intermediary form on uni's site that does a POST, I don't get that message. The URL I'm requesting is https://s4.its.unc.edu/SISMisc/SISTalkerServlet This is what my python code looks like: data = urllib.urlencode({"progname" : "SIR033WA", "SUBJ" : "busi", "CRS" : "", "TERM" : "20099"}) f = urllib.urlopen("https://s4.its.unc.edu/SISMisc/SISTalkerServlet", data) s = f.read() print (s) I am really stumped! It seems like python isn't sending a proper request. At first I thought it wasn't sending a proper post data but I changed the URL to my localbox and the post data apache recieved seemed just fine. If you'd like to see the system actually functioning, goto https://s4.its.unc.edu/SISMisc/browser/student_pass_z.jsp and click on the "Enter as Guest" button and then look for "Course Availability". (Now you know why I'm building this!) Weirdest thing is this was working until 11am! I've had the same error before but it only lasted for few minutes. This tells me it is more of a problem somewhere than any blocking of my server by the uni. update Upon suggestion, I tried to play with a more legit referer/user-agent. Same result. This is what I tried: import httplib import urllib headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US;rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4',"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain","Referrer": "https://s4.its.unc.edu/SISMisc/SISTalkerServlet"} data = urllib.urlencode({"progname" : "SIR033WA", "SUBJ" : "busi", "CRS" : "", "TERM" : "20099"}) c = httplib.HTTPSConnection("s4.its.unc.edu",443) c.request("POST", "/SISMisc/SISTalkerServlet",data,headers) r = c.getresponse() print r.read() A: This post doesn't attempt to fix your code, but suggest a debugging tool. Once upon a time I was coding a program to fill out online forms for me. To learn exactly how my browser was handling the POSTs, and cookies, and whatnot, I installed WireShark ( http://www.wireshark.org/ ), a network sniffer. This application allowed me to view, chunk by chunk, the data that was being sent and received on the IP and hardware level. You might consider trying out a similar program and comparing the network flow. This might highlight differences between what your browser is doing and your script is doing. A: After seeing multiple requests from an odd non-browser User-Agent string, it's possible that they are blocking users not being referred to from the site. For example, PHP has a feature called $_SERVER['HTTP_REFERRER'] IIRC, which will check the page which reffered the user to the current one. Since your program is not including one in the User-Agent string (you are trying to directly access it) it is very possible they are preventing you access based upon that. Try adding a referrer into the headers of your http request and see how it goes. (preferably a page which links to the one you're trying to access) http://whatsmyuseragent.com/ can assist you in building your spoofed user agent. you then build headers like so... 
headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"} and then send them as an additional parameter with your HTTPConnection request... conn.request("POST", "/page/on/site", params, headers) see the python doc on httplib for further reference and examples.
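For reference, a hedged sketch of the same POST using urllib2 instead, which keeps the spoofed headers in one place (the header values are illustrative, and note the actual HTTP header is spelled "Referer"):
import urllib, urllib2
data = urllib.urlencode({"progname": "SIR033WA", "SUBJ": "busi", "CRS": "", "TERM": "20099"})
req = urllib2.Request("https://s4.its.unc.edu/SISMisc/SISTalkerServlet", data)
req.add_header("User-Agent", "Mozilla/5.0")  # pretend to be a browser
req.add_header("Referer", "https://s4.its.unc.edu/SISMisc/browser/student_pass_z.jsp")  # the form page that normally does the POST
print urllib2.urlopen(req).read()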
Python/urllib suddenly stops working properly
I'm writing a little tool to monitor class openings at my school. I wrote a python script that will fetch the current availability of classes from each department every few minutes. The script was functioning properly until the uni's site started returning this: SIS Server is not available at this time Uni must have blocked my server right? Well, not really, because that is the output I get when I go to the URL directly from other PCs. But if I go through the intermediary form on uni's site that does a POST, I don't get that message. The URL I'm requesting is https://s4.its.unc.edu/SISMisc/SISTalkerServlet This is what my python code looks like: data = urllib.urlencode({"progname" : "SIR033WA", "SUBJ" : "busi", "CRS" : "", "TERM" : "20099"}) f = urllib.urlopen("https://s4.its.unc.edu/SISMisc/SISTalkerServlet", data) s = f.read() print (s) I am really stumped! It seems like python isn't sending a proper request. At first I thought it wasn't sending proper post data, but I changed the URL to my localbox and the post data apache received seemed just fine. If you'd like to see the system actually functioning, go to https://s4.its.unc.edu/SISMisc/browser/student_pass_z.jsp and click on the "Enter as Guest" button and then look for "Course Availability". (Now you know why I'm building this!) Weirdest thing is this was working until 11am! I've had the same error before but it only lasted for a few minutes. This tells me it is more likely a problem somewhere else than any blocking of my server by the uni. update Upon suggestion, I tried to play with a more legit referer/user-agent. Same result. This is what I tried: import httplib import urllib headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US;rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4',"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain","Referrer": "https://s4.its.unc.edu/SISMisc/SISTalkerServlet"} data = urllib.urlencode({"progname" : "SIR033WA", "SUBJ" : "busi", "CRS" : "", "TERM" : "20099"}) c = httplib.HTTPSConnection("s4.its.unc.edu",443) c.request("POST", "/SISMisc/SISTalkerServlet",data,headers) r = c.getresponse() print r.read()
[ "This post doesn't attempt to fix your code, but suggest a debugging tool.\nOnce upon a time I was coding a program to fill out online forms for me. To learn exactly how my browser was handling the POSTs, and cookies, and whatnot, I installed WireShark ( http://www.wireshark.org/ ), a network sniffer. This application allowed me to view, chunk by chunk, the data that was being sent and received on the IP and hardware level.\nYou might consider trying out a similar program and comparing the network flow. This might highlight differences between what your browser is doing and your script is doing.\n", "After seeing multiple requests from an odd non-browser User-Agent string, it's possible that they are blocking users not being referred to from the site. For example, PHP has a feature called $_SERVER['HTTP_REFERRER'] IIRC, which will check the page which reffered the user to the current one. Since your program is not including one in the User-Agent string (you are trying to directly access it) it is very possible they are preventing you access based upon that. Try adding a referrer into the headers of your http request and see how it goes. (preferably a page which links to the one you're trying to access)\nhttp://whatsmyuseragent.com/ can assist you in building your spoofed user agent. \nyou then build headers like so...\nheaders = {\"Content-type\": \"application/x-www-form-urlencoded\",\n\"Accept\": \"text/plain\"}\n\nand then send them as an additional parameter with your HTTPConnection request... \nconn.request(\"POST\", \"/page/on/site\", params, headers)\n\nsee the python doc on httplib for further reference and examples.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "urllib" ]
stackoverflow_0000728193_python_urllib.txt
Q: Example of how to use msilib to create a .msi file from a python module Can anyone give me an example of how to use python's msilib standard library module to create an msi file from a custom python module? For example, let's say I have a custom module called cool.py with the following code class Cool(object): def print_cool(self): print "cool" and I want to create an msi file using msilib that will install cool.py in python's site-packages directory. How can I do that? A: You need to write a distutils setup script for your module, then you can do python setup.py bdist_msi and an msi-installer will be created for your module. See also http://docs.python.org/distutils/apiref.html#module-distutils.command.bdist_msi A: I think there is a misunderstanding: think of MS CAB files as archives like .zip files. Now it is possible to put anything in such an archive, like your cool.py. But I think you mentioned python source because you want it executed; otherwise just use an archiver like zip, no need to use msilib. If I am right, then you first need to convert your script into an executable using something like py2exe or pyinstaller.
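To make the first answer concrete, a minimal distutils setup script for the module from the question could look like this (the name and version strings are placeholders; cool.py sits next to setup.py):
from distutils.core import setup
setup(name='cool', version='1.0', py_modules=['cool'])  # pure-Python module, installed to site-packages
Running python setup.py bdist_msi with this file then drops an installer into dist/ that installs cool.py into site-packages.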
Example of how to use msilib to create a .msi file from a python module
Can anyone give me an example of how to use python's msilib standard library module to create an msi file from a custom python module? For example, let's say I have a custom module called cool.py with the following code class Cool(object): def print_cool(self): print "cool" and I want to create an msi file using msilib that will install cool.py in python's site-packages directory. How can I do that?
[ "You need to write a distutils setup script for your module, then you can do\npython setup.py bdist_msi\n\nand an msi-installer will be created for your module.\nSee also http://docs.python.org/distutils/apiref.html#module-distutils.command.bdist_msi\n", "I think there is a misunderstanding: think of MS CAB Files as archives like .zip-Files. Now it is possible to put anything in such an archive, like your cool.py. But i think you mentioned that python source, since you want it executed, otherwise just use an archiver like zip, no need to use mslib. \nIf i am right then you first need to convert your script into an executable using something like py2exe or pyinstaller.\n" ]
[ 5, 0 ]
[]
[]
[ "python", "windows", "windows_installer" ]
stackoverflow_0000728589_python_windows_windows_installer.txt
Q: Python's eval() and globals() I'm trying to execute a number of functions using eval(), and I need to create some kind of environment for them to run. It is said in the documentation that you can pass globals as a second parameter to eval(). But it seems to not work in my case. Here's the simplified example (I tried two approaches, declaring the variable global and using globals(), and both do not work): File script.py: import test global test_variable test_variable = 'test_value' g = globals() g['test_variable'] = 'test_value' eval('test.my_func()', g) File test.py: def my_func(): global test_variable print repr(test_variable) And I'm getting: NameError: global name 'test_variable' is not defined. What should I do to pass that test_variable into my_func()? Assuming I can't pass it as a parameter. A: test_variable should be global in test.py. You're getting a name error because you're trying to declare a variable global that doesn't yet exist. So your my_test.py file should be like this: test_variable = None def my_func(): print test_variable And running this from the command prompt: >>> import my_test >>> eval('my_test.my_func()') None >>> my_test.test_variable = 'hello' >>> my_test.test_variable 'hello' >>> eval('my_test.my_func()') hello Generally it's bad form to use eval() and globals, so make sure you know what you're doing. A: Please correct me, Python experts, if I am wrong. I am also learning Python. The following is my current understanding of why the NameError exception was thrown. In Python, you cannot create a variable that can be accessed across modules without specifying the module name (i.e. to access the global variable test in module mod1 you need to use mod1.test when you are in module mod2). The scope of the global variable is pretty much limited to the module itself. Thus when you have the following in test.py: def my_func(): global test_variable print repr(test_variable) The test_variable here refers to test.test_variable (i.e. test_variable in the test module namespace). So setting test_variable in script.py will put the variable in the __main__ namespace (__main__ because this is the top-level module/script you provided to the Python interpreter to execute). Thus, this test_variable will be in a different namespace and not in the test module namespace where it is required to be. Hence, Python generates a NameError because it cannot find the variable after searching the test module global namespace and built-in namespace (the local function namespace is skipped because of the global statement). Therefore, for eval to work, you need to set test_variable in the test module namespace in script.py: import test test.test_variable = 'test_value' eval('test.my_func()') For more details about Python’s scope and namespaces see: http://docs.python.org/tutorial/classes.html#python-scopes-and-name-spaces
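A short sketch of the scoping rule the second answer describes: the globals dict you pass to eval() only governs name lookup in the evaluated expression itself; a function called from it still sees the globals of the module where it was defined (the names below are made up):
g_var = 1
def f():
    return g_var  # resolved in f's defining module, not in eval's globals
print eval('f()', {'f': f, 'g_var': 999})  # prints 1, not 999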
Python's eval() and globals()
I'm trying to execute a number of functions using eval(), and I need to create some kind of environment for them to run. It is said in the documentation that you can pass globals as a second parameter to eval(). But it seems to not work in my case. Here's the simplified example (I tried two approaches, declaring the variable global and using globals(), and both do not work): File script.py: import test global test_variable test_variable = 'test_value' g = globals() g['test_variable'] = 'test_value' eval('test.my_func()', g) File test.py: def my_func(): global test_variable print repr(test_variable) And I'm getting: NameError: global name 'test_variable' is not defined. What should I do to pass that test_variable into my_func()? Assuming I can't pass it as a parameter.
[ "test_variable should be global in test.py. You're getting a name error because you're trying to declare a variable global that doesn't yet exist.\nSo your my_test.py file should be like this:\ntest_variable = None\n\ndef my_func():\n print test_variable\n\nAnd running this from the command prompt:\n>>> import my_test\n>>> eval('my_test.my_func()')\nNone\n>>> my_test.test_variable = 'hello'\n>>> my_test.test_variable\n'hello'\n>>> eval('my_test.my_func()')\nhello\n\nGenerally it's bad form to use eval() and globals, so make sure you know what your doing.\n", "Please correct me Python experts if I am wrong. I am also learning Python. The following is my current understanding of why the NameError exception was thrown.\nIn Python, you cannot create a variable that can be access across modules without specifying the module name (i.e. to access the global variable test in module mod1 you need to use mod1.test when you in module mod2). The scope of the global variable is pretty much limited to the module itself.\nThus when you have following in test.py:\ndef my_func():\n global test_variable\n print repr(test_variable)\n\nThe test_variable here refers to test.test_variable (i.e. test_variable in the test module namespace).\nSo setting test_variable in script.py will put the variable in the __main__ namespace (__main__ because this is the top-level module/script you provided to the Python interpreter to execute). Thus, this test_variable will be in a different namespace and not in the test module namespace where it is required to be. Hence, Python generates a NameError because it cannot find the variable after searching the test module global namespace and built-in namespace (local function namespace is skipped because of the global statement).\nTherefore, for eval to work, you need to set test_variable in the test module namespace in script.py:\nimport test\ntest.test_variable = 'test_value'\neval('test.my_func()')\n\nFor more details about Python’s scope and namespaces see: http://docs.python.org/tutorial/classes.html#python-scopes-and-name-spaces\n" ]
[ 10, 4 ]
[]
[]
[ "eval", "python" ]
stackoverflow_0000729248_eval_python.txt
Q: Identifying a map in groovy While porting over a code fragment from python I've stumbled over a trivial problem: if isinstance(v['content'], dict): What would be the most elegant way to port this over to groovy? A: You can use instanceof (see map-specific example here), like this: if (v['content'] instanceof java.util.Map)
Identifying a map in groovy
While porting over a code fragment from python I've stumbled over a trivial problem: if isinstance(v['content'], dict): What would be the most elegant way to port this over to groovy?
[ "You can use instanceof (see map-specific example here), like this:\nif (v['content'] instanceof java.util.Map)\n\n" ]
[ 5 ]
[]
[]
[ "groovy", "python" ]
stackoverflow_0000729354_groovy_python.txt
Q: How do I get scons to invoke an external script? I'm trying to use scons to build a latex document. In particular, I want to get scons to invoke a python program that generates a file containing a table that is \input{} into the main document. I've looked over the scons documentation but it is not immediately clear to me what I need to do. What I wish to achieve is essentially what you would get with this makefile: document.pdf: table.tex pdflatex document.tex table.tex: python table_generator.py How can I express this in scons? A: Something along these lines should do - env.Command ('document.tex', '', 'python table_generator.py') env.PDF ('document.pdf', 'document.tex') It declares that 'document.tex' is generated by calling the Python script, and requests a PDF document to be created from this generated 'document.tex' file. Note that this is in spirit only. It may require some tweaking. In particular, I'm not certain what kind of semantics you would want for the generation of 'document.tex' - should it be generated every time? Only when it doesn't exist? When some other file changes? (you would want to add this dependency as the second argument to Command() in that case). In addition, the output of Command() can be used as input to PDF() if desired. For clarity, I didn't do that. A: In this simple case, the easiest way is to just use the subprocess module from subprocess import call call(["python", "table_generator.py"]) call(["pdflatex", "document.tex"]) Regardless of where in your SConstruct file these lines are placed, they will happen before any of the compiling and linking performed by SCons. The downside is that these commands will be executed every time you run SCons, rather than only when the files have changed, which is what would happen in your example Makefile. So if those commands take a long time to run, this wouldn't be a good solution. If you really need to only run these commands when the files have changed, look at the SCons manual section Writing Your Own Builders.
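Pulling the first answer together into a complete SConstruct sketch (file names taken from the question; listing table_generator.py as the source makes SCons rerun the command when the script itself changes):
env = Environment()
env.Command('table.tex', 'table_generator.py', 'python $SOURCE')
env.PDF('document.pdf', 'document.tex')
Depends('document.pdf', 'table.tex')  # document.tex \input{}s table.tex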
How do I get scons to invoke an external script?
I'm trying to use scons to build a latex document. In particular, I want to get scons to invoke a python program that generates a file containing a table that is \input{} into the main document. I've looked over the scons documentation but it is not immediately clear to me what I need to do. What I wish to achieve is essentially what you would get with this makefile: document.pdf: table.tex pdflatex document.tex table.tex: python table_generator.py How can I express this in scons?
[ "Something along these lines should do -\nenv.Command ('document.tex', '', 'python table_generator.py')\nenv.PDF ('document.pdf', 'document.tex')\n\nIt declares that 'document.tex' is generated by calling the Python script, and requests a PDF document to be created from this generatd 'document.tex' file.\nNote that this is in spirit only. It may require some tweaking. In particular, I'm not certain what kind of semantics you would want for the generation of 'document.tex' - should it be generated every time? Only when it doesn't exist? When some other file changes? (you would want to add this dependency as the second argument to Command() that case).\nIn addition, the output of Command() can be used as input to PDF() if desired. For clarity, I didn't do that.\n", "In this simple case, the easiest way is to just use the subprocess module\nfrom subprocess import call\ncall(\"python table_generator.py\")\ncall(\"pdflatex document.tex\")\n\nRegardless of where in your SConstruct file these lines are placed, they will happen before any of the compiling and linking performed by SCons.\nThe downside is that these commands will be executed every time you run SCons, rather than only when the files have changed, which is what would happen in your example Makefile. So if those commands take a long time to run, this wouldn't be a good solution.\nIf you really need to only run these commands when the files have changed, look at the SCons manual section Writing Your Own Builders.\n" ]
[ 16, 3 ]
[]
[]
[ "latex", "python", "scons", "tex" ]
stackoverflow_0000729759_latex_python_scons_tex.txt
Q: Setting up Python on IIS 5.1 I have this test python file import os print 'Content-type: text/html' print print '<HTML><HEAD><TITLE>Python Sample CGI</TITLE></HEAD>' print '<BODY>' print "<H1>This is A Sample Python CGI Script</H1>" print '<br>' if os.environ.has_key('REMOTE_HOST'): print "<p>You have accessed this site from IP: "+os.environ["REMOTE_HOST"]+"</p>" else: print os.environ['COMPUTERNAME'] print '</BODY></html>' I created an application on IIS 5.1 with permission to execute scripts and created a mapping to .py like this: C:\Python30\python.exe -u "%" "%" But when I try to execute the script I get the following error: CGI Error The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are: C:\Python30\python.exe: can't find '__main__.py' in '' Any idea? A: C:\Python30\python.exe -u "%" "%" Close, but it should be "%s". I use: "C:\Python30\python.exe" -u "%s" (The second %s is for command-line <isindex> queries, which will never happen in this century.)
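Worth flagging as well: the test script itself uses Python 2 syntax (print statements, has_key), so even with the mapping fixed it would fail under C:\Python30. A rough Python 3 equivalent of the same script:
import os
print('Content-type: text/html\r\n')
print('<HTML><HEAD><TITLE>Python Sample CGI</TITLE></HEAD><BODY>')
if 'REMOTE_HOST' in os.environ:  # has_key() is gone in Python 3
    print('<p>You have accessed this site from IP: ' + os.environ['REMOTE_HOST'] + '</p>')
else:
    print(os.environ.get('COMPUTERNAME', ''))
print('</BODY></HTML>')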
Setting up Python on IIS 5.1
I have this test python file import os print 'Content-type: text/html' print print '<HTML><HEAD><TITLE>Python Sample CGI</TITLE></HEAD>' print '<BODY>' print "<H1>This is A Sample Python CGI Script</H1>" print '<br>' if os.environ.has_key('REMOTE_HOST'): print "<p>You have accessed this site from IP: "+os.environ["REMOTE_HOST"]+"</p>" else: print os.environ['COMPUTERNAME'] print '</BODY></html>' I created an application on IIS 5.1 with permission to execute scripts and created a mapping to .py like this: C:\Python30\python.exe -u "%" "%" But when I try to execute the script I get the following error: CGI Error The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are: C:\Python30\python.exe: can't find '__main__.py' in '' Any idea?
[ "C:\\Python30\\python.exe -u \"%\" \"%\"\n\nClose, but it should be \"%s\". I use:\n\"C:\\Python30\\python.exe\" -u \"%s\" \n\n(The second %s is for command-line <isindex> queries, which will never happen in this century.)\n" ]
[ 2 ]
[]
[]
[ "cgi", "iis", "iis_5", "python" ]
stackoverflow_0000730105_cgi_iis_iis_5_python.txt
Q: wxPython: Making a fixed-height panel I have a wx.Frame, in which I have a vertical BoxSizer with two items, a TextCtrl and a custom widget. I want the custom widget to have a fixed pixel height, while the TextCtrl will expand normally to fill the window. What should I do? A: Got it. When creating the widget, use a size of (-1,100), where "100" is the height you want. The "-1" (wx.DefaultCoord) means "use the default" for that dimension. When adding the widget to the sizer, use a proportion of 0, like this: self.sizer.Add(self.timeline,0,wx.EXPAND)
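For context, a fuller sketch of the sizer setup (the widget names are assumed; the key contrast is proportion 1 on the TextCtrl versus 0 on the fixed-height widget):
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(text_ctrl, 1, wx.EXPAND)  # proportion 1: grows to fill the window
sizer.Add(timeline, 0, wx.EXPAND)   # proportion 0: keeps its (-1, 100) height
frame.SetSizer(sizer)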
wxPython: Making a fixed-height panel
I have a wx.Frame, in which I have a vertical BoxSizer with two items, a TextCtrl and a custom widget. I want the custom widget to have a fixed pixel height, while the TextCtrl will expand normally to fill the window. What should I do?
[ "Got it.\nWhen creating the widget, use a size of (-1,100), where \"100\" is the height you want. Apparently the \"-1\" is a sort of \"None\" in this context.\nWhen adding the widget to the sizer, use a proportion of 0, like this:\nself.sizer.Add(self.timeline,0,wx.EXPAND)\n" ]
[ 6 ]
[]
[]
[ "layout", "python", "widget", "wxpython" ]
stackoverflow_0000730394_layout_python_widget_wxpython.txt
Q: Python "round robin" Given multiple (x,y) ordered pairs, I want to compare distances between each one of them. So pretend I have a list of ordered pairs: pairs = [a,b,c,d,e,f] I have a function that takes two ordered pairs and finds the distance between them: def distance(a,b): from math import sqrt as sqrt from math import pow as pow d1 = pow((a[0] - b[0]),2) d2 = pow((a[1] - b[1]),2) distance = sqrt(d1 + d2) return distance How can I use this function to compare every ordered pair to every other ordered pair, ultimately finding the two ordered pairs with the greatest distance between them? Pseudopseudocode: distance(a,b) distance(a,c) ... distance(e,f) Any help would be tremendously appreciated. A: In Python 2.6, you can use itertools.permutations import itertools perms = itertools.permutations(pairs, 2) distances = (distance(*p) for p in perms) or import itertools combs = itertools.combinations(pairs, 2) distances = (distance(*c) for c in combs) A: try: from itertools import combinations except ImportError: def combinations(l, n): if n != 2: raise Exception('This placeholder only good for n=2') for i in range(len(l)): for j in range(i+1, len(l)): yield l[i], l[j] coords_list = [(0,0), (3,4), (6,8)] def distance(p1, p2): return ( ( p2[0]-p1[0] ) ** 2 + ( p2[1]-p1[1] )**2 ) ** 0.5 largest_distance, (p1, p2) = max([ (distance(p1,p2), (p1, p2)) for (p1,p2) in combinations(coords_list, 2) ]) print largest_distance, p1, p2 A: Try: max(distance(a, b) for (i, a) in enumerate(pairs) for b in pairs[i+1:]) This avoids identity-comparisons (e.g. distance(x, x), distance(y, y), etc.). It also avoids doing symmetric comparisons, since distance(x, y) == distance(y, x). Update: I like Evgeny's solution to use itertools a little better, as it expresses what you're trying to do more succinctly. Both of our solutions do the same thing. (Note: make sure you use combinations, not permutations -- that will be much slower!) A: slightly related, you don't have to compute the euclidean distance yourself, there's math.hypot: In [1]: a = (1, 2) In [2]: b = (4, 5) In [3]: hypot(a[0]-b[0], a[1]-b[1]) Out[3]: 4.2426406871192848 A: If you don't mind doing distance calculations between two points that are the same twice, the following will find the greatest distance: max( [distance(a, b) for a in pairs for b in pairs] ) In order to have the a and b pair instead, do the following: import operator max( [((a,b), distance(a, b)) for a in pairs for b in pairs], key=operator.itemgetter(1)) You can combine this with John Feminella's solution to get the (a,b) tuple without doing excess distance comparisons
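Tying the answers together, a hedged one-liner that returns both the farthest pair and its distance (Python 2.6+, reusing the distance() function from the question):
from itertools import combinations
best = max(combinations(pairs, 2), key=lambda p: distance(*p))
print distance(*best), best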
Python "round robin"
Given multiple (x,y) ordered pairs, I want to compare distances between each one of them. So pretend I have a list of ordered pairs: pairs = [a,b,c,d,e,f] I have a function that takes two ordered pairs and finds the distance between them: def distance(a,b): from math import sqrt as sqrt from math import pow as pow d1 = pow((a[0] - b[0]),2) d2 = pow((a[1] - b[1]),2) distance = sqrt(d1 + d2) return distance How can I use this function to compare every ordered pair to every other ordered pair, ultimately finding the two ordered pairs with the greatest distance between them? Pseudopseudocode: distance(a,b) distance(a,c) ... distance(e,f) Any help would be tremendously appreciated.
[ "in python 2.6, you can use itertools.permutations\nimport itertools\nperms = itertools.permutations(pairs, 2)\ndistances = (distance(*p) for p in perms)\n\nor\nimport itertools\ncombs = itertools.combinations(pairs, 2)\ndistances = (distance(*c) for c in combs)\n\n", "try:\n\n from itertools import combinations\n\nexcept ImportError:\n\n def combinations(l, n):\n if n != 2: raise Exception('This placeholder only good for n=2')\n for i in range(len(l)):\n for j in range(i+1, len(l)):\n yield l[i], l[j]\n\n\ncoords_list = [(0,0), (3,4), (6,8)]\n\ndef distance(p1, p2):\n return ( ( p2[0]-p1[0] ) ** 2 + ( p2[1]-p1[1] )**2 ) ** 0.5\n\nlargest_distance, (p1, p2) = max([\n (distance(p1,p2), (p1, p2)) for (p1,p2) in combinations(coords_list, 2)\n ])\n\n\nprint largest_distance, p1, p2\n\n", "Try:\nmax(distance(a, b) for (i, a) in enumerate(pairs) for b in pairs[i+1:])\n\nThis avoid identity-comparisons (e.g. distance(x, x), distance(y, y), etc.). It also avoids doing symmetric comparisons, since distance(x, y) == distance(y, x).\n\nUpdate: I like Evgeny's solution to use itertools a little better, as it expresses what you're trying to do more succinctly. Both of our solutions do the same thing. (Note: make sure you use combinations, not permutations -- that will be much slower!)\n", "slightly related, you don't have to compute the euclidean distance yourself, there's math.hypot:\nIn [1]: a = (1, 2)\nIn [2]: b = (4, 5)\nIn [3]: hypot(a[0]-b[0], a[1]-b[1])\nOut[3]: 4.2426406871192848\n\n", "If you don't mind doing distance calculations between two points that are the same twice, the following will find the greatest distance:\nmax( [distance(a, b) for a in pairs for b in pairs] )\n\nIn order to have the a and b pair instead, then do the following:\nimport operator\nmax( [((a,b), distance(a, b)) for a in pairs for b in pairs], key=operator.itemgetter(1))\n\nYou can combine this with John Feminella's solution to get the (a,b) tuple without doing excess distance comparisons\n" ]
[ 17, 10, 6, 4, 3 ]
[]
[]
[ "iteration", "python", "round_robin" ]
stackoverflow_0000728543_iteration_python_round_robin.txt
Q: Pygame Invalid Syntax I just can't figure out I've been following a tutorial "McGugan - Beginning Game Development with Python and Pygame (Apress, 2007)" and in the code at around chapter five involving object movement I keep getting invalid syntax alerts on '-' being used in the code. It isn't up to date but I would've thought a subtraction wouldn't be changed in any updates due to its simplicity and necessity. This is the code I have: background_image_filename = 'sushiplate.jpg' sprite_image_filename = 'fugu.png' import pygame from pygame.locals import * from sys import exit from gameobjects.vector2 import Vector2 pygame.init() screen = pygame.display.set_mode((640, 480), 0, 32) background = pygame.image.load(background_image_filename).convert() sprite = pygame.image.load(sprite_image_filename).convert_alpha() clock = pygame.time.Clock() position = Vector2(100.0, 100.0) speed = 250. heading = Vector2() while True: for event in pygame.event.get(): if event.type == QUIT: exit() if event.type == MOUSEBUTTONDOWN: destination = Vector2(*event.pos) – Vector2(*sprite.get_size())/2. heading = Vector2.from_points(position, destination) heading.normalize() screen.blit(background, (0,0)) screen.blit(sprite, position) time_passed = clock.tick() time_passed_seconds = time_passed / 1000.0 distance_moved = time_passed_seconds * speed position += heading * distance_moved pygame.display.update() am I doing something wrong or is it just simply outdated? Any help is much needed. A: In this line: destination = Vector2(*event.pos) – Vector2(*sprite.get_size())/2. You somehow typed the character "–" (EN DASH) instead of "-" (HYPHEN-MINUS). Use "-" (HYPHEN-MINUS) instead, like this: destination = Vector2(*event.pos) - Vector2(*sprite.get_size())/2. A: I can't be sure without a stack trace, but I have a hunch that it's the wrong - symbol. What editor are you using? Is it possible that your editor is taking the - symbol and turning it into a fancier dash, like an ndash or an mdash? A: Maybe try changing speed to "speed = 250.0". I don't know if that dangling dot would throw python off. What is going on here, with your error message at least, is that the Python parser is stumbling over something before your '-', which screws up its interpretation of '-'. So I recommend looking before the '-' for typos. Also, make sure you turn on visible white space in your editor when debugging Python code. This could be a white space error, which would be invisible to us at Stack Overflow. EDIT: So I was completely wrong about that '-' error being a red herring. But keep that parser behavior/white space thing in mind; it could help in the future. Apologies if this is obvious to you, I don't know what level you are at with Python.
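If you suspect a look-alike character but can't spot it by eye, here is a quick hedged helper that flags any non-ASCII bytes in a source file (the filename is a placeholder):
for lineno, line in enumerate(open('your_script.py'), 1):
    for col, ch in enumerate(line, 1):
        if ord(ch) > 127:
            print lineno, col, repr(ch)  # an EN DASH shows up as the bytes '\xe2\x80\x93' in UTF-8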
Pygame Invalid Syntax I just can't figure out
I've been following a tutorial "McGugan - Beginning Game Development with Python and Pygame (Apress, 2007)" and in the code at around chapter five involving object movement I keep getting invalid syntax alerts on '-' being used in the code. It isn't up to date but I would've thought a subtraction wouldn't be changed in any updates due to its simplicity and necessity. This is the code I have: background_image_filename = 'sushiplate.jpg' sprite_image_filename = 'fugu.png' import pygame from pygame.locals import * from sys import exit from gameobjects.vector2 import Vector2 pygame.init() screen = pygame.display.set_mode((640, 480), 0, 32) background = pygame.image.load(background_image_filename).convert() sprite = pygame.image.load(sprite_image_filename).convert_alpha() clock = pygame.time.Clock() position = Vector2(100.0, 100.0) speed = 250. heading = Vector2() while True: for event in pygame.event.get(): if event.type == QUIT: exit() if event.type == MOUSEBUTTONDOWN: destination = Vector2(*event.pos) – Vector2(*sprite.get_size())/2. heading = Vector2.from_points(position, destination) heading.normalize() screen.blit(background, (0,0)) screen.blit(sprite, position) time_passed = clock.tick() time_passed_seconds = time_passed / 1000.0 distance_moved = time_passed_seconds * speed position += heading * distance_moved pygame.display.update() am I doing something wrong or is it just simply outdated? Any help is much needed.
[ "In this line:\ndestination = Vector2(*event.pos) – Vector2(*sprite.get_size())/2.\n\nYou somehow typed the character \"–\" (EN DASH) instead of \"-\" (HYPHEN-MINUS).\nUse \"-\" (HYPHEN-MINUS) instead, like this:\ndestination = Vector2(*event.pos) - Vector2(*sprite.get_size())/2.\n\n", "I can't be sure without a stack trace, but I have a hunch that it's the wrong - symbol. What editor are you using? Is it possible that your editor is taking the - symbol and turning it into a fancier dash, like an ndash or an mdash? \n", "Maybe try changing speed to \"speed = 250.0\". I don't know if that dangling dot would throw python off.\nWhat is going on here, with your error message at least, is the Python parser is stumbling over something before your '-', which screws up its interpretation of '-'. So I recommend looking before the '-' for typos.\nAlso, make sure you turn on visible white space in your editor when debugging Python code. This could be a white space error, which would be invisible to us at Stack Overflow.\nEDIT:\nSo I was completely wrong about that '-' error being a red herring. But keep that parser behavior in mind/white space thing in mind, could help in the future.\nApologies if this is obvious to you, I don't know what level you are at with Python.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "pygame", "python", "syntax" ]
stackoverflow_0000731057_pygame_python_syntax.txt
Q: Python script - SCP on windows How is it possible to do secure copy using python (windows native install - ActivePython)? Unfortunately the pexpect module is for unix only and we don't want cygwin locally. I wrote a script based on the pscp.exe win tool - but it always stops at first execution because of the host fingerprint id, and I haven't found an option to switch this off. The remote hosts are running an ssh-server on cygwin (win 2003 servers). Thanks A: paramiko is pretty slick. See this question for some more details. A: I strongly recommend that you use keys rather than passwords. If you use ssh keys properly, you do not need to use expect, as the scp command won't ask for any user input. If you have command line ssh installed, you can make a key like this: ssh-keygen -t dsa Then simply follow the instructions provided, and save the key to the default location. If you put a passphrase on it, you'll need to use some sort of ssh agent, either the command line ssh-agent or pagent on windows. You can also create an ssh key with the putty suite's puttygen. To set up the key for authentication, simply put a copy of id_dsa.pub on the host you want to scp to in the file ~/.ssh/authorized_keys. A: http://pypi.python.org/pypi/ssh4py SCP example: http://blog.keyphrene.com/keyphrene/index.php/2008/09/18/13-scp A: Twisted Conch supports ssh and sftp. A: How do you expect to provide the authentication data? The easiest way is to create a key, and make sure it is in the server's list of accepted hosts. That way scp will authenticate using the private/public key pair automatically, and "just work". This is a handy tutorial on how to go about creating and uploading the key. Of course this assumes you have the necessary admin access to the server.
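To make the paramiko suggestion concrete, a hedged sketch of an SCP-style copy over SFTP (the host, credentials, and paths are placeholders); note that AutoAddPolicy also sidesteps the unknown-fingerprint prompt that stalls pscp.exe:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # auto-accept unknown host keys
ssh.connect('remote-host', username='user', password='secret')
sftp = ssh.open_sftp()
sftp.put('local.txt', '/remote/path/local.txt')  # upload; sftp.get() goes the other way
sftp.close()
ssh.close()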
Python script - SCP on windows
How is it possible to do secure copy using python (windows native install - ActivePython)? Unfortunately the pexpect module is for unix only and we don't want cygwin locally. I wrote a script based on the pscp.exe win tool - but it always stops at first execution because of the host fingerprint id, and I haven't found an option to switch this off. The remote hosts are running an ssh-server on cygwin (win 2003 servers). Thanks
[ "paramiko is pretty slick. See this question for some more details.\n", "I strongly recommend that you use keys rather than passwords. If you use ssh keys properly, you do not need to use expect, as the scp command won't ask for any user input. If you have command line ssh installed, you can make a key like this: \nssh-keygen -t dsa\n\nThen simply follow the instructions provided, and save the key to the default location. If you put a passphrase on it, you'll need to use some sort of ssh agent, either the command line ssh-agent or pagent on windows. You can also create an ssh key with the putty suite's puttygen. \nTo set up the key for authentication, simply put a copy of id_dsa.pub on the host you want to scp to in the file ~/.ssh/authorized_keys. \n", "http://pypi.python.org/pypi/ssh4py\nSCP example: http://blog.keyphrene.com/keyphrene/index.php/2008/09/18/13-scp\n", "Twisted Conch supports ssh and sftp.\n", "How do you expect to provide the authentication data? The easiest way is to create a key, and make sure it is in the server's list of accepted hosts. That way scp will authenticate using the private/public key pair automatically, and \"just work\".\nThis is a handy tutorial on how to go about creating and uploading the key. Of course this assumes you have the necessary admin access to the server.\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "copy", "python", "windows" ]
stackoverflow_0000729130_copy_python_windows.txt
Q: Is there a way to get all the directories but not files in a directory in Python? This link is using a custom method, but I just wanna see if there is a single method to do it in Python 2.6? A: There isn't a built-in function to only list directories, but it's easy enough to define in a couple of lines: def listdirs(directory): return [f for f in os.listdir(directory) if os.path.isdir(os.path.join(directory, f))] EDIT: fixed, thanks Stephan202 A: If a_directory is the directory you want to inspect, then the directory names are: next(os.walk(a_directory))[1] From the os.walk() reference: Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames). A: I don't believe there is. Since directories are also files, you have to ask for all the files, then ask each one if it is a directory. A: def listdirs(path): ret = [] for cur_name in os.listdir(path): full_path = os.path.join(path, cur_name) if os.path.isdir(full_path): ret.append(cur_name) return ret onlydirs = listdirs("/tmp/") print onlydirs ..or as a list-comprehension.. path = "/tmp/" onlydirs = [x for x in os.listdir(path) if os.path.isdir(os.path.join(path, x))] print onlydirs
Is there a way to get all the directories but not files in a directory in Python?
This link is using a custom method, but I just wanna see if there is a single method to do it in Python 2.6?
[ "There isn't a built-in function to only list files, but it's easy enough to define in a couple of lines:\ndef listfiles(directory):\n return [f for f in os.listdir(directory) \n if os.path.isdir(os.path.join(directory, f))]\n\nEDIT: fixed, thanks Stephan202\n", "If a_directory is the directory you want to inspect, then:\nnext(f1 for f in os.walk(a_directory))\nFrom the os.walk() reference:\n\nGenerate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames).\n\n", "I don't believe there is. Since directories are also files, you have to ask for all the files, then ask each one if it is a directory.\n", "def listdirs(path):\n ret = []\n for cur_name in os.listdir(path):\n full_path = os.path.join(path, cur_name)\n if os.path.isdir(full_path):\n ret.append(cur_name)\n return ret\n\nonlydirs = listdir(\"/tmp/\")\nprint onlydirs\n\n..or as a list-comprehension..\npath = \"/tmp/\"\nonlydirs = [x for x in os.listdir(path) if os.path.isdir(os.path.join(path, x))]\nprint onlydirs\n\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ "directory", "python" ]
stackoverflow_0000731534_directory_python.txt
Q: What's easiest way to get Python script output on the web? I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page/site every 30 seconds without having to refresh the page). I understand I can do this with javascript but is there a python-only solution? Even if there is, is javascript the way to go? I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python. Sorry for the basic question but I'm still clueless when it comes to web programming. Thx! A: This question appears to have two things in it. Presentation on the web. This is easy to do in Python -- use Django or TurboGears or any Python-based web framework. Refresh of the web page to show new data. This can be done two ways. Some fancy Javascript to refresh. Some fancy HTML to refresh the page. The meta refresh tag is what you want. If you do this, you have an all-Python solution. A: If you want a dead simple way to print data from a Python script to a webpage and update automatically, you can just print from the script. For example, using Apache with the below Python CGI script: #!/usr/bin/python import time import sys import random def write(inline=''): sys.stdout.write(inline) sys.stdout.write('\r\n') sys.stdout.flush() #prints out random digits between 1 and 1000 indefinitely write("Content-type: text/html\r\n") i = 0 while(True): i = i + 1 time.sleep(1) write(str(i) + "<br />") If I navigate to that in a browser (Firefox, don't know if other browsers might work differently with regards to buffering etc), it prints the digits continually. Mind you, it prints in sequential order so the newer data is at the bottom rather than the top, but it might work depending on what exactly you're looking to do. If this isn't really what you're looking for, the only other way to do this is an automatic refreshing page (either in an iframe, or the whole page) or with javascript to do the data fetching. You can use a meta refresh tag in your iframe or page HTML source, and your CGI can print the new data each time it's refreshed. Alternatively, you can use javascript with an XMLHTTPRequest to read the new data in without a visual page refresh. A: You could use Comet, but I strongly discourage you from doing so. I'd just write a short Javascript; using jQuery this is really straightforward. Another possibility is the use of an iframe that reloads every 30 seconds, this would prevent the whole page from reloading. A: If you want to do it entirely in python you can use pyjamas. It generates javascript directly from python code, so you avoid writing javascript yourself completely. A: You need Javascript in one way or another for your 30 second refresh. Alternatively, you could set a meta tag refresh for every 30 seconds to redirect to the current page, but the Javascript route will prevent page flicker. A: Write your output to a log file, and load the log file to the browser through the web server. If you need auto refresh, create a template HTML file with a tag to refresh every 15 seconds: <META HTTP-EQUIV="refresh" CONTENT="15"> and use server side include to include the log file on the page. A: Perhaps "long polling" is what you're looking for? Long polling could be described as "HTTP push": basically you have a (Python) script served via a web-server, which only outputs data when available..
Then you try and load this page asynchronously via Javascript, when it fails you retry, when it succeeds you do something with the data (display it, usually) The examples in my answer are in PHP, but it's only really 2 commands (sleep(rand(1, 10)) - the other few are to demonstrate the javascript's error handling) Well, it's not quite that simple.. You can't just serve a CGI python script via Apache, because you will run out of worker-threads, and the web-server will not be able to accept any further connections.. So, you need to use a more specialised server.. The twisted Python framework is perfect for such servers - the following two servers are incidentally both written with it cometd - the "most famous" long-polling server thing, although I never had much luck with the Python implementation slosh - seems extremely simple to use.. Implemented in Python, although you can interact with it via HTTP requests A: JavaScript is the primary way to add this sort of interactivity to a website. You can make the back-end Python, but the client will have to use JavaScript AJAX calls to update the page. Python doesn't run in the browser, so you're out of luck if you want to use just Python. (It's also possible to use Flash or Java applets, but that's a pretty heavyweight solution for what seems like a small problem.) A: Is this for a real webapp? Or is this a convenience thing for you to view output in the browser? If it's more so for convenience, you could consider using mod_python. mod_python is an extension for the apache webserver that embeds a python interpreter in the web server (so the script runs server side). It would easily let you do this sort of thing locally or for your own convenience. Then you could just run the script with mod_python and have the handler post your results. You could probably easily implement the refreshing too, but I would not know off the top of my head how to do this. Hope this helps... check out mod_python. It's not too bad once you get everything configured.
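As a worked version of the log-file plus meta-refresh suggestions above, here is a minimal CGI sketch that re-reads a log every 30 seconds, newest lines first (the log path is an assumption; your script would append its two lines there):
#!/usr/bin/python
print "Content-type: text/html\r\n"
print '<html><head><meta http-equiv="refresh" content="30"></head><body><pre>'
lines = open('/var/log/myscript.log').readlines()
for line in reversed(lines[-40:]):  # newest output at the top, as requested
    print line.rstrip()
print '</pre></body></html>'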
What's easiest way to get Python script output on the web?
I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page/site every 30 seconds without having to refresh the page). I understand I can do this with javascript but is there a python-only solution? Even if there is, is javascript the way to go? I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python. Sorry for the basic question but I'm still clueless when it comes to web programming. Thx!
[ "This question appears to have two things in it.\n\nPresentation on the web. This is easy to do in Python -- use Django or TurboGears or any Python-based web framework.\nRefresh of the web page to show new data. This can be done two ways.\n\nSome fancy Javascript to refresh.\nSome fancy HTML to refresh the page. The meta refresh tag is what you want. If you do this, you have an all-Python solution.\n\n\n", "If you want a dead simple way to print data from a Python script to a webpage and update automatically, you can just print from the script. For example, using Apache with the below Python CGI script: \n#!/usr/bin/python \n\nimport time\nimport sys\nimport random\n\ndef write(inline=''):\n sys.stdout.write(inline)\n sys.stdout.write('\\r\\n')\n sys.stdout.flush()\n\n#prints out random digits between 1 and 1000 indefinitely\nwrite(\"Content-type: text/html\\r\\n\")\ni = 0\nwhile(True):\n i = i + 1\n time.sleep(1)\n write(str(i) + \"<br />\")\n\nIf I navigate to that in a browser (Firefox, don't know if other browsers might work differently with regards to buffering etc), it prints the digits continually. Mind you, it prints in sequential order so the newer data is at the bottom rather than that top, but it might work depending on what exactly you're looking to do. \nIf this isn't really what you're looking for, the only other way to do this is an automatic refreshing page (either in an iframe, or the whole page) or with javascript to do the data fetching. \nYou can use a meta refresh tag in your iframe or page HTML source, and your CGI can print the new data each time it's refreshed. Alternatively, you can use javascript with an XMLHTTPRequest to read the new data in without a visual page refresh.\n", "You could use Comet, but I strongly discourage you from doing so. I'd just write a short Javascript, using jQuery this is really straightforward.\nAnother possibility is the use of an iframe that reloads every 30 seconds, this would prevent the whole page from reloading.\n", "If you want to do it entirely in python you can use pyjamas.\nIt generates javascript directly from python code, so you avoid writing javascript yourself completely.\n", "You need Javascript in one way or another for your 30 second refresh. Alternatively, you could set a meta tag refresh for every 30 seconds to redirect to the current page, but the Javascript route will prevent page flicker.\n", "Write your output to a log file, and load the log file to the browser thru web server. If you need auto refresh, create a template HTML file with tag to refresh every 15 seconds:\n<META HTTP-EQUIV=\"refresh\" CONTENT=\"15\">\n\nand use server side include to include the log file on the page.\n", "Perhaps \"long polling\" is what you're looking for?\nLong polling could be described as \"HTTP push\", basically you have a (Python) script served via a web-server, which only outputs data when available.. Then you try and load this page asynchronously via Javascript, when it fails you retry, when it succeeds you do something with the data (display it, usually)\nThe examples in my answer are in PHP, but it it's only really 2 commands (sleep(rand(1, 10)) - the other few are to demonstrate the javascript's error handling)\nWell, it's not quite that simple.. You can't just serve a CGI python script via Apache, because you will run out of worker-threads, and the web-server will not be able to accept any further connections.. 
So, you need to use a more specialised server..\n\nThe twisted Python framework is perfect for such servers - the following two servers are incidentally both written with it\ncometd - the \"most famous\" long-polling server thing, although I never had much luck with the Python implementation\nslosh - seems extremely simply to use.. Implemented in Python, although you can interact with it via HTTP requests\n\n", "JavaScript is the primary way to add this sort of interactivity to a website. You can make the back-end Python, but the client will have to use JavaScript AJAX calls to update the page. Python doesn't run in the browser, so you're out of luck if you want to use just Python.\n(It's also possible to use Flash or Java applets, but that's a pretty heavyweight solution for what seems like a small problem.)\n", "Is this for a real webapp? Or is this a convenience thing for you to view output in the browser? If it's more so for convenience, you could consider using mod_python.\nmod_python is an extension for the apache webserver that embeds a python interpreter in the web server (so the script runs server side). It would easily let you do this sort of thing locally or for your own convenience. Then you could just run the script with mod python and have the handler post your results. You could probably easily implement the refreshing too, but I would not know off the top of my head how to do this.\nHope this helps... check out mod_python. It's not too bad once you get everything configured.\n" ]
[ 5, 3, 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "javascript", "python" ]
stackoverflow_0000731470_javascript_python.txt
Q: Showing data in a GUI where the data comes from an outside source I'm kind of lost on how to approach this problem, I'd like to write a GUI ideally using Tkinter with python, but I initially started with Qt and found that the problem extends either to all GUI frameworks or to my limited understanding. The data in this case is coming from a named pipe, and I'd like to display whatever comes through the pipe in a textbox. I've tried having one thread listen on the pipe and another create the GUI, but in both cases one thread always seems to hang or the GUI never gets created. Any suggestions? A: When I did something like this I used a separate thread listening on the pipe. The thread had a pointer/handle back to the GUI so it could send the data to be displayed. I suppose you could do it in the GUI's update/event loop, but you'd have to make sure it's doing non-blocking reads on the pipe. I did it in a separate thread because I had to do lots of processing on the data that came through. Oh and when you're doing the displaying, make sure you do it in non-trivial "chunks" at a time. It's very easy to max out the message queue (on Windows at least) that's sending the update commands to the textbox. A: In the past when I've had GUIs reading data off of external things (eg: ethernet sockets), I've had a separate thread that handles servicing the external thing, and a timed callback (generally set to something like half a second) to update the GUI widget that displays the external data. A: Here is the way I would do it (on windows): import wx, wx.lib.newevent, threading import win32event, win32pipe, win32file, pywintypes, winerror NewMessage, EVT_NEW_MESSAGE = wx.lib.newevent.NewEvent() class MessageNotifier(threading.Thread): pipe_name = r"\\.\pipe\named_pipe_demo" def __init__(self, frame): threading.Thread.__init__(self) self.frame = frame def run(self): open_mode = win32pipe.PIPE_ACCESS_DUPLEX | win32file.FILE_FLAG_OVERLAPPED pipe_mode = win32pipe.PIPE_TYPE_MESSAGE sa = pywintypes.SECURITY_ATTRIBUTES() sa.SetSecurityDescriptorDacl(1, None, 0) pipe_handle = win32pipe.CreateNamedPipe( self.pipe_name, open_mode, pipe_mode, win32pipe.PIPE_UNLIMITED_INSTANCES, 0, 0, 6000, sa ) overlapped = pywintypes.OVERLAPPED() overlapped.hEvent = win32event.CreateEvent(None, 0, 0, None) while 1: try: hr = win32pipe.ConnectNamedPipe(pipe_handle, overlapped) except: # Error connecting pipe pipe_handle.Close() break if hr == winerror.ERROR_PIPE_CONNECTED: # Client is fast, and already connected - signal event win32event.SetEvent(overlapped.hEvent) rc = win32event.WaitForSingleObject( overlapped.hEvent, win32event.INFINITE ) if rc == win32event.WAIT_OBJECT_0: try: hr, data = win32file.ReadFile(pipe_handle, 64) win32file.WriteFile(pipe_handle, "ok") win32pipe.DisconnectNamedPipe(pipe_handle) wx.PostEvent(self.frame, NewMessage(data=data)) except win32file.error: continue class Messages(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.messages = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY) self.Bind(EVT_NEW_MESSAGE, self.On_Update) def On_Update(self, event): self.messages.Value += "\n" + event.data app = wx.PySimpleApp() app.TopWindow = Messages() app.TopWindow.Show() MessageNotifier(app.TopWindow).start() app.MainLoop() Test it by sending some data with: import win32pipe print win32pipe.CallNamedPipe(r"\\.\pipe\named_pipe_demo", "Hello", 64, 0) (you also get a response in this case)
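Since the question asks for Tkinter ideally, here is a hedged Tkinter counterpart of the same thread-plus-GUI pattern (the pipe name and layout are assumed, and reading the pipe with plain open() assumes an existing pipe created by the writer): a worker thread blocks on the pipe and pushes lines onto a Queue, and the GUI polls that queue with after() so Tkinter is only ever touched from the main thread:
import Tkinter, Queue, threading
q = Queue.Queue()
def reader():  # background thread: blocking reads happen here, never in the GUI
    for line in open(r'\\.\pipe\named_pipe_demo'):
        q.put(line)
def poll():
    try:
        while True:
            text.insert('end', q.get_nowait())
    except Queue.Empty:
        pass
    root.after(500, poll)  # re-check the queue twice a second
root = Tkinter.Tk()
text = Tkinter.Text(root)
text.pack()
t = threading.Thread(target=reader)
t.setDaemon(True)
t.start()
poll()
root.mainloop()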
Showing data in a GUI where the data comes from an outside source
I'm kind of lost on how to approach this problem, I'd like to write a GUI ideally using Tkinter with python, but I initially started with Qt and found that the problem extends either to all GUI frameworks or to my limited understanding. The data in this case is coming from a named pipe, and I'd like to display whatever comes through the pipe in a textbox. I've tried having one thread listen on the pipe and another create the GUI, but in both cases one thread always seems to hang or the GUI never gets created. Any suggestions?
[ "When I did something like this I used a separate thread listening on the pipe. The thread had a pointer/handle back to the GUI so it could send the data to be displayed.\nI suppose you could do it in the GUI's update/event loop, but you'd have to make sure it's doing non-blocking reads on the pipe. I did it in a separate thread because I had to do lots of processing on the data that came through.\nOh and when you're doing the displaying, make sure you do it in non-trivial \"chunks\" at a time. It's very easy to max out the message queue (on Windows at least) that's sending the update commands to the textbox.\n", "In the past when I've had GUI's reading data off of external things (eg: ethernet sockets), I've had a separate thread that handles servicing the external thing, and a timed callback (generally set to something like half a second) to update the GUI widget that displays the external data.\n", "Here is the way I would do it (on windows):\nimport wx, wx.lib.newevent, threading\nimport win32event, win32pipe, win32file, pywintypes, winerror\n\n\nNewMessage, EVT_NEW_MESSAGE = wx.lib.newevent.NewEvent()\nclass MessageNotifier(threading.Thread):\n pipe_name = r\"\\\\.\\pipe\\named_pipe_demo\"\n\n def __init__(self, frame):\n threading.Thread.__init__(self)\n self.frame = frame\n\n def run(self):\n open_mode = win32pipe.PIPE_ACCESS_DUPLEX | win32file.FILE_FLAG_OVERLAPPED\n pipe_mode = win32pipe.PIPE_TYPE_MESSAGE\n\n sa = pywintypes.SECURITY_ATTRIBUTES()\n sa.SetSecurityDescriptorDacl(1, None, 0)\n\n pipe_handle = win32pipe.CreateNamedPipe(\n self.pipe_name, open_mode, pipe_mode,\n win32pipe.PIPE_UNLIMITED_INSTANCES,\n 0, 0, 6000, sa\n )\n\n overlapped = pywintypes.OVERLAPPED()\n overlapped.hEvent = win32event.CreateEvent(None, 0, 0, None)\n\n while 1:\n try:\n hr = win32pipe.ConnectNamedPipe(pipe_handle, overlapped)\n except:\n # Error connecting pipe\n pipe_handle.Close()\n break\n\n if hr == winerror.ERROR_PIPE_CONNECTED:\n # Client is fast, and already connected - signal event\n win32event.SetEvent(overlapped.hEvent)\n\n rc = win32event.WaitForSingleObject(\n overlapped.hEvent, win32event.INFINITE\n )\n\n if rc == win32event.WAIT_OBJECT_0:\n try:\n hr, data = win32file.ReadFile(pipe_handle, 64)\n win32file.WriteFile(pipe_handle, \"ok\")\n win32pipe.DisconnectNamedPipe(pipe_handle)\n wx.PostEvent(self.frame, NewMessage(data=data))\n except win32file.error:\n continue\n\n\nclass Messages(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n self.messages = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)\n self.Bind(EVT_NEW_MESSAGE, self.On_Update)\n\n def On_Update(self, event):\n self.messages.Value += \"\\n\" + event.data\n\n\napp = wx.PySimpleApp()\napp.TopWindow = Messages()\napp.TopWindow.Show()\nMessageNotifier(app.TopWindow).start()\napp.MainLoop()\n\nTest it by sending some data with:\nimport win32pipe\n\nprint win32pipe.CallNamedPipe(r\"\\\\.\\pipe\\named_pipe_demo\", \"Hello\", 64, 0)\n\n(you also get a response in this case)\n" ]
[ 0, 0, 0 ]
[]
[]
[ "named_pipes", "python", "user_interface" ]
stackoverflow_0000731759_named_pipes_python_user_interface.txt
Q: How to convert html entities into symbols? I have made some adaptations to the script from this answer, and I am having problems with unicode. Some of the questions end up being written poorly. Some answers and responses end up looking like: Yeah.. I know.. I&#8217;m a simpleton.. So what&#8217;s a Singleton? (2) How can I make the &#8217; be translated to the right character? Note: If that matters, I'm using python 2.6, on a French windows. >>> sys.getdefaultencoding() 'ascii' >>> sys.getfilesystemencoding() 'mbcs' EDIT1: Based on Ryan Ginstrom's post, I have been able to correct a part of the output, but I am having problems with python's unicode. In Idle / python shell: Yeah.. I know.. I’m a simpleton.. So what’s a Singleton? In a text file, when redirecting stdout Yeah.. I know.. I’m a simpleton.. So what’s a Singleton? How can I correct that? Edit2: I have tried Jarret Hardie's solution but it didn't do anything. I am on windows, using python 2.6, so my site-packages folder is at: C:\Python26\Lib\site-packages There was no siteconfig.py file, so I created one, pasted the code provided by Jarret Hardie, started a python interpreter, but it seems it has not been loaded. sys.getdefaultencoding() 'ascii' I noticed there is a site.py file at: C:\Python26\Lib\site.py I tried changing the encoding in the function def setencoding(): """Set the string encoding used by the Unicode implementation. The default is 'ascii', but if you're willing to experiment, you can change this.""" encoding = "ascii" # Default value set by _PyUnicode_Init() if 0: # Enable to support locale aware default string encodings. import locale loc = locale.getdefaultlocale() if loc[1]: encoding = loc[1] if 0: # Enable to switch off string to Unicode coercion and implicit # Unicode to string conversion. encoding = "undefined" if encoding != "ascii": # On Non-Unicode builds this will raise an AttributeError... sys.setdefaultencoding(encoding) # Needs Python Unicode build ! to set the encoding to utf-8. It worked (after a restart of python of course). >>> sys.getdefaultencoding() 'utf-8' The sad thing is that it didn't correct the characters in my program. :( A: You should be able to convert HTML/XML entities into Unicode characters. Check out this answer in SO: Decoding HTML Entities With Python Basically you want something like this: from BeautifulSoup import BeautifulStoneSoup soup = BeautifulStoneSoup(urllib2.urlopen(URL), convertEntities=BeautifulStoneSoup.ALL_ENTITIES) A: Does changing your default encoding in siteconfig.py work? In your site-packages directory (on my OS X system it's in /Library/Python/2.5/site-packages/) create a file called siteconfig.py. In this file put: import sys sys.setdefaultencoding('utf-8') The setdefaultencoding method is removed from the sys module once siteconfig.py is processed, so you must put it in site-packages so that Python will read it when the interpreter starts up.
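Separately from the default-encoding hack, the mojibake seen when redirecting stdout to a file usually just means the unicode text needs an explicit encode on the way out; a hedged fix is to write through codecs instead (the filename is assumed):
import codecs
out = codecs.open('questions.txt', 'w', encoding='utf-8')
out.write(u'So what\u2019s a Singleton?\n')  # u'\u2019' is the right single quote, i.e. &#8217;
out.close()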
How to convert html entities into symbols?
I have made some adaptations to the script from this answer, and I am having problems with unicode. Some of the questions end up being written poorly. Some answers and responses end up looking like: Yeah.. I know.. I&#8217;m a simpleton.. So what&#8217;s a Singleton? (2) How can I make the &#8217; be translated to the right character? Note: If that matters, I'm using Python 2.6 on a French Windows. >>> sys.getdefaultencoding() 'ascii' >>> sys.getfilesystemencoding() 'mbcs' EDIT1: Based on Ryan Ginstrom's post, I have been able to correct a part of the output, but I am having problems with Python's unicode. In the IDLE / Python shell: Yeah.. I know.. I’m a simpleton.. So what’s a Singleton? In a text file, when redirecting stdout: Yeah.. I know.. I’m a simpleton.. So what’s a Singleton? How can I correct that? Edit2: I have tried Jarret Hardie's solution but it didn't do anything. I am on Windows, using Python 2.6, so my site-packages folder is at: C:\Python26\Lib\site-packages There was no siteconfig.py file, so I created one, pasted the code provided by Jarret Hardie, and started a Python interpreter, but it seems like it has not been loaded. sys.getdefaultencoding() 'ascii' I noticed there is a site.py file at: C:\Python26\Lib\site.py I tried changing the encoding in the function def setencoding(): """Set the string encoding used by the Unicode implementation. The default is 'ascii', but if you're willing to experiment, you can change this.""" encoding = "ascii" # Default value set by _PyUnicode_Init() if 0: # Enable to support locale aware default string encodings. import locale loc = locale.getdefaultlocale() if loc[1]: encoding = loc[1] if 0: # Enable to switch off string to Unicode coercion and implicit # Unicode to string conversion. encoding = "undefined" if encoding != "ascii": # On Non-Unicode builds this will raise an AttributeError... sys.setdefaultencoding(encoding) # Needs Python Unicode build ! to set the encoding to utf-8. It worked (after a restart of Python, of course). >>> sys.getdefaultencoding() 'utf-8' The sad thing is that it didn't correct the characters in my program. :(
[ "You should be able to convert HTMl/XML entities into Unicode characters. Check out this answer in SO:\nDecoding HTML Entities With Python\nBasically you want something like this:\nfrom BeautifulSoup import BeautifulStoneSoup\n\nsoup = BeautifulStoneSoup(urllib2.urlopen(URL),\n convertEntities=BeautifulStoneSoup.ALL_ENTITIES)\n\n", "Does changing your default encoding in siteconfig.py work?\nIn your site-packages file (on my OS X system it's in /Library/Python/2.5/site-packages/) create a file called siteconfig.py. In this file put:\nimport sys\nsys.setdefaultencoding('utf-8')\n\nThe setdefaultencoding method is removed from the sys module once siteconfig.py is processed, so you must put it in site-packages so that Python will read it when the interpreter starts up.\n" ]
[ 1, 0 ]
[]
[]
[ "beautifulsoup", "html_entities", "python", "unicode" ]
stackoverflow_0000728296_beautifulsoup_html_entities_python_unicode.txt
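If pulling in BeautifulSoup is not an option, the entity decoding can be done with the standard library alone. A Python 2 sketch, assuming the input is already a unicode object (or plain ASCII); unknown entities are left untouched:

    import re
    import htmlentitydefs

    def unescape(text):
        # Turn &#8217; / &#x2019; / &rsquo; into the actual character.
        def fix(match):
            body = match.group(1)
            try:
                if body.lower().startswith('#x'):
                    return unichr(int(body[2:], 16))
                elif body.startswith('#'):
                    return unichr(int(body[1:]))
                return unichr(htmlentitydefs.name2codepoint[body])
            except (KeyError, ValueError):
                return match.group(0)
        return re.sub(r'&(#?[xX]?\w+);', fix, text)

The second symptom in the question (’ in the redirected file) is a separate problem: that is what UTF-8 bytes look like when viewed as cp1252. Rather than changing the interpreter-wide default encoding, encode explicitly at the output boundary, e.g. out.write(text.encode('utf-8')).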
Q: HTML Rich Textbox I'm writing a web-app using Python and Pylons. I need a textbox that is rich (ie, provides the ability to bold/underline/add bullets..etc...). Does anyone know a library or widget I can use? It doesn't have to be Python/Pylons specific, as it can be a Javascript implementation as well. Thanks! A: There are several very mature javascript implementations that are server-framework agnostic: http://www.fckeditor.net/ TinyMCE WMD (used by SO) The wikipedia article on Free HTML editors has a good overview, though note that not all are for application embedding. A: ExtJS's HtmlEditor was the best I found (license issues aside): http://extjs.com/deploy/dev/docs/?class=Ext.form.HtmlEditor ExtJS is a bit heavy-weight, but that HtmlEditor was the most responsive and best-looking out of the box that I found. It's worth running the output through HTMLTidy, which there are python libraries for. A: webkit-gtk is getting very stable, and i believe has python bindings now so technically you could use that (then your text editor merely needs to be <body contenteditable></body> and you'd be done. Unfortunately i'm not sure how complete its bindings are at present
HTML Rich Textbox
I'm writing a web-app using Python and Pylons. I need a textbox that is rich (i.e., provides the ability to bold/underline/add bullets, etc.). Does anyone know a library or widget I can use? It doesn't have to be Python/Pylons specific, as it can be a JavaScript implementation as well. Thanks!
[ "There are several very mature javascript implementations that are server-framework agnostic:\n\nhttp://www.fckeditor.net/\nTinyMCE\nWMD (used by SO)\n\nThe wikipedia article on Free HTML editors has a good overview, though note that not all are for application embedding.\n", "ExtJS's HtmlEditor was the best I found (license issues aside):\nhttp://extjs.com/deploy/dev/docs/?class=Ext.form.HtmlEditor\nExtJS is a bit heavy-weight, but that HtmlEditor was the most responsive and best-looking out of the box that I found. It's worth running the output through HTMLTidy, which there are python libraries for.\n", "webkit-gtk is getting very stable, and i believe has python bindings now so technically you could use that (then your text editor merely needs to be <body contenteditable></body> and you'd be done. Unfortunately i'm not sure how complete its bindings are at present\n" ]
[ 5, 2, 1 ]
[]
[]
[ "http", "javascript", "pylons", "python", "widget" ]
stackoverflow_0000732429_http_javascript_pylons_python_widget.txt
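The second answer suggests running the editor's output through HTML Tidy; a sketch with one of the Python bindings (pytidylib's tidy_fragment here; the call is real, the wrapper around it is illustrative):

    from tidylib import tidy_fragment

    def clean_editor_html(dirty):
        # Normalize whatever markup the rich-text widget submitted;
        # remember this is user input until proven otherwise.
        fragment, errors = tidy_fragment(dirty)
        if errors:
            pass  # log or reject, depending on how strict you want to be
        return fragment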
Q: Resetting the main GUI window I just want the equivalent of closing and reopening my main program. I want to invoke it when a "new"-like option from a drop-down menu is clicked on. Something like calling root.destroy() and then re-initiating the mainloop. How can I get this done? A: There are at least three ways you can solve this. Method one: the head fake. When you create your app, don't put all the widgets in the root window. Instead, hide the root window and create a new toplevel that represents your application. When you restart it's just a matter of destroying that new toplevel and re-running all your start-up logic. Method two: nuke and pave. Similar in concept but slightly different in execution. In this model, when you want to restart you simply delete all the widgets in the main window, reset the geometry to null (so the window will once again resize itself based on its contents) and then run the logic that draws all the other widgets. Method three: if it worked the first time... As suggested by Martin v. Löwis, simply have your program exec a new instance of the program, then exit. The first two methods are potentially faster and have the (dis?)advantage of preserving the current environment. For example you could save the copy of the clipboard, column widths, etc. The third method absolutely guarantees a blank slate. A: If you are on Unix, restart the entire application with os.execv. Make sure you pass all command line arguments etc. A: You could take all your GUI building logic and initial state code out of the mainloop and put it into functions. Call these functions from the mainloop (something like: buildgui() & initstate()) and then, when the user clicks your menu icon, just call initstate() to set it back like it was when the application first started.
Resetting the main GUI window
I just want the equivalent of closing and reopening my main program. I want to invoke it when a "new"-like option from a drop-down menu is clicked on. Something like calling root.destroy() and then re-initiating the mainloop. How can I get this done?
[ "There are at least three ways you can solve this. \nMethod one: the head fake. When you create your app, don't put all the widgets in the root window. Instead, hide the root window and create a new toplevel that represents your application. When you restart it's just a matter of destroying that new toplevel and re-running all your start-up logic.\nMethod two: nuke and pave. Similar in concept but slightly different in execution. In this model, when you want to restart you simply delete all the widgets in the main window, reset the geometry to null (so the window will once again resize itself based on its contents) and then run the logic that draws all the other widgets.\nMethod three: if it worked the first time... As suggested by Martin v. Löwis, simply have your program exec a new instance of the program, then exit. \nThe first two methods are potentially faster and have the (dis?)advantage of preserving the current environment. For example you could save the copy of the clipboard, column widths, etc. The third method absolutely guarantees a blank slate.\n", "If you are on Unix, restart the entire application with os.execv. Make sure you pass all command line arguments etc.\n", "You could take all your GUI building logic and initial state code out of the mainloop and put it into functions. Call these functions from the mainloop (something like: buildgui() & initstate()) and then, when the user clicks your menu icon, just call initstate() to set it back like it was when the application first started.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0000731887_python_tkinter.txt
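A compressed sketch of methods two and three from the first answer, written for Python 2's Tkinter (build_gui stands in for whatever widget-construction code the real program has):

    import os
    import sys
    import Tkinter as Tk

    root = Tk.Tk()

    def build_gui():
        # All widget construction lives here so it can be re-run.
        Tk.Label(root, text='fresh window').pack()

    def reset():
        # Method two: destroy every child, let the window shrink back,
        # then rebuild from scratch.
        for child in root.winfo_children():
            child.destroy()
        root.geometry('')  # empty geometry = size to contents again
        build_gui()

    def restart():
        # Method three: replace this process with a brand-new instance.
        os.execv(sys.executable, [sys.executable] + sys.argv)

    build_gui()
    root.mainloop()

Wire reset (or restart) to the "new" menu entry via its command= callback.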
Q: Django Model: Returning username from currently logged in user I'm working on a Django app for hosting media (specifically audio and images). I have image galleries and photos separate in my model, and have them linked with a ForeignKey (not sure if that's correct, but still learning). What I need is for the Album class's __unicode__ to return the album owner's username. class Album(models.Model): artist = models.ForeignKey(User, unique=True, related_name='artpunk') def __unicode__(self): return self.artist.username I know the username property exists, and confirmed it by inserting a dir() and checking the console output. The problem is when I enter the image section of the admin panel, it simply states "Unrecognised command." Can User properties not be accessed by models? Or am I doing something else wrong? EDIT: Forgot to mention, using Python 2.6 with Django 1.0.2. The exact text of the error is, as above, simply "Unrecognised command" in bold, and I've already run syncdb without issue. However, I reran syncdb (gave no output) this morning just to try again and now it seems to be working fine. It's reproducible by changing the following: def __unicode__(self): return self.artist.username To something like this: def __unicode__(self): return self.artist.username+'\'s Gallery' A: There should be no problem accessing the user (even as a foreign key) from a model. I just finished testing it out myself, and there doesn't appear to be any significant difference. def __unicode__(self): return self.user.username On a side note, you should also just be able to return self.artist, since I believe that User.__unicode__() returns the username anyway. What are the exact details of the error? What version of Django/Python are you using? Did you make a change to your model that's not yet reflected in the database? Sometimes I've noticed you just need to restart the test server for things to work well. Particularly in the admin. In response to your edit, try casting the username as a string: str(self.user.username)
Django Model: Returning username from currently logged in user
I'm working on a Django app for hosting media (specifically audio and images). I have image galleries and photos separate in my model, and have them linked with a ForeignKey (not sure if that's correct, but still learning). What I need is for the Album class's __unicode__ to return the album owner's username. class Album(models.Model): artist = models.ForeignKey(User, unique=True, related_name='artpunk') def __unicode__(self): return self.artist.username I know the username property exists, and confirmed it by inserting a dir() and checking the console output. The problem is when I enter the image section of the admin panel, it simply states "Unrecognised command." Can User properties not be accessed by models? Or am I doing something else wrong? EDIT: Forgot to mention, using Python 2.6 with Django 1.0.2. The exact text of the error is, as above, simply "Unrecognised command" in bold, and I've already run syncdb without issue. However, I reran syncdb (gave no output) this morning just to try again and now it seems to be working fine. It's reproducible by changing the following: def __unicode__(self): return self.artist.username To something like this: def __unicode__(self): return self.artist.username+'\'s Gallery'
[ "There should be no problem accessing the user (even as a foreign key) from a model. I just finished testing it out myself, and there doesn't appear to be any significant difference.\ndef __unicode__(self):\n return self.user.username\n\nOn a side note, you should also just be able to return self.artist, since I believe that User.__unicode__() returns the username anyway.\nWhat are the exact details of the error? What version of Django/Python are you using? Did you make a change to your model that's not yet reflected in the database? Sometimes I've noticed you just need to restart the test server for things to work well. Particularly in the admin.\nIn response to your edit, try casting the username as a string:\nstr(self.user.username)\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_admin", "django_models", "python" ]
stackoverflow_0000732405_django_django_admin_django_models_python.txt
Q: Function overloading in Python: Missing As function overloading says: Function overloading is absent in Python. As far as I feel this a big handicap since its also an object-oriented (OO) language. Initially I found that unable to differentiate between the argument types was difficult, but the dynamic nature of Python made it easy (e.g. list, tuples, strings are much similar). However, counting the number of arguments passed and then doing the job is like an overkill. A: Now, unless you're trying to write C++ code using Python syntax, what would you need overloading for? I think it's exactly opposite. Overloading is only necessary to make strongly-typed languages act more like Python. In Python you have keyword argument, and you have *args and **kwargs. See for example: What is a clean, Pythonic way to have multiple constructors in Python? A: As unwind noted, keyword arguments with default values can go a long way. I'll also state that in my opinion, it goes against the spirit of Python to worry a lot about what types are passed into methods. In Python, I think it's more accepted to use duck typing -- asking what an object can do, rather than what it is. Thus, if your method may accept a string or a tuple, you might do something like this: def print_names(names): """Takes a space-delimited string or an iterable""" try: for name in names.split(): # string case print name except AttributeError: for name in names: print name Then you could do either of these: print_names("Ryan Billy") print_names(("Ryan", "Billy")) Although an API like that sometimes indicates a design problem. A: You don't need function overloading, as you have the *args and **kwargs arguments. The fact is that function overloading is based on the idea that passing different types you will execute different code. If you have a dynamically typed language like Python, you should not distinguish by type, but you should deal with interfaces and their compliance with the code you write. For example, if you have code that can handle either an integer, or a list of integers, you can try iterating on it and if you are not able to, then you assume it's an integer and go forward. Of course it could be a float, but as far as the behavior is concerned, if a float and an int appear to be the same, then they can be interchanged. A: Oftentimes you see the suggestion use use keyword arguments, with default values, instead. Look into that. A: You can pass a mutable container datatype into a function, and it can contain anything you want. If you need a different functionality, name the functions differently, or if you need the same interface, just write an interface function (or method) that calls the functions appropriately based on the data received. It took a while to me to get adjusted to this coming from Java, but it really isn't a "big handicap".
Function overloading in Python: Missing
As function overloading says: Function overloading is absent in Python. As far as I can tell, this is a big handicap, since it's also an object-oriented (OO) language. Initially I found that being unable to differentiate between argument types was difficult, but the dynamic nature of Python made it easy (e.g. lists, tuples and strings behave much alike). However, counting the number of arguments passed and then doing the job accordingly feels like overkill.
[ "Now, unless you're trying to write C++ code using Python syntax, what would you need overloading for?\nI think it's exactly opposite. Overloading is only necessary to make strongly-typed languages act more like Python. In Python you have keyword argument, and you have *args and **kwargs.\nSee for example: What is a clean, Pythonic way to have multiple constructors in Python?\n", "As unwind noted, keyword arguments with default values can go a long way.\nI'll also state that in my opinion, it goes against the spirit of Python to worry a lot about what types are passed into methods. In Python, I think it's more accepted to use duck typing -- asking what an object can do, rather than what it is.\nThus, if your method may accept a string or a tuple, you might do something like this:\ndef print_names(names):\n \"\"\"Takes a space-delimited string or an iterable\"\"\"\n try:\n for name in names.split(): # string case\n print name\n except AttributeError:\n for name in names:\n print name\n\nThen you could do either of these:\nprint_names(\"Ryan Billy\")\nprint_names((\"Ryan\", \"Billy\"))\n\nAlthough an API like that sometimes indicates a design problem.\n", "You don't need function overloading, as you have the *args and **kwargs arguments.\nThe fact is that function overloading is based on the idea that passing different types you will execute different code. If you have a dynamically typed language like Python, you should not distinguish by type, but you should deal with interfaces and their compliance with the code you write.\nFor example, if you have code that can handle either an integer, or a list of integers, you can try iterating on it and if you are not able to, then you assume it's an integer and go forward. Of course it could be a float, but as far as the behavior is concerned, if a float and an int appear to be the same, then they can be interchanged.\n", "Oftentimes you see the suggestion use use keyword arguments, with default values, instead. Look into that.\n", "You can pass a mutable container datatype into a function, and it can contain anything you want.\nIf you need a different functionality, name the functions differently, or if you need the same interface, just write an interface function (or method) that calls the functions appropriately based on the data received.\nIt took a while to me to get adjusted to this coming from Java, but it really isn't a \"big handicap\".\n" ]
[ 35, 32, 21, 6, 6 ]
[]
[]
[ "missing_features", "overloading", "python" ]
stackoverflow_0000733264_missing_features_overloading_python.txt
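To make the keyword-defaults suggestion concrete: where a statically typed language would declare several overloads differing in arity, a single Python signature usually covers them all (the names below are made up for illustration):

    def draw(shape, color='black', width=1, fill=False):
        ...

    draw(s)                       # the 'one-argument overload'
    draw(s, color='red')          # the 'two-argument overload'
    draw(s, width=3, fill=True)   # any combination, no extra definitions

Overloads that differ in type rather than arity map onto the duck-typing approach shown in the print_names example above.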
Q: Access list of tuples I have a list that contains several tuples, like: [('a_key', 'a value'), ('another_key', 'another value')] where the first tuple-values act as dictionary-keys. I'm now searching for a python-like way to access the key/value-pairs, like: "mylist.a_key" or "mylist['a_key']" without iterating over the list. any ideas? A: You can't do it without any iteration. You will either need iteration to convert it into a dict, at which point key access will become possible sans iteration, or you will need to iterate over it for each key access. Converting to a dict seems the better idea-- in the long run it is more efficient, but more importantly, it represents how you actually see this data structure-- as pairs of keys and values. >>> x = [('a_key', 'a value'), ('another_key', 'another value')] >>> y = dict(x) >>> y['a_key'] 'a value' >>> y['another_key'] 'another value' A: If you're generating the list yourself, you might be able to create it as a dictionary at source (which allows for key, value pairs). Otherwise, Van Gale's defaultdict is the way to go I would think. Edit: As mentioned in the comments, defaultdict is not required here unless you need to deal with corner cases like several values with the same key in your list. Still, if you can originally generate the "list" as a dictionary, you save yourself having to iterate back over it afterwards.
Access list of tuples
I have a list that contains several tuples, like: [('a_key', 'a value'), ('another_key', 'another value')] where the first tuple values act as dictionary keys. I'm now searching for a Pythonic way to access the key/value pairs, like: "mylist.a_key" or "mylist['a_key']" without iterating over the list. Any ideas?
[ "You can't do it without any iteration. You will either need iteration to convert it into a dict, at which point key access will become possible sans iteration, or you will need to iterate over it for each key access. Converting to a dict seems the better idea-- in the long run it is more efficient, but more importantly, it represents how you actually see this data structure-- as pairs of keys and values.\n>>> x = [('a_key', 'a value'), ('another_key', 'another value')]\n>>> y = dict(x)\n>>> y['a_key']\n'a value'\n>>> y['another_key']\n'another value'\n\n", "If you're generating the list yourself, you might be able to create it as a dictionary at source (which allows for key, value pairs).\nOtherwise, Van Gale's defaultdict is the way to go I would think.\nEdit:\nAs mentioned in the comments, defaultdict is not required here unless you need to deal with corner cases like several values with the same key in your list. Still, if you can originally generate the \"list\" as a dictionary, you save yourself having to iterate back over it afterwards.\n" ]
[ 14, 3 ]
[]
[]
[ "python" ]
stackoverflow_0000733574_python.txt
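One caveat worth spelling out: dict() keeps only the last value for a duplicated key. If the list can repeat keys and every value matters, the defaultdict mentioned in the answers is the usual fix:

    from collections import defaultdict

    pairs = [('a_key', 'a value'), ('a_key', 'later'), ('another_key', 'x')]

    d = defaultdict(list)
    for key, value in pairs:
        d[key].append(value)

    print d['a_key']  # ['a value', 'later']; dict(pairs) would keep only 'later'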
Q: How do I make this progress bar close when it is done I commonly write Python scipts to do conversion tasks for me and whenever I write one that takes a while I use this little progress bar to check on it import sys import time from PyQt4 import QtGui app = QtGui.QApplication(sys.argv) barra = QtGui.QProgressBar() barra.show() barra.setMinimum(0) barra.setMaximum(10) for a in range(10): time.sleep(1) barra.setValue(a) app.exec_() I have 2 questions: How do I make it close itself when it reaches 100% (It stays open and if you close the python shell before clicking the X button you crash it.) also, When it loses and regains focus, it stops painting correctly. the process will continue to completion but the progress bar space is all white. How do I handle this? A: Well, because you set your Maximum to 10, your progress bar shouldn't reach 100% because for a in range(10): time.sleep(1) barra.setValue(a) will only iterate up to 9. Progress bars don't close automatically. You will have to call barra.hide() after your loop. As for the paint problem, it's likely because whatever script you ran this script from is in the same thread as the progress bar. So when you switch away and back the paint events are delayed by the actual processing of the parent script. You can either set a timer to periodically call .update() or .repaint() on 'barra' (update() is recommended over repaint()) OR you would want your main processing code to run in a QThread, which is also available in the PyQt code, but that will require some reading on your part :) The doc is for Qt, but it applies to PyQt as well: https://doc.qt.io/qt-4.8/threads.html
How do I make this progress bar close when it is done
I commonly write Python scripts to do conversion tasks for me, and whenever I write one that takes a while I use this little progress bar to check on it: import sys import time from PyQt4 import QtGui app = QtGui.QApplication(sys.argv) barra = QtGui.QProgressBar() barra.show() barra.setMinimum(0) barra.setMaximum(10) for a in range(10): time.sleep(1) barra.setValue(a) app.exec_() I have 2 questions: How do I make it close itself when it reaches 100%? (It stays open, and if you close the Python shell before clicking the X button you crash it.) Also, when it loses and regains focus, it stops painting correctly. The process will continue to completion but the progress bar space is all white. How do I handle this?
[ "Well, because you set your Maximum to 10, your progress bar shouldn't reach 100% because \nfor a in range(10):\n time.sleep(1)\n barra.setValue(a)\n\nwill only iterate up to 9.\nProgress bars don't close automatically. You will have to call \nbarra.hide()\n\nafter your loop.\nAs for the paint problem, it's likely because whatever script you ran this script from is in the same thread as the progress bar. So when you switch away and back the paint events are delayed by the actual processing of the parent script. You can either set a timer to periodically call .update() or .repaint() on 'barra' (update() is recommended over repaint()) OR you would want your main processing code to run in a QThread, which is also available in the PyQt code, but that will require some reading on your part :)\nThe doc is for Qt, but it applies to PyQt as well:\nhttps://doc.qt.io/qt-4.8/threads.html\n" ]
[ 5 ]
[]
[]
[ "progress_bar", "pyqt", "python" ]
stackoverflow_0000732829_progress_bar_pyqt_python.txt
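For the QThread route the answer points to, a rough sketch (it assumes a PyQt4 recent enough for new-style signals; the loop body is a placeholder for the real conversion task):

    from PyQt4 import QtCore

    class Worker(QtCore.QThread):
        progress = QtCore.pyqtSignal(int)

        def run(self):
            # Placeholder for the real long-running job.
            for a in range(1, 11):
                self.sleep(1)  # QThread.sleep takes seconds
                self.progress.emit(a)

    worker = Worker()
    worker.progress.connect(barra.setValue)
    worker.finished.connect(barra.close)  # answers question 1 as well
    worker.start()

Because the work now happens off the GUI thread, the bar keeps repainting when it loses and regains focus, and range(1, 11) actually reaches the maximum of 10.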
Q: Type checking of arguments Python Sometimes checking of arguments in Python is necessary. For example, I have a function which accepts either the address of another node in the network as a raw string, or a Node object which encapsulates the other node's information. I use the type() function as in: if type(n) == type(Node): do this elif type(n) == type(str) do this Is this a good way to do this? Update 1: Python 3 has annotations for function parameters. These can be used for type checks with a tool such as http://mypy-lang.org/ A: Use isinstance(). Sample: if isinstance(n, unicode): # do this elif isinstance(n, Node): # do that ... A: >>> isinstance('a', str) True >>> isinstance(n, Node) True A: Sounds like you're after a "generic function" - one which behaves differently based on the arguments given. It's a bit like how you'll get a different function when you call a method on a different object, but rather than just using the first argument (the object/self) to lookup the function you instead use all of the arguments. Turbogears uses something like this for deciding how to convert objects to JSON - if I recall correctly. There's an article from IBM on using the dispatcher package for this sort of thing: From that article: import dispatch @dispatch.generic() def doIt(foo, other): "Base generic function of 'doIt()'" @doIt.when("isinstance(foo,int) and isinstance(other,str)") def doIt(foo, other): print "foo is an unrestricted int |", foo, other @doIt.when("isinstance(foo,str) and isinstance(other,int)") def doIt(foo, other): print "foo is str, other an int |", foo, other @doIt.when("isinstance(foo,int) and 3<=foo<=17 and isinstance(other,str)") def doIt(foo, other): print "foo is between 3 and 17 |", foo, other @doIt.when("isinstance(foo,int) and 0<=foo<=1000 and isinstance(other,str)") def doIt(foo, other): print "foo is between 0 and 1000 |", foo, other A: You can also use a try/except to type check if necessary: def my_function(this_node): try: # call a method/attribute for the Node object if this_node.address: # more code here pass except AttributeError, e: # either this is not a Node or maybe it's a string, # so behave accordingly pass You can see an example of this in Beginning Python in the section about generators (page 197 in my edition) and I believe in the Python Cookbook. Many times catching an AttributeError or TypeError is simpler and apparently faster. Also, it may work best in this manner because then you are not tied to a particular inheritance tree (e.g., your object could be a Node or it could be some other object that has the same behavior as a Node). A: No, typechecking arguments in Python is not necessary. It is never necessary. If your code accepts addresses as rawstrings or as Node objects, your design is broken. That comes from the fact that if you don't already know the type of an object in your own program, then you're doing something wrong already. Typechecking hurts code reuse and reduces performance. Having a function that performs different things depending on the type of the object passed is bug-prone and has a behavior harder to understand and maintain. You have the following saner options: Make a Node object constructor that accepts rawstrings, or a function that converts strings into Node objects. Make your function assume the argument passed is a Node object. That way, if you need to pass a string to the function, you just do: myfunction(Node(some_string)) That's your best option, it is clean, easy to understand and maintain. Anyone reading the code immediately understands what is happening, and you don't have to typecheck. Make two functions, one that accepts Node objects and one that accepts rawstrings. You can make one call the other internally, in the most convenient way (myfunction_str can create a Node object and call myfunction_node, or the other way around). Make Node objects have a __str__ method and inside your function, call str() on the received argument. That way you always get a string by coercion. In any case, don't typecheck. It is completely unnecessary and has only downsides. Refactor your code instead in a way you don't need to typecheck. You only get benefits in doing so, both in the short and the long run.
Type checking of arguments Python
Sometimes checking of arguments in Python is necessary. For example, I have a function which accepts either the address of another node in the network as a raw string, or a Node object which encapsulates the other node's information. I use the type() function as in: if type(n) == type(Node): do this elif type(n) == type(str) do this Is this a good way to do this? Update 1: Python 3 has annotations for function parameters. These can be used for type checks with a tool such as http://mypy-lang.org/
[ "Use isinstance(). Sample:\nif isinstance(n, unicode):\n # do this\nelif isinstance(n, Node):\n # do that\n...\n\n", ">>> isinstance('a', str)\nTrue\n>>> isinstance(n, Node)\nTrue\n\n", "Sounds like you're after a \"generic function\" - one which behaves differently based on the arguments given. It's a bit like how you'll get a different function when you call a method on a different object, but rather than just using the first argument (the object/self) to lookup the function you instead use all of the arguments.\nTurbogears uses something like this for deciding how to convert objects to JSON - if I recall correctly.\nThere's an article from IBM on using the dispatcher package for this sort of thing:\nFrom that article:\nimport dispatch\n@dispatch.generic()\ndef doIt(foo, other):\n \"Base generic function of 'doIt()'\"\n@doIt.when(\"isinstance(foo,int) and isinstance(other,str)\")\ndef doIt(foo, other):\n print \"foo is an unrestricted int |\", foo, other\n@doIt.when(\"isinstance(foo,str) and isinstance(other,int)\")\ndef doIt(foo, other):\n print \"foo is str, other an int |\", foo, other\n@doIt.when(\"isinstance(foo,int) and 3<=foo<=17 and isinstance(other,str)\")\ndef doIt(foo, other):\n print \"foo is between 3 and 17 |\", foo, other\n@doIt.when(\"isinstance(foo,int) and 0<=foo<=1000 and isinstance(other,str)\")\ndef doIt(foo, other):\n print \"foo is between 0 and 1000 |\", foo, other\n\n", "You can also use a try catch to type check if necessary:\ndef my_function(this_node):\n try:\n # call a method/attribute for the Node object\n if this_node.address:\n # more code here\n pass\n except AttributeError, e:\n # either this is not a Node or maybe it's a string, \n # so behavior accordingly\n pass\n\nYou can see an example of this in Beginning Python in the second about generators (page 197 in my edition) and I believe in the Python Cookbook. Many times catching an AttributeError or TypeError is simpler and apparently faster. Also, it may work best in this manner because then you are not tied to a particular inheritance tree (e.g., your object could be a Node or it could be something other object that has the same behavior as a Node).\n", "No, typechecking arguments in Python is not necessary. It is never \nnecessary.\nIf your code accepts addresses as rawstring or as a Node object, your\ndesign is broken.\nThat comes from the fact that if you don't know already the type of an\nobject in your own program, then you're doing something wrong already.\nTypechecking hurts code reuse and reduces performance. Having a function\nthat performs different things depending on the type of the object passed\nis bug-prone and has a behavior harder to understand and maintain.\nYou have following saner options:\n\nMake a Node object constructor that accepts rawstrings, or a function\nthat converts strings in Node objects. Make your function assume the\nargument passed is a Node object. That way, if you need to pass a\nstring to the function, you just do:\nmyfunction(Node(some_string))\n\nThat's your best option, it is clean, easy to understand and maintain.\nAnyone reading the code immediatelly understands what is happening,\nand you don't have to typecheck.\nMake two functions, one that accepts Node objects and one that accepts\nrawstrings. 
You can make one call the other internally, in the most \nconvenient way (myfunction_str can create a Node object and call\nmyfunction_node, or the other way around).\nMake Node objects have a __str__ method and inside your function,\ncall str() on the received argument. That way you always get a string\nby coercion.\n\nIn any case, don't typecheck. It is completely unnecessary and has only\ndownsides. Refactor your code instead in a way you don't need to typecheck.\nYou only get benefits in doing so, both in short and long run.\n" ]
[ 120, 17, 7, 6, 4 ]
[]
[]
[ "python", "typechecking" ]
stackoverflow_0000734368_python_typechecking.txt
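A small illustration of the Python 3 annotations mentioned in the update. CPython itself ignores annotations at runtime; only an external checker such as mypy enforces them:

    def send(node: Node) -> None:
        ...

    send(Node('10.0.0.1'))  # fine
    send('10.0.0.1')        # still runs, but mypy reports a type error

The convert-at-the-boundary advice from the last answer still applies: accept one type in the signature and wrap strings with Node(...) at the call site.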
Q: How do I run another script in Python without waiting for it to finish? I am creating a little dashboard for a user that will allow him to run specific jobs. I am using Django so I want him to be able to click a link to start the job and then return the page back to him with a message that the job is running. The results of the job will be emailed to him later. I believe I am supposed to use subprocess.Popen but I'm not sure of that. So in pseudocode, here is what I want to do: if job == 1: run script in background: /path/to/script.py return 'Job is running' A: p = subprocess.Popen([sys.executable, '/path/to/script.py'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) That will start the subprocess in background. Your script will keep running normally. Read the documentation here. A: Running this through a message queue is definitely the way to go if you're thinking about long-term scaling. Send a message to the queue who's running constantly in the background, and write job handlers to deal with the different sorts of messages. Since you're using Django, I think Beanstalkd is a pretty good fit. Here's a pretty nice tutorial on the subject. The first comment in that article also has some good tips. Personally I've rolled with a custom in-memory queue server written in Erlang, with Python-bindings written in C. But redis looks like it might work out as a great contender for future queuing/messaging-needs. Hope this helps! A: subprocess.Popen is indeed what you are looking for. A: Although if you find that you want to start communicating a bunch of information between the subprocess and the parent, you may want to consider a thread, or RPC framework like Twisted. But most likely those are too heavy for your application.
How do I run another script in Python without waiting for it to finish?
I am creating a little dashboard for a user that will allow him to run specific jobs. I am using Django so I want him to be able to click a link to start the job and then return the page back to him with a message that the job is running. The results of the job will be emailed to him later. I believe I am supposed to use subprocess.Popen but I'm not sure of that. So in pseudocode, here is what I want to do: if job == 1: run script in background: /path/to/script.py return 'Job is running'
[ "p = subprocess.Popen([sys.executable, '/path/to/script.py'], \n stdout=subprocess.PIPE, \n stderr=subprocess.STDOUT)\n\nThat will start the subprocess in background. Your script will keep running normally.\nRead the documentation here.\n", "Running this through a message queue is definitely the way to go if you're thinking about long-term scaling. Send a message to the queue who's running constantly in the background, and write job handlers to deal with the different sorts of messages. \nSince you're using Django, I think Beanstalkd is a pretty good fit. Here's a pretty nice tutorial on the subject. The first comment in that article also has some good tips.\nPersonally I've rolled with a custom in-memory queue server written in Erlang, with Python-bindings written in C. But redis looks like it might work out as a great contender for future queuing/messaging-needs. Hope this helps!\n", "subprocess.Popen is indeed what you are looking for.\n", "Although if you find that you want to start communicating a bunch of information between the subprocess and the parent, you may want to consider a thread, or RPC framework like Twisted.\nBut most likely those are too heavy for your application.\n" ]
[ 66, 6, 2, 1 ]
[]
[]
[ "background", "django", "process", "python", "subprocess" ]
stackoverflow_0000546017_background_django_process_python_subprocess.txt
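Tying the accepted answer back to the Django view in the question, a sketch of the wiring (the script path and job numbering are the question's own placeholders; URL parameters arrive as strings, hence '1'):

    import subprocess
    import sys
    from django.http import HttpResponse

    def start_job(request, job):
        if job == '1':
            # Popen returns immediately; the child outlives the response.
            subprocess.Popen([sys.executable, '/path/to/script.py'])
            return HttpResponse('Job is running')
        return HttpResponse('Unknown job')

One caveat: if the parent never calls wait() or poll(), finished children can linger as zombies on Unix, which is part of why the message-queue answer scales better.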
Q: Propagating application settings Probably a very common question, but couldn't find suitable answer yet.. I have a (Python w/ C++ modules) application that makes heavy use of an SQLite database and its path gets supplied by user on application start-up. Every time some part of application needs access to database, I plan to acquire a new session and discard it when done. For that to happen, I obviously need access to the path supplied on startup. Couple of ways that I see it happening: 1. Explicit arguments The database path is passed everywhere it needs to be through an explicit parameter and database session is instantiated with that explicit path. This is perhaps the most modular, but seems to be incredibly awkward. 2. Database path singleton The database session object would look like: import foo.options class DatabaseSession(object): def __init__(self, path=foo.options.db_path): ... I consider this to be the lesser-evil singleton, since we're storing only constant strings, which don't change during application runtime. This leaves it possible to override the default and unit test the DatabaseSession class if necessary. 3. Database path singleton + static factory method Perhaps slight improvement over the above: def make_session(path=None): import foo.options if path is None: path = foo.options.db_path return DatabaseSession(path) class DatabaseSession(object): def __init__(self, path): ... This way the module doesn't depend on foo.options at all, unless we're using the factory method. Additionally, the method can perform stuff like session caching or whatnot. And then there are other patterns, which I don't know of. I vaguely saw something similar in web frameworks, but I don't have any experience with those. My example is quite specific, but I imagine it also expands to other application settings, hence the title of the post. I would like to hear your thoughts about what would be the best way to arrange this. A: Yes, there are others. Your option 3 though is very Pythonic. Use a standard Python module to encapsulate options (this is the way web frameworks like Django do it) Use a factory to emit properly configured sessions. Since SQLite already has a "connection", why not use that? What does your DatabaseSession class add that the built-in connection lacks?
Propagating application settings
Probably a very common question, but couldn't find suitable answer yet.. I have a (Python w/ C++ modules) application that makes heavy use of an SQLite database and its path gets supplied by user on application start-up. Every time some part of application needs access to database, I plan to acquire a new session and discard it when done. For that to happen, I obviously need access to the path supplied on startup. Couple of ways that I see it happening: 1. Explicit arguments The database path is passed everywhere it needs to be through an explicit parameter and database session is instantiated with that explicit path. This is perhaps the most modular, but seems to be incredibly awkward. 2. Database path singleton The database session object would look like: import foo.options class DatabaseSession(object): def __init__(self, path=foo.options.db_path): ... I consider this to be the lesser-evil singleton, since we're storing only constant strings, which don't change during application runtime. This leaves it possible to override the default and unit test the DatabaseSession class if necessary. 3. Database path singleton + static factory method Perhaps slight improvement over the above: def make_session(path=None): import foo.options if path is None: path = foo.options.db_path return DatabaseSession(path) class DatabaseSession(object): def __init__(self, path): ... This way the module doesn't depend on foo.options at all, unless we're using the factory method. Additionally, the method can perform stuff like session caching or whatnot. And then there are other patterns, which I don't know of. I vaguely saw something similar in web frameworks, but I don't have any experience with those. My example is quite specific, but I imagine it also expands to other application settings, hence the title of the post. I would like to hear your thoughts about what would be the best way to arrange this.
[ "Yes, there are others. Your option 3 though is very Pythonic. \nUse a standard Python module to encapsulate options (this is the way web frameworks like Django do it)\nUse a factory to emit properly configured sessions.\nSince SQLite already has a \"connection\", why not use that? What does your DatabaseSession class add that the built-in connection lacks?\n" ]
[ 2 ]
[]
[]
[ "global", "python", "settings", "singleton" ]
stackoverflow_0000735337_global_python_settings_singleton.txt
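A bare-bones sketch of the module-as-settings pattern the answer endorses, combined with the question's own option 3 (module and function names are illustrative):

    # foo/options.py -- the one place settings live
    db_path = None

    # set once at startup, e.g.:
    #   import foo.options
    #   foo.options.db_path = user_supplied_path

    # foo/db.py
    import foo.options

    def make_session(path=None):
        # Factory: an explicit path wins; the module-level
        # setting is the default.
        if path is None:
            path = foo.options.db_path
        return DatabaseSession(path)

Tests can then inject a path explicitly, while ordinary callers just use make_session().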
Q: Looping through chars, generating words and checking if domain exists Is there any way to generate words based on characters and checking if a domain exists with this word (ping)? What I want to do is to generate words based on some characters, example "abcdefgh", and then ping generatedword.com to check if it exists. A: You don't want to use the ping command, but you can use Python's socket.gethostbyname() function to determine whether a host exists. def is_valid_host(hostname): try: addr = socket.gethostbyname(hostname) except socket.gaierror, ex: return False return True hosts = ['abc', 'yahoo.com', 'google.com', 'nosuchagency.gov'] filter(is_valid_host, hosts) This is going to take tons of time and maybe make your ISP mad at you. You're better off either: Using a lower-level DNS interface such as dnspython, or Finding a direct interface to domain registrars, such as whois, and querying that. You aren't going to use this to spam people, are you? A: Just because a site fails a ping doesn't mean the domain is available. The domain could be reserved but not pointing anywhere, or the machine may not respond to pings, or it may just be down. A: It seems like you are talking about permutations of character combinations. This has been a fairly well published recipe. That link should get you started. One additional note, ping will not tell you if a server 'exists' or if the name is registered, only if it is online and is not behind a firewall that blocks ping traffic.
Looping through chars, generating words and checking if domain exists
Is there any way to generate words based on characters and check whether a domain exists for each word (via ping)? What I want to do is generate words from some characters, for example "abcdefgh", and then ping generatedword.com to check if it exists.
[ "You don't want to use the ping command, but you can use Python's socket.gethostbyname() function to determine whether a host exists.\ndef is_valid_host(hostname):\n try:\n addr = socket.gethostbyname(hostname)\n except socket.gaierror, ex:\n return False\n return True\n\nhosts = ['abc', 'yahoo.com', 'google.com', 'nosuchagency.gov']\nfilter(is_valid_host, hosts)\n\nThis is going to take tons of time and maybe make your ISP mad at you. You're better off either:\n\nUsing a lower-level DNS interface such as dnspython, or\nFinding a direct interface to domain registrars, such as whois, and querying that. \n\nYou aren't going to use this to spam people, are you?\n", "Just because a site fails a ping doesn't mean the domain is available. The domain could be reserved but not pointing anywhere, or the machine may not respond to pings, or it may just be down.\n", "It seems like you are talking about permutations of character combinations. This has been a fairly well published recipe. That link should get you started.\nOne additional note, ping will not tell you if a server 'exists' or if the name is registered, only if it is online and is not behind a firewall that blocks ping traffic.\n" ]
[ 7, 3, 0 ]
[]
[]
[ "ping", "python" ]
stackoverflow_0000735743_ping_python.txt
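Putting the permutations recipe and gethostbyname together, a small Python 2 sketch (itertools.product needs 2.6+, and as the answers stress, a failed lookup only means 'does not resolve', not 'unregistered'):

    import itertools
    import socket

    def words(chars, length):
        # Every combination of the given characters: 'aaa', 'aab', ...
        for combo in itertools.product(chars, repeat=length):
            yield ''.join(combo)

    for word in words('abcdefgh', 3):
        domain = word + '.com'
        try:
            socket.gethostbyname(domain)
        except socket.gaierror:
            print domain, 'does not resolve'

For real availability checks, query whois or a registrar interface instead, as the first answer recommends.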