Dataset schema (column: type, with observed length range):
content: string, 85 to 101k characters
title: string, 0 to 150 characters
question: string, 15 to 48k characters
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string, 35 to 137 characters
Q: How to convert a list of numbers to HTML in Python? Hey guys, my last problem ^^ Say I have a string which contains HTML, like html = '<td class="p11_666699"><strong>100</strong></td>' and a list like numbers = [100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150] Basically I want to do something like this: html = '<td class="p11_666699"><strong>' + numbers + '</strong></td>' Edit: for some reason the code tags add curly brackets when I post this, so it messes up, sorry. A: As pointed out in the comments, it's really hard to understand your question... but if you want to generate a bunch of table cells, each containing one of the numbers, use something like this: html = ''.join('<td>%d</td>' % n for n in numbers) Of course you can add in a class or other attribute to be applied to the table cells if you want. A: I think you either want this: numbers = [100, 101, 102, 103] output = "<td>" + ", ".join(map(str, numbers)) + "</td>" or output = "" for number in numbers: output += "<td>" + str(number) + "</td>" A: Are you looking for a way to join a list of numbers into a string? This will work: ' '.join(map(str, [10, 20, 30])) Resulting in: '10 20 30' The second argument of map is a list, so you can place your 'numbers' list there. This is in no way HTML specific, of course. A: html = ['<td class="p11_666699"><strong>%d</strong></td>' % number for number in numbers] And I see David just suggested this (but joined).
How to convert a list of numbers to HTML in Python?
Hey guys, my last problem ^^ Say I have a string which contains HTML, like html = '<td class="p11_666699"><strong>100</strong></td>' and a list like numbers = [100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150] Basically I want to do something like this: html = '<td class="p11_666699"><strong>' + numbers + '</strong></td>' Edit: for some reason the code tags add curly brackets when I post this, so it messes up, sorry.
[ "As pointed out in the comments, it's really hard to understand your question... but if you want to generate a bunch of table cells, each containing one of the numbers, use something like this:\nhtml = ''.join('<td>%d</td>' % n for n in numbers)\n\nOf course you can add in a class or other attribute to be applied to the table cells if you want.\n", "I think you either want this:\nnumbers = [100, 101, 102, 103]\noutput = \"<td>\" + \", \".join(map(str, numbers)) + \"</td>\"\n\nor\noutput = \"\"\nfor number in numbers:\n output += \"<td>\" + str(number) + \"</td>\"\n\n", "Are you looking for a way to join a list of numbers into a string? This will work:\n' '.join(map(str, [10, 20, 30]))\n\nResulting in:\n'10 20 30'\n\nThe second argument of map is a list, so you can place your 'numbers' list there.\nThis is in no way HTML specific, of course.\n", "html = ['<td class=\"p11_666699\"><strong>%d</strong></td>' % number for number in numbers]\n\nAnd I see David just suggested this (but joined). \n" ]
[ 4, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000520881_python.txt
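A consolidated sketch of the accepted approach, using the exact cell template from the question (the variable names are illustrative, not from the original):
numbers = range(100, 151)
cell = '<td class="p11_666699"><strong>%d</strong></td>'
html = ''.join(cell % n for n in numbers)  # one <td> cell per number, concatenated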
Q: Further Processing of Output of Undefined Methods (Python) How do I write a Python class that handles calls on undefined methods by first getting the output of a function of the same name from a given module, and then doing something further with that output? For example, given add(x, y), doublerInstance.add(1, 1) should return 4. I know __getattr__() intercepts calls on undefined methods, and getattr() can retrieve a function object. But I don't know how to get the arguments passed to the undefined call caught by __getattr__() to the function retrieved by getattr(). EXAMPLE Module functions.py: def add(x, y): return x + y Module doubler.py: class Doubler: def __init__(self, source): self.source = source def __getattr__(self, attrname): fnc = getattr(self.source, attrname) return fnc() * 2 Session: >import functions as f >import doubler as d >doublerInstance = d.Doubler(f) >doublerInstance.add(1, 2) <snip> TypeError: add() takes exactly 2 arguments, (0 given) END I do understand the error -- getattr() returns a function to be run, and the call fnc() doesn't pass any arguments to that function -- here, add(). But how do I get the arguments passed in to the call dblr.add(1, 2) and pass those to the function returned by the getattr() call? I'm looking for the right way to do this, not some usage of __getattr__. I realize that decorator functions using @ might be a better tool here, but I don't yet understand those well enough to see whether they could be applied here. ALSO -- what resource should I have looked at to figure this out for myself? I haven't found it in the Lutz books, the Cookbook, or the Python Library Reference. A: __getattr__ has to return the function - not the result from calling it: class Doubler: def __init__(self, source): self.source = source def __getattr__(self, attrname): fnc = getattr(self.source, attrname) return lambda x,y : fnc(x,y) * 2 This uses a lambda expression; it returns a new function that doubles the output on fnc Perhaps this test code will make it clearer: import functions as f import doubler as d doublerInstance = d.Doubler(f) print doublerInstance.add(1, 2) doubleadd = doublerInstance.add print doubleadd(1,2) print doubleadd(2,3) A: When you call doublerInstance.add(1, 2), you're getting an attribute add from it, and then you're calling it. But inside your getattr, you're returning a value. You have to return a function. Anyway, for this particular case to work, you need this: def __getattr__(self, attrname) : fnc = getattr(self.source, attrname) def doubled(*args, **kwargs) : return 2 * fnc(*args, **kwargs) return doubled A: Try this: class Doubler: def __init__(self, source): self.source = source def __getattr__(self, attrname): def tmp_func(*args): fnc = getattr(self.source, attrname) return fnc(*args) * 2 return tmp_func Hint: getattr must return a function, not the result of the function call.
Further Processing of Output of Undefined Methods (Python)
How do I write a Python class that handles calls on undefined methods by first getting the output of a function of the same name from a given module, and then doing something further with that output? For example, given add(x, y), doublerInstance.add(1, 1) should return 4. I know __getattr__() intercepts calls on undefined methods, and getattr() can retrieve a function object. But I don't know how to get the arguments passed to the undefined call caught by __getattr__() to the function retrieved by getattr(). EXAMPLE Module functions.py: def add(x, y): return x + y Module doubler.py: class Doubler: def __init__(self, source): self.source = source def __getattr__(self, attrname): fnc = getattr(self.source, attrname) return fnc() * 2 Session: >import functions as f >import doubler as d >doublerInstance = d.Doubler(f) >doublerInstance.add(1, 2) <snip> TypeError: add() takes exactly 2 arguments, (0 given) END I do understand the error -- getattr() returns a function to be run, and the call fnc() doesn't pass any arguments to that function -- here, add(). But how do I get the arguments passed in to the call dblr.add(1, 2) and pass those to the function returned by the getattr() call? I'm looking for the right way to do this, not some usage of __getattr__. I realize that decorator functions using @ might be a better tool here, but I don't yet understand those well enough to see whether they could be applied here. ALSO -- what resource should I have looked at to figure this out for myself? I haven't found it in the Lutz books, the Cookbook, or the Python Library Reference.
[ "__getattr__ has to return the function - not the result from calling it:\nclass Doubler:\n def __init__(self, source):\n self.source = source\n\n def __getattr__(self, attrname):\n fnc = getattr(self.source, attrname)\n return lambda x,y : fnc(x,y) * 2\n\nThis uses a lambda expression; it returns a new function that doubles the output on fnc\nPerhaps this test code will make it clearer:\nimport functions as f\nimport doubler as d\ndoublerInstance = d.Doubler(f)\nprint doublerInstance.add(1, 2)\n\ndoubleadd = doublerInstance.add\n\nprint doubleadd(1,2)\nprint doubleadd(2,3)\n\n", "When you call doublerInstance.add(1, 2), you're getting an attribute add from it, and then you're calling it. But inside your getattr, you're returning a value. You have to return a function.\nAnyway, for this particular case to work, you need this:\ndef __getattr__(self, attrname) :\n fnc = getattr(self.source, attrname)\n def doubled(*args, **kwargs) :\n return 2 * fnc(*args, **kwargs)\n return doubled\n\n", "Try this:\nclass Doubler:\ndef __init__(self, source):\n self.source = source\n\ndef __getattr__(self, attrname):\n def tmp_func(*args):\n fnc = getattr(self.source, attrname)\n return fnc(*args) * 2\n return tmp_func\n\nHint:\ngetattr must return a function, not the result of the function call.\n" ]
[ 2, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000521111_python.txt
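A minimal end-to-end check of the pattern from the answers (a sketch; the Source class stands in for the functions module so the snippet is self-contained):
class Source(object):
    @staticmethod
    def add(x, y):
        return x + y

class Doubler(object):
    def __init__(self, source):
        self.source = source
    def __getattr__(self, attrname):
        fnc = getattr(self.source, attrname)
        def doubled(*args, **kwargs):
            # forward whatever the caller passed, then double the result
            return 2 * fnc(*args, **kwargs)
        return doubled

print(Doubler(Source).add(1, 1))  # prints 4, as the question asks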
Q: How does one parse simple inline markup (e.g. *bold*) in Python? How does one implement a parser (in Python) for a subset of wikitext that modifies text, namely: *bold*, /italics/, _underline_ I'm converting it to LaTeX, so the conversion is from: Hello, *world*! Let's /go/. to: Hello \textbf{world}! Let's \textit{go}. Though there's nothing specific about it being a conversion to LaTeX (notably except nested cases like "*bold /italics* whatami/" => "\textbf{bold \textit{italics} whatami}"). I've looked at existing markup libraries, but they're (a) not quite the wiki language I'd like, and (b) seemingly overpowered for this problem. I've considered reverse engineering Creoleparser, but I'd like to know what suggestions others have before I undertake that effort. Thanks! A: If your language is small, regular expressions might be the least painful solution: >>> import re >>> str = "Hello, *world*! Let's /go/." >>> str = re.sub(r"\*([^\*]*)\*", r"\textbf{\1}", str) >>> str = re.sub(r"/([^/]*)/", r"\textit{\1}", str) >>> str "Hello, \textbf{world}! Let's \textit{go}."
How does one parse simple inline markup (e.g. *bold*) in Python?
How does one implement a parser (in Python) for a subset of wikitext that modifies text, namely: *bold*, /italics/, _underline_ I'm converting it to LaTeX, so the conversion is from: Hello, *world*! Let's /go/. to: Hello \textbf{world}! Let's \textit{go}. Though there's nothing specific about it being a conversion to LaTeX (notably except nested cases like "*bold /italics* whatami/" => "\textbf{bold \textit{italics} whatami}"). I've looked at existing markup libraries, but they're (a) not quite the wiki language I'd like, and (b) seemingly overpowered for this problem. I've considered reverse engineering Creoleparser, but I'd like to know what suggestions others have before I undertake that effort. Thanks!
[ "If your language is small, regular expressions might be the least painful solution:\n>>> import re\n>>> str = \"Hello, *world*! Let's /go/.\"\n>>> str = re.sub(r\"\\*([^\\*]*)\\*\", r\"\\textbf{\\1}\", str)\n>>> str = re.sub(r\"/([^/]*)/\", r\"\\textit{\\1}\", str)\n>>> str\n\"Hello, \\textbf{world}! Let's \\textit{go}.\"\n\n" ]
[ 7 ]
[]
[]
[ "creole", "parsing", "python", "wikitext" ]
stackoverflow_0000521326_creole_parsing_python_wikitext.txt
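The question also lists _underline_, which the same approach covers with one more substitution (a sketch; \underline is plain LaTeX, and the ulem package's \uline would be an alternative if line breaks are needed inside the underline):
import re
s = "Hello, _world_!"
s = re.sub(r"_([^_]*)_", r"\underline{\1}", s)
# s is now "Hello, \underline{world}!"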
Q: How to get the concrete class name as a string? I want to avoid calling a lot of isinstance() functions, so I'm looking for a way to get the concrete class name for an instance variable as a string. Any ideas? A: instance.__class__.__name__ example: >>> class A(): pass >>> a = A() >>> a.__class__.__name__ 'A' A: <object>.__class__.__name__ A: you can also create a dict with the classes themselves as keys, not necessarily the classnames typefunc={ int:lambda x: x*2, str:lambda s:'(*(%s)*)'%s } def transform (param): print typefunc[type(param)](param) transform (1) >>> 2 transform ("hi") >>> (*(hi)*) here typefunc is a dict that maps a function for each type. transform gets that function and applies it to the parameter. of course, it would be much better to use 'real' OOP
How to get the concrete class name as a string?
I want to avoid calling a lot of isinstance() functions, so I'm looking for a way to get the concrete class name for an instance variable as a string. Any ideas?
[ " instance.__class__.__name__\n\nexample:\n>>> class A():\n pass\n>>> a = A()\n>>> a.__class__.__name__\n'A'\n\n", "<object>.__class__.__name__\n\n", "you can also create a dict with the classes themselves as keys, not necessarily the classnames\ntypefunc={\n int:lambda x: x*2,\n str:lambda s:'(*(%s)*)'%s\n}\n\ndef transform (param):\n print typefunc[type(param)](param)\n\ntransform (1)\n>>> 2\ntransform (\"hi\")\n>>> (*(hi)*)\n\nhere typefunc is a dict that maps a function for each type. transform gets that function and applies it to the parameter.\nof course, it would be much better to use 'real' OOP\n" ]
[ 309, 29, 9 ]
[]
[]
[ "python" ]
stackoverflow_0000521502_python.txt
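One caveat worth noting (an addition, not from the answers): for old-style classes in Python 2, type(instance) returns the generic instance type, so the __class__ attribute is the portable spelling; for new-style classes and in Python 3 the two agree:
class A(object):
    pass

a = A()
print(a.__class__.__name__)  # 'A'
print(type(a).__name__)      # also 'A' for new-style classes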
Q: How can I get only class variables? I have this class definition: class cols: name = 'name' size = 'size' date = 'date' @classmethod def foo(cls): print "This is a class method" With __dict__ I get all class attributes (members and variables). There are also the "internal attributes" (like __main__). How can I get only the class variables without instantiation? A: I wouldn't know a straightforward way, especially since from the interpreter's POV, there is not that much of a difference between a method of a class and any other variable (methods have descriptors, but that's it...). So when you only want non-callable class members, you have to fiddle around a little: >>> class cols: ... name = "name" ... @classmethod ... def foo(cls): pass >>> import inspect >>> def get_vars(cls): ... return [name for name, obj in cls.__dict__.iteritems() if not name.startswith("__") and not inspect.isroutine(obj)] >>> get_vars(cols) ['name'] A: import inspect inspect.getmembers(cols) There are a lot of things you can do with the inspect module: http://lfw.org/python/inspect.html
How can I get only class variables?
I have this class definition: class cols: name = 'name' size = 'size' date = 'date' @classmethod def foo(cls): print "This is a class method" With __dict__ I get all class attributes (members and variables). There are also the "internal attributes" (like __main__). How can I get only the class variables without instantiation?
[ "I wouldn't know a straightforward way, especially since from the interpreter's POV, there is not that much of a difference between a method of a class and any other variable (methods have descriptors, but that's it...).\nSo when you only want non-callable class members, you have to fiddle around a little:\n>>> class cols:\n... name = \"name\"\n... @classmethod\n... def foo(cls): pass\n\n>>> import inspect\n>>> def get_vars(cls):\n... return [name for name, obj in cls.__dict__.iteritems()\n if not name.startswith(\"__\") and not inspect.isroutine(obj)]\n>>> get_vars(cols)\n['name']\n\n", "import inspect\ninspect.getmembers(cols)\n\nThere are a lot if things you can do with the inspect module: http://lfw.org/python/inspect.html\n" ]
[ 5, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000521710_python.txt
Q: How can I generate multi-line build commands? In SCons, my command generators create ridiculously long command lines. I'd like to be able to split these commands across multiple lines for readability in the build log. e.g. I have a SConscript like: import os # create dependency def my_cmd_generator(source, target, env, for_signature): return r'''echo its a small world after all \ its a small world after all''' my_cmd_builder = Builder(generator=my_cmd_generator, suffix = '.foo') env = Environment() env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) AlwaysBuild(my_cmd) When it executes, I get: scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... echo its a small world after all \ its a small world after all its a small world after all sh: line 1: its: command not found scons: *** [foo.foo] Error 127 scons: building terminated because of errors. Doing this in the python shell with os.system and os.popen works -- I get a readable command string and the sub-shell process interprets all the lines as one command. >>> import os >>> cmd = r'''echo its a small world after all \ ... its a small world after all''' >>> print cmd echo its a small world after all \ its a small world after all >>> os.system( cmd) its a small world after all its a small world after all 0 When I do this in SCons, it executes each line one at a time, which is not what I want. I also want to avoid building up my commands into a shell-script and then executing the shell script, because that will create string escaping madness. Is this possible? UPDATE: cournape, Thanks for the clue about the $CCCOMSTR. Unfortunately, I'm not using any of the languages that SCons supports out of the box, so I'm creating my own command generator. Using a generator, how can I get SCons to do: echo its a small world after all its a small world after all but print echo its a small world after all \ its a small world after all ? A: Thanks to cournape's tip about Actions versus Generators ( and eclipse pydev debugger), I've finally figured out what I need to do. You want to pass in your function to the 'Builder' class as an 'action' not a 'generator'. This will allow you to actually execute the os.system or os.popen call directly. Here's the updated code: import os def my_action(source, target, env): cmd = r'''echo its a small world after all \ its a small world after all''' print cmd return os.system(cmd) my_cmd_builder = Builder( action=my_action, # <-- CRUCIAL PIECE OF SOLUTION suffix = '.foo') env = Environment() env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) This SConstruct file will produce the following output: scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... my_action(["foo.foo"], ["/bin/bash"]) echo its a small world after all \ its a small world after all its a small world after all its a small world after all scons: done building targets. The other crucial piece is to remember that switching from a 'generator' to an 'action' means the target you're building no longer has an implicit dependency on the actual string that you are passing to the sub-process shell. You can re-create this dependency by adding the string into your environment.
e.g., the solution that I personally want looks like: import os cmd = r'''echo its a small world after all \ its a small world after all''' def my_action(source, target, env): print cmd return os.system(cmd) my_cmd_builder = Builder( action=my_action, suffix = '.foo') env = Environment() env['_MY_CMD'] = cmd # <-- CREATE IMPLICIT DEPENDENCY ON CMD STRING env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) A: You are mixing two totally different things: the command to be executed, and its representation in the command line. By default, scons prints the command line, but if you split the command line, you are changing the commands executed. Now, scons has a mechanism to change the printed commands. They are registered per Action instances, and many default ones are available: env = Environment() env['CCCOMSTR'] = "CC $SOURCE" env['CXXCOMSTR'] = "CXX $SOURCE" env['LINKCOM'] = "LINK $SOURCE" Will print, assuming only C and CXX sources: CC foo.c CC bla.c CXX yo.cc LINK yo.o bla.o foo.o
How can I generate multi-line build commands?
In SCons, my command generators create ridiculously long command lines. I'd like to be able to split these commands across multiple lines for readability in the build log. e.g. I have a SConscript like: import os # create dependency def my_cmd_generator(source, target, env, for_signature): return r'''echo its a small world after all \ its a small world after all''' my_cmd_builder = Builder(generator=my_cmd_generator, suffix = '.foo') env = Environment() env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) AlwaysBuild(my_cmd) When it executes, I get: scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... echo its a small world after all \ its a small world after all its a small world after all sh: line 1: its: command not found scons: *** [foo.foo] Error 127 scons: building terminated because of errors. Doing this in the python shell with os.system and os.popen works -- I get a readable command string and the sub-shell process interprets all the lines as one command. >>> import os >>> cmd = r'''echo its a small world after all \ ... its a small world after all''' >>> print cmd echo its a small world after all \ its a small world after all >>> os.system( cmd) its a small world after all its a small world after all 0 When I do this in SCons, it executes each line one at a time, which is not what I want. I also want to avoid building up my commands into a shell-script and then executing the shell script, because that will create string escaping madness. Is this possible? UPDATE: cournape, Thanks for the clue about the $CCCOMSTR. Unfortunately, I'm not using any of the languages that SCons supports out of the box, so I'm creating my own command generator. Using a generator, how can I get SCons to do: echo its a small world after all its a small world after all but print echo its a small world after all \ its a small world after all ?
[ "Thanks to cournape's tip about Actions versus Generators ( and eclipse pydev debugger), I've finally figured out what I need to do. You want to pass in your function to the 'Builder' class as an 'action' not a 'generator'. This will allow you to actually execute the os.system or os.popen call directly. Here's the updated code:\nimport os\n\ndef my_action(source, target, env):\n cmd = r'''echo its a small world after all \\\n its a small world after all'''\n print cmd\n return os.system(cmd)\n\nmy_cmd_builder = Builder(\n action=my_action, # <-- CRUCIAL PIECE OF SOLUTION\n suffix = '.foo')\n\nenv = Environment()\nenv.Append( BUILDERS = {'MyCmd' : my_cmd_builder } )\n\nmy_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip())\n\nThis SConstruct file will produce the following output:\nscons: Reading SConscript files ...\nscons: done reading SConscript files.\nscons: Building targets ...\nmy_action([\"foo.foo\"], [\"/bin/bash\"])\necho its a small world after all \\\n its a small world after all\nits a small world after all its a small world after all\nscons: done building targets.\n\nThe other crucial piece is to remember that switching from a 'generator' to an 'action' means the target you're building no longer has an implicit dependency on the actual string that you are passing to the sub-process shell. You can re-create this dependency by adding the string into your environment. \ne.g., the solution that I personally want looks like:\nimport os\n\ncmd = r'''echo its a small world after all \\\n its a small world after all'''\n\ndef my_action(source, target, env):\n print cmd\n return os.system(cmd)\n\nmy_cmd_builder = Builder(\n action=my_action,\n suffix = '.foo')\n\nenv = Environment()\nenv['_MY_CMD'] = cmd # <-- CREATE IMPLICIT DEPENDENCY ON CMD STRING\nenv.Append( BUILDERS = {'MyCmd' : my_cmd_builder } )\n\nmy_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip())\n\n", "You are mixing two totally different things: the command to be executed, and its representation in the command line. By default, scons prints the command line, but if you split the command line, you are changing the commands executed.\nNow, scons has a mechanism to change the printed commands. They are registered per Action instances, and many default ones are available:\nenv = Environment()\nenv['CCCOMSTR'] = \"CC $SOURCE\"\nenv['CXXCOMSTR'] = \"CXX $SOURCE\"\nenv['LINKCOM'] = \"LINK $SOURCE\"\n\nWill print, assuming only C and CXX sources:\nCC foo.c\nCC bla.c\nCXX yo.cc\nLINK yo.o bla.o foo.o\n\n" ]
[ 3, 1 ]
[]
[]
[ "build", "build_automation", "python", "scons" ]
stackoverflow_0000466293_build_build_automation_python_scons.txt
Q: Execute a string as a command in Python I am developing my stuff in Python. In this process I encountered a situation where I have a string called "import django". And I want to validate this string. Which means, I want to check whether the module mentioned ('django' in this case) is in the python-path. How can I do it? A: My previous answer was wrong -- I didn't think to test my code. This actually works, though: look at the imp module. To just check for the module's importability in the current sys.path: try: imp.find_module('django', sys.path) except ImportError: print "Boo! no django for you!" A: I doubt that it's safe, but it's the most naïve solution: try: exec('import django') except ImportError: print('no django') A: If a module's name is available as a string you can import it using the built-in __import__ function. module = __import__("module name", {}, {}, [], -1) For example, os = __import__("os", {}, {}, [], -1)
Execute a string as a command in Python
I am developing my stuff in Python. In this process I encountered a situation where I have a string called "import django". And I want to validate this string. Which means, I want to check whether the module mentioned ('django' in this case) is in the python-path. How can I do it?
[ "My previous answer was wrong -- i didn't think to test my code. This actually works, though: look at the imp module.\nTo just check for the module's importability in the current sys.path: \ntry:\n imp.find_module('django', sys.path)\nexcept ImportError:\n print \"Boo! no django for you!\"\n\n", "I doub't that it's safe, but it's the most naïve solution:\ntry:\n exec('import django')\nexcept ImportError:\n print('no django')\n\n", "If a module's name is available as string you can import it using the built-in __import__ function.\nmodule = __import__(\"module name\", {}, {}, [], -1)\n\nFor example,\nos = __import__(\"os\", {}, {}, [], -1)\n\n" ]
[ 14, 1, 1 ]
[]
[]
[ "django", "path", "python" ]
stackoverflow_0000517491_django_path_python.txt
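These answers predate importlib; on modern Python (3.4+), the same check is usually written without imp or exec (a sketch):
import importlib.util

if importlib.util.find_spec("django") is None:
    print("no django")
else:
    django = importlib.import_module("django")  # import by name, only if present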
Q: Refactor this Python code to iterate over a container Surely there is a better way to do this? results = [] if not queryset is None: for obj in queryset: results.append((getattr(obj,field.attname),obj.pk)) The problem is that sometimes queryset is None which causes an exception when I try to iterate over it. In this case, I just want result to be set to an empty list. This code is from a Django view, but I don't think that matters--this seems like a more general Python question. EDIT: It turns out that it was my code that was turning an empty queryset into a "None" instead of returning an empty list. Being able to assume that queryset is always iterable simplifies the code by allowing the "if" statement to be removed. The answers below may be useful to others who have the same problem, but cannot modify their code to guarantee that queryset is not "None". A: results = [(getattr(obj, field.attname), obj.pk) for obj in queryset or []] A: How about for obj in (queryset or []): # Do your stuff It is the same as J.F. Sebastian's suggestion, only not implemented as a list comprehension. A: For what it's worth, Django managers have a "none" queryset that you can use to avoid gratuitous None-checking. Using it to ensure you don't have a null queryset may simplify your code. if queryset is None: queryset = MyModel.objects.none() References: Django documentation on none() Null Object Pattern A: you can use list comprehensions, but other than that I don't see what you can improve result = [] if queryset: result = [(getattr(obj, field.attname), obj.pk) for obj in queryset]
Refactor this Python code to iterate over a container
Surely there is a better way to do this? results = [] if not queryset is None: for obj in queryset: results.append((getattr(obj,field.attname),obj.pk)) The problem is that sometimes queryset is None which causes an exception when I try to iterate over it. In this case, I just want result to be set to an empty list. This code is from a Django view, but I don't think that matters--this seems like a more general Python question. EDIT: It turns out that it was my code that was turning an empty queryset into a "None" instead of returning an empty list. Being able to assume that queryset is always iterable simplifies the code by allowing the "if" statement to be removed. The answers below may be useful to others who have the same problem, but cannot modify their code to guarantee that queryset is not "None".
[ "results = [(getattr(obj, field.attname), obj.pk) for obj in queryset or []]\n\n", "How about\nfor obj in (queryset or []):\n # Do your stuff\n\nIt is the same as J.F Sebastians suggestion, only not implemented as a list comprehension.\n", "For what it's worth, Django managers have a \"none\" queryset that you can use to avoid gratuitous None-checking. Using it to ensure you don't have a null queryset may simplify your code.\nif queryset is None:\n queryset = MyModel.objects.none()\n\nReferences:\n\nDjango documentation on none()\nNull Object Pattern\n\n", "you can use list comprehensions, but other than that I don't see what you can improve\nresult = []\n if queryset:\n result = [(getattr(obj, field.attname), obj.pk) for obj in queryset]\n\n" ]
[ 19, 8, 2, 1 ]
[]
[]
[ "django", "iterator", "python", "refactoring" ]
stackoverflow_0000495294_django_iterator_python_refactoring.txt
Q: What's the most pythonic way of accessing C libraries - for example, OpenSSL? I need to access the crypto functions of OpenSSL to encode Blowfish data in CBC streams. I've googled and found some Blowfish libraries (hand written) and some OpenSSL wrappers (none of them seem complete.) In the end, I need to access certain OpenSSL functions, such as the full blowfish.h library of commands. What's the pythonic/right way of accessing them? Using something like SWIG to allow Python/C bindings, or is there a better way? Thanks! A: ctypes is the place to start. It lets you call into DLLs, using C-declared types, etc. I don't know if there are limitations that will keep you from doing everything you need, but it's very capable, and it's included in the standard library. A: There's lots of ways to interface with C (and C++) in Python. ctypes is pretty nice for quick little extensions, but it has a habit of turning would be compile time errors into runtime segfaults. If you're looking to write your own extension, SIP is very nice. SWIG is very general, but has a larger following. Of course, the first thing you should be doing is seeing if you really need to interface. Have you looked at PyCrypto? A: I was happy with M2Crypto (an OpenSSL wrapper) for blowfish. import M2Crypto from M2Crypto import EVP import base64 import struct key = '0' * 16 # security FTW iv = '' # initialization vector FTW dummy_block = ' ' * 8 encrypt = EVP.Cipher('bf_cbc', key, iv, M2Crypto.encrypt) decrypt = EVP.Cipher('bf_cbc', key, iv, M2Crypto.decrypt) binary = struct.pack(">Q", 42) ciphertext = encrypt.update(binary) decrypt.update(ciphertext) # output is delayed by one block i = struct.unpack(">Q", decrypt.update(dummy_block)) print i A: SWIG is pretty much the canonical method. Works good, too. A: I've had good success with Cython, as well. A: I would recommend M2Crypto as well, but if the code sample by joeforker looks a bit strange you might have an easier time understanding the M2Crypto cipher unit tests, which include Blowfish. Check out the CipherTestCase in test_evp.py.
What's the most pythonic way of accessing C libraries - for example, OpenSSL?
I need to access the crypto functions of OpenSSL to encode Blowfish data in CBC streams. I've googled and found some Blowfish libraries (hand written) and some OpenSSL wrappers (none of them seem complete.) In the end, I need to access certain OpenSSL functions, such as the full blowfish.h library of commands. What's the pythonic/right way of accessing them? Using something like SWIG to allow Python/C bindings, or is there a better way? Thanks!
[ "ctypes is the place to start. It lets you call into DLLs, using C-declared types, etc. I don't know if there are limitations that will keep you from doing everything you need, but it's very capable, and it's included in the standard library.\n", "There's lots of ways to interface with C (and C++) in Python. ctypes is pretty nice for quick little extensions, but it has a habit of turning would be compile time errors into runtime segfaults. If you're looking to write your own extension, SIP is very nice. SWIG is very general, but has a larger following. Of course, the first thing you should be doing is seeing if you really need to interface. Have you looked at PyCrypto?\n", "I was happy with M2Crypto (an OpenSSL wrapper) for blowfish.\nimport M2Crypto\nfrom M2Crypto import EVP\nimport base64\nimport struct\n\nkey = '0' * 16 # security FTW\niv = '' # initialization vector FTW\ndummy_block = ' ' * 8\n\nencrypt = EVP.Cipher('bf_cbc', key, iv, M2Crypto.encrypt)\ndecrypt = EVP.Cipher('bf_cbc', key, iv, M2Crypto.decrypt)\n\nbinary = struct.pack(\">Q\", 42)\nciphertext = encrypt.update(binary)\n\ndecrypt.update(ciphertext) # output is delayed by one block\ni = struct.unpack(\">Q\", decrypt.update(dummy_block))\n\nprint i\n\n", "SWIG is pretty much the canonical method. Works good, too.\n", "I've had good success with Cython, as well.\n", "I would recommend M2Crypto as well, but if the code sample by joeforker looks a bit strange you might have an easier time understanding the M2Crypto cipher unit tests, which include Blowfish. Check out the CipherTestCase in test_evp.py.\n" ]
[ 5, 5, 5, 0, 0, 0 ]
[]
[]
[ "c", "encryption", "openssl", "python" ]
stackoverflow_0000422903_c_encryption_openssl_python.txt
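To get a feel for the ctypes route the first answer recommends, here is a minimal sketch; it calls cos from the C math library rather than OpenSSL so that nothing extra needs to be installed (the library lookup works on Linux and macOS; Windows library names differ):
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))  # locate and load the C math library
libm.cos.restype = ctypes.c_double                 # declare the return type
libm.cos.argtypes = [ctypes.c_double]              # declare the argument types
print(libm.cos(0.0))  # 1.0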
Q: Unicode vs UTF-8 confusion in Python / Django? I stumbled over this passage in the Django tutorial: Django models have a default str() method that calls unicode() and converts the result to a UTF-8 bytestring. This means that unicode(p) will return a Unicode string, and str(p) will return a normal string, with characters encoded as UTF-8. Now, I'm confused because afaik Unicode is not any particular representation, so what is a "Unicode string" in Python? Does that mean UCS-2? Googling turned up this "Python Unicode Tutorial" which boldly states Unicode is a two-byte encoding which covers all of the world's common writing systems. which is plain wrong, or is it? I have been confused many times by character set and encoding issues, but here I'm quite sure that the documentation I'm reading is confused. Does anybody know what's going on in Python when it gives me a "Unicode string"? A: what is a "Unicode string" in Python? Does that mean UCS-2? Unicode strings in Python are stored internally either as UCS-2 (fixed-length 16-bit representation, almost the same as UTF-16) or UCS-4/UTF-32 (fixed-length 32-bit representation). It's a compile-time option; on Windows it's always UTF-16 whilst many Linux distributions set UTF-32 (‘wide mode’) for their versions of Python. You are generally not supposed to care: you will see Unicode code-points as single elements in your strings and you won't know whether they're stored as two or four bytes. If you're in a UTF-16 build and you need to handle characters outside the Basic Multilingual Plane you'll be Doing It Wrong, but that's still very rare, and users who really need the extra characters should be compiling wide builds. plain wrong, or is it? Yes, it's quite wrong. To be fair I think that tutorial is rather old; it probably pre-dates wide Unicode strings, if not Unicode 3.1 (the version that introduced characters outside the Basic Multilingual Plane). There is an additional source of confusion stemming from Windows's habit of using the term “Unicode” to mean, specifically, the UTF-16LE encoding that NT uses internally. People from Microsoftland may often copy this somewhat misleading habit. A: Meanwhile, I did a refined research to verify what the internal representation in Python is, and also what its limits are. "The Truth About Unicode In Python" is a very good article which cites directly from the Python developers. Apparently, internal representation is either UCS-2 or UCS-4 depending on a compile-time switch. So Jon, it's not UTF-16, but your answer put me on the right track anyway, thanks. A: Python stores Unicode as UTF-16. str() will return the UTF-8 representation of the UTF-16 string.
Unicode vs UTF-8 confusion in Python / Django?
I stumbled over this passage in the Django tutorial: Django models have a default str() method that calls unicode() and converts the result to a UTF-8 bytestring. This means that unicode(p) will return a Unicode string, and str(p) will return a normal string, with characters encoded as UTF-8. Now, I'm confused because afaik Unicode is not any particular representation, so what is a "Unicode string" in Python? Does that mean UCS-2? Googling turned up this "Python Unicode Tutorial" which boldly states Unicode is a two-byte encoding which covers all of the world's common writing systems. which is plain wrong, or is it? I have been confused many times by character set and encoding issues, but here I'm quite sure that the documentation I'm reading is confused. Does anybody know what's going on in Python when it gives me a "Unicode string"?
[ "\nwhat is a \"Unicode string\" in Python? Does that mean UCS-2?\n\nUnicode strings in Python are stored internally either as UCS-2 (fixed-length 16-bit representation, almost the same as UTF-16) or UCS-4/UTF-32 (fixed-length 32-bit representation). It's a compile-time option; on Windows it's always UTF-16 whilst many Linux distributions set UTF-32 (‘wide mode’) for their versions of Python.\nYou are generally not supposed to care: you will see Unicode code-points as single elements in your strings and you won't know whether they're stored as two or four bytes. If you're in a UTF-16 build and you need to handle characters outside the Basic Multilingual Plane you'll be Doing It Wrong, but that's still very rare, and users who really need the extra characters should be compiling wide builds.\n\nplain wrong, or is it?\n\nYes, it's quite wrong. To be fair I think that tutorial is rather old; it probably pre-dates wide Unicode strings, if not Unicode 3.1 (the version that introduced characters outside the Basic Multilingual Plane).\nThere is an additional source of confusion stemming from Windows's habit of using the term “Unicode” to mean, specifically, the UTF-16LE encoding that NT uses internally. People from Microsoftland may often copy this somewhat misleading habit.\n", "Meanwhile, I did a refined research to verify what the internal representation in Python is, and also what its limits are. \"The Truth About Unicode In Python\" is a very good article which cites directly from the Python developers. Apparently, internal representation is either UCS-2 or UCS-4 depending on a compile-time switch. So Jon, it's not UTF-16, but your answer put me on the right track anyway, thanks.\n", "Python stores Unicode as UTF-16. str() will return the UTF-8 representation of the UTF-16 string.\n" ]
[ 54, 9, 0 ]
[ "From Wikipedia on UTF-8: \n\nUTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode. It is able to represent any character in the Unicode standard, yet the initial encoding of byte codes and character assignments for UTF-8 is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages[1], and other places where characters are stored or streamed.\n\nSo, it's anywhere between one and four bytes depending on which character you wish to represent within the realm of Unicode.\nFrom Wikipedia on Unicode:\n\nIn computing, Unicode is an industry standard allowing computers to consistently represent and manipulate text expressed in most of the world's writing systems. \n\nSo it's able to represent most (but not all) of the world's writing systems. \nI hope this helps :)\n", "\nso what is a \"Unicode string\" in\n Python?\n\nPython 'knows' that your string is Unicode. Hence if you do regex on it, it will know which is character and which is not etc, which is really helpful. If you did a strlen it will also give the correct result. As an example if you did string count on Hello, you will get 5 (even if it's Unicode). But if you did a string count of a foreign word and that string was not a Unicode string than you will have much larger result. Pythong uses the information form the Unicode Character Database to identify each character in the Unicode String. Hope that helps. \n" ]
[ -1, -2 ]
[ "django", "python", "unicode" ]
stackoverflow_0000022149_django_python_unicode.txt
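The narrow/wide build difference is easy to observe directly on Python 2 (a sketch; sys.maxunicode is 65535 on a narrow UCS-2 build and 1114111 on a wide UCS-4 build):
import sys

print(sys.maxunicode)  # 65535 (narrow build) or 1114111 (wide build)
c = u'\U0001D11E'      # MUSICAL SYMBOL G CLEF, outside the Basic Multilingual Plane
print(len(c))          # 2 on a narrow build (a surrogate pair), 1 on a wide build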
Q: Python cannot create instances I am trying to create a simple Python extension using PyCXX. And I'm compiling against my Python 2.5 installation. My goal is to be able to do the following in Python: import Cats kitty = Cats.Kitty() if type(kitty) == Cats.Kitty: kitty.Speak() But every time I try, this is the error that I get: TypeError: cannot create 'Kitty' instances It does see Cats.Kitty as a type object, but I can't create instances of the Kitty class, any ideas? Here is my current source: #include "CXX/Objects.hxx" #include "CXX/Extensions.hxx" #include <iostream> using namespace Py; using namespace std; class Kitty : public Py::PythonExtension<Kitty> { public: Kitty() { } virtual ~Kitty() { } static void init_type(void) { behaviors().name("Kitty"); behaviors().supportGetattr(); add_varargs_method("Speak", &Kitty::Speak); } virtual Py::Object getattr( const char *name ) { return getattr_methods( name ); } Py::Object Speak( const Py::Tuple &args ) { cout << "Meow!" << endl; return Py::None(); } }; class Cats : public ExtensionModule<Cats> { public: Cats() : ExtensionModule<Cats>("Cats") { Kitty::init_type(); initialize(); Dict d(moduleDictionary()); d["Kitty"] = Type((PyObject*)Kitty::type_object()); } virtual ~Cats() { } Py::Object factory_Kitty( const Py::Tuple &rargs ) { return Py::asObject( new Kitty ); } }; void init_Cats() { static Cats* cats = new Cats; } int main(int argc, char* argv[]) { Py_Initialize(); init_Cats(); return Py_Main(argc, argv); return 0; } A: I don't see it in the code, but this sort of thing normally means it can't create an instance, which means it can't find a ctor. Are you sure you've got a ctor that exactly matches the expected signature?
Python cannot create instances
I am trying to create a simple Python extension using PyCXX. And I'm compiling against my Python 2.5 installation. My goal is to be able to do the following in Python: import Cats kitty = Cats.Kitty() if type(kitty) == Cats.Kitty: kitty.Speak() But every time I try, this is the error that I get: TypeError: cannot create 'Kitty' instances It does see Cats.Kitty as a type object, but I can't create instances of the Kitty class, any ideas? Here is my current source: #include "CXX/Objects.hxx" #include "CXX/Extensions.hxx" #include <iostream> using namespace Py; using namespace std; class Kitty : public Py::PythonExtension<Kitty> { public: Kitty() { } virtual ~Kitty() { } static void init_type(void) { behaviors().name("Kitty"); behaviors().supportGetattr(); add_varargs_method("Speak", &Kitty::Speak); } virtual Py::Object getattr( const char *name ) { return getattr_methods( name ); } Py::Object Speak( const Py::Tuple &args ) { cout << "Meow!" << endl; return Py::None(); } }; class Cats : public ExtensionModule<Cats> { public: Cats() : ExtensionModule<Cats>("Cats") { Kitty::init_type(); initialize(); Dict d(moduleDictionary()); d["Kitty"] = Type((PyObject*)Kitty::type_object()); } virtual ~Cats() { } Py::Object factory_Kitty( const Py::Tuple &rargs ) { return Py::asObject( new Kitty ); } }; void init_Cats() { static Cats* cats = new Cats; } int main(int argc, char* argv[]) { Py_Initialize(); init_Cats(); return Py_Main(argc, argv); return 0; }
[ "I do'nt see it in the code, but sort of thing normally means it can't create an instance, which means it can't find a ctor. Are you sure you've got a ctor that exactly matches the expected signature?\n" ]
[ 2 ]
[]
[]
[ "pycxx", "python" ]
stackoverflow_0000522921_pycxx_python.txt
Q: Event handling with Jython & Swing I'm making a GUI by using Swing from Jython. Event handling seems to be particularly elegant from Jython, just set JButton("Push me", actionPerformed = nameOfFunctionToCall) However, trying same thing inside a class gets difficult. Naively trying JButton("Push me", actionPerformed = nameOfMethodToCall) or JButton("Push me", actionPerformed = nameOfMethodToCall(self)) from a GUI-construction method of the class doesn't work, because the first argument of a method to be called should be self, in order to access the data members of the class, and on the other hand, it's not possible to pass any arguments to the event handler through AWT event queue. The only option seems to be using lambda (as advised at http://www.javalobby.org/articles/jython/) which results in something like this: JButton("Push me", actionPerformed = lambda evt : ClassName.nameOfMethodToCall(self)) It works, but the elegance is gone. All this just because the method being called needs a self reference from somewhere. Is there any other way around this? A: JButton("Push me", actionPerformed=self.nameOfMethodToCall) Here's a modified example from the article you cited: from javax.swing import JButton, JFrame class MyFrame(JFrame): def __init__(self): JFrame.__init__(self, "Hello Jython") button = JButton("Hello", actionPerformed=self.hello) self.add(button) self.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE) self.setSize(300, 300) self.show() def hello(self, event): print "Hello, world!" if __name__=="__main__": MyFrame()
Event handling with Jython & Swing
I'm making a GUI by using Swing from Jython. Event handling seems to be particularly elegant from Jython, just set JButton("Push me", actionPerformed = nameOfFunctionToCall) However, trying the same thing inside a class gets difficult. Naively trying JButton("Push me", actionPerformed = nameOfMethodToCall) or JButton("Push me", actionPerformed = nameOfMethodToCall(self)) from a GUI-construction method of the class doesn't work, because the first argument of a method to be called should be self, in order to access the data members of the class, and on the other hand, it's not possible to pass any arguments to the event handler through the AWT event queue. The only option seems to be using lambda (as advised at http://www.javalobby.org/articles/jython/) which results in something like this: JButton("Push me", actionPerformed = lambda evt : ClassName.nameOfMethodToCall(self)) It works, but the elegance is gone. All this just because the method being called needs a self reference from somewhere. Is there any other way around this?
[ "JButton(\"Push me\", actionPerformed=self.nameOfMethodToCall)\n\nHere's a modified example from the article you cited:\nfrom javax.swing import JButton, JFrame\n\nclass MyFrame(JFrame):\n def __init__(self):\n JFrame.__init__(self, \"Hello Jython\")\n button = JButton(\"Hello\", actionPerformed=self.hello)\n self.add(button)\n\n self.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)\n self.setSize(300, 300)\n self.show()\n\n def hello(self, event):\n print \"Hello, world!\"\n\nif __name__==\"__main__\":\n MyFrame()\n\n" ]
[ 11 ]
[]
[]
[ "java", "jython", "python", "swing", "user_interface" ]
stackoverflow_0000520615_java_jython_python_swing_user_interface.txt
Q: Passing a list while retaining the original So I'm teaching myself Python, and I'm having an issue with lists. I want to pass my function a list and pop items off it while retaining the original list. How do I make Python "instance" the passed list rather than passing a pointer to the original one? Example: def burninate(b): c = [] for i in range(3): c.append(b.pop()) return c a = range(6) d = burninate(a) print a, d Output: [0, 1, 2] [5, 4, 3] Desired output: [0, 1, 2, 3, 4, 5] [5, 4, 3] Thanks! A: As other answers have suggested, you can provide your function with a copy of the list. As an alternative, your function could take a copy of the argument: def burninate(b): c = [] b = list(b) for i in range(3): c.append(b.pop()) return c Basically, you need to be clear in your mind (and in your documentation) whether your function will change its arguments. In my opinion, functions that return computed values should not change their arguments, and functions that change their arguments should not return anything. See python's [].sort(), [].extend(), {}.update(), etc. for examples. Obviously there are exceptions (like .pop()). Also, depending on your particular case, you could rewrite the function to avoid using pop() or other functions that modify the argument. e.g. def burninante(b): return b[:-4:-1] # return the last three elements in reverse order A: You can call burninate() with a copy of the list like this: d = burninate(a[:]) or, d = burninate(list(a)) The other alternative is to make a copy of the list in your method: def burninate(b): c=[] b=b[:] for i in range(3): c.append(b.pop()) return c >>> a = range(6) >>> b = burninate(a) >>> print a, b >>> [0, 1, 2, 3, 4, 5] [5, 4, 3] A: A slightly more readable way to do the same thing is: d = burninate(list(a)) Here, the list() constructor creates a new list based on a. A: A more general solution would be to import copy, and use copy.copy() on the parameter. A: Other versions: def burninate(b): c = [] for i in range(1, 4): c.append(b[-i]) return c def burninate(b): c = b[-4:-1] c.reverse() return c And someday you will love list comprehensions: def burninate(b): return [b[-i] for i in range(1,4)] A: You can use copy.deepcopy() A: burninate = lambda x: x[:-4:-1]
Passing a list while retaining the original
So I'm teaching myself Python, and I'm having an issue with lists. I want to pass my function a list and pop items off it while retaining the original list. How do I make Python "instance" the passed list rather than passing a pointer to the original one? Example: def burninate(b): c = [] for i in range(3): c.append(b.pop()) return c a = range(6) d = burninate(a) print a, d Output: [0, 1, 2] [5, 4, 3] Desired output: [0, 1, 2, 3, 4, 5] [5, 4, 3] Thanks!
[ "As other answers have suggested, you can provide your function with a copy of the list.\nAs an alternative, your function could take a copy of the argument:\ndef burninate(b):\n c = []\n b = list(b)\n for i in range(3):\n c.append(b.pop())\n return c\n\nBasically, you need to be clear in your mind (and in your documentation) whether your function will change its arguments. In my opinion, functions that return computed values should not change their arguments, and functions that change their arguments should not return anything. See python's [].sort(), [].extend(), {}.update(), etc. for examples. Obviously there are exceptions (like .pop()).\nAlso, depending on your particular case, you could rewrite the function to avoid using pop() or other functions that modify the argument. e.g.\ndef burninante(b):\n return b[:-4:-1] # return the last three elements in reverse order\n\n", "You can call burninate() with a copy of the list like this:\nd = burninate(a[:])\nor,\nd = burninate(list(a))\nThe other alternative is to make a copy of the list in your method:\ndef burninate(b):\n c=[]\n b=b[:]\n for i in range(3):\n c.append(b.pop())\n return c\n\n>>> a = range(6)\n>>> b = burninate(a)\n>>> print a, b\n>>> [0, 1, 2, 3, 4, 5] [5, 4, 3]\n\n", "A slightly more readable way to do the same thing is:\nd = burninate(list(a))\n\nHere, the list() constructor creates a new list based on a.\n", "A more general solution would be to import copy, and use copy.copy() on the parameter.\n", "Other versions:\ndef burninate(b):\n c = []\n for i in range(1, 4):\n c.append(b[-i])\n return c\n\ndef burninate(b):\n c = b[-4:-1]\n c.reverse()\n return c\n\nAnd someday you will love list comprehensions:\ndef burninate(b):\n return [b[-i] for i in range(1,4)]\n\n", "You can use copy.deepcopy()\n", "burninate = lambda x: x[:-4:-1]\n" ]
[ 14, 10, 6, 5, 2, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0000227790_list_python.txt
Q: Is there a way to loop through a sub section of a list in Python So for a list that has 1000 elements, I want to loop from 400 to 500. How do you do it? I don't see a way by using the for each and for range techniques. A: for x in thousand[400:500]: pass If you are working with an iterable instead of a list, you should use itertools: import itertools for x in itertools.islice(thousand, 400, 500): pass If you need to loop over thousand[500], then use 501 as the latter index. This will work even if thousand[501] is not a valid index. A: for element in allElements[400:501]: # do something These are slices and generate a sublist of the whole list. They are one of the main elements of Python. A: Using for element in allElements[400:501]: doSomething(element) makes Python create new object, and might have some impact on memory usage. Instead I'd use: for index in xrange(400, 501): doSomething(allElements[index]) This way also enables you to manipulate list indexes during iteration. EDIT: In Python 3.0 you can use range() instead of xrange(), but in 2.5 and earlier versions range() creates a list while xrange() creates a generator, which eats less of your precious RAM.
Is there a way to loop through a sub section of a list in Python
So for a list that has 1000 elements, I want to loop from 400 to 500. How do you do it? I don't see a way by using the for each and for range techniques.
[ "for x in thousand[400:500]:\n pass\n\nIf you are working with an iterable instead of a list, you should use itertools:\nimport itertools\nfor x in itertools.islice(thousand, 400, 500):\n pass\n\nIf you need to loop over thousand[500], then use 501 as the latter index. This will work even if thousand[501] is not a valid index.\n", "for element in allElements[400:501]:\n # do something\n\nThese are slices and generate a sublist of the whole list. They are one of the main elements of Python.\n", "Using \nfor element in allElements[400:501]:\n doSomething(element)\n\nmakes Python create new object, and might have some impact on memory usage.\nInstead I'd use:\nfor index in xrange(400, 501):\n doSomething(allElements[index])\n\nThis way also enables you to manipulate list indexes during iteration.\nEDIT: In Python 3.0 you can use range() instead of xrange(), but in 2.5 and earlier versions range() creates a list while xrange() creates a generator, which eats less of your precious RAM.\n" ]
[ 22, 7, 2 ]
[]
[]
[ "list", "loops", "python" ]
stackoverflow_0000522430_list_loops_python.txt
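If the original indices are also needed while looping over the slice, enumerate's start parameter (available since Python 2.6) combines the two suggestions (a sketch; do_something is a hypothetical handler):
for index, element in enumerate(thousand[400:500], start=400):
    do_something(index, element)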
Q: Google Apps HTTP Streaming with Python question I got a little question here: Some time ago I implemented HTTP Streaming using PHP code, something similar to what is on this page: http://my.opera.com/WebApplications/blog/show.dml/438711#comments And I get data with a very similar solution. Now I tried to use the second code from this page (in Python), but no matter what I do, I receive responseText from the Python server after everything completes. Here is some Python code: print "Content-Type: application/x-www-form-urlencoded\n\n" i=1 while i<4: print("Event: server-time<br>") print("data: %f<br>" % (time.time(),)) sys.stdout.flush() i=i+1 time.sleep(1) And here is the JavaScript code: ask = new XMLHttpRequest(); ask.open("GET","/Chat",true); setInterval(function() { if (ask.responseText) document.write(ask.responseText); },200); ask.send(null); Anyone got an idea what I'm doing wrong? How can I receive those damn messages one after another, not just all of them at the end of the while loop? Thanks for any help here! Edit: Main thing I forgot to add: the server is Google App Engine (I'm not sure whether that is Google's own implementation); here are links with some explanation: http://code.google.com/intl/pl-PL/appengine/docs/python/gettingstarted/devenvironment.html http://code.google.com/intl/pl-PL/appengine/docs/whatisgoogleappengine.html A: It's highly likely App Engine buffers output. A quick search found this: http://code.google.com/appengine/docs/python/tools/webapp/buildingtheresponse.html The out stream buffers all output in memory, then sends the final output when the handler exits. webapp does not support streaming data to the client. A: That looks like CGI code - I imagine the web server buffers the response from the cgi handlers. So it's really a matter of picking the right tools and making the right configuration. I suggest using a wsgi server and take advantage of the streaming support wsgi has. Here's your sample code translated to a wsgi app: def app(environ, start_response): start_response('200 OK', [('Content-type','application/x-www-form-urlencoded')]) i=1 while i<4: yield "Event: server-time<br>" yield "data: %f<br>" % (time.time(),) i=i+1 time.sleep(1) There are plenty of wsgi servers but here's an example with the reference one from python std lib: from wsgiref.simple_server import make_server httpd = make_server('', 8000, app) httpd.serve_forever()
Google Apps HTTP Streaming with Python question
I got a little question here: Some time ago I implemented HTTP Streaming using PHP code, something similar to what is on this page: http://my.opera.com/WebApplications/blog/show.dml/438711#comments And I get data with a very similar solution. Now I tried to use the second piece of code from this page (in Python), but no matter what I do, I receive the responseText from the Python server only after everything completes. Here is some Python code: print "Content-Type: application/x-www-form-urlencoded\n\n" i=1 while i<4: print("Event: server-time<br>") print("data: %f<br>" % (time.time(),)) sys.stdout.flush() i=i+1 time.sleep(1) And here is the JavaScript code: ask = new XMLHttpRequest(); ask.open("GET","/Chat",true); setInterval(function() { if (ask.responseText) document.write(ask.responseText); },200); ask.send(null); Anyone got an idea of what I'm doing wrong? How can I receive those damn messages one after another, not just all of them at the end of the while loop? Thanks for any help here! Edit: Main thing I forgot to add: the server is Google App Engine (I'm not sure if that is Google's own implementation); here are links with some explanation: http://code.google.com/intl/pl-PL/appengine/docs/python/gettingstarted/devenvironment.html http://code.google.com/intl/pl-PL/appengine/docs/whatisgoogleappengine.html
[ "Its highly likely App Engine buffers output. A quick search found this: http://code.google.com/appengine/docs/python/tools/webapp/buildingtheresponse.html\n\nThe out stream buffers all output in memory, then sends the final output when the handler exits. webapp does not support streaming data to the client.\n\n", "That looks like a cgi code - I imagine the web server buffers the response from the cgi handlers. So it's really a matter of picking the right tools and making the right configuration.\nI suggest using a wsgi server and take advantage of the streaming support wsgi has.\nHere's your sample code translated to a wsgi app:\ndef app(environ, start_response):\n start_response('200 OK', [('Content-type','application/x-www-form-urlencoded')])\n i=1\n while i<4:\n yield \"Event: server-time<br>\"\n yield \"data: %f<br>\" % (time.time(),)\n i=i+1\n time.sleep(1)\n\nThere are plenty of wsgi servers but here's an example with the reference one from python std lib:\nfrom wsgiref.simple_server import make_server\n\nhttpd = make_server('', 8000, app)\nhttpd.serve_forever()\n\n" ]
[ 3, 1 ]
[]
[]
[ "javascript", "python", "streaming" ]
stackoverflow_0000523579_javascript_python_streaming.txt
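A quick way to check whether the WSGI app from the second answer actually streams is to read the response in small pieces as it arrives. This is a sketch assuming the server above is running on localhost:8000; note that proxies or server-side buffering can still coalesce the chunks, and that read() blocks until it has the requested number of bytes.

    import time
    import urllib2  # Python 2, matching the era of the question

    # Assumes the wsgi app from the answer above is serving on port 8000.
    response = urllib2.urlopen('http://localhost:8000/')
    start = time.time()
    while True:
        chunk = response.read(20)  # small reads make the timing visible
        if not chunk:
            break
        print '%5.2fs: %r' % (time.time() - start, chunk)

If the timestamps climb roughly one second at a time, the response is being streamed; if every chunk appears only after three seconds, something in the chain is still buffering.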
Q: How to externally populate a Django model? What is the best way to fill data into a Django model from an external source? E.g. I have a model Run, and runs data in an XML file, which changes weekly. Should I create a view and call that view URL from a curl cronjob (with the advantage that that data can be read anytime, not only when the cronjob runs), or create a python script and install that script as a cron (with the DJANGO_SETTINGS_MODULE variable set up before executing the script)? A: There is an excellent way to do maintenance-like jobs in a project environment: write a custom manage.py command. It takes care of all the environment configuration and other stuff, and allows you to concentrate on the concrete task. And of course you can call it directly from cron. A: You don't need to create a view, you should just trigger a python script with the appropriate Django environment settings configured. Then call your models directly the way you would if you were using a view, process your data, add it to your model, then .save() the model to the database. A: "create a python script and install that script as a cron (with the DJANGO_SETTINGS_MODULE variable set up before executing the script)?" First, be sure to declare your Forms in a separate module (e.g. forms.py) Then, you can write batch loaders that look like this. (We have a LOT of these.) from myapp.forms import MyObjectLoadForm from myapp.models import MyObject import xml.etree.ElementTree as ET def xmlToDict( element ): return dict( field1= element.findtext('tag1'), field2= element.findtext('tag2'), ) def loadRow( aDict ): f= MyObjectLoadForm( aDict ) if f.is_valid(): f.save() def parseAndLoad( someFile ): doc= ET.parse( someFile ).getroot() for tag in doc.getiterator( "someTag" ): loadRow( xmlToDict(tag) ) Note that there is very little unique processing here -- it just uses the same Form and Model as your view functions. We put these batch scripts in with our Django application, since it depends on the application's models.py and forms.py. The only "interesting" part is transforming your XML row into a dictionary so that it works seamlessly with Django's forms. Other than that, this command-line program uses all the same Django components as your view. You'll probably want to add options parsing and logging to make a complete command-line app out of this. You'll also notice that much of the logic is generic -- only the xmlToDict function is truly unique. We call these "Builders" and have a class hierarchy so that our Builders are all polymorphic mappings from our source documents to Python dictionaries. A: I've used cron to update my DB using both a script and a view. From cron's point of view it doesn't really matter which one you choose. As you've noted, though, it's hard to beat the simplicity of firing up a browser and hitting a URL if you ever want to update at a non-scheduled interval. If you go the view route, it might be worth considering a view that accepts the XML file itself via an HTTP POST. If that makes sense for your data (you don't give much information about that XML file), it would still work from cron, but could also accept an upload from a browser -- potentially letting the person who produces the XML file update the DB by themselves. That's a big win if you're not the one making the XML file, which is usually the case in my experience.
How to externally populate a Django model?
What is the best way to fill data into a Django model from an external source? E.g. I have a model Run, and runs data in an XML file, which changes weekly. Should I create a view and call that view URL from a curl cronjob (with the advantage that that data can be read anytime, not only when the cronjob runs), or create a python script and install that script as a cron (with the DJANGO_SETTINGS_MODULE variable set up before executing the script)?
[ "There is excellent way to do some maintenance-like jobs in project environment- write a custom manage.py command. It takes all environment configuration and other stuff allows you to concentrate on concrete task.\nAnd of course call it directly by cron.\n", "You don't need to create a view, you should just trigger a python script with the appropriate Django environment settings configured. Then call your models directly the way you would if you were using a view, process your data, add it to your model, then .save() the model to the database.\n", "\"create a python script and install that script as a cron (with DJANGO _SETTINGS _MODULE variable setup before executing the script)?\" \nFirst, be sure to declare your Forms in a separate module (e.g. forms.py) \nThen, you can write batch loaders that look like this. (We have a LOT of these.)\nfrom myapp.forms import MyObjectLoadForm\nfrom myapp.models import MyObject\nimport xml.etree.ElementTree as ET\n\ndef xmlToDict( element ):\n return dict(\n field1= element.findtext('tag1'),\n field2= element.findtext('tag2'),\n )\n\ndef loadRow( aDict ):\n f= MyObjectLoadForm( aDict )\n if f.is_valid():\n f.save()\n\ndef parseAndLoad( someFile ):\n doc= ET.parse( someFile ).getroot()\n for tag in doc.getiterator( \"someTag\" )\n loadRow( xmlToDict(tag) )\n\nNote that there is very little unique processing here -- it just uses the same Form and Model as your view functions. \nWe put these batch scripts in with our Django application, since it depends on the application's models.py and forms.py.\nThe only \"interesting\" part is transforming your XML row into a dictionary so that it works seamlessly with Django's forms. Other than that, this command-line program uses all the same Django components as your view.\nYou'll probably want to add options parsing and logging to make a complete command-line app out of this. You'll also notice that much of the logic is generic -- only the xmlToDict function is truly unique. We call these \"Builders\" and have a class hierarchy so that our Builders are all polymorphic mappings from our source documents to Python dictionaries.\n", "I've used cron to update my DB using both a script and a view. From cron's point of view it doesn't really matter which one you choose. As you've noted, though, it's hard to beat the simplicity of firing up a browser and hitting a URL if you ever want to update at a non-scheduled interval.\nIf you go the view route, it might be worth considering a view that accepts the XML file itself via an HTTP POST. If that makes sense for your data (you don't give much information about that XML file), it would still work from cron, but could also accept an upload from a browser -- potentially letting the person who produces the XML file update the DB by themselves. That's a big win if you're not the one making the XML file, which is usually the case in my experience.\n" ]
[ 11, 4, 2, 2 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0000524214_django_django_models_python.txt
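The first answer recommends a custom manage.py command but does not show one, so here is a minimal sketch. The app name, model fields and XML tag names are hypothetical placeholders, not taken from the question, and the positional-argument style of handle() matches the Django versions of that era.

    # myapp/management/commands/load_runs.py
    import xml.etree.ElementTree as ET

    from django.core.management.base import BaseCommand
    from myapp.models import Run  # hypothetical app and model

    class Command(BaseCommand):
        help = 'Load Run records from the weekly XML file'

        def handle(self, *args, **options):
            doc = ET.parse(args[0]).getroot()
            for node in doc.getiterator('run'):  # hypothetical tag name
                Run.objects.create(name=node.findtext('name'))

It then runs under the full project environment, from the shell or from cron, as: python manage.py load_runs runs.xml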
Q: How to 'zoom' in on a section of the Mandelbrot set? I have created a Python file to generate a Mandelbrot set image. The original maths code was not mine, so I do not understand it - I only heavily modified it to make it about 250x faster (Threads rule!). Anyway, I was wondering how I could modify the maths part of the code to make it render one specific bit. Here is the maths part: for y in xrange(size[1]): coords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth)) z = complex(coords[0],coords[1]) o = complex(0,0) dotcolor = 0 # default, convergent for trials in xrange(n): if abs(o) <= 2.0: o = o**2 + z else: dotcolor = trials break # diverged im.putpixel((x,y),dotcolor) And the size definitions: size1 = 500 size2 = 500 n=64 box=((-2,1.25),(0.5,-1.25)) plus = size[1]+size[0] uleft = box[0] lright = box[1] xwidth = lright[0] - uleft[0] ywidth = uleft[1] - lright[1] what do I need to modify to make it render a certain section of the set? A: The line: box=((-2,1.25),(0.5,-1.25)) is the bit that defines the area of coordinate space that is being rendered, so you just need to change this line. First coordinate pair is the top-left of the area, the second is the bottom right. To get a new coordinate from the image should be quite straightforward. You've got two coordinate systems, your "image" system 100x100 pixels in size, origin at (0,0). And your "complex" plane coordinate system defined by "box". For X: X_complex=X_complex_origin+(X_image/X_image_width)*X_complex_width A: The key in understanding how to do this is to understand what the coords = line is doing: coords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth)) Effectively, the x and y values you are looping through which correspond to the coordinates of the on-screen pixel are being translated to the corresponding point on the complex plane being looked at. This means that (0,0) screen coordinate will translate to the upper left region being looked at (-2,1.25), and (1,0) will be the same, but moved 1/500 of the distance (assuming a 500 pixel width window) between the -2 and 0.5 x-coordinate. That's exactly what that line is doing - I'll expand just the X-coordinate bit with more illustrative variable names to indicate this: mandel_x = mandel_start_x + (screen_x / screen_width) * mandel_width (The mandel_ variables refer to the coordinates on the complex plane, the screen_ variables refer to the on-screen coordinates of the pixel being plotted.) If you want then to take a region of the screen to zoom into, you want to do exactly the same: take the screen coordinates of the upper-left and lower-right region, translate them to the complex-plane coordinates, and make those the new uleft and lright variables. ie to zoom in on the box delimited by on-screen coordinates (x1,y1)..(x2,y2), use: new_uleft = (uleft[0] + (x1/size[0]) * (xwidth), uleft[1] - (y1/size[1]) * (ywidth)) new_lright = (uleft[0] + (x2/size[0]) * (xwidth), uleft[1] - (y2/size[1]) * (ywidth)) (Obviously you'll need to recalculate the size, xwidth, ywidth and other dependent variables based on the new coordinates) In case you're curious, the maths behind the mandelbrot set isn't that complicated (just complex). All it is doing is taking a particular coordinate, treating it as a complex number, and then repeatedly squaring it and adding the original number to it. For some numbers, doing this will cause the result diverge, constantly growing towards infinity as you repeat the process. 
For others, it will always stay below a certain level (eg. obviously (0.0, 0.0) never gets any bigger under this process). The mandelbrot set (the black region) is those coordinates which don't diverge. It's been shown that if any number gets above the square root of 5, it will diverge - your code is just using 2.0 as its approximation to sqrt(5) (~2.236), but this won't make much noticeable difference. Usually the regions that diverge get plotted with the number of iterations of the process that it takes for them to exceed this value (the trials variable in your code) which is what produces the coloured regions.
How to 'zoom' in on a section of the Mandelbrot set?
I have created a Python file to generate a Mandelbrot set image. The original maths code was not mine, so I do not understand it - I only heavily modified it to make it about 250x faster (Threads rule!). Anyway, I was wondering how I could modify the maths part of the code to make it render one specific bit. Here is the maths part: for y in xrange(size[1]): coords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth)) z = complex(coords[0],coords[1]) o = complex(0,0) dotcolor = 0 # default, convergent for trials in xrange(n): if abs(o) <= 2.0: o = o**2 + z else: dotcolor = trials break # diverged im.putpixel((x,y),dotcolor) And the size definitions: size1 = 500 size2 = 500 n=64 box=((-2,1.25),(0.5,-1.25)) plus = size[1]+size[0] uleft = box[0] lright = box[1] xwidth = lright[0] - uleft[0] ywidth = uleft[1] - lright[1] what do I need to modify to make it render a certain section of the set?
[ "The line:\nbox=((-2,1.25),(0.5,-1.25))\n\nis the bit that defines the area of coordinate space that is being rendered, so you just need to change this line. First coordinate pair is the top-left of the area, the second is the bottom right. \nTo get a new coordinate from the image should be quite straightforward. You've got two coordinate systems, your \"image\" system 100x100 pixels in size, origin at (0,0). And your \"complex\" plane coordinate system defined by \"box\". For X:\nX_complex=X_complex_origin+(X_image/X_image_width)*X_complex_width\n\n", "The key in understanding how to do this is to understand what the coords = line is doing:\ncoords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth))\n\nEffectively, the x and y values you are looping through which correspond to the coordinates of the on-screen pixel are being translated to the corresponding point on the complex plane being looked at. This means that (0,0) screen coordinate will translate to the upper left region being looked at (-2,1.25), and (1,0) will be the same, but moved 1/500 of the distance (assuming a 500 pixel width window) between the -2 and 0.5 x-coordinate.\nThat's exactly what that line is doing - I'll expand just the X-coordinate bit with more illustrative variable names to indicate this:\nmandel_x = mandel_start_x + (screen_x / screen_width) * mandel_width\n\n(The mandel_ variables refer to the coordinates on the complex plane, the screen_ variables refer to the on-screen coordinates of the pixel being plotted.)\nIf you want then to take a region of the screen to zoom into, you want to do exactly the same: take the screen coordinates of the upper-left and lower-right region, translate them to the complex-plane coordinates, and make those the new uleft and lright variables. ie to zoom in on the box delimited by on-screen coordinates (x1,y1)..(x2,y2), use:\nnew_uleft = (uleft[0] + (x1/size[0]) * (xwidth), uleft[1] - (y1/size[1]) * (ywidth))\nnew_lright = (uleft[0] + (x2/size[0]) * (xwidth), uleft[1] - (y2/size[1]) * (ywidth))\n\n(Obviously you'll need to recalculate the size, xwidth, ywidth and other dependent variables based on the new coordinates)\nIn case you're curious, the maths behind the mandelbrot set isn't that complicated (just complex).\nAll it is doing is taking a particular coordinate, treating it as a complex number, and then repeatedly squaring it and adding the original number to it. \nFor some numbers, doing this will cause the result diverge, constantly growing towards infinity as you repeat the process. For others, it will always stay below a certain level (eg. obviously (0.0, 0.0) never gets any bigger under this process. The mandelbrot set (the black region) is those coordinates which don't diverge. Its been shown that if any number gets above the square root of 5, it will diverge - your code is just using 2.0 as its approximation to sqrt(5) (~2.236), but this won't make much noticeable difference. \nUsually the regions that diverge get plotted with the number of iterations of the process that it takes for them to exceed this value (the trials variable in your code) which is what produces the coloured regions.\n" ]
[ 15, 4 ]
[]
[]
[ "mandelbrot", "math", "python" ]
stackoverflow_0000524291_mandelbrot_math_python.txt
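To make the zoom step from the accepted answer reusable, the coordinate translation can be wrapped in a small helper. The screen rectangle below is a hypothetical example, and the explicit float() calls guard against Python 2 integer division (the same pitfall applies to the original x/size[0] expression when both operands are ints).

    def zoom_box(uleft, xwidth, ywidth, size, x1, y1, x2, y2):
        # Translate two screen-space corners into complex-plane corners.
        new_uleft = (uleft[0] + (float(x1) / size[0]) * xwidth,
                     uleft[1] - (float(y1) / size[1]) * ywidth)
        new_lright = (uleft[0] + (float(x2) / size[0]) * xwidth,
                      uleft[1] - (float(y2) / size[1]) * ywidth)
        return new_uleft, new_lright

    box = ((-2, 1.25), (0.5, -1.25))
    size = (500, 500)
    xwidth = box[1][0] - box[0][0]
    ywidth = box[0][1] - box[1][1]

    # Hypothetical zoom into the upper-left quadrant of the current view.
    print zoom_box(box[0], xwidth, ywidth, size, 0, 0, 250, 250)
    # ((-2.0, 1.25), (-0.75, 0.0))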
Q: Splitting arguments -- preserving quoted substrings -- in python Exact duplicate: Split a string by spaces -- preserving quoted substrings -- in Python I want to take in a string and return a list, dictionary or tuple of values as separated by spaces. However, I want to not match spaces that are somehow between quote marks, i.e. apple orange "banana tree" green Should come back as four items, "banana tree" being one whole item. If possible it should also allow for the escaping of quote marks. A: This problem sounds a lot like parsing tag input, you could take a look at django-tagging utils.py implementation which solves this kind of problem A: def splitstring(string): """ >>> string = 'apple orange "banana tree" green' >>> splitstring(string) ['apple', 'orange', 'green', '"banana tree"'] """ import re p = re.compile(r'"[\w ]+"') quoted_item = p.search(string).group() newstring = p.sub('', string) return newstring.split() + [quoted_item]
Splitting arguments -- preserving quoted substrings -- in python
Exact duplicate: Split a string by spaces -- preserving quoted substrings -- in Python I want to take in a string and return a list, dictionary or tuple of values as separated by spaces. However, I want to not match spaces that are somehow between quote marks, i.e. apple orange "banana tree" green Should come back as four items, "banana tree" being one whole item. If possible it should also allow for the escaping of quote marks.
[ "This problem sounds a lot like parsing tag input, you could take a look at django-tagging utils.py implementation which solves this kind of problem\n", "def splitstring(string):\n \"\"\"\n >>> string = 'apple orange \"banana tree\" green'\n >>> splitstring(string)\n ['apple', 'orange', 'green', '\"banana tree\"']\n \"\"\"\n import re\n p = re.compile(r'\"[\\w ]+\"')\n quoted_item = p.search(string).group()\n newstring = p.sub('', string)\n return newstring.split() + [quoted_item]\n\n" ]
[ 1, -2 ]
[]
[]
[ "parsing", "python" ]
stackoverflow_0000524541_parsing_python.txt
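Worth noting, although it does not appear in the answers above: the standard library's shlex module already does this kind of splitting, including escaped quote marks, which covers the last requirement in the question.

    import shlex

    print shlex.split('apple orange "banana tree" green')
    # ['apple', 'orange', 'banana tree', 'green']

    # A backslash escapes a quote mark so it survives as a literal character:
    print shlex.split(r'he said \"hi\" "banana tree"')
    # ['he', 'said', '"hi"', 'banana tree']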
Q: python, how to tell what type of obj was returned How can I find out if the obj returned by a func is an int or something else (like a sqlite cursor)? A: Use isinstance(item, type) -- for instance: if isinstance(foo, int): pass # handle this case However, explicit type checking is not considered a good practice in the Python world -- it means that much of the power of duck typing is lost: Something which walks and quacks like a duck should be allowed to be a duck, even if it isn't! :)
python, how to tell what type of obj was returned
How can I find out if the obj returned by a func is an int or something else (like a sqlite cursor)?
[ "Use isinstance(item, type) -- for instance:\nif isinstance(foo, int):\n pass # handle this case\n\nHowever, explicit type checking is not considered a good practice in the Python world -- it means that much of the power of duck typing is lost: Something which walks and quacks like a duck should be allowed to be a duck, even if it isn't! :)\n" ]
[ 12 ]
[ "Use the built-in \"type\" function, e.g. type(10) -> .\n" ]
[ -1 ]
[ "python", "types" ]
stackoverflow_0000524734_python_types.txt
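For completeness, isinstance also accepts a tuple of types, which suits the question's int-or-something-else check; a small sketch (the function name is made up):

    def describe(obj):
        # A tuple tests several acceptable types in one call.
        if isinstance(obj, (int, long)):  # 'long' exists in Python 2 only
            return 'integer'
        return 'something else: %s' % type(obj).__name__

    print describe(10)        # integer
    print describe('cursor')  # something else: str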
Q: Python's equivalent of $this->$varName In PHP I can do the following: $myVar = 'name'; print $myClass->$myVar; // Identical to $myClass->name I wish to do this in Python but can't find out how A: In python, it's the getattr built-in function. class Something( object ): def __init__( self ): self.a= 2 self.b= 3 x= Something() getattr( x, 'a' ) getattr( x, 'b' ) A: You'll want to use the getattr builtin function. myvar = 'name' //both should produce the same results value = obj.name value = getattr(obj, myvar)
Python's equivalent of $this->$varName
In PHP I can do the following: $myVar = 'name'; print $myClass->$myVar; // Identical to $myClass->name I wish to do this in Python but can't find out how
[ "In python, it's the getattr built-in function.\nclass Something( object ):\n def __init__( self ):\n self.a= 2\n self.b= 3\n\nx= Something()\ngetattr( x, 'a' )\ngetattr( x, 'b' )\n\n", "You'll want to use the getattr builtin function.\nmyvar = 'name'\n\n//both should produce the same results\nvalue = obj.name\nvalue = getattr(obj, myvar)\n\n" ]
[ 16, 5 ]
[]
[]
[ "php", "python" ]
stackoverflow_0000524831_php_python.txt
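Two details worth adding to the answers above: getattr takes an optional default instead of raising AttributeError, and setattr is the write-side counterpart of PHP's $myClass->$myVar = ...; the class below is a hypothetical example.

    class MyClass(object):
        name = 'example'

    obj = MyClass()
    attr = 'name'

    print getattr(obj, attr)             # 'example'
    print getattr(obj, 'missing', None)  # None, no AttributeError raised

    setattr(obj, attr, 'changed')        # like $obj->$attr = 'changed' in PHP
    print obj.name                       # 'changed'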
Q: python coding speed and cleanest Python is pretty clean, and I can code neat apps quickly. But I notice I have some minor error someplace and I dont find the error at compile but at run time. Then I need to change and run the script again. Is there a way to have it break and let me modify and run? Also, I dislike how python has no enums. If I were to write code that needs a lot of enums and types, should I be doing it in C++? It feels like I can do it quicker in C++. A: "I don't find the error at compile but at run time" Correct. True for all non-compiled interpreted languages. "I need to change and run the script again" Also correct. True for all non-compiled interpreted languages. "Is there a way to have it break and let me modify and run?" What? If it's a run-time error, the script breaks, you fix it and run again. If it's not a proper error, but a logic problem of some kind, then the program finishes, but doesn't work correctly. No language can anticipate what you hoped for and break for you. Or perhaps you mean something else. "...code that needs a lot of enums" You'll need to provide examples of code that needs a lot of enums. I've been writing Python for years, and have no use for enums. Indeed, I've been writing C++ with no use for enums either. You'll have to provide code that needs a lot of enums as a specific example. Perhaps in another question along the lines of "What's a Pythonic replacement for all these enums." It's usually polymorphic class definitions, but without an example, it's hard to be sure. A: With interpreted languages you have a lot of freedom. Freedom isn't free here either. While the interpreter won't torture you into dotting every i and crossing every T before it deems your code worthy of a run, it also won't try to statically analyze your code for all those problems. So you have a few choices. 1) {Pyflakes, pychecker, pylint} will do static analysis on your code. That settles the syntax issue mostly. 2) Test-driven development with nosetests or the like will help you. If you make a code change that breaks your existing code, the tests will fail and you will know about it. This is actually better than static analysis and can be as fast. If you test-first, then you will have all your code checked at test runtime instead of program runtime. Note that with 1 & 2 in place you are a bit better off than if you had just a static-typing compiler on your side. Even so, it will not create a proof of correctness. It is possible that your tests may miss some plumbing you need for the app to actually run. If that happens, you fix it by writing more tests usually. But you still need to fire up the app and bang on it to see what tests you should have written and didn't. A: You might want to look into something like nosey, which runs your unit tests periodically when you've saved changes to a file. You could also set up a save-event trigger to run your unit tests in the background whenever you save a file (possible e.g. with Komodo Edit). That said, what I do is bind the F7 key to run unit tests in the current directory and subdirectories, and the F6 key to run pylint on the current file. Frequent use of these allows me to spot errors pretty quickly. A: Python is an interpreted language, there is no compile stage, at least not that is visible to the user. If you get an error, go back, modify the script, and try again. 
If your script has long execution time, and you don't want to stop-restart, you can try a debugger like pdb, using which you can fix some of your errors during runtime. There are a large number of ways in which you can implement enums, a quick google search for "python enums" gives everything you're likely to need. However, you should look into whether or not you really need them, and if there's a better, more 'pythonic' way of doing the same thing.
python coding speed and cleanest
Python is pretty clean, and I can code neat apps quickly. But I notice I have some minor error someplace and I don't find the error at compile time but at run time. Then I need to change and run the script again. Is there a way to have it break and let me modify and run? Also, I dislike how python has no enums. If I were to write code that needs a lot of enums and types, should I be doing it in C++? It feels like I can do it quicker in C++.
[ "\"I don't find the error at compile but at run time\"\nCorrect. True for all non-compiled interpreted languages.\n\"I need to change and run the script again\"\nAlso correct. True for all non-compiled interpreted languages.\n\"Is there a way to have it break and let me modify and run?\"\nWhat?\nIf it's a run-time error, the script breaks, you fix it and run again.\nIf it's not a proper error, but a logic problem of some kind, then the program finishes, but doesn't work correctly. No language can anticipate what you hoped for and break for you.\nOr perhaps you mean something else.\n\"...code that needs a lot of enums\"\nYou'll need to provide examples of code that needs a lot of enums. I've been writing Python for years, and have no use for enums. Indeed, I've been writing C++ with no use for enums either.\nYou'll have to provide code that needs a lot of enums as a specific example. Perhaps in another question along the lines of \"What's a Pythonic replacement for all these enums.\"\nIt's usually polymorphic class definitions, but without an example, it's hard to be sure.\n", "With interpreted languages you have a lot of freedom. Freedom isn't free here either. While the interpreter won't torture you into dotting every i and crossing every T before it deems your code worthy of a run, it also won't try to statically analyze your code for all those problems. So you have a few choices.\n1) {Pyflakes, pychecker, pylint} will do static analysis on your code. That settles the syntax issue mostly. \n2) Test-driven development with nosetests or the like will help you. If you make a code change that breaks your existing code, the tests will fail and you will know about it. This is actually better than static analysis and can be as fast. If you test-first, then you will have all your code checked at test runtime instead of program runtime.\nNote that with 1 & 2 in place you are a bit better off than if you had just a static-typing compiler on your side. Even so, it will not create a proof of correctness. \nIt is possible that your tests may miss some plumbing you need for the app to actually run. If that happens, you fix it by writing more tests usually. But you still need to fire up the app and bang on it to see what tests you should have written and didn't. \n", "You might want to look into something like nosey, which runs your unit tests periodically when you've saved changes to a file. You could also set up a save-event trigger to run your unit tests in the background whenever you save a file (possible e.g. with Komodo Edit).\nThat said, what I do is bind the F7 key to run unit tests in the current directory and subdirectories, and the F6 key to run pylint on the current file. Frequent use of these allows me to spot errors pretty quickly.\n", "Python is an interpreted language, there is no compile stage, at least not that is visible to the user. If you get an error, go back, modify the script, and try again. If your script has long execution time, and you don't want to stop-restart, you can try a debugger like pdb, using which you can fix some of your errors during runtime.\nThere are a large number of ways in which you can implement enums, a quick google search for \"python enums\" gives everything you're likely to need. However, you should look into whether or not you really need them, and if there's a better, more 'pythonic' way of doing the same thing.\n" ]
[ 9, 3, 3, 2 ]
[]
[]
[ "python" ]
stackoverflow_0000525080_python.txt
Q: pythonic replacement for enums In my python script I am parsing a user-created file and typically there will be some errors, and there are cases where I warn the user to be more clear. In C I would have an enum like eAssignBad, eAssignMismatch, eAssignmentSignMix (sign mixed with unsigned). Then I would look the value up to print an error or warning msg. I like having the warningMsg in one place and I like the readability of names rather than literal values. What's a pythonic replacement for this? Duplicate of: How can I represent an 'Enum' in Python? A: Here is one of the best enum implementations I've found so far: http://code.activestate.com/recipes/413486/ But, dare I ask, do you need an enum? You could have a simple dict with your error messages and some integer constants with your error numbers. eAssignBad = 0 eAssignMismatch = 1 eAssignmentSignMix = 2 eAssignErrors = { eAssignBad: 'Bad assignment', eAssignMismatch: 'Mismatched thingy', eAssignmentSignMix: 'Bad sign mixing' } A: You could try making a bunch of exception classes (all subclasses of Exception, perhaps through some common parent class of your own). Each one would have an error message text appropriate for the occasion...
pythonic replacement for enums
In my python script I am parsing a user-created file and typically there will be some errors, and there are cases where I warn the user to be more clear. In C I would have an enum like eAssignBad, eAssignMismatch, eAssignmentSignMix (sign mixed with unsigned). Then I would look the value up to print an error or warning msg. I like having the warningMsg in one place and I like the readability of names rather than literal values. What's a pythonic replacement for this? Duplicate of: How can I represent an 'Enum' in Python?
[ "Here is one of the best enum implementations I've found so far:\nhttp://code.activestate.com/recipes/413486/\nBut, dare I ask, do you need an enum?\nYou could have a simple dict with your error messages and some integer constants with your error numbers.\neAssignBad = 0\neAssignMismatch = 1\neAssignmentSignMix = 2\n\neAssignErrors = {\n eAssignBad: 'Bad assignment',\n eAssignMismatch: 'Mismatched thingy',\n eAssignmentSignMix: 'Bad sign mixing'\n}\n\n", "You could try making a bunch of exception classes (all subclasses of Exception, perhaps through some common parent class of your own). Each one would have an error message text appropriate for the occasion...\n" ]
[ 4, 3 ]
[]
[]
[ "enums", "python" ]
stackoverflow_0000525134_enums_python.txt
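A variation on the dict idea from the first answer that keeps the constants and the messages together in one place, as the question asks for; all names are hypothetical.

    class AssignError(object):
        # Plain class attributes act as readable enum-like constants.
        BAD, MISMATCH, SIGN_MIX = range(3)

        MESSAGES = {
            BAD: 'Bad assignment',
            MISMATCH: 'Mismatched types in assignment',
            SIGN_MIX: 'Signed/unsigned mix in assignment',
        }

    def warn(code):
        print 'warning: %s' % AssignError.MESSAGES[code]

    warn(AssignError.MISMATCH)  # warning: Mismatched types in assignment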
Q: BaseHTTPRequestHandler freezes while writing to self.wfile after installing Python 3.0 I'm starting to lose my head with this one. I have a class that extends BaseHTTPRequestHandler. It works fine on Python 2.5. And yesterday I was curious and decided to install Python 3.0 on my Mac (I followed this tutorial, to be sure I wasn't messing things up: http://farmdev.com/thoughts/66/python-3-0-on-mac-os-x-alongside-2-6-2-5-etc-/ ). I tried my application on Python 3.0 and the code just froze on this line: self.wfile.write(f.read()) I searched and got to this bug http://bugs.python.org/issue3826. I couldn't understand if there's already a fix for that. But, the strangest thing was that, when I tried my application on 2.5, it started freezing on the same spot! I then removed everything I installed from 3.0, fixed the paths, and it still gives me the error. I don't know what else to do. The app works fine on 2.5, because I tried it on another computer. Thanks for your help. A: I would suggest developing a simple page that would dump the version details of the Python environment and confirm that now you are back on 2.5. Mostly in such scenarios there are some environment entries or binaries that are left out. A: I'm sorry, it seems to have been a weird setting between routers (mac <-> router <-> router <-> ISP) here at home. Small files ( < 100kB ) were served with no problem at all, but larger files got stuck. I found it out after formatting my mac, and realized that it was still happening. I tried to remove one of the routers out of the way and it sure works now. The real reason that caused it, and why it worked before and suddenly stopped working after (coincidentally, for sure) installing Python 3.0 is still incognito for me. Thanks for the readers.
BaseHTTPRequestHandler freezes while writing to self.wfile after installing Python 3.0
I'm starting to lose my head with this one. I have a class that extends BaseHTTPRequestHandler. It works fine on Python 2.5. And yesterday I was curious and decided to install Python 3.0 on my Mac (I followed this tutorial, to be sure I wasn't messing things up: http://farmdev.com/thoughts/66/python-3-0-on-mac-os-x-alongside-2-6-2-5-etc-/ ). I tried my application on Python 3.0 and the code just froze on this line: self.wfile.write(f.read()) I searched and got to this bug http://bugs.python.org/issue3826. I couldn't understand if there's already a fix for that. But, the strangest thing was that, when I tried my application on 2.5, it started freezing on the same spot! I then removed everything I installed from 3.0, fixed the paths, and it still gives me the error. I don't know what else to do. The app works fine on 2.5, because I tried it on another computer. Thanks for your help.
[ "I would suggest develop a simple page that would dump the version details of the perl environment and confirm that now you are back on 2.5. Mostly in such scenarios there are some environment entries or binaries that are left out.\n", "I'm sorry, it seems to have been a weird setting between routers (mac <-> router <-> router <-> ISP) here at home.\nSmall files ( < 100kB ) were served with no problem at all, but larger files got stuck. I found it out after formatting my mac, and realized that it was still happening. I tried to remove on of the routers out of the way and it sure works now. The real reason that caused it, and why it worked before and suddently stopped working after (coincidently, for sure) installing Python 3.0 is still incognito for me.\nThanks for the readers.\n" ]
[ 0, 0 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0000523885_python_sockets.txt
Q: Translating Python Regexp to Shell I'm writing an Applescript playlist generator. Part of the process is to read the iTunes Library XML file to get a list of all of the genres in a user's library. This is the python implementation, which works as I'd like: #!/usr/bin/env python # script to get all of the genres from itunes import re,sys,sets ## Boosted from the internet to handle HTML entities in Genre names def unescape(text): def fixup(m): text = m.group(0) if text[:2] == "&#": # character reference try: if text[:3] == "&#x": return unichr(int(text[3:-1], 16)) else: return unichr(int(text[2:-1])) except ValueError: pass else: # named entity try: text = unichr(htmlentitydefs.name2codepoint[text[1:-1]]) except KeyError: pass return text # leave as is return re.sub("&#?\w+;", fixup, text) # probably faster to use a regex than to try to walk # the entire xml document and aggregate the genres try: xml_path = "/Users/%s/Music/iTunes/iTunes Music Library.xml" % sys.argv[1] except: print '\tUsage: python '+sys.argv[0]+' <your OSX username>' raise SystemExit pattern = "<key>Genre</key><string>([^<]+)</string>" try: xml = file(xml_path,'r').read() except: print '\tUnable to load your iTunes Library XML file' raise SystemExit matches = re.findall(pattern,xml) uniques = map(unescape,list(sets.Set(matches))) ## need to write these out somewhere so the applescript can read them sys.stdout.write('|'.join(uniques)) raise SystemExit The problem is, I'd like the Applescript to be self-contained and not require that this additional file be present (I plan on making this available to other people). And, as far as I can tell, Applescript doesn't offer any type of regular expression capabilities out of the box. I could loop over each track in the library to get all of the genres, but this is a prohibitively long process that I already do once when building the playlist. So, I'm looking for alternatives. Since Applescript allows me to run a shell script and capture the results, I imagine that I can accomplish the same behavior using some type of shell command, be it grep, perl, or something else. My *nix command line skills are extremely rusty and I'm looking for some guidance. So, in short, I'd like to find a way to translate the above python code into something I can call directly from the shell and get a similar result. Thanks! A: Why are you using regex to parse XML? Why not use a proper XML library? Python has some great utilities like ElementTree that make walking the DOM a lot easier, and it yields nice, friendly objects rather than untyped strings. Here are some ways of parsing XML using Applescript: Applescript XML Parser (Available since Tiger apparently) XML Tools you can also use with Applescript Remember, just like Applescript can hook into iTunes, it can hook into other installed utilities like these. Lastly, why not just write the whole thing in Python since it has way better development tools for debugging and runs a lot faster. If you're running Leopard, you have Python 2.5.1 pre-installed. A: Is creating a standalone App the Solution ? Look at py2app: py2app, works like py2exe but targets Mac OS See A: If you're already working in AppleScript, why not just ask iTunes directly? tell application "iTunes" to get genre of every track of library playlist 1
Translating Python Regexp to Shell
I'm writing an Applescript playlist generator. Part of the process is to read the iTunes Library XML file to get a list of all of the genres in a user's library. This is the python implementation, which works as I'd like: #!/usr/bin/env python # script to get all of the genres from itunes import re,sys,sets ## Boosted from the internet to handle HTML entities in Genre names def unescape(text): def fixup(m): text = m.group(0) if text[:2] == "&#": # character reference try: if text[:3] == "&#x": return unichr(int(text[3:-1], 16)) else: return unichr(int(text[2:-1])) except ValueError: pass else: # named entity try: text = unichr(htmlentitydefs.name2codepoint[text[1:-1]]) except KeyError: pass return text # leave as is return re.sub("&#?\w+;", fixup, text) # probably faster to use a regex than to try to walk # the entire xml document and aggregate the genres try: xml_path = "/Users/%s/Music/iTunes/iTunes Music Library.xml" % sys.argv[1] except: print '\tUsage: python '+sys.argv[0]+' <your OSX username>' raise SystemExit pattern = "<key>Genre</key><string>([^<]+)</string>" try: xml = file(xml_path,'r').read() except: print '\tUnable to load your iTunes Library XML file' raise SystemExit matches = re.findall(pattern,xml) uniques = map(unescape,list(sets.Set(matches))) ## need to write these out somewhere so the applescript can read them sys.stdout.write('|'.join(uniques)) raise SystemExit The problem is, I'd like the Applescript to be self-contained and not require that this additional file be present (I plan on making this available to other people). And, as far as I can tell, Applescript doesn't offer any type of regular expression capabilities out of the box. I could loop over each track in the library to get all of the genres, but this is a prohibitively long process that I already do once when building the playlist. So, I'm looking for alternatives. Since Applescript allows me to run a shell script and capture the results, I imagine that I can accomplish the same behavior using some type of shell command, be it grep, perl, or something else. My *nix command line skills are extremely rusty and I'm looking for some guidance. So, in short, I'd like to find a way to translate the above python code into something I can call directly from the shell and get a similar result. Thanks!
[ "Why are you using regex to parse XML? Why not use a proper XML library? Python has some great utilities like ElementTree that make walking the DOM a lot easier, and it yields nice, friendly objects rather than untyped strings.\nHere are some ways of parsing XML using Applescript:\nApplescript XML Parser (Available since Tiger apparently)\nXML Tools you can also use with Applescript\nRemember, just like Applescript can hook into iTunes, it can hook into other installed utilities like these.\nLastly, why not just write the whole thing in Python since it has way better development tools for debugging and runs a lot faster. If you're running Leopard, you have Python 2.5.1 pre-installed.\n", "Is creating a standalone App the Solution ?\nLook at py2app:\npy2app, works like py2exe but targets Mac OS\nSee\n", "If you're already working in AppleScript, why not just ask iTunes directly?\ntell application \"iTunes\" to get genre of every track of library playlist 1\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "applescript", "python" ]
stackoverflow_0000514767_applescript_python.txt
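Since the iTunes library XML is a property list, the standard library's plistlib avoids both the regex and the HTML-entity handling entirely. A sketch assuming the usual 'Tracks'/'Genre' layout of that file, with a hypothetical path:

    import plistlib

    path = '/Users/me/Music/iTunes/iTunes Music Library.xml'  # hypothetical
    library = plistlib.readPlist(path)  # Python 2 spelling of the API

    genres = set()
    for track in library['Tracks'].itervalues():
        if 'Genre' in track:
            genres.add(track['Genre'])

    print '|'.join(sorted(genres))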
Q: return eats exception I found the following behavior at least weird: def errors(): try: ErrorErrorError finally: return 10 print errors() # prints: 10 # It should raise: NameError: name 'ErrorErrorError' is not defined The exception disappears when you use return inside a finally clause. Is that a bug? Is that documented anywhere? But the real question (and the answer I will mark as correct) is: What is the python developers' reason to allow that odd behavior? A: The exception disappears when you use return inside a finally clause. .. Is that documented anywhere? It is: If finally is present, it specifies a ‘cleanup’ handler. The try clause is executed, including any except and else clauses. If an exception occurs in any of the clauses and is not handled, the exception is temporarily saved. The finally clause is executed. If there is a saved exception, it is re-raised at the end of the finally clause. If the finally clause raises another exception or executes a return or break statement, the saved exception is lost. A: You asked about the Python developers' reasoning. I can't speak for them, but no other behavior makes sense. A function can either return a value, or it can raise an exception; it can't do both. The purpose of a "finally" clause is to provide cleanup code that is "guaranteed" to be run, regardless of exceptions. By putting a return statement in a finally clause, you have declared that you want to return a value, no matter what, regardless of exceptions. If Python behaved as you are asking and raised the exception, it would be breaking the contract of the "finally" clause (because it would fail to return the value you told it to return). A: Here is an interesting comparison for return in finally block, among - Java/C#/Python/JavaScript: (archive link) Return From Finally Just today I was helping with some bug in Java and came across interesting problem - what happens if you use return within try/catch statement? Should the finally section fire up or not? I simplified the problem to following code snippet: What does the following code print out? class ReturnFromFinally { public static int a() { try { return 1; } catch (Exception e) {} finally{ return 2; } } public static void main(String[] args) { System.out.println(a()); } } My initial guess would be, that it should print out 1, I'm calling return, so I assume, one will be returned. However, it is not the case: I understand the logic, finally section has to be executed, but somehow I feel uneasy about this. Let's see what C# does in this case: class ReturnFromFinally { public static int a() { try { return 1; } catch (System.Exception e) {} finally { return 2; } } public static void Main(string[] args) { System.Console.WriteLine(a()); } } I prefer much rather this behavior, control flow cannot be messed with in finally clause, so it prevents us from shooting ourself in the foot. Just for the sake of completeness, let's check what other languages do. Python: def a(): try: return 1 finally: return 2 print a() JavaScript: <script> function ReturnFromFinally() { try { return 1; } catch (e) { } finally { return 2; } } </script> <a onclick="alert(ReturnFromFinally());">Click here</a> There is no finally clause in C++ and PHP, so I can't try out the last two languages I have compiler/interpreter for. Our little experiment nicely showed, that C# has the nicest approach to this problem, but I was quite surprised to learn, that all the other languages handle the problem the same way. A: Returning from a finally is not a good idea. 
I know C# specifically forbids doing this.
return eats exception
I found the following behavior at least weird: def errors(): try: ErrorErrorError finally: return 10 print errors() # prints: 10 # It should raise: NameError: name 'ErrorErrorError' is not defined The exception disappears when you use return inside a finally clause. Is that a bug? Is that documented anywhere? But the real question (and the answer I will mark as correct) is: What is the python developers' reason to allow that odd behavior?
[ "\nThe exception disappears when you use return inside a finally clause. .. Is that documented anywhere?\n\nIt is:\n\nIf finally is present, it specifies a ‘cleanup’ handler. The try clause is executed, including any except and else clauses. If an exception occurs in any of the clauses and is not handled, the exception is temporarily saved. The finally clause is executed. If there is a saved exception, it is re-raised at the end of the finally clause. If the finally clause raises another exception or executes a return or break statement, the saved exception is lost.\n\n", "You asked about the Python developers' reasoning. I can't speak for them, but no other behavior makes sense. A function can either return a value, or it can raise an exception; it can't do both. The purpose of a \"finally\" clause is to provide cleanup code that is \"guaranteed\" to be run, regardless of exceptions. By putting a return statement in a finally clause, you have declared that you want to return a value, no matter what, regardless of exceptions. If Python behaved as you are asking and raised the exception, it would be breaking the contract of the \"finally\" clause (because it would fail to return the value you told it to return).\n", "Here is an interesting comparison for return in finally block, among - Java/C#/Python/JavaScript: (archive link)\n\nReturn From Finally\nJust today I was helping with some bug in Java and came across\n interesting problem - what happens if you use return within try/catch\n statement? Should the finally section fire up or not? I simplified the\n problem to following code snippet:\nWhat does the following code print out?\nclass ReturnFromFinally { \n public static int a() { \n try { \n return 1; \n } \n catch (Exception e) {} \n finally{ \n return 2; \n } \n } \n\n public static void main(String[] args) { \n System.out.println(a()); \n } \n} \n\nMy initial guess would be, that it should print out 1, I'm calling\n return, so I assume, one will be returned. However, it is not the\n case:\n\nI understand the logic, finally section has to be executed, but\n somehow I feel uneasy about this. Let's see what C# does in this case:\nclass ReturnFromFinally \n{ \n public static int a() \n { \n try { \n return 1; \n } \n catch (System.Exception e) {} \n finally \n { \n return 2; \n } \n } \n\n public static void Main(string[] args) \n { \n System.Console.WriteLine(a()); \n } \n} \n\n\nI prefer much rather this behavior, control flow cannot be messed with\n in finally clause, so it prevents us from shooting ourself in the\n foot. Just for the sake of completeness, let's check what other\n languages do.\nPython:\ndef a(): \n try: \n return 1 \n finally: \n return 2 \nprint a() \n\n\nJavaScript:\n<script> \nfunction ReturnFromFinally() \n{ \n try \n { \n return 1; \n } \n catch (e) \n { \n } \n finally \n { \n return 2; \n } \n} \n</script> \n<a onclick=\"alert(ReturnFromFinally());\">Click here</a> \n\n\nThere is no finally clause in C++ and PHP, so I can't try out the last\n two languages I have compiler/interpreter for.\nOur little experiment nicely showed, that C# has the nicest approach\n to this problem, but I was quite surprised to learn, that all the\n other languages handle the problem the same way.\n\n", "Returning from a finally is not a good idea. I know C# specifically forbids doing this.\n" ]
[ 53, 36, 5, 2 ]
[]
[]
[ "exception", "finally", "python", "return" ]
stackoverflow_0000517060_exception_finally_python_return.txt
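The flip side, for contrast: a finally clause that only cleans up, without a return, lets the saved exception re-raise exactly as the documentation quoted above describes.

    def cleanup_only():
        try:
            ErrorErrorError
        finally:
            # No return here, so the saved NameError is re-raised
            # once this cleanup code has run.
            print 'cleaning up'

    try:
        cleanup_only()
    except NameError, e:  # Python 2 except syntax
        print 'propagated:', e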
Q: NumPy, PIL adding an image I'm trying to add two images together using NumPy and PIL. The way I would do this in MATLAB would be something like: >> M1 = imread('_1.jpg'); >> M2 = imread('_2.jpg'); >> resM = M1 + M2; >> imwrite(resM, 'res.jpg'); I get something like this: alt text http://www.deadlink.cc/matlab.jpg Using a compositing program and adding the images the MATLAB result seems to be right. In Python I'm trying to do the same thing like this: from PIL import Image from numpy import * im1 = Image.open('/Users/rem7/Desktop/_1.jpg') im2 = Image.open('/Users/rem7/Desktop/_2.jpg') im1arr = asarray(im1) im2arr = asarray(im2) addition = im1arr + im2arr resultImage = Image.fromarray(addition) resultImage.save('/Users/rem7/Desktop/a.jpg') and I get something like this: alt text http://www.deadlink.cc/python.jpg Why am I getting all those funky colors? I also tried using ImageMath.eval("a+b", a=im1, b=im2), but I get an error about RGB unsupported. I also saw that there is an Image.blend() but that requires an alpha. What's the best way to achieve what I'm looking for? Source Images (images have been removed): alt text http://www.deadlink.cc/_1.jpg alt text http://www.deadlink.cc/_2.jpg Humm, OK, well I added the source images using the add image icon and they show up when I'm editing the post, but for some reason the images don't show up in the post. (images have been removed) 2013 05 09 A: As everyone suggested already, the weird colors you're observing are overflow. And as you point out in the comment of schnaader's answer you still get overflow if you add your images like this: addition=(im1arr+im2arr)/2 The reason for this overflow is that your NumPy arrays (im1arr im2arr) are of the uint8 type (i.e. 8-bit). This means each element of the array can only hold values up to 255, so when your sum exceeds 255, it loops back around 0: >>>array([255,10,100],dtype='uint8') + array([1,10,160],dtype='uint8') array([ 0, 20, 4], dtype=uint8) To avoid overflow, your arrays should be able to contain values beyond 255. You need to convert them to floats for instance, perform the blending operation and convert the result back to uint8: im1arrF = im1arr.astype('float') im2arrF = im2arr.astype('float') additionF = (im1arrF+im2arrF)/2 addition = additionF.astype('uint8') You should not do this: addition = im1arr/2 + im2arr/2 as you lose information, by squashing the dynamic of the image (you effectively make the images 7-bit) before you perform the blending information. MATLAB note: the reason you don't see this problem in MATLAB, is probably because MATLAB takes care of the overflow implicitly in one of its functions. A: Using PIL's blend() with an alpha value of 0.5 would be equivalent to (im1arr + im2arr)/2. Blend does not require that the images have alpha layers. Try this: from PIL import Image im1 = Image.open('/Users/rem7/Desktop/_1.jpg') im2 = Image.open('/Users/rem7/Desktop/_2.jpg') Image.blend(im1,im2,0.5).save('/Users/rem7/Desktop/a.jpg') A: It seems the code you posted just sums up the values and values bigger than 256 are overflowing. You want something like "(a + b) / 2" or "min(a + b, 256)". The latter seems to be the way that your Matlab example does it. A: To clamp numpy array values: >>> c = a + b >>> c[c > 256] = 256 A: Your sample images are not showing up form me so I am going to do a bit of guessing. I can't remember exactly how the numpy to pil conversion works but there are two likely cases. I am 95% sure it is 1 but am giving 2 just in case I am wrong. 
1) 1 im1Arr is a MxN array of integers (ARGB) and when you add im1arr and im2arr together you are overflowing from one channel into the next if the components b1+b2>255. I am guessing matlab represents their images as MxNx3 arrays so each color channel is separate. You can solve this by splitting the PIL image channels and then making numpy arrays 2) 1 im1Arr is a MxNx3 array of bytes and when you add im1arr and im2arr together you are wrapping the component around. You are also going to have to rescale the range back to between 0-255 before displaying. Your choices are divide by 2, scale by 255/array.max() or do a clip. I don't know what matlab does
NumPy, PIL adding an image
I'm trying to add two images together using NumPy and PIL. The way I would do this in MATLAB would be something like: >> M1 = imread('_1.jpg'); >> M2 = imread('_2.jpg'); >> resM = M1 + M2; >> imwrite(resM, 'res.jpg'); I get something like this: alt text http://www.deadlink.cc/matlab.jpg Using a compositing program and adding the images the MATLAB result seems to be right. In Python I'm trying to do the same thing like this: from PIL import Image from numpy import * im1 = Image.open('/Users/rem7/Desktop/_1.jpg') im2 = Image.open('/Users/rem7/Desktop/_2.jpg') im1arr = asarray(im1) im2arr = asarray(im2) addition = im1arr + im2arr resultImage = Image.fromarray(addition) resultImage.save('/Users/rem7/Desktop/a.jpg') and I get something like this: alt text http://www.deadlink.cc/python.jpg Why am I getting all those funky colors? I also tried using ImageMath.eval("a+b", a=im1, b=im2), but I get an error about RGB unsupported. I also saw that there is an Image.blend() but that requires an alpha. What's the best way to achieve what I'm looking for? Source Images (images have been removed): alt text http://www.deadlink.cc/_1.jpg alt text http://www.deadlink.cc/_2.jpg Humm, OK, well I added the source images using the add image icon and they show up when I'm editing the post, but for some reason the images don't show up in the post. (images have been removed) 2013 05 09
[ "As everyone suggested already, the weird colors you're observing are overflow. And as you point out in the comment of schnaader's answer you still get overflow if you add your images like this:\naddition=(im1arr+im2arr)/2\n\nThe reason for this overflow is that your NumPy arrays (im1arr im2arr) are of the uint8 type (i.e. 8-bit). This means each element of the array can only hold values up to 255, so when your sum exceeds 255, it loops back around 0:\n>>>array([255,10,100],dtype='uint8') + array([1,10,160],dtype='uint8')\narray([ 0, 20, 4], dtype=uint8)\n\nTo avoid overflow, your arrays should be able to contain values beyond 255. You need to convert them to floats for instance, perform the blending operation and convert the result back to uint8:\nim1arrF = im1arr.astype('float')\nim2arrF = im2arr.astype('float')\nadditionF = (im1arrF+im2arrF)/2\naddition = additionF.astype('uint8')\n\nYou should not do this:\naddition = im1arr/2 + im2arr/2\n\nas you lose information, by squashing the dynamic of the image (you effectively make the images 7-bit) before you perform the blending information.\nMATLAB note: the reason you don't see this problem in MATLAB, is probably because MATLAB takes care of the overflow implicitly in one of its functions.\n", "Using PIL's blend() with an alpha value of 0.5 would be equivalent to (im1arr + im2arr)/2. Blend does not require that the images have alpha layers.\nTry this:\nfrom PIL import Image\nim1 = Image.open('/Users/rem7/Desktop/_1.jpg')\nim2 = Image.open('/Users/rem7/Desktop/_2.jpg')\nImage.blend(im1,im2,0.5).save('/Users/rem7/Desktop/a.jpg')\n\n", "It seems the code you posted just sums up the values and values bigger than 256 are overflowing. You want something like \"(a + b) / 2\" or \"min(a + b, 256)\". The latter seems to be the way that your Matlab example does it.\n", "To clamp numpy array values:\n>>> c = a + b\n>>> c[c > 256] = 256\n\n", "Your sample images are not showing up form me so I am going to do a bit of guessing.\nI can't remember exactly how the numpy to pil conversion works but there are two likely cases. I am 95% sure it is 1 but am giving 2 just in case I am wrong.\n 1) 1 im1Arr is a MxN array of integers (ARGB) and when you add im1arr and im2arr together you are overflowing from one channel into the next if the components b1+b2>255. I am guessing matlab represents their images as MxNx3 arrays so each color channel is separate. You can solve this by splitting the PIL image channels and then making numpy arrays\n2) 1 im1Arr is a MxNx3 array of bytes and when you add im1arr and im2arr together you are wrapping the component around. \nYou are also going to have to rescale the range back to between 0-255 before displaying. Your choices are divide by 2, scale by 255/array.max() or do a clip. I don't know what matlab does\n" ]
[ 34, 20, 2, 2, 0 ]
[]
[]
[ "image_processing", "numpy", "python", "python_imaging_library" ]
stackoverflow_0000524930_image_processing_numpy_python_python_imaging_library.txt
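A minimal sketch of the overflow-safe blend described in the accepted answer above; the filenames are placeholders, and it assumes NumPy plus a PIL version that provides Image.fromarray (1.1.6 or later):

import numpy as np
from PIL import Image

# Load both images as uint8 arrays (each channel holds 0-255)
im1arr = np.asarray(Image.open('_1.jpg'))
im2arr = np.asarray(Image.open('_2.jpg'))

# Promote to a wider type BEFORE adding, so 200 + 200 does not wrap to 144
blended = (im1arr.astype('float') + im2arr.astype('float')) / 2

# Convert back to uint8 for saving
Image.fromarray(blended.astype('uint8')).save('res.jpg')

For a straight sum rather than an average, clip instead: np.minimum(im1arr.astype('int16') + im2arr.astype('int16'), 255) before the uint8 conversion.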
Q: subprocess.Popen error I am running an MSI installer in silent mode and capturing the logs in a specific file. The following is the command I need to execute. C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log" I used: subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] to execute the command; however, it does not recognise the options and gives an error about a wrong option being selected. I have cross-verified and found that the command only works this way. A: The problem is very subtle. You're executing the program directly. It gets: argv[0] = "C:\Program Files\ My Installer\Setup.exe" argv[1] = /s /v "/qn /lv %TEMP%\log_silent.log" Whereas it should be: argv[1] = "/s" argv[2] = "/v" argv[3] = "/qn" argv[4] = "/lv %TEMP%\log_silent.log" In other words, it should receive 5 arguments, not 2 arguments. Also, %TEMP% is directly unknown to the program! There are 2 ways to fix this problem: Calling the shell. p = subprocess.Popen('C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log"', shell=True) output = p.communicate()[0] Directly call the program (safer) s = ['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '/qn', '/lv %TEMP%\log_silent.log'] safes = [os.path.expandvars(p) for p in s] p = subprocess.Popen(safes) output = p.communicate()[0] A: The problem is that you effectively supply Setup.exe with only one argument. Don't think in terms of the shell; the string you hand over as an argument does not get split on spaces any more, that's your duty! So, if you are absolutely sure that "/qn /lv %TEMP%\log_silent.log" should be one argument, then use this: subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '/qn /lv %TEMP%\log_silent.log'],stdout=subprocess.PIPE).communicate()[0] Otherwise (I guess this one will be correct), use this: subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '/qn', '/lv', '%TEMP%\log_silent.log'],stdout=subprocess.PIPE).communicate()[0] A: Try putting each argument in its own string (reformatted for readability): cmd = ['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '"/qn', '/lv', '%TEMP%\log_silent.log"'] subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0] I have to say though, those double quotes do not look in the right places to me. A: You said: subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] Is the directory name really " My Installer" (with a leading space)? Also, as a general rule, you should use forward slashes in path specifications. Python should handle them seamlessly (even on Windows) and you avoid any problems with Python interpreting backslashes as escape characters. (for example: >>> s = 'c:\program files\norton antivirus' >>> print s c:\program files orton antivirus )
subprocess.Popen error
I am running an MSI installer in silent mode and capturing the logs in a specific file. The following is the command I need to execute. C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log" I used: subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] to execute the command; however, it does not recognise the options and gives an error about a wrong option being selected. I have cross-verified and found that the command only works this way.
[ "The problem is very subtle.\nYou're executing the program directly. It gets:\nargv[0] = \"C:\\Program Files\\ My Installer\\Setup.exe\"\nargv[1] = /s /v \"/qn /lv %TEMP%\\log_silent.log\"\n\nWhereas it should be:\nargv[1] = \"/s\"\nargv[2] = \"/v\"\nargv[3] = \"/qn\"\nargv[4] = \"/lv %TEMP%\\log_silent.log\"\n\nIn other words, it should receive 5 arguments, not 2 arguments.\nAlso, %TEMP% is directly unknown to the program!\nThere are 2 ways to fix this problem:\n\nCalling the shell.\np = subprocess.Popen('C:\\Program Files\\ My Installer\\Setup.exe /s /v \"/qn /lv %TEMP%\\log_silent.log\"', shell=True)\noutput = p.communicate()[0]\n\nDirectly call program (more safer)\ns = ['C:\\Program Files\\ My Installer\\Setup.exe', '/s /v \"/qn /lv %TEMP%\\log_silent.log\"']\nsafes = [os.path.expandvars(p) for p in argument_string]\np = subprocess.Popen(safes[0], safes[1:])\noutput = p.communicate()[0]\n\n\n", "The problem is that you effectively supply Setup.exe with only one argument. Don't think in terms of the shell, the string you hand over as an argument does not get splitted on spaces anymore, that's your duty!\nSo, if you are absolutely sure that \"/qn /lv %TEMP%\\log_silent.log\" should be one argument, then use this:\nsubprocess.Popen(['C:\\Program Files\\ My Installer\\Setup.exe', '/s', '/v', '/qn /lv %TEMP%\\log_silent.log'],stdout=subprocess.PIPE).communicate()[0]\n\nOtherwise (I guess this one will be correct), use this:\nsubprocess.Popen(['C:\\Program Files\\ My Installer\\Setup.exe', '/s', '/v', '/qn', '/lv', '%TEMP%\\log_silent.log'],stdout=subprocess.PIPE).communicate()[0]\n\n", "Try putting each argument in its own string (reformatted for readability):\ncmd = ['C:\\Program Files\\ My Installer\\Setup.exe',\n '/s',\n '/v',\n '\"/qn',\n '/lv',\n '%TEMP%\\log_silent.log\"']\n\nsubprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]\n\nI have to say though, those double quotes do not look in the right places to me.\n", "You said:\nsubprocess.Popen(['C:\\Program Files\\ My Installer\\Setup.exe', '/s /v \"/qn /lv %TEMP%\\log_silent.log\"'],stdout=subprocess.PIPE).communicate()[0]\n\nIs the directory name really \" My Installer\" (with a leading space)?\nAlso, as a general rule, you should use forward slashes in path specifications. Python should handle them seamlessly (even on Windows) and you avoid any problems with python interpreting backslashes as escape characters.\n(for example:\n>>> s = 'c:\\program files\\norton antivirus'\n>>> print s\nc:\\program files\norton antivirus\n\n)\n" ]
[ 9, 2, 0, 0 ]
[]
[]
[ "popen", "python", "subprocess" ]
stackoverflow_0000526734_popen_python_subprocess.txt
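Putting the answers together, here is a hedged sketch of the fully split argument list with %TEMP% expanded on the Python side; the path and switches are copied from the question and may need adjusting for the real installer:

import os
import subprocess

# One list element per argument: nothing gets re-split on spaces, and
# os.path.expandvars() substitutes %TEMP%, which the child would not do.
cmd = [
    r'C:\Program Files\ My Installer\Setup.exe',
    '/s',
    '/v',
    os.path.expandvars(r'/qn /lv %TEMP%\log_silent.log'),
]
output = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]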
Q: How to run included tests on deployed pylons application I have installed a Pylons-based application from an egg, so it sits somewhere under /usr/lib/python2.5/site-packages. I see that the tests are packaged too, and I would like to run them (to catch a problem that shows up on the deployed application but not on the development version). So how do I run them? Doing "nosetests" from a directory containing only test.ini and development.ini gives an error about a nonexistent test.ini under site-packages. A: Straight from the horse's mouth: Install nose: easy_install -W nose. Run nose: nosetests --with-pylons=test.ini OR python setup.py nosetests To run "python setup.py nosetests" you need to have a [nosetests] block in your setup.cfg looking like this: [nosetests] verbose=True verbosity=2 with-pylons=test.ini detailed-errors=1 with-doctest=True
How to run included tests on deployed pylons application
I have installed a Pylons-based application from an egg, so it sits somewhere under /usr/lib/python2.5/site-packages. I see that the tests are packaged too, and I would like to run them (to catch a problem that shows up on the deployed application but not on the development version). So how do I run them? Doing "nosetests" from a directory containing only test.ini and development.ini gives an error about a nonexistent test.ini under site-packages.
[ "Straight from the horse's mouth:\nInstall nose: easy_install -W nose.\nRun nose: nosetests --with-pylons=test.ini OR python setup.py nosetests\nTo run \"python setup.py nosetests\" you need to have a [nosetests] block in your setup.cfg looking like this:\n\n[nosetests]\nverbose=True\nverbosity=2\nwith-pylons=test.ini\ndetailed-errors=1\nwith-doctest=True\n\n" ]
[ 1 ]
[]
[]
[ "nose", "paster", "pylons", "python", "unit_testing" ]
stackoverflow_0000188417_nose_paster_pylons_python_unit_testing.txt
Q: Need instructions for Reversi game I am trying to write a Reversi game in Python. Can anyone give me some basic ideas and strategies that are simple, good, and easy to use? I would appreciate any help, because I've gotten a little way in but am stuck between pieces of code, and it has all become more complex than it should be. I think I overdid some parts that should be fairly simple. So.... A: Reversi is an elegantly simple game. I'm going to use a pseudo C#/Java language to explain some concepts, but you can transpose them to Python. To break it down into its most simple components, you have two basic things: A 2 dimensional array that represents the game board: gameBoard[10,10] And some form of enumeration that stores the state of each tile in the gameboard: enum tile { none, white, black } To render the board, you loop through the gameBoard array, increasing by an offset of the piece size: for (int i = 0; i < 10; i++) { for (int j = 0; j < 10; j++) { // The Piece to draw would be at gameBoard[i,j]; // Pixel locations are calculated by multiplying the array location by an offset. DrawPiece(gameBoard[i,j],i * Width of Tile, j * width of tile); } } Likewise, resolving a mouse click back to a location in the array would be similar: use the mouse location and offset to calculate the actual tile you are on. Each time a tile is placed, you scan the entire array, and apply a simple rules-based engine on what the new colors should be. (This is the real challenge, and I'll leave it up to you.) The AI can take advantage of doing this array scan with hypothetical moves; have it scan 10 or so possible moves, then choose the one that produces the best outcome. Try not to make it too smart, as it's easy to make an unbeatable AI when you let it play out the entire game in its head. When there are no longer any free locations in the array, you end the game. A: The wikipedia page has all of the rules and some decent strategy advice for reversi/othello. Basically, you need some sort of data structure to represent board state, that is to say the position of all the pieces on the board at any point in the game. As suggested by others, a 2d array is probably a decent choice, but it doesn't really matter so long as it is a representation that makes sense to you. Some of the hard stuff is figuring out which spaces are valid moves and then which pieces to flip over but, again, the wikipedia page has all of the details so it shouldn't be too hard to implement. If you want to create an AI for your game, then I would suggest looking at some sort of minimax type algorithm with Alpha-Beta pruning. There are a ton of resources on the web for these and an AI that uses minimax with a decent evaluation function will be able to beat most human players pretty easily, as it can look at least 8 or 9 moves ahead in very little time. There are some other fancier variants on minimax, like negamax or negascout that can do even better than basic minimax, but I'd start with the simpler ones. Wikipedia has pages on all of these algorithms and there is a ton of information on all of them as many AI courses use them for Othello or something similar. One page that is particularly useful is this Java Applet. It allows you to step through the steps of minimax and negamax on a sample state tree with and without alpha-beta pruning. If none of this makes sense, let me know. A: You'll need a 2D array. Beware of [[0] * 8] * 8, instead use [[0 for _ in [0] * 8] for _ in [0] * 8] White should be 1 and black -1 (or vice versa, of course). This way you can do flips with *=-1 and keep blanks blank. A double for loop will be able to total scores and determine if the game is done pretty well. map(sum,map(sum,board)) will give you the net score. Don't forget to check and see if the player can even move at the beginning of a round A: You may also wish to consider the application of a "fuzzy logic" loop to analyze positions. Reversi/Othello is notorious for forcing players to consider certain strategic gains against strategic losses for each move, as well as prioritizing one positive move over another. A fuzzy system would give you greater control over testing move selection by setting various settings against each other, as well as giving you the ability to create multiple "personalities" to play against by shifting the various weights. A: Don't get suckered into the multi-dim array solution. I've already written a tic-tac-toe with a multi-dim and it worked alright, but then when I started my own version of othello I did a linear integer array and it's almost 2x faster. Alternative/faster methods of converting an integer to a cartesian coordinate? I've got everything mostly coded out in PHP but am still researching ideas for the AI, as a brute-force solution like in Tic-tac-toe isn't going to work. A: You don't even need a linear array. Two 64-bit Java long values suffice (one for the white pieces, one for the black; assert (white&black)==0). You can make play stronger by counting not pieces that are joined to a corner, but pieces that cannot be taken.
Need instructions for Reversi game
I am trying to write a Reversi game in Python. Can anyone give me some basic ideas and strategies that are simple, good, and easy to use? I would appreciate any help, because I've gotten a little way in but am stuck between pieces of code, and it has all become more complex than it should be. I think I overdid some parts that should be fairly simple. So....
[ "Reversi is an elegantly simple game. I'm going to use a psuedo C#/Java langauge to explain some concepts, but you can transpose them to Python.\nTo break it down into its most simple compnents, you have two basic things:\nA 2 dimensional array that represents the game board:\ngameBoard[10,10]\n\nAnd some form of enumaration that stores the state of each tile in the gameboard:\nenum tile\n{\n none,\n white,\n black\n}\n\nTo render the board, you loop through the gameBoard array, increasing by an offset of the piece size:\nfor (int i = 0; i < 10; i++)\n{\n for (int j = 0; j < 10; j++)\n {\n // The Piece to draw would be at gameBoard[i,j];\n // Pixel locations are calculated by multiplying the array location by an offset.\n DrawPiece(gameBoard[i,j],i * Width of Tile, j * width of tile);\n }\n}\n\nLikewise, resolving a mouse click back to a location in the array would be similar, use the mouse location and offset to calculate the actual tile you are on.\nEach time a tile is placed, you scan the entire array, and apply a simple rules based engine on what the new colors should be. (This is the real challenge, and I'll leave it up to you.)\nThe AI can take advantage of doing this array scan with hypothetical moves, have it scan 10 or so possible moves, then choose the one that produces the best outcome. Try to not make it to smart, as its easy to make an unbeatable AI when you let it play out the entire game in its head.\nWhen there are no longer any free locations in the array, You end the game.\n", "The wikipedia page has all of the rules and some decent strategy advice for reversi/othello. Basically, you need some sort of data structure to represent board state, that is to say the position of all the pieces on the board at any point in the game. As suggested by others, a 2d array is probably a decent choice, but it doesn't really matter so long as it is a representation that makes sense to you. Some of the hard stuff is figuring out which spaces are valid moves and then which pieces to flip over but, again, the wikipedia page has all of the details so it shouldn't be too hard to implement.\nIf you want to create an AI for your game, then I would suggest look at some sort of minimax type algorithm with Alpha-Beta pruning. There are a ton of resources on the web for these and an ai that uses minimax with a decent evaluation function will be able to beat most human players pretty easily, as it can look at least 8 or 9 moves ahead in very little time. There are some other fancier variants on minimax, like negamax or negascout that can do even better than basic minimax, but I'd start with the simpler ones. Wikipedia has pages on all of these algorithms and there is a ton of information on all of them as many AI courses use them for Othello or something similar. One page that is particularly useful is this Java Applet. It allows you to step through the steps of minimax and negamax on a sample state tree with and without alpha-beta pruning. \nIf none of this makes sense, let me know.\n", "You'll need a 2D array. Beware of [[0] * 8] * 8, instead use [[0 for _ in [0] * 8] for _ in [0] * 8]\nWhite should be 1 and black -1 (Or vice versa, of course). This way you can do flips with *=-1 and keep blank blank\nDouble four loops will be able to total scores and determine if the game is done pretty well. 
map(sum,map(sum,board)) will give you the net score\nDon't forget to check and see if the player can even move at the beginning of a round\n", "You may also wish to consider the application of a \"fuzzy logic\" loop to analyze positions. Reversi/Othello is notorious for forcing players to consider certain strategic gains against strategic losses for each move, as well as prioritizing one positive move over another.\nA fuzzy system would give you greater control over testing move selection by setting various settings against each other, as well as giving you the ability to create multiple \"personalities\" to play against by shifting the various weights.\n", "Don't get suckered into the multi-dim array solution. I've already written a tic-tac-toe with a multi-dim and it worked alright, but then when I started my own version of othello I did a linear integer array and its almost 2x faster.\nAlternative/faster methods of converting an integer to a cartesian coordinate?\nI've got everything mostly coded out in PHP but still researching Idea's for the AI as a brute force solution like in Tic-tac-toe isn't going to work.\n", "You don't even need a linear array. Two 64-bit java long values suffice (one for the white pieces, one for the black. assert (white&black)==0 .\nYou can make play stronger by counting not pieces that are joined to a corner, but pieces that cannot be taken.\n" ]
[ 4, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "reversi" ]
stackoverflow_0000315435_python_reversi.txt
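The first answer leaves the rules engine as an exercise, so here is a hedged Python sketch of that step. It assumes the 1/-1/0 board encoding suggested above; the names and the 8x8 size are illustrative:

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def flips_for_move(board, row, col, player):
    """Return the opponent pieces that playing (row, col) would flip."""
    flipped = []
    for dr, dc in DIRECTIONS:
        run = []
        r, c = row + dr, col + dc
        # Walk along a run of opponent pieces...
        while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == -player:
            run.append((r, c))
            r, c = r + dr, c + dc
        # ...and keep it only if the run ends on one of our own pieces.
        if run and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == player:
            flipped.extend(run)
    return flipped

A square is a legal move exactly when its cell is 0 and this function returns a non-empty list; applying the move means setting board[r][c] = player for the played square and every returned coordinate.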
Q: python console interrupt? and cross-platform threads I want my app to loop in Python but have a way to quit. Is there a way to get input from the console, scan it for the letter q, and quit when my app is ready to quit? In C I would just create a pthread that waits for cin, scans, locks a global quit var, changes it, unlocks, and exits the thread, allowing my app to quit when it's done dumping a file or whatever it is doing. Do I do this the same way in Python, and will it be cross-platform? (I see a global single instance in Python that was Windows-specific) A: Use the threading module to make a thread class. import threading class foo(threading.Thread): def __init__(self): threading.Thread.__init__(self) # initialize anything else here def run(self): while True: button = raw_input("input something") class bar: def __init__(self): self.thread = foo() # initialize the thread (foo) class and store it self.thread.start() # this starts the loop in the new thread (the run method) if quit: pass # quit here A: Creating a new thread is easy enough – the threading module will help you out. You may want to make it daemonic (if you have other ways of exiting your program). I think you can change a variable without locking, too – python implements its own threads, and I'm fairly sure something like self.running = False will be atomic. The simplest way to kick off a new thread is with threading.Thread(target=): # inside your class definition def signal_done(self): self.done = True def watcher(self): while True: if q_typed_in_console(): self.signal_done() return def start_watcher(self): t = threading.Thread(target=self.watcher) t.setDaemon(True) # Optional; means thread will exit when main thread does t.start() def main(self): while not self.done: # etc. If you want your thread to be smarter, have its own state, etc. you can subclass threading.Thread yourself. The docs have more. [related to this: the python executable itself is single-threaded, even if you have multiple python threads]
python console interrupt? and cross-platform threads
I want my app to loop in Python but have a way to quit. Is there a way to get input from the console, scan it for the letter q, and quit when my app is ready to quit? In C I would just create a pthread that waits for cin, scans, locks a global quit var, changes it, unlocks, and exits the thread, allowing my app to quit when it's done dumping a file or whatever it is doing. Do I do this the same way in Python, and will it be cross-platform? (I see a global single instance in Python that was Windows-specific)
[ "use the threading module to make a thread class.\nimport threading;\n\nclass foo(threading.Thread):\n def __init__(self):\n #initialize anything\n def run(self):\n while True:\n str = raw_input(\"input something\");\n\nclass bar:\n def __init__(self)\n self.thread = foo(); #initialize the thread (foo) class and store\n self.thread.start(); #this command will start the loop in the new thread (the run method)\n if(quit):\n #quit\n\n", "Creating a new thread is easy enough – the threading module will help you out. You may want to make it daemonic (if you have other ways of exiting your program). I think you can change a variable without locking, too – python implements its own threads, and I'm fairly sure something like self.running = False will be atomic.\nThe simplest way to kick off a new thread is with threading.Thread(target=):\n# inside your class definition\ndef signal_done(self):\n self.done = True\n\ndef watcher(self):\n while True:\n if q_typed_in_console():\n self.signal_done()\n return\n\ndef start_watcher(self):\n t = threading.Thread(target=self.watcher)\n t.setDaemon(True) # Optional; means thread will exit when main thread does\n t.start()\n\ndef main(self):\n while not self.done:\n # etc.\n\nIf you want your thread to be smarter, have its own state, etc. you can subclass threading.Thread yourself. The docs have more.\n[related to this: the python executable itself is single-threaded, even if you have multiple python threads]\n" ]
[ 1, 1 ]
[]
[]
[ "console", "multithreading", "python", "quit" ]
stackoverflow_0000526955_console_multithreading_python_quit.txt
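A compact sketch of the daemonic-watcher idea from the answers above, using a threading.Event as the quit flag. The names are illustrative, and it assumes Python 2 to match the raw_input usage in this thread:

import threading
import time

quit_event = threading.Event()

def watch_console():
    # Blocks on raw_input in its own thread, so the main loop keeps running
    while not quit_event.isSet():
        if raw_input().strip().lower() == 'q':
            quit_event.set()

watcher = threading.Thread(target=watch_console)
watcher.setDaemon(True)  # lets the process exit even while raw_input blocks
watcher.start()

while not quit_event.isSet():
    print 'working...'
    time.sleep(0.2)  # poll the flag often enough to quit promptly
print 'done'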
Q: Dynamic data in postgresql I intend to have a python script do many UPDATEs per second on 2,433,000 rows. I am currently trying to keep the dynamic column in python as a value in a python dict. Yet keeping my python dict synchronized with changes in the other columns is becoming more and more difficult or nonviable. I know I could put the autovacuum on overdrive, but I wonder if this would be enough to catch up with the sheer amount of UPDATEs. If only I could associate a python variable with each row... I fear that the VACUUM and disk-write overhead will kill my server. Any suggestions on how to associate extremely dynamic variables with rows/keys? Thanks A: PostgreSQL supports asynchronous notifications using the LISTEN and NOTIFY commands. An application (client) LISTENs for a notification using a notification name (e.g. "table_updated"). The database itself can be made to issue notifications either manually, i.e. in the code that performs the insertions or modifications (useful when a large number of updates are made, allowing for batch notifications), or automatically inside a row update TRIGGER. You could use such notifications to keep your data structures up to date. Alternatively (or you can use this in combination with the above), you can customize your Python dictionary by overriding the __getitem__(), has_key(), and __contains__() methods and have them perform lookups as needed, allowing you to cache the results using timeouts etc.
Dynamic data in postgresql
I intend to have a python script do many UPDATEs per second on 2,433,000 rows. I am currently trying to keep the dynamic column in python as a value in a python dict. Yet keeping my python dict synchronized with changes in the other columns is becoming more and more difficult or nonviable. I know I could put the autovacuum on overdrive, but I wonder if this would be enough to catch up with the sheer amount of UPDATEs. If only I could associate a python variable with each row... I fear that the VACUUM and disk-write overhead will kill my server. Any suggestions on how to associate extremely dynamic variables with rows/keys? Thanks
[ "PostgreSQL supports asynchronous notifications using the LISTEN and NOTIFY commands. An application (client) LISTENs for a notification using a notification name (e.g. \"table_updated\"). The database itself can be made to issue notifications either manually i.e. in the code that performs the insertions or modifications (useful when a large number of updates are made, allowing for batch notifications) or automatically inside a row update TRIGGER.\nYou could use such notifications to keep your data structures up to date.\nAlternatively (or you can use this in combination with the above), you can customize your Python dictionary by overriding the __getitem__(), has_key(), contains() methods and have them perform lookups as needed, allowing you to cache the results using timeouts etc.\n" ]
[ 3 ]
[]
[]
[ "dynamic_data", "performance", "postgresql", "python", "vacuum" ]
stackoverflow_0000527013_dynamic_data_performance_postgresql_python_vacuum.txt
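To make the LISTEN/NOTIFY suggestion concrete, here is a sketch using psycopg2; any driver with asynchronous-notification support would do. The DSN and channel name are placeholders, and it assumes a psycopg2 recent enough to expose conn.notifies objects with a channel attribute:

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect('dbname=mydb')  # placeholder DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute('LISTEN table_updated;')  # a trigger would issue NOTIFY table_updated

while True:
    # Block for up to 5 seconds waiting for the server to push a notification
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop()
            # Invalidate or refresh the cached dict entry here
            print 'got notification on channel:', notify.channel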
Q: Intercepting stdout of a subprocess while it is running If this is my subprocess: import time, sys for i in range(200): sys.stdout.write( 'reading %i\n'%i ) time.sleep(.02) And this is the script controlling and modifying the output of the subprocess: import subprocess, time, sys print 'starting' proc = subprocess.Popen( 'c:/test_apps/testcr.py', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE ) print 'process created' while True: #next_line = proc.communicate()[0] next_line = proc.stdout.readline() if next_line == '' and proc.poll() != None: break sys.stdout.write(next_line) sys.stdout.flush() print 'done' Why are readline and communicate waiting until the process is done running? Is there a simple way to pass along (and modify) the subprocess's stdout in real time? I'm on Windows XP. A: As Charles already mentioned, the problem is buffering. I ran into a similar problem when writing some modules for SNMPd, and solved it by replacing stdout with an auto-flushing version. I used the following code, inspired by some posts on ActiveState: class FlushFile(object): """Write-only flushing wrapper for file-type objects.""" def __init__(self, f): self.f = f def write(self, x): self.f.write(x) self.f.flush() # Replace stdout with an automatically flushing version sys.stdout = FlushFile(sys.__stdout__) A: Process output is buffered. On more UNIXy operating systems (or Cygwin), the pexpect module is available, which recites all the necessary incantations to avoid buffering-related issues. However, these incantations require a working pty module, which is not available on native (non-cygwin) win32 Python builds. In the example case where you control the subprocess, you can just have it call sys.stdout.flush() where necessary -- but for arbitrary subprocesses, that option isn't available. See also the question "Why not just use a pipe (popen())?" in the pexpect FAQ.
Intercepting stdout of a subprocess while it is running
If this is my subprocess: import time, sys for i in range(200): sys.stdout.write( 'reading %i\n'%i ) time.sleep(.02) And this is the script controlling and modifying the output of the subprocess: import subprocess, time, sys print 'starting' proc = subprocess.Popen( 'c:/test_apps/testcr.py', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE ) print 'process created' while True: #next_line = proc.communicate()[0] next_line = proc.stdout.readline() if next_line == '' and proc.poll() != None: break sys.stdout.write(next_line) sys.stdout.flush() print 'done' Why are readline and communicate waiting until the process is done running? Is there a simple way to pass along (and modify) the subprocess's stdout in real time? I'm on Windows XP.
[ "As Charles already mentioned, the problem is buffering. I ran in to a similar problem when writing some modules for SNMPd, and solved it by replacing stdout with an auto-flushing version.\nI used the following code, inspired by some posts on ActiveState:\nclass FlushFile(object):\n \"\"\"Write-only flushing wrapper for file-type objects.\"\"\"\n def __init__(self, f):\n self.f = f\n def write(self, x):\n self.f.write(x)\n self.f.flush()\n\n# Replace stdout with an automatically flushing version\nsys.stdout = FlushFile(sys.__stdout__)\n\n", "Process output is buffered. On more UNIXy operating systems (or Cygwin), the pexpect module is available, which recites all the necessary incantations to avoid buffering-related issues. However, these incantations require a working pty module, which is not available on native (non-cygwin) win32 Python builds.\nIn the example case where you control the subprocess, you can just have it call sys.stdout.flush() where necessary -- but for arbitrary subprocesses, that option isn't available.\nSee also the question \"Why not just use a pipe (popen())?\" in the pexpect FAQ.\n" ]
[ 16, 8 ]
[]
[]
[ "popen", "process", "python", "stdout", "subprocess" ]
stackoverflow_0000527197_popen_process_python_stdout_subprocess.txt
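When the child happens to be a Python script, as in this question, another way around the buffering is to launch it with the interpreter's -u (unbuffered) switch instead of shell=True. A sketch, keeping the question's file path:

import subprocess
import sys

# -u makes the child interpreter flush stdout immediately, so readline()
# sees each line as it is produced rather than when the process exits.
proc = subprocess.Popen(
    [sys.executable, '-u', 'c:/test_apps/testcr.py'],
    stdout=subprocess.PIPE,
)
for line in iter(proc.stdout.readline, ''):
    sys.stdout.write('live: ' + line)
proc.wait()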
Q: How to debug deadlock with python? I am developing a multi-threading application, which is deadlocking. I am using Visual C++ Express 2008 to trace the program. Once the deadlock occurs, I just pause the program and trace. I found that when the deadlock occurs, there are two threads that have called into Python from my C++ extension. Both of them use Queue in the Python code, so I guess the deadlock might be caused by Queue. However, once the extension goes into Python code, I can't see anything but asm code and binary in the VC++ debugger. I would like to know: is there any way to dump the call stack of the Python code after I have paused the program? And how can I find out which locks, held by which threads, caused the deadlock? A: If you can compile your extension module with gcc (for example, by using Cygwin), you could use gdb and the pystack gdb macro to get Python stacks in that situation. I don't know if it would be possible to do something equivalent to pystack in Visual C++ Express, but you might get some ideas from the pystack macro implementation anyway. Since you mention you only see asm/binary in the VC++ debugger, you should make sure you compile Python with debug symbols. If VC++ is still showing asm, it might be that you need to tell VC++ where the source files are (sorry, haven't used VC++ in years so I can't tell what exactly you might need to do if this was the case). You might also get some important information by adding lots of logging calls to your code, both Python side and your C++ extension. In any case, I am almost certain the deadlocks are not due to Queue, but your own code.
How to debug deadlock with python?
I am developing a multi-threading application, which is deadlocking. I am using Visual C++ Express 2008 to trace the program. Once the deadlock occurs, I just pause the program and trace. I found that when the deadlock occurs, there are two threads that have called into Python from my C++ extension. Both of them use Queue in the Python code, so I guess the deadlock might be caused by Queue. However, once the extension goes into Python code, I can't see anything but asm code and binary in the VC++ debugger. I would like to know: is there any way to dump the call stack of the Python code after I have paused the program? And how can I find out which locks, held by which threads, caused the deadlock?
[ "If you can compile your extension module with gcc (for example, by using Cygwin), you could use gdb and the pystack gdb macro to get Python stacks in that situation. I don't know if it would be possible to do something equivalent to pystack in Visual C++ Express, but you might get some ideas from the pystack macro implementation anyway.\nSince you mention you only see asm/binary in the VC++ debugger, you should make sure you compile Python with debug symbols. If VC++ is still showing asm, it might be that you need to tell VC++ where the source files are (sorry, haven't used VC++ in years so I can't tell what exactly you might need to do if this was the case).\nYou might also get some important information by adding lots of logging calls to your code, both Python side and your C++ extension.\nIn any case, I am almost certain the deadlocks are not due to Queue, but your own code.\n" ]
[ 6 ]
[]
[]
[ "deadlock", "debugging", "multithreading", "python" ]
stackoverflow_0000527296_deadlock_debugging_multithreading_python.txt
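If you can trigger some Python code in the stuck process (for example from a watchdog thread or a signal handler), sys._current_frames(), available since Python 2.5, gives the Python-level stack of every thread without gdb. A minimal sketch:

import sys
import traceback

def dump_all_thread_stacks():
    """Print a Python traceback for every live thread.

    Call this when the process looks deadlocked, to see which Queue
    calls (or other lock acquisitions) each thread is blocked in.
    """
    for thread_id, frame in sys._current_frames().items():
        print '--- thread %s ---' % thread_id
        traceback.print_stack(frame)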
Q: How to embed p tag inside some text using Beautifulsoup? I want to wrap a <p> tag around each block of text wherever there is a \r\n\r\n. u"Finally Sri Lanka showed up, prevented their first 5-0 series whitewash, and stopped India at nine ODI wins in a row. \r\n\r\nFor 62 balls Yuvraj Singh played a dream knock, keeping India in the game despite wickets falling around him. \r\n\r\nPerhaps the toss played a big part. This was only the second time Mahela Jayawardene beat Mahendra Singh Dhoni with the coin in the last 11 occasions. \r\n\r\nIt was Jayasuriya who provided Sri Lanka with the springboard. \r\n\r\nThe pyrotechnics may have stopped upon Jayasuriya's dismissal, but the runs kept coming at a fair pace." I tried solving this using BeautifulSoup but couldn't find a way to do it. Can anyone throw some light on this? Thanks in advance. A: ''.join('<p>%s</p>' % line for line in text.split('\r\n\r\n')) # Results: u"<p>Finally Sri Lanka showed up, prevented their first 5-0 series whitewash, and stopped India at nine ODI wins in a row. </p> <p>For 62 balls Yuvraj Singh played a dream knock, keeping India in the game despite wickets falling around him. </p><p>Perhaps the toss played a big part. This was only the second time Mahela Jayawardene beat Mahendra Singh Dhoni with the coin in the last 11 occasions. </p> <p>It was Jayasuriya who provided Sri Lanka with the springboard. </p> <p>The pyrotechnics may have stopped upon Jayasuriya's dismissal, but the runs kept coming at a fair pace.</p>"
How to embed p tag inside some text using Beautifulsoup?
I want to wrap a <p> tag around each block of text wherever there is a \r\n\r\n. u"Finally Sri Lanka showed up, prevented their first 5-0 series whitewash, and stopped India at nine ODI wins in a row. \r\n\r\nFor 62 balls Yuvraj Singh played a dream knock, keeping India in the game despite wickets falling around him. \r\n\r\nPerhaps the toss played a big part. This was only the second time Mahela Jayawardene beat Mahendra Singh Dhoni with the coin in the last 11 occasions. \r\n\r\nIt was Jayasuriya who provided Sri Lanka with the springboard. \r\n\r\nThe pyrotechnics may have stopped upon Jayasuriya's dismissal, but the runs kept coming at a fair pace." I tried solving this using BeautifulSoup but couldn't find a way to do it. Can anyone throw some light on this? Thanks in advance.
[ "''.join('<p>%s</p>' % line for line in text.split('\\r\\n\\r\\n'))\n# Results:\nu\"<p>Finally Sri Lanka showed up, prevented their first 5-0\nseries whitewash, and stopped India at nine ODI wins in a row. </p>\n<p>For 62 balls Yuvraj Singh played a dream knock, keeping India in the \ngame despite wickets falling around him. </p><p>Perhaps the toss played\na big part. This was only the second time Mahela Jayawardene beat Mahendra\nSingh Dhoni with the coin in the last 11 occasions. </p>\n<p>It was Jayasuriya who provided Sri Lanka with the springboard. </p>\n<p>The pyrotechnics may have stopped upon Jayasuriya's dismissal, but \nthe runs kept coming at a fair pace.</p>\"\n\n" ]
[ 5 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0000527629_beautifulsoup_python.txt
Q: Can I instantiate a subclass object from the superclass I have the following example code: class A(object): def __init__(self, id): self.myid = id def foo(self, x): print 'foo', self.myid*x class B(A): def __init__(self, id): self.myid = id self.mybid = id*2 def bar(self, x): print 'bar', self.myid, self.mybid, x When used, the following could be generated: >>> a = A(2) >>> a.foo(10) foo 20 >>> >>> b = B(3) >>> b.foo(10) foo 30 >>> b.bar(12) bar 3 6 12 Now let's say I have some more subclasses class C(A): and class D(A):. I also know that the id will always fit in one of B, C, or D, but never in two of them at the same time. Now I would like to call A(23) and get an object of the correct subclass. Something like this: >>> type(A(2)) <class '__main__.B'> >>> type(A(22)) <class '__main__.D'> >>> type(A(31)) <class '__main__.C'> >>> type(A(12)) <class '__main__.B'> Is this impossible or is it possible but just bad design? How should problems like this be solved? A: You should rather implement the Abstract Factory pattern; your factory would then build any object you like, depending on the provided parameters. That way your code will remain clean and extensible. Any hack you could use to do it directly may break when you upgrade your interpreter version, since no one promises backwards compatibility for such things. EDIT: After a while I'm not sure if you should use the Abstract Factory or the Factory Method pattern. It depends on the details of your code, so suit your needs. A: Generally it's not such a good idea when a superclass has any knowledge of the subclasses. Think about what you want to do from an OO point of view. The superclass provides common behaviour for all objects of that type, e.g. Animal. Then the subclass provides the specialisation of the behaviour, e.g. Dog. Think of it in terms of an "isa" relationship, i.e. "a Dog is an Animal". "An Animal is a Dog" doesn't really make sense. HTH cheers, Rob A: I don't think you can change the type of the object, but you can create another class that will work like a factory for the subclasses. Something like this: class LetterFactory(object): @staticmethod def getLetterObject(n): if n == 1: return A(n) elif n == 2: return B(n) else: return C(n) a = LetterFactory.getLetterObject(1) b = LetterFactory.getLetterObject(2) ...
Can I instantiate a subclass object from the superclass
I have the following example code: class A(object): def __init__(self, id): self.myid = id def foo(self, x): print 'foo', self.myid*x class B(A): def __init__(self, id): self.myid = id self.mybid = id*2 def bar(self, x): print 'bar', self.myid, self.mybid, x When used, the following could be generated: >>> a = A(2) >>> a.foo(10) foo 20 >>> >>> b = B(3) >>> b.foo(10) foo 30 >>> b.bar(12) bar 3 6 12 Now let's say I have some more subclasses class C(A): and class D(A):. I also know that the id will always fit in one of B, C, or D, but never in two of them at the same time. Now I would like to call A(23) and get an object of the correct subclass. Something like this: >>> type(A(2)) <class '__main__.B'> >>> type(A(22)) <class '__main__.D'> >>> type(A(31)) <class '__main__.C'> >>> type(A(12)) <class '__main__.B'> Is this impossible or is it possible but just bad design? How should problems like this be solved?
[ "You should rather implement Abstract Factory pattern, and your factory would then build any object you like, depending on provided parameters. That way your code will remain clean and extensible.\nAny hack you could use to make it directly can be removed when you upgrade your interpreter version, since no one expects backwards compatibility to preserve such things. \nEDIT: After a while I'm not sure if you should use Abstract Factory, or Factory Method pattern. It depends on the details of your code, so suit your needs.\n", "Generally it's not such a good idea when a superclass has any knowledge of the subclasses.\nThink about what you want to do from an OO point of view.\nThe superclass is providing common behaviour for all objects of that type, e.g. Animal. Then the subclass provides the specialisation of the behaviour, e.g. Dog.\nThink of it in terms of an \"isa\" relationship, i.e. a Dog is an Animal.\nAn Animal is a Dog doesn't really make sense.\nHTH\ncheers,\nRob\n", "I don't think you can change the type of the object, but you can create another class that will work like a factory for the subclasses. Something like this:\nclass LetterFactory(object):\n @staticmethod\n def getLetterObject(n):\n if n == 1:\n return A(n)\n elif n == 2:\n return B(n)\n else:\n return C(n)\n\na = LetterFactory.getLetterObject(1)\nb = LetterFactory.getLetterObject(2)\n...\n\n" ]
[ 6, 2, 2 ]
[]
[]
[ "oop", "python" ]
stackoverflow_0000527757_oop_python.txt
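The answers above recommend a factory, which is the cleaner design, but if you really want A(23) itself to hand back a subclass instance, __new__ can do the dispatch. A sketch; the id ranges are invented to match the question's examples and are not a real rule:

class A(object):
    def __new__(cls, id):
        if cls is A:  # only dispatch on direct A(...) calls
            if id < 20:
                cls = B
            elif id < 30:
                cls = D
            else:
                cls = C
        return super(A, cls).__new__(cls)

    def __init__(self, id):
        self.myid = id

class B(A): pass
class C(A): pass
class D(A): pass

print type(A(2))   # <class '__main__.B'>
print type(A(22))  # <class '__main__.D'>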
Q: input and thread problem, python I am doing something like this in python class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) #""" sys.exit() What happens is my app locks up for 5 seconds printing "hey" (5 times) THEN I get the raw_input prompt. How can I have the prompt show up so I can quit at any time, instead of only when my loop runs out? A: You mean the while loop runs before the thread? Well, you can't predict this unless you synchronize it. No one guarantees you that the thread will run before or after that while loop. But if it's being blocked for 5 seconds that's awkward - the thread should have been pre-empted by then. Also, since your first use of wantQuit is in the run() method, no one assures you that the thread has been started when you're checking for its wantQuit attribute in while not myThread.wantQuit. A: The behaviour here is not what you described. Look at those sample outputs I got: 1st: pressing q<ENTER> as fast as possible: hey q 2nd: wait a bit before pressing q<ENTER>: hey hey hey q 3rd: Don't touch the keyboard: hey hey hey hey hey hey # Application locks because the main thread is over but # there are other threads running. Add myThread.wantQuit = 1 # to prevent that if you want A: Just tried the code to make sure; this does do what it's supposed to... you can type q and Enter into the console and make the application quit before a=0 (so it says hey fewer than 5 times). I don't know what you mean by the raw_input dialog; raw_input normally just takes input from stdin A: huperboreean has your answer. The thread is still being started when the while loop is executed. You want to check that the thread has started before moving into your loop. You could simplify the thread to monitor raw_input, and return when a 'q' is entered. This will kill the thread. Your main loop can check if the thread is alive.
input and thread problem, python
I am doing something like this in python class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) #""" sys.exit() What happens is my app locks up for 5 seconds printing "hey" (5 times) THEN I get the raw_input prompt. How can I have the prompt show up so I can quit at any time, instead of only when my loop runs out?
[ "You mean the while loop runs before the thread? Well, you can't predict this unless you synchronize it. No one guarantees you that the thread will run before or after that while loop. But if it's being blocked for 5 seconds that's akward - the thread should have been pre-empted by then.\nAlso, since you're first use of wantToQuit is in the run() method, no one assures you that the thread has been started when you're checking for it's wantToQuit attribute in while not myThread.wantToQuit .\n", "The behaviour here is not what you described.\nLook at those sample outputs I got:\n1st: pressing q<ENTER> as fast as possible:\nhey\nq\n\n2nd: wait a bit before pressing q<ENTER>:\nhey\nhey\nhey\nq\n\n3rd: Don't touch the keyboard:\nhey\nhey\nhey\nhey\nhey\nhey\n# Application locks because main thread is over but \n# there are other threads running. add myThread.wantQuit = 1\n# to prevent that if you want\n\n", "just tried the code to make sure, but this does do what it's supposed to... you can type q and enter in to the console and make the application quit before a=0 (so it says hey less then 5 times)\nI don't know what you mean by the raw_input dialog, raw_input normally just takes info from stdin\n", "huperboreean has your answer. The thread is still being started when the for loop is executed.\nYou want to check that a thread is started before moving into your loop.\nYou could simplify the thread to monitor raw_input, and return when a 'q' is entered. This will kill the thread.\nYou main for loop can check if the thread is alive.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0000527420_multithreading_python.txt
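For this specific script, the main loop is also slow to notice the flag because of the whole-second sleep. Here is a hedged rework that sets the flag before start() (avoiding the attribute race the first answer mentions), marks the watcher daemonic, and polls in short slices:

import threading
import time

class QuitWatcher(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.setDaemon(True)   # don't keep the process alive on exit
        self.wantQuit = 0      # set before start(), so the main loop can read it safely

    def run(self):
        while not self.wantQuit:
            if raw_input() == 'q':
                self.wantQuit = 1

myThread = QuitWatcher()
myThread.start()

a = 5
while not myThread.wantQuit and a > 0:
    print 'hey'
    a -= 1
    for _ in range(10):        # sleep in 0.1s slices, not one 1s block
        if myThread.wantQuit:
            break
        time.sleep(0.1)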
Q: How to properly organize a package/module dependency tree? Good morning, I am currently writing a python library. At the moment, modules and classes are deployed in an unorganized way, with no reasoned design. As I approach a more official release, I would like to reorganize classes and modules so that they have a better overall design. I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level. Also, I was considering some modifications to the classes so as to reduce these dependencies. What is your strategy for a good overall design of a potentially complex and in-the-making python library? Do you have interesting suggestions? Thanks Update: I was indeed looking for a rule of thumb. For example, suppose this case happens (__init__.py files removed for clarity) foo/bar/a.py foo/bar/b.py foo/hello/c.py foo/hello/d.py Now, if you happen to have d.py importing bar.b and a.py importing hello.c, I would consider this a bad setting. Another case would be foo/bar/a.py foo/bar/baz/b.py foo/bar/baz/c.py Suppose that both a.py and b.py import c. You have three solutions: 1) b imports c, a imports baz.c 2) you move c into foo/bar; a.py imports c, b.py imports ..c 3) you move c somewhere else (say foo/cpackage/c.py) and then both a and b import cpackage.c I tend to prefer 3), but if c.py has no meaning as a standalone module, for example because you want to keep it "private" to the bar package, I would preferentially go for 1). There are many other similar cases. My rule of thumb is to reduce the number of dependencies and crossings to a minimum, so as to prevent a highly branched, highly interwoven setup, but I could be wrong. A: "I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level." Python must read like English (or any other natural language.) An import is a first-class statement that should have real meaning. Organizing things by "layer level" (whatever that is) should be clear, meaningful and obvious. Do not make arbitrary technical groupings of classes into modules and modules into packages. Make the modules and packages obvious and logical so that the list of imports is obvious, simple and logical. "Also, I was considering some modification to the classes so to reduce these dependencies." Reducing the dependencies sounds technical and arbitrary. It may not be, but it sounds that way. Without actual examples, it's impossible to say. Your goal is clarity. Also, the module and package are the stand-alone units of reuse. (Not classes; a class by itself isn't usually reusable.) Your dependency tree should reflect this. You're aiming for modules that can be imported neatly and cleanly into your application. If you have many closely-related modules (or alternative implementations) then packages can be used, but used sparingly. The Python libraries are relatively flat; and there's some wisdom in that. Edit One-way dependency between layers is an essential feature. This is more about proper software design than it is about Python. You should (1) design in layers, (2) design so that the dependencies are very strict between the layers, and then (3) implement that in Python. The packages may not necessarily fit your layering precisely. The packages may physically be a flat list of directories with the dependencies expressed only via import statements. A: The question is very vague. You can achieve this by having base/core things that import nothing from the remainder of the library, and concrete implementations importing from there. Apart from "don't have two modules importing from each other at import time", you should be fine. module1.py: import module2 module2.py: import module1 This won't work! A: It depends on the project, right? For example, if you are using a model-view-controller design, then your package would be structured in a way that makes the 3 groups of code independent. If you need some ideas, open up your site-packages directory, and look through some of the code in those modules to see how they are set up. There is no correct way without knowing more about the module; as Ali said, this is a vague question. You really just need to analyze what you have in front of you, and figure out what might work better.
How to properly organize a package/module dependency tree?
Good morning, I am currently writing a python library. At the moment, modules and classes are deployed in an unorganized way, with no reasoned design. As I approach a more official release, I would like to reorganize classes and modules so that they have a better overall design. I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level. Also, I was considering some modifications to the classes so as to reduce these dependencies. What is your strategy for a good overall design of a potentially complex and in-the-making python library? Do you have interesting suggestions? Thanks Update: I was indeed looking for a rule of thumb. For example, suppose this case happens (__init__.py files removed for clarity) foo/bar/a.py foo/bar/b.py foo/hello/c.py foo/hello/d.py Now, if you happen to have d.py importing bar.b and a.py importing hello.c, I would consider this a bad setting. Another case would be foo/bar/a.py foo/bar/baz/b.py foo/bar/baz/c.py Suppose that both a.py and b.py import c. You have three solutions: 1) b imports c, a imports baz.c 2) you move c into foo/bar; a.py imports c, b.py imports ..c 3) you move c somewhere else (say foo/cpackage/c.py) and then both a and b import cpackage.c I tend to prefer 3), but if c.py has no meaning as a standalone module, for example because you want to keep it "private" to the bar package, I would preferentially go for 1). There are many other similar cases. My rule of thumb is to reduce the number of dependencies and crossings to a minimum, so as to prevent a highly branched, highly interwoven setup, but I could be wrong.
[ "\"I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level.\"\nPython must read like English (or any other natural language.)\nAn import is a first-class statement that should have real meaning. Organizing things by \"layer level\" (whatever that is) should be clear, meaningful and obvious.\nDo not make arbitrary technical groupings of classes into modules and modules into packages. \nMake the modules and package obvious and logical so that the list of imports is obvious, simple and logical.\n\"Also, I was considering some modification to the classes so to reduce these dependencies.\"\nReducing the dependencies sounds technical and arbitrary. It may not be, but it sounds that way. Without actual examples, it's impossible to say.\nYour goal is clarity. \nAlso, the module and package are the stand-alone units of reuse. (Not classes; a class, but itself isn't usually reusable.) Your dependency tree should reflect this. You're aiming for modules that can be imported neatly and cleanly into your application.\nIf you have many closely-related modules (or alternative implementations) then packages can be used, but used sparingly. The Python libraries are relatively flat; and there's some wisdom in that.\n\nEdit\nOne-way dependency between layers is an essential feature. This is more about proper software design than it is about Python. You should (1) design in layers, (2) design so that the dependencies are very strict between the layers, and then (3) implement that in Python. \nThe packages may not necessarily fit your layering precisely. The packages may physically be a flat list of directories with the dependencies expressed only via import statements.\n", "The question is very vague.\nYou can achieve this by having base/core things that import nothing from the remainder of the library, and concrete implementations importing from here. Apart from \"don't have two modules importing from each-other at import-time\", you should be fine.\nmodule1.py:\nimport module2\n\nmodule2.py:\nimport module1\n\nThis won't work!\n", "It depends on the project, right?\nFor example, if you are using a model-view-controller design, then your package would be structured in a way that makes the 3 groups of code independent.\nIf you need some ideas, open up your site-packages directory, and look through some of the code in those modules to see how they are set up.\nThere is no correct way without knowing more about the module; as Ali said, this is a vague question. You really just need to analyze what you have in front of you, and figure out what might work better.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000527919_python.txt
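To picture option 3 from the question's update, here is one hypothetical layout where the shared module is hoisted into its own small package, so neither consumer reaches into the other:

foo/
    __init__.py
    common/
        __init__.py
        c.py            # shared; imports nothing from bar or baz
    bar/
        __init__.py
        a.py            # does: from foo.common import c
    baz/
        __init__.py
        b.py            # does: from foo.common import c

The import arrows all point one way (bar -> common, baz -> common), which is the strict one-way layering the first answer's edit insists on.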
Q: Python Script: Print new line each time to shell rather than update existing line I am a noob when it comes to python. I have a python script which gives me output like this: [last] ZVZX-W3vo9I: Downloading video webpage [last] ZVZX-W3vo9I: Extracting video information [download] Destination: myvideo.flv [download] 9.9% of 10.09M at 3.30M/s ETA 00:02 The last line keeps getting updated with new values of progress. I want to change this. Instead of updating I want a new line to be printed each time. How can I do this? I think the part concerned is this bit: def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" self.to_stdout(u'\r[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str), skip_eol=True) If more code needs to be seen please let me know so that I can show you what is needed to solve this. Thank you very much for any help. A: If I understand your request properly, you should be able to change that function to this: def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" print u'[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str) That will print the output on a new line each time. A: I'm thinking you may just need to change: skip_eol=True to: skip_eol=False and get rid of the "\r" to see what happens. I think you'll be pleasantly surprised :-) A: You can take out the \r, which moves the cursor back to the beginning of the line, and probably take out the skip_eol=True. Perhaps: self.to_stdout(u'[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str)) A: The "update" effect is achieved by '\r'. Try this in a Python (2.x) shell: print "00000000\r1111" \r just returns the cursor to the beginning of the line.
Python Script: Print new line each time to shell rather than update existing line
I am a noob when it comes to python. I have a python script which gives me output like this: [last] ZVZX-W3vo9I: Downloading video webpage [last] ZVZX-W3vo9I: Extracting video information [download] Destination: myvideo.flv [download] 9.9% of 10.09M at 3.30M/s ETA 00:02 The last line keeps getting updated with new values of progress. I want to change this. Instead of updating I want a new line to be printed each time. How can I do this? I think the part concerned is this bit: def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" self.to_stdout(u'\r[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str), skip_eol=True) If more code needs to be seen please let me know so that I can show you what is needed to solve this. Thank you very much for any help.
[ "If I understand your request properly, you should be able to change that function to this:\ndef report_progress(self, percent_str, data_len_str, speed_str, eta_str):\n \"\"\"Report download progress.\"\"\"\n print u'[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str)\n\nThat will print the output on a new line each time.\n", "I'm thinking you may just need to change:\nskip_eol=True\n\nto:\nskip_eol=False\n\nand get rid of the \"\\r\" to see what happens. I think you'll be pleasantly surprised :-)\n", "You can take out the \\r, which moves to the cursor back to the beginning of the line and take out the skip_eol=True probably. Perhaps:\n self.to_stdout(u'[download] %s of %s at %s ETA %s' %\n (percent_str, data_len_str, speed_str, eta_str))\n\n", "The \"update\" effect is achieved by '\\r'.\nTry this in a Python (2.x) shell:\nprint \"00000000\\r1111\"\n\n\\r just returns the cursor to the beginning of the line.\n" ]
[ 3, 3, 0, 0 ]
[]
[]
[ "python", "shell" ]
stackoverflow_0000529395_python_shell.txt
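Illustrative sketch (not part of the original thread): a minimal Python 2 contrast between the two behaviours discussed above; the loop and the percentage values are invented.

    import sys
    import time

    # With '\r' the cursor returns to column 0, so each write overdraws the last.
    for pct in (10, 55, 99):
        sys.stdout.write('\r[download] %2d%% complete' % pct)
        sys.stdout.flush()
        time.sleep(0.5)
    sys.stdout.write('\n')

    # With a plain print, every progress report lands on its own line instead.
    for pct in (10, 55, 99):
        print '[download] %2d%% complete' % pct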
Q: Print method question Python In Python, what does the 2nd % signify? print "%s" % ( i ) A: As others have said, this is the Python string formatting/interpolation operator. It's basically the equivalent of sprintf in C, for example: a = "%d bottles of %s on the wall" % (10, "beer") is equivalent to something like a = sprintf("%d bottles of %s on the wall", 10, "beer"); in C. Each of these has the result of a being set to "10 bottles of beer on the wall" Note however that this syntax is deprecated in Python 3.0; its replacement looks something like a = "{0} bottles of {1} on the wall".format(10, "beer") This works because any string literal is automatically turned into a str object by Python. A: The second % is the string interpolation operator. Link to documentation. A: It's a format specifier Simple usage: # Prints: 0 1 2 3 4 5 6 7 8 9 for i in range(10): print "%d" % i, A: print "%d%s" % (100, "trillion dollars") # outputs: 100 trillion dollars A: If you were to translate the code to English, it says: take the string i and format it into the predicate string. Another example: name = "world" print "hello, %s" % (name) More information about format specifiers.
Print method question Python
In Python, what does the 2nd % signify? print "%s" % ( i )
[ "As others have said, this is the Python string formatting/interpolation operator. It's basically the equivalent of sprintf in C, for example:\na = \"%d bottles of %s on the wall\" % (10, \"beer\")\nis equivalent to something like\na = sprintf(\"%d bottles of %s on the wall\", 10, \"beer\");\nin C. Each of these has the result of a being set to \"10 bottles of beer on the wall\"\nNote however that this syntax is deprecated in Python 3.0; its replacement looks something like\na = \"{0} bottles of {1} on the wall\".format(10, \"beer\")\nThis works because any string literal is automatically turned into a str object by Python.\n", "The second % is the string interpolation operator.\nLink to documentation.\n", "It's a format specifier\nSimple usage:\n# Prints: 0 1 2 3 4 5 6 7 8 9\nfor i in range(10):\n print \"%d\" % i,\n\n", "print \"%d%s\" % (100, \"trillion dollars\") # outputs: 100 trillion dollars\n\n", "If you were to translate the code to English, it says: take the string i and format it in to the predicate string.\nAnother example:\nname = \"world\"\nprint \"hello, %s\" % (name)\n\nMore information about format specifiers.\n" ]
[ 8, 5, 0, 0, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0000530114_python_string.txt
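Illustrative sketch (not part of the original thread): the right-hand side of % may be a single value, a tuple for several specifiers, or a mapping for named ones; the values here are invented.

    count, drink = 99, 'beer'
    print "hello, %s" % drink                        # one value needs no tuple
    print "%d bottles of %s" % (count, drink)        # several values go in a tuple
    print "%(n)d bottles of %(what)s" % {'n': count, 'what': drink}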
Q: Guide in organizing large Django projects Could anyone recommend a good guide/tutorial/article with tips/guidelines on how to organize and partition a large Django project? I'm looking for advice on what to do when you need to start splitting up the initial single files (models.py, urls.py, views.py) and working with more than a few dozen entities. A: Each "application" should be small -- a single reusable entity plus a few associated tables. We have about 5 plus/minus 2 tables per application model. Most of our half-dozen applications are smaller than 5 tables. One has zero tables in the model. Each application should be designed to be one reusable concept. In our case, each application is a piece of the overall site; the applications could be removed and replaced separately. Indeed, that's our strategy. As our requirements expand and mature, we can remove and replace applications independently from each other. It's okay to have applications depend on each other. However, the dependency has to be limited to the obvious things like "models" and "forms". Also, applications can depend on the names in each other's URL's. Consequently, your named URL's must have a form like "application-view" so the reverse function or the {% url %} tag can find them properly. Each application should contain its own batch commands (usually via a formal Command that can be found by the django-admin script). Finally, anything that's more complex than a simple model or form that's shared probably doesn't belong to either application, but needs to be a separate shared library. For example, we use XLRD, but wrap parts of it in our own class so it's more like the built-in csv module. This wrapper for XLRD isn't a proper part of any one application, so it's a separate module, outside the Django applications. A: I've found it to be helpful to take a look at large open-source Django projects and take note of how that project does it. Django's site has a good list of open-source projects: http://code.djangoproject.com/wiki/DjangoResources#Open-SourceDjangoprojects As does Google (although most of these are smaller add-in template tags and Middleware): http://code.google.com/hosting/search?q=label:django Of course, just because one project does it one way does not mean that that way is The Right Way (or The Wrong Way). Some of those projects are more successful than others. In the end, the only way to really learn what works and doesn't work is to try it out yourself. All the tips and hints in the world won't help unless you try it out yourself, but they may help you get started in the right direction.
Guide in organizing large Django projects
Could anyone recommend a good guide/tutorial/article with tips/guidelines on how to organize and partition a large Django project? I'm looking for advice on what to do when you need to start splitting up the initial single files (models.py, urls.py, views.py) and working with more than a few dozen entities.
[ "Each \"application\" should be small -- a single reusable entity plus a few associated tables. We have about 5 plus/minus 2 tables per application model. Most of our half-dozen applications are smaller than 5 tables. One has zero tables in the model. \nEach application should be designed to be one reusable concept. In our case, each application is a piece of the overall site; the applications could be removed and replaced separately.\nIndeed, that's our strategy. As our requirements expand and mature, we can remove and replace applications independently from each other.\nIt's okay to have applications depend on each other. However, the dependency has to be limited to the obvious things like \"models\" and \"forms\". Also, applications can depend on the names in each other's URL's. Consequently, your named URL's must have a form like \"application-view\" so the reverse function or the {% url %} tag can find them properly.\nEach application should contain it's own batch commands (usually via a formal Command that can be found by the django-admin script.\nFinally, anything that's more complex than a simple model or form that's shared probably doesn't belong to either application, but needs to be a separate shared library. For example, we use XLRD, but wrap parts of it in our own class so it's more like the built-in csv module. This wrapper for XLRD isn't a proper part of any one application, to it's a separate module, outside the Django applications.\n", "I've found it to be helpful to take a look at large open-source Django projects and take note of how that project does it. Django's site has a good list of open-source projects:\nhttp://code.djangoproject.com/wiki/DjangoResources#Open-SourceDjangoprojects\nAs does Google (although most of these are smaller add-in template tags and Middleware:\nhttp://code.google.com/hosting/search?q=label:django\nOf course, just because one project does it one way does not mean that that way is The Right Way (or The Wrong Way). Some of those projects are more successful than others.\nIn the end, the only way to really learn what works and doesn't work is to try it out yourself. All the tips and hints in the world wont help unless you try it out yourself, but they may help you get started in the right direction.\n" ]
[ 38, 10 ]
[]
[]
[ "django", "projects", "python" ]
stackoverflow_0000529921_django_projects_python.txt
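Illustrative sketch (not part of the original thread): the "application-view" URL-naming convention described above, in Django 1.0-era syntax; the "inventory" app and its view name are hypothetical.

    # inventory/urls.py -- the URL name follows the "application-view" pattern
    from django.conf.urls.defaults import patterns, url

    urlpatterns = patterns('inventory.views',
        url(r'^$', 'index', name='inventory-index'),
    )

    # Elsewhere, resolve the URL by name instead of hard-coding the path;
    # templates would use {% url inventory-index %} for the same effect.
    from django.core.urlresolvers import reverse
    path = reverse('inventory-index')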
Q: Why does concatenation work differently in these two samples? I am raising exceptions in two different places in my Python code: holeCards = input("Select a hand to play: ") try: if len(holeCards) != 4: raise ValueError(holeCards + ' does not represent a valid hand.') AND (edited to correct raising code) def __init__(self, card): [...] if self.cardFace == -1 or self.cardSuit == -1: raise ValueError(card, 'is not a known card.') For some reason, the first outputs a concatenated string like I expected: ERROR: Amsterdam does not represent a valid hand. But, the second outputs some weird hybrid of set and string: ERROR: ('Kr', 'is not a known card.') Why is the "+" operator behaving differently in these two cases? Edit: The call to init looks like this: card1 = PokerCard(cardsStr[0:2]) card2 = PokerCard(cardsStr[2:4]) A: Um, am I missing something or are you comparing the output of raise ValueError(card, 'is not a known card.') with raise ValueError(card + ' is not a known card.') ??? The second uses "+", but the first uses ",", which does and should give the output you show! (nb. the question was edited from a version with "+" in both cases. Perhaps this question should be deleted???) A: "card" probably represents a tuple containing the string "Kr." When you use the + operator on a tuple, you create a new tuple with the extra item added. edit: nope, I'm wrong. Adding a string to a tuple: >> ("Kr",) + "foo" generates an error: TypeError: can only concatenate tuple (not "str") to tuple It would probably be helpful to determine the type of "card." Do you know what type it is? If not, try putting in a print statement like: if len(card) != 2: print type(card) raise ValueError(card + ' is not a known card.') A: This instantiates the ValueError exception with a single argument, your concatenated (or added) string: raise ValueError(holeCards + ' does not represent a valid hand.') This instantiates the ValueError exception with 2 arguments, whatever card is and a string: raise ValueError(card, 'is not a known card.') A: In the second case card is not a string for sure. If it was a string then len('2') would be equal to 2 and the exception wouldn't be raised, so first check what you are trying to concatenate; it seems something that added to a string returns something represented as a tuple. I recommend using string formatting instead of string concatenation to create the error message. It will use the string representation (__repr__) of the object. With string formatting: >>> "%s foo" % (2) '2 foo' With string concatenation: >>> 2 + " foo" Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: unsupported operand type(s) for +: 'int' and 'str' And another question... what python version/implementation are you using? My cpython interpreter on Linux reports ValueErrors as ValueError, not ERROR... A: Have you overloaded the __add__() somewhere in the code, that could be causing it to return a tuple or something?
Why does concatenation work differently in these two samples?
I am raising exceptions in two different places in my Python code: holeCards = input("Select a hand to play: ") try: if len(holeCards) != 4: raise ValueError(holeCards + ' does not represent a valid hand.') AND (edited to correct raising code) def __init__(self, card): [...] if self.cardFace == -1 or self.cardSuit == -1: raise ValueError(card, 'is not a known card.') For some reason, the first outputs a concatenated string like I expected: ERROR: Amsterdam does not represent a valid hand. But, the second outputs some weird hybrid of set and string: ERROR: ('Kr', 'is not a known card.') Why is the "+" operator behaving differently in these two cases? Edit: The call to init looks like this: card1 = PokerCard(cardsStr[0:2]) card2 = PokerCard(cardsStr[2:4])
[ "Um, am I missing something or are you comparing the output of\nraise ValueError(card, 'is not a known card.')\n\nwith\nraise ValueError(card + ' is not a known card.')\n\n???\nThe second uses \"+\", but the first uses \",\", which does and should give the output you show!\n(nb. the question was edited from a version with \"+\" in both cases. Perhaps this question should be deleted???)\n", "\"card\" probably represents a tuple containing the string \"Kr.\" When you use the + operator on a tuple, you create a new tuple with the extra item added.\nedit: nope, I'm wrong. Adding a string to a tuple:\n>> (\"Kr\",) + \"foo\"\n\ngenerates an error:\nTypeError: can only concatenate tuple (not \"str\") to tuple\n\nIt would probably be helpful to determine the type of \"card.\" Do you know what type it is? If not, try putting in a print statement like:\nif len(card) != 2:\n print type(card)\n raise ValueError(card + ' is not a known card.')\n\n", "This instantiates the ValueError exception with a single argument, your concated (or added) string:\nraise ValueError(holeCards + ' does not represent a valid hand.')\n\nThis instantiates the ValueError exception with 2 arguments, whatever card is and a string:\nraise ValueError(card, 'is not a known card.')\n\n", "In the second case card is not a string for sure. If it was a string then len('2') would be equal to 2 and the exception wouldn't be raised, so check first what are you trying to concatenate, it seems something that added to a string returns something represented as a tuple.\nI recommend you to use string formatting instead of string concatenation to create the error message. It will use the string representation (__repr__) of the object.\nWith string formatting:\n>>> \"%s foo\" % (2)\n'2 foo'\n\nWith string concatenation:\n>>> 2 + \" foo\"\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\nTypeError: unsupported operand type(s) for +: 'int' and 'str'\n\nAnd other question... what python version/implementation are you using? My cpython interpreter on Linux reports ValueErrors as ValueError, not ERROR...\n", "Have you overloaded the __add__() somewhere in the code, that could be causing it to return a tuple or something?\n" ]
[ 8, 5, 4, 1, 0 ]
[]
[]
[ "concatenation", "python" ]
stackoverflow_0000530329_concatenation_python.txt
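Illustrative sketch (not part of the original thread): a standalone Python 2 demonstration of why the comma and the + behave differently when constructing the exception.

    card = 'Kr'
    e1 = ValueError(card, 'is not a known card.')    # two arguments -> args tuple
    e2 = ValueError(card + ' is not a known card.')  # one concatenated string
    print e1    # ('Kr', 'is not a known card.')
    print e2    # Kr is not a known card.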
Q: Deploying bluechannel with fastcgi I am trying to get a basic blue-channel website running through fcgi; I have a django.fcgi file. How do I do this? Thank you. A: Read The Fabulous Manual
Deploying bluechannel with fastcgi
I am trying to get a basic blue-channel website running through fcgi; I have a django.fcgi file. How do I do this? Thank you.
[ "Read The Fabulous Manual\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000530480_django_python.txt
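Illustrative sketch (not part of the original thread): roughly what a minimal django.fcgi looked like under Django's old FastCGI deployment documentation; the project path and settings module are placeholders, and the runfastcgi options are only one possible choice.

    #!/usr/bin/env python
    import os
    import sys

    sys.path.insert(0, "/path/to/your/project")                # placeholder
    os.environ['DJANGO_SETTINGS_MODULE'] = "mysite.settings"   # placeholder

    from django.core.servers.fastcgi import runfastcgi
    runfastcgi(method="threaded", daemonize="false")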
Q: Is there a single Python regex that can change all "foo" to "bar" on lines starting with "#"? Is it possible to write a single Python regular expression that can be applied to a multi-line string and change all occurrences of "foo" to "bar", but only on lines beginning with "#"? I was able to get this working in Perl, using Perl's \G regular expression sigil, which matches the end of the previous match. However, Python doesn't appear to support this. Here's the Perl solution, in case it helps: my $x =<<EOF; # foo foo # foo foo EOF $x =~ s{ ( # begin capture (?:\G|^\#) # last match or start of string plus hash .*? # followed by anything, non-greedily ) # end capture foo } {$1bar}xmg; print $x; The proper output, of course, is: # bar foo # bar bar Can this be done in Python? Edit: Yes, I know that it's possible to split the string into individual lines and test each line and then decide whether to apply the transformation, but please take my word that doing so would be non-trivial in this case. I really do need to do it with a single regular expression. A: lines = mystring.split('\n') for line in lines: if line.startswith('#'): line = line.replace('foo', 'bar') No need for a regex. A: It looked pretty easy to do with a regular expression: >>> import re ... text = """line 1 ... line 2 ... Barney Rubble Cutherbert Dribble and foo ... line 4 ... # Flobalob, bing, bong, foo and brian ... line 6""" >>> regexp = re.compile('^(#.+)foo', re.MULTILINE) >>> print re.sub(regexp, '\g<1>bar', text) line 1 line 2 Barney Rubble Cutherbert Dribble and foo line 4 # Flobalob, bing, bong, bar and brian line 6 But then trying your example text is not so good: >>> text = """# foo ... foo ... # foo foo""" >>> regexp = re.compile('^(#.+)foo', re.MULTILINE) >>> print re.sub(regexp, '\g<1>bar', text) # bar foo # foo bar So, try this: >>> regexp = re.compile('(^#|\g<1>.+)foo', re.MULTILINE) >>> print re.sub(regexp, '\g<1>bar', text) # foo foo # foo foo That seemed to work, but I can't find \g<1> in the documentation! Moral: don't try to code after a couple of beers. A: \g<name> works in python just like perl, and is in the docs. "In addition to character escapes and backreferences as described above, \g<name> will use the substring matched by the group named name, as defined by the (?P<name>...) syntax. \g<number> uses the corresponding group number; \g<2> is therefore equivalent to \2, but isn’t ambiguous in a replacement such as \g<2>0. \20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'. The backreference \g<0> substitutes in the entire substring matched by the RE."
Is there a single Python regex that can change all "foo" to "bar" on lines starting with "#"?
Is it possible to write a single Python regular expression that can be applied to a multi-line string and change all occurrences of "foo" to "bar", but only on lines beginning with "#"? I was able to get this working in Perl, using Perl's \G regular expression sigil, which matches the end of the previous match. However, Python doesn't appear to support this. Here's the Perl solution, in case it helps: my $x =<<EOF; # foo foo # foo foo EOF $x =~ s{ ( # begin capture (?:\G|^\#) # last match or start of string plus hash .*? # followed by anything, non-greedily ) # end capture foo } {$1bar}xmg; print $x; The proper output, of course, is: # bar foo # bar bar Can this be done in Python? Edit: Yes, I know that it's possible to split the string into individual lines and test each line and then decide whether to apply the transformation, but please take my word that doing so would be non-trivial in this case. I really do need to do it with a single regular expression.
[ "lines = mystring.split('\\n')\nfor line in lines:\n if line.startswith('#'):\n line = line.replace('foo', 'bar')\n\nNo need for a regex.\n", "It looked pretty easy to do with a regular expression:\n>>> import re\n... text = \"\"\"line 1\n... line 2\n... Barney Rubble Cutherbert Dribble and foo\n... line 4\n... # Flobalob, bing, bong, foo and brian\n... line 6\"\"\"\n>>> regexp = re.compile('^(#.+)foo', re.MULTILINE)\n>>> print re.sub(regexp, '\\g<1>bar', text)\nline 1\nline 2\nBarney Rubble Cutherbert Dribble and foo\nline 4\n# Flobalob, bing, bong, bar and brian\nline 6\n\nBut then trying your example text is not so good:\n>>> text = \"\"\"# foo\n... foo\n... # foo foo\"\"\"\n>>> regexp = re.compile('^(#.+)foo', re.MULTILINE)\n>>> print re.sub(regexp, '\\g<1>bar', text)\n# bar\nfoo\n# foo bar\n\nSo, try this:\n>>> regexp = re.compile('(^#|\\g.+)foo', re.MULTILINE)\n>>> print re.sub(regexp, '\\g<1>bar', text)\n# foo\nfoo\n# foo foo\n\nThat seemed to work, but I can't find \\g in the documentation!\nMoral: don't try to code after a couple of beers.\n", "\\g works in python just like perl, and is in the docs.\n\"In addition to character escapes and backreferences as described above, \\g will use the substring matched by the group named name, as defined by the (?P...) syntax. \\g uses the corresponding group number; \\g<2> is therefore equivalent to \\2, but isn’t ambiguous in a replacement such as \\g<2>0. \\20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'. The backreference \\g<0> substitutes in the entire substring matched by the RE.\"\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0000530546_python_regex.txt
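Illustrative sketch (not part of the original thread): one standard single-pattern workaround is a callable replacement, which sidesteps the missing \G support entirely.

    import re

    text = "# foo\nfoo\n# foo foo"
    # One regex selects each '#' line; the callable rewrites every foo inside it.
    result = re.sub(r"(?m)^#.*$",
                    lambda m: m.group(0).replace('foo', 'bar'),
                    text)
    print result    # -> "# bar\nfoo\n# bar bar"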
Q: Huge collections in Python Basically I am storing millions of vector3 values in a list. But right now the vector3s are defined like so: [5,6,7] which I believe is a list. The values will not be modified nor do I need any vector3 functionality. Is this the most performant way to do this? A: If you are storing millions of them, the best way (both for speed and memory use) is to use numpy. If you want to avoid numpy and use only built-in python modules, using tuples instead of lists will save you some overhead. A: The best way is probably using tuples rather than a list. Tuples are faster than lists, and cannot be modified once defined. http://docs.python.org/tutorial/datastructures.html#tuples-and-sequences edit: more specifically, probably a list of tuples would work best: [(4,3,2), (2,4,5)...] A: Traditionally, you would profile your code, find bottlenecks, and deal with them as required. The answer is "No it probably isn't the most performant way" but does that really matter? It might do, and when it does, it can be fixed.
Huge collections in Python
Basically I am storing millions of vector3 values in a list. But right now the vector3s are defined like so: [5,6,7] which I believe is a list. The values will not be modified nor do I need any vector3 functionality. Is this the most performant way to do this?
[ "If you are storing millions of them, the best way (both for speed and memory use) is to use numpy. \nIf you want to avoid numpy and use only built-in python modules, using tuples instead of lists will save you some overhead.\n", "The best way is probably using tuples rather than a list. Tuples are faster than lists, and cannot be modified once defined. http://docs.python.org/tutorial/datastructures.html#tuples-and-sequences\nedit: more specifically, probably a list of tuples would work best: [(4,3,2), (2,4,5)...]\n", "Traditionally, you would profile your code, find bottlenecks, and deal with them as required. The answer is \"No it probably isn't the most performant way\" but does that really matter? It might do, and when it does, it can be fixed.\n" ]
[ 13, 6, 4 ]
[]
[]
[ "collections", "performance", "python" ]
stackoverflow_0000530601_collections_performance_python.txt
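Illustrative sketch (not part of the original thread): the numpy suggestion in concrete form; the shape and dtype are invented for illustration.

    import numpy as np

    # One contiguous typed block instead of millions of small Python lists.
    vecs = np.empty((1000000, 3), dtype=np.int32)
    vecs[0] = (5, 6, 7)
    print vecs[0]    # [5 6 7]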
Q: Is there a way to determine if a subdirectory is in the same filesystem from python when using os.walk? I'm writing a python script that uses os.walk() to walk a directory tree. I'd like to give it the ability to skip subdirectories that are mounted to a different file system, the way find -xdev does. Checking through the documentation on os.walk(), I didn't see any argument to pass in for it to do that automatically. Is there something I can use to do the filtering myself? Hopefully something that runs on both Linux and Mac OS X? A: os.path.ismount() A: I think you can use a combination of the os.stat call and a filtering of the dirnames given by os.walk to do what you want. Something like this: import os for root, dirs, files in os.walk(somerootdir) : do_processing(root, dirs, files) dirs = [i for i in dirs if os.stat(os.path.join(root, i)).st_dev == os.stat(root).st_dev] That should modify the list of directories to recurse into, by removing those which do not have the same device. I have no idea on how it will work on OS X, but it seems to be working here in Linux, after a very little bit of testing.
Is there a way to determine if a subdirectory is in the same filesystem from python when using os.walk?
I'm writing a python script that uses os.walk() to walk a directory tree. I'd like to give it the ability to skip subdirectories that are mounted to a different file system, the way find -xdev does. Checking through the documentation on os.walk(), I didn't see any argument to pass in for it to do that automatically. Is there something I can use to do the filtering myself? Hopefully something that runs on both Linux and Mac OS X?
[ "os.path.ismount()\n", "I think you can use a combination of the os.stat call and a filtering of the dirnames given by os.walk to do what you want. Something like this:\nimport os\nfor root, dirs, files in os.walk(somerootdir) :\n do_processing(root, dirs, files)\n dirs = [i for i in dirs if os.stat(os.path.join(root, i)).st_dev == os.stat(root).st_dev]\n\nThat should modify the list of directories to recurse into, by removing those which do not have the same device.\nI have no idea on how it will work on OS X, but it seems to be working here in Linux, after a very little bit of testing.\n" ]
[ 7, 1 ]
[]
[]
[ "python", "unix" ]
stackoverflow_0000530645_python_unix.txt
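Illustrative sketch (not part of the original thread): the two answers combine naturally; the pruning must use slice assignment so os.walk sees the trimmed list. somerootdir and do_processing are placeholders borrowed from the answer above.

    import os

    for root, dirs, files in os.walk(somerootdir):
        # Skip mount points in place, so os.walk never descends into them.
        dirs[:] = [d for d in dirs
                   if not os.path.ismount(os.path.join(root, d))]
        do_processing(root, dirs, files)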
Q: How to deploy a Python application with libraries as source with no further dependencies? Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install. The Problem This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above). A: Just use virtualenv - it is a tool to create isolated Python environments. You can create a set-up script and distribute the whole bunch if you want. A: "I dislike the fact that developers (or me starting on a clean new machine) have to jump through the distutils hoops of having to install the libraries locally before they can get started" Why? What -- specifically -- is wrong with this? You did it to create the project. Your project is so popular others want to do the same. I don't see a problem. Please update your question with specific problems you need solved. Disliking the way open source is distributed isn't a problem -- it's the way that open source works. Edit. The "walled garden" doesn't matter very much. Choice 1. You could, BTW, build an "installer" that runs easy_install 6 times for them. Choice 2. You can save all of the installer kits that easy_install would have used. Then you can provide a script that does an unzip and a python setup.py install for all six. Choice 3. You can provide a zipped version of your site-packages. After they install Python, they unzip your site-packages directory into `C:\Python2.5\lib\site-packages`. Choice 4. You can build your own MSI installer kit for your Python environment. Choice 5. You can host your own pypi-like server and provide an easy_install that checks your server first. A: I sometimes use the approach I describe below, for the exact same reason that @Boris states: I would prefer that the use of some code is as easy as a) svn checkout/update - b) go. But for the record: I use virtualenv/easy_install most of the time. I agree to a certain extent to the criticisms by @Ali A and @S.Lott. Anyway, the approach I use depends on modifying sys.path, and works like this: Require python and setuptools (to enable loading code from eggs) on all computers that will use your software. Organize your directory structure like this: project/ *.py scriptcustomize.py file.pth thirdparty/ eggs/ mako-vNNN.egg ... .egg code/ elementtree\ *.py ... In your top-level script(s) include the following code at the top: from scriptcustomize import apply_pth_files apply_pth_files(__file__) Add scriptcustomize.py to your project folder: import os from glob import glob import fileinput import sys def apply_pth_files(scriptfilename, at_beginning=False): """At the top of your script: from scriptcustomize import apply_pth_files apply_pth_files(__file__) """ directory = os.path.dirname(scriptfilename) files = glob(os.path.join(directory, '*.pth')) if not files: return for line in fileinput.input(files): line = line.strip() if line and line[0] != '#': path = os.path.join(directory, line) if at_beginning: sys.path.insert(0, path) else: sys.path.append(path) Add one or more *.pth file(s) to your project folder. On each line, put a reference to a directory with packages. For instance: # contents of *.pth file thirdparty/code thirdparty/eggs/mako-vNNN.egg I "kind-of" like this approach. What I like: it is similar to how *.pth files work, but for individual programs instead of your entire site-packages. What I do not like: having to add the two lines at the beginning of the top-level scripts. Again: I use virtualenv most of the time. But I tend to use virtualenv for projects where I have tight control of the deployment scenario. In cases where I do not have tight control, I tend to use the approach I describe above. It makes it really easy to package a project as a zip and have the end user "install" it (by unzipping). A: I agree with the answers by Nosklo and S.Lott. (+1 to both) Can I just add that what you want to do is actually a terrible idea. If you genuinely want people to hack on your code, they will need some understanding of the libraries involved, how they work, what they are, where they come from, the documentation for each etc. Sure provide them with a bootstrap script, but beyond that you will be molly-coddling to the point that they are clueless. Then there are specific issues such as "what if one user wants to install a different version or implementation of a library?", a glaring example here is ElementTree, as this has a number of implementations. A: I'm not suggesting that this is a great idea, but usually what I do in situations like these is that I have a Makefile, checked into subversion, which contains make rules to fetch all the dependent libraries and install them. The makefile can be smart enough to only apply the dependent libraries if they aren't present, so this can be relatively fast. A new developer on the project simply checks out from subversion and then types "make". This approach might work well for you, given that your audience is already used to the idea of using subversion checkouts as part of their fetch process. Also, it has the nice property that all knowledge about your program, including its external dependencies, are captured in the source code repository.
How to deploy a Python application with libraries as source with no further dependencies?
Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install. The Problem This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
[ "Just use virtualenv - it is a tool to create isolated Python environments. You can create a set-up script and distribute the whole bunch if you want.\n", "\"I dislike the fact that developers (or me starting on a clean new machine) have to jump through the distutils hoops of having to install the libraries locally before they can get started\"\nWhy?\nWhat -- specifically -- is wrong with this?\nYou did it to create the project. Your project is so popular others want to do the same.\nI don't see a problem. Please update your question with specific problems you need solved. Disliking the way open source is distributed isn't a problem -- it's the way that open source works.\nEdit. The \"walled garden\" doesn't matter very much. \nChoice 1. You could, BTW, build an \"installer\" that runs easy_install 6 times for them.\nChoice 2. You can save all of the installer kits that easy_install would have used. Then you can provide a script that does an unzip and a python setup.py install for all six.\nChoice 3. You can provide a zipped version of your site-packages. After they install Python, they unzip your site-packages directory into `C:\\Python2.5\\lib\\site-packages``.\nChoice 4. You can build your own MSI installer kit for your Python environment.\nChoice 5. You can host your own pypi-like server and provide an easy_install that checks your server first.\n", "I sometimes use the approach I describe below, for the exact same reason that @Boris states: I would prefer that the use of some code is as easy as a) svn checkout/update - b) go.\nBut for the record:\n\nI use virtualenv/easy_install most of the time.\nI agree to a certain extent to the critisisms by @Ali A and @S.Lott\n\nAnyway, the approach I use depends on modifying sys.path, and works like this:\n\nRequire python and setuptools (to enable loading code from eggs) on all computers that will use your software.\nOrganize your directory structure this:\n\n\nproject/\n *.py\n scriptcustomize.py\n file.pth\n\n thirdparty/\n eggs/\n mako-vNNN.egg\n ... .egg\n code/\n elementtree\\\n *.py\n ...\n\n\nIn your top-level script(s) include the following code at the top:\n\n\nfrom scriptcustomize import apply_pth_files\napply_pth_files(__file__)\n\n\nAdd scriptcustomize.py to your project folder:\n\n\nimport os\nfrom glob import glob\nimport fileinput\nimport sys\n\ndef apply_pth_files(scriptfilename, at_beginning=False):\n \"\"\"At the top of your script:\n from scriptcustomize import apply_pth_files\n apply_pth_files(__file__)\n\n \"\"\"\n directory = os.path.dirname(scriptfilename)\n files = glob(os.path.join(directory, '*.pth'))\n if not files:\n return\n for line in fileinput.input(files):\n line = line.strip()\n if line and line[0] != '#':\n path = os.path.join(directory, line)\n if at_beginning:\n sys.path.insert(0, path)\n else:\n sys.path.append(path)\n\n\nAdd one or more *.pth file(s) to your project folder. On each line, put a reference to a directory with packages. For instance:\n\n\n# contents of *.pth file\nthirdparty/code\nthirdparty/eggs/mako-vNNN.egg\n\n\nI \"kind-of\" like this approach. What I like: it is similar to how *.pth files work, but for individual programs instead of your entire site-packages. What I do not like: having to add the two lines at the beginning of the top-level scripts.\nAgain: I use virtualenv most of the time. But I tend to use virtualenv for projects where I have tight control of the deployment scenario. In cases where I do not have tight control, I tend to use the approach I describe above. 
It makes it really easy to package a project as a zip and have the end user \"install\" it (by unzipping).\n\n", "I agree with the answers by Nosklo and S.Lott. (+1 to both)\nCan I just add that what you want to do is actually a terrible idea.\nIf you genuinely want people to hack on your code, they will need some understanding of the libraries involved, how they work, what they are, where they come from, the documentation for each etc. Sure provide them with a bootstrap script, but beyond that you will be molly-coddling to the point that they are clueless.\nThen there are specific issues such as \"what if one user wants to install a different version or implementation of a library?\", a glaring example here is ElementTree, as this has a number of implementations.\n", "I'm not suggesting that this is a great idea, but usually what I do in situations like these is that I have a Makefile, checked into subversion, which contains make rules to fetch all the dependent libraries and install them. The makefile can be smart enough to only apply the dependent libraries if they aren't present, so this can be relatively fast.\nA new developer on the project simply checks out from subversion and then types \"make\".\nThis approach might work well for you, given that your audience is already used to the idea of using subversion checkouts as part of their fetch process. Also, it has the nice property that all knowledge about your program, including its external dependencies, are captured in the source code repository.\n" ]
[ 9, 8, 8, 0, 0 ]
[]
[]
[ "bootstrapping", "deployment", "layout", "python" ]
stackoverflow_0000527510_bootstrapping_deployment_layout_python.txt
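Illustrative sketch (not part of the original thread): a hypothetical helper for "Choice 2" above, running setup.py install for each vendored source distribution; the thirdparty/ layout is assumed.

    import os
    import subprocess
    import sys

    VENDOR = 'thirdparty'    # assumed directory of unpacked source distributions

    for pkg in sorted(os.listdir(VENDOR)):
        pkg_dir = os.path.join(VENDOR, pkg)
        if os.path.exists(os.path.join(pkg_dir, 'setup.py')):
            # Install each dependency with the interpreter running this script.
            subprocess.check_call([sys.executable, 'setup.py', 'install'],
                                  cwd=pkg_dir)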
Q: Parsing datetime strings with microseconds in Python 2.5 I have a text file with a lot of datetime strings in isoformat. The strings are similar to this: '2009-02-10 16:06:52.598800' These strings were generated using str(datetime_object). The problem is that, for some reason, str(datetime_object) generates a different format when the datetime object has microseconds set to zero and some strings look like this: '2009-02-10 16:06:52' How can I parse these strings and convert them into a datetime object? It's very important to get all the data in the object, including microseconds. NOTE: I have to use Python 2.5, the format directive %f for microseconds doesn't exist in 2.5. A: Alternatively: from datetime import datetime def str2datetime(s): parts = s.split('.') dt = datetime.strptime(parts[0], "%Y-%m-%d %H:%M:%S") return dt.replace(microsecond=int(parts[1])) Using strptime itself to parse the date/time string (so no need to think up corner cases for a regex). A: Use the dateutil module. It supports a much wider range of date and time formats than the built in Python ones. You'll need to easy_install dateutil for the following code to work: from dateutil.parser import parser p = parser() datetime_with_microseconds = p.parse('2009-02-10 16:06:52.598800') print datetime_with_microseconds.microsecond results in: 598799 A: Someone has already filed a bug with this issue: Issue 1982. Since you need this to work with python 2.5 you must parse the value manually and then manipulate the datetime object. A: It might not be the best solution, but you can use a regular expression: m = re.match(r'(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})(?:\.(\d{6}))?', datestr) dt = datetime.datetime(*[int(x) for x in m.groups() if x])
Parsing datetime strings with microseconds in Python 2.5
I have a text file with a lot of datetime strings in isoformat. The strings are similar to this: '2009-02-10 16:06:52.598800' These strings were generated using str(datetime_object). The problem is that, for some reason, str(datetime_object) generates a different format when the datetime object has microseconds set to zero and some strings look like this: '2009-02-10 16:06:52' How can I parse these strings and convert them into a datetime object? It's very important to get all the data in the object, including microseconds. NOTE: I have to use Python 2.5, the format directive %f for microseconds doesn't exist in 2.5.
[ "Alternatively:\nfrom datetime import datetime\n\ndef str2datetime(s):\n parts = s.split('.')\n dt = datetime.strptime(parts[0], \"%Y-%m-%d %H:%M:%S\")\n return dt.replace(microsecond=int(parts[1]))\n\nUsing strptime itself to parse the date/time string (so no need to think up corner cases for a regex).\n", "Use the dateutil module. It supports a much wider range of date and time formats than the built in Python ones.\nYou'll need to easy_install dateutil for the following code to work:\nfrom dateutil.parser import parser\n\np = parser()\ndatetime_with_microseconds = p.parse('2009-02-10 16:06:52.598800')\nprint datetime_with_microseconds.microsecond\n\nresults in:\n598799\n\n", "Someone has already filed a bug with this issue: Issue 1982. Since you need this to work with python 2.5 you must parse the value manualy and then manipulate the datetime object.\n", "It might not be the best solution, but you can use a regular expression:\nm = re.match(r'(\\d{4})-(\\d{2})-(\\d{2}) (\\d{2}):(\\d{2}):(\\d{2})(?:\\.(\\d{6}))?', datestr)\ndt = datetime.datetime(*[int(x) for x in m.groups() if x])\n\n" ]
[ 21, 11, 5, 2 ]
[]
[]
[ "datetime", "parsing", "python", "python_2.5" ]
stackoverflow_0000531157_datetime_parsing_python_python_2.5.txt
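Illustrative sketch (not part of the original thread): the split-based recipe above raises IndexError on strings without a fractional part, which is exactly the second format the question mentions; a variant that accepts both, assuming the date portion is always well formed:

    from datetime import datetime

    def str2datetime(s):
        parts = s.split('.')
        dt = datetime.strptime(parts[0], "%Y-%m-%d %H:%M:%S")
        if len(parts) > 1:
            # str(datetime) always writes six fractional digits when present.
            dt = dt.replace(microsecond=int(parts[1]))
        return dt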
Q: Adding a shebang causes No such file or directory error when running my python script I'm trying to run a python script. It works fine when I run it: python2.5 myscript.py inpt0 The problem starts when I add a shebang: #!/usr/bin/env python2.5 Result in: $ myscript.py inpt0 : No such file or directory Try 2: #!/usr/local/bin/python2.5 Result in: $ myscript.py inpt0 : bad interpreter: No such file or directoryon2.5 When I run them directly in the terminal they both work just fine: $ /usr/local/bin/python2.5 Python 2.5.4 (r254:67916, Feb 9 2009, 12:50:32) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> $ /usr/bin/env python2.5 Python 2.5.4 (r254:67916, Feb 9 2009, 12:50:32) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> Any hints on how to make this work with shebang? A: I had similar problems and it turned out to be problem with line-endings. You use windows/linux/mac line endings? Edit: forgot the script name, but as OP says, it's dos2unix <filename>
Adding a shebang causes No such file or directory error when running my python script
I'm trying to run a python script. It works fine when I run it: python2.5 myscript.py inpt0 The problem starts when I add a shebang: #!/usr/bin/env python2.5 Results in: $ myscript.py inpt0 : No such file or directory Try 2: #!/usr/local/bin/python2.5 Results in: $ myscript.py inpt0 : bad interpreter: No such file or directoryon2.5 When I run them directly in the terminal they both work just fine: $ /usr/local/bin/python2.5 Python 2.5.4 (r254:67916, Feb 9 2009, 12:50:32) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> $ /usr/bin/env python2.5 Python 2.5.4 (r254:67916, Feb 9 2009, 12:50:32) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> Any hints on how to make this work with shebang?
[ "I had similar problems and it turned out to be problem with line-endings. You use windows/linux/mac line endings?\nEdit: forgot the script name, but as OP says, it's dos2unix <filename>\n" ]
[ 71 ]
[]
[]
[ "python", "shell" ]
stackoverflow_0000531382_python_shell.txt
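Illustrative sketch (not part of the original thread): where dos2unix is unavailable, a Python 2 equivalent; it reads the whole file into memory, so it is only suitable for reasonably small scripts.

    data = open('myscript.py', 'rb').read()
    open('myscript.py', 'wb').write(data.replace('\r\n', '\n'))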
Q: Why is python ordering my dictionary like so? Here is the dictionary I have propertyList = { "id": "int", "name": "char(40)", "team": "int", "realOwner": "int", "x": "int", "y": "int", "description": "char(255)", "port": "bool", "secret": "bool", "dead": "bool", "nomadic": "bool", "population": "int", "slaves": "int", } But when I print it out with "\n".join(myDict) I get this name nomadic dead port realOwner secret slaves team y x population id description I know that a dictionary is unordered but it comes out the same every time and I've no idea why. A: For older versions of Python, the real question should be “why not?” — An unordered dictionary is usually implemented as a hash table where the order of elements is well-defined but not immediately obvious (the Python documentation used to state this). Your observations match the rules of a hash table perfectly: apparent arbitrary, but constant order. Python has since changed its dict implementation to preserve the order of insertion, and this is guaranteed as of Python 3.7. The implementation therefore no longer constitutes a pure hash table (but a hash table is still used in its implementation). A: The specification for the built-in dictionary type disclaims any preservation of order, it is best to think of a dictionary as an unordered set of key: value pairs... You may want to check the OrderedDict module, which is an implementation of an ordered dictionary with Key Insertion Order. A: The only thing about dictionary ordering you can rely on is that the order will remain the same if there are no modifications to the dictionary; e.g., iterating over a dictionary twice without modifying it will result in the same sequence of keys. However, though the order of Python dictionaries is deterministic, it can be influenced by factors such as the order of insertions and removals, so equal dictionaries can end up with different orderings: >>> {1: 0, 2: 0}, {2: 0, 1: 0} ({1: 0, 2: 0}, {1: 0, 2: 0}) >>> {1: 0, 9: 0}, {9: 0, 1: 0} ({1: 0, 9: 0}, {9: 0, 1: 0})
Why is python ordering my dictionary like so?
Here is the dictionary I have propertyList = { "id": "int", "name": "char(40)", "team": "int", "realOwner": "int", "x": "int", "y": "int", "description": "char(255)", "port": "bool", "secret": "bool", "dead": "bool", "nomadic": "bool", "population": "int", "slaves": "int", } But when I print it out with "\n".join(myDict) I get this name nomadic dead port realOwner secret slaves team y x population id description I know that a dictionary is unordered but it comes out the same every time and I've no idea why.
[ "For older versions of Python, the real question should be “why not?” — An unordered dictionary is usually implemented as a hash table where the order of elements is well-defined but not immediately obvious (the Python documentation used to state this). Your observations match the rules of a hash table perfectly: apparent arbitrary, but constant order.\nPython has since changed its dict implementation to preserve the order of insertion, and this is guaranteed as of Python 3.7. The implementation therefore no longer constitutes a pure hash table (but a hash table is still used in its implementation).\n", "The specification for the built-in dictionary type\ndisclaims any preservation of order, it is best to think of a dictionary as an unordered set of key: value pairs...\nYou may want to check the OrderedDict module, which is an implementation of an ordered dictionary with Key Insertion Order.\n", "The only thing about dictionary ordering you can rely on is that the order will remain the same if there are no modifications to the dictionary; e.g., iterating over a dictionary twice without modifying it will result in the same sequence of keys. However, though the order of Python dictionaries is deterministic, it can be influenced by factors such as the order of insertions and removals, so equal dictionaries can end up with different orderings:\n>>> {1: 0, 2: 0}, {2: 0, 1: 0}\n({1: 0, 2: 0}, {1: 0, 2: 0})\n>>> {1: 0, 9: 0}, {9: 0, 1: 0}\n({1: 0, 9: 0}, {9: 0, 1: 0})\n\n" ]
[ 80, 10, 8 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0000526125_dictionary_python.txt
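Illustrative sketch (not part of the original thread): if a stable, human-friendly order is only needed for display, sorting the keys at print time is the simplest fix.

    for key in sorted(propertyList):
        print "%s: %s" % (key, propertyList[key])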
Q: Django - how to remove cached results from previous form posts? I've got a django Form which contains a dictionary of strings. I've given the form a submit button and a preview button. When the preview button is pressed after entering some information, a POST is sent, and the strings in the dictionary are automagically recovered (I assume that it's done using session state or something). This is great, exactly what I wanted. The problem is that if I don't submit the form, then do a GET (i.e. browse to the page with the form on it), enter some info and hit preview, the information that was stored in the dictionary from the first preview is still there. How do you clear this information? The following is my form: class ListingImagesForm(forms.Form): #the following should be indented def clear_dictionaries(self): self.statuses = {} self.thumbnail_urls = {} self.image_urls = {} statuses = {} thumbnail_urls = {} image_urls = {} valid_images = SortedDict() #from the django framework photo_0 = forms.ImageField(required=False, label='First photo') photo_1 = forms.ImageField(required=False, label='Second photo') def clean_photo_0(self): return self._clean_photo("photo_0") def clean_photo_1(self): return self._clean_photo("photo_1") def _clean_photo(self, dataName): data = self.cleaned_data[dataName] if data != None: if data.size > max_photo_size: raise forms.ValidationError("The maximum image size allowed is 500KB") elif data.size == 0: raise forms.ValidationError("The image given is empty.") else: self.valid_images[dataName] = data self.statuses[dataName] = True list_of_image_locs = thumbs.save_image_and_thumbs(data.name, data) self.image_urls[dataName] = list_of_image_locs[0] self.thumbnail_urls[dataName] = list_of_image_locs[1] return data And here is the view: @login_required def add(request): #the following should be indented preview = False added = False new_listing = None owner = None if request.POST: form = GeneralListingForm(request.POST) image_form = ListingImagesForm(request.POST, request.FILES) if image_form.is_valid() and form.is_valid(): new_listing = form.save(commit=False) new_listing.owner = request.user.get_profile() if request.POST.get('preview', False): preview = True owner = new_listing.owner elif request.POST.get('submit', False): new_listing.save() for image in image_form.image_urls: url = image_form.image_urls[image] try: new_image = Image.objects.get(photo=url) new_image.listings.add(new_listing) new_image.save() except: new_image = Image(photo=url) new_image.save() new_image.listings.add(new_listing) new_image.save() form = GeneralListingForm() image_form = ListingImagesForm() image_form.clear_dictionaries() added = True else: form = GeneralListingForm() image_form = ListingImagesForm() image_form.clear_dictionaries() return render_to_response('add_listing.html', {'form': form, 'image_form' : image_form, 'preview': preview, 'added': added, 'owner': owner, 'listing': new_listing, 'currentmaintab': 'listings', 'currentcategory': 'all'}, context_instance=RequestContext(request)) I haven't been programming with django or python all that long, so any pointers on fixing up some of the code is welcome :) A: This code is broken in concept; it will never do what you want it to. Your dictionaries are class attributes on the ListingImagesForm class. This class is a module-level global. So you're storing some state in a global variable in-memory in a webserver process. 
This state is global to all users of your application, not just the user who submitted the form, and will persist (the same for all users) until it's explicitly changed or cleared (or until you just happen to have your next request served by a different process/thread in a production webserver). [EDIT: I used "global" here in an unclear way. Class attributes aren't "global", they are encapsulated in the class namespace just as you'd expect. But you're assigning attributes to the class object, not to instances of the class (which you'd do within an __init__() method). The class object is a module-level global, and it only has one set of attributes. Every time you change them, you're changing them for everyone. If you'd modify the above code so that your three dictionaries are initialized within the __init__() method, then your "cached data" "problem" would go away; but so would the "magical" persistence behavior that you wanted in the first place.] You need to rethink your design with a clear understanding that Django doesn't "automagically" maintain any state for you across requests. All your state must be explicitly passed via POST or GET, or explicitly saved to a session. Class attributes on a Form class should be avoided except for immutable configuration-type information, and instance attributes are only useful for keeping track of temporary state while processing a single request, they won't persist across requests (a new instance of your Form class is created on each request).
Django - how to remove cached results from previous form posts?
I've got a django Form which contains a dictionary of strings. I've given the form a submit button and a preview button. When the preview button is pressed after entering some information, a POST is sent, and the strings in the dictionary are automagically recovered (I assume that it's done using session state or something). This is great, exactly what I wanted. The problem is that if I don't submit the form, then do a GET (i.e. browse to the page with the form on it), enter some info and hit preview, the information that was stored in the dictionary from the first preview is still there. How do you clear this information? The following is my form: class ListingImagesForm(forms.Form): #the following should be indented def clear_dictionaries(self): self.statuses = {} self.thumbnail_urls = {} self.image_urls = {} statuses = {} thumbnail_urls = {} image_urls = {} valid_images = SortedDict() #from the django framework photo_0 = forms.ImageField(required=False, label='First photo') photo_1 = forms.ImageField(required=False, label='Second photo') def clean_photo_0(self): return self._clean_photo("photo_0") def clean_photo_1(self): return self._clean_photo("photo_1") def _clean_photo(self, dataName): data = self.cleaned_data[dataName] if data != None: if data.size > max_photo_size: raise forms.ValidationError("The maximum image size allowed is 500KB") elif data.size == 0: raise forms.ValidationError("The image given is empty.") else: self.valid_images[dataName] = data self.statuses[dataName] = True list_of_image_locs = thumbs.save_image_and_thumbs(data.name, data) self.image_urls[dataName] = list_of_image_locs[0] self.thumbnail_urls[dataName] = list_of_image_locs[1] return data And here is the view: @login_required def add(request): #the following should be indented preview = False added = False new_listing = None owner = None if request.POST: form = GeneralListingForm(request.POST) image_form = ListingImagesForm(request.POST, request.FILES) if image_form.is_valid() and form.is_valid(): new_listing = form.save(commit=False) new_listing.owner = request.user.get_profile() if request.POST.get('preview', False): preview = True owner = new_listing.owner elif request.POST.get('submit', False): new_listing.save() for image in image_form.image_urls: url = image_form.image_urls[image] try: new_image = Image.objects.get(photo=url) new_image.listings.add(new_listing) new_image.save() except: new_image = Image(photo=url) new_image.save() new_image.listings.add(new_listing) new_image.save() form = GeneralListingForm() image_form = ListingImagesForm() image_form.clear_dictionaries() added = True else: form = GeneralListingForm() image_form = ListingImagesForm() image_form.clear_dictionaries() return render_to_response('add_listing.html', {'form': form, 'image_form' : image_form, 'preview': preview, 'added': added, 'owner': owner, 'listing': new_listing, 'currentmaintab': 'listings', 'currentcategory': 'all'}, context_instance=RequestContext(request)) I haven't been programming with django or python all that long, so any pointers on fixing up some of the code is welcome :)
[ "This code is broken in concept; it will never do what you want it to. Your dictionaries are class attributes on the ListingImagesForm class. This class is a module-level global. So you're storing some state in a global variable in-memory in a webserver process. This state is global to all users of your application, not just the user who submitted the form, and will persist (the same for all users) until it's explicitly changed or cleared (or until you just happen to have your next request served by a different process/thread in a production webserver).\n[EDIT: I used \"global\" here in an unclear way. Class attributes aren't \"global\", they are encapsulated in the class namespace just as you'd expect. But you're assigning attributes to the class object, not to instances of the class (which you'd do within an __init__() method). The class object is a module-level global, and it only has one set of attributes. Every time you change them, you're changing them for everyone. If you'd modify the above code so that your three dictionaries are initialized within the __init__() method, then your \"cached data\" \"problem\" would go away; but so would the \"magical\" persistence behavior that you wanted in the first place.]\nYou need to rethink your design with a clear understanding that Django doesn't \"automagically\" maintain any state for you across requests. All your state must be explicitly passed via POST or GET, or explicitly saved to a session. Class attributes on a Form class should be avoided except for immutable configuration-type information, and instance attributes are only useful for keeping track of temporary state while processing a single request, they won't persist across requests (a new instance of your Form class is created on each request).\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000530715_django_python.txt
Q: How can I find memory leaks in my Python program? Possible Duplicate: Python memory profiler I've got a fairly complex Python program (about 20,000 lines) which after some development has started consuming increasing amounts of memory when it runs. What are the best tools and techniques for finding out what all the memory is being used for? Usually this comes down to either unexpectedly keeping references to objects, or extension module bugs (which isn't particularly likely since we've been using the Python 2.4 installation for a while). We are using various third party libraries such as Twisted, Twisted Conch and MySQLdb. A: Generally, failing to close cursors is one of the most common kinds of memory leaks. The garbage collector can't see the MySQL resources involved in the cursor. MySQL doesn't know that the Python side was released unless the close() method is called explicitly. Rule of thumb. Open, use and close cursors in as short a span of code as you can manage.
How can I find memory leaks in my Python program?
Possible Duplicate: Python memory profiler I've got a fairly complex Python program (about 20,000 lines) which after some development has started consuming increasing amounts of memory when it runs. What are the best tools and techniques for finding out what all the memory is being used for? Usually this comes down to either unexpectedly keeping references to objects, or extension module bugs (which isn't particularly likely since we've been using the Python 2.4 installation for a while). We are using various third party libraries such as Twisted, Twisted Conch and MySQLdb.
[ "Generally, failing to close cursors is one of the most common kinds of memory leaks. The garbage collector can't see the MySQL resources involved in the cursor. MySQL doesn't know that the Python side was released unless the close() method is called explicitly.\nRule of thumb. Open, use and close cursors in as short a span of code as you can manage. \n" ]
[ 19 ]
[ "Python's memory is managed by a garbage collector. In general, there shouldn't be a problem with memory leaking (definitely not for Python2.5 and above), unless you happen to be writing extension modules in C/C++. In that case, Valgrind (Blog post -http://bruynooghe.blogspot.com/2008/12/finding-memory-leaks-in-python.html) might be helpful. I found that this person - http://mg.pov.lt/blog/hunting-python-memleaks has used PDB and matplotlib to trace a memory leak. I hope this helps, I have no experience fixing Python memory leaks.\n" ]
[ -1 ]
[ "debugging", "memory_leaks", "python", "twisted" ]
stackoverflow_0000532346_debugging_memory_leaks_python_twisted.txt
Q: How do I layout a 3 pane window using wxPython? I am trying to find a simple way to layout a 3 pane window using wxPython. I want to have a tree list in the left pane, then have a right pane that is split into two - with an edit component in the top part and a grid component in the bottom part. Something along the lines of: -------------------------------------- | | | | | Edit | | Tree | Control | | Control | | | |----------------------| | | | | | Grid | | | | -------------------------------------- I would like the window to be re-sizable and give the user the ability to change the (relative) size of each of the components within the windows by dragging the borders. I figure that I need some combination of sizers and/or splitter-window components but can't find a decent example of this kind of window in the documentation or on the web. A: This is a very simple layout using wx.aui and three panels. I guess you can easily adapt it to suit your needs. Orjanp... import wx import wx.aui class MyFrame(wx.Frame): def __init__(self, *args, **kwargs): wx.Frame.__init__(self, *args, **kwargs) self.mgr = wx.aui.AuiManager(self) leftpanel = wx.Panel(self, -1, size = (200, 150)) rightpanel = wx.Panel(self, -1, size = (200, 150)) bottompanel = wx.Panel(self, -1, size = (200, 150)) self.mgr.AddPane(leftpanel, wx.aui.AuiPaneInfo().Bottom()) self.mgr.AddPane(rightpanel, wx.aui.AuiPaneInfo().Left().Layer(1)) self.mgr.AddPane(bottompanel, wx.aui.AuiPaneInfo().Center().Layer(2)) self.mgr.Update() class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, '07_wxaui.py') frame.Show() self.SetTopWindow(frame) return 1 if __name__ == "__main__": app = MyApp(0) app.MainLoop() A: First of all download wxGlade a gui builder for wxPython (alternative XRCed, I prefer wxGlade). Then you have to decide if you want to use a GridSizer or a Splitter and you are done. Below you find both (between Tree and right side is a GridSizer -> resizes automatically). Between Edit and GridCtrl is a Sizer (manual Resize). Regards. one minute work without entering a single line of code: #!/usr/bin/env python # -*- coding: utf-8 -*- # generated by wxGlade 0.6.3 on Sat Feb 07 10:02:31 2009 import wx import wx.grid # begin wxGlade: extracode # end wxGlade class MyDialog(wx.Dialog): def __init__(self, *args, **kwds): # begin wxGlade: MyDialog.__init__ kwds["style"] = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER|wx.THICK_FRAME wx.Dialog.__init__(self, *args, **kwds) self.window_1 = wx.SplitterWindow(self, -1, style=wx.SP_3D|wx.SP_BORDER) self.tree_ctrl_1 = wx.TreeCtrl(self, -1, style=wx.TR_HAS_BUTTONS|wx.TR_LINES_AT_ROOT|wx.TR_DEFAULT_STYLE|wx.SUNKEN_BORDER) self.text_ctrl_1 = wx.TextCtrl(self.window_1, -1, "This is the Edit", style=wx.TE_MULTILINE) self.grid_1 = wx.grid.Grid(self.window_1, -1, size=(1, 1)) self.__set_properties() self.__do_layout() # end wxGlade def __set_properties(self): # begin wxGlade: MyDialog.__set_properties self.SetTitle("dialog_1") self.grid_1.CreateGrid(10, 3) # end wxGlade def __do_layout(self): # begin wxGlade: MyDialog.__do_layout grid_sizer_1 = wx.FlexGridSizer(1, 2, 3, 3) grid_sizer_1.Add(self.tree_ctrl_1, 1, wx.EXPAND, 0) self.window_1.SplitHorizontally(self.text_ctrl_1, self.grid_1) grid_sizer_1.Add(self.window_1, 1, wx.EXPAND, 0) self.SetSizer(grid_sizer_1) grid_sizer_1.Fit(self) grid_sizer_1.AddGrowableRow(0) grid_sizer_1.AddGrowableCol(0) grid_sizer_1.AddGrowableCol(1) self.Layout() # end wxGlade # end of class MyDialog class MyApp(wx.App): def OnInit(self): wx.InitAllImageHandlers() mainDlg = MyDialog(None, -1, "") self.SetTopWindow(mainDlg) mainDlg.Show() return 1 # end of class MyApp if __name__ == "__main__": app = MyApp(0) app.MainLoop() A: You should use wxSplitter, here's an example. Another one here. And another. A: You could consider using the wx.aui advanced user interface module, as it allows you to build UIs like this very easily. Also the user can then minimise, maximise, and drag the panes around as they see fit, or not. It's pretty flexible. I actually find it easier to lay out this sort of UI with the aui toolkit, rather than with grids and splitters. Plus all the fancy buttons make apps look cooler. :) There is a nice example in the official demos, called AUI_DockingWindowMgr.
How do I layout a 3 pane window using wxPython?
I am trying to find a simple way to layout a 3 pane window using wxPython. I want to have a tree list in the left pane, then have a right pane that is split into two - with an edit component in the top part and a grid component in the bottom part. Something along the lines of: -------------------------------------- | | | | | Edit | | Tree | Control | | Control | | | |----------------------| | | | | | Grid | | | | -------------------------------------- I would like the window to be re-sizable and give the user the ability to change the (relative) size of each of the components within the windows by dragging the borders. I figure that I need some combination of sizers and/or splitter-window components but can't find a decent example of this kind of window in the documentation or on the web.
[ "This is a very simple layout using wx.aui and three panels. I guess you can easily adapt it to suit your needs.\nOrjanp...\nimport wx\nimport wx.aui\n\nclass MyFrame(wx.Frame):\n    def __init__(self, *args, **kwargs):\n        wx.Frame.__init__(self, *args, **kwargs)\n\n        self.mgr = wx.aui.AuiManager(self)\n\n        leftpanel = wx.Panel(self, -1, size = (200, 150))\n        rightpanel = wx.Panel(self, -1, size = (200, 150))\n        bottompanel = wx.Panel(self, -1, size = (200, 150))\n\n        self.mgr.AddPane(leftpanel, wx.aui.AuiPaneInfo().Bottom())\n        self.mgr.AddPane(rightpanel, wx.aui.AuiPaneInfo().Left().Layer(1))\n        self.mgr.AddPane(bottompanel, wx.aui.AuiPaneInfo().Center().Layer(2))\n\n        self.mgr.Update()\n\n\nclass MyApp(wx.App):\n    def OnInit(self):\n        frame = MyFrame(None, -1, '07_wxaui.py')\n        frame.Show()\n        self.SetTopWindow(frame)\n        return 1\n\nif __name__ == \"__main__\":\n    app = MyApp(0)\n    app.MainLoop()\n\n", "First of all download wxGlade a gui builder for wxPython (alternative XRCed, I prefer wxGlade).\nThen you have to decide if you want to use a GridSizer or a Splitter and you are done. Below you find both (between Tree and right side is a GridSizer -> resizes automatically). Between Edit and GridCtrl is a Sizer (manual Resize).\nRegards. \none minute work without entering a single line of code:\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# generated by wxGlade 0.6.3 on Sat Feb 07 10:02:31 2009\n\nimport wx\nimport wx.grid\n\n# begin wxGlade: extracode\n# end wxGlade\n\n\n\nclass MyDialog(wx.Dialog):\n    def __init__(self, *args, **kwds):\n        # begin wxGlade: MyDialog.__init__\n        kwds[\"style\"] = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER|wx.THICK_FRAME\n        wx.Dialog.__init__(self, *args, **kwds)\n        self.window_1 = wx.SplitterWindow(self, -1, style=wx.SP_3D|wx.SP_BORDER)\n        self.tree_ctrl_1 = wx.TreeCtrl(self, -1, style=wx.TR_HAS_BUTTONS|wx.TR_LINES_AT_ROOT|wx.TR_DEFAULT_STYLE|wx.SUNKEN_BORDER)\n        self.text_ctrl_1 = wx.TextCtrl(self.window_1, -1, \"This is the Edit\", style=wx.TE_MULTILINE)\n        self.grid_1 = wx.grid.Grid(self.window_1, -1, size=(1, 1))\n\n        self.__set_properties()\n        self.__do_layout()\n        # end wxGlade\n\n    def __set_properties(self):\n        # begin wxGlade: MyDialog.__set_properties\n        self.SetTitle(\"dialog_1\")\n        self.grid_1.CreateGrid(10, 3)\n        # end wxGlade\n\n    def __do_layout(self):\n        # begin wxGlade: MyDialog.__do_layout\n        grid_sizer_1 = wx.FlexGridSizer(1, 2, 3, 3)\n        grid_sizer_1.Add(self.tree_ctrl_1, 1, wx.EXPAND, 0)\n        self.window_1.SplitHorizontally(self.text_ctrl_1, self.grid_1)\n        grid_sizer_1.Add(self.window_1, 1, wx.EXPAND, 0)\n        self.SetSizer(grid_sizer_1)\n        grid_sizer_1.Fit(self)\n        grid_sizer_1.AddGrowableRow(0)\n        grid_sizer_1.AddGrowableCol(0)\n        grid_sizer_1.AddGrowableCol(1)\n        self.Layout()\n        # end wxGlade\n\n# end of class MyDialog\n\n\nclass MyApp(wx.App):\n    def OnInit(self):\n        wx.InitAllImageHandlers()\n        mainDlg = MyDialog(None, -1, \"\")\n        self.SetTopWindow(mainDlg)\n        mainDlg.Show()\n        return 1\n\n# end of class MyApp\n\nif __name__ == \"__main__\":\n    app = MyApp(0)\n    app.MainLoop()\n\n", "You should use wxSplitter, here's an example. Another one here. And another.\n", "You could consider using the wx.aui advanced user interface module, as it allows you to build UIs like this very easily. Also the user can then minimise, maximise, and drag the panes around as they see fit, or not. It's pretty flexible. I actually find it easier to lay out this sort of UI with the aui toolkit, rather than with grids and splitters. Plus all the fancy buttons make apps look cooler. \n:)\nThere is a nice example in the official demos, called AUI_DockingWindowMgr.\n" ]
[ 8, 7, 3, 2 ]
[]
[]
[ "elasticlayout", "layout", "python", "wxpython" ]
stackoverflow_0000523363_elasticlayout_layout_python_wxpython.txt
Q: In python 2.4, how can I execute external commands with csh instead of bash? Without using the new 2.6 subprocess module, how can I get either os.popen or os.system to execute my commands using the tcsh instead of bash? I need to source some scripts which are written in tcsh before executing some other commands and I need to do this within python2.4. EDIT Thanks for answers using 'tcsh -c', but I'd like to avoid this because I have to do escape madness. The string will be interpreted by bash and then interpreted by tcsh. I'll have to do something like: os.system("tcsh -c '"+re.compile("'").sub(r"""'"'"'""",my_cmd)+"'") Can't I just tell python to open up a 'tcsh' sub-process instead of a 'bash' subprocess? Is that possible? P.S. I realize that bash is the cat's meow, but I'm working in a corporate environment and I'm going to choose to not fight a tcsh vs bash battle -- bigger fish to fry. A: Just prefix the shell as part of your command. I don't have tcsh installed but with zsh: >>> os.system ("zsh -c 'echo $0'") zsh 0 A: How about: >>> os.system("tcsh your_own_script") Or just write the script and add #!/bin/tcsh at the beginning of the file and let the OS take care of that.
In python 2.4, how can I execute external commands with csh instead of bash?
Without using the new 2.6 subprocess module, how can I get either os.popen or os.system to execute my commands using the tcsh instead of bash? I need to source some scripts which are written in tcsh before executing some other commands and I need to do this within python2.4. EDIT Thanks for answers using 'tcsh -c', but I'd like to avoid this because I have to do escape madness. The string will be interpreted by bash and then interpreted by tcsh. I'll have to do something like: os.system("tcsh -c '"+re.compile("'").sub(r"""'"'"'""",my_cmd)+"'") Can't I just tell python to open up a 'tcsh' sub-process instead of a 'bash' subprocess? Is that possible? P.S. I realize that bash is the cat's meow, but I'm working in a corporate environment and I'm going to choose to not fight a tcsh vs bash battle -- bigger fish to fry.
[ "Just prefix the shell as part of your command. I don't have tcsh installed but with zsh:\n>>> os.system (\"zsh -c 'echo $0'\")\nzsh\n0\n\n", "How about:\n>>> os.system(\"tcsh your_own_script\")\n\nOr just write the script and add\n#!/bin/tcsh\n\nat the beginning of the file and let the OS take care of that.\n" ]
[ 11, 5 ]
[ "Just set the shell to use to be tcsh:\n>>> os.environ['SHELL'] = 'tcsh'\n>>> os.environ['SHELL']\n'tcsh'\n>>> os.system(\"echo $SHELL\")\ntcsh\n\n" ]
[ -1 ]
[ "csh", "python", "shell", "tcsh" ]
stackoverflow_0000533398_csh_python_shell_tcsh.txt
Q: How do I use the Django ORM to query this many-to-many example? I have the following models: class Author(models.Model): author_name = models.CharField() class Book(models.Model): book_name = models.CharField() class AuthorBook(models.Model): author_id = models.ForeignKey(Author) book_id = models.ForeignKey(Book) With that being said, I'm trying to emulate this query using the Django ORM (select all of the books written by a specific author, noting that Authors can have many books and Books can have many Authors): SELECT book_name FROM authorbook, book WHERE authorbook.author_id = 1 AND authorbook.book_id = book.id I've read this FAQ page on the Django website, but before I modify my model structure and remove AuthorBook, I was curious if I could emulate that query using the current structure. A: You should be able to do: books = Book.objects.filter(authorbook__author_id=1) to get a QuerySet of Book objects matching your author_id restriction. The nice thing about Django is you can cook this up and play around with it in the shell. You may also find http://docs.djangoproject.com/en/dev/topics/db/queries/#spanning-multi-valued-relationships to be useful. A: "AuthorBook" seems not correctly modeled. You should use a ManyToManyField: class Book(models.Model): name = models.CharField() authors = models.ManyToManyField(Author) Then you can do: books = Book.objects.filter(authors__id=1)
How do I use the Django ORM to query this many-to-many example?
I have the following models: class Author(models.Model): author_name = models.CharField() class Book(models.Model): book_name = models.CharField() class AuthorBook(models.Model): author_id = models.ForeignKey(Author) book_id = models.ForeignKey(Book) With that being said, I'm trying to emulate this query using the Django ORM (select all of the books written by a specific author, noting that Authors can have many books and Books can have many Authors): SELECT book_name FROM authorbook, book WHERE authorbook.author_id = 1 AND authorbook.book_id = book.id I've read this FAQ page on the Django website, but before I modify my model structure and remove AuthorBook, I was curious if I could emulate that query using the current structure.
[ "You should be able to do:\nbooks = Book.objects.filter(authorbook__author_id=1)\n\nto get a QuerySet of Book objects matching your author_id restriction.\nThe nice thing about Django is you can cook this up and play around with it in the shell. You may also find \nhttp://docs.djangoproject.com/en/dev/topics/db/queries/#spanning-multi-valued-relationships\nto be useful.\n", "\"AuthorBook\" seems not correctly modeled.\nYou should use a ManyToManyField:\nclass Book(models.Model):\n name = models.CharField()\n authors = models.ManyToManyField(Author)\n\nThen you can do:\nbooks = Book.objects.filter(authors__id=1)\n\n" ]
[ 14, 14 ]
[]
[]
[ "django", "django_orm", "python" ]
stackoverflow_0000533726_django_django_orm_python.txt
Q: Disable pixmap background defined by GTK theme per application For our (open source) fullscreen text editor we're changing background colors of gtk.Window, gtk.Fixed, etc. to custom colors. This works fine, but some GTK themes (e.g. Mac4Lin) define background pixmaps instead of background colors for some widgets. Those background pixmaps won't go away when calling modify_bg() methods of those widgets. I know I can set pixmaps with bg_pixmap[NORMAL] = 'blabla.png' and that I can define my own gtkrc overrides with gtk.rc_parse_string(). But I don't know how to unset bg_pixmap[NORMAL]. So, how would I do that? A: Yes, of course Mac4Lin uses pixmaps for a more granular appearance to match the Mac look. Well, to disable those backgrounds you don't need to override them. If you want the background pixmap to be its parent's, set it as bg_pixmap[state] = "<parent>" and to disable it set it as bg_pixmap[state] = "<none>"
Disable pixmap background defined by GTK theme per application
For our (open source) fullscreen text editor we're changing background colors of gtk.Window, gtk.Fixed, etc. to custom colors. This works fine, but some GTK themes (e.g. Mac4Lin) define background pixmaps instead of background colors for some widgets. Those background pixmaps won't go away when calling modify_bg() methods of those widgets. I know I can set pixmaps with bg_pixmap[NORMAL] = 'blabla.png' and that I can define my own gtkrc overrides with gtk.rc_parse_string(). But I don't know how to unset bg_pixmap[NORMAL]. So, how would I do that?
[ "Yes, of course Mac4Lin uses pixmaps for a more granular appearance to match the Mac look.\nWell, to disable those backgrounds you don't need to override them.\nIf you want the background pixmap to be its parent's, set it as\nbg_pixmap[state] = \"<parent>\" \n\nand to disable it set it as\nbg_pixmap[state] = \"<none>\"\n\n" ]
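Since the question already mentions gtk.rc_parse_string(), a sketch of feeding that setting through it (the style name and widget pattern are made up; narrow the pattern to your own widgets as needed):

import gtk

gtk.rc_parse_string("""
style "no-bg-pixmap" {
    bg_pixmap[NORMAL] = "<none>"
}
widget "*" style "no-bg-pixmap"
""")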
[ 3 ]
[]
[]
[ "gtk", "pygtk", "python" ]
stackoverflow_0000533321_gtk_pygtk_python.txt
Q: Python Applications: Can You Secure Your Code Somehow? If there is truly a 'best' way, what is the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general? If there isn't a 'best' way, what are the different options available? Background: I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life. A: Security through obscurity never works. If you must use a proprietary license, enforce it through the law, not half-baked obfuscation attempts. If you're worried about them learning your security (e.g. cryptography) algorithm, the same applies. Real, useful, security algorithms (like AES) are secure even though the algorithm is fully known. A: Even if you use a compiled language like C# or Java, people can perform reverse engineering if they are motivated and technically competent. Obfuscation is not a reliable protection against this. You can add prohibition against reverse-engineering to your end-user license agreement for your software. Most proprietary companies do this. But that doesn't prevent violation, it only gives you legal recourse. The best solution is to offer products and services in which the user's access to read your code does not harm your ability to sell your product or service. Base your business on service provided, or subscription to periodic updates to data, rather than the code itself. Example: Slashdot actually makes their code for their website available. Does this harm their ability to run their website? No. Another remedy is to set your price point such that the effort to pirate your code is more costly than simply buying legitimate licenses to use your product. Joel Spolsky has made a recommendation to this effects in his articles and podcasts. A: Shipping a commercial mac desktop app in Python, we do exactly as described in the other answers; protect yourself by law with a decent EULA, not by obfuscating. We have never had any troubles with people reverse engineering our code. And if we do, I feel confident we can take legal action. So yes, it's a fact of life. But one that is not too hard to live with. Just get a decent lawyer that writes a decent EULA. A: The word you're looking for is obfuscate. A quick google reveals: http://www.lysator.liu.se/~astrand/projects/pyobfuscate/ but: a) If copyright infringement becomes a problem, then the law is on your side (as long as you include the appropriate copyright notices in all files). b) It's also possible to make a profit on open source applications if you're clever about it. c) If you want your Intellectual Property to be truly secure, then the only answer is to not let anyone have it in the first place: Write your application as a web app, (I recommend using django) and only your web hosting provider has access to your code. A: py2exe On windows py2exe is one way of shipping code to end-users, py2exe bundles the python interpreter, the necessary dlls and your code compiled to python bytecode. Here are the python bytecode instructions to get some clue what it looks like: http://www.python.org/doc/2.5.2/lib/bytecodes.html Or you can use dis to disassemble some pyc/pyo files. So, using py2exe is similar to distributing compiled python (pyc/pyo) files. Shedskin C++ compiler The Shedskin compiler compiles a subset of python to C++ which you can compile to native code using any compiler. pypy I don't know about PyPy too much. According to their docs Pypy is able to generate C code.
Python Applications: Can You Secure Your Code Somehow?
If there is truly a 'best' way, what is the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general? If there isn't a 'best' way, what are the different options available? Background: I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life.
[ "Security through obscurity never works. If you must use a proprietary license, enforce it through the law, not half-baked obfuscation attempts.\nIf you're worried about them learning your security (e.g. cryptography) algorithm, the same applies. Real, useful, security algorithms (like AES) are secure even though the algorithm is fully known.\n", "Even if you use a compiled language like C# or Java, people can perform reverse engineering if they are motivated and technically competent. Obfuscation is not a reliable protection against this.\nYou can add prohibition against reverse-engineering to your end-user license agreement for your software. Most proprietary companies do this. But that doesn't prevent violation, it only gives you legal recourse.\nThe best solution is to offer products and services in which the user's access to read your code does not harm your ability to sell your product or service. Base your business on service provided, or subscription to periodic updates to data, rather than the code itself.\nExample: Slashdot actually makes their code for their website available. Does this harm their ability to run their website? No.\nAnother remedy is to set your price point such that the effort to pirate your code is more costly than simply buying legitimate licenses to use your product. Joel Spolsky has made a recommendation to this effects in his articles and podcasts.\n", "Shipping a commercial mac desktop app in Python, we do exactly as described in the other answers; protect yourself by law with a decent EULA, not by obfuscating. \nWe have never had any troubles with people reverse engineering our code. And if we do, I feel confident we can take legal action. So yes, it's a fact of life. But one that is not too hard to live with. Just get a decent lawyer that writes a decent EULA.\n", "The word you're looking for is obfuscate. A quick google reveals:\nhttp://www.lysator.liu.se/~astrand/projects/pyobfuscate/\nbut:\na) If copyright infringement becomes a problem, then the law is on your side (as long as you include the appropriate copyright notices in all files).\nb) It's also possible to make a profit on open source applications if you're clever about it.\nc) If you want your Intellectual Property to be truly secure, then the only answer is to not let anyone have it in the first place: Write your application as a web app, (I recommend using django) and only your web hosting provider has access to your code.\n", "py2exe\nOn windows py2exe is one way of shipping code to end-users, py2exe bundles the python interpreter, the necessary dlls and your code compiled to python bytecode. \nHere are the python bytecode instructions to get some clue what it looks like:\nhttp://www.python.org/doc/2.5.2/lib/bytecodes.html\nOr you can use dis to disassemble some pyc/pyo files.\nSo, using py2exe is similar to distributing compiled python (pyc/pyo) files.\nShedskin C++ compiler\nThe Shedskin compiler compiles a subset of python to C++ which you can compile to native code using any compiler.\npypy\nI don't know about PyPy too much. According to their docs Pypy is able to generate C code.\n" ]
[ 13, 8, 5, 3, 1 ]
[]
[]
[ "python", "reverse_engineering", "security" ]
stackoverflow_0000475216_python_reverse_engineering_security.txt
Q: Is there an easy way to send SCSI passthrough on OSX using native python On Windows I am able to send SCSI passthrough to devices using win32file.DeviceIOControl(..), on UN*X I can do it using fcntl.ioctl(...). I have been searching for something equivalent in OSX that would allow me to send the IOCTL commands using only native python. I would like to send commands to hard drives specifically, not USB devices. Is there any way to do it without writing a Kernel Extension or any other code, using only standard python libraries? A: I saw this blog post recently talking about using SCSI passthrough under OS X. Looks like it isn't as easy as Windows or Unix
Is there an easy way to send SCSI passthrough on OSX using native python
On Windows I am able to send SCSI passthrough to devices using win32file.DeviceIOControl(..), on UN*X I can do it using fcntl.ioctl(...). I have been searching for something equivalent in OSX that would allow me to send the IOCTL commands using only native python. I would like to send commands to hard drives specifically, not USB devices. Is there any way to do it without writing a Kernel Extension or any other code, using only standard python libraries?
[ "I saw this blog post recently talking about using SCSI passthrough under OS X. Looks like it isn't as easy as Windows or Unix\n" ]
[ 1 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0000445980_macos_python.txt
Q: Outputting data a row at a time from mysql using sqlalchemy I want to fetch data from a MySQL database using SQLAlchemy and use the data in a different class. Basically I fetch a row at a time, use the data, fetch another row, use the data and so on. I am running into some problems doing this. Basically, how do I output data a row at a time from MySQL? I have looked through the tutorials but they are not helping much. A: Exactly what problems are you running into? You can simply iterate over the ResultProxy object: for row in conn_or_sess_or_engine.execute(selectable_obj_or_SQLstring): do_something_with(row) A: From what I understand, you're interested in something like this: # s is object returned by the .select() method rs = s.execute() row = rs.fetchone() # do something with the row row = rs.fetchone() # do something with another row You can find this in a tutorial here.
Outputting data a row at a time from mysql using sqlalchemy
I want to fetch data from a MySQL database using SQLAlchemy and use the data in a different class. Basically I fetch a row at a time, use the data, fetch another row, use the data and so on. I am running into some problems doing this. Basically, how do I output data a row at a time from MySQL? I have looked through the tutorials but they are not helping much.
[ "Exactly what problems are you running into?\nYou can simply iterate over the ResultProxy object:\n\nfor row in conn_or_sess_or_engine.execute(selectable_obj_or_SQLstring):\n do_something_with(row)\n\n", "From what I understand, you're interested in something like this:\n# s is object returned by the .select() method\nrs = s.execute()\nrow = rs.fetchone()\n# do something with the row\nrow = rs.fetchone()\n# do something with another row\n\nYou can find this in a tutorial here.\n" ]
[ 1, 0 ]
[]
[]
[ "mysql", "python", "sqlalchemy" ]
stackoverflow_0000536051_mysql_python_sqlalchemy.txt
Q: How can I create a tag in Jinja that contains values from later in the template? I'm using Jinja2, and I'm trying to create a couple tags that work together, such that if I have a template that looks something like this: {{ my_summary() }} ... arbitrary HTML ... {{ my_values('Tom', 'Dick', 'Harry') }} ... arbitrary HTML ... {{ my_values('Fred', 'Barney') }} I'd end up with the following: This page includes information about <b>Tom</b>, <b>Dick</b>, <b>Harry</b>, <b>Fred</b>, and <b>Barney</b>. ... arbitrary HTML ... <h1>Tom, Dick, and Harry</h1> ... arbitrary HTML ... <h1>Fred and Barney</h1> In other words, the my_summary() at the start of the page includes information provided later on in the page. It should be smart enough to take into account expressions which occur in include and import statements, as well. What's the best way to do this? A: Disclaimer: I do not know Jinja. My guess is that you cannot (easily) accomplish this. I would suggest the following alternative: Pass the Tom, Dick, etc. values as variables to the template from the outside. Let your custom tags take the values as arguments. I do not know what "the outside" would be in your case. If the template is used in a web app framework, "the outside" is probably a controller method. For instance: Template: {{ my_summary(list1 + list2) }} ... arbitrary HTML ... {{ my_values(list1) }} ... arbitrary HTML ... {{ my_values(list2) }} Controller: def a_controller_method(request): return render_template('templatefilename', { 'list1': ['Dick', 'Tom', 'Harry'], 'list2': ['Fred', 'Barney']}) If passing the values from the outside is not feasible, I suggest you define them at the top of your template, like this: {% set list1 = ['Dick', ...] %} {% set list2 = ['Fred', ...] %} {{ my_summary(list1 + list2) }} ... arbitrary HTML ... {{ my_values(list1) }} ... arbitrary HTML ... {{ my_values(list2) }}
How can I create a tag in Jinja that contains values from later in the template?
I'm using Jinja2, and I'm trying to create a couple tags that work together, such that if I have a template that looks something like this: {{ my_summary() }} ... arbitrary HTML ... {{ my_values('Tom', 'Dick', 'Harry') }} ... arbitrary HTML ... {{ my_values('Fred', 'Barney') }} I'd end up with the following: This page includes information about <b>Tom</b>, <b>Dick</b>, <b>Harry</b>, <b>Fred</b>, and <b>Barney</b>. ... arbitrary HTML ... <h1>Tom, Dick, and Harry</h1> ... arbitrary HTML ... <h1>Fred and Barney</h1> In other words, the my_summary() at the start of the page includes information provided later on in the page. It should be smart enough to take into account expressions which occur in include and import statements, as well. What's the best way to do this?
[ "Disclaimer: I do not know Jinja.\nMy guess is that you cannot (easily) accomplish this.\nI would suggest the following alternative:\n\nPass the Tom, Dick, etc. values as variables to the template from the outside.\nLet your custom tags take the values as arguments.\nI do not know what \"the outside\" would be in your case. If the template is used in a web app framework, \"the outside\" is probably a controller method.\nFor instance:\n\nTemplate:\n\n{{ my_summary(list1 + list2) }}\n... arbitrary HTML ...\n{{ my_values(list1) }}\n... arbitrary HTML ...\n{{ my_values(list2) }}\n\nController:\n\ndef a_controller_method(request):\n return render_template('templatefilename', {\n 'list1': ['Dick', 'Tom', 'Harry'],\n 'list2': ['Fred', 'Barney']})\n\n\nIf passing the values from the outside is not feasible, I suggest you define them at the top of your template, like this:\n\n\n{% set list1 = ['Dick', ...] %}\n{% set list2 = ['Fred', ...] %}\n{{ my_summary(list1 + list2) }}\n... arbitrary HTML ...\n{{ my_values(list1) }}\n... arbitrary HTML ...\n{{ my_values(list2) }}\n\n" ]
[ 4 ]
[]
[]
[ "jinja2", "python", "templating" ]
stackoverflow_0000535743_jinja2_python_templating.txt
Q: python, funny business with threads and IDEs? Maybe i cant do what i want? I want to have 1 thread do w/e it wants and a 2nd thread to recv user input to set the quit flag. using this code i want to enter q anytime to quit or have it timeout after printing hey 6 times import sys import threading import time class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): print "want input" button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 print "abort with KeyboardInterrupt" print "done mythread" myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) myThread.wantQuit=1 print "main thread done" what happens is instead of two threads i have the main one printing hey 4/6 times, then a dialog poping up asking for my input and the app being locked up until i enter it. WTF?! want input hey hey hey hey abort with KeyboardInterrupt done mythread main thread done i am using PyScripter (it has debugging), i also tried pydle which doesnt seem to allow me to enter input and locks up eventually after running it once. A: The problem here is that raw_input waits for an enter to flush the input stream; check out its documentation. PyScripter is probably seeing that the program is waiting for an input and giving you an input box (don't know for sure, never used it.) The program works exactly as I expect it to from the command line; the secondary thread blocks on the raw_input until I hit "q[enter]", at which point the program ends. It is not, AFAICS, easy to check and see if a character is available in the input stream before blocking on a read call. You probably should check out this thread on how to read a character in a blocking fashion without requiring an [enter], and then this post on the challenge of reading a character without blocking at all. You can possibly use a combination of msvcrt.kbhit on windows and this recipe from the python FAQ to get the q character without requiring a keypress, but I'll leave it as an exercise to the reader. Addendum: One thing you could do would be to use the select library to set a timeout on reads from the keyboard, which will make your program act more like you expect: import sys import threading import time import select def timeout_read(n_chars): r, _, _ = select.select([sys.stdin], [], [], 1) return r[0].read(n_chars) if r else "" class MyThread (threading.Thread): def run (self): try: self.wantQuit = 0 while not self.wantQuit: print "want input" button = timeout_read(1) if button == "q": self.wantQuit=1 except KeyboardInterrupt: self.wantQuit = 1 print "abort with KeyboardInterrupt" print "done mythread" myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) myThread.wantQuit=1 print "main thread done" Note that you will still need to press "q[enter]" with this solution.
python, funny business with threads and IDEs?
Maybe I can't do what I want? I want to have one thread do whatever it wants and a second thread to receive user input to set the quit flag. Using this code I want to enter q anytime to quit or have it timeout after printing hey 6 times import sys import threading import time class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): print "want input" button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 print "abort with KeyboardInterrupt" print "done mythread" myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) myThread.wantQuit=1 print "main thread done" What happens is instead of two threads I have the main one printing hey 4/6 times, then a dialog popping up asking for my input and the app being locked up until I enter it. WTF?! want input hey hey hey hey abort with KeyboardInterrupt done mythread main thread done I am using PyScripter (it has debugging), I also tried pydle which doesn't seem to allow me to enter input and locks up eventually after running it once.
[ "The problem here is that raw_input waits for an enter to flush the input stream; check out its documentation. PyScripter is probably seeing that the program is waiting for an input and giving you an input box (don't know for sure, never used it.)\nThe program works exactly as I expect it to from the command line; the secondary thread blocks on the raw_input until I hit \"q[enter]\", at which point the program ends.\nIt is not, AFAICS, easy to check and see if a character is available in the input stream before blocking on a read call. You probably should check out this thread on how to read a character in a blocking fashion without requiring an [enter], and then this post on the challenge of reading a character without blocking at all.\nYou can possibly use a combination of msvcrt.kbhit on windows and this recipe from the python FAQ to get the q character without requiring a keypress, but I'll leave it as an exercise to the reader.\nAddendum: One thing you could do would be to use the select library to set a timeout on reads from the keyboard, which will make your program act more like you expect:\nimport sys\nimport threading\nimport time\nimport select\n\ndef timeout_read(n_chars):\n r, _, _ = select.select([sys.stdin], [], [], 1)\n return r[0].read(n_chars) if r else \"\"\n\nclass MyThread (threading.Thread):\n def run (self):\n try:\n self.wantQuit = 0\n while not self.wantQuit:\n print \"want input\"\n button = timeout_read(1)\n if button == \"q\":\n self.wantQuit=1\n except KeyboardInterrupt:\n self.wantQuit = 1\n print \"abort with KeyboardInterrupt\"\n print \"done mythread\"\n\nmyThread = MyThread ()\nmyThread.start()\n\na=5\nwhile not myThread.wantQuit:\n print \"hey\"\n if (a == 0):\n break;\n a = a-1;\n time.sleep(1)\nmyThread.wantQuit=1\nprint \"main thread done\"\n\nNote that you will still need to press \"q[enter]\" with this solution.\n" ]
[ 2 ]
[]
[]
[ "ide", "multithreading", "python" ]
stackoverflow_0000537196_ide_multithreading_python.txt
Q: How would you set up a python web server with multiple vhosts? I've been told wsgi is the way to go and not mod_python. But more specifically, how would you set up your multi website server environment? Choice of web server, etc? A: Apache+mod_wsgi is a common choice. Here's a simple example vhost, set up to map any requests for /wsgi/something to the application (which can then look at PATH_INFO to choose an action, or however you are doing your dispatching). The root URL '/' is also routed to the WSGI application. LoadModule wsgi_module /usr/local/lib/mod_wsgi.so ... <VirtualHost *:80> ServerName www.example.com DocumentRoot /www/example/htdocs WSGIScriptAliasMatch ^/$ /www/example/application.py WSGIScriptAlias /wsgi /www/example/application.py </VirtualHost> You can use the WSGIProcessGroup directive to separate handlers for different vhosts if you like. If you need vhosts' scripts to be run under different users you'll need to use WSGIDaemonProcess instead of the embedded Python interpreter. application.py would, when run, leave your WSGI callable in the global ‘application’ variable. You can also add a run-as-main footer for compatibility with old-school CGI: #!/usr/bin/env python from mymodule import MyApplication application= MyApplication() if __name__=='__main__': import wsgiref.handlers wsgiref.handlers.CGIHandler().run(application) A: I'd recommend Nginx for the web server. Fast and easy to set up. You'd probably want to have one unix user per vhost - so every home directory holds its own application, python environment and server configuration. This allows you to restart a particular app safely, simply by killing worker processes that your vhost owns. Just a tip, hope it helps. A: You could use Apache and mod_wsgi. That way, you can still use Apache's built-in support for vhosts.
How would you set up a python web server with multiple vhosts?
I've been told wsgi is the way to go and not mod_python. But more specifically, how would you set up your multi website server environment? Choice of web server, etc?
[ "Apache+mod_wsgi is a common choice.\nHere's a simple example vhost, set up to map any requests for /wsgi/something to the application (which can then look at PATH_INFO to choose an action, or however you are doing your dispatching). The root URL '/' is also routed to the WSGI application.\nLoadModule wsgi_module /usr/local/lib/mod_wsgi.so\n...\n\n<VirtualHost *:80>\n    ServerName www.example.com\n    DocumentRoot /www/example/htdocs\n    WSGIScriptAliasMatch ^/$ /www/example/application.py\n    WSGIScriptAlias /wsgi /www/example/application.py\n</VirtualHost>\n\nYou can use the WSGIProcessGroup directive to separate handlers for different vhosts if you like. If you need vhosts' scripts to be run under different users you'll need to use WSGIDaemonProcess instead of the embedded Python interpreter.\napplication.py would, when run, leave your WSGI callable in the global ‘application’ variable. You can also add a run-as-main footer for compatibility with old-school CGI:\n#!/usr/bin/env python\nfrom mymodule import MyApplication\n\napplication= MyApplication()\n\nif __name__=='__main__':\n    import wsgiref.handlers\n    wsgiref.handlers.CGIHandler().run(application)\n\n", "I'd recommend Nginx for the web server. Fast and easy to set up. \nYou'd probably want to have one unix user per vhost - so every home directory holds its own application, python environment and server configuration. This allows you to restart a particular app safely, simply by killing worker processes that your vhost owns. \nJust a tip, hope it helps.\n", "You could use Apache and mod_wsgi. That way, you can still use Apache's built-in support for vhosts.\n" ]
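For completeness, a minimal application.py that mod_wsgi can serve as-is; the module only needs to leave a WSGI callable named application at global scope (the response text is arbitrary):

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from this vhost\n']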
[ 4, 1, 0 ]
[]
[]
[ "environment", "python", "webserver", "wsgi" ]
stackoverflow_0000537399_environment_python_webserver_wsgi.txt
Q: How can I create multiple hashes of a file using only one pass? How can I get a MD5, SHA and other hashes from a file but only doing one pass? I have 100mb files, so I'd hate to process those 100MB files multiple times. A: Something like this perhaps? >>> import hashlib >>> hashes = (hashlib.md5(), hashlib.sha1()) >>> f = open('some_file', 'r') >>> for line in f: ... for hash in hashes: ... hash.update(line) ... >>> for hash in hashes: ... print hash.name, hash.hexdigest() or loop over f.read(1024) or something like that to get fixed-length blocks A: Here's a modified @ʞɔıu's answer using @Jason S' suggestion. from __future__ import with_statement from hashlib import md5, sha1 filename = 'hash_one-pass.py' hashes = md5(), sha1() chunksize = max(4096, max(h.block_size for h in hashes)) with open(filename, 'rb') as f: while True: chunk = f.read(chunksize) if not chunk: break for h in hashes: h.update(chunk) for h in hashes: print h.name, h.hexdigest() A: I don't know Python but I am familiar w/ hash calculations. If you handle the reading of files manually, just read in one block (of 256 bytes or 4096 bytes or whatever) at a time, and pass each block of data to update the hash of each algorithm. (you'll have to initialize state at the beginning and finalize the state at the end.)
How can I create multiple hashes of a file using only one pass?
How can I get an MD5, SHA and other hashes from a file, but only doing one pass? I have 100 MB files, so I'd hate to process those files multiple times.
[ "Something like this perhaps?\n>>> import hashlib\n>>> hashes = (hashlib.md5(), hashlib.sha1())\n>>> f = open('some_file', 'r')\n>>> for line in f:\n... for hash in hashes:\n... hash.update(line)\n... \n>>> for hash in hashes:\n... print hash.name, hash.hexdigest()\n\nor loop over f.read(1024) or something like that to get fixed-length blocks\n", "Here's a modified @ʞɔıu's answer using @Jason S' suggestion. \nfrom __future__ import with_statement\nfrom hashlib import md5, sha1\n\nfilename = 'hash_one-pass.py'\n\nhashes = md5(), sha1()\nchunksize = max(4096, max(h.block_size for h in hashes))\nwith open(filename, 'rb') as f:\n while True:\n chunk = f.read(chunksize)\n if not chunk:\n break\n for h in hashes:\n h.update(chunk)\n\nfor h in hashes:\n print h.name, h.hexdigest()\n\n", "I don't know Python but I am familiar w/ hash calculations.\nIf you handle the reading of files manually, just read in one block (of 256 bytes or 4096 bytes or whatever) at a time, and pass each block of data to update the hash of each algorithm. (you'll have to initialize state at the beginning and finalize the state at the end.)\n" ]
[ 15, 8, 3 ]
[]
[]
[ "hash", "python" ]
stackoverflow_0000537542_hash_python.txt
Q: How to scale an image without occasionally inverting it (with the Python Imaging Library) When resizing images along the lines shown in this question occasionally the resulting image is inverted. About 1% of the images I resize are inverted, the rest is fine. So far I was unable to find out what is different about these images. See resized example and original image for examples. Any suggestions on how to track down that problem? A: I was finally able to find someone experienced in JPEG and with some additional knowledge was able to find a solution. JPEG is a very underspecified Format. The second image is a valid JPEG but it is in CMYK color space, not in RGB color space. Design minded tools (read: things from Apple) can process CMYK JPEGs, other stuff (Firefox, IE) can't. CMYK JPEG is very under specified and the way Adobe Photoshop writes it to disk is borderline to buggy. Best of it all there is a patch to fix the issue. A: Your original image won't display for me; Firefox says The image “http://images.hudora.de/o/NIRV2MRR3XJGR52JATL6BOVMQMFSV54I01.jpeg” cannot be displayed, because it contains errors. This suggests that the problem arises when you attempt to resize a corrupted JPEG, and indeed your resized example shows what looks like JPEG corruption to my eye (Ever cracked open a JPEG image and twiddled a few bits to see what it does to the output? I have, and a few of my abominable creations looked like that). There are a few JPEG repair tools out there, but I've never seriously tried any of them and don't know if they might be able to help you out.
How to scale an image without occasionally inverting it (with the Python Imaging Library)
When resizing images along the lines shown in this question occasionally the resulting image is inverted. About 1% of the images I resize are inverted, the rest is fine. So far I was unable to find out what is different about these images. See resized example and original image for examples. Any suggestions on how to track down that problem?
[ "I was finally able to find someone experienced in JPEG and with some additional knowledge was able to find a solution.\n\nJPEG is a very underspecified format.\nThe second image is a valid JPEG but it is in CMYK color space, not in RGB color space.\nDesign-minded tools (read: things from Apple) can process CMYK JPEGs, other stuff (Firefox, IE) can't.\nCMYK JPEG is very underspecified and the way Adobe Photoshop writes it to disk is borderline to buggy.\n\nBest of all, there is a patch to fix the issue.\n", "Your original image won't display for me; Firefox says\nThe image “http://images.hudora.de/o/NIRV2MRR3XJGR52JATL6BOVMQMFSV54I01.jpeg” \ncannot be displayed, because it contains errors.\n\nThis suggests that the problem arises when you attempt to resize a corrupted JPEG, and indeed your resized example shows what looks like JPEG corruption to my eye (Ever cracked open a JPEG image and twiddled a few bits to see what it does to the output? I have, and a few of my abominable creations looked like that). There are a few JPEG repair tools out there, but I've never seriously tried any of them and don't know if they might be able to help you out.\n" ]
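On the resizing side, a practical workaround is to detect CMYK with PIL and convert before scaling; a sketch (file names are placeholders, and Photoshop's quirky CMYK output may still need the patch mentioned above):

import Image  # PIL

im = Image.open('original.jpeg')
if im.mode == 'CMYK':
    # Browsers generally cannot display CMYK JPEGs; convert first.
    im = im.convert('RGB')
im.resize((160, 120), Image.ANTIALIAS).save('resized.jpeg')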
[ 3, 2 ]
[]
[]
[ "image_processing", "python", "python_imaging_library" ]
stackoverflow_0000523503_image_processing_python_python_imaging_library.txt
Q: Background Image on Jython GUI I am trying to create a GUI in Jython. I want to import a background image that I can place buttons and textfields on. I've already created the frame with the buttons and labels in their appropriate places, I just need to know how to import a background image. The GUI is implemented in Jython. A: Take a look at the Java swing material, essentially you are just using the same api in python syntax. This might help: http://forums.sun.com/thread.jspa?threadID=599393
Background Image on Jython GUI
I am trying to create a GUI in Jython. I want to import a background image that I can place buttons and textfields on. I've already created the frame with the buttons and labels in their appropriate places, I just need to know how to import a background image. The GUI is implemented in Jython.
[ "Take a look at the Java swing material, essentially you are just using the same api in python syntax. This might help: http://forums.sun.com/thread.jspa?threadID=599393\n" ]
[ 1 ]
[]
[]
[ "jython", "python" ]
stackoverflow_0000539313_jython_python.txt
Q: Namespace Specification In Absence of Ambuguity Why do some languages, like C++ and Python, require the namespace of an object be specified even when no ambiguity exists? I understand that there are backdoors to this, like using namespace x in C++, or from x import * in Python. However, I can't understand the rationale behind not wanting the language to just "do the right thing" when only one accessible namespace contains a given identifier and no ambiguity exists. To me it's just unnecessary verbosity and a violation of DRY, since you're being forced to specify something the compiler already knows. For example: import foo # Contains someFunction(). someFunction() # imported from foo. No ambiguity. Works. Vs. import foo # Contains someFunction() import bar # Contains someFunction() also. # foo.someFunction or bar.someFunction? Should be an error only because # ambiguity exists. someFunction() A: One reason is to protect against accidentally introducing a conflict when you change the code (or for an external module/library, when someone else changes it) later on. For example, in Python you can write from foo import * from bar import * without conflicts if you know that modules foo and bar don't have any variables with the same names. But what if in later versions both foo and bar include variables named rofl? Then bar.rofl will cover up foo.rofl without you knowing about it. I also like to be able to look up to the top of the file and see exactly what names are being imported and where they're coming from (I'm talking about Python, of course, but the same reasoning could apply for C++). A: Python takes the view that 'explicit is better than implicit'. (type import this into a python interpreter) Also, say I'm reading someone's code. Perhaps it's your code; perhaps it's my code from six months ago. I see a reference to bar(). Where did the function come from? I could look through the file for a def bar(), but if I don't find it, what then? If python is automatically finding the first bar() available through an import, then I have to search through each file imported to find it. What a pain! And what if the function-finding recurses through the import heirarchy? I'd rather see zomg.bar(); that tells me where the function is from, and ensures I always get the same one if code changes (unless I change the zomg module). A: The problem is about abstraction and reuse : you don't really know if there will not be any future ambiguity. For example, It's very common to setup different libraries in a project just to discover that they all have their own string class implementation, called "string". You compiler will then complain that there is ambiguity if the libraries are not encapsulated in separate namespaces. It's then a delightful pleasure to dodge this kind of ambiguity by specifying wich implementation (like the standard std::string one) you wants to use at each specific instruction or context (read : scope). And if you think that it's obvious in a particular context (read : in a particular function or .cpp in c++, .py file in python - NEVER in C++ header files) you just have to express yourself and say that "it should be obvious", adding the "using namespace" instruction (or import *). Until the compiler complain because it is not. If you use using in specific scopes, you don't break the DRY rule at all. A: There have been languages where the compiler tried to "do the right thing" - Algol and PL/I come to mind. 
The reason they are not around anymore is that compilers are very bad at doing the right thing, but very good at doing the wrong one, given half a chance! A: The ideal this rule strives for is to make creating reusable components easy - and if you reuse your component, you just don't know which symbols will be defined in other namespaces the client uses. So the rule forces you to make your intention clear with respect to further definitions you don't know about yet. However, this ideal has not been reached for C++, mainly because of Koenig lookup. A: Is it really the right thing? What if I have two types ::bat and ::foo::bar I want to reference the bat type but accidentally hit the r key instead of t (they're right next to each others). Is it "the right thing" for the compiler to then go searching through every namespace to find ::foo::bar without giving me even a warning? Or what if I use "bar" as shorthand for the "::foo::bar" type all over my codebase. Then one day I include a library which defines a ::bar datatype. Suddenly an ambiguity exists where there was none before. And suddenly, "the right thing" has become wrong. The right thing for the compiler to do in this case would be to assume I meant the type I actually wrote. If I write bar with no namespace prefix, it should assume I'm referring to a type bar in the global namespace. But if it does that in our hypothetical scenario, it'll change what type my code references without even alerting me. Alternatively, it could give me an error, but come on, that'd just be ridiculous, because even with the current language rules, there should be no ambiguity here, since one of the types is hidden away in a namespace I didn't specify, so it shouldn't be considered. Another problem is that the compiler may not know what other types exist. In C++, the order of definitions matters. In C#, types can be defined in separate assemblies, and referenced in your code. How does the compiler know that another type with the same name doesn't exist in another assembly, just in a different namespace? How does it know that one won't be added to another assembly later on? The right thing is to do what gives the programmer the fewest nasty surprises. Second-guessing the programmer based on incomplete data is generally not the right thing to do. Most languages give you several tools to avoid having to specify the namespace. In c++, you have "using namespace foo", as well as typedefs. If you don't want to repeat the namespace prefix, then don't. Use the tools made available by the language so you don't have to. A: This all depends on your definition of "right thing". Is it the right thing for the compiler to guess your intention if there's only one match? There are arguments for both sides. A: Interesting question. In the case of C++, as I see it, provided the compiler flagged an error as soon as there was a conflict, the only problem this could cause would be: Auto-lookup of all C++ namespaces would remove the ability to hide the names of internal parts of library code. Library code often contains parts (types, functions, global variables) that are never intended to be visible to the "outside world." C++ has unnamed namespaces for exactly this reason -- to avoid "internal parts" clogging up the global namespace, even when those library namespaces are explicitly imported with using namespace xyz;. Example: Suppose C++ did do auto-lookup, and a particular implementation of the C++ Standard Library contained an internal helper function, std::helper_func(). 
Suppose a user Joe develops an application containing a function joe::helper_func() using a different library implementation that does not contain std::helper_func(), and calls his own method using unqualified calls to helper_func(). Now Joe's code will compile fine in his environment, but any other user who tries to compile that code using the first library implementation will hit compiler error messages. So the first thing required to make Joe's code portable is to either insert the appropriate using declarations/directives or use fully qualified identifiers. In other words, auto-lookup buys nothing for portable code. Admittedly, this doesn't seem like a problem that's likely to come up very often. But since typing explicit using declarations/directives (e.g. using namespace std;) is not a big deal for most people, solves this problem completely, and would be required for portable development anyway, using them (heh) seems like a sensible way to do things. NOTE: As Klaim pointed out, you would never in any circumstances want to rely on auto-lookup inside a header file, as this would immediately prevent your module from being used at the same time as any module containing a conflicting name. (This is just a logical extension of why you don't do using namespace xyz; inside headers in C++ as it stands.)
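To make the shadowing hazard described in the first answer concrete, here is a minimal sketch (the module names foo and bar and their contents are invented for illustration):

    # foo.py
    def helper():
        return "foo's helper"

    # bar.py -- a later release of this module happens to add the same name
    def helper():
        return "bar's helper"

    # main.py
    from foo import *
    from bar import *    # silently shadows the helper imported from foo

    print helper()       # prints "bar's helper"; foo's version is hidden

Nothing fails at import time, which is exactly the point: the breakage is silent until the wrong function runs.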
Namespace Specification In Absence of Ambiguity
Why do some languages, like C++ and Python, require the namespace of an object be specified even when no ambiguity exists? I understand that there are backdoors to this, like using namespace x in C++, or from x import * in Python. However, I can't understand the rationale behind not wanting the language to just "do the right thing" when only one accessible namespace contains a given identifier and no ambiguity exists. To me it's just unnecessary verbosity and a violation of DRY, since you're being forced to specify something the compiler already knows. For example: import foo # Contains someFunction(). someFunction() # imported from foo. No ambiguity. Works. Vs. import foo # Contains someFunction() import bar # Contains someFunction() also. # foo.someFunction or bar.someFunction? Should be an error only because # ambiguity exists. someFunction()
[ "One reason is to protect against accidentally introducing a conflict when you change the code (or for an external module/library, when someone else changes it) later on. For example, in Python you can write\nfrom foo import *\nfrom bar import *\n\nwithout conflicts if you know that modules foo and bar don't have any variables with the same names. But what if in later versions both foo and bar include variables named rofl? Then bar.rofl will cover up foo.rofl without you knowing about it.\nI also like to be able to look up to the top of the file and see exactly what names are being imported and where they're coming from (I'm talking about Python, of course, but the same reasoning could apply for C++).\n", "Python takes the view that 'explicit is better than implicit'.\n(type import this into a python interpreter)\nAlso, say I'm reading someone's code. Perhaps it's your code; perhaps it's my code from six months ago. I see a reference to bar(). Where did the function come from? I could look through the file for a def bar(), but if I don't find it, what then? If python is automatically finding the first bar() available through an import, then I have to search through each file imported to find it. What a pain! And what if the function-finding recurses through the import heirarchy?\nI'd rather see zomg.bar(); that tells me where the function is from, and ensures I always get the same one if code changes (unless I change the zomg module).\n", "The problem is about abstraction and reuse : you don't really know if there will not be any future ambiguity. \nFor example, It's very common to setup different libraries in a project just to discover that they all have their own string class implementation, called \"string\".\nYou compiler will then complain that there is ambiguity if the libraries are not encapsulated in separate namespaces.\nIt's then a delightful pleasure to dodge this kind of ambiguity by specifying wich implementation (like the standard std::string one) you wants to use at each specific instruction or context (read : scope).\nAnd if you think that it's obvious in a particular context (read : in a particular function or .cpp in c++, .py file in python - NEVER in C++ header files) you just have to express yourself and say that \"it should be obvious\", adding the \"using namespace\" instruction (or import *). Until the compiler complain because it is not.\nIf you use using in specific scopes, you don't break the DRY rule at all.\n", "There have been languages where the compiler tried to \"do the right thing\" - Algol and PL/I come to mind. The reason they are not around anymore is that compilers are very bad at doing the right thing, but very good at doing the wrong one, given half a chance!\n", "The ideal this rule strives for is to make creating reusable components easy - and if you reuse your component, you just don't know which symbols will be defined in other namespaces the client uses. 
So the rule forces you to make your intention clear with respect to further definitions you don't know about yet.\nHowever, this ideal has not been reached for C++, mainly because of Koenig lookup.\n", "Is it really the right thing?\nWhat if I have two types ::bat and ::foo::bar\nI want to reference the bat type but accidentally hit the r key instead of t (they're right next to each others).\nIs it \"the right thing\" for the compiler to then go searching through every namespace to find ::foo::bar without giving me even a warning?\nOr what if I use \"bar\" as shorthand for the \"::foo::bar\" type all over my codebase.\nThen one day I include a library which defines a ::bar datatype. Suddenly an ambiguity exists where there was none before. And suddenly, \"the right thing\" has become wrong.\nThe right thing for the compiler to do in this case would be to assume I meant the type I actually wrote. If I write bar with no namespace prefix, it should assume I'm referring to a type bar in the global namespace. But if it does that in our hypothetical scenario, it'll change what type my code references without even alerting me.\nAlternatively, it could give me an error, but come on, that'd just be ridiculous, because even with the current language rules, there should be no ambiguity here, since one of the types is hidden away in a namespace I didn't specify, so it shouldn't be considered.\nAnother problem is that the compiler may not know what other types exist. In C++, the order of definitions matters.\nIn C#, types can be defined in separate assemblies, and referenced in your code. How does the compiler know that another type with the same name doesn't exist in another assembly, just in a different namespace? How does it know that one won't be added to another assembly later on?\nThe right thing is to do what gives the programmer the fewest nasty surprises. Second-guessing the programmer based on incomplete data is generally not the right thing to do.\nMost languages give you several tools to avoid having to specify the namespace.\nIn c++, you have \"using namespace foo\", as well as typedefs. If you don't want to repeat the namespace prefix, then don't. Use the tools made available by the language so you don't have to.\n", "This all depends on your definition of \"right thing\". Is it the right thing for the compiler to guess your intention if there's only one match?\nThere are arguments for both sides.\n", "Interesting question. In the case of C++, as I see it, provided the compiler flagged an error as soon as there was a conflict, the only problem this could cause would be:\nAuto-lookup of all C++ namespaces would remove the ability to hide the names of internal parts of library code.\nLibrary code often contains parts (types, functions, global variables) that are never intended to be visible to the \"outside world.\" C++ has unnamed namespaces for exactly this reason -- to avoid \"internal parts\" clogging up the global namespace, even when those library namespaces are explicitly imported with using namespace xyz;.\nExample: Suppose C++ did do auto-lookup, and a particular implementation of the C++ Standard Library contained an internal helper function, std::helper_func(). Suppose a user Joe develops an application containing a function joe::helper_func() using a different library implementation that does not contain std::helper_func(), and calls his own method using unqualified calls to helper_func(). 
Now Joe's code will compile fine in his environment, but any other user who tries to compile that code using the first library implementation will hit compiler error messages. So the first thing required to make Joe's code portable is to either insert the appropriate using declarations/directives or use fully qualified identifiers. In other words, auto-lookup buys nothing for portable code.\nAdmittedly, this doesn't seem like a problem that's likely to come up very often. But since typing explicit using declarations/directives (e.g. using namespace std;) is not a big deal for most people, solves this problem completely, and would be required for portable development anyway, using them (heh) seems like a sensible way to do things.\nNOTE: As Klaim pointed out, you would never in any circumstances want to rely on auto-lookup inside a header file, as this would immediately prevent your module from being used at the same time as any module containing a conflicting name. (This is just a logical extension of why you don't do using namespace xyz; inside headers in C++ as it stands.)\n" ]
[ 11, 11, 5, 4, 1, 1, 0, 0 ]
[]
[]
[ "c++", "language_design", "namespaces", "python" ]
stackoverflow_0000539578_c++_language_design_namespaces_python.txt
Q: adding scrollbars to pythoncard application A ScrollingWindow as the main frame for the application is not supported yet in PythonCard. How can I add scrollbars to the main frame (background)? A: I've never used PythonCard, but in pure wxPython you can just put a ScrolledWindow inside the frame, then use a sizer to control the scrollbars (assuming the contents of the sizer don't fit in the window). E.g. this short code snippet will give you a window with a vertical scrollbar. class Scrolled(wx.ScrolledWindow): def __init__(self, parent): wx.ScrolledWindow.__init__(self, parent, size=(200,200)) self.SetScrollRate(0, 10) sizerV = wx.BoxSizer(wx.VERTICAL) # create a bunch of widgets in the sizer which don't fit for i in range(0, 50): text = "Line: " + str(i) sizerV.Add(wx.StaticText(self, label=text), 0) self.SetSizer(sizerV) class Frame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent, size=(200,200), title="Scroll Bars", style=wx.CAPTION) Scrolled(self)
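A minimal harness to run the snippet above might look like this (assuming wxPython 2.8-era APIs; an application object must exist before any frame is created):

    import wx

    if __name__ == "__main__":
        app = wx.App(False)     # False: don't redirect stdout/stderr to a window
        frame = Frame(None)     # the Frame class from the answer above
        frame.Show()
        app.MainLoop()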
adding scrollbars to pythoncard application
A ScrollingWindow as the main frame for the application is not supported yet in PythonCard. How can I add scrollbars to the main frame (background)?
[ "Ive never used pythoncard but in pure wxpython you can just put a ScrolledWindow inside the frame, then use a sizer to controll the scrollbars (asumming the contents of the sizer dont fit in the window). Eg this short code snipit will give you a window with a vertical scrollbar.\nclass Scrolled(wx.ScrolledWindow):\n def __init__(self, parent):\n wx.ScrolledWindow.__init__(self, parent, size=(200,200))\n self.SetScrollRate(0, 10);\n sizerV = wx.BoxSizer(wx.VERTICAL)\n #create a bunch of stuff in the sizer which doesnt fit\n for i in range(0,50):\n text = \"Line: \" + str(i)\n sizerV.Add(wx.StaticText(self, label=text), 0)\n\n self.SetSizer(sizerV)\n\nclass Frame(wx.Frame):\n def __init__(self, parent):\n wx.Frame.__init__(self, parent, size=(200,200), Scrolled(self)\n title=\"Scroll Bars\", style=wx.CAPTION)\n\n" ]
[ 2 ]
[]
[]
[ "python", "pythoncard", "scroll", "wxpython" ]
stackoverflow_0000469219_python_pythoncard_scroll_wxpython.txt
Q: Python Imaging Library and JPEGs on Mac OS X I've gotten a hold of Python Imaging Library (PIL) and installed the PNG support stuff just fine. I am however having issues with the JPEG library. The default setting for it is empty, but they suggest "/home/libraries/jpeg-6b". On the Mac that directory doesn't exist; the library, however, installed fine. Here's the output of the install. /usr/bin/install -c cjpeg /usr/local/bin/cjpeg /usr/bin/install -c djpeg /usr/local/bin/djpeg /usr/bin/install -c jpegtran /usr/local/bin/jpegtran /usr/bin/install -c rdjpgcom /usr/local/bin/rdjpgcom /usr/bin/install -c wrjpgcom /usr/local/bin/wrjpgcom /usr/bin/install -c -m 644 ./cjpeg.1 /usr/local/man/man1/cjpeg.1 /usr/bin/install -c -m 644 ./djpeg.1 /usr/local/man/man1/djpeg.1 /usr/bin/install -c -m 644 ./jpegtran.1 /usr/local/man/man1/jpegtran.1 /usr/bin/install -c -m 644 ./rdjpgcom.1 /usr/local/man/man1/rdjpgcom.1 /usr/bin/install -c -m 644 ./wrjpgcom.1 /usr/local/man/man1/wrjpgcom.1 I tried pointing PIL to /usr/local/bin/cjpeg, cjpeg and so on, but it never recognised it. Does anybody know what I'm doing wrong? A: For me, the only way to have working Python + PIL on OS X was to install both from ports. I've never managed to get fully functional PIL under either system Python or installed manually from python.org. Maybe you could try this approach?
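One likely explanation, judging from the install log above: jpeg-6b's plain make install only installs the command-line tools (cjpeg, djpeg and friends), while PIL links against the library itself, which jpeg-6b installs separately with make install-lib. Once libjpeg and its headers are in place, PIL's setup.py (1.1.x) wants the installation prefix, not the path to the cjpeg binary. A sketch, assuming everything landed under /usr/local:

    # in PIL's setup.py
    JPEG_ROOT = libinclude("/usr/local")
    # libinclude() is PIL's own helper; the line above is equivalent to
    # JPEG_ROOT = ("/usr/local/lib", "/usr/local/include")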
Python Imaging Library and JPEGs on Mac OS X
I've gotten a hold of Python Imaging Library (PIL) and installed the PNG support stuff just fine. I am however having issues with theJPEG Library. The default setting for it is nothing but they suggest "/home/libraries/jpeg-6b". On the Mac that directory doesn't exist, the library is however installed fine, here's the output of the install. /usr/bin/install -c cjpeg /usr/local/bin/cjpeg /usr/bin/install -c djpeg /usr/local/bin/djpeg /usr/bin/install -c jpegtran /usr/local/bin/jpegtran /usr/bin/install -c rdjpgcom /usr/local/bin/rdjpgcom /usr/bin/install -c wrjpgcom /usr/local/bin/wrjpgcom /usr/bin/install -c -m 644 ./cjpeg.1 /usr/local/man/man1/cjpeg.1 /usr/bin/install -c -m 644 ./djpeg.1 /usr/local/man/man1/djpeg.1 /usr/bin/install -c -m 644 ./jpegtran.1 /usr/local/man/man1/jpegtran.1 /usr/bin/install -c -m 644 ./rdjpgcom.1 /usr/local/man/man1/rdjpgcom.1 /usr/bin/install -c -m 644 ./wrjpgcom.1 /usr/local/man/man1/wrjpgcom.1 I tried pointing PIL to /usr/local/bin/cjpeg, cjpeg and so on but it never recognised it. Does anybody know what I'm doing wrong?
[ "For me, the only way to have working Python + PIL on OS X was to install both from ports. I've never managed to get fully functional PIL under either system Python or installed manually from python.org. Maybe you could try this approach?\n" ]
[ 5 ]
[]
[]
[ "jpeg", "macos", "python", "python_imaging_library" ]
stackoverflow_0000540991_jpeg_macos_python_python_imaging_library.txt
Q: Does anyone know of a python based web ui for snmp monitoring? Comparable to cacti or mrtg. A: http://www.zenoss.com/ This is a lot more than just SNMP, but it is based on Python. A: Or you can start building your own solution (like me); you will be surprised how much you can do with a few lines of code using, for instance, CherryPy for the web server, pysnmp, and the Python rrd module.
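For the build-your-own route, polling a single value over SNMP really is only a few lines with pysnmp's one-liner interface. A sketch against the pysnmp 4.x API (the agent address, community string and OID are placeholders):

    from pysnmp.entity.rfc3413.oneliner import cmdgen

    errorIndication, errorStatus, errorIndex, varBinds = cmdgen.CommandGenerator().getCmd(
        cmdgen.CommunityData('my-agent', 'public', 0),    # SNMPv1 community
        cmdgen.UdpTransportTarget(('192.0.2.1', 161)),    # placeholder agent address
        (1, 3, 6, 1, 2, 1, 1, 1, 0))                      # SNMPv2-MIB::sysDescr.0

    if errorIndication:
        print errorIndication
    else:
        for name, value in varBinds:
            print '%s = %s' % (name.prettyPrint(), value.prettyPrint())

Feed the values into RRDtool on a schedule and serve the graphs from CherryPy, and you have the skeleton of a small cacti.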
Does anyone know of a python based web ui for snmp monitoring?
Comparable to cacti or mrtg.
[ "http://www.zenoss.com/\nThis is a lot more than just SNMP but it is based on Python.\n", "or you can start building your own solution (like me), you will be surprised how much can you do with few lines of code using for instance cherryp for web server, pysnmp, and python rrd module.\n" ]
[ 3, 0 ]
[]
[]
[ "django", "pylons", "python", "snmp", "turbogears" ]
stackoverflow_0000310759_django_pylons_python_snmp_turbogears.txt
Q: C to Python via SWIG: can't get void** parameters to hold their value I have a C interface that looks like this (simplified): extern bool Operation(void ** ppData); extern float GetFieldValue(void* pData); extern void Cleanup(void* pData); which is used as follows: void * p = NULL; float theAnswer = 0.0f; if (Operation(&p)) { theAnswer = GetFieldValue(p); Cleanup(p); } You'll note that Operation() allocates the buffer p, that GetFieldValue queries p, and that Cleanup frees p. I don't have any control over the C interface -- that code is widely used elsewhere. I'd like to call this code from Python via SWIG, but I was unable to find any good examples of how to pass a pointer to a pointer -- and retrieve its value. I think the correct way to do this is by use of typemaps, so I defined an interface that would automatically dereference p for me on the C side: %typemap(in) void** { $1 = (void**)&($input); } However, I was unable to get the following python code to work: import test p = None theAnswer = 0.0 if test.Operation(p): theAnswer = test.GetFieldValue(p) test.Cleanup(p) After calling test.Operation(), p always kept its initial value of None. Any help with figuring out the correct way to do this in SWIG would be much appreciated. Otherwise, I'm likely to just write a C++ wrapper around the C code that stops Python from having to deal with the pointer. And then wrap that wrapper with SWIG. Somebody stop me! Edit: Thanks to Jorenko, I now have the following SWIG interface: %module Test %typemap (in,numinputs=0) void** (void *temp) { $1 = &temp; } %typemap (argout) void** { PyObject *obj = PyCObject_FromVoidPtr(*$1, Cleanup); $result = PyTuple_Pack(2, $result, obj); } %{ extern bool Operation(void ** ppData); extern float GetFieldValue(void *p); extern void Cleanup(void *p); %} %inline %{ float gfv(void *p){ return GetFieldValue(p);} %} %typemap (in) void* { if (PyCObject_Check($input)) { $1 = PyCObject_AsVoidPtr($input); } } The python code that uses this SWIG interface is as follows: import test success, p = test.Operation() if success: f = test.GetFieldValue(p) # This doesn't work f = test.gfv(p) # This works! test.Cleanup(p) Oddly, in the python code, test.GetFieldValue(p) returns gibberish, but test.gfv(p) returns the correct value. I've inserted debugging code into the typemap for void*, and both have the same value of p! Any ideas about that? Update: I've decided to use ctypes. MUCH easier. A: I agree with theller, you should use ctypes instead. It's always easier than thinking about typemaps. But, if you're dead set on using swig, what you need to do is make a typemap for void** that RETURNS the newly allocated void*: %typemap (in,numinputs=0) void** (void *temp) { $1 = &temp; } %typemap (argout) void** { PyObject *obj = PyCObject_FromVoidPtr(*$1, NULL); $result = PyTuple_Pack(2, $result, obj); } Then your python looks like: import test success, p = test.Operation() theAnswer = 0.0 if success: theAnswer = test.GetFieldValue(p) test.Cleanup(p) Edit: I'd expect swig to handle a simple by-value void* arg gracefully on its own, but just in case, here's swig code to wrap the void* for GetFieldValue() and Cleanup(): %typemap (in) void* { $1 = PyCObject_AsVoidPtr($input); } A: Would you be willing to use ctypes? 
Here is sample code that should work (although it is untested): from ctypes import * test = CDLL("mydll") test.Operation.restype = c_bool test.Operation.argtypes = [POINTER(c_void_p)] test.GetFieldValue.restype = c_float test.GetFieldValue.argtypes = [c_void_p] test.Cleanup.restype = None test.Cleanup.argtypes = [c_void_p] if __name__ == "__main__": p = c_void_p() if test.Operation(byref(p)): theAnswer = test.GetFieldValue(p) test.Cleanup(p)
C to Python via SWIG: can't get void** parameters to hold their value
I have a C interface that looks like this (simplified): extern bool Operation(void ** ppData); extern float GetFieldValue(void* pData); extern void Cleanup(p); which is used as follows: void * p = NULL; float theAnswer = 0.0f; if (Operation(&p)) { theAnswer = GetFieldValue(p); Cleanup(p); } You'll note that Operation() allocates the buffer p, that GetFieldValue queries p, and that Cleanup frees p. I don't have any control over the C interface -- that code is widely used elsewhere. I'd like to call this code from Python via SWIG, but I was unable to find any good examples of how to pass a pointer to a pointer -- and retrieve its value. I think the correct way to do this is by use of typemaps, so I defined an interface that would automatically dereference p for me on the C side: %typemap(in) void** { $1 = (void**)&($input); } However, I was unable to get the following python code to work: import test p = None theAnswer = 0.0f if test.Operation(p): theAnswer = test.GetFieldValue(p) test.Cleanup(p) After calling test.Operation(), p always kept its initial value of None. Any help with figuring out the correct way to do this in SWIG would be much appreciated. Otherwise, I'm likely to just write a C++ wrapper around the C code that stops Python from having to deal with the pointer. And then wrap that wrapper with SWIG. Somebody stop me! Edit: Thanks to Jorenko, I now have the following SWIG interface: % module Test %typemap (in,numinputs=0) void** (void *temp) { $1 = &temp; } %typemap (argout) void** { PyObject *obj = PyCObject_FromVoidPtr(*$1, Cleanup); $result = PyTuple_Pack(2, $result, obj); } %{ extern bool Operation(void ** ppData); extern float GetFieldValue(void *p); extern void Cleanup(void *p); %} %inline %{ float gfv(void *p){ return GetFieldValue(p);} %} %typemap (in) void* { if (PyCObject_Check($input)) { $1 = PyCObject_AsVoidPtr($input); } } The python code that uses this SWIG interface is as follows: import test success, p = test.Operation() if success: f = test.GetFieldValue(p) # This doesn't work f = test.gvp(p) # This works! test.Cleanup(p) Oddly, in the python code, test.GetFieldValue(p) returns gibberish, but test.gfv(p) returns the correct value. I've inserting debugging code into the typemap for void*, and both have the same value of p! The call Any ideas about that? Update: I've decided to use ctypes. MUCH easier.
[ "I agree with theller, you should use ctypes instead. It's always easier than thinking about typemaps.\nBut, if you're dead set on using swig, what you need to do is make a typemap for void** that RETURNS the newly allocated void*:\n%typemap (in,numinputs=0) void** (void *temp)\n{\n $1 = &temp;\n}\n\n%typemap (argout) void**\n{\n PyObject *obj = PyCObject_FromVoidPtr(*$1);\n $result = PyTuple_Pack(2, $result, obj);\n}\n\nThen your python looks like:\nimport test\nsuccess, p = test.Operation()\ntheAnswer = 0.0f\nif success:\n theAnswer = test.GetFieldValue(p)\n test.Cleanup(p)\n\nEdit:\nI'd expect swig to handle a simple by-value void* arg gracefully on its own, but just in case, here's swig code to wrap the void* for GetFieldValue() and Cleanup():\n%typemap (in) void*\n{\n $1 = PyCObject_AsVoidPtr($input);\n}\n\n", "Would you be willing to use ctypes? Here is sample code that should work (although it is untested):\nfrom ctypes import *\n\ntest = cdll(\"mydll\")\n\ntest.Operation.restype = c_bool\ntest.Operation.argtypes = [POINTER(c_void_p)]\n\ntest.GetFieldValue.restype = c_float\ntest.GetFieldValue.argtypes = [c_void_p]\n\ntest.Cleanup.restype = None\ntest.Cleanup.argtypes = [c_void_p]\n\nif __name__ == \"__main__\":\n p = c_void_p()\n if test.Operation(byref(p)):\n theAnswer = test.GetFieldValue(p)\n test.Cleanup(p)\n\n" ]
[ 7, 4 ]
[]
[]
[ "c", "python", "swig", "word_wrap" ]
stackoverflow_0000540427_c_python_swig_word_wrap.txt
Q: SCons problem - don't understand Variables class I'm working on an SConstruct build file for a project and I'm trying to update from Options to Variables, since Options is being deprecated. I don't understand how to use Variables though. I have 0 python experience, which is probably contributing to this. For example, I have this: opts = Variables() opts.Add('fcgi',0) print opts['fcgi'] But I get an error: AttributeError: Variables instance has no attribute '__getitem__' Not sure how this is supposed to work. A: Typically you would store the variables in your environment for later testing. opts = Variables() opts.Add('fcgi',0) env = Environment(variables=opts, ...) Then later you can test: if env['fcgi'] == 0: # do something A: That specific error tells you that class Variables hasn't implemented python's __getitem__ interface, which would allow you to use [...] on opts. If all you want to do is print out your keys, the Variables documentation seems to indicate that you can iterate over your keys: for key in opts.keys(): print key Or you can print out the help text: print opts.GenerateHelpText(env)
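Putting the two answers together, a complete SConstruct along these lines might look as follows ('fcgi' is just an example flag, and note that values given on the command line arrive as strings):

    # SConstruct
    opts = Variables()
    opts.Add('fcgi', 'Build with FastCGI support', 0)   # key, help text, default
    env = Environment(variables=opts)
    Help(opts.GenerateHelpText(env))

    # run as: scons fcgi=1
    if int(env['fcgi']):
        env.Append(CPPDEFINES=['USE_FCGI'])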
SCons problem - don't understand Variables class
I'm working on an SConstruct build file for a project and I'm trying to update from Options to Variables, since Options is being deprecated. I don't understand how to use Variables though. I have 0 python experience which is probably contributing to this. For example, I have this: opts = Variables() opts.Add('fcgi',0) print opts['fcgi'] But I get an error: AttributeError: Variables instance has no attribute '__getitem__': Not sure how this is supposed to work
[ "Typically you would store the variables in your environment for later testing.\nopts = Variables()\nopts.Add('fcgi',0)\nenv = Environment(variables=opts, ...)\n\nThen later you can test:\nif env['fcgi'] == 0:\n # do something\n\n", "That specific error tells you that class Variables hasn't implemented python's __getitem__ interface which would allow you to use [ ...] on opts. If all you want to do is print out your keys, the Variables documentation seems to indicate that you can iterate over your keys:\nfor key in opts.keys():\n print key\n\nOr you can print out the help text:\nprint opts.GenerateHelpText()\n\n" ]
[ 5, 1 ]
[]
[]
[ "python", "scons", "variables" ]
stackoverflow_0000456100_python_scons_variables.txt
Q: Custom django widget - decompress() arg not populated As an exercise I am trying to create a custom django widget for a 24 hour clock. The widget will is a MultiWidget - a select box for each field. I am trying to follow docs online (kinda sparse) and looking at the Pro Django book, but I can't seem to figure it out. Am I on the right track? I can save my data from the form, but when I prepopulate the form, the form doesn't have the previous values. It seems the issue is that the decompress() methods 'value' argument is always empty, so I have nothing to interpret. from django.forms import widgets import datetime class MilitaryTimeWidget(widgets.MultiWidget): """ A widget that displays 24 hours time selection. """ def __init__(self, attrs=None): hours = [ (i, "%02d" %(i)) for i in range(0, 24) ] minutes = [ (i, "%02d" %(i)) for i in range(0, 60) ] _widgets = ( widgets.Select(attrs=attrs, choices=hours), widgets.Select(attrs=attrs, choices=minutes), ) super(MilitaryTimeWidget, self).__init__(_widgets, attrs) def decompress(self, value): print "******** %s" %value if value: return [int(value.hour), int(value.minute)] return [None, None] def value_from_datadict(self, data, files, name): hour = data.get("%s_0" %name, None) minute = data.get("%s_1" %name, None) if hour and minute: hour = int(hour) minute = int(minute) return datetime.time(hour=hour, minute=minute) return None In my form, I am calling the widget like: arrival_time = forms.TimeField(label="Arrival Time", required=False, widget=MilitaryTimeWidget()) A: Note this line in the docstring for MultiWidget: You'll probably want to use this class with MultiValueField. That's the root of your problem. You might be able to get the single-widget-only approach working (Marty says it's possible in Pro Django, but I've never tried it, and I think it's likely to be more work), but in that case your widget shouldn't be a subclass of MultiWidget. What you need to do (if you want to follow the MultiWidget/MultiValueField path) is: remove your value_from_datadict method define a subclass of MultiValueField with a definition of the compress() method which does the task you're currently doing in value_from_datadict() (transforming a list of numbers into a datetime.time object) set your Widget as the default one for your custom form Field (using the widget class attribute) either create a custom model Field which returns your custom form Field from its formfield() method, or use your custom form field manually as a field override in a ModelForm. Then everything will Just Work. A: I can't reproduce the problem: >>> class MyForm(forms.Form): ... t = forms.TimeField(widget=MilitaryTimeWidget()) ... >>> print MyForm(data={'t_0': '13', 't_1': '34'}) ******** 13:34:00 <tr><th><label for="id_t_0">T:</label></th><td><select name="t_0" id="id_t_0"> <option value="0">00</option> [...] <option value="13" selected="selected">13</option> [...] <option value="23">23</option> </select><select name="t_1" id="id_t_1"> <option value="0">00</option> [...] <option value="34" selected="selected">34</option> [...] <option value="59">59</option> </select></td></tr> Check that your request.POST is correct. As a sidenote, are you sure this widget gives good usability? Four mouse clicks and possible scrolling of the minutes combobox...
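A sketch of the MultiValueField counterpart that the first answer describes (the choice of IntegerField for the two subfields is an assumption; any field that cleans the select values to integers will do):

    import datetime
    from django import forms

    class MilitaryTimeField(forms.MultiValueField):
        widget = MilitaryTimeWidget   # the widget defined in the question

        def __init__(self, *args, **kwargs):
            fields = (forms.IntegerField(), forms.IntegerField())
            super(MilitaryTimeField, self).__init__(fields, *args, **kwargs)

        def compress(self, data_list):
            # takes over the job of value_from_datadict: [hour, minute] -> time
            if data_list:
                return datetime.time(hour=int(data_list[0]), minute=int(data_list[1]))
            return None

The form declaration then becomes arrival_time = MilitaryTimeField(label="Arrival Time", required=False), with no widget argument needed.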
Custom django widget - decompress() arg not populated
As an exercise I am trying to create a custom django widget for a 24 hour clock. The widget is a MultiWidget - a select box for each field. I am trying to follow docs online (kinda sparse) and looking at the Pro Django book, but I can't seem to figure it out. Am I on the right track? I can save my data from the form, but when I prepopulate the form, the form doesn't have the previous values. It seems the issue is that the decompress() method's 'value' argument is always empty, so I have nothing to interpret. from django.forms import widgets import datetime class MilitaryTimeWidget(widgets.MultiWidget): """ A widget that displays 24 hours time selection. """ def __init__(self, attrs=None): hours = [ (i, "%02d" %(i)) for i in range(0, 24) ] minutes = [ (i, "%02d" %(i)) for i in range(0, 60) ] _widgets = ( widgets.Select(attrs=attrs, choices=hours), widgets.Select(attrs=attrs, choices=minutes), ) super(MilitaryTimeWidget, self).__init__(_widgets, attrs) def decompress(self, value): print "******** %s" %value if value: return [int(value.hour), int(value.minute)] return [None, None] def value_from_datadict(self, data, files, name): hour = data.get("%s_0" %name, None) minute = data.get("%s_1" %name, None) if hour and minute: hour = int(hour) minute = int(minute) return datetime.time(hour=hour, minute=minute) return None In my form, I am calling the widget like: arrival_time = forms.TimeField(label="Arrival Time", required=False, widget=MilitaryTimeWidget())
[ "Note this line in the docstring for MultiWidget:\n\nYou'll probably want to use this class with MultiValueField.\n\nThat's the root of your problem. You might be able to get the single-widget-only approach working (Marty says it's possible in Pro Django, but I've never tried it, and I think it's likely to be more work), but in that case your widget shouldn't be a subclass of MultiWidget.\nWhat you need to do (if you want to follow the MultiWidget/MultiValueField path) is:\n\nremove your value_from_datadict method\ndefine a subclass of MultiValueField with a definition of the compress() method which does the task you're currently doing in value_from_datadict() (transforming a list of numbers into a datetime.time object)\nset your Widget as the default one for your custom form Field (using the widget class attribute)\neither create a custom model Field which returns your custom form Field from its formfield() method, or use your custom form field manually as a field override in a ModelForm. \n\nThen everything will Just Work.\n", "I can't reproduce the problem:\n>>> class MyForm(forms.Form):\n... t = forms.TimeField(widget=MilitaryTimeWidget())\n...\n>>> print MyForm(data={'t_0': '13', 't_1': '34'})\n******** 13:34:00\n<tr><th><label for=\"id_t_0\">T:</label></th><td><select name=\"t_0\" id=\"id_t_0\">\n<option value=\"0\">00</option>\n[...]\n<option value=\"13\" selected=\"selected\">13</option>\n[...]\n<option value=\"23\">23</option>\n</select><select name=\"t_1\" id=\"id_t_1\">\n<option value=\"0\">00</option>\n[...]\n<option value=\"34\" selected=\"selected\">34</option>\n[...]\n<option value=\"59\">59</option>\n</select></td></tr>\n\nCheck that your request.POST is correct.\nAs a sidenote, are you sure this widget gives good usability? Four mouse clicks and possible scrolling of the minutes combobox...\n" ]
[ 4, 0 ]
[]
[]
[ "django", "field", "forms", "python", "widget" ]
stackoverflow_0000539899_django_field_forms_python_widget.txt
Q: Invoking built-in operators indirectly in Python Let's say you have a small calculator program that takes numbers and an operator to perform on those numbers as input, then prints out the result of applying the specified operation. So if you input "4 + 5" it will print out "9". Simple, right? Well what I want to be able to write is something like this: a, op, b = raw_input().split() print somehowInvokeOperator(op, a, b) The problem is the "somehowInvokeOperator()" part. Is there any way to do this without resorting to either (a) eval() or (b) some type of dictionary mapping keys like "+" and "-" to functions that perform the appropriate operation? getattr() doesn't appear to work for this. I don't really need this code for anything, I'm just curious to see if this can be solved in Python as elegantly as it can in other dynamic languages. A: Basically no, you will at least need to have a dictionary or function to map operator characters to their implementations. It's actually a little more complicated than that, since not all operators take the form a [op] b, so in general you'd need to do a bit of parsing; see https://docs.python.org/library/operator.html for the full list of correspondences, and for the functions you'll probably want to use for the operator implementations. If you're only trying to implement the binary arithmetic operators like + - * / % ** then a dictionary should be good enough. A: If you really wanted to do this, you would need the standard operator module. See also Emulating numeric types. And, yes, a dictionary full of functions would be a perfectly sound dynamic way to make this happen. import operator operations = {'+' : operator.add} result = operations[op](a, b) A: Warning: this is not pythonic at all!! (goes against every rule of the Zen of Python!) Here's a magical, one-liner dictionary: ops = eval( '{%s}'%','.join([('\''+op + '\' : lambda a,b: a ' + op + ' b') for op in '+-*/%']) ) That defines your dictionary... which you can use ops['+'](10,4) #returns 14 the basic idea is mapping each operator to a lambda function: { '+' : lambda a,b: a + b }
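Putting the dictionary approach into the shape of the original calculator gives something like this (Python 2 syntax to match the question; the set of supported operators is arbitrary):

    import operator

    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv,
           '%': operator.mod, '**': operator.pow}

    a, op, b = raw_input().split()
    if op in ops:
        print ops[op](float(a), float(b))
    else:
        print "unsupported operator:", op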
Invoking built-in operators indirectly in Python
Let's say you have a small calculator program that takes numbers and an operator to perform on those numbers as input, then prints out the result of applying the specified operation. So if you input "4 + 5" it will print out "9". Simple, right? Well what I want to be able to write is something this: a, op, b = raw_input().split() print somehowInvokeOperator(op, a, b) The problem is that "somehowInvokeOperator()" part. Is there anyway to do this without resorting to either (a) eval() or (b) some type of dictionary mapping keys like "+" and "-" to functions that perform the appropriate operation? getattr() doesn't appear to work for this. I don't really need this code for anything, I'm just curious to see if this can be solved in Python as elegantly as it can in other dynamic languages.
[ "Basically no, you will at least need to have a dictionary or function to map operator characters to their implementations. It's actually a little more complicated than that, since not all operators take the form a [op] b, so in general you'd need to do a bit of parsing; see https://docs.python.org/library/operator.html for the full list of correspondences, and for the functions you'll probably want to use for the operator implementations.\nIf you're only trying to implement the binary arithmetic operators like + - * / % ** then a dictionary should be good enough.\n", "If you really wanted to do this, you would need the standard operator module. See also Emulating numeric types. And, yes, a dictionary full of functions would be a perfectly sound dynamic way to make this happen.\nimport operator\noperations = {'+' : operator.add}\nresult = operations[op](a, b)\n\n", "Warning: this is not pythonic at all!! (goes against every rule of the Zen of Python!)\nHere's a magical, one-liner dictionary:\nops = eval( '{%s}'%','.join([('\\''+op + '\\' : lambda a,b: a ' + op + ' b') for op in '+-*/%']) )\n\nThat defines your dictionary .. which you can use\nops['+'](10,4) #returns 14\n\nthe basic idea is mapping each operator to a lambda function:\n{ '+' : lambda a,b: a + b }\n\n" ]
[ 7, 2, 2 ]
[]
[]
[ "python" ]
stackoverflow_0000542987_python.txt
Q: Python string formatting I see you guys using url = '"%s"' % url # This part >>> url = "http://www.site.com/info.xx" >>> print url http://www.site.com/info.xx >>> url = '"%s"' % url >>> print url "http://www.site.com/info.xx" Is it advanced Python? Is there a tutorial for it? How can I learn about it? A: It's common string formatting, and very useful. It's analogous to C-style printf formatting. See String Formatting Operations in the Python.org docs. You can use multiple arguments like this: "%3d\t%s" % (42, "the answer to ...") A: That line of code is using Python string formatting. You can read up more on how to use it here: http://docs.python.org/library/stdtypes.html#string-formatting A: It is not advanced, you can use ' or " to define a string. Check the documentation.
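Two more variations on the same % operator that come up constantly: multiple values go in a tuple, and a dictionary enables substitution by name:

    >>> "%s is %d years old" % ("Bob", 42)
    'Bob is 42 years old'
    >>> "%(name)s scored %(score).1f points" % {"name": "Alice", "score": 9.75}
    'Alice scored 9.8 points'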
Python string formatting
I see you guys using url = '"%s"' % url # This part >>> url = "http://www.site.com/info.xx" >>> print url http://www.site.com/info.xx >>> url = '"%s"' % url >>> print url "http://www.site.com/info.xx" Is it advanced Python? Is there a tutorial for it? How can I learn about it?
[ "It's common string formatting, and very useful. It's analogous to C-style printf formatting. See String Formatting Operations in the Python.org docs. You can use multiple arguments like this:\n\"%3d\\t%s\" % (42, \"the answer to ...\")\n\n", "That line of code is using Python string formatting. You can read up more on how to use it here: http://docs.python.org/library/stdtypes.html#string-formatting\n", "It is not advanced, you can use ' or \" to define a string.\nCheck the documentation.\n" ]
[ 16, 8, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000543399_python.txt
Q: Struct with a pointer to its own type in ctypes I'm trying to map a struct definition using ctypes: struct attrl { struct attrl *next; char *name; char *resource; char *value; }; I'm unsure what to do with the "next" field of the struct in the ctypes mapping. A definition like: class attrl(Structure): _fields_ = [ ("next", attrl), ("name", c_char_p), ("resource", c_char_p), ("value", c_char_p) ] results in: NameError: name 'attrl' is not defined A: You need the equivalent of a forward declaration, as described here.
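The forward-declaration pattern the answer points to, spelled out for the struct in the question: create the class first, assign _fields_ once the name exists, and use POINTER(attrl) rather than attrl itself for the self-referential member (a struct cannot contain itself by value):

    from ctypes import Structure, POINTER, c_char_p

    class attrl(Structure):
        pass   # incomplete type, the moral equivalent of a C forward declaration

    attrl._fields_ = [
        ("next", POINTER(attrl)),   # struct attrl *next
        ("name", c_char_p),
        ("resource", c_char_p),
        ("value", c_char_p),
    ]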
Struct with a pointer to its own type in ctypes
I'm trying to map a struct definition using ctypes: struct attrl { struct attrl *next; char *name; char *resource; char *value; }; I'm unsure what to do with the "next" field of the struct in the ctypes mapping. A definition like: class attrl(Structure): _fields_ = [ ("next", attrl), ("name", c_char_p), ("resource", c_char_p), ("value", c_char_p) ] results in: NameError: name 'attrl' is not defined
[ "You need the equivalent of a forward declaration, as described here.\n" ]
[ 4 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0000543483_ctypes_python.txt
Q: memory use in large data-structures manipulation/processing I have a number of large (~100 Mb) files which I'm regularly processing. While I'm trying to delete unneeded data structures during processing, memory consumption is a bit too high. I was wondering if there is a way to efficiently manipulate large data, e.g.: def read(self, filename): fc = read_100_mb_file(filename) self.process(fc) def process(self, content): # do some processing of file content Is there a duplication of data structures? Isn't it more memory efficient to use a class-wide attribute like self.fc? When should I use garbage collection? I know about the gc module, but do I call it after I del fc for example? update p.s. 100 Mb is not a problem in itself. but float conversion, further processing add significantly more to both working set and virtual size (I'm on Windows). A: I'd suggest looking at the presentation by David Beazley on using generators in Python. This technique allows you to handle a lot of data, and do complex processing, quickly and without blowing up your memory use. IMO, the trick isn't holding a huge amount of data in memory as efficiently as possible; the trick is avoiding loading a huge amount of data into memory at the same time. A: Before you start tearing your hair out over the garbage collector, you might be able to avoid that 100mb hit of loading the entire file into memory by using a memory-mapped file object. See the mmap module. A: Don't read the entire 100 meg file in at a time. Use streams to process a little bit at a time. Check out this blog post that talks about handling large csv and xml files. http://lethain.com/entry/2009/jan/22/handling-very-large-csv-and-xml-files-in-python/ Here is a sample of the code from the article. from __future__ import with_statement # for python 2.5 with open('data.in','r') as fin: with open('data.out','w') as fout: for line in fin: fout.write(','.join(line.split(' '))) A: So, from your comments I assume that your file looks something like this: item1,item2,item3,item4,item5,item6,item7,...,itemn which you all reduce to a single value by repeated application of some combination function. As a solution, only read a single value at a time: def read_values(f): buf = [] while True: c = f.read(1) if c == ",": yield parse("".join(buf)) buf = [] elif c == "": yield parse("".join(buf)) return else: buf.append(c) with open("some_file", "r") as f: agg = initial for v in read_values(f): agg = combine(agg, v) This way, memory consumption stays constant, unless agg grows in time. Provide appropriate implementations of initial, parse and combine Don't read the file byte-by-byte, but read in a fixed buffer, parse from the buffer and read more as you need it This is basically what the builtin reduce function does, but I've used an explicit for loop here for clarity. Here's the same thing using reduce: with open("some_file", "r") as f: agg = reduce(combine, read_values(f), initial) I hope I interpreted your problem correctly. A: First of all, don't touch the garbage collector. That's not the problem, nor the solution. It sounds like the real problem you're having is not with the file reading at all, but with the data structures that you're allocating as you process the files. Condering using del to remove structures that you no longer need during processing. Also, you might consider using marshal to dump some of the processed data to disk while you work through the next 100mb of input files. 
For file reading, you have basically two options: unix-style files as streams, or memory mapped files. For streams-based files, the default python file object is already buffered, so the simplest code is also probably the most efficient: with open("filename", "r") as f: for line in f: # do something with a line of the file Alternatively, you can use f.read([size]) to read blocks of the file. However, usually you do this to gain CPU performance, by multithreading the processing part of your script, so that you can read and process at the same time. But it doesn't help with memory usage; in fact, it uses more memory. The other option is mmap, which looks like this: with open("filename", "r+") as f: map = mmap.mmap(f.fileno(), 0) line = map.readline() while line != '': # process a line line = map.readline() This sometimes outperforms streams, but it also won't improve memory usage. A: In your example code, data is being stored in the fc variable. If you don't keep a reference to fc around, your entire file contents will be removed from memory when the read method ends. If they are not, then you are keeping a reference somewhere. Maybe the reference is being created in read_100_mb_file, maybe in process. If there is no reference, the CPython implementation will deallocate it almost immediately. There are some tools to help you find where this reference is: guppy, dowser, pysizer...
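A concrete version of the generator advice from the first answer: stream the values instead of materializing the file. This sketch assumes a comma-separated file of floats; on Python 2.5 the __future__ import is needed for the with statement:

    from __future__ import with_statement  # Python 2.5 only

    def read_floats(path):
        with open(path) as f:
            for line in f:
                for tok in line.split(','):
                    yield float(tok)

    # memory use stays flat no matter how large the file is
    total = sum(read_floats('data.txt'))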
memory use in large data-structures manipulation/processing
I have a number of large (~100 Mb) files which I'm regularly processing. While I'm trying to delete unneeded data structures during processing, memory consumption is a bit too high. I was wondering if there is a way to efficiently manipulate large data, e.g.: def read(self, filename): fc = read_100_mb_file(filename) self.process(fc) def process(self, content): # do some processing of file content Is there a duplication of data structures? Isn't it more memory efficient to use a class-wide attribute like self.fc? When should I use garbage collection? I know about the gc module, but do I call it after I del fc for example? update p.s. 100 Mb is not a problem in itself. but float conversion, further processing add significantly more to both working set and virtual size (I'm on Windows).
[ "I'd suggest looking at the presentation by David Beazley on using generators in Python. This technique allows you to handle a lot of data, and do complex processing, quickly and without blowing up your memory use. IMO, the trick isn't holding a huge amount of data in memory as efficiently as possible; the trick is avoiding loading a huge amount of data into memory at the same time.\n", "Before you start tearing your hair out over the garbage collector, you might be able to avoid that 100mb hit of loading the entire file into memory by using a memory-mapped file object. See the mmap module.\n", "Don't read the entire 100 meg file in at a time. Use streams to process a little bit at a time. Check out this blog post that talks about handling large csv and xml files. http://lethain.com/entry/2009/jan/22/handling-very-large-csv-and-xml-files-in-python/\nHere is a sample of the code from the article.\nfrom __future__ import with_statement # for python 2.5\n\nwith open('data.in','r') as fin:\n with open('data.out','w') as fout:\n for line in fin:\n fout.write(','.join(line.split(' ')))\n\n", "So, from your comments I assume that your file looks something like this:\nitem1,item2,item3,item4,item5,item6,item7,...,itemn\n\nwhich you all reduce to a single value by repeated application of some combination function. As a solution, only read a single value at a time:\ndef read_values(f):\n buf = []\n while True:\n c = f.read(1)\n if c == \",\":\n yield parse(\"\".join(buf))\n buf = []\n elif c == \"\":\n yield parse(\"\".join(buf))\n return\n else:\n buf.append(c)\n\nwith open(\"some_file\", \"r\") as f:\n agg = initial\n for v in read_values(f):\n agg = combine(agg, v)\n\nThis way, memory consumption stays constant, unless agg grows in time. \n\nProvide appropriate implementations of initial, parse and combine\nDon't read the file byte-by-byte, but read in a fixed buffer, parse from the buffer and read more as you need it\nThis is basically what the builtin reduce function does, but I've used an explicit for loop here for clarity. Here's the same thing using reduce:\nwith open(\"some_file\", \"r\") as f:\n agg = reduce(combine, read_values(f), initial)\n\n\nI hope I interpreted your problem correctly.\n", "First of all, don't touch the garbage collector. That's not the problem, nor the solution.\nIt sounds like the real problem you're having is not with the file reading at all, but with the data structures that you're allocating as you process the files.\nCondering using del to remove structures that you no longer need during processing. Also, you might consider using marshal to dump some of the processed data to disk while you work through the next 100mb of input files.\nFor file reading, you have basically two options: unix-style files as streams, or memory mapped files. For streams-based files, the default python file object is already buffered, so the simplest code is also probably the most efficient:\n\n with open(\"filename\", \"r\") as f:\n for line in f:\n # do something with a line of the files\n\nAlternately, you can use f.read([size]) to read blocks of the file. However, usually you do this to gain CPU performance, by multithreading the processing part of your script, so that you can read and process at the same time. 
But it doesn't help with memory usage; in fact, it uses more memory.\nThe other option is mmap, which looks like this:\n\n with open(\"filename\", \"r+\") as f:\n map = mmap.mmap(f.fileno(), 0)\n line = map.readline()\n while line != '':\n # process a line\n line = map.readline()\n\nThis sometimes outperforms streams, but it also won't improve memory usage.\n", "In your example code, data is being stored in the fc variable. If you don't keep a reference to fc around, your entire file contents will be removed from memory when the read method ends. \nIf they are not, then you are keeping a reference somewhere. Maybe the reference is being created in read_100_mb_file, maybe in process. If there is no reference, CPython implementation will deallocate it almost immediatelly.\nThere are some tools to help you find where this reference is, guppy, dowser, pysizer...\n" ]
[ 7, 3, 3, 2, 1, 1 ]
[]
[]
[ "data_structures", "garbage_collection", "memory_leaks", "python" ]
stackoverflow_0000512893_data_structures_garbage_collection_memory_leaks_python.txt
Q: Programmatically stop execution of python script? Is it possible to stop execution of a python script at any line with a command? Like some code quit() # quit at this point some more code (that's not executed) A: sys.exit() will do exactly what you want. import sys sys.exit("Error message") A: You could raise SystemExit(0) instead of going to all the trouble to import sys; sys.exit(0). A: You want sys.exit(). From Python's docs: >>> import sys >>> print sys.exit.__doc__ exit([status]) Exit the interpreter by raising SystemExit(status). If the status is omitted or None, it defaults to zero (i.e., success). If the status is numeric, it will be used as the system exit status. If it is another kind of object, it will be printed and the system exit status will be one (i.e., failure). So, basically, you'll do something like this: from sys import exit # Code! exit(0) # Successful exit A: The exit() and quit() built in functions do just what you want. No import of sys needed. Alternatively, you can raise SystemExit, but you need to be careful not to catch it anywhere (which shouldn't happen as long as you specify the type of exception in all your try.. blocks).
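One detail worth knowing when combining this with exception handling: sys.exit() works by raising SystemExit, so an over-broad except clause will swallow the exit:

    import sys

    try:
        sys.exit(1)
    except:                  # a bare except catches SystemExit too
        print "the exit was swallowed"

    try:
        sys.exit(1)
    except ValueError:       # naming the exception type lets the exit propagate
        pass                 # the script really terminates here, with status 1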
Programmatically stop execution of python script?
Is it possible to stop execution of a python script at any line with a command? Like some code quit() # quit at this point some more code (that's not executed)
[ "sys.exit() will do exactly what you want.\nimport sys\nsys.exit(\"Error message\")\n\n", "You could raise SystemExit(0) instead of going to all the trouble to import sys; sys.exit(0).\n", "You want sys.exit(). From Python's docs:\n >>> import sys\n >>> print sys.exit.__doc__\n exit([status])\n\nExit the interpreter by raising SystemExit(status).\nIf the status is omitted or None, it defaults to zero (i.e., success).\nIf the status is numeric, it will be used as the system exit status.\nIf it is another kind of object, it will be printed and the system\nexit status will be one (i.e., failure).\nSo, basically, you'll do something like this:\nfrom sys import exit\n\n# Code!\n\nexit(0) # Successful exit\n\n", "The exit() and quit() built in functions do just what you want. No import of sys needed.\nAlternatively, you can raise SystemExit, but you need to be careful not to catch it anywhere (which shouldn't happen as long as you specify the type of exception in all your try.. blocks).\n" ]
[ 442, 173, 43, 24 ]
[]
[]
[ "python" ]
stackoverflow_0000543309_python.txt
Q: How to Install Satchmo in Windows? I'm working on a Django project that's slated to be using Satchmo for its e-commerce aspects. I'd like to install it on my Windows Vista machine but some of the cPython modules it needs can't be compiled or easy_installed. Has anyone been able to get Satchmo working on Windows, and if so, what additional steps does it take over the installation instructions? A: Which modules are you having trouble with? Pycrypto binaries are here - http://www.voidspace.org.uk/python/modules.shtml#pycrypto Python Imaging binaries are here - http://www.pythonware.com/products/pil/ I believe everything else is pure python so it should be pretty simple to install the rest.
How to Install Satchmo in Windows?
I'm working on a Django project that's slated to be using Satchmo for its e-commerce aspects. I'd like to install it on my Windows Vista machine but some of the cPython modules it needs can't be compiled or easy_installed. Has anyone been able to get Satchmo working on Windows, and if so, what additional steps does it take over the installation instructions?
[ "Which modules are you having trouble with? \nPycrypto binaries are here - http://www.voidspace.org.uk/python/modules.shtml#pycrypto\nPython Imaging binaries are here - http://www.pythonware.com/products/pil/\nI believe everything else is pure python so it should be pretty simple to install the rest.\n" ]
[ 3 ]
[]
[]
[ "django", "python", "satchmo", "windows", "windows_vista" ]
stackoverflow_0000540046_django_python_satchmo_windows_windows_vista.txt
Q: python sleep == IDE lock up When my script sleeps for 50sec my IDE locks up which is very annoying. I cant switch tabs, look through my source, type code, etc. It happens in pylde and pyscripter, i havent tried other IDEs. What can i do to fix this? i'm actually doing for i in range(0, timeInSeconds): time.sleep(1) hoping the IDE will update once per second but it doesnt look that way. What can i do to fix this? A: I'm assuming you are running your code from within the IDE? Your IDE is probably blocking while running your code. Look for a setting of some sort which might control that behaviour, otherwise I think your only choice would be to change IDE. (Or, run your code from outside the IDE) A: Can you configure to run your script externally? I don't know about the specific IDEs, but I would try to spawn a different process for the debugged script and not run them under the IDE. If that doesn't help, then it is a problem of the IDEs. A: The problem is your IDE not python. I don't use sleep that often, I've just tried it on the Eric IDE and you can use your IDE while your code is running, and sleeping. If can't set your IDE to do so and you need it then consider to change IDE or to run your code from console. A: Personally, I think you should never ever ever execute code in the same loop as your IDE. Since most IDEs run a GUI mainloop, blocking this will cause complete freeze of the user interface. It is just asking for trouble, and I would take out bug reports against both those IDEs. A: I suspect the problem the IDE is sitting in a loop waiting for the script to finish. That in itself is not a problem, provided any user generated messages are still processed while the IDE is in this loop. But what I suspect is going wrong in this case is the IDE is just running the loop without processing and messages and hence the user interface appears to be locked. The IDE would need to be changed to either process GUI messages while in the loop or alternatively it needs to create a thread to run the the script. The thread would then run in the background and the GUI would remain responsive. For example the Zeus for Windows IDE uses the background thread approach and it does not have this problem.
python sleep == IDE lock up
When my script sleeps for 50sec my IDE locks up which is very annoying. I cant switch tabs, look through my source, type code, etc. It happens in pylde and pyscripter, i havent tried other IDEs. What can i do to fix this? i'm actually doing for i in range(0, timeInSeconds): time.sleep(1) hoping the IDE will update once per second but it doesnt look that way. What can i do to fix this?
[ "I'm assuming you are running your code from within the IDE?\nYour IDE is probably blocking while running your code. Look for a setting of some sort which might control that behaviour, otherwise I think your only choice would be to change IDE. (Or, run your code from outside the IDE)\n", "Can you configure to run your script externally? I don't know about the specific IDEs, but I would try to spawn a different process for the debugged script and not run them under the IDE. If that doesn't help, then it is a problem of the IDEs.\n", "The problem is your IDE not python. I don't use sleep that often, I've just tried it on the Eric IDE and you can use your IDE while your code is running, and sleeping. If can't set your IDE to do so and you need it then consider to change IDE or to run your code from console.\n", "Personally, I think you should never ever ever execute code in the same loop as your IDE. Since most IDEs run a GUI mainloop, blocking this will cause complete freeze of the user interface. It is just asking for trouble, and I would take out bug reports against both those IDEs.\n", "I suspect the problem the IDE is sitting in a loop waiting for the script to finish. \nThat in itself is not a problem, provided any user generated messages are still processed while the IDE is in this loop. \nBut what I suspect is going wrong in this case is the IDE is just running the loop without processing and messages and hence the user interface appears to be locked.\nThe IDE would need to be changed to either process GUI messages while in the loop or alternatively it needs to create a thread to run the the script. The thread would then run in the background and the GUI would remain responsive.\nFor example the Zeus for Windows IDE uses the background thread approach and it does not have this problem.\n" ]
[ 2, 0, 0, 0, 0 ]
[]
[]
[ "ide", "lockup", "python" ]
stackoverflow_0000535973_ide_lockup_python.txt
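A minimal sketch of the background-thread approach the last answer above describes: the blocking sleep runs off the main (GUI) thread, so a loop that keeps processing events stays responsive. The 50-second duration is illustrative.

import threading
import time

def long_task(seconds):
    # The blocking sleep now happens off the main thread.
    for _ in range(seconds):
        time.sleep(1)

worker = threading.Thread(target=long_task, args=(50,))
worker.start()   # returns immediately; the caller keeps handling events
# ... main thread stays free to process the UI ...
worker.join()    # wait for the task once there is nothing else to do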
Q: Kerberos authentication with python I need to write a script in python to check a webpage, which is protected by kerberos. Is there any possibility to do this from within python and how? The script is going to be deployed on a linux environment with python 2.4.something installed. dertoni A: I think that python-krbV and most Linux distributions also have a python-kerberos package. For example, Debian has one of the same name. Here's the documentation on it Extract from link: "This Python package is a high-level wrapper for Kerberos (GSSAPI) operations. The goal is to avoid having to build a module that wraps the entire Kerberos.framework, and instead offer a limited set of functions that do what is needed for client/server Kerberos authentication based on http://www.ietf.org/rfc/rfc4559.txt. "
Kerberos authentication with python
I need to write a script in python to check a webpage, which is protected by kerberos. Is there any possibility to do this from within python and how? The script is going to be deployed on a linux environment with python 2.4.something installed. dertoni
[ "I think that python-krbV and most Linux distributions also have a python-kerberos package. For example, Debian has one of the same name. Here's the documentation on it\nExtract from link:\n\n\"This Python package is a high-level wrapper for Kerberos (GSSAPI)\n operations. The goal is to avoid having to build a module that wraps\n the entire Kerberos.framework, and instead offer a limited set of\n functions that do what is needed for client/server Kerberos\n authentication based on http://www.ietf.org/rfc/rfc4559.txt. \"\n\n" ]
[ 15 ]
[]
[]
[ "kerberos", "python", "security" ]
stackoverflow_0000545294_kerberos_python_security.txt
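A sketch of client-side "Negotiate" (SPNEGO) authentication with the python-kerberos package named in the answer above; it assumes a valid ticket cache (e.g. from kinit), and the URL and host name are placeholders.

import urllib2
import kerberos

def fetch(url, host):
    # Build a GSSAPI token for the host's HTTP service principal.
    _, ctx = kerberos.authGSSClientInit("HTTP@" + host)
    kerberos.authGSSClientStep(ctx, "")
    token = kerberos.authGSSClientResponse(ctx)

    request = urllib2.Request(url)
    request.add_header("Authorization", "Negotiate " + token)
    return urllib2.urlopen(request).read()

page = fetch("http://intranet.example.com/protected", "intranet.example.com")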
Q: Using different versions of python for different projects in Eclipse So, I'm slowly working in some Python 3.0, but I still have a lot of things that rely on 2.5. But, in Eclipse, every time I change projects between a 3.0 and a 2.5, I need to go through Project -> Properties -> project type. Issue 1: if I just switch the interpreter in the drop down box, that doesn't seem to change anything. I need to click "click here to configure an interpreter not listed", and UP the interpreter I wish to use. Issue 2: That would be fine if I was switching to 3.0 for every project for the rest of my life, but I still am doing a lot of switching between projects and I don't see that changing anytime soon. So, I'm just trying to save a few operations. Is there a way to configure Eclipse so that it remembers which interpreter I want associated with which project? What if I created an entirely new workspace? Is "interpreter" a property of a workspace? Also, it doesn't seem to matter what I choose when I create a new project via File -> New -> Pydev Project. Whatever I last selected through "Properties" is what eclipse is using. This is Eclipse 3.4.0, running in Windows XP. A: You can set the interpreter version on a per-script basis through the Run Configurations menu. To do this go to Run -> Run Configurations, and then make a new entry under Python Run. Fill in your project name and the main script, and then go to the Interpeter tab and you can pick which interpreter you want to use for that script. I've used this to have Python 2.2, 2.5, and 3.0 projects in the same workspace. A: OK -- It definitely seems like "interpreter" is a property of your "workspace". I hadn't really considered that too much because I always thought of the workspace as "a folder in which I keep whatever" instead of a consistent unified environment for one kind of development. Also, you can't switch between workspaces in one instance of Eclipse (it shuts down and restarts), but you can run two instances of Eclipse at once, one for each workspace. Now, I guess I like the fact that Eclipse handles it that way. It has a more "modular" feel, and what originally bothered me I now think it sensible. I don't need to worry about having two interpreters to choose from, or choosing the default or moving one up. I just need to worry about which workspace I'm in. Hope this helps someone. . . EDIT: as noted by Kiv, "interpreter" is not a property of your "workspace" (as I stated above). Instead, for any project, there is a "run configuration" (incidentally, there is also a debug configuration). The run config allows the user to set the executable, and the path, and a number of other options. *I'm sure these things are known to long-time users, but I never had to deal with this until I changed python versions.**
Using different versions of python for different projects in Eclipse
So, I'm slowly working in some Python 3.0, but I still have a lot of things that rely on 2.5. But, in Eclipse, every time I change projects between a 3.0 and a 2.5, I need to go through Project -> Properties -> project type. Issue 1: if I just switch the interpreter in the drop down box, that doesn't seem to change anything. I need to click "click here to configure an interpreter not listed", and move up the interpreter I wish to use. Issue 2: That would be fine if I was switching to 3.0 for every project for the rest of my life, but I still am doing a lot of switching between projects and I don't see that changing anytime soon. So, I'm just trying to save a few operations. Is there a way to configure Eclipse so that it remembers which interpreter I want associated with which project? What if I created an entirely new workspace? Is "interpreter" a property of a workspace? Also, it doesn't seem to matter what I choose when I create a new project via File -> New -> Pydev Project. Whatever I last selected through "Properties" is what Eclipse is using. This is Eclipse 3.4.0, running on Windows XP.
[ "You can set the interpreter version on a per-script basis through the Run Configurations menu.\nTo do this go to Run -> Run Configurations, and then make a new entry under Python Run. Fill in your project name and the main script, and then go to the Interpeter tab and you can pick which interpreter you want to use for that script.\nI've used this to have Python 2.2, 2.5, and 3.0 projects in the same workspace.\n", "OK --\nIt definitely seems like \"interpreter\" is a property of your \"workspace\". I hadn't really considered that too much because I always thought of the workspace as \"a folder in which I keep whatever\" instead of a consistent unified environment for one kind of development. \nAlso, you can't switch between workspaces in one instance of Eclipse (it shuts down and restarts), but you can run two instances of Eclipse at once, one for each workspace. \nNow, I guess I like the fact that Eclipse handles it that way. It has a more \"modular\" feel, and what originally bothered me I now think it sensible. I don't need to worry about having two interpreters to choose from, or choosing the default or moving one up. I just need to worry about which workspace I'm in. \nHope this helps someone. . .\nEDIT: as noted by Kiv, \"interpreter\" is not a property of your \"workspace\" (as I stated above). Instead, for any project, there is a \"run configuration\" (incidentally, there is also a debug configuration). The run config allows the user to set the executable, and the path, and a number of other options.\n*I'm sure these things are known to long-time users, but I never had to deal with this until I changed python versions.**\n" ]
[ 11, 1 ]
[]
[]
[ "eclipse", "python" ]
stackoverflow_0000543466_eclipse_python.txt
Q: practice with threads in python I know that Python has a global lock and i've read Glyph's explaination of python multithreading. But I still want to try it out. What I decided to do as an easy (conceptually) task was to do horizontal and vertical edge detection on a picture. Here's what's happening (pseudocode): for pixels in picture: apply sobel operator horizontal for pixels in picture: apply sobel operator vertical info on sobel operator. These two loops can run completely independent of each other, and so would be prime candidates for multithreading. (running these two loops on any significantly large picture can take 10+ seconds). However, when I have tried to use the threading module in python, it takes twice as long because of the global lock. My question is should I abandon all hope of doing this in two threads in python and try in another language? If i can forge ahead, what module(s) should I use? If not, what language should I experiment in? A: Python 2.6 now includes the mulitprocessing module (formerly processing module on older versions of Python). It has essentially the same interface as the threading module, but launches the execution into separate processes rather than threads. This allows Python to take advantage of multiple cores/CPUs and scales well for CPU-intensive tasks compared to the threading module approach. A: If the sobel operator is CPU-bound, then you won't get any benefit from multiple threads because python does not take advantage of multiple cores. Conceivably you could spin off multiple processes, though I'm not sure if that would be practical for working on a single image. 10 seconds doesn't seem like a lot of time to waste. If you're concerned about time because you'll be processing many images, then it might be easier to run multiple processes and have each process deal with a separate subset of the images. A: I recommend using NumPy as well. Not only will it probably be faster, but if you use threads with it, there won't be a global lock. I'll also suggest using multiprocessing as Jay suggests. Anyways, if you really want to practice threading, I'd suggest playing around with PThreads in C. PThreads are insanely simple to use for basic cases and used all over the place. A: Bulk matrix operations like the Sobel operator will definitely realize significant speed gains by (correctly) using Matlab/Octave. It is possible that NumPy may provide similar speedups for matrix/array ops. A: Python mutliprocessing is the right choice if you want to practice parallel programming with Python. If you don't have Python 2.6 (which you don't if you're using Ubuntu for example), you can use the Google code backported version of multiprocessing. It is part of PyPI, which means you can easily install it using EasyInstall (which is part of the python-setuptools package in Ubuntu).
practice with threads in python
I know that Python has a global lock and I've read Glyph's explanation of python multithreading. But I still want to try it out. What I decided to do as an easy (conceptually) task was to do horizontal and vertical edge detection on a picture. Here's what's happening (pseudocode): for pixels in picture: apply sobel operator horizontal for pixels in picture: apply sobel operator vertical info on sobel operator. These two loops can run completely independently of each other, and so would be prime candidates for multithreading. (running these two loops on any significantly large picture can take 10+ seconds). However, when I have tried to use the threading module in python, it takes twice as long because of the global lock. My question is: should I abandon all hope of doing this in two threads in python and try in another language? If I can forge ahead, what module(s) should I use? If not, what language should I experiment in?
[ "Python 2.6 now includes the mulitprocessing module (formerly processing module on older versions of Python). \nIt has essentially the same interface as the threading module, but launches the execution into separate processes rather than threads. This allows Python to take advantage of multiple cores/CPUs and scales well for CPU-intensive tasks compared to the threading module approach.\n", "If the sobel operator is CPU-bound, then you won't get any benefit from multiple threads because python does not take advantage of multiple cores.\nConceivably you could spin off multiple processes, though I'm not sure if that would be practical for working on a single image.\n10 seconds doesn't seem like a lot of time to waste. If you're concerned about time because you'll be processing many images, then it might be easier to run multiple processes and have each process deal with a separate subset of the images.\n", "I recommend using NumPy as well. Not only will it probably be faster, but if you use threads with it, there won't be a global lock.\nI'll also suggest using multiprocessing as Jay suggests.\nAnyways, if you really want to practice threading, I'd suggest playing around with PThreads in C. PThreads are insanely simple to use for basic cases and used all over the place.\n", "Bulk matrix operations like the Sobel operator will definitely realize significant speed gains by (correctly) using Matlab/Octave. It is possible that NumPy may provide similar speedups for matrix/array ops.\n", "Python mutliprocessing is the right choice if you want to practice parallel programming with Python. If you don't have Python 2.6 (which you don't if you're using Ubuntu for example), you can use the Google code backported version of multiprocessing. It is part of PyPI, which means you can easily install it using EasyInstall (which is part of the python-setuptools package in Ubuntu).\n" ]
[ 7, 3, 2, 0, 0 ]
[]
[]
[ "image_manipulation", "multithreading", "python" ]
stackoverflow_0000535331_image_manipulation_multithreading_python.txt
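A sketch of running the two independent Sobel passes on separate cores with the multiprocessing module recommended above (Python 2.6+); the image data and kernels are stand-ins, not a real convolution.

from multiprocessing import Pool

def apply_kernel(args):
    pixels, kernel = args
    # Placeholder for the real Sobel convolution over the image.
    return [[p * sum(kernel) for p in row] for row in pixels]

if __name__ == "__main__":
    pixels = [[1, 2, 3], [4, 5, 6]]   # stand-in image
    horizontal = [-1, 0, 1]           # stand-in kernels
    vertical = [1, 0, -1]
    pool = Pool(processes=2)
    h_edges, v_edges = pool.map(apply_kernel, [(pixels, horizontal),
                                               (pixels, vertical)])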
Q: Does python support multiprocessor/multicore programming? What is the difference between multiprocessor programming and multicore programming? Preferably show examples in python how to write a small program for multiprocessor programming & multicore programming A: There is no such thing as "multiprocessor" or "multicore" programming. The distinction between "multiprocessor" and "multicore" computers is probably not relevant to you as an application programmer; it has to do with subtleties of how the cores share access to memory. In order to take advantage of a multicore (or multiprocessor) computer, you need a program written in such a way that it can be run in parallel, and a runtime that will allow the program to actually be executed in parallel on multiple cores (and operating system, although any operating system you can run on your PC will do this). This is really parallel programming, although there are different approaches to parallel programming. The ones that are relevant to Python are multiprocessing and multithreading. In languages like C, C++, Java, and C#, you can write parallel programs by executing multiple threads. The global interpreter lock in the CPython and PyPy runtimes preclude this option; but only for those runtimes. (In my personal opinion, multithreading is dangerous and tricky and it is generally a good thing that Python encourages you not to consider it as a way to get a performance advantage.) If you want to write a parallel program which can run on multiple cores in Python, you have a few different options: Write a multithreaded program using the threading module and run it in the IronPython or Jython runtime. Use the processing module, (now included in Python 2.6 as the multiprocessing module), to run your code in multiple processes at once. Use the subprocess module to run multiple python interpreters and communicate between them. Use Twisted and Ampoule. This has the advantage of not just running your code across different processes, but (if you don't share access to things like files) potentially across different computers as well. No matter which of these options you choose, you will need to understand how to split the work that your program is doing up into chunks that make sense to separate. Since I'm not sure what kind of programs you are thinking of writing, it would be difficult to provide a useful example. A: As mentioned in another post Python 2.6 has the multiprocessing module, which can take advantage of multiple cores/processors (it gets around GIL by starting multiple processes transparently). It offers some primitives similar to the threading module. You'll find some (simple) examples of usage in the documentation pages. A: You can actually write programs which will use multiple processors. You cannot do it with threads because of the GIL lock, but you can do it with different processes. Either: use the subprocess module, and divide your code to execute a process per processor have a look at parallelpython module if you use python > 2.6 have a look at the multiprocess module. A: You can read about multithreading in python, and threading in general Multithreading in Python: http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/ A: If I understand things correctly, Python has something called the GIL (Global Interpreter Lock) that effectively makes it impossible to take advantage of multicores when doing multiple threads in Python. See e.g. Guido van Rossum's blog entry on the topic. As far as I know, among the "mainstream" languages only C/C++ and Java have effective support for multicores. A: The main difference is how you organize and distribute data. Multicore typically has higher bandwidths between the different cores in a cpu, and multiprocessor needs to involve the bus between the cpus more. Python 2.6 has gotten multiprocess (process, as in program running) and more synchronization and communication objects for multithreaded programming. A: If you don't have Python 2.6 (which you don't if you're using Ubuntu Edgy or Intrepid for example), you can use the Google code backported version of multiprocessing. It is part of PyPI, which means you can easily install it using EasyInstall (which is part of the python-setuptools package in Ubuntu).
Does python support multiprocessor/multicore programming?
What is the difference between multiprocessor programming and multicore programming? Preferably show examples in python how to write a small program for multiprocessor programming & multicore programming
[ "There is no such thing as \"multiprocessor\" or \"multicore\" programming. The distinction between \"multiprocessor\" and \"multicore\" computers is probably not relevant to you as an application programmer; it has to do with subtleties of how the cores share access to memory.\nIn order to take advantage of a multicore (or multiprocessor) computer, you need a program written in such a way that it can be run in parallel, and a runtime that will allow the program to actually be executed in parallel on multiple cores (and operating system, although any operating system you can run on your PC will do this). This is really parallel programming, although there are different approaches to parallel programming. The ones that are relevant to Python are multiprocessing and multithreading.\nIn languages like C, C++, Java, and C#, you can write parallel programs by executing multiple threads. The global interpreter lock in the CPython and PyPy runtimes preclude this option; but only for those runtimes. (In my personal opinion, multithreading is dangerous and tricky and it is generally a good thing that Python encourages you not to consider it as a way to get a performance advantage.)\nIf you want to write a parallel program which can run on multiple cores in Python, you have a few different options:\n\nWrite a multithreaded program using the threading module and run it in the IronPython or Jython runtime.\nUse the processing module, (now included in Python 2.6 as the multiprocessing module), to run your code in multiple processes at once.\nUse the subprocess module to run multiple python interpreters and communicate between them.\nUse Twisted and Ampoule. This has the advantage of not just running your code across different processes, but (if you don't share access to things like files) potentially across different computers as well.\n\nNo matter which of these options you choose, you will need to understand how to split the work that your program is doing up into chunks that make sense to separate. Since I'm not sure what kind of programs you are thinking of writing, it would be difficult to provide a useful example.\n", "As mentioned in another post Python 2.6 has the multiprocessing module, which can take advantage of multiple cores/processors (it gets around GIL by starting multiple processes transparently). It offers some primitives similar to the threading module. You'll find some (simple) examples of usage in the documentation pages.\n", "You can actually write programs which will use multiple processors. You cannot do it with threads because of the GIL lock, but you can do it with different process.\nEither:\n\nuse the subprocess module, and divide your code to execute a process per processor\nhave a look at parallelpython module\nif you use python > 2.6 have a look at the multiprocess module.\n\n", "You can read about multithreading in python, and threading in general\nMultithreading in Python:\nhttp://www.devshed.com/c/a/Python/Basic-Threading-in-Python/\n", "If I understand things correctly, Python has something called the GIL (Global Interpreter Lock) that effectively makes it impossible to take advantage of multicores when doing multiple threads in Python.\nSee eg Guido van Rossum's blog entry on the topic. As far as I know, among the \"mainstream\" languages only C/C++ and Java have effective support for multicores.\n", "The main difference is how you organize and distribute data. 
Multicore typically has higher bandwidths between the different cores in a cpu, and multiprocessor needs to involve the bus between the cpus more.\nPython 2.6 has gotten multiprocess (process, as in program running) and more synchronization and communication objects for multithreaded programming.\n", "If you don't have Python 2.6 (which you don't if you're using Ubuntu Edgy or Intrepid for example), you can use the Google code backported version of multiprocessing. It is part of PyPI, which means you can easily install it using EasyInstall (which is part of the python-setuptools package in Ubuntu).\n" ]
[ 97, 24, 5, 2, 2, 1, 0 ]
[]
[]
[ "multicore", "python" ]
stackoverflow_0000203912_multicore_python.txt
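For the multicore question above, a small illustration of the multiprocessing module's threading-like interface (Python 2.6+); each Process is a real OS process, so the operating system can schedule the workers on different cores.

from multiprocessing import Process
import os

def work(name):
    print name, "running in process", os.getpid()

if __name__ == "__main__":
    workers = [Process(target=work, args=("worker-%d" % i,)) for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()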
Q: Mapping a global variable from a shared library with ctypes I'd like to map an int value pbs_errno declared as a global in the library libtorque.so using ctypes. Currently I can load the library like so: from ctypes import * libtorque = CDLL("libtorque.so") and have successfully mapped a bunch of the functions. However, for error checking purposes many of them set the pbs_errno variable so I need access to that as well. However if I try to access it I get: >>> pytorque.libtorque.pbs_errno <_FuncPtr object at 0x9fc690> Of course, it's not a function pointer and attempting to call it results in a seg fault. It's declared as int pbs_errno; in the main header and extern int pbs_errno; in the API header files. Objdump shows the symbol as: 00000000001294f8 g DO .bss 0000000000000004 Base pbs_errno A: There's a section in the ctypes docs about accessing values exported in dlls: http://docs.python.org/library/ctypes.html#accessing-values-exported-from-dlls e.g. def pbs_errno(): return c_int.in_dll(libtorque, "pbs_errno")
Mapping a global variable from a shared library with ctypes
I'd like to map an int value pbs_errno declared as a global in the library libtorque.so using ctypes. Currently I can load the library like so: from ctypes import * libtorque = CDLL("libtorque.so") and have successfully mapped a bunch of the functions. However, for error checking purposes many of them set the pbs_errno variable so I need access to that as well. However if I try to access it I get: >>> pytorque.libtorque.pbs_errno <_FuncPtr object at 0x9fc690> Of course, it's not a function pointer and attempting to call it results in a seg fault. It's declared as int pbs_errno; in the main header and extern int pbs_errno; in the API header files. Objdump shows the symbol as: 00000000001294f8 g DO .bss 0000000000000004 Base pbs_errno
[ "There's a section in the ctypes docs about accessing values exported in dlls:\nhttp://docs.python.org/library/ctypes.html#accessing-values-exported-from-dlls\ne.g.\n\ndef pbs_errno():\n return c_int.in_dll(libtorque, \"pbs_errno\")\n\n" ]
[ 20 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0000544173_ctypes_python.txt
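Building on the in_dll answer above, a sketch of wrapping libtorque calls so the global is checked after each one; the pbs_connect call and the assumption that a zero pbs_errno means success are guesses about the torque API.

from ctypes import CDLL, c_int

libtorque = CDLL("libtorque.so")

def pbs_errno():
    # in_dll reads the symbol in place, so this is always the current value.
    return c_int.in_dll(libtorque, "pbs_errno").value

def checked(result):
    if pbs_errno() != 0:
        raise RuntimeError("libtorque call failed, pbs_errno=%d" % pbs_errno())
    return result

conn = checked(libtorque.pbs_connect("server-name"))   # hypothetical call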
Q: two questions (RFC822, login info) about sending email via python 1 - In my email-sending script, I store spaced-out emails in a string, then I use ", ".join(to.split()). However, it looks like the script only sends to the 1st email - is it something to do with RFC822 format? If so, how can I fix this? 2 - I feel a bit edgy having my password visable in my script. Is there a way to retrieve this info from cookies or saved passwords from firefox? Thanks in advance! A: Use ', '.join() for the list in the To: or Cc: header, but the headers are only for show. What determines where the mail actually goes is the RCPT envelope. Assuming you're using smtplib, that's the second argument: connection.sendmail(senderaddress, to.split(), mailtext) 2: it's possible, but far from straightforward. Browsers don't want external programs looking at their security-sensitive stored data. A: For the second part of your question, you could take a look at the netrc module (http://docs.python.org/library/netrc.html). This isn't much better than having the password in the script, but it does allow the script to be readable for anyone using the computer, while you have the password in a file in your home directory that is only readable by you.
two questions (RFC822, login info) about sending email via python
1 - In my email-sending script, I store space-separated email addresses in a string, then I use ", ".join(to.split()). However, it looks like the script only sends to the first address - is it something to do with RFC822 format? If so, how can I fix this? 2 - I feel a bit edgy having my password visible in my script. Is there a way to retrieve this info from cookies or saved passwords from Firefox? Thanks in advance!
[ "Use ', '.join() for the list in the To: or Cc: header, but the headers are only for show. What determines where the mail actually goes is the RCPT envelope. Assuming you're using smtplib, that's the second argument:\nconnection.sendmail(senderaddress, to.split(), mailtext)\n\n2: it's possible, but far from straightforward. Browsers don't want external programs looking at their security-sensitive stored data.\n", "For the second part of your question, you could take a look at the netrc module (http://docs.python.org/library/netrc.html).\nThis isn't much better than having the password in the script, but it does allow the script to be readable for anyone using the computer, while you have the password in a file in your home directory that is only readable by you.\n" ]
[ 3, 2 ]
[]
[]
[ "passwords", "python", "rfc822", "smtp" ]
stackoverflow_0000543096_passwords_python_rfc822_smtp.txt
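A sketch tying the two answers together: the recipient list goes in the envelope argument of sendmail, and the password comes from a ~/.netrc entry instead of the script itself; the host names and addresses are placeholders, and the .netrc entry is assumed to exist.

import netrc
import smtplib

host = "smtp.example.com"
login, _, password = netrc.netrc().authenticators(host)

to = "alice@example.com bob@example.com"   # space-separated, as in the question
recipients = to.split()

message = "To: %s\r\nSubject: test\r\n\r\nhello" % ", ".join(recipients)

server = smtplib.SMTP(host)
server.login(login, password)
server.sendmail("me@example.com", recipients, message)   # the envelope list decides delivery
server.quit()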
Q: Is there a better way (besides COM) to remote-control Excel? I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks: For example, the slightest upset seems to be able to break the connection to the COM-Server; once severed, there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object. The Excel COM interface will not allow me to safely remote-control two separate instances of the Excel application operating on the same workbook file, even if they are read-only. Also when something does go wrong I seldom get any useful error-messages... at best I can expect a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong. Finally, COM lacks the ability to control some of the most fundamental aspects of Excel. For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID. So what if I were to completely abandon COM? Is there an alternative way to control Excel? All I want to do is run macros, open and close workbooks and read and write cell-ranges. Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM? A: There is no way that completely bypasses COM. You can use VSTO (Visual Studio Tools for Office), which has nice .NET wrappers on the COM objects, but it is still COM underneath. A: The Excel COM interface will not allow me to safely remote-control two separate instances of the Excel application operating on the same workbook file, even if they are read-only. This is not a limitation of COM, this is a limitation of Excel. Excel will not even let you open two files with the same name at the same time if they exist in different directories. It is a fundamental limitation of the Excel program. To answer your other questions: if you check your python documentation, there should be a way to connect to an existing server if the connection is lost. The lack of useful error messages again may be to do with Python. You cannot even use COM to find Excel's PID. COM is an internal object model and exposes what it wishes. PIDs are available to outside processes as much as they are to internal ones; there is no real reason to expose them as a COM interface. A: It is also possible to run Excel as a server application and use it as a calculation engine. This allows non IT users to specify business rules within Excel and call them through webservices. I have not worked with this myself, but I know a coworker of mine used this once. Walkthrough: Developing a Custom Application Using Excel Web Services could be a good starting point. At first glance, that page looks like it requires Sharepoint. This might not be suitable for every environment. A: Have you looked at the xlrd and xlwt packages? I'm not in need of them any more, but I had good success with xlrd on my last project. Last I knew, they couldn't process macros, but could do basic reading and writing of spreadsheets. Also, they're platform independent (the program I wrote was targeted to run on Linux)! A: You could use Jython with the JExcelApi (http://jexcelapi.sourceforge.net/) to control your Excel application. I've been considering implementing this solution with one of my PyQt projects, but haven't gotten around to trying it yet. I have effectively used the JExcelApi in Java applications before, but have not used Jython (though I know you can import Java classes). NOTE: the JExcelApi may be COM under the hood (I'm not sure).
Is there a better way (besides COM) to remote-control Excel?
I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks: For example, the slightest upset seems to be able to break the connection to the COM-Server; once severed, there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object. The Excel COM interface will not allow me to safely remote-control two separate instances of the Excel application operating on the same workbook file, even if they are read-only. Also when something does go wrong I seldom get any useful error-messages... at best I can expect a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong. Finally, COM lacks the ability to control some of the most fundamental aspects of Excel. For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID. So what if I were to completely abandon COM? Is there an alternative way to control Excel? All I want to do is run macros, open and close workbooks and read and write cell-ranges. Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM?
[ "There is no way that completely bypasses COM. You can use VSTO (Visual Studio Tools for Office), which has nice .NET wrappers on the COM objects, but it is still COM underneath. \n", "\nThe Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.\n\nThis is not a limitation of COM, this is a limitation of Excel. Excel will not even let you open two files with the same name at the same time if they exist in different directories. It is a fundamental limitation of the Excel program.\nTo answer your other questions\nIf you check your python documentation, there should be a way to connect to an existing server if the connection is lost.\nThe lack of useful error messages again may be to do with Python.\n\nYou cannot even use COM to find Excel's PID.\n\nCOM is an internal object model and exposed what it wishes. PID are available to outside processes as much as they are to internal, there is no real reason to expose as a COM interface.\n", "It is also possible to run Excel as a server application and use it as a calculation engine. This allows non IT users to specify business rules within Excel and call them through webservices. I have not worked with this myself, but I know a coworker of mine used this once. Walkthrough: Developing a Custom Application Using Excel Web Services could be a good starting point. A first glance at that page looks like it requires Sharepoint. This might not be suiteable for every environment. \n", "Have you looked at the xlrd and xlwt packages? I'm not in need of them any more, but I had good success with xlrd on my last project. Last I knew, they couldn't process macros, but could do basic reading and writing of spreadsheets. Also, they're platform independent (the program I wrote was targetted to run on Linux)!\n", "You could use Jython with the JExcelApi (http://jexcelapi.sourceforge.net/) to control your Excel application. I've been considering implementing this solution with one of my PyQt projects, but haven't gotten around to trying it yet. I have effectively used the JExcelApi in Java applications before, but have not used Jython (though I know you can import Java classes).\nNOTE: the JExcelApi may be COM under the hood (I'm not sure).\n" ]
[ 7, 2, 1, 1, 1 ]
[]
[]
[ ".net", "com", "excel", "python" ]
stackoverflow_0000528817_.net_com_excel_python.txt
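A sketch of the COM-free read/write path through xlrd and xlwt suggested above; it copies one sheet to a new file (no macro support), and the file names are placeholders.

import xlrd
import xlwt

book = xlrd.open_workbook("input.xls")
sheet = book.sheet_by_index(0)
rows = [[sheet.cell_value(r, c) for c in range(sheet.ncols)]
        for r in range(sheet.nrows)]

out = xlwt.Workbook()
out_sheet = out.add_sheet("copy")
for r, row in enumerate(rows):
    for c, value in enumerate(row):
        out_sheet.write(r, c, value)
out.save("output.xls")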
Q: How Do I Perform Introspection on an Object in Python 2.x? I'm using Python 2.x and I have an object I'm summoning from the aether; the documentation on it is not particularly clear. I would like to be able to get a list of properties for that object and the type of each property. Similarly, I'd like to get a list of methods for that object, as well, plus any other information I could find on that method, such as number of arguments and their respective types. I have a feeling that I am simply missing the correct jargon in my Google searches. Not that I want to derail with specifics, but it's Active Directory, so that's always fun. A: Well ... Your first stop will be a simple dir(object). This will show you all the object's members, both fields and methods. Try it in an interactive Python shell, and play around a little. For instance: > class Foo: def __init__(self): self.a = "bar" self.b = 4711 > a=Foo() > dir(a) ['__doc__', '__init__', '__module__', 'a', 'b'] A: How about something like: >>> o=object() >>> [(a,type(o.__getattribute__(a))) for a in dir(o)] [('__class__', <type 'type'>), ('__delattr__', <type 'method-wrapper'>), ('__doc__', <type 'str'>), ('__format__', <type 'builtin_function_or_method'>), ('__getattribute__', <type 'method-wrapper'>), ('__hash__', <type 'method-wrapper'>), ('__init__', <type 'method-wrapper'>), ('__new__', <type 'builtin_function_or_method'>), ('__reduce__', <type 'builtin_function_or_method'>), ('__reduce_ex__', <type 'builtin_function_or_method'>), ('__repr__', <type 'method-wrapper'>), ('__setattr__', <type 'method-wrapper'>), ('__sizeof__', <type 'builtin_function_or_method'>), ('__str__', <type 'method-wrapper'>), ('__subclasshook__', <type 'builtin_function_or_method'>)] >>> A more structured method will be to use the inspect module: The inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback. A: "Guide to Python introspection" is a nice article to get you started. A: You could have a look at the inspect module. It provides a wide variety of tools for inspection of live objects as well as source code. A: If you're using win32com.client.Dispatch, inspecting the Python object might not be much help as it's a generic wrapper for IDispatch. You can use makepy (which comes with Activestate Python) to generate a Python wrapper from the type library. Then you can look at the code for the wrapper.
How Do I Perform Introspection on an Object in Python 2.x?
I'm using Python 2.x and I have an object I'm summoning from the aether; the documentation on it is not particularly clear. I would like to be able to get a list of properties for that object and the type of each property. Similarly, I'd like to get a list of methods for that object, as well, plus any other information I could find on that method, such as number of arguments and their respective types. I have a feeling that I am simply missing the correct jargon in my Google searches. Not that I want to derail with specifics, but it's Active Directory, so that's always fun.
[ "Well ... Your first stop will be a simple dir(object). This will show you all the object's members, both fields and methods. Try it in an interactive Python shell, and play around a little.\nFor instance:\n> class Foo:\n def __init__(self):\n self.a = \"bar\"\n self.b = 4711\n\n> a=Foo()\n> dir(a)\n['__doc__', '__init__', '__module__', 'a', 'b']\n\n", "How about something like:\n>>> o=object()\n>>> [(a,type(o.__getattribute__(a))) for a in dir(o)]\n[('__class__', <type 'type'>), ('__delattr__', <type 'method-wrapper'>), \n('__doc__', <type 'str'>), ('__format__', <type 'builtin_function_or_method'>),\n('__getattribute__', <type 'method-wrapper'>), ('__hash__', <type 'method-wrapper'>),\n('__init__', <type 'method-wrapper'>), \n('__new__', <type 'builtin_function_or_method'>),\n('__reduce__', <type 'builtin_function_or_method'>),\n('__reduce_ex__', <type 'builtin_function_or_method'>),\n('__repr__', <type 'method-wrapper'>), ('__setattr__', <type 'method-wrapper'>),\n('__sizeof__', <type 'builtin_function_or_method'>),\n('__str__', <type 'method-wrapper'>),\n('__subclasshook__', <type 'builtin_function_or_method'>)]\n>>> \n\nA more structured method will be to use the inspect module:\n\nThe inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback.\n\n", "\"Guide to Python introspection\" is a nice article to get you started.\n", "You could have a look at the inspect module. It provides a wide variety of tools for inspection of live objects as well as source code.\n", "If you're using win32com.client.Dispatch, inspecting the Python object might not be much help as it's a generic wrapper for IDispatch. \nYou can use makepy (which comes with Activestate Python) to generate a Python wrapper from the type library. Then you can look at the code for the wrapper.\n" ]
[ 25, 9, 5, 4, 0 ]
[]
[]
[ "introspection", "python", "python_datamodel" ]
stackoverflow_0000546337_introspection_python_python_datamodel.txt
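A sketch combining dir() and the inspect module from the answers above into one helper; inspect.getargspec only works on Python-level functions, so built-in or COM methods fall back to a generic line.

import inspect

def describe(obj):
    for name, value in inspect.getmembers(obj):
        if callable(value):
            try:
                spec = inspect.getargspec(value)        # Python 2.x API
                print "%s%s" % (name, inspect.formatargspec(*spec))
            except TypeError:
                print "%s(...)" % name                  # built-in or extension method
        else:
            print "%s -> %s" % (name, type(value).__name__)

describe([])   # try it on any object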
Q: Generating lists/reports with in-line summaries in Django I am trying to write a view that will generate a report which displays all Items within my Inventory system, and provide summaries at a certain point. This report is purely just an HTML template by the way. In my case, each Item is part of an Order. An Order can have several items, and I want to be able to display SUM based summaries after the end of each order. So the report kind of looks like this: Order #25 <Qty> <Qty Sold> <Cost> <Cost Value> Some Item 2 1 29.99 29.99 Another Item 4 0 10.00 40.00 <Subtotal Line> 6 1 39.99 69.99 Order #26 <Qty> <Qty Sold> <Cost> <Cost Value> ... Etc, you get the point Now, I'm perfectly capable of displaying all the values and already have a report showing all the Items, but I have no idea how I can place Subtotals within the report like that without doing alot of queries. The Quantity, Qty Sold, and Cost fields are just part of the Item model, and Cost Value is just a simple model function. Any help would be appreciated. Thanks in advance :-) A: Subtotals are SELECT SUM(qty) GROUP BY order_number things. They are entirely separate from a query to get details. The results of the two queries need to be interleaved. A good way to do this is to create each order as a tuple ( list_of_details, appropriate summary ). Then the display is easy {% for order in orderList %} {% for line in order.0 %} {{ line }} {% endfor %} {{ order.1 }} {% endfor %} The hard part is interleaving the two queries. details = Line.objects.all() ddict = defaultdict( list ) for d in details: ddict[d.order_number].append(d) interleaved= [] subtotals = ... Django query to get subtotals ... for s in subtotals: interleaved.append( ( ddict[s.order], s.totals ) ) This interleaved object can be given to your template for rendering. A: You could compute the subtotals in Python in the Django view. The sub-totals could be stored in instances of the Model object with an attribute indicating that it's a sub-total. To keep the report template simple you could insert the sub-total objects in the right places in the result list and use the sub-total attribute to render the sub-total lines differently. A: Assuming you're not going to use any order-specific fields, you could perform single DB query followed by some python calculations: from itertools import groupby items = OrderItem.objects.select_related('order').order_by('order').all() # order_by is essential items_by_order = dict(groupby(items, lambda x: x.order)) for order, items in items_by_order: items_by_order[order]['subtotals'] = ... # calculate subtotals for all needed fields This is more generic approach compared to using separeate SQL query for calculating subtotals which imposes liability of syncronising WHERE clauses on both queries. You can also use any agregate function, not only thoses available on DB side.
Generating lists/reports with in-line summaries in Django
I am trying to write a view that will generate a report which displays all Items within my Inventory system, and provide summaries at a certain point. This report is purely just an HTML template by the way. In my case, each Item is part of an Order. An Order can have several items, and I want to be able to display SUM-based summaries after the end of each order. So the report kind of looks like this: Order #25 <Qty> <Qty Sold> <Cost> <Cost Value> Some Item 2 1 29.99 29.99 Another Item 4 0 10.00 40.00 <Subtotal Line> 6 1 39.99 69.99 Order #26 <Qty> <Qty Sold> <Cost> <Cost Value> ... Etc, you get the point Now, I'm perfectly capable of displaying all the values and already have a report showing all the Items, but I have no idea how I can place Subtotals within the report like that without doing a lot of queries. The Quantity, Qty Sold, and Cost fields are just part of the Item model, and Cost Value is just a simple model function. Any help would be appreciated. Thanks in advance :-)
[ "Subtotals are SELECT SUM(qty) GROUP BY order_number things.\nThey are entirely separate from a query to get details.\nThe results of the two queries need to be interleaved. A good way to do this is to create each order as a tuple ( list_of_details, appropriate summary ).\nThen the display is easy\n{% for order in orderList %}\n {% for line in order.0 %}\n {{ line }}\n {% endfor %}\n {{ order.1 }}\n{% endfor %}\n\nThe hard part is interleaving the two queries.\ndetails = Line.objects.all()\nddict = defaultdict( list )\nfor d in details:\n ddict[d.order_number].append(d)\n\ninterleaved= []\nsubtotals = ... Django query to get subtotals ... \nfor s in subtotals:\n interleaved.append( ( ddict[s.order], s.totals ) )\n\nThis interleaved object can be given to your template for rendering.\n", "You could compute the subtotals in Python in the Django view.\nThe sub-totals could be stored in instances of the Model object with an attribute indicating that it's a sub-total. To keep the report template simple you could insert the sub-total objects in the right places in the result list and use the sub-total attribute to render the sub-total lines differently.\n", "Assuming you're not going to use any order-specific fields, you could perform single DB query followed by some python calculations:\nfrom itertools import groupby\nitems = OrderItem.objects.select_related('order').order_by('order').all() # order_by is essential\nitems_by_order = dict(groupby(items, lambda x: x.order))\nfor order, items in items_by_order:\n items_by_order[order]['subtotals'] = ... # calculate subtotals for all needed fields\n\nThis is more generic approach compared to using separeate SQL query for calculating subtotals which imposes liability of syncronising WHERE clauses on both queries. You can also use any agregate function, not only thoses available on DB side.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "django", "list", "python", "report" ]
stackoverflow_0000546385_django_list_python_report.txt
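A fuller sketch of the single-query report suggested in the last answer above; the field names (qty, qty_sold) and the cost_value() method are guesses from the report columns in the question, and Item is the question's model.

from itertools import groupby

items = Item.objects.select_related('order').order_by('order')

report = []
for order_id, group in groupby(items, key=lambda item: item.order_id):
    lines = list(group)   # materialise the group before groupby moves on
    subtotal = {
        'qty': sum(line.qty for line in lines),
        'qty_sold': sum(line.qty_sold for line in lines),
        'cost_value': sum(line.cost_value() for line in lines),
    }
    report.append((lines[0].order, lines, subtotal))
# hand report to the template: one (order, lines, subtotal) tuple per order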
Q: Send Info from Script to Module Python Hi I wonder how you can send info over to a module An Example main.py Looks like this from module import * print helloworld() module.py looks like this def helloworld(): print "Hello world!" Anyway i want to send over info from main.py to module.py is it possible? A: It is not clear what you mean by "send info", but if you but the typical way of passing a value would be with a function parameter. main.py: helloworld("Hello world!") module.py def helloworld(message): print message Is that what your looking for? Also the two uses of print in your example are redundant. Addendum: It might be useful for you to read the Python documentation regarding function declarations, or, alternatively, most Python introductory tutorials would cover the same ground in fewer words. Anything you read there is going to apply equally regardless of whether the function is in the same module or another module. A: Yes. You can either send over information when calling functions/classes in module, or you can assign values in module's namespace (not so preferable). As an example: # module.py # good example def helloworld(name): print "Hello, %s" % name # main.py # good example import module module.helloworld("Jim") And for the bad: don't do it: # module.py # bad example def helloworld(): print "Hello, %s" % name # main.py # bad example import module module.name = "Jim" module.helloworld()
Send Info from Script to Module Python
Hi, I wonder how you can send info over to a module. An example: main.py looks like this: from module import * print helloworld() module.py looks like this: def helloworld(): print "Hello world!" Anyway, I want to send info over from main.py to module.py. Is it possible?
[ "It is not clear what you mean by \"send info\", but if you but the typical way of passing a value would be with a function parameter.\nmain.py:\nhelloworld(\"Hello world!\")\n\nmodule.py\ndef helloworld(message):\n print message\n\nIs that what your looking for? Also the two uses of print in your example are redundant.\nAddendum: It might be useful for you to read the Python documentation regarding function declarations, or, alternatively, most Python introductory tutorials would cover the same ground in fewer words. Anything you read there is going to apply equally regardless of whether the function is in the same module or another module.\n", "Yes. You can either send over information when calling functions/classes in module, or you can assign values in module's namespace (not so preferable).\nAs an example:\n# module.py\n# good example\ndef helloworld(name):\n print \"Hello, %s\" % name\n\n# main.py\n# good example\nimport module\nmodule.helloworld(\"Jim\")\n\nAnd for the bad: don't do it:\n# module.py\n# bad example\ndef helloworld(): \n print \"Hello, %s\" % name\n\n# main.py\n# bad example\nimport module\nmodule.name = \"Jim\"\nmodule.helloworld()\n\n" ]
[ 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000547450_python.txt
Q: problem ordering by votes with django-voting I have a model Post, and a model Vote. Vote (form django-voting) is essentially just a pointer to a Post and -1, 0, or 1. There is also Tourn, which is a start date and an end date. A Post made between the start and end of a Tourn is submitted to that tournament. For the sake of rep calculation, I'm trying to find the top 3 winners of a tournament. This is what I have: posts = Post.objects.filter(status=2, created_at__range=(tourn.start_date, tourn.end_date)) start = tourn.start_date - timedelta(days=1) end = tourn.end_date + timedelta(days=1) qn = connection.ops.quote_name ctype = ContentType.objects.get_for_model(Post) posts.extra(select={'score': """ SELECT SUM(vote) FROM %s WHERE content_type_id = %s AND object_id = %s.id AND voted_at > DATE(%s) AND voted_at < DATE(%s) """ % (qn(Vote._meta.db_table), ctype.id, qn(Post._meta.db_table), start, end)}, order_by=['-score']) if tourn.limit_to_category: posts.filter(category=tourn.category) if len(posts) >= 1: tourn_winners_1.append(posts[0].author) resp += " 1: " + posts[0].author.username + "\n" if len(posts) >= 2: tourn_winners_2.append(posts[1].author) resp += " 2: " + posts[1].author.username + "\n" if len(posts) >= 3: tourn_winners_3.append(posts[2].author) resp += " 3: " + posts[2].author.username + "\n" It seems simple enough, but for some reason the results are wrong. The query that gets made is thus: SELECT "blog_post"."id", "blog_post"."title", "blog_post"."slug", "blog_post"."a uthor_id", "blog_post"."creator_ip", "blog_post"."body", "blog_post"."tease", "b log_post"."status", "blog_post"."allow_comments", "blog_post"."publish", "blog_p ost"."created_at", "blog_post"."updated_at", "blog_post"."markup", "blog_post"." tags", "blog_post"."category_id" FROM "blog_post" WHERE ("blog_post"."status" = 2 AND "blog_post"."created_at" BETWEEN 2008-12-21 00:00:00 and 2009-01-04 00:00 :00) ORDER BY "blog_post"."publish" DESC It seems that posts.extra() isn't getting applied to the query at all... A: I think you need to assign posts to the return value of posts.extra(): posts = posts.extra(select={'score': """ SELECT SUM(vote) FROM %s WHERE content_type_id = %s AND object_id = %s.id AND voted_at > DATE(%s) AND voted_at < DATE(%s) """ % (qn(Vote._meta.db_table), ctype.id, qn(Post._meta.db_table), start, end)}, order_by=['-score'])
problem ordering by votes with django-voting
I have a model Post, and a model Vote. Vote (form django-voting) is essentially just a pointer to a Post and -1, 0, or 1. There is also Tourn, which is a start date and an end date. A Post made between the start and end of a Tourn is submitted to that tournament. For the sake of rep calculation, I'm trying to find the top 3 winners of a tournament. This is what I have: posts = Post.objects.filter(status=2, created_at__range=(tourn.start_date, tourn.end_date)) start = tourn.start_date - timedelta(days=1) end = tourn.end_date + timedelta(days=1) qn = connection.ops.quote_name ctype = ContentType.objects.get_for_model(Post) posts.extra(select={'score': """ SELECT SUM(vote) FROM %s WHERE content_type_id = %s AND object_id = %s.id AND voted_at > DATE(%s) AND voted_at < DATE(%s) """ % (qn(Vote._meta.db_table), ctype.id, qn(Post._meta.db_table), start, end)}, order_by=['-score']) if tourn.limit_to_category: posts.filter(category=tourn.category) if len(posts) >= 1: tourn_winners_1.append(posts[0].author) resp += " 1: " + posts[0].author.username + "\n" if len(posts) >= 2: tourn_winners_2.append(posts[1].author) resp += " 2: " + posts[1].author.username + "\n" if len(posts) >= 3: tourn_winners_3.append(posts[2].author) resp += " 3: " + posts[2].author.username + "\n" It seems simple enough, but for some reason the results are wrong. The query that gets made is thus: SELECT "blog_post"."id", "blog_post"."title", "blog_post"."slug", "blog_post"."a uthor_id", "blog_post"."creator_ip", "blog_post"."body", "blog_post"."tease", "b log_post"."status", "blog_post"."allow_comments", "blog_post"."publish", "blog_p ost"."created_at", "blog_post"."updated_at", "blog_post"."markup", "blog_post"." tags", "blog_post"."category_id" FROM "blog_post" WHERE ("blog_post"."status" = 2 AND "blog_post"."created_at" BETWEEN 2008-12-21 00:00:00 and 2009-01-04 00:00 :00) ORDER BY "blog_post"."publish" DESC It seems that posts.extra() isn't getting applied to the query at all...
[ "I think you need to assign posts to the return value of posts.extra():\nposts = posts.extra(select={'score': \"\"\"\n SELECT SUM(vote)\n FROM %s\n WHERE content_type_id = %s\n AND object_id = %s.id\n AND voted_at > DATE(%s)\n AND voted_at < DATE(%s)\n \"\"\" % (qn(Vote._meta.db_table), ctype.id, qn(Post._meta.db_table), start, end)},\n order_by=['-score'])\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_voting", "python" ]
stackoverflow_0000544597_django_django_voting_python.txt
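The same pitfall appears twice in the question's code: QuerySet methods return new querysets rather than modifying the one they are called on, so every call needs its result assigned back. A sketch, with score_sql standing in for the select dict built in the question:

posts = Post.objects.filter(status=2,
                            created_at__range=(tourn.start_date, tourn.end_date))
posts = posts.extra(select={'score': score_sql},
                    order_by=['-score'])               # extra() returns a new QuerySet
if tourn.limit_to_category:
    posts = posts.filter(category=tourn.category)      # so does filter(): assign it back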
Q: Python using result of function for Regular Expression Substitution I have a block of text, and for every regex match, I want to substitute that match with the return value from another function. The argument to this function is of course the matched text. I have been having trouble trying to come up with a one pass solution to this problem. It feels like it should be pretty simple. A: Right from the documentation: >>> def dashrepl(matchobj): ... if matchobj.group(0) == '-': return ' ' ... else: return '-' >>> re.sub('-{1,2}', dashrepl, 'pro----gram-files') 'pro--gram files' A: Python-agnostic: Match everything before and everything after your text to replace. /^(.*?)(your regexp to match)(.*)$/ Then you have the next before and after the text you're going to replace. The rest is easy -- just insert the result of your function between the two strings.
Python using result of function for Regular Expression Substitution
I have a block of text, and for every regex match, I want to substitute that match with the return value from another function. The argument to this function is of course the matched text. I have been having trouble trying to come up with a one pass solution to this problem. It feels like it should be pretty simple.
[ "Right from the documentation:\n>>> def dashrepl(matchobj):\n... if matchobj.group(0) == '-': return ' '\n... else: return '-'\n>>> re.sub('-{1,2}', dashrepl, 'pro----gram-files')\n'pro--gram files'\n\n", "Python-agnostic: Match everything before and everything after your text to replace.\n/^(.*?)(your regexp to match)(.*)$/\n\nThen you have the next before and after the text you're going to replace. The rest is easy -- just insert the result of your function between the two strings.\n" ]
[ 14, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0000547798_python_regex.txt
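Applied to the question directly: pass the function itself as the replacement argument of re.sub, and it is called once per match with the match object. Here transform() stands in for whatever function produces the replacement text.

import re

def transform(text):
    return text.upper()   # stand-in for the real replacement logic

def repl(match):
    return transform(match.group(0))

result = re.sub(r'\w+', repl, "one pass over the text")
# result == "ONE PASS OVER THE TEXT"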
Q: dead simple Django file uploading not working :-(( I am trying desperately to do a very simple file upload with Django, without (for now) bothering with templating & co. My HTML is: <form id="uploader" action="bytes/" enctype="multipart/form-data" method="post" > <input type="file" name="uploaded"/> <input type="submit" value="upload"/> </form> My Python is (knowing it is a POST): if path=="bytes/": if 'uploaded' in request.FILES: return HttpResponse("you uploaded a file") else: return HttpResponse("did not get the file") I don't understand why I'm always getting the "did not get the file" message... Can anyone help me, please??? A: Try changing "if 'uploaded' in request.FILES:" to "if request.FILES". You might want to take a look at the documentation as well; there's an example-- http://docs.djangoproject.com/en/dev/topics/http/file-uploads/
dead simple Django file uploading not working :-((
I am trying desperately to do a very simple file upload with Django, without (for now) bothering with templating & co. My HTML is: <form id="uploader" action="bytes/" enctype="multipart/form-data" method="post" > <input type="file" name="uploaded"/> <input type="submit" value="upload"/> </form> My Python is (knowing it is a POST): if path=="bytes/": if 'uploaded' in request.FILES: return HttpResponse("you uploaded a file") else: return HttpResponse("did not get the file") I don't understand why I'm always getting the "did not get the file" message... Can anyone help me, please???
[ "Try changing \"if 'uploaded' in request.FILES:\" to \"if request.FILES\".\nYou might want to take a look at the documentation as well; there's an example-- http://docs.djangoproject.com/en/dev/topics/http/file-uploads/\n" ]
[ 6 ]
[]
[]
[ "django", "python", "upload" ]
stackoverflow_0000547743_django_python_upload.txt
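For reference, a minimal upload view sketched along the lines of the answer — the view name and URL wiring are hypothetical, and note that request.FILES is only populated on a POST whose form declares enctype="multipart/form-data":

from django.http import HttpResponse

def upload(request):
    # request.FILES is a dict-like object keyed by the form field name.
    if request.method == 'POST' and 'uploaded' in request.FILES:
        f = request.FILES['uploaded']
        return HttpResponse("you uploaded %s (%d bytes)" % (f.name, f.size))
    return HttpResponse("did not get the file")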
Q: Obtaining all possible states of an object for a NP-Complete(?) problem in Python Not sure that the example (nor the actual usecase) qualifies as NP-Complete, but I'm wondering about the most Pythonic way to do the below assuming that this was the algorithm available. Say you have : class Person: def __init__(self): self.status='unknown' def set(self,value): if value: self.status='happy' else : self.status='sad' ... blah . Maybe it's got their names or where they live or whatev. and some operation that requires a group of Persons. (The key value is here whether the Person is happy or sad.) Hence, given PersonA, PersonB, PersonC, PersonD - I'd like to end up a list of the possible 2**4 combinations of sad and happy Persons. i.e. [ [ PersonA.set(true), PersonB.set(true), PersonC.set(true), PersonD.set(true)], [ PersonA.set(true), PersonB.set(true), PersonC.set(true), PersonD.set(false)], [ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(true)], [ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(false)], etc.. Is there a good Pythonic way of doing this? I was thinking about list comprehensions (and modifying the object so that you could call it and get returned two objects, true and false), but the comprehension formats I've seen would require me to know the number of Persons in advance. I'd like to do this independent of the number of persons. EDIT : Assume that whatever that operation that I was going to run on this is part of a larger problem set - we need to test out all values of Person for a given set in order to solve our problem. (i.e. I know this doesn't look NP-complete right now =) ) any ideas? Thanks! A: I think this could do it: l = list() for i in xrange(2 ** n): # create the list of n people sublist = [None] * n for j in xrange(n): sublist[j] = Person() sublist[j].set(i & (1 << j)) l.append(sublist) Note that if you wrote Person so that its constructor accepted the value, or such that the set method returned the person itself (but that's a little weird in Python), you could use a list comprehension. With the constructor way: l = [ [Person(i & (1 << j)) for j in xrange(n)] for i in xrange(2 ** n)] The runtime of the solution is O(n 2**n) as you can tell by looking at the loops, but it's not really a "problem" (i.e. a question with a yes/no answer) so you can't really call it NP-complete. See What is an NP-complete in computer science? for more information on that front. A: You can use a cartesian product to get all possible combinations of people and states. Requires Python 2.6+ import itertools people = [person_a,person_b,person_c] states = [True,False] all_people_and_states = itertools.product(people,states) The variable all_people_and_states contains a list of tuples (x,y) where x is a person and y is either True or False. It will contain all possible pairings of people and states. A: According to what you've stated in your problem, you're right -- you do need itertools.product, but not exactly the way you've stated. import itertools truth_values = itertools.product((True, False), repeat = 4) people = (person_a, person_b, person_c, person_d) all_people_and_states = [[person(truth) for person, truth in zip(people, combination)] for combination in truth_values] That should be more along the lines of what you mentioned in your question.
Obtaining all possible states of an object for a NP-Complete(?) problem in Python
Not sure that the example (nor the actual use case) qualifies as NP-Complete, but I'm wondering about the most Pythonic way to do the below assuming that this was the algorithm available. Say you have: class Person: def __init__(self): self.status='unknown' def set(self,value): if value: self.status='happy' else : self.status='sad' ... blah. Maybe it's got their names or where they live or whatev. and some operation that requires a group of Persons. (The key value here is whether the Person is happy or sad.) Hence, given PersonA, PersonB, PersonC, PersonD - I'd like to end up with a list of the possible 2**4 combinations of sad and happy Persons. i.e. [ [ PersonA.set(true), PersonB.set(true), PersonC.set(true), PersonD.set(true)], [ PersonA.set(true), PersonB.set(true), PersonC.set(true), PersonD.set(false)], [ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(true)], [ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(false)], etc.. Is there a good Pythonic way of doing this? I was thinking about list comprehensions (and modifying the object so that you could call it and get returned two objects, true and false), but the comprehension formats I've seen would require me to know the number of Persons in advance. I'd like to do this independent of the number of persons. EDIT: Assume that whatever operation I was going to run on this is part of a larger problem set - we need to test out all values of Person for a given set in order to solve our problem. (i.e. I know this doesn't look NP-complete right now =) ) any ideas? Thanks!
[ "I think this could do it:\nl = list()\nfor i in xrange(2 ** n):\n # create the list of n people\n sublist = [None] * n\n for j in xrange(n):\n sublist[j] = Person()\n sublist[j].set(i & (1 << j))\n l.append(sublist)\n\nNote that if you wrote Person so that its constructor accepted the value, or such that the set method returned the person itself (but that's a little weird in Python), you could use a list comprehension. With the constructor way:\nl = [ [Person(i & (1 << j)) for j in xrange(n)] for i in xrange(2 ** n)]\n\nThe runtime of the solution is O(n 2**n) as you can tell by looking at the loops, but it's not really a \"problem\" (i.e. a question with a yes/no answer) so you can't really call it NP-complete. See What is an NP-complete in computer science? for more information on that front.\n", "You can use a cartesian product to get all possible combinations of people and states. Requires Python 2.6+\nimport itertools\npeople = [person_a,person_b,person_c]\nstates = [True,False]\nall_people_and_states = itertools.product(people,states)\n\nThe variable all_people_and_states contains a list of tuples (x,y) where x is a person and y is either True or False. It will contain all possible pairings of people and states.\n", "According to what you've stated in your problem, you're right -- you do need itertools.product, but not exactly the way you've stated.\nimport itertools\ntruth_values = itertools.product((True, False), repeat = 4)\npeople = (person_a, person_b, person_c, person_d)\nall_people_and_states = [[person(truth) for person, truth in zip(people, combination)] for combination in truth_values]\n\nThat should be more along the lines of what you mentioned in your question.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "combinatorics", "iteration", "python" ]
stackoverflow_0000539676_combinatorics_iteration_python.txt
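A runnable sketch in the spirit of the itertools answers above — it calls set() on each Person instance rather than calling the instance itself, since the Person class as defined is not callable (requires Python 2.6+ for itertools.product):

import itertools

class Person(object):
    def __init__(self):
        self.status = 'unknown'
    def set(self, value):
        self.status = 'happy' if value else 'sad'

n = 4
all_groups = []
for combination in itertools.product((True, False), repeat=n):
    group = [Person() for _ in range(n)]
    for person, value in zip(group, combination):
        person.set(value)
    all_groups.append(group)

print(len(all_groups))  # 16, i.e. 2 ** 4 groups of four Persons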
Q: How to add middleware to Appengine's webapp framework? I'm using the appengine webapp framework (link). Is it possible to add Django middleware? I can't find any examples. I'm currently trying to get the FirePython middleware to work (link). A: It's easy: You create the WSGI application as per normal, then wrap that application in your WSGI middleware before executing it. See this code from Bloog to see how firepython is added as middleware. A: The GAE webapp framework does not map one to one to the Django framework. It would be hard to do what you want without implementing some kind of adapter yourself, I do not know of any third party handler adapters to do this. That said, I generally use the app-engine-patch so I can use the latest 1.0.2 Django release with AppEngine, and then you can just include the Django middleware the normal way with the setup.py file. If you needed to, you could probably look through the app-engine-patch's adapter to see how they do it, and start with that as a framework. A: "Middleware" as understood by Django is a kind of request/response processor, quite different from what WSGI calls "middleware". Think: django-like middleware will add a session attribute to the request object based on what Beaker (WSGI middleware) has put in environ['beaker.session']. While adding WSGI middleware to the stack should be straightforward (you already work on WSGI level in your main.py), adding a request/response processor depends on how request and response are abstracted from WSGI. How this can be done using Werkzeug (which is a basic WSGI toolset) is described in Werkzeug's wiki and in one of its contrib modules.
How to add middleware to Appengine's webapp framework?
I'm using the appengine webapp framework (link). Is it possible to add Django middleware? I can't find any examples. I'm currently trying to get the FirePython middleware to work (link).
[ "It's easy: You create the WSGI application as per normal, then wrap that application in your WSGI middleware before executing it.\nSee this code from Bloog to see how firepython is added as middleware.\n", "The GAE webapp framework does not map one to one to the Django framework. It would be hard to do what you want without implementing some kind of adapter yourself, I do not know of any third party handler adapters to do this.\nThat said, I generally use the app-engine-patch so I can use the latest 1.0.2 Django release with AppEngine, and then you can just include the Django middleware the normal way with the setup.py file. If you needed to, you could probably look through the app-engine-patch's adapter to see how they do it, and start with that as a framework.\n", "\"Middleware\" as understood by Django is a kind of request/response processor, quite different from what WSGI calls \"middleware\". Think: django-like middleware will add session attribute to request object basing on what Beaker (WSGI middleware) has put in environ['beaker.session']. While adding WSGI middleware to the stack should be straightforward (you already work on WSGI level in your main.py), adding request/response processor depends on how request and response are abstracted from WSGI.\nHow this can be done using Werkzeug (which is basic WSGI toolset) is described in Werkzeug's wiki and in one of its contrib modules.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "django", "django_middleware", "google_app_engine", "middleware", "python" ]
stackoverflow_0000352079_django_django_middleware_google_app_engine_middleware_python.txt
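A sketch of the WSGI-level wrapping the first answer describes, assuming the old Python 2.5 webapp runtime; the middleware class here is a do-nothing placeholder standing in for something like FirePython:

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainHandler(webapp.RequestHandler):
    def get(self):
        self.response.out.write('hello')

class PassThroughMiddleware(object):
    # WSGI middleware holds the wrapped app and delegates each request to it.
    def __init__(self, app):
        self.app = app
    def __call__(self, environ, start_response):
        return self.app(environ, start_response)

application = PassThroughMiddleware(
    webapp.WSGIApplication([('/', MainHandler)], debug=True))

def main():
    run_wsgi_app(application)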
Q: Should I use Django's contrib applications or build my own? The Django apps come with their own features and design. If your requirements don't match 100% with the features of the contrib app, you end up customizing and tweaking the app. I feel this involves more effort than just building your own app to fit your requirements. What do you think? A: It all depends. We had a need for something that was 98% similar to contrib.flatpages. We could have monkeypatched it, but we decided that the code was so straightforward that we would just copy and fork it. It worked out fine. Doing this with contrib.auth, on the other hand, might be a bad move given its interaction with contrib.admin & contrib.session. A: I'd also check out third-party re-usable apps before building my own. Many are listed on Django Plug(g)ables, and most are hosted on Google Code, GitHub or BitBucket. A: Most of the apps in django.contrib are written very well and are highly extensible. Don't quite like how comments works? Subclass the models and forms within it, adding your own functionality and you have a working comment system that fits your site's schema, with little effort. I think the best part when you extend the contrib apps is you're not really doing anything hacky, you're just writing (mostly) regular Python code to add the functionality.
Should I use Django's contrib applications or build my own?
The Django apps come with their own features and design. If your requirements don't match 100% with the features of the contrib app, you end up customizing and tweaking the app. I feel this involves more effort than just building your own app to fit your requirements. What do you think?
[ "It all depends. We had a need for something that was 98% similar to contrib.flatpages. We could have monkeypatched it, but we decided that the code was so straightforward that we would just copy and fork it. It worked out fine.\nDoing this with contrib.auth, on the other hand, might be a bad move given its interaction with contrib.admin & contrib.session.\n", "I'd also check out third-party re-usable apps before building my own. Many are listed on Django Plug(g)ables, and most are hosted on Google Code, GitHub or BitBucket.\n", "Most of the apps in django.contrib are written very well and are highly extensible.\nDon't like quite how comments works? Subclass the models and forms within it, adding your own functionality and you have a working comment system that fits your sites schema, with little effort.\nI think the best part when you extend the contrib apps is you're not really doing anything hacky, you're just writing (mostly) regular Python code to add the functionality.\n" ]
[ 7, 6, 4 ]
[]
[]
[ "django", "django_contrib", "python" ]
stackoverflow_0000542594_django_django_contrib_python.txt
Q: Delete Chars in Python does anybody know how to delete all characters after a specific character?? like this: http://google.com/translate_t into http://google.com A: if you're asking about an abstract string and not a url you could go with: >>> astring ="http://google.com/translate_t" >>> astring.rpartition('/')[0] http://google.com A: For urls, using urlparse: >>> import urlparse >>> parts = urlparse.urlsplit('http://google.com/path/to/resource?query=spam#anchor') >>> parts ('http', 'google.com', '/path/to/resource', 'query=spam', 'anchor') >>> urlparse.urlunsplit((parts[0], parts[1], '', '', '')) 'http://google.com' For arbitrary strings, using re: >>> import re >>> re.split(r'\b/\b', 'http://google.com/path/to/resource', 1) ['http://google.com', 'path/to/resource'] A: str="http://google.com/translate_t" shortened=str[0:str.rfind("/")] Should do it. str[a:b] returns a substring in python. And rfind is used to find the index of a character sequence, starting at the end of the string. A: If you know the position of the character then you can use the slice syntax to create a new string: In [2]: s1 = "abc123" In [3]: s2 = s1[:3] In [4]: print s2 abc To find the position you can use the find() or index() methods of strings. The split() and partition() methods may be useful, too. Those methods are documented in the Python docs for sequences. To remove a part of a string is impossible because strings are immutable. If you want to process URLs then you should definitely use the urlparse library. It lets you split an URL into its parts. If you just want to remove a part of the file path then you will have to do that still by yourself.
Delete Chars in Python
does anybody know how to delete all characters after a specific character?? like this: http://google.com/translate_t into http://google.com
[ "if you're asking about an abstract string and not url you could go with:\n>>> astring =\"http://google.com/translate_t\"\n>>> astring.rpartition('/')[0]\nhttp://google.com\n\n", "For urls, using urlparse:\n>>> import urlparse\n>>> parts = urlparse.urlsplit('http://google.com/path/to/resource?query=spam#anchor')\n>>> parts\n('http', 'google.com', '/path/to/resource', 'query=spam', 'anchor')\n>>> urlparse.urlunsplit((parts[0], parts[1], '', '', ''))\n'http://google.com'\n\nFor arbitrary strings, using re:\n>>> import re\n>>> re.split(r'\\b/\\b', 'http://google.com/path/to/resource', 1)\n['http://google.com', 'path/to/resource']\n\n", "str=\"http://google.com/translate_t\"\nshortened=str[0:str.rfind(\"/\")]\n\nShould do it. str[a:b] returns a substring in python. And rfind is used to find the index of a character sequence, starting at the end of the string.\n", "If you know the position of the character then you can use the slice syntax to to create a new string:\nIn [2]: s1 = \"abc123\"\nIn [3]: s2 = s1[:3]\nIn [4]: print s2\nabc\n\nTo find the position you can use the find() or index() methods of strings.\nThe split() and partition() methods may be useful, too.\nThose methods are documented in the Python docs for sequences.\nTo remove a part of a string is imposible because strings are immutable.\nIf you want to process URLs then you should definitely use the urlparse library. It lets you split an URL into its parts. If you just want remove a part of the file path then you will have to do that still by yourself.\n" ]
[ 6, 5, 2, 2 ]
[]
[]
[ "python" ]
stackoverflow_0000549130_python.txt
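One more standard-library option in the same vein as the rpartition answer above — rsplit with maxsplit=1 also drops everything after the last separator:

url = "http://google.com/translate_t"
print(url.rsplit('/', 1)[0])   # 'http://google.com'
print(url.rpartition('/')[0])  # 'http://google.com'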
Q: In the windows python console, how to make Tab = four spaces? Hello, I would like tabbing to give me four spaces when I am in the Python console. Any ideas? A: Download and install AutoHotkey Write this script: SetTitleMatchMode 2 #IfWinActive python tab:: Send, {SPACE} Send, {SPACE} Send, {SPACE} Send, {SPACE} Save it as tab-to-space.ahk, and double-click on the file. Note: you might have to capitalize "Python" to match your window title. Or you can have "ython" and it will match Jython too.
In the windows python console, how to make Tab = four spaces?
Hello, I would like tabbing to give me four spaces when I am in the Python console. Any ideas?
[ "\nDownload and install AutoHotkey\nWrite this script:\nSetTitleMatchMode 2\n#IfWinActive python\ntab::\nSend, {SPACE}\nSend, {SPACE}\nSend, {SPACE}\nSend, {SPACE}\n\n\nSave it as tab-to-space.ahk, and doubleclick on the file.\nNote: you might have to captalize \"Python\" to match your window tite. Or you can have \"yhton\" and it will match Jython too.\n" ]
[ 5 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0000549340_python_windows.txt
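A Python-side alternative sketch, under the assumption that a readline implementation is available in the console (on Windows that typically means the third-party pyreadline package); placing this in the file named by the PYTHONSTARTUP environment variable binds Tab to four literal spaces:

# Put this in the file that PYTHONSTARTUP points to.
import readline
# Inputrc-style macro: the Tab key inserts four spaces.
readline.parse_and_bind(r'"\t": "    "')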
Q: Python extend with an empty list bug? Why does python 2.5.2 have the following behavior >>>[2].extend([]) == [2] False >>> [2].extend([]) == None True $ python --version Python 2.5.2 I assume I'm not understanding something here, but intuitively I'd think that [2].extend([]) should yield [2] A: Extend is a method of list, which modifies it but doesn't return self (returning None instead). If you need the modified value as the expression value, use +, as in [2]+[]. A: Exactly. >>> x = [2] >>> x.extend([]) # Nothing is printed because the return value is None >>> x == [2] True >>> x [2] They do this on purpose so that you will remember that the extend function is actually modifying the list in-place. Same with sort(). It always returns None.
Python extend with an empty list bug?
Why does python 2.5.2 have the following behavior >>>[2].extend([]) == [2] False >>> [2].extend([]) == None True $ python --version Python 2.5.2 I assume I'm not understanding something here, but intuitively I'd think that [2].extend([]) should yield [2]
[ "Extend is a method of list, which modifies it but doesn't return self (returning None instead). If you need the modified value as the expression value, use +, as in [2]+[].\n", "Exactly.\n>>> x = [2]\n>>> x.extend([]) # Nothing is printed because the return value is None\n>>> x == [2]\nTrue\n>>> x\n[2]\n\nThey do this on purpose so that you will remember that the extend function is actually modifying the list in-place. Same with sort(). It always returns None.\n" ]
[ 50, 14 ]
[]
[]
[ "extend", "list", "python" ]
stackoverflow_0000549741_extend_list_python.txt
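A short sketch contrasting the in-place methods with their expression-valued counterparts, for quick reference:

x = [2]
x.extend([])            # mutates x in place, returns None
print(x == [2])         # True

print([2] + [] == [2])  # True: + builds and returns a new list
print(sorted([3, 1]))   # [1, 3]: sorted() returns a new list; list.sort() returns None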
Q: Simple unique non-priority queue system I'm working on a simple web crawler in python and I wan't to make a simple queue class, but I'm not quite sure the best way to start. I want something that holds only unique items to process, so that the crawler will only crawl each page once per script run (simply to avoid infinite looping). Can anyone give me or point me to a simple queue example that I could run off of? A: I'd just use a set, it doesn't maintain order but it will help you maintain uniqueness: >>> q = set([9, 8, 7, 7, 8, 5, 4, 1]) >>> q.pop() 1 >>> q.pop() 4 >>> q.pop() 5 >>> q.add(3) >>> q.add(3) >>> q.add(3) >>> q.add(3) >>> q set([3, 7, 8, 9] A: A very simple example would be to stuff each item's URL into a dict, but as the key, not as the value. Then only process the next item if it's url is not in that dict's keys: visited = {} # grab next url from somewhere if url not in visited.keys(): # process url visited[url] = 1 # or whatever, the value is unimportant # repeat with next url You can get more efficient, of course, but this would be simple. A: If I understand correctly, you want to visit each page only once. I think the best way to do this would be to keep a queue of pages still to visit, and a set of visited pages. The problem with the other posted solution is that once you pop a page from the queue, you no longer have a record of whether or not you've been there. I'd use a combination of a set and a list: visited = set() to_visit = [] def queue_page(url): if url not in visited: to_visit.append(url) def visit(url): visited.add(url) ... # some processing # Add all found links to the queue for link in links: queue_page(link) def page_iterator(start_url): visit(start_url) try: yield to_visit.pop(0) except IndexError: raise StopIteration for page in page_iterator(start): visit(page) Of course this a bit of a contrived example, and you'd probably be best off encapsulating this in some way, but it illustrates the concept. A: Why not use a list if you need order (or even a heapq, as was formerly suggested by zacherates before a set was suggested instead) and also use a set to check for duplicates? A: I would extend the list class to add unique-testing code to whatever methods of the list you are using. This could range from simply adding a .append_unique(item) to the class, or overriding all of append, insert, extend, __setitem__, __setslice__, etc, to throw an exception (or be silent, if you wish) in the case of a non-unique item. For example, if you just wanted to make sure the append method maintained uniqueness: class UniqueList(list): def append(self, item): if item not in self: list.append(self, item)
Simple unique non-priority queue system
I'm working on a simple web crawler in python and I want to make a simple queue class, but I'm not quite sure the best way to start. I want something that holds only unique items to process, so that the crawler will only crawl each page once per script run (simply to avoid infinite looping). Can anyone give me or point me to a simple queue example that I could run off of?
[ "I'd just use a set, it doesn't maintain order but it will help you maintain uniqueness:\n>>> q = set([9, 8, 7, 7, 8, 5, 4, 1])\n>>> q.pop()\n1\n>>> q.pop()\n4\n>>> q.pop()\n5\n>>> q.add(3)\n>>> q.add(3)\n>>> q.add(3)\n>>> q.add(3)\n>>> q\nset([3, 7, 8, 9]\n\n", "A very simple example would be to stuff each item's URL into a dict, but as the key, not as the value. Then only process the next item if it's url is not in that dict's keys:\nvisited = {}\n# grab next url from somewhere\nif url not in visited.keys():\n # process url\n visited[url] = 1 # or whatever, the value is unimportant\n# repeat with next url\n\nYou can get more efficient, of course, but this would be simple.\n", "If I understand correctly, you want to visit each page only once. I think the best way to do this would be to keep a queue of pages still to visit, and a set of visited pages. The problem with the other posted solution is that once you pop a page from the queue, you no longer have a record of whether or not you've been there.\nI'd use a combination of a set and a list:\nvisited = set()\nto_visit = []\n\ndef queue_page(url):\n if url not in visited:\n to_visit.append(url)\n\ndef visit(url):\n visited.add(url)\n ... # some processing\n\n # Add all found links to the queue\n for link in links:\n queue_page(link)\n\ndef page_iterator(start_url):\n visit(start_url)\n try:\n yield to_visit.pop(0)\n except IndexError:\n raise StopIteration\n\nfor page in page_iterator(start):\n visit(page)\n\nOf course this a bit of a contrived example, and you'd probably be best off encapsulating this in some way, but it illustrates the concept.\n", "Why not use a list if you need order (or even a heapq, as was formerly suggested by zacherates before a set was suggested instead) and also use a set to check for duplicates?\n", "I would extend the list class to add unique-testing code to whatever methods of the list you are using. This could range from simply adding a .append_unique(item) to the class, or overriding all of append, insert, extend, __setitem__, __setslice__, etc, to throw an exception (or be silent, if you wish) in the case of a non-unique item.\nFor example, if you just wanted to make sure the append method maintained uniqueness:\nclass UniqueList(list):\n def append(self, item):\n if item not in self:\n list.append(self, item)\n\n" ]
[ 4, 2, 2, 1, 0 ]
[]
[]
[ "python", "queue" ]
stackoverflow_0000549536_python_queue.txt
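A minimal sketch combining the set-and-queue ideas from the answers above into one class (the class and method names are my own; note that the page_iterator generator in the third answer yields at most one item as written):

from collections import deque

class UniqueQueue(object):
    """FIFO queue that accepts each item at most once, ever."""
    def __init__(self):
        self._queue = deque()
        self._seen = set()

    def push(self, item):
        # Silently ignore anything we have already seen.
        if item not in self._seen:
            self._seen.add(item)
            self._queue.append(item)

    def pop(self):
        return self._queue.popleft()  # raises IndexError when empty

    def __len__(self):
        return len(self._queue)

q = UniqueQueue()
for url in ['/a', '/b', '/a']:
    q.push(url)
print(len(q))  # 2 -- the duplicate '/a' was ignored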
Q: Parsing "From" addresses from email text I'm trying to extract email addresses from plain text transcripts of emails. I've cobbled together a bit of code to find the addresses themselves, but I don't know how to make it discriminate between them; right now it just spits out all email addresses in the file. I'd like to make it so it only spits out addresses that are preceeded by "From:" and a few wildcard characters, and ending with ">" (because the emails are set up as From [name]<[email]>). Here's the code now: import re #allows program to use regular expressions foundemail = [] #this is an empty list mailsrch = re.compile(r'[\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4}') #do not currently know exact meaning of this expression but assuming #it means something like "[stuff]@[stuff][stuff1-4 letters]" # "line" is a variable is set to a single line read from the file # ("text.txt"): for line in open("text.txt"): foundemail.extend(mailsrch.findall(line)) # this extends the previously named list via the "mailsrch" variable #which was named before print foundemail A: Try this out: >>> from email.utils import parseaddr >>> parseaddr('From: vg@m.com') ('', 'vg@m.com') >>> parseaddr('From: Van Gale <vg@m.com>') ('Van Gale', 'vg@m.com') >>> parseaddr(' From: Van Gale <vg@m.com> ') ('Van Gale', 'vg@m.com') >>> parseaddr('blah abdf From: Van Gale <vg@m.com> and this') ('Van Gale', 'vg@m.com') Unfortunately it only finds the first email in each line because it's expecting header lines, but maybe that's ok? A: import email msg = email.message_from_string(str) # or # f = open(file) # msg = email.message_from_file(f) msg['from'] # and optionally from email.utils import parseaddr addr = parseaddr(msg['from']) A: If your goal is actually to extract email addresses from text, you should use a library built for that purpose. Regular expressions are not well suited to match arbitrary email addresses. But if you're doing this as an exercise to understand regular expressions better, I'd take the approach of expanding the expression you're using to include the extra text you want to match. So first, let me explain what that regex does: [\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4} [\w\-] matches any "word" character (letter, number, or underscore), or a hyphen [\w\-\.]+ matches (any word character or hyphen or period) one or more times @ matches a literal '@' [\w\-] matches any word character or hyphen [\w\-\.]+ matches (any word character or hyphen or period) one or more times [a-zA-Z]{1,4} matches 1, 2, 3, or 4 lowercase or uppercase letters So this matches a sequence of a "word" that may contain hyphens or periods but doesn't start with a period, followed by an @ sign, followed by another "word" (same sense as before) that ends with a letter. Now, to modify this for your purposes, let's add regex parts to match "From", the name, and the angle brackets: From: [\w\s]+?<([\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4})> From: matches the literal text "From: " [\w\s]+? matches one or more consecutive word characters or space characters. The question mark makes the match non-greedy, so it will match as few characters as possible while still allowing the whole regular expression to match (in this case, it's probably not necessary, but it does make the match more efficient since the thing that comes immediately afterwards is not a word character or space character). < matches a literal less-than sign (opening angle bracket) The same regular expression you had before is now surrounded by parentheses. 
This makes it a capturing group, so you can call m.group(1) to get the text matched by that part of the regex. > matches a literal greater-than sign Since the regex now uses capturing groups, your code will need to change a little as well: import re foundemail = [] mailsrch = re.compile(r'From: [\w\s]+?<([\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4})>') for line in open("text.txt"): foundemail.extend([m.group(1) for m in mailsrch.finditer(line)]) print foundemail The code [m.group(1) for m in mailsrch.finditer(line)] produces a list out of the first capturing group (remember, that was the part in parentheses) from each match found by the regular expression. A: mailsrch = re.compile(r'[\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4}') Expression breakdown: [\w-]: any word character (alphanumeric, plus underscore) or a dash [\w-.]+: any word character, a dash, or a period/dot, one or more times @: literal @ symbol [\w-][\w-.]+: any word char or dash, followed by any word char, dash, or period one or more times. [a-zA-Z]{1,4}: any alphabetic character 1-4 times. To make this match only lines starting with From:, and wrapped in < and > symbols: import re foundemail = [] mailsrch = re.compile(r'^From:\s+.*<([\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4})>', re.I | re.M) foundemail.extend(mailsrch.findall(open('text.txt').read())) print foundemail A: Use the email and mailbox packages to parse the plain text version of the email. This will convert it to an object that will enable to extract all the addresses in the 'From' field. You can also do a lot of other analysis on the message, if you need to process other header fields, or the message body. As a quick example, the following (untested) code should read all the message in a unix style mailbox, and print all the 'from' headers. import mailbox import email mbox = mailbox.PortableUnixMailbox(open(filename, 'rU'), email.message_from_file) for msg in mbox: from = msg['From'] print from A: Roughly speaking, you can: from email.utils import parseaddr foundemail = [] for line in open("text.txt"): if not line.startswith("From:"): continue n, e = parseaddr(line) foundemail.append(e) print foundemail This utilizes the built-in python parseaddr function to parse the address out of the from line (as demonstrated by other answers), without the overhead necessarily of parsing the entire message (e.g. by using the more full featured email and mailbox packages). The script here simply skips any lines that do not begin with "From:". Whether the overhead matters to you depends on how big your input is and how often you will be doing this operation. A: if you can be reasonably sure that lines containing these email addresses start with whitespace followed by "From:" you can simply do this: addresslines = [] for line in open("text.txt"): if line.strip().startswith("From:"): addresslines.append(line) then later - or on adding them to the list - you can refine the addresslines items to give out exactly what you want A: "[stuff]@[stuff][stuff1-4 letters]" is about right, but if you wanted to you could decode the regular expression using a trick I just found out about, here. 
Do the compile() in an interactive Python session like this: mailsrch = re.compile(r'[\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4}', 128) It will print out the following: in category category_word literal 45 max_repeat 1 65535 in category category_word literal 45 literal 46 literal 64 in category category_word literal 45 max_repeat 1 65535 in category category_word literal 45 literal 46 max_repeat 1 4 in range (97, 122) range (65, 90) Which, if you can kind of get used to it, shows you exactly how the RE works.
Parsing "From" addresses from email text
I'm trying to extract email addresses from plain text transcripts of emails. I've cobbled together a bit of code to find the addresses themselves, but I don't know how to make it discriminate between them; right now it just spits out all email addresses in the file. I'd like to make it so it only spits out addresses that are preceeded by "From:" and a few wildcard characters, and ending with ">" (because the emails are set up as From [name]<[email]>). Here's the code now: import re #allows program to use regular expressions foundemail = [] #this is an empty list mailsrch = re.compile(r'[\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4}') #do not currently know exact meaning of this expression but assuming #it means something like "[stuff]@[stuff][stuff1-4 letters]" # "line" is a variable is set to a single line read from the file # ("text.txt"): for line in open("text.txt"): foundemail.extend(mailsrch.findall(line)) # this extends the previously named list via the "mailsrch" variable #which was named before print foundemail
[ "Try this out:\n>>> from email.utils import parseaddr\n\n>>> parseaddr('From: vg@m.com')\n('', 'vg@m.com')\n\n>>> parseaddr('From: Van Gale <vg@m.com>')\n('Van Gale', 'vg@m.com')\n\n>>> parseaddr(' From: Van Gale <vg@m.com> ')\n('Van Gale', 'vg@m.com')\n\n>>> parseaddr('blah abdf From: Van Gale <vg@m.com> and this')\n('Van Gale', 'vg@m.com')\n\nUnfortunately it only finds the first email in each line because it's expecting header lines, but maybe that's ok?\n", "import email\nmsg = email.message_from_string(str)\n\n# or\n# f = open(file)\n# msg = email.message_from_file(f)\n\nmsg['from']\n\n# and optionally\nfrom email.utils import parseaddr\naddr = parseaddr(msg['from'])\n\n", "If your goal is actually to extract email addresses from text, you should use a library built for that purpose. Regular expressions are not well suited to match arbitrary email addresses.\nBut if you're doing this as an exercise to understand regular expressions better, I'd take the approach of expanding the expression you're using to include the extra text you want to match. So first, let me explain what that regex does:\n[\\w\\-][\\w\\-\\.]+@[\\w\\-][\\w\\-\\.]+[a-zA-Z]{1,4}\n\n\n[\\w\\-] matches any \"word\" character (letter, number, or underscore), or a hyphen\n[\\w\\-\\.]+ matches (any word character or hyphen or period) one or more times\n@ matches a literal '@'\n[\\w\\-] matches any word character or hyphen\n[\\w\\-\\.]+ matches (any word character or hyphen or period) one or more times\n[a-zA-Z]{1,4} matches 1, 2, 3, or 4 lowercase or uppercase letters\n\nSo this matches a sequence of a \"word\" that may contain hyphens or periods but doesn't start with a period, followed by an @ sign, followed by another \"word\" (same sense as before) that ends with a letter.\nNow, to modify this for your purposes, let's add regex parts to match \"From\", the name, and the angle brackets:\nFrom: [\\w\\s]+?<([\\w\\-][\\w\\-\\.]+@[\\w\\-][\\w\\-\\.]+[a-zA-Z]{1,4})>\n\n\nFrom: matches the literal text \"From: \"\n[\\w\\s]+? matches one or more consecutive word characters or space characters. The question mark makes the match non-greedy, so it will match as few characters as possible while still allowing the whole regular expression to match (in this case, it's probably not necessary, but it does make the match more efficient since the thing that comes immediately afterwards is not a word character or space character).\n< matches a literal less-than sign (opening angle bracket)\nThe same regular expression you had before is now surrounded by parentheses. 
This makes it a capturing group, so you can call m.group(1) to get the text matched by that part of the regex.\n> matches a literal greater-than sign\n\nSince the regex now uses capturing groups, your code will need to change a little as well:\nimport re\nfoundemail = []\n\nmailsrch = re.compile(r'From: [\\w\\s]+?<([\\w\\-][\\w\\-\\.]+@[\\w\\-][\\w\\-\\.]+[a-zA-Z]{1,4})>')\n\nfor line in open(\"text.txt\"):\n foundemail.extend([m.group(1) for m in mailsrch.finditer(line)])\n\nprint foundemail\n\nThe code [m.group(1) for m in mailsrch.finditer(line)] produces a list out of the first capturing group (remember, that was the part in parentheses) from each match found by the regular expression.\n", "mailsrch = re.compile(r'[\\w\\-][\\w\\-\\.]+@[\\w\\-][\\w\\-\\.]+[a-zA-Z]{1,4}')\n\nExpression breakdown:\n[\\w-]: any word character (alphanumeric, plus underscore) or a dash\n[\\w-.]+: any word character, a dash, or a period/dot, one or more times\n@: literal @ symbol\n[\\w-][\\w-.]+: any word char or dash, followed by any word char, dash, or period one or more times.\n[a-zA-Z]{1,4}: any alphabetic character 1-4 times.\nTo make this match only lines starting with From:, and wrapped in < and > symbols: \nimport re\n\nfoundemail = []\nmailsrch = re.compile(r'^From:\\s+.*<([\\w\\-][\\w\\-\\.]+@[\\w\\-][\\w\\-\\.]+[a-zA-Z]{1,4})>', re.I | re.M)\nfoundemail.extend(mailsrch.findall(open('text.txt').read()))\n\nprint foundemail\n\n", "Use the email and mailbox packages to parse the plain text version of the email. This will convert it to an object that will enable to extract all the addresses in the 'From' field.\nYou can also do a lot of other analysis on the message, if you need to process other header fields, or the message body.\nAs a quick example, the following (untested) code should read all the message in a unix style mailbox, and print all the 'from' headers.\nimport mailbox\nimport email\n\nmbox = mailbox.PortableUnixMailbox(open(filename, 'rU'), email.message_from_file)\n\nfor msg in mbox:\n from = msg['From']\n print from\n\n", "Roughly speaking, you can:\nfrom email.utils import parseaddr\n\nfoundemail = []\nfor line in open(\"text.txt\"):\n if not line.startswith(\"From:\"): continue\n n, e = parseaddr(line)\n foundemail.append(e)\nprint foundemail\n\nThis utilizes the built-in python parseaddr function to parse the address out of the from line (as demonstrated by other answers), without the overhead necessarily of parsing the entire message (e.g. by using the more full featured email and mailbox packages). The script here simply skips any lines that do not begin with \"From:\". Whether the overhead matters to you depends on how big your input is and how often you will be doing this operation.\n", "if you can be reasonably sure that lines containing these email addresses start with whitespace followed by \"From:\" you can simply do this:\naddresslines = []\nfor line in open(\"text.txt\"):\n if line.strip().startswith(\"From:\"):\n addresslines.append(line)\n\nthen later - or on adding them to the list - you can refine the addresslines items to give out exactly what you want\n", "\"[stuff]@[stuff][stuff1-4 letters]\" is about right, but if you wanted to you could decode the regular expression using a trick I just found out about, here. 
Do the compile() in an interactive Python session like this:\nmailsrch = re.compile(r'[\\w\\-][\\w\\-\\.]+@[\\w\\-][\\w\\-\\.]+[a-zA-Z]{1,4}', 128)\n\nIt will print out the following:\nin \n category category_word\n literal 45\nmax_repeat 1 65535 \n in \n category category_word\n literal 45\n literal 46\nliteral 64 \nin \n category category_word\n literal 45\nmax_repeat 1 65535 \n in \n category category_word\n literal 45\n literal 46\nmax_repeat 1 4 \n in \n range (97, 122)\n range (65, 90)\n\nWhich, if you can kind of get used to it, shows you exactly how the RE works.\n" ]
[ 40, 10, 3, 2, 2, 1, 0, 0 ]
[]
[]
[ "email", "parsing", "python", "string", "text" ]
stackoverflow_0000550009_email_parsing_python_string_text.txt
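A runnable sketch of the header-filtering plus parseaddr approach from the answers above, on a made-up transcript (the sample lines are illustrative):

from email.utils import parseaddr

lines = [
    "From: Alice Example <alice@example.com>",
    "some body text, not a header",
    "From: Bob <bob@example.org>",
]

found = [parseaddr(line[len("From:"):])[1]
         for line in lines if line.startswith("From:")]
print(found)  # ['alice@example.com', 'bob@example.org']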
Q: Django Model API reverse lookup of many to many relationship through intermediary table I have a Resident and can not seem to get the set of SSA's the resident belongs to. I've tried res.ssa_set.all() .ssas_set.all() and .ssa_resident_set.all(). Can't seem to manage it. What's the syntax for a reverse m2m lookup through another table? EDIT: I'm getting an 'QuerySet as no attribute' error. Erm? class SSA(models.Model): name = models.CharField(max_length=100) cost_center = models.IntegerField(max_length=4) street_num = models.CharField(max_length=9) street_name = models.CharField(max_length=40) suburb = models.CharField(max_length=40) post_code = models.IntegerField(max_length=4, blank=True, null=True) def __unicode__(self): return self.name class Resident(models.Model): cris_id = models.CharField(max_length=10, primary_key=True) first_name = models.CharField(max_length=20) last_name = models.CharField(max_length=20) ssas = models.ManyToManyField('SSA', through='SSA_Resident', verbose_name="SSAs") def __unicode__(self): return self._get_full_name() def _get_full_name(self): return u"%s %s" %(self.first_name, self.last_name) full_name = property(_get_full_name) class SSA_Resident(models.Model): id = models.AutoField(primary_key=True) resident = models.ForeignKey('Resident') ssa = models.ForeignKey('SSA', verbose_name="SSA") active = models.BooleanField(default=True) def __unicode__(self): return u"%s - %s" %(self.resident.full_name, self.ssa.name) A: I was trying to evaluate a query set object, not the object itself. Executing a get on the query set and then a lookup of the relation set worked fine. I'm changing to community wiki and leaving this here just incase someone else is as stupid as I was. A working example: resident = Resident.objects.filter(name='Johnny') resident.ssa_set.all() # fail resident = resident.get() # will fail if more than one returned by filter resident.ssa_set.all() # works, since we're operating on an instance, not a queryset
Django Model API reverse lookup of many to many relationship through intermediary table
I have a Resident and can not seem to get the set of SSA's the resident belongs to. I've tried res.ssa_set.all() .ssas_set.all() and .ssa_resident_set.all(). Can't seem to manage it. What's the syntax for a reverse m2m lookup through another table? EDIT: I'm getting an 'QuerySet as no attribute' error. Erm? class SSA(models.Model): name = models.CharField(max_length=100) cost_center = models.IntegerField(max_length=4) street_num = models.CharField(max_length=9) street_name = models.CharField(max_length=40) suburb = models.CharField(max_length=40) post_code = models.IntegerField(max_length=4, blank=True, null=True) def __unicode__(self): return self.name class Resident(models.Model): cris_id = models.CharField(max_length=10, primary_key=True) first_name = models.CharField(max_length=20) last_name = models.CharField(max_length=20) ssas = models.ManyToManyField('SSA', through='SSA_Resident', verbose_name="SSAs") def __unicode__(self): return self._get_full_name() def _get_full_name(self): return u"%s %s" %(self.first_name, self.last_name) full_name = property(_get_full_name) class SSA_Resident(models.Model): id = models.AutoField(primary_key=True) resident = models.ForeignKey('Resident') ssa = models.ForeignKey('SSA', verbose_name="SSA") active = models.BooleanField(default=True) def __unicode__(self): return u"%s - %s" %(self.resident.full_name, self.ssa.name)
[ "I was trying to evaluate a query set object, not the object itself. Executing a get on the query set and then a lookup of the relation set worked fine. I'm changing to community wiki and leaving this here just incase someone else is as stupid as I was.\nA working example:\nresident = Resident.objects.filter(name='Johnny')\nresident.ssa_set.all() # fail\nresident = resident.get() # will fail if more than one returned by filter\nresident.ssa_set.all() # works, since we're operating on an instance, not a queryset\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "many_to_many", "python" ]
stackoverflow_0000550300_django_django_models_many_to_many_python.txt
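For reference, a sketch of both directions of the relation once an instance (not a queryset) is in hand — with the models above, the forward accessor is the declared field name ssas, and the default reverse accessor on SSA is resident_set; the lookup values are hypothetical:

resident = Resident.objects.get(cris_id='X123')  # an instance, not a queryset
resident_ssas = resident.ssas.all()              # forward m2m via the 'ssas' field

ssa = SSA.objects.get(name='Some SSA')
ssa_residents = ssa.resident_set.all()           # reverse side, default accessor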