| content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name |
|---|---|---|---|---|---|---|---|---|
| stringlengths 85-101k | stringlengths 0-150 | stringlengths 15-48k | sequence | sequence | sequence | sequence | sequence | stringlengths 35-137 |
Q:
Pass to python method based on length of a list
I have a list called 'optionlist' which may change length from day to day, but I want a tkinter dropdown box to be able to select something from it.
Here's an example of how to define a tkinter optionmenu:
opt1 = OptionMenu(root, var1, 'A', 'B', 'C')
A, B, and C are the options you can select. The problem presented here is that while the OptionMenu is flexible and allows as many options as you want, you have to know exactly how many you want when you write the code. This isn't a list or a tuple being passed.
I'm wondering if anyone knows any kung-fu for making this so I don't have to do:
if len(optionlist) == 1:
    opt1 = OptionMenu(root, var1, optionlist[0])
if len(optionlist) == 2:
    opt1 = OptionMenu(root, var1, optionlist[0], optionlist[1])
etc, etc, etc
I know you can define a list like this:
elements = [client.get('element') for client in clientlist]
I'm hoping something similar can be done when passing to methods as well.
A:
You want the * operator:
opt1 = OptionMenu(root, var1, *optionlist)
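To make that concrete, here is a minimal, self-contained sketch of the accepted approach (hedged: the option values and window setup are invented, and it assumes Python 2's Tkinter):
from Tkinter import Tk, StringVar, OptionMenu

root = Tk()
var1 = StringVar(root)
optionlist = ['A', 'B', 'C', 'D']  # any length works
var1.set(optionlist[0])
opt1 = OptionMenu(root, var1, *optionlist)  # the list is unpacked into positional arguments
opt1.pack()
root.mainloop()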
| answers_scores: [11] | tags: [parameters, python] | name: stackoverflow_0000735878_parameters_python.txt |
Q:
PyQt connect method bug when used in a for loop which creates widgets from a list
I have a GUI program,
It auto-creates buttons from a name list,
and connects each one to a function that prints its name.
But when I run this program and press any of the buttons,
they all return the last button's name.
I wonder why this happens. Can anyone help?
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import logging
logging.basicConfig(level=logging.DEBUG,)

class MainWindow(QWidget):
    def init(self):
        names = ('a','b','c')
        lo = QHBoxLayout(self)
        for name in names:
            button = QPushButton(name,self)
            lo.addWidget(button)
            self.connect(button,SIGNAL("clicked()"),
                         lambda :logging.debug(name))

if __name__=="__main__":
    app = QApplication(sys.argv)
    m = MainWindow();m.init();m.show()
    app.exec_()
result like:
python t.py
DEBUG:root:c
DEBUG:root:c
DEBUG:root:c
A:
I see at least one bug in your code.
Replace:
lambda :logging.debug(name)
By:
lambda name=name: logging.debug(name)
See Why results of map() and list comprehension are different? for details.
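For readers who want to see the closure issue in isolation, here is a hedged, PyQt-free sketch (the names are invented). A lambda without a default argument looks up name when it is called, by which time the loop has already finished:
funcs_late = [lambda: name for name in ('a', 'b', 'c')]
funcs_bound = [lambda name=name: name for name in ('a', 'b', 'c')]

print [f() for f in funcs_late]   # ['c', 'c', 'c'] -- every lambda sees the final value
print [f() for f in funcs_bound]  # ['a', 'b', 'c'] -- the default argument froze each value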
| answers_scores: [3] | tags: [pyqt, python] | name: stackoverflow_0000736651_pyqt_python.txt |
Q:
Ping FeedBurner in Django App
I have a django site, and some of the feeds are published through FeedBurner. I would like to ping FeedBurner whenever I save an instance of a particular model. FeedBurner's website says to use the XML-RPC ping mechanism, but I can't find a lot of documentation on how to implement it.
What's the easiest way to do the XML-RPC ping in django/Python?
A:
You can use Django's signals feature to get a callback after a model is saved:
import xmlrpclib
from django.db.models.signals import post_save
from app.models import MyModel

def ping_handler(sender, instance=None, **kwargs):
    if instance is None:
        return
    rpc = xmlrpclib.Server('http://ping.feedburner.google.com/')
    rpc.weblogUpdates.ping(instance.title, instance.get_absolute_url())

post_save.connect(ping_handler, sender=MyModel)
Clearly, you should update this with what works for your app and read up on signals in case you want a different event.
A:
Use pluggable apps, Luke!
http://github.com/svetlyak40wt/django-pingback/
A:
maybe something like this:
import xmlrpclib
j = xmlrpclib.Server('http://feedburnerrpc')
reply = j.weblogUpdates.ping('website title','http://urltothenewpost')
| answers_scores: [12, 2, 1] | tags: [django, python, xml_rpc] | name: stackoverflow_0000736413_django_python_xml_rpc.txt |
Q:
Import XML into SQL database
I'm working with a 20 gig XML file that I would like to import into a SQL database (preferably MySQL, since that is what I am familiar with). This seems like it would be a common task, but after Googling around a bit I haven't been able to figure out how to do it. What is the best way to do this?
I know this ability is built into MySQL 6.0, but that is not an option right now because it is an alpha development release.
Also, if I have to do any scripting I would prefer to use Python because that's what I am most familiar with.
Thanks.
A:
You can use the getiterator() function to iterate over the XML file without parsing the whole thing at once. You can do this with ElementTree, which is included in the standard library, or with lxml.
for record in root.getiterator('record'):
    add_element_to_database(record)  # Depends on your database interface.
                                     # I recommend SQLAlchemy.
A:
Take a look at the iterparse() function from ElementTree or cElementTree (I guess cElementTree would be best if you can use it)
This piece describes more or less what you need to do: http://effbot.org/zone/element-iterparse.htm#incremental-parsing
This will probably be the most efficient way to do it in Python. Make sure not to forget to call .clear() on the appropriate elements (you really don't want to build an in memory tree of a 20gig xml file: the .getiterator() method described in another answer is slightly simpler, but does require the whole tree first - I assume that the poster actually had iterparse() in mind as well)
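A minimal sketch of the iterparse() pattern described above (hedged: the file name, tag name, and database stub are invented placeholders):
import xml.etree.cElementTree as ET

def add_element_to_database(elem):
    pass  # stand-in for your own insert logic (e.g. via SQLAlchemy)

for event, elem in ET.iterparse('huge.xml', events=('end',)):
    if elem.tag == 'record':
        add_element_to_database(elem)
        elem.clear()  # release the element so the tree never holds the whole 20-gig file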
A:
I've done this several times with Python, but never with such a big XML file. ElementTree is an excellent XML library for Python that would be of assistance. If it was possible, I would divide the XML up into smaller files to make it easier to load into memory and parse.
A:
It may be a common task, but maybe 20GB isn't as common with MySQL as it is with SQL Server.
I've done this using SQL Server Integration Services and a bit of custom code. Whether you need either of those depends on what you need to do with 20GB of XML in a database. Is it going to be a single column of a single row of a table? One row per child element?
SQL Server has an XML datatype if you simply want to store the XML as XML. This type allows you to do queries using XQuery, allows you to create XML indexes over the XML, and allows the XML column to be "strongly-typed" by referring it to a set of XML schemas, which you store in the database.
A:
The MySQL documentation does not seem to indicate that XML import is restricted to version 6. It apparently works with 5, too.
| answers_scores: [4, 2, 1, 0, 0] | tags: [python, sql, xml] | name: stackoverflow_0000723757_python_sql_xml.txt |
Q:
How to read ID3 Tag in an MP3 using Python?
Does anyone have experience of reading and writing ID3 tags in an MP3 file or a WMA file? There are some libraries, but I would like to do it from scratch. :-)
A:
Dive into Python uses MP3 ID3 tags as an example.
A:
Mutagen https://bitbucket.org/lazka/mutagen
Edited 14/09/23 with current code host location
eyeD3 http://eyed3.nicfit.net/
A:
Try eyeD3, it's a program and a module.
A:
A quick google showed up http://id3-py.sourceforge.net/
Maybe this works for you ?
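For reference, a small usage sketch with the Mutagen library mentioned above (hedged: the file name is invented; EasyID3 exposes common ID3 frames as plain dictionary keys):
from mutagen.easyid3 import EasyID3

audio = EasyID3('example.mp3')
print audio.get('title'), audio.get('artist')  # values come back as lists of strings
audio['title'] = u'New Title'
audio.save()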
| answers_scores: [14, 13, 3, 2] | tags: [id3, mp3, python, tags] | name: stackoverflow_0000736813_id3_mp3_python_tags.txt |
Q:
Pygame: Sprite animation Theory - Need Feedback
After some tweaking of some code I got from someone, which makes a character's images move according to its direction and up/down/left/right input, I've put this together (I hope the code isn't too messy):
Character Move Code + IMG
The sprite sheet only runs lengthwise, so basically each sprite section is a different action. Now, would there be a way to write code that works with the current code to cycle down from a set 'action' in order to make an animation?
For example:
'Run Left' is sprite 3. So then, after we designate that column, would it be possible to loop down through however many frames of the run animation (let's say 4) in order to make an animation?
Example Picture:
http://animania1.ca/ShowFriends/dev/example.jpg
A:
It should be easy.
If you record the frame number in a variable, you can modulo this with the number of frames you have to get an animation frame number to display.
frame_count = 0
animation_frames = 4
while quit == False:
    # ...
    # snip
    # ...
    area = pygame.Rect(
        image_number * 100,
        (frame_count % animation_frames) * 150,
        100,
        150
    )
    display.blit(sprite, sprite_pos, area)
    pygame.display.flip()
    frame_count += 1
If different actions have different numbers of frames, you'll have to update animation_frames when you update image_number.
Also, this assumes that it's ok to play the animation starting at any frame. If this is not the case, you'll need to record what the frame count was when the action started, and take this away from frame count before the modulo:
    area = pygame.Rect(
        image_number * 100,
        ((frame_count - action_start_frame) % animation_frames) * 150,
        100,
        150
    )
A note about your event handling. If you hold down, say, left, and tap right but keep holding down left, the sprite stops moving because the last event you processed was a keyup event, despite the fact that I'm still holding left.
If this is not what you want, you can get around it by either keeping a record of the up/down states of the keys you are interested in, or by using the pygame.key.get_pressed interface.
On another note, you appear to be aiming for a fixed frame rate, and at the same time determining how far to move your sprite based on the time taken in the last frame. In my opinion, this probably isn't ideal.
2D action games generally need to work in a predictable manner. If some CPU heavy process starts in the background on your computer and causes your game to no longer be able to churn out 60 frames a second, it's probably preferable for it to slow down, rather then have your objects start skipping huge distances between frames. Imagine if this happened in a 2D action game like Metal Slug where you're having to jump around avoiding bullets?
This also makes any physics calculations much simpler. You'll have to make a judgement call based on what type of game it is.
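As a hedged sketch of the pygame.key.get_pressed() suggestion above (sprite_pos and speed stand in for the question's own variables, with sprite_pos assumed to be a Rect):
import pygame

# inside the main loop, instead of relying only on KEYDOWN/KEYUP events:
pygame.event.pump()  # keep the event queue serviced
keys = pygame.key.get_pressed()
if keys[pygame.K_LEFT]:
    sprite_pos.x -= speed
if keys[pygame.K_RIGHT]:
    sprite_pos.x += speed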
| answers_scores: [4] | tags: [2d, pygame, python, sprite] | name: stackoverflow_0000737303_2d_pygame_python_sprite.txt |
Q:
Returning default members when accessing to objects in python
I'm writing an "environment" where each variable is composed of a value and a description:
class my_var:
    def __init__(self, value, description):
        self.value = value
        self.description = description
Variables are created and put inside a dictionary:
my_dict["foo"] = my_var(0.5, "A foo var")
This is cool but 99% of operations with variable are with the "value" member. So I have to write like this:
print my_dict["foo"].value + 15 # Prints 15.5
or
my_dict["foo"].value = 17
I'd like all operations on the object my_dict["foo"] to default to the "value" member. In other words I'd like to write:
print my_dict["foo"] + 15 # Prints 15.5
and stuff like that.
The only way I found is to reimplement all the underscore methods (__eq__, __add__, __str__, etc.), but I feel like this is the wrong way somehow. Is there a magic method I could use?
A workaround would be to have more dictionaries, like this:
my_dict_value["foo"] = 0.5
my_dict_description["foo"] = "A foo var"
but I don't like this solution. Do you have any suggestions?
A:
Two general notes.
Please use Upper Case for Class Names.
Please (unless using Python 3.0) subclass object. class My_Var(object):, for example.
Now to your question.
Let's say you do
x= My_Var(0.5, "A foo var")
How does python distinguish between x, the composite object and x's value (x.value)?
Do you want the following behavior?
Sometimes x means the whole composite object.
Sometimes x means x.value.
How do you distinguish between the two? How will you tell Python which you mean?
A:
You could create an object that mostly acts like "value" but has an additional attribute "description, by implementing the operators in section "Emulating numeric types" of http://docs.python.org/reference/datamodel.html
class Fooness(object):
    def __init__(self,val, description):
        self._val = val
        self.description = description

    def __add__(self,other):
        return self._val + other

    def __sub__(self,other):
        return self._val - other

    def __mul__(self,other):
        return self._val * other

    # etc

    def __str__(self):
        return str(self._val)
f = Fooness(10,"my f'd up fooness")
b = f + 10
print 'b=',b
d = f - 7
print 'd=',d
print 'f.description=',f.description
Produces:
b= 20
d= 3
f.description= my f'd up fooness
A:
I would personally just use two dictionaries, one for values and one for descriptions. Your desire for magic behavior is not very Pythonic.
With that being said, you could implement your own dict class:
class DescDict(dict):
    def __init__(self, *args, **kwargs):
        self.descs = {}
        dict.__init__(self)

    def __getitem__(self, name):
        return dict.__getitem__(self, name)

    def __setitem__(self, name, tup):
        value, description = tup
        self.descs[name] = description
        dict.__setitem__(self, name, value)

    def get_desc(self, name):
        return self.descs[name]
You'd use this class as follows:
my_dict = DescDict()
my_dict["foo"] = (0.5, "A foo var") # just use a tuple if you only have 2 vals
print my_dict["foo"] + 15 # prints 15.5
print my_dict.get_desc("foo") # prints 'A foo var'
If you decide to go the magic behavior route, then this should be a good starting point.
A:
I think the fact that you have to do so much work to make a fancy shortcut is an indication that you're going against the grain. What you're doing violates LSP; it's counter-intuitive.
my_dict[k] = v;
print my_dict[k] == v # should be True
Even two separate dicts would be preferable to changing the meaning of dict.
| answers_scores: [3, 2, 2, 1] | tags: [dynamic_data, python] | name: stackoverflow_0000737512_dynamic_data_python.txt |
Q:
How can I make a Python extension module packaged as an egg loadable without installing it?
I'm in the middle of reworking our build scripts to be based upon the wonderful Waf tool (I did use SCons for ages but it's just way too slow).
Anyway, I've hit the following situation and I cannot find a resolution to it:
I have a product that depends on a number of previously built egg files.
I'm trying to package the product using PyInstaller as part of the build process.
I build the dependencies first.
Next I want to run PyInstaller to package the product that depends on the eggs I built. I need PyInstaller to be able to load those egg files as part of its packaging process.
This sounds easy: you work out what PYTHONPATH should be, construct a copy of sys.environ setting the variable up correctly, and then invoke the PyInstaller script using subprocess.Popen passing the previously configured environment as the env argument.
The problem is that setting PYTHONPATH alone does not seem to be enough if the eggs you are adding are extension modules that are packaged as zip-safe. In this case, it turns out that the embedded libraries are not able to be imported.
If I unzip the eggs (renaming the directories to .egg), I can import them with no further settings but this is not what I want in this case.
I can also get the eggs to import from a subshell by doing the following:
Setting PYTHONPATH to the directory that contains the egg you want to import (not the path of the egg itself)
Loading a python shell and using pkg_resources.require to locate the egg.
Once this has been done, the egg loads as normal. Again, this is not practical because I need to be able to run my python shell in a manner where it is ready to import these eggs from the off.
The dirty option would be to output a wrapper script that took the above actions before calling the real target script but this seems like the wrong thing to do: there must be a better way to do this.
A:
Heh, I think this was my bad. The issue appears to have been that the zip_safe flag in setup.py for the extension package was set to False, which appears to affect your ability to treat it as such at all.
Now that I've set that to True I can import the egg files, simply by adding each one to the PYTHONPATH.
I hope someone else finds this answer useful one day!
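A minimal sketch of the PYTHONPATH wiring the question describes, once the eggs are built zip-safe (hedged: the egg paths and the packaging command are invented placeholders):
import os
import subprocess

eggs = ['/build/deps/foo-1.0-py2.5.egg', '/build/deps/bar-2.1-py2.5.egg']
env = dict(os.environ)
env['PYTHONPATH'] = os.pathsep.join(eggs + [env.get('PYTHONPATH', '')])
subprocess.Popen(['python', 'package_product.py'], env=env).wait()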
A:
Although you have a solution, you could always try "virtualenv" that creates a virtual environment of python where you can install and test Python Packages without messing with the core system python:
http://pypi.python.org/pypi/virtualenv
| answers_scores: [3, 1] | tags: [build_tools, egg, python, setuptools, waf] | name: stackoverflow_0000737383_build_tools_egg_python_setuptools_waf.txt |
Q:
How to Modify Choices of ModelMultipleChoiceField
Let's say I have some contrived models:
class Author(Model):
    name = CharField()

class Book(Model):
    title = CharField()
    author = ForeignKey(Author)
And let's say I want to use a ModelForm for Book:
class BookForm(ModelForm):
    class Meta:
        model = Book
Simple so far. But let's also say that I have a ton of Authors in my database, and I don't want to have such a long multiple choice field. So, I'd like to restrict the queryset on the BookForm's ModelMultipleChoiceField author field. Let's also say that the queryset I want can't be chosen until __init__, because it relies on an argument to be passed.
This seems like it might do the trick:
class BookForm(ModelForm):
    class Meta:
        model = Book

    def __init__(self, letter):
        # returns the queryset based on the letter
        choices = getChoices(letter)
        self.author.queryset = choices
Of course, if that just worked I wouldn't be here. That gets me an AttributeError. 'BookForm' object has no attribute 'author'. So, I also tried something like this, where I try to override the ModelForm's default field and then set it later:
class BookForm(ModelForm):
    author = ModelMultipleChoiceField(queryset=Author.objects.all())

    class Meta:
        model = Book

    def __init__(self, letter):
        choices = getChoices(letter)
        self.author.queryset = choices
Which produces the same result.
Anyone know how this is intended to be done?
A:
Although Carl is correct about the fields, you're also missing a super class call. This is how I do it:
class BookForm(ModelForm):
    author = ModelMultipleChoiceField(queryset=Author.objects.all())

    class Meta:
        model = Book

    def __init__(self, *args, **kwargs):
        letter = kwargs.pop('letter')
        super(BookForm, self).__init__(*args, **kwargs)
        choices = getChoices(letter)
        self.fields['author'].queryset = choices
A:
Form objects don't have their fields as attributes, you need to look in the "fields" attribute, which is a dictionary:
self.fields['author'].queryset = choices
If you want to fully understand what's going on here, you might be interested in this answer - it's about Models, but Forms work similarly.
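As a usage sketch of the accepted pattern (hedged: the letter value and the request object are illustrative):
form = BookForm(letter='A')                # unbound form
form = BookForm(request.POST, letter='A')  # bound form inside a view; 'letter' is popped before ModelForm sees the kwargs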
| answers_scores: [9, 8] | tags: [django, django_forms, python] | name: stackoverflow_0000738301_django_django_forms_python.txt |
Q:
In Python, how do I reference a class generically in a static way, like PHP's "self" keyword?
PHP classes can use the keyword "self" in a static context, like this:
<?php
class Test {
    public static $myvar = 'a';

    public static function t() {
        echo self::$myvar; // Generically reference the current class.
        echo Test::$myvar; // Same thing, but not generic.
    }
}
?>
Obviously I can't use "self" in this way in Python because "self" refers not to a class but to an instance. So is there a way I can reference the current class in a static context in Python, similar to PHP's "self"?
I guess what I'm trying to do is rather un-pythonic. Not sure though, I'm new to Python. Here is my code (using the Django framework):
class Friendship(models.Model):
    def addfriend(self, friend):
        """does some stuff"""

    @staticmethod # declared "staticmethod", not "classmethod"
    def user_addfriend(user, friend): # static version of above method
        userf = Friendship(user=user) # creating instance of the current class
        userf.addfriend(friend) # calls above method

# later ....
Friendship.user_addfriend(u, f) # works
My code works as expected. I just wanted to know: is there a keyword I could use on the first line of the static method instead of "Friendship"?
This way if the class name changes, the static method won't have to be edited. As it stands the static method would have to be edited if the class name changes.
A:
This should do the trick:
class C(object):
    my_var = 'a'

    @classmethod
    def t(cls):
        print cls.my_var

C.t()
A:
In all cases, self.__class__ is an object's class.
http://docs.python.org/library/stdtypes.html#special-attributes
In the (very) rare case where you are trying to mess with static methods, you actually need classmethod for this.
class AllStatic( object ):
    @classmethod
    def aMethod( cls, arg ):
        # cls is the owning class for this method

x = AllStatic()
x.aMethod( 3.14 )
| answers_scores: [36, 29] | tags: [class, python] | name: stackoverflow_0000738467_class_python.txt |
Q:
How do I detect if my appengine app is being accessed by an iphone/ipod touch?
I need to render the page differently if it's accessed by an iPhone/iPod touch. I suppose the information is in the request object, but what would be the syntax?
A:
This is the syntax I was looking for, works with iphone and ipod touch:
uastring = self.request.headers.get('user_agent')
if "Mobile" in uastring and "Safari" in uastring:
    # do iphone / ipod stuff
A:
This article outlines a few ways of detecting an iPhone by checking the HTTP_USER_AGENT variable. Depending on where you want to do the check (HTML level, Javascript, CSS, etc.), I'm sure you can extrapolate this into your Python app. Sorry, I'm not a python guy. 8^D
A:
The Using the Safari on iPhone User Agent String article on the apple website indicate the different user agents for iPhone and iPod touch.
Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543 Safari/419.3
Mozilla/5.0 (iPod; U; CPU like Mac OS X; en) AppleWebKit/420.1 (KHTML, like Gecko) Version/3.0 Mobile/4A93 Safari/419.3
Mozilla/5.0 (iPhone; U; CPU iPhone OS 2_0 like Mac OS X; en-us) AppleWebKit/525.18.1 (KHTML, like Gecko) Version/3.1.1 Mobile/XXXXX Safari/525.20
A:
Here's how to do implement it as middleware in Django, assuming that's what you're using on appengine.
class DetectiPhone(object):
    def process_request(self, request):
        if 'HTTP_USER_AGENT' in request.META and request.META['HTTP_USER_AGENT'].find('(iPhone') >= 0:
            request.META['iPhone'] = True
Basically look for 'iPhone' in the HTTP_USER_AGENT. Note that iPod Touch has a slightly different signature than the iPhone, hence the broad 'iPhone' search instead of a more restrictive search.
A:
if you're using the standard webapp framework the user agent will be in the request instance. This should be good enough:
if "iPhone" in request.headers["User-Agent"]:
# do iPhone logic
A:
Check the user agent. It will be
Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3
I'm not sure how to do this with appengine, but the equivalent PHP code can be found here: http://www.mattcutts.com/blog/iphone-user-agent/
A:
import os

class MainPage(webapp.RequestHandler):
    @login_required
    def get(self):
        userAgent = os.environ['HTTP_USER_AGENT']
        if userAgent.find('iPhone') > 0:
            self.response.out.write('iPhone support is coming soon...')
        else:
            self.response.out.write('Hey... you are not from iPhone...')
| answers_scores: [7, 2, 2, 1, 1, 0, 0] | tags: [google_app_engine, iphone, python, web_applications] | name: stackoverflow_0000617202_google_app_engine_iphone_python_web_applications.txt |
Q:
Followup: Multiprocessing or Multithreading for Python simulation software
This is a follow-up to this. (You don't have to read all the answers, just the question)
People explained to me the difference between processes and threads. On the one hand, I wanted processes so I could fully exploit all cores of the CPU; on the other hand, passing information between processes was less than ideal, and I didn't want to have two copies of the huge object I was dealing with.
So I've been thinking about a way to do this, combining processes and threads; tell me if this makes sense. The main process in my program is the GUI process. I will have it spawn a "rendering-manager" thread. The rendering-manager thread will be responsible for rendering the simulation, however, it will not render them by itself, but spawn other processes to do the work for it.
These are the goals:
Rendering should take advantage of all the cores available.
The GUI should never become sluggish.
The reason I want the rendering-manager to be a thread is because it has to share a lot of information with the GUI: Namely, the simulation-timeline.
So do you think this is a good design? Do you have any suggestions for improvement?
Update:
Sorry for my confusing use of the word "render". By render I mean calculate the simulation, not render it on screen.
A:
Before using processes, make sure that:
Your algorithm can be parallelized between all the processors.
You need this parallelism.
In my opinion a good rule of thumb is:
Make it work.
Make it right.
Make it fast.
So I'd suggest to “simply” use threads first. Maybe you will realize that even with one thread computing the simulation it's fast enough.
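A hedged sketch of that "threads first" advice, with a worker thread feeding results to the GUI through a queue (compute_step is a stand-in for the real simulation work):
import threading
import Queue

results = Queue.Queue()

def run_simulation():
    for step in xrange(1000):
        state = compute_step(step)  # stand-in for one simulation step
        results.put(state)          # the GUI thread polls this queue

worker = threading.Thread(target=run_simulation)
worker.setDaemon(True)
worker.start()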
A:
The main process in my program is the GUI process. I will have it spawn a "rendering-manager" thread. The rendering-manager thread will be responsible for rendering the simulation, however, it will not render them by itself, but spawn other processes to do the work for it.
I'm no expert on graphics technologies, but this sounds a lot like what GPUs are intended for. Perhaps pygame is more what you're looking for?
| answers_scores: [2, 0] | tags: [multicore, multiprocessing, multithreading, python, simulation] | name: stackoverflow_0000737826_multicore_multiprocessing_multithreading_python_simulation.txt |
Q:
QFileDialog passing directory to python script
I'm writing a little Python program that goes through an XML file and does some replacement of tags. It takes three arguments: a path from which it creates a directory tree, the XML file it's reading, and the XML file it's outputting to. It works fine from the command line just passing in arguments. As it's not just for me, I thought I'd put a Qt front end on it. Below is the majority of the Qt front end. MOVtoMXF is the class that does all the replacement. So you can see that I'm basically just grabbing strings and feeding them into the class that I've already made and tested.
class Form(QDialog):
    def ConnectButtons(self):
        self.connect(self.pathBrowseB, SIGNAL("clicked()"), self.pathFileBrowse)
        self.connect(self.xmlFileBrowseB, SIGNAL("clicked()"), self.xmlFileBrowse)
        self.connect(self.outputFileBrowseB, SIGNAL("clicked()"), self.outputFileBrowse)

    def accept(self):
        path = self.pathBox.displayText()
        xmlFile = self.xmlFileBox.displayText()
        outFileName = self.outfileNameBox.displayText()
        print path + " " + xmlFile + " " + outFileName
        mov1 = MOVtoMXF.MOVtoMXF(path, xmlFile, outFileName)
        mov1.ScanFile()
        self.done()

    def pathFileBrowse(self):
        file = str(QFileDialog.getExistingDirectory(self, "Select Directory"))
        self.pathBox.setText(file)

    def xmlFileBrowse(self):
        file = str(QFileDialog.getOpenFileName(self, "Save File"))
        self.xmlFileBox.setText(file)

    def outputFileBrowse(self):
        file = str(QFileDialog.getSaveFileName(self, "Save File"))
        self.outfileNameBox.setText(file)
The problem is that when I feed in a path, it now comes back with an error: either the directory doesn't exist, or, if I have a trailing slash on the end, this:
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/posixpath.py", line 62, in join
elif path == '' or path.endswith('/'):
I think it's probably some mismatch between the QFileDialog, the QString it's passing back, and the string that my Python expects, but I'm not sure how to go about fixing it.
I'm running on Mac OS X 10.5.6
pyQt 4.4.4
QT 4.4.0
thanks for any help you can give.
Mark
A:
Two potential solutions.
Method 1:
If you must use the displayText() method, I suggest you wrap the call to displayText() with an explicit string cast:
path = str(self.pathBox.displayText())
xmlFile = str(self.xmlFileBox.displayText())
outFileName = str(self.outfileNameBox.displayText())
The reason is that displayText() returns what I believe is a constant memory reference at the C++ level, meaning that you are not being returned a copy of the QString, but actually whatever QString is available at the memory reference.
When you call the displayText() function, it is the string you expected, but eventually it is something else when the contents at the memory reference are changed. I have noticed this peculiarity with several methods on different controls, most notably QDateEdit/QDateTimeEdit/QTimeEdit controls, where I typically have to make an explicit copy of, say, the QDate returned by the date() function of QDateEdit by wrapping it in a QDate constructor.
Method 2:
Otherwise, use the text() method instead. The QString returned is a constant value, instead of a constant memory reference. See this doc:
http://doc.trolltech.com/4.4/qlineedit.html#text-prop
displayText : const QString
text : QString
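In practice, Method 2 just means calling text() where the question used displayText(), for example:
path = str(self.pathBox.text())  # text() returns a detached QString that is safe to convert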
Update:
It looks like Riverbank will be addressing this problem in future versions of PyQt in case anybody is still having this problem:
PyQt4 Roadmap
Implicit Copying of const&
Implemented in current snapshots.
When PyQt wraps a const& value
returned by a C++ function it wraps
the address of the value itself. Also,
it does not enforce the const
attribute. This can cause unexpected
behavour (and program crashes) either
by the underlying value disappearing
or the value being unexpectedly
modified.
The correct way to handle this is to
explicitly make a copy of the value
using its type's copy constructor.
However, that is not Pythonic and
knowing that it needs to be done
requires knowledge of the C++ API.
PyQt will be changed so that it will
automatically invoke the copy
constructor and will wrap the copy.
| answers_scores: [1] | tags: [parsing, pyqt, python, qfile, qt] | name: stackoverflow_0000739288_parsing_pyqt_python_qfile_qt.txt |
Q:
Pretty-printing C# from Python
Suppose I wrote a compiler in Python or Ruby that translates a language into a C# AST.
How do I pretty-print this AST from Python or Ruby to get nicely indented C# code?
Thanks, Joel
A:
In python the pprint module is available.
Depending on how your data is structured it may not return the result you're looking for.
A:
Once you have an AST, this should be very easy. When you walk your AST, all you have to do is keep track of what your current indent level is -- you could use a global for this. The code that's walking the tree simply needs to increment the indent level every time you enter a block, and decrement it when you exit a block. Then, whenever you print a line of code, you call it like this:
print "\t"*indentlevel + code
You should end up with nicely formatted code. However, I'm a bit confused that you're asking this question -- if you have the skills to parse C# into an AST, I can't imagine you wouldn't be able to write a pretty-printing output function. :-)
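For concreteness, here is a minimal sketch of such a walker; the node shape used here (kind, children, code) is entirely hypothetical:
def emit(node, indentlevel=0):
    # walk the AST, bumping the indent level for each nested block
    pad = "\t" * indentlevel
    if node.kind == "block":
        print pad + "{"
        for child in node.children:
            emit(child, indentlevel + 1)
        print pad + "}"
    else:
        print pad + node.code + ";"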
| Pretty-printing C# from Python | Suppose I wrote a compiler in Python or Ruby that translates a language into a C# AST.
How do I pretty-print this AST from Python or Ruby to get nicely indented C# code?
Thanks, Joel
| [
"In python the pprint module is available.\nDepending on how your data is structured it may not return the result your looking for.\n",
"Once you have an AST, this should be very easy. When you walk your AST, all you have to do is keep track of what your current indent level is -- you could use a global for this. The code that's walking the tree simply needs to increment the indent level every time you enter a block, and decrement it when you exit a block. Then, whenever you print a line of code, you call it like this:\nprint \"\\t\"*indentlevel + code\n\nYou should end up with nicely formatted code. However, I'm a bit confused that you're asking this question -- if you have the skills to parse C# into an AST, I can't imagine you wouldn't be able to write a pretty-printing output function. :-)\n"
] | [
1,
1
] | [
"One way would be to just print it and then invoke a code formatter.\n"
] | [
-2
] | [
"c#",
"parsing",
"pretty_print",
"python",
"ruby"
] | stackoverflow_0000734413_c#_parsing_pretty_print_python_ruby.txt |
Q:
Apply multiple negative regex to expression in Python
This question is similar to "How to concisely cascade through multiple regex statements in Python" except instead of matching one regular expression and doing something I need to make sure I do not match a bunch of regular expressions, and if no matches are found (aka I have valid data) then do something. I have found one way to do it but am thinking there must be a better way, especially if I end up with many regular expressions.
Basically I am filtering URLs for bad stuff ("", \\", etc.) that occurs when I yank what looks like a valid URL out of an HTML document but it turns out to be part of some JavaScript (and thus needs to be evaluated, and thus the escaping characters). I can't use BeautifulSoup to process these pages since they are far too mangled (actually I use BeautifulSoup, then fall back to my ugly but workable parser).
So far I have found the following works relatively well: I compile a dict of regular expressions outside the main loop (so I only have to compile each once, but benefit from the speed increase every time I use it). I then loop a URL through this dict; if there is a match the URL is bad, if not the URL is good:
regex_bad_url = {"1" : re.compile('\"\"'),
"2" : re.compile('\\\"')}
Followed by:
url_state = "good"
for key, pattern in regex_bad_url_components.items():
match = re.search(pattern, url)
if (match):
url_state = "bad"
if (url_state == "good"):
# do stuff here ...
Now the obvious thought is to use regex "or" ("|"), i.e.:
re.compile('(\"\"|\\\")')
Which reduces the number of compares and whatnot, but makes it much harder to trouble shoot (with one expression per compare I can easily add a print statement like:
print "URL: ", url, " matched by key ", key
So is there some way to get the best of both worlds (i.e. a minimal number of compares) yet still be able to print out which regex is matching the URL, or do I simply need to bite the bullet, keep my slower but easier-to-troubleshoot code for debugging, and then squoosh all the regexes together into one line for production? (Which means one more step of programming and code maintenance, and possible problems.)
Update:
Good answer by Dave Webb, so the actual code for this would look like:
match = re.search(r'(?P<double_quotes>\"\")|(?P<slash_quote>\\\")', fullurl)
if (match == None):
    # do stuff here ...
else:
    # optional for debugging
    print "url matched by", match.lastgroup
A:
"Squoosh" all the regexes into one line but put each in a named group using (?P<name>...) then use MatchOjbect.lastgroup to find which matched.
| Apply multiple negative regex to expression in Python | This question is similar to "How to concisely cascade through multiple regex statements in Python" except instead of matching one regular expression and doing something I need to make sure I do not match a bunch of regular expressions, and if no matches are found (aka I have valid data) then do something. I have found one way to do it but am thinking there must be a better way, especially if I end up with many regular expressions.
Basically I am filtering URL's for bad stuff ("", \\", etc.) that occurs when I yank what looks like a valid URL out of an HTML document but it turns out to be part of a JavaScript (and thus needs to be evaluated, and thus the escaping characters). I can't use Beautiful soup to process these pages since they are far to mangled (actually I use BeautifulSoup, then fall back to my ugly but workable parser).
So far I have found the following works relatively well: I compile a dict or regular expressions outside the main loop (so I only have to compile it once, but benefit from the speed increase every time I use it), I then loop a URL through this dict, if there is a match then the URL is bad, if not the url is good:
regex_bad_url = {"1" : re.compile('\"\"'),
"2" : re.compile('\\\"')}
Followed by:
url_state = "good"
for key, pattern in regex_bad_url_components.items():
match = re.search(pattern, url)
if (match):
url_state = "bad"
if (url_state == "good"):
# do stuff here ...
Now the obvious thought is to use regex "or" ("|"), i.e.:
re.compile('(\"\"|\\\")')
Which reduces the number of compares and whatnot, but makes it much harder to trouble shoot (with one expression per compare I can easily add a print statement like:
print "URL: ", url, " matched by key ", key
So is there someway to get the best of both worlds (i.e. minimal number of compares) yet still be able to print out which regex is matching the URL, or do I simply need to bite the bullet and have my slower but easier to troubleshoot code when debugging and then squoosh all the regex's together into one line for production? (which means one more step of programming and code maintenance and possible problems).
Update:
Good answer by Dave Webb, so the actual code for this would look like:
match = re.search(r'(?P<double_quotes>\"\")|(?P<slash_quote>\\\")', fullurl)
if (match == None):
# do stuff here ...
else:
#optional for debugging
print "url matched by", match.lastgroup
| [
"\"Squoosh\" all the regexes into one line but put each in a named group using (?P<name>...) then use MatchOjbect.lastgroup to find which matched.\n"
] | [
2
] | [] | [] | [
"coding_style",
"python",
"regex"
] | stackoverflow_0000739651_coding_style_python_regex.txt |
Q:
Iterating over object instances of a given class in Python
Given a class that keeps a registry of its Objects:
class Person(object):
    __registry = []

    def __init__(self, name):
        self.__registry.append(self)
        self.name = name
How would I make the following code work (without using Person.__registry):
for personobject in Person:
    print personobject
While researching I found a hint that one could go for a __metaclass__ with a __getitem__-method. Any ideas how this would look like?
A:
You can make your class object iterable with a simple metaclass.
class IterRegistry(type):
    def __iter__(cls):
        return iter(cls._registry)

class Person(object):
    __metaclass__ = IterRegistry
    _registry = []

    def __init__(self, name):
        self._registry.append(self)
        self.name = name
(I have also changed __registry to _registry to make it easier to access from the metaclass).
Then,
>>> p = Person('John')
>>> p2 = Person('Mary')
>>> for personobject in Person:
...     print personobject
...
<person.Person object at 0x70410>
<person.Person object at 0x70250>
A:
First, do not use double __ names. They're reserved for use by Python. If you want "private" use single _.
Second, keep this kind of thing as simple as possible. Don't waste a lot of time and energy on something complex. This is a simple problem, keep the code as simple as possible to get the job done.
class Person(object):
    _registry = []

    def __init__(self, name):
        self._registry.append(self)
        self.name = name

for p in Person._registry:
    print p
A:
you can do it with:
for item in Person.__registry:
    print(item)
| Iterating over object instances of a given class in Python | Given a class that keeps a registry of its Objects:
class Person(object):
__registry = []
def __init__(self, name):
self.__registry.append(self)
self.name = name
How would I make the following code work (without using Person.__registry):
for personobject in Person:
print personobject
While researching I found a hint that one could go for a __metaclass__ with a __getitem__-method. Any ideas how this would look like?
| [
"You can make your class object iterable with a simple metaclass.\nclass IterRegistry(type):\n def __iter__(cls):\n return iter(cls._registry)\n\nclass Person(object):\n __metaclass__ = IterRegistry\n _registry = []\n\n def __init__(self, name):\n self._registry.append(self)\n self.name = name\n\n(I have also changed __registry to _registry to make it easier to access from the metaclass).\nThen,\n>>> p = Person('John')\n>>> p2 = Person('Mary')\n>>> for personobject in Person:\n... print personobject\n...\n<person.Person object at 0x70410>\n<person.Person object at 0x70250>\n\n",
"First, do not use double __ names. They're reserved for use by Python. If you want \"private\" use single _.\nSecond, keep this kind of thing as simple as possible. Don't waste a lot of time and energy on something complex. This is a simple problem, keep the code as simple as possible to get the job done.\nclass Person(object):\n _registry = []\n\n def __init__(self, name):\n self._registry.append(self)\n self.name = name\n\nfor p in Person._registry:\n print p\n\n",
"you can do it with:\nfor item in Person.__registry:\n print(item)\n\n"
] | [
33,
15,
4
] | [] | [] | [
"oop",
"python"
] | stackoverflow_0000739882_oop_python.txt |
Q:
Is it possible to update a Google calendar from App Engine without logging in as the owner?
I'd like to be able to use the Google Data API from an AppEngine application to update a calendar while not logged in as the calendar's owner or a user that the calendar is shared with. This is in contrast to the examples here:
http://code.google.com/appengine/articles/more_google_data.html
The login and password for the calendar's owner could be embedded in the application. Is there any way to accomplish the necessary authentication?
A:
It should be possible using OAuth. I haven't used it myself, but my understanding is that the user logs in and then gives your app permission to access their private data (e.g. Calendar records). Once they have authorised your app you will be able to access their data without them logging in.
Here is an article explaining oauth and the google data api.
http://code.google.com/apis/gdata/articles/oauth.html
A:
It's possible to use ClientLogin as described here:
http://code.google.com/apis/accounts/docs/AuthForInstalledApps.html#Response
Note the section at the bottom of the document that mentions handling a CAPTCHA challenge.
There's example code included in the gdata python client in
samples/calendar/calendarExample.py
You need to call run_on_app_engine with the right arguments to make this work as described in the Appendix here:
http://code.google.com/appengine/articles/gdata.html
Note that the same document recommends against using ClientLogin for web apps. Using OAuth or AuthSub is the correct solution, but this is simpler and good enough for testing.
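As a rough sketch (the exact setup is in the linked samples; treat this as an outline rather than the authoritative version), ClientLogin with the gdata Python client looks something like:
import gdata.calendar.service

service = gdata.calendar.service.CalendarService()
service.email = 'calendar.owner@example.com'  # the embedded owner credentials
service.password = 'secret'
service.source = 'example-calendar-updater'   # arbitrary application identifier
service.ProgrammaticLogin()                   # may raise a CAPTCHA challenge

feed = service.GetCalendarEventFeed()         # the owner's default calendar
for event in feed.entry:
    print event.title.text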
| Is it possible to update a Google calendar from App Engine without logging in as the owner? | I'd like to be able to use the Google Data API from an AppEngine application to update a calendar while not logged in as the calendar's owner or the a user that the calendar is shared with. This is in contrast to the examples here:
http://code.google.com/appengine/articles/more_google_data.html
The login and password for the calendar's owner could be embedded in the application. Is there any way to accomplish the necessary authentication?
| [
"It should be possible using OAuth, i havent used it myself but my understanding is the user logs in and then gives your app permission to access their private data (e.g. Calendar records). Once they have authorised your app you will be able to access their data without them logging in.\nHere is an article explaining oauth and the google data api.\nhttp://code.google.com/apis/gdata/articles/oauth.html\n",
"It's possible to use ClientLogin as described here:\nhttp://code.google.com/apis/accounts/docs/AuthForInstalledApps.html#Response\nNote the section at the bottom of the document that mentions handling a CAPTCHA challenge. \nThere's example code included in the gdata python client in\nsamples/calendar/calendarExample.py\nYou need to call run_on_app_engine with the right arguments to make this work as described in the Appendix here:\nhttp://code.google.com/appengine/articles/gdata.html\nNote that the same document recommends against using ClientLogin for web apps. Using OAuth or AuthSub is the correct solution, but this is simpler and good enough for testing.\n"
] | [
3,
1
] | [] | [] | [
"google_app_engine",
"python",
"web_services"
] | stackoverflow_0000723719_google_app_engine_python_web_services.txt |
Q:
Python: Locks from `threading` and `multiprocessing` interchangable?
Are the locks from the threading module interchangeable with those from the multiprocessing module?
A:
You can typically use the two interchangeably, but you need to be cognizant of the differences. For example, multiprocessing.Event is backed by a named semaphore, which makes it sensitive to the platform underneath the application.
Multiprocessing.Lock is backed by Multiprocessing.SemLock - so it needs named semaphores. In essence, you can use them interchangeably, but using multiprocessing's locks introduces some platform requirements on the application (namely, it doesn't run on BSD :))
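A small illustration of the practical difference (just a sketch; both classes expose the same acquire/release interface):
import multiprocessing
import threading

def worker(lock):
    with lock:
        print "holding the lock"

if __name__ == '__main__':
    # a multiprocessing.Lock is semaphore-backed and can cross processes
    mlock = multiprocessing.Lock()
    p = multiprocessing.Process(target=worker, args=(mlock,))
    p.start(); p.join()

    # a threading.Lock only coordinates threads inside this one process
    tlock = threading.Lock()
    t = threading.Thread(target=worker, args=(tlock,))
    t.start(); t.join()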
A:
I don't think so. Threading locks are within the same process, while the multiprocessing lock would likely be in shared memory.
Last time I checked, multiprocessing doesn't allow you to share the lock in a Queue, which is a threading lock.
A:
Yes, you can use locks from the multiprocessing module as normal in your one-process application, but if you're using multiprocessing, you should use its locks.
| Python: Locks from `threading` and `multiprocessing` interchangable? | Are the locks from the threading module interchangeable with those from the multiprocessing module?
| [
"You can typically use the two interchangeably, but you need to cognizant of the differences. For example, multiprocessing.Event is backed by a named semaphore, which is sensitive to the platform under the application. \nMultiprocessing.Lock is backed by Multiprocessing.SemLock - so it needs named semaphores. In essence, you can use them interchangeably, but using multiprocessing's locks introduces some platform requirements on the application (namely, it doesn't run on BSD :)) \n",
"I don't think so. Threading locks are within the same process, while the multiprocessing lock would likely be in shared memory.\nLast time I checked, multiprocessing doesn't allow you to share the lock in a Queue, which is a threading lock.\n",
"Yes, you can use locks from the multiprocessing module as normal in your one-process application, but if you're using multiprocessing, you should use its locks.\n"
] | [
8,
1,
1
] | [] | [] | [
"locking",
"multiprocessing",
"multithreading",
"python"
] | stackoverflow_0000739687_locking_multiprocessing_multithreading_python.txt |
Q:
Organizing a large Python project that must share an internal state?
I'm currently in the middle of porting a fairly large Perl program to Python. The problem is that it uses little Perl tricks to make its code available for "use"-ing. I've done about the same with Python, making the codebase one big module for importing. I've had a firm grasp of Python for a long time, but I have no experience with large projects written in Python that need to access other parts of themselves while maintaining an internal state.
I haven't yet tried simply importing the entire thing in one line (import core), but I know I'm currently not doing things in the best of ways. Here's an example from the master script that sets everything in motion:
self.Benchmark = Benchmark(self)
self.Exceptions = Exceptions
self.Settings = Settings(self)
self.Cache = Cache(self)
self.Deal = Deal(self)
self.Utils = Utils(self)
self.FileParsers = FileParsers(self)
self.Network = Network(self)
self.Plugins = Plugins(self)
self.Misc = Misc(self)
It works, but I'm not happy with it. Right now, the master class script imports each piece of the core module and creates an instance of the contained classes, passing itself as an argument to __init__ in those classes. Like so:
class FileParsers:
    def __init__(self, parent):
        self.parent = parent
Now the code in that class can access the entire rest of the codebase through the parent class.
self.parent.Settings.loadSysConfig()
So my question is this: considering the above, what would be the best way to reorganize the project and refactor the code so that it retains its current ability to access everything else? The code is very noncritical, so I'm not that worried about internal data integrity, I just don't like having to go through the parent class in such an ugly way. And those long chains slow the code down as well.
EDIT: Whoops, forgot these: links to the SVN repos for both projects. Mine is here, and the project I'm porting is here.
A:
It's really hard to tell without actually being able to see the code, but you should probably just consider importing the items that each module uses, in that module. It's not unusual to have a long list of imports - here's an example from my own website:
# standard
import inspect
import linecache
import neo_cgi
import neo_cs
import neo_util
import os
import random
import sys
import time
from _apache import SERVER_RETURN
from mod_python import apache
from mod_python import util
from mod_python.util import FieldStorage
from os.path import dirname, isfile, join, splitext
# set up path
pydir = dirname(__file__)
if pydir not in sys.path:
    sys.path.append(pydir)
# things I wrote
import auth
import handlers.accounts, handlers.publish, handlers.standard
import logger
import markup
import programs
import summarize
from auth import check_auth
from common import hdf_iterate, load_hdf_cgi_vars, load_hdf_common_vars
from common import hdf_insert_value, hdf_insert_list, hdf_insert_dict
from handlers import chain, farm, opt
from handlers import URIPrefixFilter
from handlers.standard import TabBarHandler
and I'm sure a lot of larger modules have even longer lists.
In your case, maybe have a Settings module with a singleton object (or with the settings as module properties) and do
import Settings
or whatever.
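For example, a hypothetical Settings.py whose module-level globals act as the single shared instance:
# Settings.py -- module globals are effectively a singleton
basedir = '/opt/myapp'
debug = False

def loadSysConfig(path='/etc/myapp.conf'):
    # read key=value pairs and rebind the module globals
    for line in open(path):
        key, sep, value = line.partition('=')
        if sep:
            globals()[key.strip()] = value.strip()
Every module that does import Settings then sees the same values.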
A:
what would be the best way to reorganize the project and refactor the code so that it retains its current ability to access everything else?
I think you're actually quite close already, and probably better than many Python projects where they just assume that there is only one instance of the application, and store application-specific values in a module global or singleton.
(This is OK for many simple applications, but really it's nicest to be able to bundle everything up into one Application object that owns all inner classes and methods that need to know the application's state.)
The first thing I would do from the looks of the code above would be to factor out any of those modules and classes that aren't a core competency of your application, things that don't necessarily need access to the application's state. Names like “Utils” and “Misc” sound suspiciously like much of their contents aren't really specific to your app; they could perhaps be refactored out into separate standalone modules, or submodules of your package that only have static functions, stuff not relying on application state.
Next, I would put the main owner Application class in the package's __init__.py rather than a ‘master script’. Then from your run-script or just the interpreter, you can get a complete instance of the application as simply as:
import myapplication
a= myapplication.Application()
You could also consider moving any basic deployment settings from the Settings class into the initialiser:
a= myapplication.Application(basedir= '/opt/myapp', site= 'www.example.com', debug= False)
(If you only have one possible set of settings and every time you instantiate Application() you get the same one, there's little use in having all this ability to encapsulate your whole application; you might as well simply be using module globals.)
What I'm doing with some of my apps is making the owned classes monkey-patch themselves into actual members of the owner application object:
# myapplication/__init__.py

class Application(object):
    def __init__(self, dbfactory, debug):
        # ...
        self.mailer= self.Mailer(self)
        self.webservice= self.Webservice(self)
        # ...

import myapplication.mailer, myapplication.webservice


# myapplication/mailer.py

import myapplication

class Mailer(object):
    def __init__(self, owner):
        self.owner= owner

    def send(self, message, recipients):
        # ...

myapplication.Application.Mailer= Mailer
Then it's possible to extend, change or configure the Application from outside it by replacing/subclassing the inner classes:
import myapplication

class MockApplication(myapplication.Application):
    class Mailer(myapplication.Application.Mailer):
        def send(self, message, recipients):
            self.owner.log('Mail send called (not actually sent)')
            return True
I'm not that worried about internal data integrity
Well no, this is Python not Java: we don't worry too much about Evil Programmers using properties and methods they shouldn't, we just put ‘_’ at the start of the name and let that be a suitable warning to all.
And those long chains slow the code down as well.
Not really noticeably. Readability is the important factor; anything else is premature optimisation.
| Organizing a large Python project that must share an internal state? | I'm currently in the middle of porting a fairly large Perl The problem is that it uses little Perl tricks to make its code available for useing. I've done about the same with Python, making the codebase one big module for importing. I've had a firm grasp of Python for a long time, but I have no experience with large projects written in Python that need to access other parts of itself while maintaining an internal state.
I haven't yet tried simply importing the entire thing in one line (import core), but I know I'm currently not doing things in the best of ways. Here's an example from the master script that sets everything in motion:
self.Benchmark = Benchmark(self)
self.Exceptions = Exceptions
self.Settings = Settings(self)
self.Cache = Cache(self)
self.Deal = Deal(self)
self.Utils = Utils(self)
self.FileParsers = FileParsers(self)
self.Network = Network(self)
self.Plugins = Plugins(self)
self.Misc = Misc(self)
It works, but I'm not happy with it. Right now, the master class script imports each piece of the core module and creates an instance of the contained classes, passing itself as an argument to __init__ in those classes. Like so:
class FileParsers:
def __init__(self, parent):
self.parent = parent
Now the code in that class can access the entire rest of the codebase through the parent class.
self.parent.Settings.loadSysConfig()
So my question is this: considering the above, what would be the best way to reorganize the project and refactor the code so that it retains its current ability to access everything else? The code is very noncritical, so I'm not that worried about internal data integrity, I just don't like having to go through the parent class in such an ugly way. And those long chains slow the code down as well.
EDIT: Whoops, forgot these: links to the SVN repos for both project. Mine is here, and the project I'm porting is here.
| [
"It's really hard to tell without actually being able to see the code, but you should probably just consider importing the items that each module uses, in that module. It's not unusual to have a long list of imports - here's an example from my own website:\n# standard\nimport inspect\nimport linecache\nimport neo_cgi\nimport neo_cs\nimport neo_util\nimport os\nimport random\nimport sys\nimport time\nfrom _apache import SERVER_RETURN\nfrom mod_python import apache\nfrom mod_python import util\nfrom mod_python.util import FieldStorage\nfrom os.path import dirname, isfile, join, splitext\n\n# set up path\npydir = dirname(__file__)\nif pydir not in sys.path:\n sys.path.append(pydir)\n\n# things I wrote\nimport auth\nimport handlers.accounts, handlers.publish, handlers.standard\nimport logger\nimport markup\nimport programs\nimport summarize\nfrom auth import check_auth\nfrom common import hdf_iterate, load_hdf_cgi_vars, load_hdf_common_vars\nfrom common import hdf_insert_value, hdf_insert_list, hdf_insert_dict\nfrom handlers import chain, farm, opt\nfrom handlers import URIPrefixFilter\nfrom handlers.standard import TabBarHandler\n\nand I'm sure a lot of larger modules have even longer lists.\nIn your case, maybe have a Settings module with a singleton object (or with the settings as module properties) and do\nimport Settings\n\nor whatever.\n",
"\nwhat would be the best way to reorganize the project and refactor the code so that it retains its current ability to access everything else?\n\nI think you're actually quite close already, and probably better than many Python projects where they just assume that there is only one instance of the application, and store application-specific values in a module global or singleton.\n(This is OK for many simple applications, but really it's nicest to be able to bundle everything up into one Application object that owns all inner classes and methods that need to know the application's state.)\nThe first thing I would do from the looks of the code above would be to factor out any of those modules and classes that aren't a core competency of your application, things that don't necessarily need access to the application's state. Names like “Utils” and “Misc” sound suspiciously like much of their contents aren't really specific to your app; they could perhaps be refactored out into separate standalone modules, or submodules of your package that only have static functions, stuff not relying on application state.\nNext, I would put the main owner Application class in the package's __init__.py rather than a ‘master script’. Then from your run-script or just the interpreter, you can get a complete instance of the application as simply as:\nimport myapplication\n\na= myapplication.Application()\n\nYou could also consider moving any basic deployment settings from the Settings class into the initialiser:\na= myapplication.Application(basedir= '/opt/myapp', site= 'www.example.com', debug= False)\n\n(If you only have one possible set of settings and every time you instantiate Application() you get the same one, there's little use in having all this ability to encapsulate your whole application; you might as well simply be using module globals.)\nWhat I'm doing with some of my apps is making the owned classes monkey-patch themselves into actual members of the owner application object:\n# myapplication/__init__.py\n\nclass Application(object):\n def __init__(self, dbfactory, debug):\n # ...\n self.mailer= self.Mailer(self)\n self.webservice= self.Webservice(self)\n # ...\n\nimport myapplication.mailer, myapplication.webservice\n\n\n# myapplication/mailer.py\n\nimport myapplication\n\nclass Mailer(object):\n def __init__(self, owner):\n self.owner= owner\n\n def send(self, message, recipients):\n # ...\n\nmyapplication.Application.Mailer= Mailer\n\nThen it's possible to extend, change or configure the Application from outside it by replacing/subclassing the inner classes:\nimport myapplication\n\nclass MockApplication(myapplication.Application):\n class Mailer(myapplication.Application.Mailer):\n def send(self, message, recipients):\n self.owner.log('Mail send called (not actually sent)')\n return True\n\n\nI'm not that worried about internal data integrity\n\nWell no, this is Python not Java: we don't worry too much about Evil Programmers using properties and methods they shouldn't, we just put ‘_’ at the start of the name and let that be a suitable warning to all.\n\nAnd those long chains slow the code down as well.\n\nNot really noticeably. Readability is the important factor; anything else is premature optimisation.\n"
] | [
1,
0
] | [] | [] | [
"code_organization",
"project_management",
"python"
] | stackoverflow_0000739311_code_organization_project_management_python.txt |
Q:
Insert Command into Bash Shell
Is there any way to inject a command into a bash prompt in Linux? I am working on a command history app - like the Ctrl+R lookup but different. I am using python for this.
I will show a list of commands from history based on the user's search term - if the user presses enter, the app will execute the command and print the results. So far, so good.
If the user chooses a command and then press the right or left key, I want to insert the command into the prompt - so that the user can edit the command before executing it.
If you are on Linux, just fire up a bash console, press Ctrl+r, type cd (or something), and then press the right arrow key - the selected command will be shown at the prompt. This is the functionality I am looking for - but I want to know how to do that from within Python.
A:
You can do this, but only if the shell runs as a subprocess of your Python program; you can't feed content into the stdin of your parent process. (If you could, UNIX would have a host of related security issues when folks run processes with fewer privileges than the calling shell!)
If you're familiar with how Expect allows passthrough to interactive subprocesses (with specific key sequences from the user or strings received from the child process triggering matches and sending control back to your program), the same thing can be done from Python with pexpect. Alternately, as another post mentioned, the curses module provides full control over the drawing of terminal displays -- which you'll want if this history menu is happening within the window rather than in a graphical (X11/win32) pop-up.
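A minimal pexpect sketch of that passthrough pattern (the prompt regex below is an assumption about your shell's prompt):
import pexpect

child = pexpect.spawn('bash')
child.expect(r'\$ ')        # wait for the (assumed) prompt
child.sendline('ls -l')     # inject the chosen history command
child.expect(r'\$ ')
print child.before          # output produced before the next prompt
child.interact()            # hand the terminal back to the user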
A:
See readline module. It implements all these features.
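In particular, the startup-hook trick lets you pre-fill the prompt with a command the user can then edit before hitting enter; a minimal sketch:
import readline

def input_with_prefill(prompt, text):
    # insert_text only takes effect via the startup hook, just before the read
    readline.set_startup_hook(lambda: readline.insert_text(text))
    try:
        return raw_input(prompt)
    finally:
        readline.set_startup_hook(None)

command = input_with_prefill('$ ', 'cd /tmp')  # user edits, then presses enter
print 'running:', command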
A:
If I understand correctly, you would like history behaviour similar to that of bash in
a python app. If this is what you want the GNU Readline Library is the way to go.
There is a python wrapper GNU readline interface but it runs only on Unix.
readline.py seems to be a version for Windows, but I never tried it.
A:
ncurses with its python port is a way to go, IMHO.
| Insert Command into Bash Shell | Is there any way to inject a command into a bash prompt in Linux? I am working on a command history app - like the Ctrl+R lookup but different. I am using python for this.
I will show a list of commands from history based on the user's search term - if the user presses enter, the app will execute the command and print the results. So far, so good.
If the user chooses a command and then press the right or left key, I want to insert the command into the prompt - so that the user can edit the command before executing it.
If you are on Linux, just fire up a bash console, press Ctrl+r, type cd(or something), and then press the right arrow key - the selected command will be shown at the prompt. This is the functionality I am looking for - but I want to know how to do that from within python.
| [
"You can do this, but only if the shell runs as a subprocess of your Python program; you can't feed content into the stdin of your parent process. (If you could, UNIX would have a host of related security issues when folks run processes with fewer privileges than the calling shell!)\nIf you're familiar with how Expect allows passthrough to interactive subprocesses (with specific key sequences from the user or strings received from the child process triggering matches and sending control back to your program), the same thing can be done from Python with pexpect. Alternately, as another post mentioned, the curses module provides full control over the drawing of terminal displays -- which you'll want if this history menu is happening within the window rather than in a graphical (X11/win32) pop-up.\n",
"See readline module. It implements all these features.\n",
"If I understand correctly, you would like history behaviour similar to that of bash in \na python app. If this is what you want the GNU Readline Library is the way to go.\nThere is a python wrapper GNU readline interface but it runs only on Unix.\nreadline.py is seem to be a version for Windows, but I never tried it.\n",
"ncurses with its python port is a way to go, IMHO.\n"
] | [
3,
3,
3,
1
] | [] | [] | [
"bash",
"command",
"linux",
"python",
"shell"
] | stackoverflow_0000524068_bash_command_linux_python_shell.txt |
Q:
Python: Good place to learn about `multiprocessing.Manager`?
I want to learn to use multiprocessing.Manager. I looked at the documentation but it's not easy enough for me. Anyone knows of a good tutorial or something like that?
A:
The documentation of multiprocessing.Manager contains extensive examples for using a Manager and the various objects associated with the class:
Managers provide a way to create data
which can be shared between different
processes. A manager object controls a
server process which manages shared
objects. Other processes can access
the shared objects by using proxies.
[and so on]
Manager objects allow you to create variables shared in multiple processes. What is it that you exactly want to achieve? Maybe a request for an example would help?
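For example, a minimal sketch of sharing a dict between processes through a Manager:
from multiprocessing import Manager, Process

def worker(shared, key, value):
    shared[key] = value           # the proxy forwards this to the server process

if __name__ == '__main__':
    manager = Manager()
    shared = manager.dict()       # proxy to a dict held by the manager's server
    procs = [Process(target=worker, args=(shared, i, i * i)) for i in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print dict(shared)            # e.g. {0: 0, 1: 1, 2: 4, 3: 9}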
| Python: Good place to learn about `multiprocessing.Manager`? | I want to learn to use multiprocessing.Manager. I looked at the documentation but it's not easy enough for me. Anyone knows of a good tutorial or something like that?
| [
"The documentation of multiprocessing.Manager contains extensive examples for using a Manager and the various objects associated with the calss:\n\nManagers provide a way to create data\n which can be shared between different\n processes. A manager object controls a\n server process which manages shared\n objects. Other processes can access\n the shared objects by using proxies.\n [and so on]\n\nManager objects allow you to create variables shared in multiple processes. What is it that you exactly want to achieve? Maybe a request for an example would help?\n"
] | [
6
] | [] | [] | [
"multiprocessing",
"python"
] | stackoverflow_0000740848_multiprocessing_python.txt |
Q:
Can InstantDjango be Used Rather than the Normal Installation
Is it possible to do development just using Instant Django? Do I need to have the normal version working or can I just use this instant version? Has anyone used it?
A:
It is, of course, possible to use InstantDjango for development. InstantDjango uses SQLite3, which is a perfectly reasonable relational database for embedded or light/sometimes-moderate use. The whole purpose of django is that the ORM layer gives you database portability.
That said, I would not use InstantDjango for deployment in a halfway-serious web app. SQLite just does not scale anywhere near as far as Apache (etc) with MySQL/Postgres. In some cases, the way that SQLite handles data types (or, rather, glosses over data types) can lead to issues with a django app that is subsequently deployed with MySQL/Postgres... if you develop using SQLite, always test with your actual deployment environment before going live.
You've asked a number of questions on SO in the last couple days about deploying Django with one or the other of the major relational database packages (Getting started with Django-Instant Django ; Is it Me or Are Rails and Django Difficult to Install on Windows? ). I suspect the reason you've not had many answers, and therefore feel the need to keep asking the same question with different phrasing, is that we need more specific examples of the errors you're having.
Plenty of folks install Django with MySQL, Postgres, and other databases, every day on Windows and *nix systems. If you give us the exact details of which non-SQLite database you're trying to use, the way you've installed it, how your settings for that database are configured in django, and the error messages you're getting, we will have a better shot at helping you.
If you're still having trouble based on the answers you've had, perhaps you can turn to a professional system administrator and/or DBA you know to show you the ropes with installing and configuring this kind of software.
Until that time, by all means, start developing using InstantDjango and SQLite. It will not have to be thrown away for vastly re-written when you migrate to a different relational database, and will help you make forward-progress with the framework that can only bolster your knowledge for understanding how to deploy it in production.
| Can InstantDjango be Used Rather than the Normal Installation | Is it possible to do development just using Instant Django? Do I need to have the normal version working or can I just use this instant version? Has anyone used it?
| [
"It is, of course, possible to use InstantDjango for development. InstantDjango uses SQLite3, which is a perfectly reasonable relational database for embedded or light/sometimes-moderate use. The whole purpose of django is that the ORM layer gives you database portability.\nThat said, I would not use InstantDjango for deployment in a halfway-serious web app. SQLite just does not scale anywhere near as far as Apache (etc) with MySQL/Postgres. In some cases, the way that SQLite handles data types (or, rather, glosses over data types) can lead to issues with a django app that is subsequently deployed with MySQL/Postgres... if you develop using SQLite, always test with your actual deployment environment before going live.\nYou've asked a number of questions on SO in the last couple days about deploying Django with one or the other of the major relational database packages (Getting started with Django-Instant Django ; Is it Me or Are Rails and Django Difficult to Install on Windows? ). I suspect the reason you've not had many answers, and therefore feel the need to keep asking the same question with different phrasing, is that we need more specific examples of the errors you're having. \nPlenty of folks install Django with MySQL, Postgres, and other databases, every day on Windows and *nix systems. If you give us the exact details of which non-SQLite database you're trying to use, the way you've installed it, how your settings for that database are configured in django, and the error messages you're getting, we will have a better shot at helping you. \nIf you're still having trouble based on the answers you've had, perhaps you can turn to a professional system administrator and/or DBA you know to show you the ropes with installing and configuring this kind of software.\nUntil that time, by all means, start developing using InstantDjango and SQLite. It will not have to be thrown away for vastly re-written when you migrate to a different relational database, and will help you make forward-progress with the framework that can only bolster your knowledge for understanding how to deploy it in production.\n"
] | [
4
] | [] | [] | [
"django",
"instant",
"python"
] | stackoverflow_0000740929_django_instant_python.txt |
Q:
Django: Adding additional properties to Model Class Object
This is using Google App Engine. I am not sure if this is applicable to just normal Django development or if Google App Engine will play a part. If it does, would you let me know so I can update the description of this problem.
class MessageModel(db.Model):
    to_user_id = db.IntegerProperty()
    to_user = db.StringProperty(multiline=False)
    message = db.StringProperty(multiline=False)
    date_created = db.DateTimeProperty(auto_now_add=True)
Now when I do a query I get a list of "MessageModel" and send it to the template.html to bind against. I would like to include a few more properties, such as "since_date_created" to output how long ago each message was created, potentially play around with the message property, and add other parameters that will help with the layout such as "highlight", "background-color", etc...
The only way I thought of is to loop through the initial Query Object and create a new list where I would add the property values and then append it back to a list.
for msg in messagesSQL:
    msg.lalaland = "test"
    msg.since_created_time = 321932
    msglist.append(msg)
Then, instead of passing messagesSQL to the template.html, I will now pass it msglist.
A:
You should still be able to send messagesSQL to the template after you've added elements to it via the for loop. Python allows that sort of thing.
Something else that might make sense in some cases would be to give your MessageModel methods. For instance, if you have a
def since_date_created(self):
    '''Compute the time since creation time based on self.date_created.'''
Then (assuming you have "messagesSQL" in the template), you can use the function as
{% for msg in messagesSQL %}
    {{ msg.since_date_created }}
{% endfor %}
Basically, you can call any method in the model as long as it needs no arguments passed to it.
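A possible body for that method, assuming you want the elapsed seconds (App Engine's auto_now_add stores UTC, hence utcnow; this is just a sketch of the method on the model):
import datetime

def since_date_created(self):
    '''Seconds elapsed since this message was created.'''
    delta = datetime.datetime.utcnow() - self.date_created
    return delta.days * 86400 + delta.seconds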
A:
You can obtain that by defining methods in the model
like
class MessageModel(db.Model):
    # Definition
    def since_date_created(self):
        # ...
Now in the template, you can use it like
Time since created {{ message.since_date_created }}
| Django: Adding additional properties to Model Class Object | This is using Google App Engine. I am not sure if this is applicable to just normal Django development or if Google App Engine will play a part. If it does, would you let me know so I can update the description of this problem.
class MessageModel(db.Model):
to_user_id = db.IntegerProperty()
to_user = db.StringProperty(multiline=False)
message = db.StringProperty(multiline=False)
date_created = db.DateTimeProperty(auto_now_add=True)
Now when I do a query a get a list of "MessageModel" and send it to the template.html to bind against, I would like to include a few more properties such as the "since_date_created" to output how long ago since the last output, potentially play around with the message property and add other parameters that will help with the layout such as "highlight" , "background-color" etc...
The only way I thought of is to loop through the initial Query Object and create a new list where I would add the property values and then append it back to a list.
for msg in messagesSQL:
msg.lalaland = "test"
msg.since_created_time = 321932
msglist.append(msg)
Then instead of passing the template.html messagesSQL, I will now pass it msglist.
| [
"You should still be able to send it messagesSQL to the template after you've added elements to it via the for loop. Python allows that sort of thing.\nSomething else that might make sense in some cases would be to give your MessageModel methods. For instance, if you have a \ndef since_date_created(self):\n '''Compute the time since creation time based on self.date_created.'''\n\nThen (assuming you have \"messagesSQL\" in the template), you can use the function as\n{% for msg in messagesSQL %}\n {{ msg.since_date_created }}\n{% endfor %}\n\nBasically, you can call any method in the model as long as you it needs no arguments passed to it.\n",
"You can obtain that by defining methods in the model\nlike\nclass MessageModel(db.Model):\n # Definition\n def since_date_created(self):\n # ...\n\nNow in the template, you can use it like\nTime since created {{ message.since_date_created }}\n\n"
] | [
5,
4
] | [] | [] | [
"django",
"google_app_engine",
"python"
] | stackoverflow_0000741270_django_google_app_engine_python.txt |
Q:
Parsing specific elements out of a very large HTML file
I have a very large HTML file (several megabytes). I know the data I want is under something like <div class=someName>here</div>
What is a good library to parse through the HTML page so I can loop through elements and grab each someName? I want to do this in either C#, Python or C++.
A:
I would use Python and BeautifulSoup for the job. It is very solid at handling this kind of stuff.
For your case, you can use SoupStrainer to make BeautifulSoup only parse DIVs in the document that have the class you want, so it doesn't have to have the whole thing in memory.
For example, say your document looks like this:
<div class="test">Hello World</div>
<div class="hello">Aloha World</div>
<div>Hey There</div>
You can write this:
>>> from BeautifulSoup import BeautifulSoup, SoupStrainer
>>> doc = '''
... <div class="test">Hello World</div>
... <div class="hello">Aloha World</div>
... <div>Hey There</div>
... '''
>>> findDivs = SoupStrainer('div', {'class':'hello'})
>>> [tag for tag in BeautifulSoup(doc, parseOnlyThese=findDivs)]
[<div class="hello">Aloha World</div>]
A:
The Html Agility Pack is a stellar option if you want to use C#
A:
Xerces is well documented, supported and tested. (C++)
http://xerces.apache.org/xerces-c/
(yes, it's an XML parser but it should do the trick)
A:
Sounds like a case for good old regular expressions.
Input:
<div class="test">Hello World</div>
<div class="somename">Aloha World</div>
<div>Hey There</div>
RegEx:
\<div\sclass\=\"somename\"\>(?<Text>.*?)\<\/div\>
Yields:
Aloha World (note: In a single group named Text)
Probably need to account for enclosing quotes missing etc...
Although with regular expressions now you have two problems.
A:
Give TinyXML a try. (C++ XML parser)
| Parsing specific elements out of a very large HTML file | I have a very large HTML file (several megabytes). I know the data I want is under something like <div class=someName>here</div>
What is a good library to parse through the HTML page so I can loop through elements and grab each someName? I want to do this in either C#, Python or C++.
| [
"I would use Python and BeautifulSoup for the job. It is very solid at handling this kind of stuff.\nFor your case, you can use SoupStrainer to make BeautifulSoup only parse DIVs in the document that have the class you want, so it doesn't have to have the whole thing in memory.\nFor example, say your document looks like this:\n<div class=\"test\">Hello World</div>\n<div class=\"hello\">Aloha World</div>\n<div>Hey There</div>\n\nYou can write this:\n>>> from BeautifulSoup import BeautifulSoup, SoupStrainer\n>>> doc = '''\n... <div class=\"test\">Hello World</div>\n... <div class=\"hello\">Aloha World</div>\n... <div>Hey There</div>\n... '''\n>>> findDivs = SoupStrainer('div', {'class':'hello'})\n>>> [tag for tag in BeautifulSoup(doc, parseOnlyThese=findDivs)]\n[<div class=\"hello\">Aloha World</div>]\n\n",
"The Html Agility Pack is a stellar option if you want to use C#\n",
"Xerces is well documented, supported and tested. (C++)\nhttp://xerces.apache.org/xerces-c/\n(yes, it's an XML parser but it should do the trick)\n",
"Sounds like a case for good old regular expressions.\nInput:\n<div class=\"test\">Hello World</div>\n<div class=\"somename\">Aloha World</div>\n<div>Hey There</div>\n\nRegEx:\n\\<div\\sclass\\=\\\"somename\\\"\\>(?<Text>.*?)\\<\\/div\\>\n\nYields:\nAloha World (note: In a single group named Text)\n\nProbably need to account for enclosing quotes missing etc...\nAlthough with regular expressions now you have two problems.\n",
"Give TinyXML a try. (C++ XML parser)\n"
] | [
12,
3,
1,
1,
0
] | [] | [] | [
"c#",
"c++",
"html",
"parsing",
"python"
] | stackoverflow_0000739325_c#_c++_html_parsing_python.txt |
Q:
Encapsulation vs. inheritance, help making a choice
I need to write handlers for several different case types (in Python). The interface for all these types is the same, but the handling logic is different.
One option would be defining a common class that receives the particular handler type as one of the __init__ parameters:
class Handler:
    def __init__ (self, handlerType):
        self._handlerType = handlerType
        self._handler = handlerType.handleStuff

    def handleStuff(self, *args, **kwargs):
        return self._handler(args, kwargs)

# case specific handlers
class Handler_Case1:
    def handleStuff(self, *args, **kwargs):
        print 'Handling Case 1'

class Handler_Case2:
    def handleStuff(self, *args, **kwargs):
        print 'Handling Case 2'

if __name__ == '__main__':
    handlers = []
    handlers.append(Handler(Handler_Case1))
    handlers.append(Handler(Handler_Case2))
    for h in handlers:
        h.handleStuff()
However, this results in a TypeError:
TypeError: unbound method handleStuff() must be called with Handler_Case1 instance as first argument (got tuple instance instead)
Another option is to mimic an abstract function, as shown here ("Q: Can you implement abstract classes in Python in 0 lines of code?"):
class Handler:
    def handleStuff(self, *args, **kwargs): abstract

    def commonFunction(self):
        print 'Common function'

# case specific handlers
class Handler_Case1(Handler):
    def handleStuff(self, *args, **kwargs):
        print 'Handling Case 1'

class Handler_Case2(Handler):
    def handleStuff(self, *args, **kwargs):
        print 'Handling Case 2'

if __name__ == '__main__':
    handlers = []
    h1 = (Handler_Case1())
    h2 = (Handler_Case2())
    handlers.append(h1)
    handlers.append(h2)
    for h in handlers:
        h.handleStuff()
        print
So, actually, I have two questions:
Which of the two approaches is more pythonic?
and
How to implement the first one?
A:
I might be missing some subtle intricacy in your question, but given your first example, what precludes you from doing something like this:
class HandlerCase1(object):
    def handle_stuff(self, *args, **kwargs):
        print "Handling case 1"


class HandlerCase2(object):
    def handle_stuff(self, *args, **kwargs):
        print "Handling case 2"


if __name__ == "__main__":
    handlers = []
    handlers.append(HandlerCase1())
    handlers.append(HandlerCase2())
    for h in handlers:
        h.handle_stuff()
And if you want the classes to share some common (base) functionality, is there something keeping you from doing this:
class Handler(object):
    def common_function(self):
        print "Common function"


class HandlerCase1(Handler):
    def handle_stuff(self, *args, **kwargs):
        print "Handling case 1"


class HandlerCase2(Handler):
    def handle_stuff(self, *args, **kwargs):
        print "Handling case 2"


if __name__ == "__main__":
    handlers = []
    handlers.append(HandlerCase1())
    handlers.append(HandlerCase2())
    for h in handlers:
        h.handle_stuff()
        h.common_function()
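And if you really do want the first, composition-style option from the question, the fix for the TypeError is simply to instantiate the handler type, so its method is bound (a sketch, reusing the classes above):
class Handler(object):
    def __init__(self, handler_type):
        self._handler = handler_type()    # an instance, so handle_stuff is bound

    def handle_stuff(self, *args, **kwargs):
        return self._handler.handle_stuff(*args, **kwargs)

h = Handler(HandlerCase1)
h.handle_stuff()                          # prints "Handling case 1"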
| Encapsulation vs. inheritance, help making a choice | I need to write handlers for several different case types (in Python). The interface for all this types are the same, but the handling logic is different.
One option would be defining a common class that receives the particular handler type as one of the __init__ parameters:
class Handler:
def __init__ (self, handlerType):
self._handlerType = handlerType
self._handler = handlerType.handleStuff
def handleStuff(self, *args, **kwargs):
return self._handler(args, kwargs)
# case specific handlers
class Handler_Case1:
def handleStuff(self, *args, **kwargs):
print 'Handling Case 1'
class Handler_Case2:
def handleStuff(self, *args, **kwargs):
print 'Handling Case 2'
if __name__ == '__main__':
handlers = []
handlers.append(Handler(Handler_Case1))
handlers.append(Handler(Handler_Case2))
for h in handlers:
h.handleStuff()
However, this results in a TypeError:
TypeError: unbound method handleStuff() must be called with Handler_Case1 instance as first argument (got tuple instance instead)
Another option is to mimic abstract function, as shown here("Q: Can you implement abstract classes in Python in 0 lines of code?"):
class Handler:
def handleStuff(self, *args, **kwargs): abstract
def commonFunction(self):
print 'Common function'
# case specific handlers
class Handler_Case1(Handler):
def handleStuff(self, *args, **kwargs):
print 'Handling Case 1'
class Handler_Case2(Handler):
def handleStuff(self, *args, **kwargs):
print 'Handling Case 2'
if __name__ == '__main__':
handlers = []
h1 = (Handler_Case1())
h2 = (Handler_Case2())
handlers.append(h1)
handlers.append(h2)
for h in handlers:
h.handleStuff()
print
So, actually, I have two questions:
Which of the two approaches is more pythonic?
and
How to implement the first one?
| [
"I might be missing some subtle intricacy in your question, but given your first example, what precludes you from doing something like this:\nclass HandlerCase1(object):\n def handle_stuff(self, *args, **kwargs):\n print \"Handling case 1\"\n\n\nclass HandlerCase2(object):\n def handle_stuff(self, *args, **kwargs):\n print \"Handling case 2\"\n\n\nif __name__ == \"__main__\":\n handlers = []\n handlers.append(HandlerCase1())\n handlers.append(HandlerCase2())\n for h in handlers:\n h.handle_stuff()\n\nAnd if you want the classes to share some common (base) functionality, is there something keeping you from doing this:\nclass Handler(object):\n def common_function(self):\n print \"Common function\"\n\n\nclass HandlerCase1(Handler):\n def handle_stuff(self, *args, **kwargs):\n print \"Handling case 1\"\n\n\nclass HandlerCase2(Handler):\n def handle_stuff(self, *args, **kwargs):\n print \"Handling case 2\"\n\n\nif __name__ == \"__main__\":\n handlers = []\n handlers.append(HandlerCase1())\n handlers.append(HandlerCase2())\n for h in handlers:\n h.handle_stuff()\n h.common_function()\n\n"
] | [
4
] | [] | [] | [
"abstract_class",
"design_patterns",
"inheritance",
"python"
] | stackoverflow_0000742376_abstract_class_design_patterns_inheritance_python.txt |
Q:
Code works in global scope but not local scope?
This function should be returning 36 but it returns 0. If I run through the logic line by line in interactive mode I get 36.
Code
from math import *

line = ((2, 5), (4, -1))
point = (6, 11)

def cross(line, point):
    #reference: http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=geometry1
    ab = ac = [None, None]
    ab[0] = line[1][0] - line[0][0]
    ab[1] = line[1][1] - line[0][1]
    print ab
    ac[0] = point[0] - line[0][0]
    ac[1] = point[1] - line[0][1]
    print ac
    step1 = ab[0] * ac[1]
    print step1
    step2 = ab[1] * ac[0]
    print step2
    step3 = step1 - step2
    print step3
    return float(value)

cross(line, point)
Output
[2, -6] # ab
[4, 6] #ac
24 #step 1 (Should be 12)
24 #step 2 (Should be -24)
0 #step 3 (Should be 36)
According to the interactive mode this should be the result of step1, step2, and step3
>>> ab = [2, -6]
>>> ac = [4, 6]
>>> step1 = ab[0] * ac[1]
>>> step1
12
>>> step2 = ab[1] * ac[0]
>>> step2
-24
>>> step3 = step1 - step2
>>> step3
36
(It would be great if someone can give this a good title)
A:
You have ab and ac pointing to the same reference. Change this:
ab = ac = [None, None]
to this:
ab = [None, None]
ac = [None, None]
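You can see the aliasing directly in the interpreter:
>>> ab = ac = [None, None]
>>> ab is ac          # one list object, two names
True
>>> ab[0] = 99
>>> ac
[99, None]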
A:
In the line ab = ac = [None, None], you assign the same list to the variables ab and ac. When you change one, you change the other at the same time.
The reason it works interactively, is that you don't init the lists in the same way.
Swap the first line in your function with this:
ab = [None, None]
ac = [None, None]
| Code works in global scope but not local scope? | This function should be returning 36 but it returns 0. If I run through the logic line by line in interactive mode I get 36.
Code
from math import *
line = ((2, 5), (4, -1))
point = (6, 11)
def cross(line, point):
#reference: http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=geometry1
ab = ac = [None, None]
ab[0] = line[1][0] - line[0][0]
ab[1] = line[1][1] - line[0][1]
print ab
ac[0] = point[0] - line[0][0]
ac[1] = point[1] - line[0][1]
print ac
step1 = ab[0] * ac[1]
print step1
step2 = ab[1] * ac[0]
print step2
step3 = step1 - step2
print step3
return float(value)
cross(line, point)
Output
[2, -6] # ab
[4, 6] #ac
24 #step 1 (Should be 12)
24 #step 2 (Should be -24)
0 #step 3 (Should be 36)
According to the interactive mode this should be the result of step1, step2, and step3
>>> ab = [2, -6]
>>> ac = [4, 6]
>>> step1 = ab[0] * ac[1]
>>> step1
12
>>> step2 = ab[1] * ac[0]
>>> step2
-24
>>> step3 = step1 - step2
>>> step3
36
(It would be great if someone can give this a good title)
| [
"You have ab and ac pointing to the same reference. Change this:\nab = ac = [None, None]\n\nto this:\nab = [None, None]\nac = [None, None]\n\n",
"In the line ab = ac = [None, None], you assign the same list to the variables ab and ac. When you change one, you change the other at the same time.\nThe reason it works interactively, is that you don't init the lists in the same way.\nSwap the first line in your function with this:\nab = [None, None]\nac = [None, None]\n\n"
] | [
5,
1
] | [] | [] | [
"python"
] | stackoverflow_0000742496_python.txt |
Q:
Discrete Event Queuing Simulation
I'm stuck trying to implement a single server queue. I've adapted some pseudocode from Norm Matloff's Simpy tutorial to Python and the code is here. Now I am struggling to find some way to calculate the mean waiting time of a job/customer.
At this point my brain has tied itself into a knot! Any pointers, ideas, tips or pseudocode would be appreciated.
A:
You should know when each customer arrived in the queue. When they arrive at the server you should add one to the number of customers served, as well as accumulate the amount of time they waited. At the end of the simulation you simply divide the accumulated time by the number of customers and you have the mean wait time per job/customer.
The core problem is in accounting for different events and updating statistics based on those events.
Your simulation should initialize all of the structures of your simulation into a reasonable state:
Initialize the queue of customers to no one in it
Initialize any count of served customers to 0
Initialize any accumulated wait times to 0
Initialize the current system time to 0
Etc.
Once all of the system has been initialized you create an event that a customer arrives. This will normally be determined by some given distribution. Generating system events will need to update the statistics of the system. You have a choice at this point of generating all of the job/customer arrival times up front. The service time of each customer is also something that you will generate from a given distribution.
You must then handle each event and update the statistics accordingly. For example, when the first customer arrives the queue has been empty from the time the simulation started to the current time. The average number of customers in the queue is likely a parameter of interest, so you should accumulate 0 * elapsed_seconds into an accumulator. Once the customer arrives at the empty queue you should generate the service time. Either the next customer will arrive before or after the given job finishes. If the next customer arrives before the previous one has been serviced then you add them to the queue (accumulating the fact that no one has been waiting). Depending on which event occurs next you must accumulate the statistics that occur in that time interval. The idle time of the server is also a parameter of interest in such simulations.
To make things more clear, consider the fact that there are 18 people in line and the server has just completed a job for the first customer. The interval between the arrival of the 18th customer and the time the first person's job completes contributes a weighted term to the accumulator: there have been 18 people in line for 4 seconds, for example.
The server has not been idle so you should take an entry off the queue and start processing the next job. The job will take some amount of time usually defined from some distribution. If the next customer arrives before the current job is finished the fact 17 people were in line would be added to your weighted value.
Again at the fundamental level you are accumulating statistics between relevant events in your system:
while (current_time < total_simulation_time)
    handle_next_event
    generate_subsequent_events
    accumulate_statistics
    update_current_time
endwhile

Display "Average wait time: " accumulated_wait_time / number_of_customers_served
Hope that helps it seems a bit longwinded.
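Concretely, here is a minimal Python sketch of the wait-time accumulation for a single-server FIFO queue. The function and variable names are mine, not from the linked tutorial, and it assumes the arrival and service times have already been generated:

def mean_wait(arrivals, services):
    server_free_at = 0.0    # time at which the server next becomes idle
    total_wait = 0.0
    for arrival, service in zip(arrivals, services):
        start = max(arrival, server_free_at)  # wait only if the server is busy
        total_wait += start - arrival         # time this customer spent queuing
        server_free_at = start + service
    return total_wait / len(arrivals)

print mean_wait([0.0, 1.0, 2.0], [2.0, 2.0, 2.0])  # (0 + 1 + 2) / 3 = 1.0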
| Discrete Event Queuing Simulation | I'm stuck trying to implement a single server queue. I've adapted some pseudocode from Norm Matloff's Simpy tutorial to Python and the code is here. Now I am struggling to find some way to calculate the mean waiting time of a job/customer.
At this point my brain has tied itself into a knot! Any pointers, ideas, tips or pseudocode would be appreciated.
| [
"You should know when each customer arrived in the queue. When they arrive at the server you should add one to the number of customers served as well as accumulate the amount of time he waited. At the end of the simulation you simply divide the accumulated time by the number of customers and you have a mean wait time for the job/customer.\nThe core problem is in accounting for different events and updating statistics based on those events.\nYour simulation should initialize all of the structures of your simulation into a reasonable state:\n\nInitialize the queue of customers to no one in it\nInitialize any count of served customers to 0\nInitialize any accumulated wait times to 0\nInitialize the current system time to 0\nEtc.\n\nOnce all the system has been initailized you create an event that a cusotmer arrives. This will normally be determined by some given distribution. Generating system events will need to update the statistics of the system. You have a choice at this point of generating all of the job/customers arrival times. The service time of each customer is also something that you will generate from a given distribution.\nYou must then handle each event and update the statistics accordingly. For example when the first customer arrives the queue has been empty from the time the simulation started to the current time. The average number of customers in the queue is likely a parameter of interest. You should accumulate the 0 * elapsed seconds into an accumulator. Once the customer arrives at the empty queue you should generate the service time. Either the next customer will arrive before or after the given job finishes. If the next cusomter arrives before the previous one has been serviced then you add him into the queue (accumulating the fact no one has been waiting). Depending on what event occurs next you must accumulate the statistics that occur in that time interval. The idle time of the server is also a parameter of interest in such simulations. \nTo make things more clear consider the fact there are 18 people in line and the server has completed a job for the first customer. The interval of between the arrival of the 18th customer and the time the first persons job is complete is a weighted average to be added to an accumulator. There have been 18 people in line for 4 seconds for example. \nThe server has not been idle so you should take an entry off the queue and start processing the next job. The job will take some amount of time usually defined from some distribution. If the next customer arrives before the current job is finished the fact 17 people were in line would be added to your weighted value.\nAgain at the fundamental level you are accumulating statistics between relevant events in your system:\nwhile (current_time < total_simulation_time)\n handle_next_event\n generate_subsequent_events\n accumulate_statistics\n update_current_time\nendwhile\n\nDisplay \"Average wait time: \" accumulated_wait_time / number_of_customers_served\n\nHope that helps it seems a bit longwinded.\n"
] | [
5
] | [] | [] | [
"python",
"queue"
] | stackoverflow_0000742776_python_queue.txt |
Q:
Immutability and thread safety in Python
I'm cleaning some of the Python code I wrote when I was...not as knowledgeable. Primarily I am killing some of the complexity that stemmed from an incomplete understanding of threading in Python. I need to make a list of items thread-safe, and I'd like to do it via immutable lists, instead of the usual locking approach. I know that immutable objects are very special with regard to threading because all the thread-safety issues surrounding incomplete state changes simply disappear.
So, I ask: is the following code thread-safe?
class ImmutableList(object):
    def __init__(self):
        self._list = ()

    def __iter__(self):
        return self._list.__iter__()

    def append(self, x):
        self._list = self._list + tuple([x])
I think it is, because a new list is constructed each time. If the list is updated while another thread is iterating through it, the old list will continue to be used for the remainder of the iteration. This is fine by me, but may not be for everyone.
Also, is this a good idea? I only want to apply this to a few situations where the list size is small, and the lists aren't changed much (event listeners spring to mind).
A:
First of all, appending to a list is already thread-safe in the CPython reference implementation of the Python programming language. In other words, while the language specification doesn't require that the list class be thread-safe, it is anyway. So unless you're using Jython or IronPython or some other Python implementation like that, then you're fine.
Second, you'd also need to overload the other list operations, such as __setitem__ and __setslice__, etc. I'm assuming that your implementation handles this.
Finally, the answer to your question is no: your code isn't thread safe. Consider the following situation:
Your list contains (5, 6)
Thread 1 tries to append 7, and Thread 2 tries to append 8
Thread 1 constructs another tuple (5, 6, 7) and before that can be assigned to _list, there's a context switch
Thread 2 performs its assignment, so the list is now (5, 6, 8)
Thread 1 gets control of the CPU back and assigns to _list, overwriting the previous append. The list is now (5, 6, 7) and the 8 has been lost.
The moral of this story is that you should use locking and avoid cleverness.
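For contrast, a minimal locking sketch (my code, not a drop-in fix for the class above): every mutation happens under a lock, and iteration copies the list under the same lock so readers never see a half-finished update:

import threading

class LockedList(object):
    def __init__(self):
        self._list = []
        self._lock = threading.Lock()

    def append(self, x):
        with self._lock:          # one writer at a time
            self._list.append(x)

    def __iter__(self):
        with self._lock:          # iterate over a private snapshot
            return iter(list(self._list))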
A:
A true immutable list implementation will not allow the underlying list structure to change, as yours does here. As @[Eli Courtwright] pointed out, your implementation is not thread safe. That is because it is not really immutable. To make an immutable implementation, any method that would have changed the list instead returns a new list reflecting the desired change.
With respect to your code example, this would require you to do something like this:
class ImmutableList(object):
    def __init__(self):
        self._list = ()

    def __iter__(self):
        return self._list.__iter__()

    def append(self, x):
        return self._list + tuple([x])
| Immutability and thread safety in Python | I'm cleaning some of the Python code I wrote when I was...not as knowledgeable. Primarily I am killing some of the complexity that stemmed from an incomplete understanding of threading in Python. I need to make a list of items thread-safe, and I'd like to do it via immutable lists, instead of the usual locking approach. I know that immutable objects are very special with regard to threading because all the thread-safety issues surrounding incomplete state changes simply disappear.
So, I ask: is the following code thread-safe?
class ImmutableList(object):
    def __init__(self):
        self._list = ()

    def __iter__(self):
        return self._list.__iter__()

    def append(self, x):
        self._list = self._list + tuple([x])
I think it is, because a new list is constructed each time. If the list is updated while another thread is iterating through it, the old list will continue to be used for the remainder of the iteration. This is fine by me, but may not be for everyone.
Also, is this a good idea? I only want to apply this to a few situations where the list size is small, and the lists aren't changed much (event listeners spring to mind).
| [
"First of all, appending to a list is already thread-safe in the CPython reference implementation of the Python programming language. In other words, while the language specification doesn't require that the list class be thread-safe, it is anyway. So unless you're using Jython or IronPython or some other Python implementation like that, then you're fine.\nSecond, you'd also need to overload the other list operations, such as __setitem__ and __setslice__, etc. I'm assuming that your implementation handles this.\nFinally, the answer to your question is no: your code isn't thread safe. Consider the following situation:\n\nYour list contains (5, 6)\nThread 1 tries to append 7, and Thread 2 tries to append 8\nThread 1 constructs another tuple (5, 6, 7) and before that can be assigned to _list, there's a context switch\nThread 2 performs its assignment, so the list is now (5, 6, 8)\nThread 1 gets control of the CPU back and assigns to _list, overwriting the previous append. The list is now (5, 6, 7) and the 8 has been lost.\n\nThe moral of this story is that you should use locking and avoid cleverness.\n",
"A true immutable list implementation will not allow the underlying list structure to change, like you are here. As @[Eli Courtwright] pointed out, your implementation is not thread safe. That is because it is not really immutable. To make an immutable implementation, any methods that would have changed the list, would instead return a new list reflecting the desired change.\nWith respect to your code example, this would require you to do something like this:\nclass ImmutableList(object):\n def __init__(self):\n self._list = ()\n\n def __iter__(self):\n return self._list.__iter__()\n\n def append(self, x):\n return self._list + tuple([x])\n\n"
] | [
15,
4
] | [] | [] | [
"immutability",
"multithreading",
"python"
] | stackoverflow_0000742882_immutability_multithreading_python.txt |
Q:
Dynamic processes in Python
I have a question concerning Python multiprocessing. I am trying to take a dataset, break into chunks, and pass those chunks to concurrently running processes. I need to transform large tables of data using simple calculations (eg. electrical resistance -> temperature for a thermistor).
The code listed below almost works as desired, but it doesn't seem to be spawning any new processes (or if so only one at a time).
import datetime, math
from multiprocessing import Process

class Worker(Process):
    # example data transform
    def process(self, x): return (x * 2) / 3

    def __init__(self, list):
        self.data = list
        self.result = map(self.process, self.data)
        super(Worker, self).__init__()

if __name__ == '__main__':
    start = datetime.datetime.now()
    dataset = range(10000) # null dataset
    processes = 3

    for i in range(processes):
        chunk = int(math.floor(len(dataset) / float(processes)))
        if i + 1 == processes:
            remainder = len(dataset) % processes
        else: remainder = 0

        tmp = dataset[i * chunk : (i + 1) * chunk + remainder]
        exec('worker'+str(i)+' = Worker(tmp)')
        exec('worker'+str(i)+'.start()')

    for i in range(processes):
        exec('worker'+str(i)+'.join()')
        # just a placeholder to make sure the initial values of the set are as expected
        exec('print worker'+str(i)+'.result[0]')
A:
You haven't overridden the run method. There are two ways with processes (or threads) to have it execute code:
Create a process specifying target
Subclass the process, overriding the run method.
Overriding __init__ just means your process is all dressed up with nowhere to go. It should be used to give it attributes that it needs to perform what it needs to perform, but it shouldn't specify the task to be performed.
In your code, all the heavy lifting is done in this line:
exec('worker'+str(i)+' = Worker(tmp)')
and nothing is done here:
exec('worker'+str(i)+'.start()')
So checking the results with exec('print worker'+str(i)+'.result[0]') should give you something meaningful, but only because the code you want to be executed has been executed, but on process construction, not on process start.
Try this:
class Worker(Process):
    # example data transform
    def process(self, x): return (x * 2) / 3

    def __init__(self, list):
        self.data = list
        self.result = []
        super(Worker, self).__init__()

    def run(self):
        self.result = map(self.process, self.data)
EDIT:
Okay... so I was just flying based on my threading instincts here, and they were all wrong. What we both didn't understand about processes is that you can't directly share variables. Whatever you pass to a new process to start is read, copied, and gone forever. Unless you use one of the two standard ways to share data: queues and pipes. I've played around a little bit trying to get your code to work, but so far no luck. I think that will put you on the right track.
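As an aside, if the goal is simply "split a dataset across N processes and collect the results", multiprocessing.Pool does the chunking, dispatching and reassembly for you. A minimal sketch, reusing the example transform from the question:

from multiprocessing import Pool

def transform(x):
    # example data transform (module-level so it can be pickled)
    return (x * 2) / 3

if __name__ == '__main__':
    dataset = range(10000)
    pool = Pool(processes=3)
    results = pool.map(transform, dataset)  # chunks, dispatches, reassembles
    pool.close()
    pool.join()
    print results[0], len(results)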
A:
No need to send the number of chunks to each process, just use get_nowait() and handle the eventual Queue.Empty exception. Every process will get different amounts of CPU time and this should keep them all busy.
import multiprocessing, Queue

class Worker(multiprocessing.Process):
    def process(self, x):
        for i in range(15):
            x += (float(i) / 2.6)
        return x

    def __init__(self, input, output):
        self.input = input
        self.output = output
        super(Worker, self).__init__()

    def run(self):
        try:
            while True:
                self.output.put(self.process(self.input.get_nowait()))
        except Queue.Empty:
            pass


if __name__ == '__main__':
    dataset = range(10)
    processes = multiprocessing.cpu_count()
    input = multiprocessing.Queue()
    output = multiprocessing.Queue()

    for obj in dataset:
        input.put(obj)
    for i in range(processes):
        Worker(input, output).start()

    for i in range(len(dataset)):
        print output.get()
A:
Ok, so it looks like the list was not thread safe, and I have moved to using a Queue (although it appears to be much slower). This code essentially accomplishes what I was trying to do:
import math, multiprocessing

class Worker(multiprocessing.Process):
    def process(self, x):
        for i in range(15):
            x += (float(i) / 2.6)
        return x

    def __init__(self, input, output, chunksize):
        self.input = input
        self.output = output
        self.chunksize = chunksize
        super(Worker, self).__init__()

    def run(self):
        for x in range(self.chunksize):
            self.output.put(self.process(self.input.get()))


if __name__ == '__main__':
    dataset = range(10)
    processes = multiprocessing.cpu_count()
    input = multiprocessing.Queue()
    output = multiprocessing.Queue()

    for obj in dataset:
        input.put(obj)

    for i in range(processes):
        chunk = int(math.floor(len(dataset) / float(processes)))
        if i + 1 == processes:
            remainder = len(dataset) % processes
        else: remainder = 0

        Worker(input, output, chunk + remainder).start()

    for i in range(len(dataset)):
        print output.get()
| Dynamic processes in Python | I have a question concerning Python multiprocessing. I am trying to take a dataset, break into chunks, and pass those chunks to concurrently running processes. I need to transform large tables of data using simple calculations (eg. electrical resistance -> temperature for a thermistor).
The code listed below almost works as desired, but it doesn't seem to be spawning any new processes (or if so only one at a time).
import datetime, math
from multiprocessing import Process

class Worker(Process):
    # example data transform
    def process(self, x): return (x * 2) / 3

    def __init__(self, list):
        self.data = list
        self.result = map(self.process, self.data)
        super(Worker, self).__init__()

if __name__ == '__main__':
    start = datetime.datetime.now()
    dataset = range(10000) # null dataset
    processes = 3

    for i in range(processes):
        chunk = int(math.floor(len(dataset) / float(processes)))
        if i + 1 == processes:
            remainder = len(dataset) % processes
        else: remainder = 0

        tmp = dataset[i * chunk : (i + 1) * chunk + remainder]
        exec('worker'+str(i)+' = Worker(tmp)')
        exec('worker'+str(i)+'.start()')

    for i in range(processes):
        exec('worker'+str(i)+'.join()')
        # just a placeholder to make sure the initial values of the set are as expected
        exec('print worker'+str(i)+'.result[0]')
| [
"You haven't overridden the run method. There are two ways with processes (or threads) to have it execute code:\n\nCreate a process specifying target\nSubclass the process, overriding the run method.\n\nOverriding __init__ just means your process is all dressed up with nowhere to go. It should be used to give it attributes that it needs to perform what it needs to perform, but it shouldn't specify the task to be performed.\nIn your code, all the heavy lifting is done in this line:\nexec('worker'+str(i)+' = Worker(tmp)')\n\nand nothing is done here:\nexec('worker'+str(i)+'.start()')\n\nSo checking the results with exec('print worker'+str(i)+'.result[0]') should give you something meaningful, but only because the code you want to be executed has been executed, but on process construction, not on process start.\nTry this:\nclass Worker(Process):\n # example data transform\n def process(self, x): return (x * 2) / 3\n\n def __init__(self, list):\n self.data = list\n self.result = []\n super(Worker, self).__init__()\n\n def run(self):\n self.result = map(self.process, self.data)\n\nEDIT:\nOkay... so I was just flying based on my threading instincts here, and they were all wrong. What we both didn't understand about processes is that you can't directly share variables. Whatever you pass to a new process to start is read, copied, and gone forever. Unless you use one of the two standard ways to share data: queues and pipes. I've played around a little bit trying to get your code to work, but so far no luck. I think that will put you on the right track.\n",
"No need to send the number of chunks to each process, just use get_nowait() and handle the eventual Queue.Empty exception. Every process will get different amounts of CPU time and this should keep them all busy.\nimport multiprocessing, Queue\n\nclass Worker(multiprocessing.Process):\n def process(self, x): \n for i in range(15):\n x += (float(i) / 2.6)\n return x\n\n def __init__(self, input, output):\n self.input = input\n self.output = output\n super(Worker, self).__init__()\n\n def run(self):\n try:\n while True:\n self.output.put(self.process(self.input.get_nowait()))\n except Queue.Empty:\n pass\n\n\nif name == 'main':\n dataset = range(10)\n processes = multiprocessing.cpu_count()\n input = multiprocessing.Queue()\n output = multiprocessing.Queue()\n\n for obj in dataset:\n input.put(obj)\n for i in range(processes):\n Worker(input, output).start()\n\n for i in range(len(dataset)):\n print output.get()\n\n",
"Ok, so it looks like the list was not thread safe, and I have moved to using a Queue (although it appears to be much slower). This code essentially accomplishes what I was trying to do:\nimport math, multiprocessing\n\nclass Worker(multiprocessing.Process):\n def process(self, x): \n for i in range(15):\n x += (float(i) / 2.6)\n return x\n\n def __init__(self, input, output, chunksize):\n self.input = input\n self.output = output\n self.chunksize = chunksize\n super(Worker, self).__init__()\n\n def run(self):\n for x in range(self.chunksize):\n self.output.put(self.process(self.input.get()))\n\n\nif __name__ == '__main__':\n dataset = range(10)\n processes = multiprocessing.cpu_count()\n input = multiprocessing.Queue()\n output = multiprocessing.Queue()\n\n for obj in dataset:\n input.put(obj)\n\n for i in range(processes):\n chunk = int(math.floor(len(dataset) / float(processes)))\n if i + 1 == processes:\n remainder = len(dataset) % processes\n else: remainder = 0\n\n Worker(input, output, chunk + remainder).start()\n\n for i in range(len(dataset)):\n print output.get()\n\n"
] | [
1,
1,
0
] | [] | [] | [
"multiprocessing",
"multithreading",
"python"
] | stackoverflow_0000740717_multiprocessing_multithreading_python.txt |
Q:
how to convert string representation bytes back to bytes?
I am using SUDS to talk with a web service written in C#. The service receives a URL, crawls its web page, then returns its content as byte[].
its type in SOAP is:
<s:element minOccurs="0" maxOccurs="1" name="rawByte" type="s:base64Binary" />
sample client codes:
>>> from suds.client import Client
>>> url = "http://WSServer/Service1.asmx?wsdl"
>>> client = Client(url)
>>> page = client.service.GetURLContent("http://www.google.co.uk")
>>> print page
(CrawlResult){
crawStatus = "SUCC"
rawByte = "PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj0iY29udGVudC10eXBlIiBjb2 ... "
The problem is how to convert rawByte from a string back to bytes, then interpret it as text with an encoding (like "ascii").
I am not clear on how to do that.
Thanks,
Daniel.
A:
As the SOAP element says, the bytes are base64-encoded.
To decode, use the Python base64 module.
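For example, a minimal sketch against the response object from the question (the printed bytes are just the decoded start of the sample payload):

>>> import base64
>>> raw = base64.b64decode(page.rawByte)   # bytes of the crawled page
>>> print raw[:40]
<html><head><meta http-equiv="content-ty
>>> text = raw.decode('ascii')             # or whatever charset the page really uses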
A:
I need to convert it back to string with the binascii module, like:
>>> btxt = binascii.a2b_base64(page.rawByte)
then btxt can be treated as a normal string.
| how to convert string representation bytes back to bytes? | I am using SUDS to talk with a web service written by C#. The service recieves a url, crawls its web page, then return its content as byte[].
its type in SOAP is:
<s:element minOccurs="0" maxOccurs="1" name="rawByte" type="s:base64Binary" />
sample client codes:
>>> from suds.client import Client
>>> url = "http://WSServer/Service1.asmx?wsdl"
>>> client = Client(url)
>>> page = client.service.GetURLContent("http://www.google.co.uk")
>>> print page
(CrawlResult){
crawStatus = "SUCC"
rawByte = "PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj0iY29udGVudC10eXBlIiBjb2 ... "
The problem is how to convert rawByte from a string back to bytes, then interpret it as text with an encoding (like "ascii").
I am not clear on how to do that.
Thanks,
Daniel.
| [
"As the SOAP element says, the bytes are base64-encoded.\nTo decode, use the python module \nbase64.\n",
"I need to convert it back to string with the binascii module, like:\n>>> btxt = binascii.a2b_base64(page.rawByte)\n\nthen btxt can be treated as a normal string.\n"
] | [
3,
2
] | [] | [] | [
"python",
"web_services"
] | stackoverflow_0000743374_python_web_services.txt |
Q:
Something wrong with output from list in Python
I want a Python program to import a list of words from a text file and print out the content of the text file as two lists. The data in the text file is in this form:
A Alfa
B Betta
C Charlie
I want a Python program to print out one list with A,B,C and one with Alfa, Betta, Charlie.
This is what I've written:
english2german = open('english2german.txt', 'r')
englist = []
gerlist = []

for i, line in enumerate(english2german):
    englist[i:], gerlist[i:] = line.split()
This is making two lists, but will only print out the first letter in each word.
How can I make my code print out the whole word?
A:
You want something like this:
english2german = open("english2german.txt")
englist = []
gerlist = []

for line in english2german:
    (e, g) = line.split()
    englist.append(e)
    gerlist.append(g)
The problem with your code before is that englist[i:] is actually a slice of a list, not just a single index. A string is also iterable, so you were basically stuffing a single letter into several indices. In other words, something like gerlist[0:] = "alfa" actually results in gerlist = ['a', 'l', 'f', 'a'].
A:
And even shorter than amo-ej1's answer, and likely faster:
In [1]: english2german = open('english2german.txt')
In [2]: eng, ger = zip(*( line.split() for line in english2german ))
In [3]: eng
Out[3]: ('A', 'B', 'C')
In [4]: ger
Out[4]: ('Alfa', 'Betta', 'Charlie')
If you're using Python 3.0 or from future_builtins import zip, this is memory-efficient too. Otherwise replace zip with izip from itertools if english2german is very long.
A:
just an addition: you're working with files.
please close them :) or use the with construct:
with open('english2german.txt') as english2german:
    englist, gerlist = zip(*(line.split() for line in english2german))
A:
Like this you mean:
english2german = open('k.txt', 'r')
englist = []
gerlist = []

for i, line in enumerate(english2german):
    englist.append(line.split()[0])
    gerlist.append(line.split()[1])

print englist
print gerlist
which generates:
['A', 'B', 'C']
['Alfa', 'Betta', 'Charlie']
A:
The solutions already posted are OK if you have no spaces in any of the words (i.e. each line has a single space). If I understand correctly, you are trying to build a dictionary, so I would suggest you consider the fact that you can also have definitions of multiple-word expressions. In that case, you'd better use some other character instead of a space to separate the definition from the word. Something like "|", which cannot appear in a word.
Then, you do something like this:
for line in english2german:
    (e, g) = line.split("|")
    englist.append(e)
    gerlist.append(g)
A:
Slightly meta-answer(?) to Autoplectic's suggestion of using zip()
With 3 lines in the input file (from the supplied data in the question):
The zip() method takes an average of 0.404729390144 seconds, compared to 0.341339087486 with the simple for loop constructing two lists (the code from mipadi's currently accepted answer).
With 10,000 lines in the input file (random generated 3-12 character words. I reduced the timeit.repeat() values to 100 times, repeated twice):
zip() took an average of 1.43965339661 seconds, compared to 1.52318406105 with the for loop.
Both benchmarks were done using Python version 2.5.1
Hardly a huge difference.. Given how much more readable the simple for loop is, I would recommend using it.. The zip code might be a bit quicker with large files, but the difference is about 0.083 seconds with 10,000 lines..
Benchmarking code:
import timeit

# https://stackoverflow.com/questions/743248/something-wrong-with-output-from-list-in-python/743313#743313
code_zip = """english2german = open('english2german.txt')
eng, ger = zip(*( line.split() for line in english2german ))
"""

# https://stackoverflow.com/questions/743248/something-wrong-with-output-from-list-in-python/743268#743268
code_for = """english2german = open("english2german.txt")
englist = []
gerlist = []

for line in english2german:
    (e, g) = line.split()
    englist.append(e)
    gerlist.append(g)
"""

for code in [code_zip, code_for]:
    t = timeit.Timer(stmt = code)
    try:
        times = t.repeat(10, 10000)
    except:
        t.print_exc()
    else:
        print "Code:"
        print code
        print "Time:"
        print times
        print "Average:"
        print sum(times) / len(times)
        print "-" * 20
| Something wrong with output from list in Python | I want a Python program to import a list of words from a text file and print out the content of the text file as two lists. The data in the text file is in this form:
A Alfa
B Betta
C Charlie
I want a Python program to print out one list with A,B,C and one with Alfa, Betta, Charlie.
This is what I've written:
english2german = open('english2german.txt', 'r')
englist = []
gerlist = []

for i, line in enumerate(english2german):
    englist[i:], gerlist[i:] = line.split()
This is making two lists, but will only print out the first letter in each word.
How can I make my code print out the whole word?
| [
"You want something like this:\nenglish2german = open(\"english2german.txt\")\nenglist = []\ngerlist = []\n\nfor line in english2german:\n (e, g) = line.split()\n englist.append(e)\n gerlist.append(g)\n\nThe problem with your code before is that englist[i:] is actually a slice of a list, not just a single index. A string is also iterable, so you were basically stuffing a single letter into several indices. In other words, something like gerlist[0:] = \"alfa\" actually results in gerlist = ['a', 'l', 'f', 'a'].\n",
"And even shorter than amo-ej1's answer, and likely faster:\nIn [1]: english2german = open('english2german.txt')\nIn [2]: eng, ger = zip(*( line.split() for line in english2german ))\nIn [3]: eng\nOut[3]: ('A', 'B', 'C')\nIn [4]: ger\nOut[4]: ('Alfa', 'Betta', 'Charlie')\n\nIf you're using Python 3.0 or from future_builtins import zip, this is memory-efficient too. Otherwise replace zip with izip from itertools if english2german is very long.\n",
"just an addition: you're working with files.\nplease close them :) or use the with construct:\nwith open('english2german.txt') as english2german:\n englist, gerlist = zip(*(line.split() for line in english2german))\n\n",
"Like this you mean:\nenglish2german = open('k.txt', 'r')\nenglist = []\ngerlist = []\n\nfor i, line in enumerate(english2german):\n englist.append(line.split()[0])\n gerlist.append(line.split()[1])\n\nprint englist\nprint gerlist\n\nwhich generates:\n['A', 'B', 'C']\n['Alfa', 'Betta', 'Charlie']\n",
"The solutions already posted are OK if you have no spaces in any of the words (ie each line has a single space). If I understand correctly, you are trying to build a dictionary, so I would suggest you consider the fact that you can also have definitions of multiple word expressions. In that case, you'd better use some other character instead of a space to separate the definition from the word. Something like \"|\", which is impossible to appear in a word.\nThen, you do something like this:\nfor line in english2german:\n (e, g) = line.split(\"|\")\n englist.append(e)\n gerlist.append(g)\n\n",
"Slightly meta-answer(?) to Autoplectic's suggestion of using zip()\nWith 3 lines in the input file (from the supplied data in the question):\nThe zip() method takes an average of 0.404729390144 seconds, compared to 0.341339087486 with the simple for loop constructing two lists (the code from mipadi's currently accepted answer).\nWith 10,000 lines in the input file (random generated 3-12 character words. I reduced the timeit.repeat() values to 100 times, repeated twice):\nzip() took an average of 1.43965339661 seconds, compared to 1.52318406105 with the for loop.\nBoth benchmarks were done using Python version 2.5.1\nHardly a huge difference.. Given how much more readable the simple for loop is, I would recommend using it.. The zip code might be a bit quicker with large files, but the difference is about 0.083 seconds with 10,000 lines..\nBenchmarking code:\nimport timeit\n\n# https://stackoverflow.com/questions/743248/something-wrong-with-output-from-list-in-python/743313#743313\ncode_zip = \"\"\"english2german = open('english2german.txt')\neng, ger = zip(*( line.split() for line in english2german ))\n\"\"\"\n\n# https://stackoverflow.com/questions/743248/something-wrong-with-output-from-list-in-python/743268#743268\ncode_for = \"\"\"english2german = open(\"english2german.txt\")\nenglist = []\ngerlist = []\n\nfor line in english2german:\n (e, g) = line.split()\n englist.append(e)\n gerlist.append(g)\n\"\"\"\n\nfor code in [code_zip, code_for]:\n t = timeit.Timer(stmt = code)\n try:\n times = t.repeat(10, 10000)\n except:\n t.print_exc()\n else:\n print \"Code:\"\n print code\n print \"Time:\"\n print times\n print \"Average:\"\n print sum(times) / len(times)\n print \"-\" * 20\n\n"
] | [
6,
6,
3,
1,
1,
1
] | [] | [] | [
"list",
"python",
"text"
] | stackoverflow_0000743248_list_python_text.txt |
Q:
setattr with kwargs, pythonic or not?
I'm using __init__() like this in some SQLAlchemy ORM classes that have many parameters (up to 20).
def __init__(self, **kwargs):
    for k, v in kwargs.iteritems():
        setattr(self, k, v)
Is it "pythonic" to set attributes like this?
A:
Yes. Another way to do this is.
def __init__(self, **kwargs):
    self.__dict__.update( kwargs )
A:
Yes, if there's not a "nicer" way of supplying the arguments.
For example, using your ORM classes you mention, perhaps it would be more Python'y to allow..
col = Varchar()
col.index = True
col.length = 255
..rather than..
col = Varchar(index = True, length = 255)
Okay that's not the best example, since the **kwargs method would actually be nicer.. but my point is you should always consider alternative methods of achieving something, before using sometimes-discouraged things like **kwargs..
Another thing to keep in mind is you might lose behaviour a user expects, such as raising a TypeError if the user supplies an invalid keyword arg, which could be worked around like..
def __init__(self, **kwargs):
    valid_kwargs = ['x', 'y', 'z']
    for k, v in kwargs.iteritems():
        if k not in valid_kwargs:
            raise TypeError("Invalid keyword argument %s" % k)
        setattr(self, k, v)
A final thing to consider:
class Hmm:
    def __init__(self, **kwargs):
        for k, v in kwargs.iteritems():
            setattr(self, k, v)

    def mymethod(self):
        print "mymethod should print this message.."

x = Hmm(mymethod = None)
x.mymethod() # raises TypeError: 'NoneType' object is not callable
A:
To me it seems pretty pythonic if you only need this in one place in your code.
The following link provides a more 'generic' approach to the same problem (e.g. with a decorator and some extra functionality), have a look at: http://code.activestate.com/recipes/551763/
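The recipe boils down to something like this minimal sketch (the decorator name is mine, not the recipe's):

def autoassign(init):
    """Decorator: copy __init__ keyword arguments onto the instance."""
    def wrapped(self, **kwargs):
        self.__dict__.update(kwargs)
        init(self, **kwargs)
    return wrapped

class Column(object):
    @autoassign
    def __init__(self, **kwargs):
        pass

col = Column(index=True, length=255)
print col.index, col.length   # True 255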
| setattr with kwargs, pythonic or not? | I'm using __init__() like this in some SQLAlchemy ORM classes that have many parameters (up to 20).
def __init__(self, **kwargs):
    for k, v in kwargs.iteritems():
        setattr(self, k, v)
Is it "pythonic" to set attributes like this?
| [
"Yes. Another way to do this is.\ndef __init__(self, **kwargs):\n self.__dict__.update( kwargs )\n\n",
"Yes, if there's not a \"nicer\" way of supplying the arguments.\nFor example, using your ORM classes you mention, perhaps it would be more Python'y to allow..\ncol = Varchar()\ncol.index = True\ncol.length = 255\n\n..rather than..\ncol = Varchar(index = True, length = 255)\n\nOkay that's not the best example, since the **kwargs method would actually be nicer.. but my point is you should always consider alternative methods of achieving something, before using sometimes-discouraged things like **kwargs..\nAnother thing to keep in mind is you might lose behaviour a user expects, such as raising a TypeError if the user supplies an invalid keyword arg, which could be worked around like..\ndef __init__(self, **kwargs):\n valid_kwargs = ['x', 'y', 'z']\n for k, v in kwargs.iteritems():\n if k not in valid_kwargs:\n raise TypeError(\"Invalid keyword argument %s\" % k)\n setattr(self, k, v)\n\nA final thing to consider:\nclass Hmm:\n def __init__(self, **kwargs):\n for k, v in kwargs.iteritems():\n setattr(self, k, v)\n def mymethod(self):\n print \"mymethod should print this message..\"\n\nx = Hmm(mymethod = None)\nx.mymethod() # raises TypeError: 'NoneType' object is not callable\n\n",
"To me it seems pretty pythonic if you only need this in one place in your code. \nThe following link provides a more 'generic' approach to the same problem (e.g. with a decorator and some extra functionality), have a look at: http://code.activestate.com/recipes/551763/\n"
] | [
29,
9,
1
] | [] | [] | [
"initialization",
"python"
] | stackoverflow_0000739625_initialization_python.txt |
Q:
Django RSS Feed Wrong Domain
I have an RSS feed that I'm setting up on my new site using Django. Currently I have an RSS feed being served per user, rather than just one big nasty, global RSS feed. The only problem is that the links that are returned by the RSS feed have the completely wrong domain name in the links. The end path is perfectly correct, and the get_absolute_url method seems to work for everything else in my applications, just not here. You would think I'd be getting the default "www.example.com/item/item_id", but instead I get another domain that's hosted on this server. At first I was thinking it was just pulling the hostname of the server, but it's not. It's also not pulling what the SITE_ID is set to either. Django docs say that the feeds will pull the domain from the SITE_ID setting, but it's just not doing so. I've grepped my entire application for the domain it's pulling, and found absolutely nothing.
I'm sure I'm missing something simple, but for the life of me I can't deduce it. The domain it's building the URLs with simply doesn't exist anywhere in the application's code or database. So where on Earth is it coming up with the domain?
UPDATE:
ServerName in Apache was set to the domain that I was seeing being used by the RSS feeds to build the URLs. I changed that and restarted Apache, but the wrong domain is still in use. Any other ideas on how to force Django to use the right domain?
A:
Maybe it's coming from environment variables? Try:
export | grep your.mystery.domain
see if that comes up with anything, do that as the same user under which you are running your Django apps.
Also, you can always implement your own item_link() method to return exactly the URL you want; see the documentation here
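A minimal sketch of that workaround — the feed class, model name and domain here are illustrative, not from your app:

from django.contrib.syndication.feeds import Feed

class UserItemsFeed(Feed):
    title = "Per-user items"            # illustrative metadata
    link = "/items/"

    def items(self):
        return Item.objects.all()[:10]  # hypothetical model

    def item_link(self, item):
        # Build the absolute URL by hand instead of letting the
        # framework prepend the Sites-derived domain.
        return "http://www.example.com%s" % item.get_absolute_url()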
| Django RSS Feed Wrong Domain | I have an RSS feed that I'm setting up on my new site using Django. Currently I have an RSS feed being served per user, rather than just one big nasty, global RSS feed. The only problem is that the links that are returned by the RSS feed have the completely wrong domain name in the links. The end path is perfectly correct, and the get_absolute_url method seems to work for everything else in my applications, just not here. You would think I'd be getting the default "www.example.com/item/item_id", but instead I get another domain that's hosted on this server. At first I was thinking it was just pulling the hostname of the server, but it's not. It's also not pulling what the SITE_ID is set to either. Django docs say that the feeds will pull the domain from the SITE_ID setting, but it's just not doing so. I've grepped my entire application for the domain it's pulling, and found absolutely nothing.
I'm sure I'm missing something simple, but for the life of me I can't deduce it. The domain it's building the URLs with simply doesn't exist anywhere in the application's code or database. So where on Earth is it coming up with the domain?
UPDATE:
ServerName in Apache was set to the domain that I was seeing being used by the RSS feeds to build the URLs. I changed that and restarted Apache, but the wrong domain is still in use. Any other ideas on how to force Django to use the right domain?
| [
"May be it's coming from environment variables? Try:\nexport | grep your.mistery.domain\n\nsee if that comes up with anything, do that as the same user under which you are running your Django apps.\nYou know you can always implement your item_link() method which would return the URL that you want, see documentation here\n"
] | [
3
] | [] | [] | [
"django",
"python",
"rss"
] | stackoverflow_0000742974_django_python_rss.txt |
Q:
Reading Huge File in Python
I have a 384MB text file with 50 million lines. Each line contains 2 space-separated integers: a key and a value. The file is sorted by key. I need an efficient way of looking up the values of a list of about 200 keys in Python.
My current approach is included below. It takes 30 seconds. There must be more efficient Python foo to get this down to a reasonable efficiency of a couple of seconds at most.
# list contains a sorted list of the keys we need to lookup
# there is a sentinel at the end of list to simplify the code
# we use pointer to iterate through the list of keys
for line in fin:
    line = map(int, line.split())

    while line[0] == list[pointer].key:
        list[pointer].value = line[1]
        pointer += 1

    while line[0] > list[pointer].key:
        pointer += 1
        if pointer >= len(list) - 1:
            break # end of list; -1 is due to sentinel
Coded binary search + seek solution (thanks kigurai!):
entries = 24935502 # number of entries
width = 18 # fixed width of an entry in the file padded with spaces
# at the end of each line
for i, search in enumerate(list): # list contains the list of search keys
    left, right = 0, entries-1
    key = None
    while key != search and left <= right:
        mid = (left + right) / 2
        fin.seek(mid * width)
        key, value = map(int, fin.readline().split())
        if search > key:
            left = mid + 1
        else:
            right = mid - 1
    if key != search:
        value = None # for when search key is not found
    search.result = value # store the result of the search
A:
If you only need 200 of 50 million lines, then reading all of it into memory is a waste. I would sort the list of search keys and then apply binary search to the file using seek() or something similar. This way you would not read the entire file to memory which I think should speed things up.
A:
Slight optimization of S.Lott's answer:
from collections import defaultdict
keyValues= defaultdict(list)
targetKeys= # some list of keys as strings
for line in fin:
    key, value = line.split()
    if key in targetKeys:
        keyValues[key].append( value )
Since we're using a dictionary rather than a list, the keys don't have to be numbers. This saves the map() operation and a string-to-integer conversion for each line. If you want the keys to be numbers, do the conversion at the end, when you only have to do it once for each key, rather than for each of 50 million lines.
A:
It's not clear what "list[pointer]" is all about. Consider this, however.
from collections import defaultdict
keyValues= defaultdict(list)
targetKeys= # some list of keys
for line in fin:
    key, value = map( int, line.split())
    if key in targetKeys:
        keyValues[key].append( value )
A:
I would use memory-mapping: http://docs.python.org/library/mmap.html.
This way you can use the file as if it's stored in memory, but the OS decides which pages should actually be read from the file.
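A minimal read-only sketch (the file name is illustrative; the lookup itself would still be the binary search from the other answers):

import mmap

f = open('sorted_pairs.txt', 'rb')
m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)  # map the whole file

m.seek(1000)           # mmap objects support seek()/readline() like files,
line = m.readline()    # but pages are only faulted in from disk on demand

m.close()
f.close()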
A:
Here is a recursive binary search on the text file
import os, stat
class IntegerKeyTextFile(object):
    def __init__(self, filename):
        self.filename = filename
        self.f = open(self.filename, 'r')
        self.getStatinfo()

    def getStatinfo(self):
        self.statinfo = os.stat(self.filename)
        self.size = self.statinfo[stat.ST_SIZE]

    def parse(self, line):
        key, value = line.split()
        k = int(key)
        v = int(value)
        return (k,v)

    def __getitem__(self, key):
        return self.findKey(key)

    def findKey(self, keyToFind, startpoint=0, endpoint=None):
        "Recursively search a text file"

        if endpoint is None:
            endpoint = self.size

        currentpoint = (startpoint + endpoint) // 2

        while True:
            self.f.seek(currentpoint)
            if currentpoint <> 0:
                # may not start at a line break! Discard.
                baddata = self.f.readline()

            linestart = self.f.tell()
            keyatpoint = self.f.readline()

            if not keyatpoint:
                # read returned empty - end of file
                raise KeyError('key %d not found'%(keyToFind,))

            k,v = self.parse(keyatpoint)

            if k == keyToFind:
                print 'key found at ', linestart, ' with value ', v
                return v

            if endpoint == startpoint:
                raise KeyError('key %d not found'%(keyToFind,))

            if k > keyToFind:
                return self.findKey(keyToFind, startpoint, currentpoint)
            else:
                return self.findKey(keyToFind, currentpoint, endpoint)
A sample text file created in jEdit seems to work:
>>> i = integertext.IntegerKeyTextFile('c:\\sampledata.txt')
>>> i[1]
key found at 0 with value 345
345
It could definitely be improved by caching found keys and using the cache to determine future starting seek points.
A:
If you have any control over the format of the file, the "sort and binary search" responses are correct. The detail is that this only works with records of a fixed size and offset (well, I should say it only works easily with fixed length records).
With fixed length records, you can easily seek() around the sorted file to find your keys.
A:
One possible optimization is to do a bit of buffering using the sizehint option in file.readlines(..). This allows you to load multiple lines in memory totaling to approximately sizehint bytes.
A:
You need to implement binary search using seek()
| Reading Huge File in Python | I have a 384MB text file with 50 million lines. Each line contains 2 space-separated integers: a key and a value. The file is sorted by key. I need an efficient way of looking up the values of a list of about 200 keys in Python.
My current approach is included below. It takes 30 seconds. There must be more efficient Python foo to get this down to a reasonable efficiency of a couple of seconds at most.
# list contains a sorted list of the keys we need to lookup
# there is a sentinel at the end of list to simplify the code
# we use pointer to iterate through the list of keys
for line in fin:
    line = map(int, line.split())

    while line[0] == list[pointer].key:
        list[pointer].value = line[1]
        pointer += 1

    while line[0] > list[pointer].key:
        pointer += 1
        if pointer >= len(list) - 1:
            break # end of list; -1 is due to sentinel
Coded binary search + seek solution (thanks kigurai!):
entries = 24935502 # number of entries
width = 18 # fixed width of an entry in the file padded with spaces
# at the end of each line
for i, search in enumerate(list): # list contains the list of search keys
    left, right = 0, entries-1
    key = None
    while key != search and left <= right:
        mid = (left + right) / 2
        fin.seek(mid * width)
        key, value = map(int, fin.readline().split())
        if search > key:
            left = mid + 1
        else:
            right = mid - 1
    if key != search:
        value = None # for when search key is not found
    search.result = value # store the result of the search
| [
"If you only need 200 of 50 million lines, then reading all of it into memory is a waste. I would sort the list of search keys and then apply binary search to the file using seek() or something similar. This way you would not read the entire file to memory which I think should speed things up.\n",
"Slight optimization of S.Lotts answer:\nfrom collections import defaultdict\nkeyValues= defaultdict(list)\ntargetKeys= # some list of keys as strings\nfor line in fin:\n key, value = line.split()\n if key in targetKeys:\n keyValues[key].append( value )\n\nSince we're using a dictionary rather than a list, the keys don't have to be numbers. This saves the map() operation and a string to integer conversion for each line. If you want the keys to be numbers, do the conversion a the end, when you only have to do it once for each key, rather than for each of 50 million lines.\n",
"It's not clear what \"list[pointer]\" is all about. Consider this, however.\nfrom collections import defaultdict\nkeyValues= defaultdict(list)\ntargetKeys= # some list of keys\nfor line in fin:\n key, value = map( int, line.split())\n if key in targetKeys:\n keyValues[key].append( value )\n\n",
"I would use memory-maping: http://docs.python.org/library/mmap.html.\nThis way you can use the file as if it's stored in memory, but the OS decides which pages should actually be read from the file.\n",
"Here is a recursive binary search on the text file\nimport os, stat\n\nclass IntegerKeyTextFile(object):\n def __init__(self, filename):\n self.filename = filename\n self.f = open(self.filename, 'r')\n self.getStatinfo()\n\n def getStatinfo(self):\n self.statinfo = os.stat(self.filename)\n self.size = self.statinfo[stat.ST_SIZE]\n\n def parse(self, line):\n key, value = line.split()\n k = int(key)\n v = int(value)\n return (k,v)\n\n def __getitem__(self, key):\n return self.findKey(key)\n\n def findKey(self, keyToFind, startpoint=0, endpoint=None):\n \"Recursively search a text file\"\n\n if endpoint is None:\n endpoint = self.size\n\n currentpoint = (startpoint + endpoint) // 2\n\n while True:\n self.f.seek(currentpoint)\n if currentpoint <> 0:\n # may not start at a line break! Discard.\n baddata = self.f.readline() \n\n linestart = self.f.tell()\n keyatpoint = self.f.readline()\n\n if not keyatpoint:\n # read returned empty - end of file\n raise KeyError('key %d not found'%(keyToFind,))\n\n k,v = self.parse(keyatpoint)\n\n if k == keyToFind:\n print 'key found at ', linestart, ' with value ', v\n return v\n\n if endpoint == startpoint:\n raise KeyError('key %d not found'%(keyToFind,))\n\n if k > keyToFind:\n return self.findKey(keyToFind, startpoint, currentpoint)\n else:\n return self.findKey(keyToFind, currentpoint, endpoint)\n\nA sample text file created in jEdit seems to work: \n>>> i = integertext.IntegerKeyTextFile('c:\\\\sampledata.txt')\n>>> i[1]\nkey found at 0 with value 345\n345\n\nIt could definitely be improved by caching found keys and using the cache to determine future starting seek points.\n",
"If you have any control over the format of the file, the \"sort and binary search\" responses are correct. The detail is that this only works with records of a fixed size and offset (well, I should say it only works easily with fixed length records).\nWith fixed length records, you can easily seek() around the sorted file to find your keys.\n",
"One possible optimization is to do a bit of buffering using the sizehint option in file.readlines(..). This allows you to load multiple lines in memory totaling to approximately sizehint bytes.\n",
"You need to implement binary search using seek()\n"
] | [
11,
7,
4,
3,
3,
2,
0,
0
] | [] | [] | [
"file_io",
"large_files",
"performance",
"python"
] | stackoverflow_0000744256_file_io_large_files_performance_python.txt |
Q:
Imports in python are static, any solution?
foo.py :
i = 10

def fi():
    global i
    i = 99
bar.py :
import foo
from foo import i
print i, foo.i
foo.fi()
print i, foo.i
This is problematic. Why does i not change when foo.i changes?
A:
What Ross is saying is to restructure foo like so:
_i = 10

def getI():
    return _i

def fi():
    global _i
    _i = 99
Then you will see it works the way you want:
>>> import foo
>>> print foo.getI()
10
>>> foo.fi()
>>> print foo.getI()
99
It is also 'better' in the sense that you avoid exporting a global, but still provide read access to it.
A:
What import does in bar.py is set up an identifier called i in the bar.py module namespace that points to the same address as the identifier called i in the foo.py module namespace.
This is an important distinction... bar.i is not pointing to foo.i, but rather to the same space in memory where the object 10 is held that foo.i happens to point to at the same time. In python, the variable names are not the memory space... they are the identifier that points to a memory space. When you import in bar, you are setting up a local namespace identifier.
Your code behaves as expected until foo.fi() is called, when the identifier i in the foo.py namespace is changed to point to the literal 99, which is an object in memory obviously at a different place than 10. Now the module-level namespace dict for foo has i identifying a different object in memory than the identifier i in bar.py.
Shane and rossfabricant have good suggestions on how to adjust your modules to achieve what you want.
A:
i inside foo.py is a different i from the one in bar.py. When in bar.py you do:
from foo import i
That creates a new i in bar.py that refers to the same object as the i in foo.py.
Your problem is: When you call foo.fi() and it does that:
i = 99
That assignment makes foo.py's i point to another integer object (99). Integer objects are immutable themselves (thankfully) so it only changes what foo.py's i is pointing to. Not bar.py's i. bar.py's i still points to the old object it was pointing before. (the integer immutable object 10)
You can test what I am talking about by placing the following command in bar.py:
print foo.i
it should print 99.
A:
You could call a function instead of referencing a global variable.
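Equivalently, keep the module reference and read the attribute through it — attribute lookup happens at access time, so it always sees the current binding. A minimal check with the question's foo.py:

import foo   # bind the module, not the name inside it

print foo.i  # 10
foo.fi()
print foo.i  # 99 -- looked up in foo's namespace at access time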
| Imports in python are static, any solution? | foo.py :
i = 10

def fi():
    global i
    i = 99
bar.py :
import foo
from foo import i
print i, foo.i
foo.fi()
print i, foo.i
This is problematic. Why does i not change when foo.i changes?
| [
"What Ross is saying is to restucture foo like so:\n_i = 10\n\ndef getI():\n return _i\n\ndef fi():\n global _i\n _i = 99\n\nThen you will see it works the way you want:\n>>> import foo\n>>> print foo.getI()\n10\n>>> foo.fi()\n>>> print foo.getI()\n99\n\nIt is also 'better' in the sense that you avoid exporting a global, but still provide read access to it.\n",
"What import does in bar.py is set up an identifier called i in the bar.py module namespace that points to the same address as the identifier called i in the foo.py module namespace.\nThis is an important distinction... bar.i is not pointing to foo.i, but rather to the same space in memory where the object 10 is held that foo.i happens to point to at the same time. In python, the variable names are not the memory space... they are the identifier that points to a memory space. When you import in bar, you are setting up a local namespace identifier.\nYour code behaves as expected until foo.fi() is called, when the identifier i in the foo.py namespace is changed to point to the literal 99, which is an object in memory obviously at a different place than 10. Now the module-level namespace dict for foo has i identifying a different object in memory than the identifier i in bar.py.\nShane and rossfabricant have good suggestions on how to adjust your modules to achieve what you want.\n",
"i inside foo.py is a different i from the one in bar.py. When in bar.py you do:\nfrom foo import i\n\nThat creates a new i in bar.py that refers to the same object as the i in foo.py.\nYour problem is: When you call foo.fi() and it does that:\ni = 99\n\nThat assignment makes foo.py's i point to another integer object (99). Integer objects are immutable themselves (thankfully) so it only changes what foo.py's i is pointing to. Not bar.py's i. bar.py's i still points to the old object it was pointing before. (the integer immutable object 10)\nYou can test what I am talking about by placing the following command in bar.py:\nprint foo.i\n\nit should print 99.\n",
"You could call a function instead of referencing a global variable. \n"
] | [
8,
7,
3,
0
] | [] | [] | [
"python"
] | stackoverflow_0000744325_python.txt |
Q:
Do Python regular expressions allow embedded options?
In particular, I'd like to know if I can specify an embedded option in the pattern string that will enable multiline mode. That is, typically with Python regular expressions multiline mode is enabled like this:
pattern = re.compile(r'foo', re.MULTILINE)
I'd like a way to get multiline matching by specifying it in the pattern string, rather than using the re.MULTILINE option. You can do this in Java with the embedded (?m) expression. e.g.,
pattern = re.compile(r'(?m)foo')
Is this possible in Python, or am I required to use the re.M option? And in general, is there a good reference for embedded pattern options in Python?
A:
yes.
From the docs:
(?iLmsux) (One or more letters from the set 'i',
'L', 'm', 's', 'u', 'x'.)
The group
matches the empty string; the letters
set the corresponding flags: re.I
(ignore case), re.L (locale
dependent), re.M (multi-line), re.S
(dot matches all), re.U (Unicode
dependent), and re.X (verbose), for
the entire regular expression. (The
flags are described in Module
Contents.)
This is useful if you wish
to include the flags as part of the
regular expression, instead of passing
a flag argument to the compile()
function.
Note that the (?x) flag changes how
the expression is parsed. It should be
used first in the expression string,
or after one or more whitespace
characters. If there are
non-whitespace characters before the
flag, the results are undefined.
| Do Python regular expressions allow embedded options? | In particular, I'd like to know if I can specify an embedded option in the pattern string that will enable multiline mode. That is, typically with Python regular expressions multiline mode is enabled like this:
pattern = re.compile(r'foo', re.MULTILINE)
I'd like a way to get multiline matching by specifying it in the pattern string, rather than using the re.MULTILINE option. You can do this in Java with the embedded (?m) expression. e.g.,
pattern = re.compile(r'(?m)foo')
Is this possible in Python, or am I required to use the re.M option? And in general, is there a good reference for embedded pattern options in Python?
| [
"yes.\nFrom the docs:\n\n(?iLmsux) (One or more letters from the set 'i',\n 'L', 'm', 's', 'u', 'x'.) \nThe group\n matches the empty string; the letters\n set the corresponding flags: re.I\n (ignore case), re.L (locale\n dependent), re.M (multi-line), re.S\n (dot matches all), re.U (Unicode\n dependent), and re.X (verbose), for\n the entire regular expression. (The\n flags are described in Module\n Contents.)\nThis is useful if you wish\n to include the flags as part of the\n regular expression, instead of passing\n a flag argument to the compile()\n function.\nNote that the (?x) flag changes how\n the expression is parsed. It should be\n used first in the expression string,\n or after one or more whitespace\n characters. If there are\n non-whitespace characters before the\n flag, the results are undefined.\n\n"
] | [
6
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000744885_python_regex.txt |
Q:
Python/Django Modeling Question
What is the best way to have many children records pointing to one parent record in the same model/table in Django?
Is this implementation correct?:
class TABLE(models.Model):
id = models.AutoField(primary_key=True)
parent = models.ForeignKey("TABLE", unique=False)
A:
Django has a special syntax for ForeignKey for self-joins:
class TABLE(models.Model):
id = models.AutoField(primary_key=True)
parent = models.ForeignKey('self')
Source (second paragraph)
A:
Two things:
First, you need to allow the possibility of a null value for parent, otherwise your TABLE tree can have no root.
Second, you need to worry about the possibility of "I'm my own grandpa." For a lively discussion, see here.
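For instance, building on the version above, allowing a root node could look like this (the null/blank flags are an assumption about the intended tree semantics):
class TABLE(models.Model):
    id = models.AutoField(primary_key=True)
    parent = models.ForeignKey('self', null=True, blank=True)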
| Python/Django Modeling Question | What is the best way to have many children records pointing to one parent record in the same model/table in Django?
Is this implementation correct?:
class TABLE(models.Model):
id = models.AutoField(primary_key=True)
parent = models.ForeignKey("TABLE", unique=False)
| [
"Django has a special syntax for ForeignKey for self-joins:\nclass TABLE(models.Model):\n id = models.AutoField(primary_key=True)\n parent = models.ForeignKey('self')\n\nSource (second paragraph)\n",
"Two things:\nFirst, you need to allow the possibility of a null value for parent, otherwise your TABLE tree can have no root.\nSecond, you need to worry about the possibility of \"I'm my own grandpa.\" For a lively discussion, see here.\n"
] | [
10,
2
] | [] | [] | [
"database",
"django",
"model",
"python"
] | stackoverflow_0000744921_database_django_model_python.txt |
Q:
Adding a user supplied property (at runtime) to an instance of Expando class in Google App Engine?
By creating datastore models that inherit from the Expando class I can
make my model-entities/instances have dynamic properties. That is
great! But what I want is the names of these dynamic properties to be
determined at runtime. Is that possible?
For example,
class ExpandoTest (db.Expando):
prop1 = db.StringProperty()
prop2 = db.StringProperty()
entity_one = ExpandoTest()
entity_two = ExpandoTest()
# what I do not want
entity_one.prop3 = 'Demo of dynamic property'
# what I want
entity_two.<property_name_as_entered_by_user_at_runtime> = 'This
property name was entered by the user, Great!!'
Is this possible? If so, how?
I've already tried several ways to do this but didn't succeed.
Thanks in advance.
A:
Usually, we use the setattr function directly.
setattr( entity_two, 'some_variable', some_value )
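For instance, with the question's ExpandoTest model, a runtime-supplied name could be applied roughly like this:
entity = ExpandoTest()
user_prop_name = 'rear_bumper'   # hypothetical name entered by the user at runtime
setattr(entity, user_prop_name, 'This property name was entered by the user')
entity.put()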
A:
Just found the solution to my own question. It was really simple but as I am a python noob I ended up posting the question that you see above.
For the code sample that I had used, this is what needs to be done:
entity_two.__setattr__(some_variable, some_value) #where some_variable is populated by user at runtime :)
| Adding a user supplied property (at runtime) to an instance of Expando class in Google App Engine? | By creating datastore models that inherit from the Expando class I can
make my model-entities/instances have dynamic properties. That is
great! But what I want is the names of these dynamic properties to be
determined at runtime. Is that possible?
For example,
class ExpandoTest (db.Expando):
prop1 = db.StringProperty()
prop2 = db.StringProperty()
entity_one = ExpandoTest()
entity_two = ExpandoTest()
# what I do not want
entity_one.prop3 = 'Demo of dynamic property'
# what I want
entity_two.<property_name_as_entered_by_user_at_runtime> = 'This
property name was entered by the user, Great!!'
Is this possible? If so, how?
I've already tried several ways to do this but didn't succeed.
Thanks in advance.
| [
"Usually, we use the setattr function directly.\nsetattr( entity_two, 'some_variable', some_value )\n\n",
"Just found the solution to my own question. It was really simple but as I am a python noob I ended up posting the question that you see above.\nFor the code sample that I had used, this is what needs to be done: \nentity_two.__setattr(some_variable, some_value) #where some_variable is populated by user at runtime :)\n\n"
] | [
3,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000746942_google_app_engine_python.txt |
Q:
How to write ampersand in node attribute?
I need to have the following attribute value in my XML node:
CommandLine="copy $(TargetPath) ..\..\
echo dummy > dummy.txt"
Actually this is part of a .vcproj file generated in VS2008. 
 means line break, as there should be 2 separate commands.
I'm using Python 2.5 with minidom to parse XML - but unfortunately I don't know how to store sequences like 
, the best thing I can get is &#x0D;.
How can I store exactly 
?
UPD : Exactly speaking i have to store not &, but \r\n sequence in form of


A:
You should try storing the actual characters (ASCII 13 and ASCII 10) in the attribute value, instead of their already-escaped counterparts.
EDIT: It looks like minidom does not handle newlines in attribute values correctly.
A literal line break in an attribute value is allowed, but it will be normalized upon document parsing, at which point it is converted to a space.
I filed a bug in this regard: http://bugs.python.org/issue5752
A:
I'm using Python 2.5 with minidom to parse XML - but unfortunately I don't know how to store sequences like
Well, you can't specify that you want hex escapes specifically, but according to the DOM LS standard, implementations should change \r\n in attribute values to character references automatically.
Unfortunately, minidom doesn't:
>>> from xml.dom import minidom
>>> document= minidom.parseString('<a/>')
>>> document.documentElement.setAttribute('a', 'a\r\nb')
>>> document.toxml()
u'<?xml version="1.0" ?><a a="a\r\nb"/>'
This is a bug in minidom. Try the same in another DOM (eg. pxdom):
>>> import pxdom
>>> document= pxdom.parseString('<a/>')
>>> document.documentElement.setAttribute('a', 'a\r\nb')
>>> document.pxdomContent
u'<?xml version="1.0" ?><a a="a&#13;&#10;b"/>'
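Until the minidom bug is fixed, one rough workaround sketch is to post-process minidom's output: toxml() adds no line breaks of its own, so any CR/LF left in the serialized string must have come from node content and can be escaped by hand:
xml = document.toxml().replace('\r', '&#x0D;').replace('\n', '&#x0A;')
# gives u'<?xml version="1.0" ?><a a="a&#x0D;&#x0A;b"/>'
Newlines in text nodes get escaped too, which is harmless, since the character references are equivalent there.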
A:
An ampersand is a special character in XML and as such most xml parsers require valid xml in order to function. Let minidom escape the ampersand for you (really it should already be escaped) and then when you need to display the escaped value, unescape it.
| How to write ampersand in node attribute? | I need to have the following attribute value in my XML node:
CommandLine="copy $(TargetPath) ..\..\
echo dummy > dummy.txt"
Actually this is part of a .vcproj file generated in VS2008. 
 means line break, as there should be 2 separate commands.
I'm using Python 2.5 with minidom to parse XML - but unfortunately I don't know how to store sequences like 
, the best thing I can get is &#x0D;.
How can I store exactly 
?
UPD : Exactly speaking i have to store not &, but \r\n sequence in form of


| [
"You should try storing the actual characters (ASCII 13 and ASCII 10) in the attribute value, instead of their already-escaped counterparts.\n\nEDIT: It looks like minidom does not handle newlines in attribute values correctly. \nEven though a literal line break in an attribute value is allowed, but it will face normalization upon document parsing, at which point it is converted to a space.\nI filed a bug in this regard: http://bugs.python.org/issue5752\n",
"\nI'm using Python 2.5 with minidom to parse XML - but unfortunately I don't know how to store sequences like \r\n\nWell, you can't specify that you want hex escapes specifically, but according to the DOM LS standard, implementations should change \\r\\n in attribute values to character references automatically.\nUnfortunately, minidom doesn't:\n>>> from xml.dom import minidom\n>>> document= minidom.parseString('<a/>')\n>>> document.documentElement.setAttribute('a', 'a\\r\\nb')\n>>> document.toxml()\nu'<?xml version=\"1.0\" ?><a a=\"a\\r\\nb\"/>'\n\nThis is a bug in minidom. Try the same in another DOM (eg. pxdom):\n>>> import pxdom\n>>> document= pxdom.parseString('<a/>')\n>>> document.documentElement.setAttribute('a', 'a\\r\\nb')\n>>> document.pxdomContent\nu'<?xml version=\"1.0\" ?><a a=\"a b\"/>'\n\n",
"An ampersand is a special character in XML and as such most xml parsers require valid xml in order to function. Let minidom escape the ampersand for you (really it should already be escaped) and then when you need to display the escaped value, unescape it.\n"
] | [
1,
1,
0
] | [] | [] | [
"python",
"xml"
] | stackoverflow_0000746602_python_xml.txt |
Q:
I need a Python Function that will output a random string of 4 different characters when given the desired probabilities of the characters
For example,
The function could be something like def RandABCD(n, .25, .34, .25, .25):
Where n is the length of the string to be generated and the following numbers are the desired probabilities of A, B, C, D.
I would imagine this is quite simple, however I am having trouble creating a working program. Any help would be greatly appreciated.
A:
Here's the code to select a single weighted value. You should be able to take it from here. It uses bisect and random to accomplish the work.
from bisect import bisect
from random import random
def WeightedABCD(*weights):
chars = 'ABCD'
breakpoints = [sum(weights[:x+1]) for x in range(4)]
return chars[bisect(breakpoints, random())]
Call it like this: WeightedABCD(.25, .34, .25, .25).
EDIT: Here is a version that works even if the weights don't add up to 1.0:
from bisect import bisect_left
from random import uniform
def WeightedABCD(*weights):
chars = 'ABCD'
breakpoints = [sum(weights[:x+1]) for x in range(4)]
return chars[bisect_left(breakpoints, uniform(0.0,breakpoints[-1]))]
A:
The random class is quite powerful in python. You can generate a list with the characters desired at the appropriate weights and then use random.choice to obtain a selection.
First, make sure you do an import random.
For example, let's say you wanted a truly random string from A,B,C, or D.
1. Generate a list with the characters
li = ['A','B','C','D']
Then obtain values from it using random.choice
output = "".join([random.choice(li) for i in range(0, n)])
You could easily make that a function with n as a parameter.
In the above case, you have an equal chance of getting A,B,C, or D.
You can use duplicate entries in the list to give characters higher probabilities. So, for example, let's say you wanted a 50% chance of A and 25% chances of B and C respectively. You could have an array like this:
li = ['A','A','B','C']
And so on.
It would not be hard to parameterize the characters coming in with desired weights, to model that I'd use a dictionary.
characterbasis = {'A':25, 'B':25, 'C':25, 'D':25}
Make that the first parameter, and the second being the length of the string and use the above code to generate your string.
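A rough sketch of how that dictionary-driven idea could look (integer weights only; the names here are illustrative):
import random

def weighted_string(n, weights):
    # weights maps a character to an integer weight, e.g. {'A': 50, 'B': 25, 'C': 25}
    pool = ''.join(ch * w for ch, w in weights.items())
    return ''.join(random.choice(pool) for i in range(n))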
A:
For four letters, here's something quick off the top of my head:
from random import random
def randABCD(n, pA, pB, pC, pD):
# assumes pA + pB + pC + pD == 1
cA = pA
cB = cA + pB
cC = cB + pC
def choose():
r = random()
if r < cA:
return 'A'
elif r < cB:
return 'B'
elif r < cC:
return 'C'
else:
return 'D'
return ''.join([choose() for i in xrange(n)])
I have no doubt that this can be made much cleaner/shorter, I'm just in a bit of a hurry right now.
The reason I wouldn't be content with David in Dakota's answer of using a list of duplicate characters is that depending on your probabilities, it may not be possible to create a list with duplicates in the right numbers to simulate the probabilities you want. (Well, I guess it might always be possible, but you might wind up needing a huge list - what if your probabilities were 0.11235442079, 0.4072777384, 0.2297927874, 0.25057505341?)
EDIT: here's a much cleaner generic version that works with any number of letters with any weights:
from bisect import bisect
from random import uniform
def rand_string(n, content):
''' Creates a string of letters (or substrings) chosen independently
with specified probabilities. content is a dictionary mapping
a substring to its "weight" which is proportional to its probability,
and n is the desired number of elements in the string.
This does not assume the sum of the weights is 1.'''
l, cdf = zip(*[(l, w) for l, w in content.iteritems()])
cdf = list(cdf)
for i in xrange(1, len(cdf)):
cdf[i] += cdf[i - 1]
return ''.join([l[bisect(cdf, uniform(0, cdf[-1]))] for i in xrange(n)])
A:
Here is a rough idea of what might suit you
import random as r
def distributed_choice(probs):
    x = r.random()
    cum = 0.0
    for pair in probs:
        if x < cum + pair[1]:
            return pair[0]
        cum += pair[1]
The parameter probs takes a list of pairs of the form (object, probability). It is assumed that the sum of probabilities is 1 (otherwise, it's trivial to normalize).
To use it just execute:
''.join([distributed_choice(probs) for i in range(4)])
A:
Hmm, something like:
import random
class RandomDistribution:
def __init__(self, kv):
self.entries = kv.keys()
self.where = []
cnt = 0
for x in self.entries:
self.where.append(cnt)
cnt += kv[x]
self.where.append(cnt)
def find(self, key):
l, r = 0, len(self.where)-1
while l+1 < r:
m = (l+r)/2
if self.where[m] <= key:
l=m
else:
r=m
return self.entries[l]
def randomselect(self):
return self.find(random.random()*self.where[-1])
rd = RandomDistribution( {"foo": 5.5, "bar": 3.14, "baz": 2.8 } )
for x in range(1000):
print rd.randomselect()
should get you most of the way...
A:
Thank you all for your help, I was able to figure something out, mostly with this info.
For my particular need, I did something like this:
import random
#Create a function to randomize a given string
def makerandom(seq):
return ''.join(random.sample(seq, len(seq)))
def randomDNA(n, probA=0.25, probC=0.25, probG=0.25, probT=0.25):
    notrandom = ''
    A = int(n*probA)
    C = int(n*probC)
    T = int(n*probT)
    G = int(n*probG)
    # The remainder part here is used to make sure all n are used, as one cannot
    # have half an A for example.
    remainder = ''
    for i in range(0, n-(A+G+C+T)):
        remainder += random.choice("ATGC")
    notrandom = notrandom + 'A'*A + 'C'*C + 'G'*G + 'T'*T + remainder
    return makerandom(notrandom)
| I need a Python Function that will output a random string of 4 different characters when given the desired probabilities of the characters | For example,
The function could be something like def RandABCD(n, .25, .34, .25, .25):
Where n is the length of the string to be generated and the following numbers are the desired probabilities of A, B, C, D.
I would imagine this is quite simple, however I am having trouble creating a working program. Any help would be greatly appreciated.
| [
"Here's the code to select a single weighted value. You should be able to take it from here. It uses bisect and random to accomplish the work.\nfrom bisect import bisect\nfrom random import random\n\ndef WeightedABCD(*weights):\n chars = 'ABCD'\n breakpoints = [sum(weights[:x+1]) for x in range(4)]\n return chars[bisect(breakpoints, random())]\n\nCall it like this: WeightedABCD(.25, .34, .25, .25).\nEDIT: Here is a version that works even if the weights don't add up to 1.0:\nfrom bisect import bisect_left\nfrom random import uniform\n\ndef WeightedABCD(*weights):\n chars = 'ABCD'\n breakpoints = [sum(weights[:x+1]) for x in range(4)]\n return chars[bisect_left(breakpoints, uniform(0.0,breakpoints[-1]))]\n\n",
"The random class is quite powerful in python. You can generate a list with the characters desired at the appropriate weights and then use random.choice to obtain a selection. \nFirst, make sure you do an import random.\nFor example, let's say you wanted a truly random string from A,B,C, or D.\n1. Generate a list with the characters\nli = ['A','B','C','D']\n\nThen obtain values from it using random.choice\noutput = \"\".join([random.choice(li) for i in range(0, n)])\n\nYou could easily make that a function with n as a parameter.\nIn the above case, you have an equal chance of getting A,B,C, or D. \nYou can use duplicate entries in the list to give characters higher probabilities. So, for example, let's say you wanted a 50% chance of A and 25% chances of B and C respectively. You could have an array like this:\nli = ['A','A','B','C']\nAnd so on.\nIt would not be hard to parameterize the characters coming in with desired weights, to model that I'd use a dictionary. \ncharacterbasis = {'A':25, 'B':25, 'C':25, 'D':25}\nMake that the first parameter, and the second being the length of the string and use the above code to generate your string.\n",
"For four letters, here's something quick off the top of my head:\nfrom random import random\n\ndef randABCD(n, pA, pB, pC, pD):\n # assumes pA + pB + pC + pD == 1\n cA = pA\n cB = cA + pB\n cC = cB + pC\n def choose():\n r = random()\n if r < cA:\n return 'A'\n elif r < cB:\n return 'B'\n elif r < cC:\n return 'C'\n else:\n return 'D'\n return ''.join([choose() for i in xrange(n)])\n\nI have no doubt that this can be made much cleaner/shorter, I'm just in a bit of a hurry right now.\nThe reason I wouldn't be content with David in Dakota's answer of using a list of duplicate characters is that depending on your probabilities, it may not be possible to create a list with duplicates in the right numbers to simulate the probabilities you want. (Well, I guess it might always be possible, but you might wind up needing a huge list - what if your probabilities were 0.11235442079, 0.4072777384, 0.2297927874, 0.25057505341?)\nEDIT: here's a much cleaner generic version that works with any number of letters with any weights:\nfrom bisect import bisect\nfrom random import uniform\n\ndef rand_string(n, content):\n ''' Creates a string of letters (or substrings) chosen independently\n with specified probabilities. content is a dictionary mapping\n a substring to its \"weight\" which is proportional to its probability,\n and n is the desired number of elements in the string.\n\n This does not assume the sum of the weights is 1.'''\n l, cdf = zip(*[(l, w) for l, w in content.iteritems()])\n cdf = list(cdf)\n for i in xrange(1, len(cdf)):\n cdf[i] += cdf[i - 1]\n return ''.join([l[bisect(cdf, uniform(0, cdf[-1]))] for i in xrange(n)]) \n\n",
"Here is a rough idea of what might suit you\nimport random as r\n\ndef distributed_choice(probs):\n r= r.random()\n cum = 0.0\n\n for pair in probs:\n if (r < cum + pair[1]):\n return pair[0] \n cum += pair[1]\n\nThe parameter probs takes a list of pairs of the form (object, probability). It is assumed that the sum of probabilities is 1 (otherwise, its trivial to normalize).\nTo use it just execute:\n''.join([distributed_choice(probs)]*4)\n\n",
"Hmm, something like:\nimport random\nclass RandomDistribution:\n def __init__(self, kv):\n self.entries = kv.keys()\n self.where = []\n cnt = 0\n for x in self.entries:\n self.where.append(cnt)\n cnt += kv[x]\n self.where.append(cnt) \n\n def find(self, key):\n l, r = 0, len(self.where)-1\n while l+1 < r:\n m = (l+r)/2\n if self.where[m] <= key:\n l=m\n else:\n r=m\n return self.entries[l]\n\n def randomselect(self):\n return self.find(random.random()*self.where[-1])\n\nrd = RandomDistribution( {\"foo\": 5.5, \"bar\": 3.14, \"baz\": 2.8 } )\nfor x in range(1000):\n print rd.randomselect()\n\nshould get you most of the way...\n",
"Thank you all for your help, I was able to figure something out, mostly with this info. \nFor my particular need, I did something like this:\nimport random\n#Create a function to randomize a given string\ndef makerandom(seq):\n return ''.join(random.sample(seq, len(seq)))\ndef randomDNA(n, probA=0.25, probC=0.25, probG=0.25, probT=0.25):\n notrandom=''\n A=int(n*probA)\n C=int(n*probC)\n T=int(n*probT)\n G=int(n*probG)\n\n#The remainder part here is used to make sure all n are used, as one cannot\n#have half an A for example.\n remainder=''\n for i in range(0, n-(A+G+C+T)):\n ramainder+=random.choice(\"ATGC\")\n notrandom=notrandom+ 'A'*A+ 'C'*C+ 'G'*G+ 'T'*T + remainder\n return makerandom(notrandom)\n\n"
] | [
4,
2,
2,
0,
0,
0
] | [] | [] | [
"python",
"random"
] | stackoverflow_0000744127_python_random.txt |
Q:
Basic python. Quick question regarding calling a function
I've got a basic problem in python, and I would be glad for some help :-)
I have two functions. One that converts a text file to a dictionary. And one that splits a sentence into separate words:
(This is the functiondoc.txt)
def autoparts():
list_of_parts= open('list_of_parts.txt', 'r')
for line in list_of_parts:
k, v= line.split()
list1.append(k)
list2.append(v)
dictionary = dict(zip(k, v))
def splittext(text):
words = text.split()
print words
Now I want to make a program that uses these two functions.
(this is the program.txt)
from functiondoc import *
# A and B are keys in the dict. The values are 'rear_bumper' 'back_seat'
text = 'A B' # Input
# Splits the input into separate strings.
input_ = split_line(text)
Here's the part I can't get right. I need to use the autoparts function to output the values (rear_bumper back_seat), but I'm not sure how to call that function so it does that. I don't think it's that hard. But I can't figure it out...
Kind Regards,
Th
A:
Some quick points:
You should not name Python source files ".txt", you should use ".py".
Your indents look wrong, but that might just be Stack Overflow.
You need to call the autoparts() function to set up the dictionary.
The autoparts() function should probably return the dictionary, to make it usable by other code.
When opening a text file, you should use the t mode specifier. On some platforms, the lower-level I/O code must know that it is reading text, so you need to tell it.
A:
As people have pointed out, you need to use the py extension for python source files. Your files would become "functiondoc.py" and "program.py". This will make your import functiondoc work correctly (as long as they are in the same directory)
The biggest problem with the autoparts function is you never returned anything. The other big problem is you used the wrong variable..
for line in list_of_parts:
k, v = line.split()
list1.append(k)
list2.append(v)
# k and v are now the last line split up, *not* the list you've been constructing.
# The following incorrect line:
dictionary = dict(zip(k, v))
# ...should be:
dictionary = dict(zip(list1, list2))
# ..although you shouldn't use zip for this:
You almost never have to use zip, there are times when it can be useful, but for creating a simple dict, it's incorrect.. Instead of doing..
for line in list_of_parts:
...
dictionary = dict(zip(k, v))
..simply create an empty dict before the loop, then do mydict[key_variable] = value_variable
For example, how I might have written the function..
def autoparts():
# open() returns a file object, not the contents of the file,
# you need to use .read() or .readlines() to get the actual text
input_file = open('list_of_parts.txt', 'r')
    all_lines = input_file.readlines() # reads the file as a list (one index per line)
    mydict = {} # initialise an empty dictionary
    for line in all_lines:
k, v = line.split()
mydict[k] = v
return mydict # you have to explicitly return stuff, or it returns None
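Putting the two pieces together, usage could then look something like this:
parts = autoparts()
for word in 'A B'.split():
    print parts[word]   # e.g. rear_bumper, then back_seat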
A:
In addition to all of the other hints and tips, I think you're missing something crucial: your functions actually need to return something.
When you create autoparts() or splittext(), the idea is that this will be a function that you can call, and it can (and should) give something back.
Once you figure out the output that you want your function to have, you need to put it in a return statement.
For example, if you wanted to splittext to return the list of words, rather than print them, you would need the line return words. If you want your autoparts to return the dictionary you've built, you would use return dictionary.
To be more precise (and to answer your comment/question below): you don't want to "return a function that makes a dictionary"; you want to return the dictionary while inside the function. So, the last line of your function should be return dictionary (inside the function!) See, for example, the (accepted!) solution from dbr, above.
I think you need to go back to the beginning and read a book or website about python in particular and programming in general, since you are slightly rusty on some of the concepts. One good one (others are available, of course) is http://diveintopython3.ep.io/
A:
Don't bother creating the lists first, just go straight to the dictionary:
parts_dict={}
list_of_parts = open('list_of_parts.txt', 'r')
for line in list_of_parts:
k, v = line.split()
parts_dict[k] = v
Also, are these keys unique? Because if not some of the values will get overwritten.
A:
There are a lot of problems with what you've written so far, but your question was how to call the auto parts function. Here's how; first, rename your files to functiondocs.py and program.py - they're python so make them python files.
Next, to call the autoparts function, you simply change your main program listing from:
from functiondoc import *
# A and B are keys in the dict. The values are 'rear_bumper' 'back_seat'
text = 'A B' # Input
# Splits the input into separate strings.
input_ = split_line(text)
to:
from functiondoc import *
# Call the autparts function
autoparts()
In my opinion, it looks like you're asking us to do a CS homework assignment.. but maybe I'm just cynical ;-)
A:
Here's about the simplest way you could do this:
def filetodict(filename):
return dict(line.split() for line in open(filename))
parts = filetodict("list_of_parts.txt")
print parts
Here's the output:
{'a': 'apple', 'c': 'cheese', 'b': 'bacon', 'e': 'egg', 'd': 'donut'}
The file contents:
a apple
b bacon
c cheese
d donut
e egg
| Basic python. Quick question regarding calling a function | I've got a basic problem in python, and I would be glad for some help :-)
I have two functions. One that converts a text file to a dictionary. And one that splits a sentence into separate words:
(This is the functiondoc.txt)
def autoparts():
list_of_parts= open('list_of_parts.txt', 'r')
for line in list_of_parts:
k, v= line.split()
list1.append(k)
list2.append(v)
dictionary = dict(zip(k, v))
def splittext(text):
words = text.split()
print words
Now I want to make a program that uses these two functions.
(this is the program.txt)
from functiondoc import *
# A and B are keys in the dict. The values are 'rear_bumper' 'back_seat'
text = 'A B' # Input
# Splits the input into separate strings.
input_ = split_line(text)
Here's the part I can't get right. I need to use the autoparts function to output the values (rear_bumper back_seat), but I'm not sure how to call that function so it does that. I don't think it's that hard. But I can't figure it out...
Kind Regards,
Th
| [
"Some quick points:\n\nYou should not name Python source files \".txt\", you should use \".py\".\nYour indents look wrong, but that might just be Stack Overflow.\nYou need to call the autoparts() function to set up the dictionary.\nThe autoparts() function should probably return the dictionary, to make it usable by other code.\nWhen opening a text file, you should use the t mode specifier. On some platforms, the lower-level I/O code must know that it is reading text, so you need to tell it.\n\n",
"As people have pointed out, you need to use the py extension for python source files. Your files would become \"functiondoc.py\" and \"program.py\". This will make your import functiondoc work correctly (as long as they are in the same directory)\nThe biggest problem with the autoparts function is you never returned anything. The other big problem is you used the wrong variable..\nfor line in list_of_parts:\n k, v = line.split()\n list1.append(k)\n list2.append(v)\n\n# k and v are now the last line split up, *not* the list you've been constructing.\n# The following incorrect line:\ndictionary = dict(zip(k, v))\n# ...should be:\ndictionary = dict(zip(list1, list2))\n# ..although you shouldn't use zip for this:\n\nYou almost never have to use zip, there are times when it can be useful, but for creating a simple dict, it's incorrect.. Instead of doing..\nfor line in list_of_parts:\n ...\ndictionary = dict(zip(k, v))\n\n..simply create an empty dict before the loop, then do mydict[key_variable] = value_variable\nFor example, how I might have written the function..\ndef autoparts():\n # open() returns a file object, not the contents of the file,\n # you need to use .read() or .readlines() to get the actual text\n input_file = open('list_of_parts.txt', 'r')\n all_lines = input_file.read_lines() # reads files as a list (one index per line)\n\n mydict = {} # initialise a empty dictionary\n\n for line in list_of_parts:\n k, v = line.split()\n mydict[k] = v\n\n return mydict # you have to explicitly return stuff, or it returns None\n\n",
"In addition to all of the other hints and tips, I think you're missing something crucial: your functions actually need to return something.\nWhen you create autoparts() or splittext(), the idea is that this will be a function that you can call, and it can (and should) give something back.\nOnce you figure out the output that you want your function to have, you need to put it in a return statement.\nFor example, if you wanted to splittext to return the list of words, rather than print them, you would need the line return words. If you want your autoparts to return the dictionary you've built, you would use return dictionary.\nTo be more precise (and to answer your comment/question below): you don't want to \"return a function that makes a dictionary\"; you want to return the dictionary while inside the function. So, the last line of your function should be return dictionary (inside the function!) See, for example, the (accepted!) solution from dbr, above.\nI think you need to go back to the beginning and read a book or website about python in particular and programming in general, since you are slightly rusty on some of the concepts. One good one (others are available, of course) is http://diveintopython3.ep.io/\n",
"Don't bother creating the lists first, just go straight to the dictionary:\nparts_dict={}\nlist_of_parts = open('list_of_parts.txt', 'r')\nfor line in list_of_parts:\n k, v = line.split()\n parts_dict[k] = v\n\nAlso, are these keys unique? Because if not some of the values will get overwritten.\n",
"There are a lot of problems with what you've written so far, but your question was how to call the auto parts function. Here's how; first, rename your files to functiondocs.py and program.py - they're python so make them python files.\nNext, to call the autoparts function, you simply change your main program listing from:\nfrom functiondoc import *\n\n# A and B are keys in the dict. The values are 'rear_bumper' 'back_seat'\ntext = 'A B' # Input\n\n# Splits the input into separate strings.\ninput_ = split_line(text)\n\nto:\nfrom functiondoc import *\n\n# Call the autparts function\nautoparts()\n\nIn my opinion, it looks like you're asking us to do a CS homework assignment.. but maybe I'm just cynical ;-)\n",
"Here's about the simplest way you could do this:\ndef filetodict(filename):\n return dict(line.split() for line in open(filename))\n\nparts = filetodict(\"list_of_parts.txt\")\nprint parts\n\nHere's the output:\n{'a': 'apple', 'c': 'cheese', 'b': 'bacon', 'e': 'egg', 'd': 'donut'}\n\nThe file contents:\na apple\nb bacon\nc cheese\nd donut\ne egg\n\n"
] | [
4,
3,
2,
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000746774_python.txt |
Q:
For each function in class within python
In python is it possible to run each function inside a class?
EDIT:
What I am trying to do is call all of the functions inside a class, collect their return variables and work with that.
A:
yes, you can.
Quick and dirty:
class foo:
def one(self):
print "here is one"
def two(self):
print "here is two"
def three(self):
print "here is three"
obj = foo()
for entry in dir(obj):
print entry, callable(getattr(obj,entry))
if callable(getattr(obj,entry)):
getattr(obj,entry)()
If you want a more refined concept, check the unittest.py module. There should be code that executes all methods starting with the string "test"
A:
Depends what you mean by "function". Something like this could work, though:
import inspect
def methods(c):
return (m for m in (getattr(c, d) for d in dir(c))
if inspect.ismethoddescriptor(m) or inspect.ismethod(m))
Then:
class C:
def f(self): pass
>>> list(methods(C))
[<unbound method C.f>]
A:
The dir builtin will list all attributes of an object, for example:
>>> class MyClass:
... def one(self):
... print "one"
... def two(self):
... print "two"
... def three(self):
... print "three"
...
>>> dir(MyClass)
['__doc__', '__module__', 'one', 'three', 'two']
It also works on an initialised class..
>>> c = MyClass()
>>> dir(c)
['__doc__', '__module__', 'one', 'three', 'two']
Methods are just attributes which happen to be callable (via c.attribute() ) - we can use the getattr function to reference that method via a variable..
>>> myfunc = getattr(c, 'one')
>>> myfunc
<bound method MyClass.one of <__main__.MyClass instance at 0x7b0d0>>
Then we can simply call that variable..
>>> myfunc()
one # the output from the c.one() method
Since some attributes are not functions (in the above example, __doc__ and __module__). We can us the callable builtin to check if it's a callable method (a function):
>>> callable(c.three)
True
>>> callable(c.__doc__)
False
So to combine all that into a loop:
>>> for cur_method_name in dir(c):
... the_attr = getattr(c, cur_method_name)
... if callable(the_attr):
... the_attr()
...
one
three
two
Remember this will call methods like __init__ again, which probably isn't desired. You might want to skip any cur_method_name which start with an underscore
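Since the question's edit asks to collect the return values, here is a small sketch building on the loop above (skipping underscore names, gathering results into a dict):
results = {}
for cur_method_name in dir(c):
    if cur_method_name.startswith('_'):
        continue
    the_attr = getattr(c, cur_method_name)
    if callable(the_attr):
        results[cur_method_name] = the_attr()
# for MyClass above: {'one': None, 'three': None, 'two': None}, since the methods only print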
A:
Here is one that uses yield to loop through the functions in the class.
def get_functions(mod):
for entry in dir(mod):
obj=getattr(mod,entry);
if hasattr(obj, '__call__') and hasattr(obj,'__func__') :
yield obj
class foo:
def one(self):
print ("here is two")
return 1
def two(self):
print ("here is two")
return 2
def three(self):
print ("here is three")
return 3
print(sum([fun() for fun in get_functions(foo())]))
A:
Since you wrote the class, you already know all the functions.
class ThisIsPeculiar( object ):
def aFunction( self, arg1 ):
pass
def anotherFunction( self, thisArg, thatArg ):
pass
functionsToCall = [ aFunction, anotherFunction ]
>>> p= ThisIsPeculiar()
>>> p.functionsToCall
[<function aFunction at 0x6b830>, <function anotherFunction at 0x6b870>]
A:
Try using the inspect module:
import inspect
class Spam:
def eggs(self):
print "eggs"
def ducks(self):
print "ducks"
value = "value"
spam = Spam()
for name, method in inspect.getmembers(spam, callable):
method()
Output:
ducks
eggs
| For each function in class within python | In python is it possible to run each function inside a class?
EDIT:
What I am trying to do is call all of the functions inside a class, collect their return variables and work with that.
| [
"yes, you can.\nQuick and dirty: \nclass foo:\n def one(self):\n print \"here is one\"\n def two(self):\n print \"here is two\"\n def three(self):\n print \"here is three\"\n\n\nobj = foo()\nfor entry in dir(obj):\n print entry, callable(getattr(obj,entry))\n if callable(getattr(obj,entry)):\n getattr(obj,entry)()\n\nIf you want a more refined concept, check the unittest.py module. There should be code that executes all methods starting with the string \"test\"\n",
"Depends what you mean by \"function\". Something like this could work, though:\nimport inspect\n\ndef methods(c):\n return (m for m in (getattr(c, d) for d in dir(c))\n if inspect.ismethoddescriptor(m) or inspect.ismethod(m))\n\nThen:\nclass C:\n def f(self): pass\n\n>>> list(methods(C))\n[<unbound method C.f>]\n\n",
"The dir builtin will list all attributes of an object, for example:\n>>> class MyClass:\n... def one(self):\n... print \"one\"\n... def two(self):\n... print \"two\"\n... def three(self):\n... print \"three\"\n... \n>>> dir(MyClass)\n['__doc__', '__module__', 'one', 'three', 'two']\n\nIt also works on an initialised class..\n>>> c = MyClass()\n>>> dir(c)\n['__doc__', '__module__', 'one', 'three', 'two']\n\nMethods are just attributes which happen to be callable (via c.attribute() ) - we can use the getattr function to reference that method via a variable..\n>>> myfunc = getattr(c, 'one')\n>>> myfunc\n<bound method MyClass.one of <__main__.MyClass instance at 0x7b0d0>>\n\nThen we can simply call that variable..\n>>> myfunc()\none # the output from the c.one() method\n\nSince some attributes are not functions (in the above example, __doc__ and __module__). We can us the callable builtin to check if it's a callable method (a function):\n>>> callable(c.three)\nTrue\n>>> callable(c.__doc__)\nFalse\n\nSo to combine all that into a loop:\n>>> for cur_method_name in dir(c):\n... the_attr = getattr(c, cur_method_name)\n... if callable(the_attr):\n... the_attr()\n... \none\nthree\ntwo\n\nRemember this will call methods like __init__ again, which probably isn't desired. You might want to skip any cur_method_name which start with an underscore\n",
"Here is one that uses yield to loop through the functions in the class. \ndef get_functions(mod):\n for entry in dir(mod):\n obj=getattr(mod,entry);\n if hasattr(obj, '__call__') and hasattr(obj,'__func__') :\n yield obj\n\nclass foo:\n def one(self):\n print (\"here is two\")\n return 1\n def two(self):\n print (\"here is two\")\n return 2\n def three(self):\n print (\"here is three\")\n return 3\n\n\nprint(sum([fun() for fun in get_functions(foo())]))\n\n",
"Since you wrote the class, you already know all the functions.\nclass ThisIsPeculiar( object ):\n def aFunction( self, arg1 ):\n pass\n def anotherFunction( self, thisArg, thatArg ):\n pass\n functionsToCall = [ aFunction, anotherFunction ]\n\n>>> p= ThisIsPeculiar()\n>>> p.functionsToCall\n[<function aFunction at 0x6b830>, <function anotherFunction at 0x6b870>]\n\n",
"Try using the inspect module:\nimport inspect\n\nclass Spam:\n def eggs(self):\n print \"eggs\"\n def ducks(self):\n print \"ducks\"\n value = \"value\"\n\nspam = Spam()\nfor name, method in inspect.getmembers(spam, callable):\n method()\n\nOutput:\nducks\neggs\n\n"
] | [
4,
3,
3,
1,
1,
1
] | [] | [] | [
"oop",
"python",
"reflection"
] | stackoverflow_0000742708_oop_python_reflection.txt |
Q:
Calling unknown Python functions
This was the best name I could come up with for the topic and none of my searches yielded information relevant to the question.
How do I call a function from a string, i.e.
functions_to_call = ["func_1", "func_2", "func_3"]
for f in functions_to_call:
call f
A:
You can use the python builtin locals() to get local declarations, eg:
def f():
print "Hello, world"
def g():
print "Goodbye, world"
for fname in ["f", "g"]:
fn = locals()[fname]
print "Calling %s" % (fname)
fn()
You can use the "imp" module to load functions from user-specified python files which gives you a bit more flexibility.
Using locals() makes sure you can't call generic python, whereas with eval, you could end up with the user setting your string to something untoward like:
f = 'open("/etc/passwd").readlines'
print eval(f+"()")
or similar and end up with your programming doing things you don't expect to be possible. Using similar tricks with locals() and dicts in general will just give attackers KeyErrors.
A:
how do you not know the name of the function to call? Store the functions instead of the name:
functions_to_call = [int, str, float]
value = 33.5
for function in functions_to_call:
print "calling", function
print "result:", function(value)
A:
Something like that...when i was looking at function pointers in python..
def myfunc(x):
print x
dict = {
"myfunc": myfunc
}
dict["myfunc"]("hello")
func = dict.get("myfunc")
if callable(func):
func(10)
A:
Have a look at the getattr function:
http://docs.python.org/library/functions.html?highlight=getattr#getattr
import sys
functions_to_call = ["func_1", "func_2", "func_3"]
for f in functions_to_call:
getattr(sys.modules[__name__], f)()
A:
functions_to_call = ["func_1", "func_2", "func_3"]
for f in functions_to_call:
eval(f+'()')
Edited to add:
Yes, eval() generally is a bad idea, but this is what the OP was looking for.
A:
Don't use eval! It's almost never required, functions in python are just attributes like everything else, and are accessible either using getattr on a class, or via locals():
>>> print locals()
{'__builtins__': <module '__builtin__' (built-in)>,
'__doc__': None,
'__name__': '__main__',
'func_1': <function func_1 at 0x74bf0>,
'func_2': <function func_2 at 0x74c30>,
'func_3': <function func_3 at 0x74b70>,
}
Since that's a dictionary, you can get the functions via the dict-keys func_1, func_2 and func_3:
>>> f1 = locals()['func_1']
>>> f1
<function func_1 at 0x74bf0>
>>> f1()
one
So, the solution without resorting to eval:
>>> def func_1():
... print "one"
...
>>> def func_2():
... print "two"
...
>>> def func_3():
... print "three"
...
>>> functions_to_call = ["func_1", "func_2", "func_3"]
>>> for fname in functions_to_call:
... cur_func = locals()[fname]
... cur_func()
...
one
two
three
A:
See the eval and compile functions.
This function can also be used to execute arbitrary code objects (such as those created by compile()). In this case pass a code object instead of a string. If the code object has been compiled with 'exec' as the kind argument, eval()‘s return value will be None.
| Calling unknown Python functions | This was the best name I could come up with for the topic and none of my searches yielded information relevant to the question.
How do I call a function from a string, i.e.
functions_to_call = ["func_1", "func_2", "func_3"]
for f in functions_to_call:
call f
| [
"You can use the python builtin locals() to get local declarations, eg:\ndef f():\n print \"Hello, world\"\n\ndef g():\n print \"Goodbye, world\"\n\nfor fname in [\"f\", \"g\"]:\n fn = locals()[fname]\n print \"Calling %s\" % (fname)\n fn()\n\nYou can use the \"imp\" module to load functions from user-specified python files which gives you a bit more flexibility.\nUsing locals() makes sure you can't call generic python, whereas with eval, you could end up with the user setting your string to something untoward like:\nf = 'open(\"/etc/passwd\").readlines'\nprint eval(f+\"()\")\n\nor similar and end up with your programming doing things you don't expect to be possible. Using similar tricks with locals() and dicts in general will just give attackers KeyErrors.\n",
"how do you not know the name of the function to call? Store the functions instead of the name:\nfunctions_to_call = [int, str, float]\n\nvalue = 33.5\n\nfor function in functions_to_call:\n print \"calling\", function\n print \"result:\", function(value)\n\n",
"Something like that...when i was looking at function pointers in python..\ndef myfunc(x):\n print x\n\ndict = {\n \"myfunc\": myfunc\n}\n\ndict[\"myfunc\"](\"hello\")\n\nfunc = dict.get(\"myfunc\")\nif callable(func):\n func(10)\n\n",
"Have a look at the getattr function:\nhttp://docs.python.org/library/functions.html?highlight=getattr#getattr\nimport sys\n\nfunctions_to_call = [\"func_1\", \"func_2\", \"func_3\"]\n\nfor f in functions_to_call:\n getattr(sys.modules[__name__], f)()\n\n",
"functions_to_call = [\"func_1\", \"func_2\", \"func_3\"]\n\nfor f in functions_to_call:\n eval(f+'()')\n\nEdited to add:\nYes, eval() generally is a bad idea, but this is what the OP was looking for.\n",
"Don't use eval! It's almost never required, functions in python are just attributes like everything else, and are accessible either using getattr on a class, or via locals():\n>>> print locals()\n{'__builtins__': <module '__builtin__' (built-in)>,\n '__doc__': None,\n '__name__': '__main__',\n 'func_1': <function func_1 at 0x74bf0>,\n 'func_2': <function func_2 at 0x74c30>,\n 'func_3': <function func_3 at 0x74b70>,\n}\n\nSince that's a dictionary, you can get the functions via the dict-keys func_1, func_2 and func_3:\n>>> f1 = locals()['func_1']\n>>> f1\n<function func_1 at 0x74bf0>\n>>> f1()\none\n\nSo, the solution without resorting to eval:\n>>> def func_1():\n... print \"one\"\n... \n>>> def func_2():\n... print \"two\"\n... \n>>> def func_3():\n... print \"three\"\n... \n>>> functions_to_call = [\"func_1\", \"func_2\", \"func_3\"]\n>>> for fname in functions_to_call:\n... cur_func = locals()[fname]\n... cur_func()\n... \none\ntwo\nthree\n\n",
"See the eval and compile functions.\n\nThis function can also be used to execute arbitrary code objects (such as those created by compile()). In this case pass a code object instead of a string. If the code object has been compiled with 'exec' as the kind argument, eval()‘s return value will be None.\n\n"
] | [
19,
14,
8,
6,
2,
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0000744626_python.txt |
Q:
benchmarking django apps
I'm interested in testing the performance of my django apps as I go, what is the best way to get line by line performance data?
note: Googling this returns lots of people benchmarking django itself. I'm not looking for benchmarks of django, I'm trying to test the performance of the django apps that I'm writing :)
Thanks!
edit: By "line by line" I just mean timing individual functions, db calls, etc to find out where the bottlenecks are on a very granular level
A:
There's two layers to this. We have most of #1 in place for our testing. We're about to start on #2.
1. Django in isolation. The ordinary Django unit tests work well here. Create some tests that cycle through a few (less than 6) "typical" use cases. Get this, post that, etc. Collect timing data. This isn't real web performance, but it's an easy-to-work-with test scenario that you can use for tuning.
2. Your whole web stack. In this case, you need a regular server running Squid, Apache, Django, MySQL, whatever. You need a second computer(s) to act as a client exercising your web site through urllib2, doing a few (less than 6) "typical" use cases. Get this, post that, etc. Collect timing data. This still isn't "real" web performance, because it isn't through the internet, but it's as close as you're going to get without a really elaborate setup.
Note that the #2 (end-to-end) includes a great deal of caching for performance. If your client scripts are doing similar work, caching will be really beneficial. If your client scripts do unique things each time, caching will be less beneficial.
The hardest part is determining what the "typical" workload is. This isn't functional testing, so the workload doesn't have to include everything. Also, the more concurrent sessions your client is running, the slower it becomes. Don't struggle trying to optimize your server when your test client is the slowest part of the processing.
Edit
If "line-by-line" means "profiling", well, you've got to get a Python profiler running.
https://docs.python.org/library/profile.html
Note that there's plenty of caching in the Django ORM layer. So running a view function a half-dozen times to get a meaningful set of measurements isn't sensible. You have to run a "typical" set of operations and then find hot-spots in the profile.
Generally, your application is easy to optimize -- you shouldn't be doing much. Your view functions should be short and have no processing to speak of. Your form and model method functions, similarly, should be very short.
A:
A:
One way to get line-by-line performance data for your Django app (i.e., profiling) is to use a WSGI middleware component like repoze.profile.
Assuming you are using mod_wsgi with Apache you can insert repoze.profile into your app like this:
...
application = django.core.handlers.wsgi.WSGIHandler()
...
from repoze.profile.profiler import AccumulatingProfileMiddleware
application = AccumulatingProfileMiddleware(
application,
log_filename='/path/to/logs/profile.log',
discard_first_request=True,
flush_at_shutdown=True,
path='/_profile'
)
And now you can point your browser to /_profile to view your profile data. Of course this won't work with mod_python or the internal Django server.
| benchmarking django apps | I'm interested in testing the performance of my django apps as I go, what is the best way to get line by line performance data?
note: Googling this returns lots of people benchmarking django itself. I'm not looking for benchmarks of django, I'm trying to test the performance of the django apps that I'm writing :)
Thanks!
edit: By "line by line" I just mean timing individual functions, db calls, etc to find out where the bottlenecks are on a very granular level
| [
"There's two layers to this. We have most of #1 in place for our testing. We're about to start on #2.\n\nDjango in isolation. The ordinary Django unit tests works well here. Create some tests that cycle through a few (less than 6) \"typical\" use cases. Get this, post that, etc. Collect timing data. This isn't real web performance, but it's an easy-to-work with test scenario that you can use for tuning.\nYour whole web stack. In this case, you need a regular server running Squid, Apache, Django, MySQL, whatever. You need a second computer(s) to act a client exercise your web site through urllib2, doing a few (less than 6) \"typical\" use cases. Get this, post that, etc. Collect timing data. This still isn't \"real\" web performance, because it isn't through the internet, but it's as close as you're going to get without a really elaborate setup.\n\nNote that the #2 (end-to-end) includes a great deal of caching for performance. If your client scripts are doing similar work, caching will be really beneficial. if your client scripts do unique things each time, caching will be less beneficial.\nThe hardest part is determining what the \"typical\" workload is. This isn't functional testing, so the workload doesn't have to include everything. Also, the more concurrent sessions your client is running, the slower it becomes. Don't struggle trying to optimize your server when your test client is the slowest part of the processing.\n\nEdit\nIf \"line-by-line\" means \"profiling\", well, you've got to get a Python profiler running.\nhttps://docs.python.org/library/profile.html\nNote that there's plenty of caching in the Django ORM layer. So running a view function a half-dozen times to get a meaningful set of measurements isn't sensible. You have to run a \"typical\" set of operations and then find hot-spots in the profile. \nGenerally, your application is easy to optimize -- you shouldn't be doing much. Your view functions should be short and have no processing to speak of. Your form and model method functions, similarly, should be very short.\n",
"One way to get line by line performance data (profiling) your Django app is to use a WSGI middleware component like repoze.profile.\nAssuming you are using mod_wsgi with Apache you can insert repoze.profile into your app like this:\n...\napplication = django.core.handlers.wsgi.WSGIHandler()\n...\nfrom repoze.profile.profiler import AccumulatingProfileMiddleware\napplication = AccumulatingProfileMiddleware(\n application,\n log_filename='/path/to/logs/profile.log',\n discard_first_request=True,\n flush_at_shutdown=True,\n path='/_profile'\n)\n\nAnd now you can point your browser to /_profile to view your profile data. Of course this won't work with mod_python or the internal Django server.\n"
] | [
7,
5
] | [] | [] | [
"django",
"profiling",
"python"
] | stackoverflow_0000748130_django_profiling_python.txt |
Q:
Python: Convert those TinyURL (bit.ly, tinyurl, ow.ly) to full URLS
I am just learning python and am interested in how this can be accomplished. During the search for the answer, I came across this service: http://www.longurlplease.com
For example:
http://bit.ly/rgCbf can be converted to:
http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place
I did some inspecting with Firefox and see that the original url is not in the header.
A:
Enter urllib2, which offers the easiest way of doing this:
>>> import urllib2
>>> fp = urllib2.urlopen('http://bit.ly/rgCbf')
>>> fp.geturl()
'http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place'
For reference's sake, however, note that this is also possible with httplib:
>>> import httplib
>>> conn = httplib.HTTPConnection('bit.ly')
>>> conn.request('HEAD', '/rgCbf')
>>> response = conn.getresponse()
>>> response.getheader('location')
'http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place'
And with PycURL, although I'm not sure if this is the best way to do it using it:
>>> import pycurl
>>> conn = pycurl.Curl()
>>> conn.setopt(pycurl.URL, "http://bit.ly/rgCbf")
>>> conn.setopt(pycurl.FOLLOWLOCATION, 1)
>>> conn.setopt(pycurl.CUSTOMREQUEST, 'HEAD')
>>> conn.setopt(pycurl.NOBODY, True)
>>> conn.perform()
>>> conn.getinfo(pycurl.EFFECTIVE_URL)
'http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place'
| Python: Convert those TinyURL (bit.ly, tinyurl, ow.ly) to full URLS | I am just learning python and am interested in how this can be accomplished. During the search for the answer, I came across this service: http://www.longurlplease.com
For example:
http://bit.ly/rgCbf can be converted to:
http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place
I did some inspecting with Firefox and saw that the original url is not in the header.
| [
"Enter urllib2, which offers the easiest way of doing this:\n>>> import urllib2\n>>> fp = urllib2.urlopen('http://bit.ly/rgCbf')\n>>> fp.geturl()\n'http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place'\n\nFor reference's sake, however, note that this is also possible with httplib:\n>>> import httplib\n>>> conn = httplib.HTTPConnection('bit.ly')\n>>> conn.request('HEAD', '/rgCbf')\n>>> response = conn.getresponse()\n>>> response.getheader('location')\n'http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place'\n\nAnd with PycURL, although I'm not sure if this is the best way to do it using it:\n>>> import pycurl\n>>> conn = pycurl.Curl()\n>>> conn.setopt(pycurl.URL, \"http://bit.ly/rgCbf\")\n>>> conn.setopt(pycurl.FOLLOWLOCATION, 1)\n>>> conn.setopt(pycurl.CUSTOMREQUEST, 'HEAD')\n>>> conn.setopt(pycurl.NOBODY, True)\n>>> conn.perform()\n>>> conn.getinfo(pycurl.EFFECTIVE_URL)\n'http://webdesignledger.com/freebies/the-best-social-media-icons-all-in-one-place'\n\n"
] | [
32
] | [] | [] | [
"bit.ly",
"python",
"tinyurl"
] | stackoverflow_0000748324_bit.ly_python_tinyurl.txt |
Q:
adding comments to pot files automatically
I want to pull certain comments from my py files that give context to translations, rather than manually editing the .pot file. Basically I want to go from this Python file:
# For Translators: some useful info about the sentence below
_("Some string blah blah")
to this pot file:
# For Translators: some useful info about the sentence below
#: something.py:1
msgid "Some string blah blah"
msgstr ""
A:
After much pissing about I found the best way to do this:
#. Translators:
# Blah blah blah
_("String")
Then search for comments with a . like so:
xgettext --language=Python --keyword=_ --add-comments=. --output=test.pot *.py
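For example, a source file annotated in that style (a minimal sketch based on the question's own example):
#. Translators: some useful info about the sentence below
_("Some string blah blah")

run through the xgettext command above produces a .pot entry carrying the comment:
#. Translators: some useful info about the sentence below
#: something.py:2
msgid "Some string blah blah"
msgstr ""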
A:
I was going to suggest the compiler module, but it ignores comments:
f.py:
# For Translators: some useful info about the sentence below
_("Some string blah blah")
..and the compiler module:
>>> import compiler
>>> m = compiler.parseFile("f.py")
>>> m
Module(None, Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))
The AST module in Python 2.6 seems to do the same.
Not sure if it's possible, but if you use triple-quoted strings instead..
"""For Translators: some useful info about the sentence below"""
_("Some string blah blah")
..you can reliably parse the Python file with the compiler module:
>>> m = compiler.parseFile("f.py")
>>> m
Module('For Translators: some useful info about the sentence below', Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))
I made an attempt at writing a more complete script to extract docstrings - it's incomplete, but seems to grab most docstrings: http://pastie.org/446156 (or on github.com/dbr/so_scripts)
The other, much simpler, option would be to use regular expressions, for example:
f = """# For Translators: some useful info about the sentence below
_("Some string blah blah")
""".split("\n")
import re
for i, line in enumerate(f):
m = re.findall("\S*# (For Translators: .*)$", line)
if len(m) > 0 and i != len(f):
print "Line Number:", i+1
print "Message:", m
print "Line:", f[i + 1]
..outputs:
Line Number: 1
Message: ['For Translators: some useful info about the sentence below']
Line: _("Some string blah blah")
Not sure how the .pot file is generated, so I can't be any help at all with that part.
| adding comments to pot files automatically | I want to pull certain comments from my py files that give context to translations, rather than manually editing the .pot file. Basically I want to go from this Python file:
# For Translators: some useful info about the sentence below
_("Some string blah blah")
to this pot file:
# For Translators: some useful info about the sentence below
#: something.py:1
msgid "Some string blah blah"
msgstr ""
| [
"After much pissing about I found the best way to do this:\n#. Translators:\n# Blah blah blah\n_(\"String\")\n\nThen search for comments with a . like so:\nxgettext --language=Python --keyword=_ --add-comments=. --output=test.pot *.py\n\n",
"I was going to suggest the compiler module, but it ignores comments:\nf.py:\n# For Translators: some useful info about the sentence below\n_(\"Some string blah blah\")\n\n..and the compiler module:\n>>> import compiler\n>>> m = compiler.parseFile(\"f.py\")\n>>> m\nModule(None, Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))\n\nThe AST module in Python 2.6 seems to do the same.\nNot sure if it's possible, but if you use triple-quoted strings instead..\n\"\"\"For Translators: some useful info about the sentence below\"\"\"\n_(\"Some string blah blah\")\n\n..you can reliably parse the Python file with the compiler module:\n>>> m = compiler.parseFile(\"f.py\")\n>>> m\nModule('For Translators: some useful info about the sentence below', Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))\n\nI made an attempt at writing a mode complete script to extract docstrings - it's incomplete, but seems to grab most docstrings: http://pastie.org/446156 (or on github.com/dbr/so_scripts)\nThe other, much simpler, option would be to use regular expressions, for example:\nf = \"\"\"# For Translators: some useful info about the sentence below\n_(\"Some string blah blah\")\n\"\"\".split(\"\\n\")\n\nimport re\n\nfor i, line in enumerate(f):\n m = re.findall(\"\\S*# (For Translators: .*)$\", line)\n if len(m) > 0 and i != len(f):\n print \"Line Number:\", i+1\n print \"Message:\", m\n print \"Line:\", f[i + 1]\n\n..outputs:\nLine Number: 1\nMessage: ['For Translators: some useful info about the sentence below']\nLine: _(\"Some string blah blah\")\n\nNot sure how the .pot file is generated, so I can't be any help at-all with that part..\n"
] | [
2,
1
] | [] | [] | [
"internationalization",
"localization",
"python"
] | stackoverflow_0000744894_internationalization_localization_python.txt |
Q:
How do I get all the entities of a type with a required property in Google App Engine?
I have a model which has a required string property like the following:
class Jean(db.Model):
sex = db.StringProperty(required=True, choices=set(["male", "female"]))
When I try calling Jean.all(), Python complains that a required property is missing.
Surely there must be a way to get all of them.
If Steve is correct (his answer does make sense). How can I determine if that's actually causing the problem. How do I find out what exactly is in my datastore?
A:
Maybe you have old data in the datastore with no sex property (added before you specified the required property), so the system complains that there is an entry without a sex property.
Try adding a default value:
class Jean(db.Model):
sex = db.StringProperty(required=True, choices=set(["male", "female"]), default="male")
I hope it helps.
/edit:
Go to the local datastore viewer (default is at http://localhost:8080/_ah/admin/) and list your entities. You can try fixing the issue manually (if possible) by filling the missing property.
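If relaxing the model is acceptable, a one-off backfill might look like this (a sketch; it assumes you temporarily drop required=True from the model definition while migrating):
from google.appengine.ext import db

class Jean(db.Model):
    # relaxed definition used only while backfilling
    sex = db.StringProperty(choices=set(["male", "female"]))

def backfill(default_sex="male"):
    for jean in Jean.all():
        if jean.sex is None:
            jean.sex = default_sex
            jean.put()  # re-save with the property filled in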
| How do I get all the entities of a type with a required property in Google App Engine? | I have a model which has a required string property like the following:
class Jean(db.Model):
sex = db.StringProperty(required=True, choices=set(["male", "female"]))
When I try calling Jean.all(), Python complains that a required property is missing.
Surely there must be a way to get all of them.
If Steve is correct (his answer does make sense). How can I determine if that's actually causing the problem. How do I find out what exactly is in my datastore?
| [
"Maybe you have old data in the datastore with no sex property (added before you specified the required property), then the system complain that there is an entry without sex property.\nTry adding a default value:\nclass Jean(db.Model):\n sex = db.StringProperty(required=True, choices=set([\"male\", \"female\"]), default=\"male\")\n\nI hope it helps.\n/edit:\nGo to the local datastore viewer (default is at http://localhost:8080/_ah/admin/) and list your entities. You can try fixing the issue manually (if possible) by filling the missing property.\n"
] | [
1
] | [] | [] | [
"data_modeling",
"entity",
"google_app_engine",
"python"
] | stackoverflow_0000748952_data_modeling_entity_google_app_engine_python.txt |
Q:
Django: How to use stored model instances as form choices?
I have a model which is essentially just a string (django.db.models.CharField). There will only be several instances of this model stored. How could I use those values as choices in a form?
To illustrate, the model could be BlogTopic. I'd like to offer users the ability to choose one or several topics to subscribe to.
I started writing something like:
from mysite.blog.models import BlogTopic
choices = [(topic.id, topic.name) for topic in BlogTopic.objects.all()]
class SubscribeForm(forms.Form):
topics = forms.ChoiceField(choices=choices)
But I'm not sure when choices would be defined. I assume only when the module is first imported (i.e. when starting Django). Obviously that is not a very good approach.
This seems like it would be a common requirement, but I can't seem to find any examples. I suspect I may be missing something obvious here. Anyway, thanks in advance for your answers.
A:
topics = forms.ModelMultipleChoiceField(queryset=BlogTopic.objects.all())
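A fuller sketch of the form from the question might then read:
from django import forms
from mysite.blog.models import BlogTopic

class SubscribeForm(forms.Form):
    # the queryset is re-evaluated when the field's choices are rendered,
    # so topics added after server start still show up
    topics = forms.ModelMultipleChoiceField(queryset=BlogTopic.objects.all())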
| Django: How to use stored model instances as form choices? | I have a model which is essentially just a string (django.db.models.CharField). There will only be several instances of this model stored. How could I use those values as choices in a form?
To illustrate, the model could be BlogTopic. I'd like to offer users the ability to choose one or several topics to subscribe to.
I started writing something like:
from mysite.blog.models import BlogTopic
choices = [(topic.id, topic.name) for topic in BlogTopic.objects.all()]
class SubscribeForm(forms.Form):
topics = forms.ChoiceField(choices=choices)
But I'm not sure when choices would be defined. I assume only when the module is first imported (i.e. when starting Django). Obviously that is not a very good approach.
This seems like it would be a common requirement, but I can't seem to find any examples. I suspect I may be missing something obvious here. Anyway, thanks in advance for your answers.
| [
"topics = forms.ModelMultipleChoiceField(queryset=BlogTopic.objects.all())\n\n"
] | [
26
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0000749000_django_django_forms_python.txt |
Q:
Is Python interpreted (like Javascript or PHP)?
Is Python strictly interpreted at run time, or can it be used to develop programs that run as background applications (like a Java app or C program)?
A:
As the varied responses will tell you, the line between interpreted and compiled is no longer as clear as it was when such terms were coined. In fact, it's also something of a mistake to consider languages as being either interpreted or compiled, as different implementations of languages may do different things. These days you can find both C interpreters and Javascript compilers.
Even when looking at an implementation, things still aren't clear-cut. There are layers of interpretation. Here are a few of the gradations between interpreted and compiled:
Pure interpretation. Pretty much what it says on the tin. Read a line of source and immediately do what it says. This isn't actually done by many production languages - pretty much just things like shell scripts.
Tokenisation + interpretation. A trivial optimisation on the above. Rather than interpret each line from scratch, it's first tokenised (that is, rather than seeing a string like "print 52 + x", it's translated into a stream of tokens (eg. [PRINT_STATEMENT, INTEGER(52), PLUS_SIGN, IDENTIFIER('x')] ) to avoid repeatedly performing that stage of interpretation. Many versions of BASIC worked this way.
Bytecode compilation. This is the approach taken by languages like Java and C# (though see below). The code is transformed into instructions for a "virtual machine". These instructions are then interpreted. This is also the approach taken by Python (or at least CPython, the most common implementation). The Jython and IronPython implementations also take this approach, but compile to the bytecode for the Java and C# virtual machines respectively.
Bytecode + Just in Time compilation. As above, but rather than interpreting the bytecodes, the code that would be performed is compiled from the bytecode at the point of execution, and then run. In some cases, this can actually outperform native compilation, as it is free to perform runtime analysis on the code, and can use specific features of the current processor (while static compilation may need to compile for a lowest common denominator CPU). Later versions of Java and C# use this approach. Psyco performs this for Python.
Native machine-code compilation. The code is compiled to the machine code of the target system. You may think we've now completely eliminated interpretation, but even here there are subtleties. Some machine code instructions are not actually directly implemented in hardware, but are in fact implemented via microcode - even machine code is sometimes interpreted!
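The bytecode stage is easy to observe in CPython with the standard dis module, for example:
import dis

def add(a, b):
    return a + b

# Prints the instructions (LOAD_FAST, BINARY_ADD, ...) that the
# CPython virtual machine interprets when add() is called.
dis.dis(add)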
A:
There's multiple questions here:
No, Python is not interpreted. The standard implementation compiles to bytecode, and then executes in a virtual machine. Many modern JavaScript engines also do this.
Regardless of implementation (interpreter, VM, machine code), anything you want can run in the background. You can run shell scripts in the background, if you want.
A:
Technically, Python is compiled to bytecode and then interpreted in a virtual machine. If the Python compiler is able to write out the bytecode into a .pyc file, it will (usually) do so.
On the other hand, there's no explicit compilation step in Python as there is with Java or C. From the point of view of the developer, it looks like Python is just interpreting the .py file directly. Plus, Python offers an interactive prompt where you can type Python statements and have them executed immediately. So the workflow in Python is much more similar to that of an interpreted language than that of a compiled language. To me (and a lot of other developers, I suppose), that distinction of workflow is more important than whether there's an intermediate bytecode step or not.
A:
Python is an interpreted language but it is the bytecode which is interpreted at run time. There are also many tools out there that can assist you in making your programs run as a windows service / UNIX daemon.
A:
Yes, Python is interpreted, but you can also run Python programs as long-running applications.
A:
Yes, it's interpreted; its main implementation compiles to bytecode first and then runs it (much as if you took Java source and the JVM compiled it before running it). Still, you can run your application in the background. Actually, you can run pretty much anything in the background.
| Is Python interpreted (like Javascript or PHP)? | Is Python strictly interpreted at run time, or can it be used to develop programs that run as background applications (like a Java app or C program)?
| [
"As the varied responses will tell you, the line between interpreted and compiled is no longer as clear as it was when such terms were coined. In fact, it's also something of a mistake to consider languages as being either interpreted or compiled, as different implementations of languages may do different things. These days you can find both C interpreters and Javascript compilers.\nEven when looking at an implementation, things still aren't clear-cut. There are layers of interpretation. Here are a few of the gradations between interpreted and compiled:\n\nPure interpretation. Pretty much what it says on the tin. Read a line of source and immediately do what it says. This isn't actually done by many production languages - pretty much just things like shell scripts.\nTokenisation + interpretation. A trivial optimisation on the above. Rather than interpret each line from scratch, it's first tokenised (that is, rather than seeing a string like \"print 52 + x\", it's translated into a stream of tokens (eg. [PRINT_STATEMENT, INTEGER(52), PLUS_SIGN, IDENTIFIER('x')] ) to avoid repeatedly performing that state of interpretation. Many versions of basic worked this way.\nBytecode compilation. This is the approach taken by languages like Java and C# (though see below). The code is transformed into instructions for a \"virtual machine\". These instructions are then interpreted. This is also the approach taken by python (or at least cpython, the most common implementation.) The Jython and Ironpython implementations also take this approach, but compile to the bytecode for the Java and C# virtual machines resepectively.\nBytecode + Just in Time compilation. As above, but rather than interpreting the bytecodes, the code that would be performed is compiled from the bytecode at the point of execution, and then run. In some cases, this can actually outperform native compilation, as it is free to perform runtime analysis on the code, and can use specific features of the current processor (while static compilation may need to compile for a lowest common denominator CPU). Later versions of Java, and C# use this approach. Psyco performs this for python.\nNative machine-code compilation. The code is compiled to the machine code of the target system. You may think we've now completely eliminated interpretation, but even here there are subtleties. Some machine code instructions are not actually directly implemented in hardware, but are in fact implemented via microcode - even machine code is sometimes interpreted!\n\n",
"There's multiple questions here:\n\nNo, Python is not interpreted. The standard implementation compiles to bytecode, and then executes in a virtual machine. Many modern JavaScript engines also do this.\nRegardless of implementation (interpreter, VM, machine code), anything you want can run in the background. You can run shell scripts in the background, if you want.\n\n",
"Technically, Python is compiled to bytecode and then interpreted in a virtual machine. If the Python compiler is able to write out the bytecode into a .pyc file, it will (usually) do so.\nOn the other hand, there's no explicit compilation step in Python as there is with Java or C. From the point of view of the developer, it looks like Python is just interpreting the .py file directly. Plus, Python offers an interactive prompt where you can type Python statements and have them executed immediately. So the workflow in Python is much more similar to that of an interpreted language than that of a compiled language. To me (and a lot of other developers, I suppose), that distinction of workflow is more important than whether there's an intermediate bytecode step or not.\n",
"Python is an interpreted language but it is the bytecode which is interpreted at run time. There are also many tools out there that can assist you in making your programs run as a windows service / UNIX daemon.\n",
"Yes, Python is interpreted, but you can also run them as long-running applications.\n",
"Yes, it's interpreted, its main implementation compiles bytecode first and then runs it though (kind of if you took a java source and the JVM compiled it before running it). Still, you can run your application in background. Actually, you can run pretty much anything in background.\n"
] | [
94,
53,
25,
4,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0000745743_python.txt |
Q:
Search a list of strings for any sub-string from another list
Given these 3 lists of data and a list of keywords:
good_data1 = ['hello, world', 'hey, world']
good_data2 = ['hey, man', 'whats up']
bad_data = ['hi, earth', 'sup, planet']
keywords = ['world', 'he']
I'm trying to write a simple function to check if any of the keywords exist as a substring of any word in the data lists. It should return True for the good_data lists and False for bad_data.
I know how to do this in what seems to be an inefficient way:
def checkData(data):
for s in data:
for k in keywords:
if k in s:
return True
return False
A:
Are you looking for
any( k in s for k in keywords )
It's more compact, but might be less efficient.
A:
In your example, with so few items, it doesn't really matter. But if you have a list of several thousand items, this might help.
Since you don't care which element in the list contains the keyword, you can scan the whole list once (as one string) instead of one item at a time. For that you need a join character that you know won't occur in the keyword, in order to avoid false positives. I use the newline in this example.
def check_data(data):
s = "\n".join(data);
for k in keywords:
if k in s:
return True
return False
In my completely unscientific test, my version checked a list of 5000 items 100000 times in about 30 seconds. I stopped your version after 3 minutes -- got tired of waiting to post =)
A:
If you have many keywords, you might want to try a suffix tree [1]. Insert all the words from the three data lists, storing which list each word comes from in its terminating node. Then you can perform queries on the tree for each keyword really, really fast.
Warning: suffix trees are very complicated to implement!
[1] http://en.wikipedia.org/wiki/Suffix_tree
A:
You may be able to improve matters by building your list of keywords as a regular expression.
This may allow them to be tested in parallel, but will very much depend on what the keywords are (eg. some work may be reused testing for "hello" and "hell", rather than searching every phrase from the start for each word).
You could do this by executing:
import re
keyword_re = re.compile("|".join(map(re.escape, keywords)))
Then:
>>> bool(keyword_re.search('hello, world'))
True
>>> bool(keyword_re.search('hi, earth'))
False
(It will actually return a match object on found, and None if not found - this might be useful if you need to know which keyword matched)
However, how much (if anything) this gains you will depend on the keywords. If you only have one or two, keep your current approach. If you have a large list, it may be worth trying and profiling to see which performs better.
[Edit]
For reference, here's how the approaches do for your example:
good1 good2 good3 bad1 bad2
original : 0.206 0.233 0.229 0.390 63.879
gnud (join) : 0.257 0.347 4.600 0.281 6.706
regex : 0.766 1.018 0.397 0.764 124.351
regex (join) : 0.345 0.337 3.305 0.481 48.666
Obviously for this case, your approach performs far better than the regex one. Whether this will always be the case depends a lot on the number and complexity of keywords, and the input data that will be checked. For large numbers of keywords, and lengthy lists or rarely matching phrases, regexes may work better, but do get timing information, and perhaps try even simpler optimisations (like moving the most common words to the front of your keyword list) first. Sometimes the simplest approach really is the best.
[Edit2] Updated the table with gnud's solution, and a similar approach before applying the regexes. I also added 2 new tests:
good_data3 = good_data2 * 500 # 1000 items, the first of which matches.
bad_data2 = bad_data * 500 # 1000 items, none of which matches.
These show up the various strengths and weaknesses. Joining does worse when a match would be found immediately (there is an always-paid, up-front cost in joining the list, and an immediate match is the best possible case for the linear search method); however, for non-matching lists it performs better - much better when there are a large number of items in the list.
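A harness along these lines can reproduce such comparisons (a sketch; the function and data names are assumptions based on the earlier answers):
import timeit

t = timeit.Timer("check_data(good_data1)",
                 "from __main__ import check_data, good_data1")
# Time 100000 runs of the check and report the total seconds taken.
print t.timeit(number=100000)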
A:
I think this is pretty efficient and clear, though you could use map() to avoid the many nests. I agree with ross on the dictionary idea for larger lists.
| Search a list of strings for any sub-string from another list | Given these 3 lists of data and a list of keywords:
good_data1 = ['hello, world', 'hey, world']
good_data2 = ['hey, man', 'whats up']
bad_data = ['hi, earth', 'sup, planet']
keywords = ['world', 'he']
I'm trying to write a simple function to check if any of the keywords exist as a substring of any word in the data lists. It should return True for the good_data lists and False for bad_data.
I know how to do this in what seems to be an inefficient way:
def checkData(data):
for s in data:
for k in keywords:
if k in s:
return True
return False
| [
"Are you looking for\nany( k in s for k in keywords )\n\nIt's more compact, but might be less efficient.\n",
"In your example, with so few items, it doesn't really matter. But if you have a list of several thousand items, this might help.\nSince you don't care which element in the list contains the keyword, you can scan the whole list once (as one string) instead of one item at the time. For that you need a join character that you know won't occur in the keyword, in order to avoid false positives. I use the newline in this example.\ndef check_data(data):\n s = \"\\n\".join(data);\n for k in keywords:\n if k in s:\n return True\n\n return False\n\nIn my completely unscientific test, my version checked a list of 5000 items 100000 times in about 30 seconds. I stopped your version after 3 minutes -- got tired of waiting to post =)\n",
"If you have many keywords, you might want to try a suffix tree [1]. Insert all the words from the three data lists, storing which list each word comes from in it's terminating node. Then you can perform queries on the tree for each keyword really, really fast.\nWarning: suffix trees are very complicated to implement!\n[1] http://en.wikipedia.org/wiki/Suffix_tree\n",
"You may be able to improve matters by building your list of keywords as a regular expression. \nThis may allow them to be tested in parallel, but will very much depend on what the keywords are (eg. some work may be reused testing for \"hello\" and \"hell\", rather than searching every phrase from the start for each word.\nYou could do this by executing:\nimport re\nkeyword_re = re.compile(\"|\".join(map(re.escape, keywords)))\n\nThen:\n>>> bool(keyword_re.search('hello, world'))\nTrue\n>>> bool(keyword_re.search('hi, earth'))\nFalse\n\n(It will actually return a match object on found, and None if not found - this might be useful if you need to know which keyword matched)\nHowever, how much (if anything) this gains you will depend on the keywords. If you only have one or two, keep your current approach. If you have a large list, it may be worth tring and profiling to see which performs better.\n[Edit]\nFor reference, here's how the approaches do for your example:\n good1 good2 good3 bad1 bad2\noriginal : 0.206 0.233 0.229 0.390 63.879\ngnud (join) : 0.257 0.347 4.600 0.281 6.706\nregex : 0.766 1.018 0.397 0.764 124.351\nregex (join) : 0.345 0.337 3.305 0.481 48.666\n\nObviously for this case, your approach performs far better than the regex one. Whether this will always be the case depends a lot on the number and complexity of keywords, and the input data that will be checked. For large numbers of keywords, and lengthy lists or rarely matching phrases, regexes may work better, but do get timing information, and perhaps try even simpler optimisations (like moving the most common words to the front of your keyword list) first. Sometimes the simplest approach really is the best.\n[Edit2] Updated the table with gnud's solution, and a similar approach before applying the regexes. I also added 2 new tests:\ngood_data3 = good_data2 * 500 # 1000 items, the first of which matches.\nbad_data2 = bad_data * 500 # 1000 items, none of which matches.\n\nWhich show up the various strengths and weaknesses. Joining does do worse when a match would immediately be found (as there is an always paid, up-front cost in joining the list - this is a best possible case for the linear search method), however for non-matching lists, it performs better. Much better when there are a large number of items in the list.case).\n",
"I think this is pretty efficient and clear, though you could use map() to avoid the many nests. I agree with ross on the dictionary idea for larger lists.\n"
] | [
38,
16,
4,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0000749342_python.txt |
Q:
Python: How to estimate / calculate memory footprint of data structures?
What's a good way to estimate the memory footprint of an object?
Conversely, what's a good way to measure the footprint?
For example, say I have a dictionary whose values are lists of integer,float tuples:
d['key'] = [ (1131, 3.11e18), (9813, 2.48e19), (4991, 9.11e18) ]
I have 4G of physical memory and would like to figure out approximately how many rows (key:values) I can store in memory before I spill into swap. This is on linux/ubuntu 8.04 and OS X 10.5.6 .
Also, what's the best way to figure out the actual in-memory footprint of my program? How do I best figure out when it's exhausting physical memory and spilling?
A:
Guppy has a nice memory profiler (Heapy):
>>> from guppy import hpy
>>> hp = hpy()
>>> hp.setrelheap() # ignore all existing objects
>>> d = {}
>>> d['key'] = [ (1131, 3.11e18), (9813, 2.48e19), (4991, 9.11e18) ]
>>> hp.heap()
Partition of a set of 24 objects. Total size = 1464 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 2 8 676 46 676 46 types.FrameType
1 6 25 220 15 896 61 str
2 6 25 184 13 1080 74 tuple
...
Heapy is a little underdocumented, so you might have to dig through the web page or source code a little, but it's very powerful. There are also some articles which might be relevant.
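If you are on Python 2.6 or newer, sys.getsizeof also gives a rough per-object figure you can scale up by hand (a sketch; note that it does not follow references, so nested objects must be summed explicitly):
import sys

row = [(1131, 3.11e18), (9813, 2.48e19), (4991, 9.11e18)]
# Size of the list object itself, plus each tuple and its members.
total = sys.getsizeof(row)
for t in row:
    total += sys.getsizeof(t) + sum(sys.getsizeof(x) for x in t)
print total, "bytes for one value row (dict and key overhead not included)"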
A:
You can do this with a memory profiler, of which there are a couple I'm aware of:
PySizer - possibly obsolete, as the homepage now recommends:
Heapy.
This is possibly a duplicate of this question.
| Python: How to estimate / calculate memory footprint of data structures? | What's a good way to estimate the memory footprint of an object?
Conversely, what's a good way to measure the footprint?
For example, say I have a dictionary whose values are lists of integer,float tuples:
d['key'] = [ (1131, 3.11e18), (9813, 2.48e19), (4991, 9.11e18) ]
I have 4G of physical memory and would like to figure out approximately how many rows (key:values) I can store in memory before I spill into swap. This is on linux/ubuntu 8.04 and OS X 10.5.6 .
Also, what's the best way to figure out the actual in-memory footprint of my program? How do I best figure out when it's exhausting physical memory and spilling?
| [
"Guppy has a nice memory profiler (Heapy):\n>>> from guppy import hpy\n>>> hp = hpy()\n>>> hp.setrelheap() # ignore all existing objects\n>>> d = {}\n>>> d['key'] = [ (1131, 3.11e18), (9813, 2.48e19), (4991, 9.11e18) ]\n>>> hp.heap()\n Partition of a set of 24 objects. Total size = 1464 bytes.\n Index Count % Size % Cumulative % Kind (class / dict of class)\n 0 2 8 676 46 676 46 types.FrameType\n 1 6 25 220 15 896 61 str\n 2 6 25 184 13 1080 74 tuple\n ...\n\nHeapy is a little underdocumented, so you might have to dig through the web page or source code a little, but it's very powerful. There are also some articles which might be relevant.\n",
"You can do this with a memory profiler, of which there are a couple I'm aware of:\n\nPySizer - poissibly obsolete, as the homepage now recommends:\nHeapy.\n\nThis is possibly a duplicate of this question.\n"
] | [
10,
5
] | [] | [] | [
"memory_management",
"memory_size",
"python"
] | stackoverflow_0000749625_memory_management_memory_size_python.txt |
Q:
Can pysvn 1.6.3 be made to work with Subversion 1.6 under linux?
I see no reference on their website for this. I get pysvn to configure and build, but then it fails all the tests. Has anyone had any luck getting this to work under linux?
A:
No, it cannot.
Your best bet is to use Subversion 1.5.5. See the site for more details.
| Can pysvn 1.6.3 be made to work with Subversion 1.6 under linux? | I see no reference on their website for this. I get pysvn to configure and build, but then it fails all the tests. Has anyone had any luck getting this to work under linux?
| [
"No, it cannot. \nYour best bet is to use Subversion 1.5.5. See the site for more details. \n"
] | [
1
] | [] | [] | [
"linux",
"pysvn",
"python",
"svn"
] | stackoverflow_0000683278_linux_pysvn_python_svn.txt |
Q:
How to visualize IP addresses as they change in python?
I've written a little script that collects my external IP address every time I open a new terminal window and appends it, as well as the current time, to a text file. I'm looking for ideas on a way to visualize when/how often my IP address changes. I bounce between home and campus and could separate them using the script, but it would be nice to visualize them separately.
I frequently use matplotlib. Any ideas?
A:
Plot your IP as a point on the xkcd internet map (or some zoomed in subset of the map, to better show different but closely neighboring IPs).
Plot each point "stacked" proportional to how often you've had that IP, and color the IPs to make more recent points brighter, less recent points proportionally darker.
A:
"When" is one dimensional temporal data, which is well shown by a timeline. At larger timescales, you'd probably lose the details, but most any plot of "when" would have this defect.
For "How often", a standard 2d (bar) plot of time vs frequency, divided into buckets for each day/week/month, would be a standard way to go. A moving average might also be informational.
You could combine the timeline & bar plot, with the timeline visible when you're zoomed in & the frequency display when zoomed out.
How about a bar plot with time on the horizontal axis where the width of each bar is the length of time your computer held a particular IP address and the height of each bar is inversely proportional to the width? That would also combine the when and how-often views in a single plot.
You could also interpret the data as a pulse density modulated signal, like what you get on a SuperAudio CD. You could graph this or even listen to the data. As there's no obvious time length for an IP change event, the length of a pulse would be a tunable parameter. Along similar lines, you could view the data as a square wave (triangular wave, sawtooth &c), where each IP change event is a level transition. Sounds like a fun Pure Data project.
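A step plot in matplotlib can capture both aspects at once (a sketch; times and ips are hypothetical lists parsed from your log file, ordered by time):
import matplotlib.pyplot as plt
from matplotlib.dates import date2num

addrs = sorted(set(ips))
# One horizontal band per distinct IP; each level change marks an address change.
plt.step(date2num(times), [addrs.index(ip) for ip in ips], where='post')
plt.yticks(range(len(addrs)), addrs)
plt.gcf().autofmt_xdate()
plt.show()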
A:
There's a section in the matplotlib user guide about drawing bars on a chart to represent ranges. I've never done that myself but it seems appropriate for what you're looking for.
A:
Since you mentioned the terminal, I'll assume you are on a UNIX-variant system. Using the -f switch along with the command line utility tail can allow you to constantly monitor the end of a file. You could also use something like IBM's inotify, which can monitor file changes, or dnotify (placing the file in its own directory), which usually comes standard on most distributions (you can then call tail -n 1 to get the last line). Once the line changes, you can grab the current system time since epoch using Python's time.time() and subtract it from the time of the last change, then plot this difference using matplotlib. I assume you could categorize the times
into ranges to make the graphing easier on yourself. One bar for change intervals of less than 1 hour, another for changes between 1 - 5 hours, and so on.
There is a Python implementation of tail -f located here if you don't want to use it directly. Upon detecting a change in the file, you could perform the above.
| How to visualize IP addresses as they change in python? | I've written a little script that collects my external IP address every time I open a new terminal window and appends it, as well as the current time, to a text file. I'm looking for ideas on a way to visualize when/how often my IP address changes. I bounce between home and campus and could separate them using the script, but it would be nice to visualize them separately.
I frequently use matplotlib. Any ideas?
| [
"Plot your IP as a point on the xkcd internet map (or some zoomed in subset of the map, to better show different but closely neighboring IPs). \nPlot each point \"stacked\" proportional to how often you've had that IP, and color the IPs to make more recent points brighter, less recent points proportionally darker. \n",
"\"When\" is one dimensional temporal data, which is well shown by a timeline. At larger timescales, you'd probably lose the details, but most any plot of \"when\" would have this defect.\nFor \"How often\", a standard 2d (bar) plot of time vs frequency, divided into buckets for each day/week/month, would be a standard way to go. A moving average might also be informational.\nYou could combine the timeline & bar plot, with the timeline visible when you're zoomed in & the frequency display when zoomed out.\nHow about a bar plot with time on the horizontal axis where the width of each bar is the length of time your computer held a particular IP address and the height of each bar is inversely proportional to the width? That would also give a plot of when vs how often plot.\nYou could also interpret the data as a pulse density modulated signal, like what you get on a SuperAudio CD. You could graph this or even listen to the data. As there's no obvious time length for an IP change event, the length of a pulse would be a tunable parameter. Along similar lines, you could view the data as a square wave (triangular wave, sawtooth &c), where each IP change event is a level transition. Sounds like a fun Pure Data project.\n",
"There's a section in the matplotlib user guide about drawing bars on a chart to represent ranges. I've never done that myself but it seems appropriate for what you're looking for.\n",
"Assuming you specified terminal, i'll assume you are on a UNIX variant system. Using the -f switch along with the command line utility tail can allow you to constantly monitor the end of a file. You could also use something like IBM's inotify, which can monitor file changes or dnotify (and place the file in it's own directory) which usually comes standard on most distributions (you can then call tail -n 1 to get the last line). Once the line changes, you can grab the current system time since epoch using Python's time.time() and subtract it from the time of the last change, then plot this difference using matplotlib. I assume you could categorize the times\ninto ranges to make the graphing easier on yourself. 1 Bar for less than 1 hour change intervals, another for changes between 1 - 5 hours, and so on.\nThere is a Python implementation of tail -f located here if you don't want to use it directly. Upon a detection of a change in the file, you could perform the above.\n"
] | [
4,
1,
0,
0
] | [] | [] | [
"ip_address",
"matplotlib",
"python",
"visualization"
] | stackoverflow_0000749937_ip_address_matplotlib_python_visualization.txt |
Q:
find missing numeric from ALPHANUMERIC - Python
How would I write a function in Python to determine if a list of filenames matches a given pattern and which files are missing from that pattern? For example:
Input ->
KUMAR.3.txt
KUMAR.4.txt
KUMAR.6.txt
KUMAR.7.txt
KUMAR.9.txt
KUMAR.10.txt
KUMAR.11.txt
KUMAR.13.txt
KUMAR.15.txt
KUMAR.16.txt
Desired Output-->
KUMAR.5.txt
KUMAR.8.txt
KUMAR.12.txt
KUMAR.14.txt
Input -->
KUMAR3.txt
KUMAR4.txt
KUMAR6.txt
KUMAR7.txt
KUMAR9.txt
KUMAR10.txt
KUMAR11.txt
KUMAR13.txt
KUMAR15.txt
KUMAR16.txt
Desired Output -->
KUMAR5.txt
KUMAR8.txt
KUMAR12.txt
KUMAR14.txt
A:
You can approach this as:
Convert the filenames to appropriate integers.
Find the missing numbers.
Combine the missing numbers with the filename template as output.
For (1), if the file structure is predictable, then this is easy.
def to_num(s, start=6):
return int(s[start:s.index('.txt')])
Given:
lst = ['KUMAR.3.txt', 'KUMAR.4.txt', 'KUMAR.6.txt', 'KUMAR.7.txt',
'KUMAR.9.txt', 'KUMAR.10.txt', 'KUMAR.11.txt', 'KUMAR.13.txt',
'KUMAR.15.txt', 'KUMAR.16.txt']
you can get a list of known numbers by: map(to_num, lst). Of course, to look for gaps, you only really need the minimum and maximum. Combine that with the range function and you get all the numbers that you should see, and then remove the numbers you've got. Sets are helpful here.
def find_gaps(int_list):
return sorted(set(range(min(int_list), max(int_list))) - set(int_list))
Putting it all together:
missing = find_gaps(map(to_num, lst))
for i in missing:
print 'KUMAR.%d.txt' % i
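Run against the example list, this prints exactly the missing names from the question:
KUMAR.5.txt
KUMAR.8.txt
KUMAR.12.txt
KUMAR.14.txt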
A:
Assuming the patterns are relatively static, this is easy enough with a regex:
import re
inlist = "KUMAR.3.txt KUMAR.4.txt KUMAR.6.txt KUMAR.7.txt KUMAR.9.txt KUMAR.10.txt KUMAR.11.txt KUMAR.13.txt KUMAR.15.txt KUMAR.16.txt".split()
def get_count(s):
return int(re.match('.*\.(\d+)\..*', s).groups()[0])
mincount = get_count(inlist[0])
maxcount = get_count(inlist[-1])
values = set(map(get_count, inlist))
for ii in range (mincount, maxcount):
if ii not in values:
print 'KUMAR.%d.txt' % ii
| find missing numeric from ALPHANUMERIC - Python | How would I write a function in Python to determine if a list of filenames matches a given pattern and which files are missing from that pattern? For example:
Input ->
KUMAR.3.txt
KUMAR.4.txt
KUMAR.6.txt
KUMAR.7.txt
KUMAR.9.txt
KUMAR.10.txt
KUMAR.11.txt
KUMAR.13.txt
KUMAR.15.txt
KUMAR.16.txt
Desired Output-->
KUMAR.5.txt
KUMAR.8.txt
KUMAR.12.txt
KUMAR.14.txt
Input -->
KUMAR3.txt
KUMAR4.txt
KUMAR6.txt
KUMAR7.txt
KUMAR9.txt
KUMAR10.txt
KUMAR11.txt
KUMAR13.txt
KUMAR15.txt
KUMAR16.txt
Desired Output -->
KUMAR5.txt
KUMAR8.txt
KUMAR12.txt
KUMAR14.txt
| [
"You can approach this as:\n\nConvert the filenames to appropriate integers.\nFind the missing numbers.\nCombine the missing numbers with the filename template as output.\n\nFor (1), if the file structure is predictable, then this is easy.\ndef to_num(s, start=6):\n return int(s[start:s.index('.txt')])\n\nGiven:\nlst = ['KUMAR.3.txt', 'KUMAR.4.txt', 'KUMAR.6.txt', 'KUMAR.7.txt',\n 'KUMAR.9.txt', 'KUMAR.10.txt', 'KUMAR.11.txt', 'KUMAR.13.txt',\n 'KUMAR.15.txt', 'KUMAR.16.txt']\n\nyou can get a list of known numbers by: map(to_num, lst). Of course, to look for gaps, you only really need the minimum and maximum. Combine that with the range function and you get all the numbers that you should see, and then remove the numbers you've got. Sets are helpful here.\ndef find_gaps(int_list):\n return sorted(set(range(min(int_list), max(int_list))) - set(int_list))\n\nPutting it all together:\nmissing = find_gaps(map(to_num, lst))\nfor i in missing:\n print 'KUMAR.%d.txt' % i\n\n",
"Assuming the patterns are relatively static, this is easy enough with a regex:\nimport re\n\ninlist = \"KUMAR.3.txt KUMAR.4.txt KUMAR.6.txt KUMAR.7.txt KUMAR.9.txt KUMAR.10.txt KUMAR.11.txt KUMAR.13.txt KUMAR.15.txt KUMAR.16.txt\".split()\n\ndef get_count(s):\n return int(re.match('.*\\.(\\d+)\\..*', s).groups()[0])\n\nmincount = get_count(inlist[0])\nmaxcount = get_count(inlist[-1])\nvalues = set(map(get_count, inlist))\nfor ii in range (mincount, maxcount):\n if ii not in values:\n print 'KUMAR.%d.txt' % ii\n\n"
] | [
2,
1
] | [] | [] | [
"alphanumeric",
"filenames",
"list",
"python"
] | stackoverflow_0000750093_alphanumeric_filenames_list_python.txt |
Q:
Retrieving/Printing execution context
EDIT: This question has been solved with help from apphacker and ConcernedOfTunbridgeWells. I have updated the code to reflect the solution I will be using.
I am currently writing a swarm intelligence simulator and looking to give the user an easy way to debug their algorithms. Among other outputs, I feel it would be beneficial to give the user a printout of the execution context at the beginning of each step in the algorithm.
The following code achieves what I was needing.
import inspect
def print_current_execution_context():
frame=inspect.currentframe().f_back #get caller frame
print frame.f_locals #print locals of caller
class TheClass(object):
def __init__(self,val):
self.val=val
def thefunction(self,a,b):
c=a+b
print_current_execution_context()
C=TheClass(2)
C.thefunction(1,2)
This gives the expected output of:
{'a': 1, 'c': 3, 'b': 2, 'self': <__main__.TheClass object at 0xb7d2214c>}
Thank you to apphacker and ConcernedOfTunbridgeWells who pointed me towards this answer
A:
try:
class TheClass(object):
def __init__(self,val):
self.val=val
def thefunction(self,a,b):
c=a+b
print locals()
C=TheClass(2)
C.thefunction(1,2)
A:
You can use locals() (or a frame's f_locals) to get the local execution context. See this stackoverflow posting for some discussion that may also be pertinent.
| Retrieving/Printing execution context | EDIT: This question has been solved with help from apphacker and ConcernedOfTunbridgeWells. I have updated the code to reflect the solution I will be using.
I am currently writing a swarm intelligence simulator and looking to give the user an easy way to debug their algorithms. Among other outputs, I feel it would be beneficial to give the user a printout of the execution context at the beginning of each step in the algorithm.
The following code achieves what I was needing.
import inspect
def print_current_execution_context():
frame=inspect.currentframe().f_back #get caller frame
print frame.f_locals #print locals of caller
class TheClass(object):
def __init__(self,val):
self.val=val
def thefunction(self,a,b):
c=a+b
print_current_execution_context()
C=TheClass(2)
C.thefunction(1,2)
This gives the expected output of:
{'a': 1, 'c': 3, 'b': 2, 'self': <__main__.TheClass object at 0xb7d2214c>}
Thank you to apphacker and ConcernedOfTunbridgeWells who pointed me towards this answer
| [
"try:\nclass TheClass(object):\n def __init__(self,val):\n self.val=val\n def thefunction(self,a,b):\n c=a+b\n print locals()\n\n\nC=TheClass(2)\nC.thefunction(1,2)\n\n",
"You can use __locals__ to get the local execution context. See this stackoverflow posting for some discussion that may also be pertinent.\n"
] | [
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000750702_python.txt |
Q:
Binary array in python
How do I create a big array in Python, and how do I do it efficiently?
in C/C++:
byte *data = (byte*)memalloc(10000);
or
byte *data = new byte[10000];
in python...?
A:
Have a look at the array module:
import array
array.array('B', [0] * 10000)
Instead of passing a list to initialize it, you can pass a generator, which is more memory efficient.
A:
You can pre-allocate a list with:
l = [0] * 10000
which will be slightly faster than .appending to it (as it avoids intermediate reallocations). However, this will generally allocate space for a list of pointers to integer objects, which will be larger than an array of bytes in C.
If you need memory efficiency, you could use an array object. ie:
import array, itertools
a = array.array('b', itertools.repeat(0, 10000))
Note that these may be slightly slower to use in practice, as there is an unboxing process when accessing elements (they must first be converted to a python int object).
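On Python 2.6 and later there is also the built-in bytearray, which is the closest match to the C snippet in the question (a short sketch):
data = bytearray(10000)  # 10000 zero bytes, mutable in place
data[0] = 255
print len(data), data[0]  # -> 10000 255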
A:
Typically with python, you'd just create a list
mylist = []
and use it as an array. Alternatively, I think you might be looking for the array module. See http://docs.python.org/library/array.html.
A:
You can efficiently create big array with array module, but using it won't be as fast as C. If you intend to do some math, you'd be better off with numpy.array
Check this question for comparison.
| Binary array in python | How do I create a big array in Python, and how do I do it efficiently?
in C/C++:
byte *data = (byte*)memalloc(10000);
or
byte *data = new byte[10000];
in python...?
| [
"Have a look at the array module:\nimport array\narray.array('B', [0] * 10000)\n\nInstead of passing a list to initialize it, you can pass a generator, which is more memory efficient.\n",
"You can pre-allocate a list with:\nl = [0] * 10000\n\nwhich will be slightly faster than .appending to it (as it avoids intermediate reallocations). However, this will generally allocate space for a list of pointers to integer objects, which will be larger than an array of bytes in C.\nIf you need memory efficiency, you could use an array object. ie:\nimport array, itertools\na = array.array('b', itertools.repeat(0, 10000))\n\nNote that these may be slightly slower to use in practice, as there is an unboxing process when accessing elements (they must first be converted to a python int object).\n",
"Typically with python, you'd just create a list\nmylist = []\n\nand use it as an array. Alternatively, I think you might be looking for the array module. See http://docs.python.org/library/array.html.\n",
"You can efficiently create big array with array module, but using it won't be as fast as C. If you intend to do some math, you'd be better off with numpy.array\nCheck this question for comparison.\n"
] | [
8,
6,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000751055_python.txt |
Q:
Parse shell file output with Python
I have a file with data. The file is output generated by a shell script:
|a |869 |
|b |835 |
|c |0 |
|d |0 |
|e |34 |
|f |3337
How can I get a = 869 from this?
A:
You could do this:
output = {}
for line in open("myfile"):
parts = line.split('|')
output[parts[1].strip()] = parts[2].strip()
print output['a'] // prints 869
print output['f'] // prints 3337
Or, using the csv module, as suggested by Eugene Morozov:
import csv
output = {}
reader = csv.reader(open("C:/output.txt"), delimiter='|')
for line in reader:
output[line[1].strip()] = line[2].strip()
print output['a'] // prints 869
print output['f'] // prints 3337
A:
lines = file("test.txt").readlines()
d = dict([[i.strip() for i in l.split("|")[1:3]] for l in lines if l.strip()])
For example:
>>> lines = file("test.txt").readlines()
>>> d = dict([[i.strip() for i in l.split("|")[1:3]] for l in lines if l.strip()])
>>> d['a']
'869'
A:
You can also use the csv module with delimiter |.
A:
Maybe not the most Pythonic way, but this should work if your shell output is stored in f.txt and you are looking for every line to be processed:
h = open("f.txt", "rt")
inp = h.readline()
while inp:
flds = inp.split('|')
str = flds[1].strip() + " = " + flds[2].strip()
print str
inp = h.readline()
| Parse shell file output with Python | I have a file with data. The file is output generated by a shell script:
|a |869 |
|b |835 |
|c |0 |
|d |0 |
|e |34 |
|f |3337
How can I get a = 869 from this?
| [
"You could do this:\noutput = {}\nfor line in open(\"myfile\"):\n parts = line.split('|')\n output[parts[1].strip()] = parts[2].strip()\n\nprint output['a'] // prints 869\nprint output['f'] // prints 3337\n\nOr, using the csv module, as suggested by Eugene Morozov:\nimport csv\noutput = {}\nreader = csv.reader(open(\"C:/output.txt\"), delimiter='|')\nfor line in reader:\n output[line[1].strip()] = line[2].strip()\n\nprint output['a'] // prints 869\nprint output['f'] // prints 3337\n\n",
"lines = file(\"test.txt\").readlines()\nd = dict([[i.strip() for i in l.split(\"|\")[1:3]] for l in lines if l.strip()])\n\nFor example:\n>>> lines = file(\"test.txt\").readlines()\n>>> d = dict([[i.strip() for i in l.split(\"|\")[1:3]] for l in lines if l.strip()])\n>>> d['a']\n'869'\n\n",
"You can also use csv module with delimiter |.\n",
"Maybe not the most Pythonic way, but this should work if your shell output is stored in f.txt and you are looking for every line to be processed:\nh = open(\"f.txt\", \"rt\")\ninp = h.readline()\nwhile inp:\n flds = inp.split('|')\n str = flds[1].strip() + \" = \" + flds[2].strip()\n print str\n inp = h.readline()\n\n"
] | [
9,
4,
3,
0
] | [] | [] | [
"python"
] | stackoverflow_0000751557_python.txt |
Q:
How to embed a tag within a url templatetag in a django template?
How do I embed a tag within a url templatetag in a django template?
Django 1.0 , Python 2.5.2
In views.py
def home_page_view(request):
NUP={"HOMEPAGE": "named-url-pattern-string-for-my-home-page-view"}
variables = RequestContext(request, {'NUP':NUP})
return render_to_response('home_page.html', variables)
In home_page.html, the following
NUP.HOMEPAGE = {{ NUP.HOMEPAGE }}
is displayed as
NUP.HOMEPAGE = named-url-pattern-string-for-my-home-page-view
and the following url named pattern works ( as expected ),
url template tag for NUP.HOMEPAGE = {% url named-url-pattern-string-for-my-home-page-view %}
and is displayed as
url template tag for NUP.HOMEPAGE = /myhomepage/
but when {{ NUP.HOMEPAGE }} is embedded within a {% url ... %} as follows
url template tag for NUP.HOMEPAGE = {% url {{ NUP.HOMEPAGE }} %}
this results in a template syntax error
TemplateSyntaxError at /myhomepage/
Could not parse the remainder: '}}' from '}}'
Request Method: GET
Request URL: http://localhost:8000/myhomepage/
Exception Type: TemplateSyntaxError
Exception Value:
Could not parse the remainder: '}}' from '}}'
Exception Location: C:\Python25\Lib\site-packages\django\template\__init__.py in __init__, line 529
Python Executable: C:\Python25\python.exe
Python Version: 2.5.2
I was expecting {% url {{ NUP.HOMEPAGE }} %} to resolve to {% url named-url-pattern-string-for-my-home-page-view %} at runtime and be displayed as /myhomepage/.
Are embedded tags not supported in django?
is it possible to write a custom url template tag with embedded tags support to make this work?
{% url {{ NUP.HOMEPAGE }} %}
A:
Maybe you could try passing the final URL to the template, instead?
Something like this:
from django.core.urlresolvers import reverse
def home_page_view(request):
NUP={"HOMEPAGE": reverse('named-url-pattern-string-for-my-home-page-view')}
variables = RequestContext(request, {'NUP':NUP})
return render_to_response('home_page.html', variables)
Then in the template, NUP.HOMEPAGE should be the URL itself.
A:
That seems way too dynamic. You're supposed to do
{% url named-url-pattern-string-for-my-home-page-view %}
And leave it at that. Dynamically filling in the name of the URL tag is -- frankly -- a little odd.
If you want to use any of a large number of different URL tags, you'd have to do something like this
{% if tagoption1 %}<a href="{% url named-url-1 %}">Text</a>{% endif %}
Which seems long-winded because, again, the dynamic thing you're trying to achieve seems a little odd.
If you have something like a "families" or "clusters" of pages, perhaps separate template directories would be a way to manage this better. Each of the clusters of pages can inherit from a base templates and override small things like this navigation feature to keep all of the pages in the cluster looking similar but having one navigation difference for a "local home".
A:
Posted a bug to Django. They should be able to fix this on their side.
http://code.djangoproject.com/ticket/10823
| How to embed a tag within a url templatetag in a django template? | How do I embed a tag within a url templatetag in a django template?
Django 1.0 , Python 2.5.2
In views.py
def home_page_view(request):
NUP={"HOMEPAGE": "named-url-pattern-string-for-my-home-page-view"}
variables = RequestContext(request, {'NUP':NUP})
return render_to_response('home_page.html', variables)
In home_page.html, the following
NUP.HOMEPAGE = {{ NUP.HOMEPAGE }}
is displayed as
NUP.HOMEPAGE = named-url-pattern-string-for-my-home-page-view
and the following url named pattern works ( as expected ),
url template tag for NUP.HOMEPAGE = {% url named-url-pattern-string-for-my-home-page-view %}
and is displayed as
url template tag for NUP.HOMEPAGE = /myhomepage/
but when {{ NUP.HOMEPAGE }} is embedded within a {% url ... %} as follows
url template tag for NUP.HOMEPAGE = {% url {{ NUP.HOMEPAGE }} %}
this results in a template syntax error
TemplateSyntaxError at /myhomepage/
Could not parse the remainder: '}}' from '}}'
Request Method: GET
Request URL: http://localhost:8000/myhomepage/
Exception Type: TemplateSyntaxError
Exception Value:
Could not parse the remainder: '}}' from '}}'
Exception Location: C:\Python25\Lib\site-packages\django\template\__init__.py in __init__, line 529
Python Executable: C:\Python25\python.exe
Python Version: 2.5.2
I was expecting {% url {{ NUP.HOMEPAGE }} %} to resolve to {% url named-url-pattern-string-for-my-home-page-view %} at runtime and be displayed as /myhomepage/.
Are embedded tags not supported in django?
is it possible to write a custom url template tag with embedded tags support to make this work?
{% url {{ NUP.HOMEPAGE }} %}
| [
"Maybe you could try passing the final URL to the template, instead?\nSomething like this:\nfrom django.core.urlresolvers import reverse\n\ndef home_page_view(request):\n NUP={\"HOMEPAGE\": reverse('named-url-pattern-string-for-my-home-page-view')} \n variables = RequestContext(request, {'NUP':NUP})\n return render_to_response('home_page.html', variables)\n\nThen in the template, the NUP.HOMEPAGE should the the url itself.\n",
"That's seems way too dynamic. You're supposed to do\n{% url named-url-pattern-string-for-my-home-page-view %}\n\nAnd leave it at that. Dynamically filling in the name of the URL tag is -- frankly -- a little odd. \nIf you want to use any of a large number of different URL tags, you'd have to do something like this\n{% if tagoption1 %}<a href=\"{% url named-url-1 %}\">Text</a>{% endif %}\n\nWhich seems long-winded because, again, the dynamic thing you're trying to achieve seems a little odd.\nIf you have something like a \"families\" or \"clusters\" of pages, perhaps separate template directories would be a way to manage this better. Each of the clusters of pages can inherit from a base templates and override small things like this navigation feature to keep all of the pages in the cluster looking similar but having one navigation difference for a \"local home\".\n",
"Posted a bug to Django. They should be able to fix this on their side.\nhttp://code.djangoproject.com/ticket/10823\n"
] | [
2,
0,
0
] | [] | [] | [
"django",
"python",
"templates",
"templatetag",
"url"
] | stackoverflow_0000254895_django_python_templates_templatetag_url.txt |
Q:
Can I compile numpy & scipy as eggs for free on Windows32?
I've been asked to provide Numpy & Scipy as python egg files. Unfortunately Numpy and Scipy do not make official releases of their product in .egg form for a Win32 platform - that means if I want eggs then I have to compile them myself.
At the moment my employer provides Visual Studio.Net 2003, which will compile no version of Numpy later than 1.1.1 - every version released subsequently cannot be compiled with VS2003.
What I'd really like is some other compiler I can use, perhaps for free, but at a push as a free time-limited trial... I can use that to compile the eggs. Is anybody aware of another compiler that I can get and use without paying anything and will definitely compile Numpy on Windows?
Please only suggest something if you know for a fact that that it will compile Numpy - no speculation!
Thanks
Notes: I work for an organization which is very sensitive about legal matters, so everything I do has to be totally legit. I've got to do everything according to licensed terms, and will be audited.
My environment:
Windows 32
Standard C Python 2.4.4
A:
Try compiling the whole Python stack with MinGW32. This is a GCC-Win32 development environment that can be used to build Python and a wide variety of software. You will probably have to compile the whole Python distribution with it. Here is a guide to compiling Python with MinGW. Note that you will probably have to provide a python distribution that is compiled with MinGW32 as well.
If recompiling the Python distro is not a goer I believe that Python 2.4 is compiled using VS2003. You are probably stuck with back-porting Scipy and Numpy to VS2003 or paying a consultant to do it. I would dig out the relevant mailing lists or contact the maintainers and get some view of the effort that would be required to do it.
Another alternative would be to upgrade the version of Python to a more recent one but you will probably have to regression test your application and upgrade the version of Visual Studio to 2005 or 2008.
A:
You could try GCC for Windows. GCC is the compiler most often used for compiling Numpy/Scipy (or anything else, really) on Linux, so it seems reasonable that it should work on Windows too. (Never tried it myself, though)
And of course it's distributed under the GPL, so there shouldn't be any legal barriers.
A:
If you just need the compiler, it is part of the .NET framework.
For instance, you can find the 3.5 framework (Which is used be visual studio 2008) in:
"C:\Windows\Microsoft.NET\Framework\v3.5"
| Can I compile numpy & scipy as eggs for free on Windows32? | I've been asked to provide Numpy & Scipy as python egg files. Unfortunately Numpy and Scipy do not make official releases of their product in .egg form for a Win32 platform - that means if I want eggs then I have to compile them myself.
At the moment my employer provides Visual Studio.Net 2003, which will compile no version of Numpy later than 1.1.1 - every version released subsequently cannot be compiled with VS2003.
What I'd really like is some other compiler I can use, perhaps for free, but at a push as a free time-limited trial... I can use that to compile the eggs. Is anybody aware of another compiler that I can get and use without paying anything and will definitely compile Numpy on Windows?
Please only suggest something if you know for a fact that that it will compile Numpy - no speculation!
Thanks
Notes: I work for an organization which is very sensitive about legal matters, so everything I do has to be totally legit. I've got to do everything according to licensed terms, and will be audited.
My environment:
Windows 32
Standard C Python 2.4.4
| [
"Try compiling the whole Python stack with MinGW32. This is a GCC-Win32 development environment that can be used to build Python and a wide variety of software. You will probably have to compile the whole Python distribution with it. Here is a guide to compiling Python with MinGW. Note that you will probably have to provide a python distribution that is compiled with MinGW32 as well.\nIf recompiling the Python distro is not a goer I believe that Python 2.4 is compiled using VS2003. You are probably stuck with back-porting Scipy and Numpy to VS2003 or paying a consultant to do it. I would dig out the relevant mailing lists or contact the maintainers and get some view of the effort that would be required to do it. \nAnother alternative would be to upgrade the version of Python to a more recent one but you will probably have to regression test your application and upgrade the version of Visual Studio to 2005 or 2008.\n",
"You could try GCC for Windows. GCC is the compiler most often used for compiling Numpy/Scipy (or anything else, really) on Linux, so it seems reasonable that it should work on Windows too. (Never tried it myself, though)\nAnd of course it's distributed under the GPL, so there shouldn't be any legal barriers.\n",
"If you just need the compiler, it is part of the .NET framework.\nFor instance, you can find the 3.5 framework (Which is used be visual studio 2008) in:\n\"C:\\Windows\\Microsoft.NET\\Framework\\v3.5\"\n\n"
] | [
2,
1,
0
] | [] | [] | [
"numpy",
"python",
"scipy",
"windows"
] | stackoverflow_0000752482_numpy_python_scipy_windows.txt |
Q:
How can I use Perl libraries from Python?
I have written a bunch of Perl libraries (actually Perl classes) and I want to use some of them in my Python application. Is there a natural way to do this without using SWIG or writing a Perl API for Python? I am asking for something similar to PHP's Perl interface. If no such project exists for Perl in Python, what is the easiest way to use Perl classes in Python?
A:
Personally, I would expose the Perl libs as services via XML/RPC or some other such mechanism. That way you can call them from your Python application in a very natural manner.
A:
I haven't tried it, but Inline::Python lets you call Python from Perl.
You should be able to use a thin bit of perl to load your python app and then use the perl python package that comes with I::P to access your Perl objects.
A:
"What is the easiest way to use Perl classes in python?"
Easiest. Rewrite the Perl into Python and be done with it. Seriously. Just pick one language—that's easiest. Leaving Perl behind is no great loss. Rewriting classes into Python may give you an opportunity to improve them in small ways.
Not so easy. Run the Perl application using Python's subprocess module. That uses the Perl classes in the Perl application without problems. You can easily create pipelines so the Perl gets input from Python and produces output to Python
someApp.py | something.pl | finalStep.py
This has the advantage of breaking your application into three concurrent processes, using up lots of processor resources and running (sometimes) in 1/3 the time.
Everything else is much less easy.
A:
You've just missed a chance for having Python running on the Parrot VM together with Perl. On April 1st, 2009 PEP 401 was published, and one of the Official Acts of the FLUFL read:
Recognized that C is a 20th century language with almost universal rejection by programmers under the age of 30, the CPython implementation will terminate with the release of Python 2.6.2 and 3.0.2. Thereafter, the reference implementation of Python will target the Parrot virtual machine. Alternative implementations of Python (e.g. Jython, IronPython, and PyPy ) are officially discouraged but tolerated.
A:
Check out PyPerl.
WARNING: PyPerl is currently unmaintained, so don't use it if you require stability.
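A minimal sketch of the subprocess route from the answer above — the Perl script name is hypothetical, and it is assumed to read stdin and write stdout:
    import subprocess

    # drive a Perl script that wraps the Perl classes
    p = subprocess.Popen(['perl', 'something.pl'],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    out, _ = p.communicate('input for the Perl side\n')
    print out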
| How can I use Perl libraries from Python? | I have written a bunch of Perl libraries (actually Perl classes) and I want to use some of them in my Python application. Is there a natural way to do this without using SWIG or writing a Perl API for Python? I am asking for something similar to PHP's Perl interface. If no such project exists for Perl in Python, what is the easiest way to use Perl classes in Python?
| [
"Personally, I would expose the Perl libs as services via XML/RPC or some other such mechanism. That way you can call them from your Python application in a very natural manner.\n",
"I haven't tried it, but Inline::Python lets you call Python from Perl. \nYou should be able to use a thin bit of perl to load your python app and then use the perl python package that comes with I::P to access your Perl objects.\n",
"\"What is the easiest way to use Perl classes in python?\"\nEasiest. Rewrite the Perl into Python and be done with it. Seriously. Just pick one language—that's easiest. Leaving Perl behind is no great loss. Rewriting classes into Python may give you an opportunity to improve them in small ways. \nNot so easy. Run the Perl application using Python's subprocess module. That uses the Perl classes in the Perl application without problems. You can easily create pipelines so the Perl gets input from Python and produces output to Python\nsomeApp.py | something.pl | finalStep.py\n\nThis has the advantage of breaking your application into three concurrent processes, using up lots of processor resources and running (sometimes) in 1/3 the time.\nEverything else is much less easy.\n",
"You've just missed a chance for having Python running on the Parrot VM together with Perl. On April 1st, 2009 PEP 401 was published, and one of the Official Acts of the FLUFL read:\n\n\nRecognized that C is a 20th century language with almost universal rejection by programmers under the age of 30, the CPython implementation will terminate with the release of Python 2.6.2 and 3.0.2. Thereafter, the reference implementation of Python will target the Parrot virtual machine. Alternative implementations of Python (e.g. Jython, IronPython, and PyPy ) are officially discouraged but tolerated.\n\n\n",
"Check out PyPerl.\nWARNING: PyPerl is currently unmaintained, so don't use it if you require stability.\n"
] | [
8,
4,
3,
2,
1
] | [] | [] | [
"api",
"perl",
"python"
] | stackoverflow_0000750872_api_perl_python.txt |
Q:
dispatcher python
Hi all, I have the following "wrong" dispatcher:
def _load_methods(self):
import os, sys, glob
sys.path.insert(0, 'modules\commands')
for c in glob.glob('modules\commands\Command*.py'):
if os.path.isdir(c):
continue
c = os.path.splitext(c)[0]
parts = c.split(os.path.sep )
module, name = '.'.join( parts ), parts[-1:]
module = __import__( module, globals(), locals(), name )
_cmdClass = __import__(module).Command
for method_name in list_public_methods(_cmdClass):
self._methods[method_name] = getattr(_cmdClass(), method_name)
sys.path.pop(0)
It produces the following error:
ImportError: No module named commands.CommandAntitheft
where Command*.py is placed into modules\commands\ folder
can someone help me?
One Possible solution (It works!!!) is:
def _load_methods(self):
import os, sys, glob, imp
for file in glob.glob('modules/commands/Command*.py'):
if os.path.isdir(file):
continue
module = os.path.splitext(file)[0].rsplit(os.sep, 1)[1]
fd, filename, desc = imp.find_module(module,
['./modules/commands'])
try:
_cmdClass = imp.load_module( module, fd, filename, desc).Command
finally:
fd.close()
for method_name in list_public_methods(_cmdClass):
self._methods[method_name] = getattr(_cmdClass(), method_name)
All the risks suggested by bobince still remain (thanks :-) ), but now I'm able to load commands at "runtime".
A:
sys.path.insert(0, 'modules\commands')
It's best not to put a relative path into sys.path. If the current directory changes during execution it'll break.
Also if you are running from a different directory to the script it won't work. If you want to make it relative to the script's location, use __file__.
Also the ‘\’ character should be escaped to ‘\\’ for safety, and really it ought to be using os.path.join() instead of relying on Windows path rules.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), 'modules')))
sys.path.pop(0)
Dangerous. If another imported script has played with sys.path (and it might), you'll have pulled the wrong path off. Also reloads of your own modules will break. Best leave the path where it is.
module, name = '.'.join( parts ), parts[-1:]
Remember your path includes the segment ‘modules’. So you're effectively trying to:
import modules.commands.CommandSomething
but since ‘modules.commands’ is already in the path you added to search what you really want is just:
import CommandSomething
__import__( module, globals(), locals(), name )
Also ‘fromlist’ is a list, so it should be ‘[name]’ if you really want to have it write ‘CommandSomething’ to your local variables. (You almost certainly don't want this; leave the fromlist empty.)
_cmdClass = __import__(module).Command
Yeah, that won't work, module is a module object and __import__ wants a module name. You already have the module object; why not just “module.Command”?
My reaction to all this is simple: Too Much Magic.
You're making this overly difficult for yourself and creating a lot of potential problems and fragility by messing around with the internals of the import system. This is tricky stuff even for experienced Python programmers.
You would almost certainly be better off using plain old Python modules which you import explicitly. Hard-coding the list of commands is really no great hardship; having all your commands in a package, with __init__.py saying:
__all__= ['ThisCommand', 'ThatCommand', 'TheOtherCommand']
may repeat the filenames once, but is much simpler and more robust than a surfeit of magic.
A:
Do you actually need to import things as modules? If you're just loading code from arbitrary positions in the filesystem, then rather than fiddling with the module path etc, you could just use execfile.
ie.
for file in glob.glob('modules/commands/Command*.py'):
if os.path.isdir(file):
continue
moddict={}
execfile(file, moddict)
_cmdClass = moddict['Command']
...
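For comparison, a sketch of the loader simplified along the lines of the first answer: one __import__ call with a fromlist, then a plain attribute access. Paths are assumptions; list_public_methods is the helper from the original script, and modules/ and modules/commands/ must both contain an __init__.py:
    import os, glob

    def _load_methods(self):
        for path in glob.glob(os.path.join('modules', 'commands', 'Command*.py')):
            name = os.path.splitext(os.path.basename(path))[0]
            # a non-empty fromlist makes __import__ return the leaf module
            module = __import__('modules.commands.' + name, {}, {}, [name])
            _cmdClass = module.Command
            for method_name in list_public_methods(_cmdClass):
                self._methods[method_name] = getattr(_cmdClass(), method_name)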
| dispatcher python | Hi all, I have the following "wrong" dispatcher:
def _load_methods(self):
import os, sys, glob
sys.path.insert(0, 'modules\commands')
for c in glob.glob('modules\commands\Command*.py'):
if os.path.isdir(c):
continue
c = os.path.splitext(c)[0]
parts = c.split(os.path.sep )
module, name = '.'.join( parts ), parts[-1:]
module = __import__( module, globals(), locals(), name )
_cmdClass = __import__(module).Command
for method_name in list_public_methods(_cmdClass):
self._methods[method_name] = getattr(_cmdClass(), method_name)
sys.path.pop(0)
It produces the following error:
ImportError: No module named commands.CommandAntitheft
where Command*.py is placed into modules\commands\ folder
can someone help me?
One Possible solution (It works!!!) is:
def _load_methods(self):
import os, sys, glob, imp
for file in glob.glob('modules/commands/Command*.py'):
if os.path.isdir(file):
continue
module = os.path.splitext(file)[0].rsplit(os.sep, 1)[1]
fd, filename, desc = imp.find_module(module,
['./modules/commands'])
try:
_cmdClass = imp.load_module( module, fd, filename, desc).Command
finally:
fd.close()
for method_name in list_public_methods(_cmdClass):
self._methods[method_name] = getattr(_cmdClass(), method_name)
All the risks suggested by bobince still remain (thanks :-) ), but now I'm able to load commands at "runtime".
| [
"\nsys.path.insert(0, 'modules\\commands')\n\nIt's best not to put a relative path into sys.path. If the current directory changes during execution it'll break.\nAlso if you are running from a different directory to the script it won't work. If you want to make it relative to the script's location, use file.\nAlso the ‘\\’ character should be escaped to ‘\\\\’ for safety, and really it ought to be using os.path.join() instead of relying on Windows path rules.\nsys.path.insert(0, os.path.abspath(os.path.join(__file__, 'modules')))\n\n\nsys.path.pop(0)\n\nDangerous. If another imported script has played with sys.path (and it might), you'll have pulled the wrong path off. Also reloads of your own modules will break. Best leave the path where it is.\n\nmodule, name = '.'.join( parts ), parts[-1:]\n\nRemember your path includes the segment ‘modules’. So you're effectively trying to:\nimport modules.commands.CommandSomething\n\nbut since ‘modules.commands’ is already in the path you added to search what you really want is just:\nimport CommandSomething\n\n\n__import__( module, globals(), locals(), name )\n\nAlso ‘fromlist’ is a list, so it should be ‘[name]’ if you really want to have it write ‘CommandSomething’ to your local variables. (You almost certainly don't want this; leave the fromlist empty.)\n\n_cmdClass = __import__(module).Command\n\nYeah, that won't work, module is a module object and __import__ wants a module name. You already have the module object; why not just “module.Command”?\nMy reaction to all this is simple: Too Much Magic.\nYou're making this overly difficult for yourself and creating a lot of potential problems and fragility by messing around with the internals of the import system. This is tricky stuff even for experienced Python programmers.\nYou would almost certainly be better off using plain old Python modules which you import explicitly. Hard-coding the list of commands is really no great hardship; having all your commands in a package, with __init__.py saying:\n__all__= ['ThisCommand', 'ThatCommand', 'TheOtherCommand']\n\nmay repeat the filenames once, but is much simpler and more robust than a surfeit of magic.\n",
"Do you actually need to import things as modules? If you're just loading code from arbitrary positions in the filesystem, then rather than fiddling with the module path etc, you could just use execfile.\nie. \nfor file in glob.glob('modules/commands/Command*.py'):\n if os.path.isdir(file):\n continue\n\n moddict={}\n execfile(file, moddict)\n _cmdClass = moddict['Command']\n ...\n\n"
] | [
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000751455_python.txt |
Q:
"Slicing" in Python Expressions documentation
I don't understand the following part of the Python docs:
http://docs.python.org/reference/expressions.html#slicings
Is this referring to list slicing ( x=[1,2,3,4]; x[0:2] )..? Particularly the parts referring to ellipsis..
slice_item ::= expression | proper_slice | ellipsis
The conversion of a slice item that is an expression is that expression. The conversion of an ellipsis slice item is the built-in Ellipsis object.
A:
Ellipsis is used mainly by the numeric python extension, which adds a multidimensional array type. Since there is more than one dimension, slicing becomes more complex than just a start and stop index; it is useful to be able to slice in multiple dimensions as well. eg, given a 4x4 array, the top left area would be defined by the slice "[:2,:2]"
>>> a
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]])
>>> a[:2,:2] # top left
array([[1, 2],
[5, 6]])
Ellipsis is used here to indicate a placeholder for the rest of the array dimensions not specified. Think of it as indicating the full slice [:] for dimensions not specified, so
for a 3d array, a[...,0] is the same as a[:,:,0] and for 4d, a[:,:,:,0].
Note that the actual Ellipsis literal (...) is not usable outside the slice syntax in python2, though there is a builtin Ellipsis object. This is what is meant by "The conversion of an ellipsis slice item is the built-in Ellipsis object." ie. "a[...]" is effectively sugar for "a[Ellipsis]". In python3, ... denotes Ellipsis anywhere, so you can write:
>>> ...
Ellipsis
If you're not using numpy, you can pretty much ignore all mention of Ellipsis. None of the builtin types use it, so really all you have to care about is that lists get passed a single slice object, that contains "start","stop" and "step" members. ie:
l[start:stop:step] # proper_slice syntax from the docs you quote.
is equivalent to calling:
l.__getitem__(slice(start, stop, step))
A:
Defining simple test class that just prints what is being passed:
>>> class TestGetitem(object):
... def __getitem__(self, item):
... print type(item), item
...
>>> t = TestGetitem()
Expression example:
>>> t[1]
<type 'int'> 1
>>> t[3-2]
<type 'int'> 1
>>> t['test']
<type 'str'> test
>>> t[t]
<class '__main__.TestGetitem'> <__main__.TestGetitem object at 0xb7e9bc4c>
Slice example:
>>> t[1:2]
<type 'slice'> slice(1, 2, None)
>>> t[1:'this':t]
<type 'slice'> slice(1, 'this', <__main__.TestGetitem object at 0xb7e9bc4c>)
Ellipsis example:
>>> t[...]
<type 'ellipsis'> Ellipsis
Tuple with ellipsis and slice:
>>> t[...,1:]
<type 'tuple'> (Ellipsis, slice(1, None, None))
A:
What happens is this. See http://docs.python.org/reference/datamodel.html#types and http://docs.python.org/library/functions.html#slice
Slice objects are used to represent
slices when extended slice syntax is
used. This is a slice using two
colons, or multiple slices or ellipses
separated by commas, e.g.,
a[i:j:step], a[i:j, k:l], or a[...,
i:j]. They are also created by the
built-in slice() function.
Special read-only attributes: start is
the lower bound; stop is the upper
bound; step is the step value; each is
None if omitted. These attributes can
have any type.
x=[1,2,3,4]
x[0:2]
The "0:2" is transformed into a Slice object with start of 0, stop of 2 and a step of None.
This Slice object is provided to x's __getitem__ method of classes you define.
>>> class MyClass( object ):
def __getitem__( self, key ):
print type(key), key
>>> x=MyClass()
>>> x[0:2]
<type 'slice'> slice(0, 2, None)
For the built-in list class, however, a special __getslice__ method must be overridden.
class MyList( list ):
def __getslice__( self, i, j=None, k=None ):
# decode various parts of the slice values
The ellipsis is a special "ignore this dimension" syntax provided to multi-dimensional slices.
Also see http://docs.python.org/library/itertools.html#itertools.islice for even more information.
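One related detail: slice objects know how to normalize themselves against a sequence length via their indices() method, which is handy when implementing __getitem__ by hand:
    >>> s = slice(1, None, 2)
    >>> s.indices(6)   # (start, stop, step) clamped to length 6
    (1, 6, 2)
    >>> range(*s.indices(6))
    [1, 3, 5]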
| "Slicing" in Python Expressions documentation | I don't understand the following part of the Python docs:
http://docs.python.org/reference/expressions.html#slicings
Is this referring to list slicing ( x=[1,2,3,4]; x[0:2] )..? Particularly the parts referring to ellipsis..
slice_item ::= expression | proper_slice | ellipsis
The conversion of a slice item that is an expression is that expression. The conversion of an ellipsis slice item is the built-in Ellipsis object.
| [
"Ellipsis is used mainly by the numeric python extension, which adds a multidimensional array type. Since there are more than one dimensions, slicing becomes more complex than just a start and stop index; it is useful to be able to slice in multiple dimensions as well. eg, given a 4x4 array, the top left area would be defined by the slice \"[:2,:2]\"\n>>> a\narray([[ 1, 2, 3, 4],\n [ 5, 6, 7, 8],\n [ 9, 10, 11, 12],\n [13, 14, 15, 16]])\n\n>>> a[:2,:2] # top left\narray([[1, 2],\n [5, 6]])\n\nEllipsis is used here to indicate a placeholder for the rest of the array dimensions not specified. Think of it as indicating the full slice [:] for dimensions not specified, so\nfor a 3d array, a[...,0] is the same as a[:,:,0] and for 4d, a[:,:,:,0].\nNote that the actual Ellipsis literal (...) is not usable outside the slice syntax in python2, though there is a builtin Ellipsis object. This is what is meant by \"The conversion of an ellipsis slice item is the built-in Ellipsis object.\" ie. \"a[...]\" is effectively sugar for \"a[Ellipsis]\". In python3, ... denotes Ellipsis anywhere, so you can write:\n>>> ...\nEllipsis\n\nIf you're not using numpy, you can pretty much ignore all mention of Ellipsis. None of the builtin types use it, so really all you have to care about is that lists get passed a single slice object, that contains \"start\",\"stop\" and \"step\" members. ie:\nl[start:stop:step] # proper_slice syntax from the docs you quote.\n\nis equivalent to calling:\nl.__getitem__(slice(start, stop, step))\n\n",
"Defining simple test class that just prints what is being passed:\n>>> class TestGetitem(object):\n... def __getitem__(self, item):\n... print type(item), item\n... \n>>> t = TestGetitem()\n\nExpression example:\n>>> t[1]\n<type 'int'> 1\n>>> t[3-2]\n<type 'int'> 1\n>>> t['test']\n<type 'str'> test\n>>> t[t]\n<class '__main__.TestGetitem'> <__main__.TestGetitem object at 0xb7e9bc4c>\n\nSlice example:\n>>> t[1:2]\n<type 'slice'> slice(1, 2, None)\n>>> t[1:'this':t]\n<type 'slice'> slice(1, 'this', <__main__.TestGetitem object at 0xb7e9bc4c>)\n\nEllipsis example:\n>>> t[...]\n<type 'ellipsis'> Ellipsis\n\nTuple with ellipsis and slice:\n>>> t[...,1:]\n<type 'tuple'> (Ellipsis, slice(1, None, None))\n\n",
"What happens is this. See http://docs.python.org/reference/datamodel.html#types and http://docs.python.org/library/functions.html#slice\n\nSlice objects are used to represent\n slices when extended slice syntax is\n used. This is a slice using two\n colons, or multiple slices or ellipses\n separated by commas, e.g.,\n a[i:j:step], a[i:j, k:l], or a[...,\n i:j]. They are also created by the\n built-in slice() function.\nSpecial read-only attributes: start is\n the lower bound; stop is the upper\n bound; step is the step value; each is\n None if omitted. These attributes can\n have any type.\n\nx=[1,2,3,4]\nx[0:2]\n\nThe \"0:2\" is transformed into a Slice object with start of 0, stop of 2 and a step of None.\nThis Slice object is provided to x's __getitem__ method of classes you define.\n>>> class MyClass( object ):\n def __getitem__( self, key ):\n print type(key), key\n\n\n>>> x=MyClass()\n>>> x[0:2]\n<type 'slice'> slice(0, 2, None)\n\nFor the build-in list class, however, a special __getslice__ method must be overridden.\nclass MyList( list ):\n def __getslice__( self, i, j=None, k=None ):\n # decode various parts of the slice values\n\nThe ellipsis is a special \"ignore this dimension\" syntax provided to multi-dimensional slices.\nAlso see http://docs.python.org/library/itertools.html#itertools.islice for even more information.\n"
] | [
32,
26,
9
] | [] | [] | [
"python",
"syntax"
] | stackoverflow_0000752602_python_syntax.txt |
Q:
How do you automate the launching/debugging of large scale projects?
Scenario:
There is a complex piece of software that is annoying to launch by hand. What I've done is to create a python script to launch the executable and attach gdb for debugging.
The process launching script:
ensures an environment variable is set.
ensures a local build directory gets added to the environment's LD_LIBRARY_PATH variable.
changes the current working directory to where the executable expects to be (not my design)
launches the executable with a config file the only command line option
pipes the output from the executable to a second logging process
remembers PID of executable, then launches & attaches gdb to running executable.
The script works, with one caveat. ctrl-c doesn't interrupt the debuggee and return control to gdb. So if I "continue" with no active breakpoints I can never stop the process again; it has to be killed/interrupted from another shell. BTW, running "kill -s SIGINT <pid>" where <pid> is the debuggee's pid does get me back to gdb's prompt... but it is really annoying to have to do things this way.
At first I thought Python was grabbing the SIGINT signal, but this doesn't seem to be the case, as I set up signal handlers to forward the signal to the debuggee and that doesn't fix the problem.
I've tried various configurations of the python script (calling os.spawn* instead of subprocess, etc.). It seems that any way I go about it, if python launched the child process, SIGINT (ctrl-c) signals DO NOT get routed to gdb or the child process.
Current line of thinking
This might be related to needing a
separate process group id for the debuggee & gdb...any credence to this?
Possible bug with SELinux?
Info:
gdb 6.8
Python 2.5.2 (problem present with Python 2.6.1 as well)
SELinux Environment (bug delivering signals to processes?)
Alternatives I've considered:
Setting up a .gdbinit file to do as much of what the script does, environment variables and current working directory are a problem with this approach.
Launching executable and attaching gdb manually (yuck)
Question:
How do you automate the launching/debugging of large scale projects?
Update:
I've tried Nicholas Riley's examples below; on my Macintosh at home they all allow ctrl-c to work to varying degrees, but on the production boxen (which I now believe may be running SELinux) they don't...
A:
Instead of forwarding the signal to the debuggee from Python, you could try just ignoring it. The following worked for me:
import signal
signal.signal(signal.SIGINT, signal.SIG_IGN)
import subprocess
cat = subprocess.Popen(['cat'])
subprocess.call(['gdb', '--pid=%d' % cat.pid])
With this I was able to ^C repeatedly inside GDB and interrupt the debuggee without a problem, however I did see some weird behavior.
Incidentally, I also had no problem when forwarding the signal to the target process.
import subprocess
cat = subprocess.Popen(['cat'])
import signal, os
signal.signal(signal.SIGINT,
lambda signum, frame: os.kill(cat.pid, signum))
subprocess.call(['gdb', '--pid=%d' % cat.pid])
So, maybe something else is going on in your case? It might help if you posted some code that breaks.
A:
Your comment notes that you're sshing in with putty... do you have a controlling tty? With openssh you would want to add the -T option, I don't know how/if putty will do this the way you're using it.
Also: you might try using cygwin's ssh instead of putty.
A:
if you already have a current script set up to do this, but are having problems automating part of it, maybe you can just grab expect and use it to provide the setup, then drop back into interactive mode in expect to launch the process. Then you can still have your ctrl-c available to interrupt.
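On the process-group line of thinking from the question, a hedged sketch (untested on SELinux; behavior may vary): putting the debuggee in its own process group keeps the terminal's ctrl-c out of it, so the SIGINT should reach gdb instead:
    import os, subprocess

    # start the debuggee in a new process group of its own
    debuggee = subprocess.Popen(['cat'], preexec_fn=os.setpgrp)
    subprocess.call(['gdb', '--pid=%d' % debuggee.pid])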
| How do you automate the launching/debugging of large scale projects? | Scenario:
There is a complex piece of software that is annoying to launch by hand. What I've done is to create a python script to launch the executable and attach gdb for debugging.
The process launching script:
ensures an environment variable is set.
ensures a local build directory gets added to the environment's LD_LIBRARY_PATH variable.
changes the current working directory to where the executable expects to be (not my design)
launches the executable with a config file the only command line option
pipes the output from the executable to a second logging process
remembers PID of executable, then launches & attaches gdb to running executable.
The script works, with one caveat. ctrl-c doesn't interrupt the debuggee and return control to gdb. So if I "continue" with no active breakpoints I can never stop the process again; it has to be killed/interrupted from another shell. BTW, running "kill -s SIGINT <pid>" where <pid> is the debuggee's pid does get me back to gdb's prompt... but it is really annoying to have to do things this way.
At first I thought Python was grabbing the SIGINT signal, but this doesn't seem to be the case, as I set up signal handlers to forward the signal to the debuggee and that doesn't fix the problem.
I've tried various configurations of the python script (calling os.spawn* instead of subprocess, etc.). It seems that any way I go about it, if python launched the child process, SIGINT (ctrl-c) signals DO NOT get routed to gdb or the child process.
Current line of thinking
This might be related to needing a
separate process group id for the debuggee & gdb...any credence to this?
Possible bug with SELinux?
Info:
gdb 6.8
Python 2.5.2 (problem present with Python 2.6.1 as well)
SELinux Environment (bug delivering signals to processes?)
Alternatives I've considered:
Setting up a .gdbinit file to do as much of what the script does, environment variables and current working directory are a problem with this approach.
Launching executable and attaching gdb manually (yuck)
Question:
How do you automate the launching/debugging of large scale projects?
Update:
I've tried Nicholas Riley's examples below; on my Macintosh at home they all allow ctrl-c to work to varying degrees, but on the production boxen (which I now believe may be running SELinux) they don't...
| [
"Instead of forwarding the signal to the debuggee from Python, you could try just ignoring it. The following worked for me:\nimport signal\nsignal.signal(signal.SIGINT, signal.SIG_IGN)\n\nimport subprocess\ncat = subprocess.Popen(['cat'])\nsubprocess.call(['gdb', '--pid=%d' % cat.pid])\n\nWith this I was able to ^C repeatedly inside GDB and interrupt the debuggee without a problem, however I did see some weird behavior. \nIncidentally, I also had no problem when forwarding the signal to the target process.\nimport subprocess\ncat = subprocess.Popen(['cat'])\n\nimport signal, os\nsignal.signal(signal.SIGINT,\n lambda signum, frame: os.kill(cat.pid, signum))\n\nsubprocess.call(['gdb', '--pid=%d' % cat.pid])\n\nSo, maybe something else is going on in your case? It might help if you posted some code that breaks.\n",
"Your comment notes that you're sshing in with putty... do you have a controlling tty? With openssh you would want to add the -T option, I don't know how/if putty will do this the way you're using it.\nAlso: you might try using cygwin's ssh instead of putty.\n",
"if you already have a current script set up to do this, but are having problems automating part of it, maybe you can just grab expect and use it to provide the setup, then drop back into interactive mode in expect to launch the process. Then you can still have your ctrl-c available to interrupt.\n"
] | [
3,
0,
0
] | [] | [] | [
"debugging",
"gdb",
"python",
"selinux",
"subprocess"
] | stackoverflow_0000739090_debugging_gdb_python_selinux_subprocess.txt |
Q:
Accessing the class that owns a decorated method from the decorator
I'm writing a decorator for methods that must inspect the parent methods (the methods of the same name in the parents of the class in which I'm decorating).
Example (from the fourth example of PEP 318):
def returns(rtype):
def check_returns(f):
def new_f(*args, **kwds):
result = f(*args, **kwds)
assert isinstance(result, rtype), \
"return value %r does not match %s" % (result,rtype)
return result
new_f.func_name = f.func_name
# here I want to reach the class owning the decorated method f,
# it should give me the class A
return new_f
return check_returns
class A(object):
@returns(int)
def compute(self, value):
return value * 3
So I'm looking for the code to type in place of # here I want...
Thanks.
A:
here I want to reach the class owning the decorated method f
You can't because at the point of decoration, no class owns the method f.
class A(object):
@returns(int)
def compute(self, value):
return value * 3
Is the same as saying:
class A(object):
pass
@returns(int)
def compute(self, value):
return value*3
A.compute= compute
Clearly, the returns() decorator is built before the function is assigned to an owner class.
Now when you write a function to a class (either inline, or explicitly like this) it becomes an unbound method object. Now it has a reference to its owner class, which you can get by saying:
>>> A.compute.im_class
<class '__main__.A'>
So you can read f.im_class inside ‘new_f’, which is executed after the assignment, but not in the decorator itself.
(And even then it's a bit ugly relying on a CPython implementation detail if you don't need to. I'm not quite sure what you're trying to do, but things involving “get the owner class” are often doable using metaclasses.)
A:
As bobince said, you can't access the surrounding class, because at the time the decorator is invoked, the class does not exist yet. If you need access to the full dictionary of the class and the bases, you should consider a metaclass:
__metaclass__
This variable can be any callable accepting arguments for name, bases, and dict. Upon class creation, the callable is used instead of the built-in type().
Basically, we convert the returns decorator into something that just tells the metaclass to do some magic on class construction:
class CheckedReturnType(object):
def __init__(self, meth, rtype):
self.meth = meth
self.rtype = rtype
def returns(rtype):
def _inner(f):
return CheckedReturnType(f, rtype)
return _inner
class BaseInspector(type):
def __new__(mcs, name, bases, dct):
for obj_name, obj in dct.iteritems():
if isinstance(obj, CheckedReturnType):
# do your wrapping & checking here, base classes are in bases
# reassign to dct
return type.__new__(mcs, name, bases, dct)
class A(object):
__metaclass__ = BaseInspector
@returns(int)
def compute(self, value):
return value * 3
Mind that I have not tested this code, please leave comments if I should update this.
There are some articles on metaclasses by the highly recommendable David Mertz, which you might find interesting in this context.
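To flesh out the '# do your wrapping & checking here' stub, a sketch of what the metaclass might substitute into the class dict (untested, like the answer's code; the assertion mirrors the decorator from the question):
    class BaseInspector(type):
        def __new__(mcs, name, bases, dct):
            for obj_name, obj in dct.items():
                if isinstance(obj, CheckedReturnType):
                    dct[obj_name] = mcs._wrap(obj.meth, obj.rtype)
            return type.__new__(mcs, name, bases, dct)

        @staticmethod
        def _wrap(meth, rtype):
            def checked(*args, **kwds):
                result = meth(*args, **kwds)
                assert isinstance(result, rtype), \
                    "return value %r does not match %s" % (result, rtype)
                return result
            checked.func_name = meth.func_name
            return checked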
| Accessing the class that owns a decorated method from the decorator | I'm writing a decorator for methods that must inspect the parent methods (the methods of the same name in the parents of the class in which I'm decorating).
Example (from the fourth example of PEP 318):
def returns(rtype):
def check_returns(f):
def new_f(*args, **kwds):
result = f(*args, **kwds)
assert isinstance(result, rtype), \
"return value %r does not match %s" % (result,rtype)
return result
new_f.func_name = f.func_name
# here I want to reach the class owning the decorated method f,
# it should give me the class A
return new_f
return check_returns
class A(object):
@returns(int)
def compute(self, value):
return value * 3
So I'm looking for the code to type in place of # here I want...
Thanks.
| [
"\nhere I want to reach the class owning the decorated method f\n\nYou can't because at the point of decoration, no class owns the method f.\nclass A(object):\n @returns(int)\n def compute(self, value):\n return value * 3\n\nIs the same as saying:\nclass A(object):\n pass\n\n@returns(int)\ndef compute(self, value):\n return value*3\n\nA.compute= compute\n\nClearly, the returns() decorator is built before the function is assigned to an owner class.\nNow when you write a function to a class (either inline, or explicitly like this) it becomes an unbound method object. Now it has a reference to its owner class, which you can get by saying:\n>>> A.compute.im_class\n<class '__main__.A'>\n\nSo you can read f.im_class inside ‘new_f’, which is executed after the assignment, but not in the decorator itself.\n(And even then it's a bit ugly relying on a CPython implementation detail if you don't need to. I'm not quite sure what you're trying to do, but things involving “get the owner class” are often doable using metaclasses.)\n",
"As bobince said it, you can't access the surrounding class, because at the time the decorator is invoked, the class does not exist yet. If you need access to the full dictionary of the class and the bases, you should consider a metaclass:\n\n__metaclass__\nThis variable can be any callable accepting arguments for name, bases, and dict. Upon class creation, the callable is used instead of the built-in type().\n\nBasically, we convert the returns decorator into something that just tells the metaclass to do some magic on class construction:\nclass CheckedReturnType(object):\n def __init__(self, meth, rtype):\n self.meth = meth\n self.rtype = rtype\n\ndef returns(rtype):\n def _inner(f):\n return CheckedReturnType(f, rtype)\n return _inner\n\nclass BaseInspector(type):\n def __new__(mcs, name, bases, dct):\n for obj_name, obj in dct.iteritems():\n if isinstance(obj, CheckedReturnType):\n # do your wrapping & checking here, base classes are in bases\n # reassign to dct\n return type.__new__(mcs, name, bases, dct)\n\nclass A(object):\n __metaclass__ = BaseInspector\n @returns(int)\n def compute(self, value):\n return value * 3\n\nMind that I have not tested this code, please leave comments if I should update this.\nThere are some articles on metaclasses by the highly recommendable David Mertz, which you might find interesting in this context.\n"
] | [
6,
6
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0000753537_decorator_python.txt |
Q:
Need to build (or otherwise obtain) python-devel 2.3 and add to LD_LIBRARY_PATH
I am supporting an application with a hard dependency on python-devel 2.3.7. The application runs the python interpreter embedded, attempting to load libpython2.3.so - but since the local machine has libpython2.4.so under /usr/lib64, the application is failing.
I see that there are RPMs for python-devel (but not version 2.3.x). Another wrinkle is that I don't want to overwrite the existing python under /usr/lib (I don't have su anyway). What I want to do is place the library somewhere in my home directory (i.e. /home/noahz/lib) and use PATH and LD_LIBRARY_PATH to point to the older version for this application.
What I'm trying to find out (but can't seem to craft the right google search for) is:
1) Where do I download python-devel-2.3 or libpython2.3.so.1.0 (if either available)
2a) If I can't download python-devel-2.3, how do I build libpython2.3.so from source (already downloaded Python-2.3.tgz and
2b) Is building libpython2.3.so.1.0 from source and pointing to it with LD_LIBRARY_PATH good enough, or am I going to run into other problems (other dependencies)
3) In general, am I approaching this problem the right way?
ADDITIONAL INFO:
I attempted to symlink (ln -s) to the later version. This caused the app to fail silently.
Distro is Red Hat Enterprise Linux 5 (RHEL5) - for x86_64
A:
You can use the python RPM's linked to from the python home page ChristopheD mentioned.
You can extract the RPM's using cpio, as they are just specialized cpio archives.
Your method of extracting them to your home directory and setting LD_LIBRARY_PATH and PATH should work; I use this all the time for hand-built newer versions of projects I also have installed.
Don't focus on the -devel package though; you need the main package. You can unpack the -devel one as well, but the only thing you'll actually use from it is the libpython2.3.so symlink that points to the actual library, and you can just as well create this by hand.
Whether this is the right approach depends on what you are trying to do. If all you're trying to do is to get this one application to run for you personally, then this hack sounds fine.
If you wanted to actually distribute something to other people for running this application, and you have no way of fixing the actual application, you should consider building an rpm of the older python version that doesn't conflict with the system-installed one.
A:
Can you use one of these rpm's?
What specific distro are you on?
http://www.python.org/download/releases/2.3.3/rpms/
http://rpm.pbone.net/index.php3/stat/4/idpl/3171326/com/python-devel-2.3-4.i586.rpm.html
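A sketch of the cpio extraction described above (the rpm file name is an assumption; on x86_64 the libraries may unpack under usr/lib64 rather than usr/lib):
    mkdir -p ~/python23 && cd ~/python23
    rpm2cpio /path/to/python-2.3.x.rpm | cpio -idmv

    # point the application at the unpacked tree
    export LD_LIBRARY_PATH=$HOME/python23/usr/lib:$LD_LIBRARY_PATH
    export PATH=$HOME/python23/usr/bin:$PATH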
| Need to build (or otherwise obtain) python-devel 2.3 and add to LD_LIBRARY_PATH | I am supporting an application with a hard dependency on python-devel 2.3.7. The application runs the python interpreter embedded, attempting to load libpython2.3.so - but since the local machine has libpython2.4.so under /usr/lib64, the application is failing.
I see that there are RPMs for python-devel (but not version 2.3.x). Another wrinkle is that I don't want to overwrite the existing python under /usr/lib (I don't have su anyway). What I want to do is place the library somewhere in my home directory (i.e. /home/noahz/lib) and use PATH and LD_LIBRARY_PATH to point to the older version for this application.
What I'm trying to find out (but can't seem to craft the right google search for) is:
1) Where do I download python-devel-2.3 or libpython2.3.so.1.0 (if either available)
2a) If I can't download python-devel-2.3, how do I build libpython2.3.so from source (already downloaded Python-2.3.tgz and
2b) Is building libpython2.3.so.1.0 from source and pointing to it with LD_LIBRARY_PATH good enough, or am I going to run into other problems (other dependencies)
3) In general, am I approaching this problem the right way?
ADDITIONAL INFO:
I attempted to symlink (ln -s) to the later version. This caused the app to fail silently.
Distro is Red Hat Enterprise Linux 5 (RHEL5) - for x86_64
| [
"You can use the python RPM's linked to from the python home page ChristopheD mentioned.\nYou can extract the RPM's using cpio, as they are just specialized cpio archives.\nYour method of extracting them to your home directory and setting LD_LIBRARY_PATH and PATH should work; I use this all the time for hand-built newer versions of projects I also have installed.\nDon't focus on the -devel package though; you need the main package. You can unpack the -devel one as well, but the only thing you'll actually use from it is the libpython2.3.so symlink that points to the actual library, and you can just as well create this by hand.\nWhether this is the right approach depends on what you are trying to do. If all you're trying to do is to get this one application to run for you personally, then this hack sounds fine.\nIf you wanted to actually distribute something to other people for running this application, and you have no way of fixing the actual application, you should consider building an rpm of the older python version that doesn't conflict with the system-installed one. \n",
"Can you use one of these rpm's?\nWhat specific distro are you on?\n\nhttp://www.python.org/download/releases/2.3.3/rpms/\nhttp://rpm.pbone.net/index.php3/stat/4/idpl/3171326/com/python-devel-2.3-4.i586.rpm.html\n\n"
] | [
2,
0
] | [] | [] | [
"build",
"linux",
"python"
] | stackoverflow_0000753749_build_linux_python.txt |
Q:
How to generate examples of a gettext plural forms expression? In Python?
Given a gettext Plural-Forms line, general a few example values for each n. I'd like this feature for the web interface for my site's translators, so that they know which plural form to put where. For example, given:
"Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%"
"10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n"
... I want the first text field to be labeled "1, 21..", then "2, 3, 4...", then "5, 6..." (not sure if this is exactly right, but you get the idea.)
Right now the best thing I can come up with is to parse the expression somehow, then iterate n from 0 to 100 and see which plural index each value produces. This isn't guaranteed to work (what if the lowest n for some form is over 100 for some language?) but it's probably good enough. Any better ideas or existing Python code?
A:
Given that it's late, I'll bite.
The following solution is hacky, and relies on converting your plural form to python code that can be evaluated (basically converting the x ? y : z statements to the python x and y or z equivalent, and changing &&/|| to and/or)
I'm not sure if your plural form rule is a contrived example, and I don't understand what you mean with your first text field, but I'm sure you'll get where I'm going with my example solution:
# -*- Mode: Python -*-
# vi:si:et:sw=4:sts=4:ts=4
p = "Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n"
# extract rule
import re
matcher = re.compile('plural=(.*);')
match = matcher.search(p)
rule = match.expand("\\1")
# convert rule to python syntax
oldrule = None
while oldrule != rule:
oldrule = rule
rule = re.sub('(.*)\?(.*):(.*)', r'(\1) and (\2) or (\3)', oldrule)
rule = re.sub('&&', 'and', rule)
rule = re.sub('\|\|', 'or', rule)
for n in range(40):
code = "n = %d" % n
print n, eval(rule)
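Building on the converted rule above, the per-form labels the question asks for can be produced by grouping counts by the plural index they map to (a sketch; the search bound of 200 is arbitrary):
    examples = {}
    for n in range(201):
        idx = eval(rule, {'n': n})
        examples.setdefault(idx, []).append(n)

    for idx in sorted(examples):
        sample = examples[idx][:4]
        print idx, ', '.join(str(v) for v in sample) + '...'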
| How to generate examples of a gettext plural forms expression? In Python? | Given a gettext Plural-Forms line, general a few example values for each n. I'd like this feature for the web interface for my site's translators, so that they know which plural form to put where. For example, given:
"Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%"
"10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\n"
... I want the first text field to be labeled "1, 21..", then "2, 3, 4...", then "5, 6..." (not sure if this is exactly right, but you get the idea.)
Right now the best thing I can come up with is to parse the expression somehow, then iterate n from 0 to 100 and see which plural index each value produces. This isn't guaranteed to work (what if the lowest n for some form is over 100 for some language?) but it's probably good enough. Any better ideas or existing Python code?
| [
"Given that it's late, I'll bite.\nThe following solution is hacky, and relies on converting your plural form to python code that can be evaluated (basically converting the x ? y : z statements to the python x and y or z equivalent, and changing &&/|| to and/or)\nI'm not sure if your plural form rule is a contrived example, and I don't understand what you mean with your first text field, but I'm sure you'll get where I'm going with my example solution:\n# -*- Mode: Python -*-\n# vi:si:et:sw=4:sts=4:ts=4\n\np = \"Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;\\n\"\n\n# extract rule\nimport re\nmatcher = re.compile('plural=(.*);')\nmatch = matcher.search(p)\nrule = match.expand(\"\\\\1\")\n\n# convert rule to python syntax\noldrule = None\nwhile oldrule != rule:\n oldrule = rule\n rule = re.sub('(.*)\\?(.*):(.*)', r'(\\1) and (\\2) or (\\3)', oldrule)\n\nrule = re.sub('&&', 'and', rule)\nrule = re.sub('\\|\\|', 'or', rule)\n\nfor n in range(40):\n code = \"n = %d\" % n\n print n, eval(rule)\n\n"
] | [
1
] | [] | [] | [
"gettext",
"internationalization",
"python"
] | stackoverflow_0000749170_gettext_internationalization_python.txt |
Q:
How to know when to manage resources in Python
I hope I framed the question right. I am trying to force myself to be a better programmer. By better I mean efficient. I want to write a program to identify the files in a directory and read each file for further processing. After some shuffling I got to this:
for file in os.listdir(dir):
y=open(dir+'\\'+file,'r').readlines()
for line in y:
pass
y.close()
It should be no surprise that I get an AttributeError since y is a list. I didn't think about that when I wrote the snippet.
I am thinking about this and am afraid that I have five open files (there are five files in the directory specified by dir).
I can fix the code so it runs and I explicitly close the files after opening them. I am curious if I need to or if Python handles closing the file in the next iteration of the loop. If so then I only need to write:
for file in os.listdir(dir):
y=open(dir+'\\'+file,'r').readlines()
for line in y:
pass
I am guessing that it(python) does handle this effortlessly. The reason I think that this might be handled is that I have changed the object/thing that y is referencing. When I start the second iteration there are no more memory references to the file that was opened and read using the readlines method.
A:
Python will close open files when they get garbage-collected, so generally you can forget about it -- particularly when reading.
That said, if you want to close explicitely, you could do this:
for file in os.listdir(dir):
f = open(dir+'\\'+file,'r')
y = f.readlines()
for line in y:
pass
f.close()
However, we can immediately improve this, because in python you can iterate over file-like objects directly:
for file in os.listdir(dir):
y = open(dir+'\\'+file,'r')
for line in y:
pass
y.close()
Finally, in recent python, there is the 'with' statement:
for file in os.listdir(dir):
with open(dir+'\\'+file,'r') as y:
for line in y:
pass
When the with block ends, python will close the file for you and clean it up.
(you also might want to look into os.path for more pythonic tools for manipulating file names and directories)
A:
Don't worry about it. Python's garbage collector is good, and I've never had a problem with not closing file-pointers (for read operations at least)
If you did want to explicitly close the file, just store the open() in one variable, then call readlines() on that, for example..
f = open("thefile.txt")
all_lines = f.readlines()
f.close()
Or, you can use the with statement, which was added in Python 2.5 as a from __future__ import, and "properly" added in Python 2.6:
from __future__ import with_statement # for python 2.5, not required for >2.6
with open("thefile.txt") as f:
print f.readlines()
# or
the_file = open("thefile.txt")
with the_file as f:
print f.readlines()
The file will automatically be closed at the end of the block.
..but, there are other more important things to worry about in the snippets you posted, mostly stylistic things.
Firstly, try to avoid manually constructing paths using string-concatenation. The os.path module contains lots of methods to do this, in a more reliable, cross-platform manner.
import os
y = open(os.path.join(dir, file), 'r')
Also, you are using two variable names, dir and file - both of which are built-in functions. Pylint is a good tool to spot things like this, in this case it would give the warning:
[W0622] Redefining built-in 'file'
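Putting both answers' advice together, a sketch of the question's loop rewritten (the directory name path is assumed; with needs Python 2.6, or 2.5 plus the __future__ import shown above):
    import os

    for name in os.listdir(path):
        full = os.path.join(path, name)   # no hand-built '\\' paths
        with open(full, 'r') as f:        # closed automatically on exit
            for line in f:
                pass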
| How to know when to manage resources in Python | I hope I framed the question right. I am trying to force myself to be a better programmer. By better I mean efficient. I want to write a program to identify the files in a directory and read each file for further processing. After some shuffling I got to this:
for file in os.listdir(dir):
y=open(dir+'\\'+file,'r').readlines()
for line in y:
pass
y.close()
It should be no surprise that I get an AttributeError since y is a list. I didn't think about that when I wrote the snippet.
I am thinking about this and am afraid that I have five open files (there are five files in the directory specified by dir).
I can fix the code so it runs and I explicitly close the files after opening them. I am curious if I need to or if Python handles closing the file in the next iteration of the loop. If so then I only need to write:
for file in os.listdir(dir):
y=open(dir+'\\'+file,'r').readlines()
for line in y:
pass
I am guessing that it(python) does handle this effortlessly. The reason I think that this might be handled is that I have changed the object/thing that y is referencing. When I start the second iteration there are no more memory references to the file that was opened and read using the readlines method.
| [
"Python will close open files when they get garbage-collected, so generally you can forget about it -- particularly when reading.\nThat said, if you want to close explicitely, you could do this:\nfor file in os.listdir(dir):\n f = open(dir+'\\\\'+file,'r')\n y = f.readlines()\n for line in y:\n pass\n f.close()\n\nHowever, we can immediately improve this, because in python you can iterate over file-like objects directly:\nfor file in os.listdir(dir):\n y = open(dir+'\\\\'+file,'r')\n for line in y:\n pass\n y.close()\n\nFinally, in recent python, there is the 'with' statement:\nfor file in os.listdir(dir):\n with open(dir+'\\\\'+file,'r') as y:\n for line in y:\n pass\n\nWhen the with block ends, python will close the file for you and clean it up.\n(you also might want to look into os.path for more pythonic tools for manipulating file names and directories)\n",
"Don't worry about it. Python's garbage collector is good, and I've never had a problem with not closing file-pointers (for read operations at least)\nIf you did want to explicitly close the file, just store the open() in one variable, then call readlines() on that, for example..\nf = open(\"thefile.txt\")\nall_lines = f.readlines()\nf.close()\n\nOr, you can use the with statement, which was added in Python 2.5 as a from __future__ import, and \"properly\" added in Python 2.6:\nfrom __future__ import with_statement # for python 2.5, not required for >2.6\n\nwith open(\"thefile.txt\") as f:\n print f.readlines()\n\n# or\n\nthe_file = open(\"thefile.txt\")\nwith the_file as f:\n print f.readlines()\n\nThe file will automatically be closed at the end of the block.\n..but, there are other more important things to worry about in the snippets you posted, mostly stylistic things.\nFirstly, try to avoid manually constructing paths using string-concatenation. The os.path module contains lots of methods to do this, in a more reliable, cross-platform manner.\nimport os\ny = open(os.path.join(dir, file), 'r')\n\nAlso, you are using two variable names, dir and file - both of which are built-in functions. Pylint is a good tool to spot things like this, in this case it would give the warning:\n[W0622] Redefining built-in 'file'\n\n"
] | [
11,
3
] | [] | [] | [
"garbage_collection",
"python"
] | stackoverflow_0000754187_garbage_collection_python.txt |
Q:
How is this "referenced before assignment"?
I have a bit of Python to connect to a database with a switch throw in for local versus live.
LOCAL_CONNECTION = {"server": "127.0.0.1", "user": "root", "password": "", "database": "testing"}
LIVE_CONNECTION = {"server": "10.1.1.1", "user": "x", "password": "y", "database": "nottesting"}
if debug_mode:
connection_info = LOCAL_CONNECTION
else:
connnection_info = LIVE_CONNECTION
self.connection = MySQLdb.connect(host = connection_info["server"], user = connection_info["user"], passwd = connection_info["password"], db = connection_info["database"])
Works fine locally (Windows, Python 2.5) but live (Linux, Python 2.4) I'm getting:
UnboundLocalError: local variable 'connection_info' referenced before assignment
I see the same error even if I remove the if/ else and just assign connection info directly to the LIVE_CONNECTION value. If I hard-code the live connection values into the last line, it all works. Clearly I'm sleepy. What am I not seeing?
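For anyone hitting this later, a stripped-down illustration (names made up) of how a one-letter typo produces exactly this error:
def connect(debug_mode):
    if debug_mode:
        connection_info = 'local'
    else:
        connnection_info = 'live'  # typo: binds a *different* local variable
    return connection_info         # unbound when debug_mode is False

connect(True)   # works
connect(False)  # raises UnboundLocalError: local variable 'connection_info' ...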
A:
The second assignement is misspelled.
You wrote connnection_info = LIVE_CONNECTION with 3 n's.
A:
Typo: connnection_info = LIVE_CONNECTION
| How is this "referenced before assignment"? | I have a bit of Python to connect to a database with a switch throw in for local versus live.
LOCAL_CONNECTION = {"server": "127.0.0.1", "user": "root", "password": "", "database": "testing"}
LIVE_CONNECTION = {"server": "10.1.1.1", "user": "x", "password": "y", "database": "nottesting"}
if debug_mode:
connection_info = LOCAL_CONNECTION
else:
connnection_info = LIVE_CONNECTION
self.connection = MySQLdb.connect(host = connection_info["server"], user = connection_info["user"], passwd = connection_info["password"], db = connection_info["database"])
Works fine locally (Windows, Python 2.5) but live (Linux, Python 2.4) I'm getting:
UnboundLocalError: local variable 'connection_info' referenced before assignment
I see the same error even if I remove the if/ else and just assign connection info directly to the LIVE_CONNECTION value. If I hard-code the live connection values into the last line, it all works. Clearly I'm sleepy. What am I not seeing?
| [
"The second assignement is misspelled.\nYou wrote connnection_info = LIVE_CONNECTION with 3 n's.\n",
"Typo: connnection_info = LIVE_CONNECTION\n"
] | [
16,
4
] | [] | [] | [
"python"
] | stackoverflow_0000754421_python.txt |
Q:
Python: Read a file (from an external server)
Can you tell me how to code a Python script which reads a file from an external server? I look for something similar to PHP's file_get_contents() or file() function.
It would be great if someone could post the entire code for such a script.
Thanks in advance!
A:
The entire script is:
import urllib
content = urllib.urlopen('http://www.google.com/').read()
A:
better would be the same as Jarret's code, but using urllib2:
import urllib2
content = urllib2.urlopen('http://google.com').read()
urllib2 is a bit newer and more modern. Doesn't matter too much in your case, but it's good practice to use it.
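If the server may be unreachable, it is worth adding (a small sketch, Python 2 syntax to match the answers above; the URL is an example) that urllib2 raises URLError, which you can catch:
import urllib2

try:
    content = urllib2.urlopen('http://example.com/somefile.txt').read()
except urllib2.URLError, e:
    print('download failed: %s' % e)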
| Python: Read a file (from an external server) | Can you tell me how to code a Python script which reads a file from an external server? I look for something similar to PHP's file_get_contents() or file() function.
It would be great if someone could post the entire code for such a script.
Thanks in advance!
| [
"The entire script is:\nimport urllib\ncontent = urllib.urlopen('http://www.google.com/').read()\n\n",
"better would be the same as Jarret's code, but using urllib2:\nimport urllib2\ncontent = urllib2.urlopen('http://google.com').read()\n\nurllib2 is a bit newer and more modern. Doesn't matter too much in your case, but it's good practice to use it.\n"
] | [
12,
5
] | [] | [] | [
"file",
"python"
] | stackoverflow_0000754170_file_python.txt |
Q:
Some Basic Python Questions
I'm a total python noob so please bear with me. I want to have python scan a page of html and replace instances of Microsoft Word entities with something UTF-8 compatible.
My question is, how do you do that in Python (I've Googled this but haven't found a clear answer so far)? I want to dip my toe in the Python waters so I figure something simple like this is a good place to start. It seems that I would need to:
load text pasted from MS Word into a variable
run some sort of replace function on the contents
output it
In PHP I would do it like this:
$test = $_POST['pasted_from_Word']; //for example “Going Mobile”
function defangWord($string)
{
$search = array(
(chr(0xe2) . chr(0x80) . chr(0x98)),
(chr(0xe2) . chr(0x80) . chr(0x99)),
(chr(0xe2) . chr(0x80) . chr(0x9c)),
(chr(0xe2) . chr(0x80) . chr(0x9d)),
(chr(0xe2) . chr(0x80) . chr(0x93)),
(chr(0xe2) . chr(0x80) . chr(0x94)),
(chr(0x2d))
);
$replace = array(
"‘",
"’",
"“",
"”",
"–",
"—",
"–"
);
return str_replace($search, $replace, $string);
}
echo defangWord($test);
How would you do it in Python?
EDIT: Hmmm, ok ignore my confusion about UTF-8 and entities for the moment. The input contains text pasted from MS Word. Things like curly quotes are showing up as odd symbols. Various PHP functions I used to try and fix it were not giving me the results I wanted. By viewing those odd symbols in a hex editor I saw that they corresponded to the symbols I used above (0xe2, 0x80 etc.). So I simply swapped out the oddball characters with HTML entities. So if the bit I have above already IS UTF-8, what is being pasted in from MS Word that is causing the odd symbols?
EDIT2: So I set out to learn a bit about Python and found I don't really understand encoding. The problem I was trying to solve can be handled simply by having consistent encoding from end to end. If the input form is UTF-8, the database that stores the input is UTF-8 and the page that outputs it is UTF-8... pasting from Word works fine. No special functions needed. Now, about learning a little Python...
A:
First of all, those aren't Microsoft Word entities—they are UTF-8. You're converting them to HTML entities.
The Pythonic way to write something like:
chr(0xe2) . chr(0x80) . chr(0x98)
would be:
'\xe2\x80\x98'
But Python already has built-in functionality for the type of conversion you want to do:
def defang(string):
return string.decode('utf-8').encode('ascii', 'xmlcharrefreplace')
This will replace the UTF-8 codes in a string for characters like ‘ with numeric entities like &#8216;.
If you want to replace those numeric entities with named ones where possible:
import re
from htmlentitydefs import codepoint2name
def convert_match_to_named(match):
num = int(match.group(1))
if num in codepoint2name:
return "&%s;" % codepoint2name[num]
else:
return match.group(0)
def defang_named(string):
return re.sub('&#(\d+);', convert_match_to_named, defang(string))
And use it like so:
>>> defang_named('\xe2\x80\x9cHello, world!\xe2\x80\x9d')
'&ldquo;Hello, world!&rdquo;'
To complete the answer, the equivalent code to your example to process a file would look something like this:
# in Python, it's common to operate a line at a time on a file instead of
# reading the entire thing into memory
my_file = open("test100.html")
for line in my_file:
print defang_named(line)
my_file.close()
Note that this answer is targeted at Python 2.5; the Unicode situation is dramatically different for Python 3+.
I also agree with bobince's comment below: if you can just keep the text in UTF-8 format and send it with the correct content-type and charset, do that; if you need it to be in ASCII, then stick with the numeric entities—there's really no need to use the named ones.
A:
The Python code has the same outline.
Just replace all of the PHP-isms with Python-isms.
Start by creating a File object. The result of a file.read() is a string object. Strings have a "replace" operation.
A:
Your best bet for cleaning Word HTML is using HTML Tidy which has a mode just for that. There are a few Python wrappers you can use if you need to do it programmatically.
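As a sketch only (this assumes the pytidylib wrapper and HTML Tidy's word-2000 option, neither of which is named in the answer):
# pip install pytidylib (also requires the HTML Tidy C library)
from tidylib import tidy_document

word_html = '<p class=MsoNormal>pasted from Word...</p>'  # stand-in input
cleaned, errors = tidy_document(word_html,
                                options={'word-2000': 1,   # strip Word-specific markup
                                         'output-xhtml': 1})
print(cleaned)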
A:
As S.Lott said, the Python code would be very, very similar—the only differences would essentially be the function calls/statements.
I don't think Python has a direct equivalent to file_get_contents(), but since you can obtain an array of the lines in the file, you can then join them by newlines, like this:
sample = '\n'.join(open(test, 'r').readlines())
EDIT: Never mind, there's a much easier way: sample = file(test).read()
String replacing is almost exactly the same as str_replace():
sample = sample.replace(search, replace)
And outputting is as simple as a print statement:
print defang_word(sample)
So as you can see, the two versions look almost exactly the same.
| Some Basic Python Questions | I'm a total python noob so please bear with me. I want to have python scan a page of html and replace instances of Microsoft Word entities with something UTF-8 compatible.
My question is, how do you do that in Python (I've Googled this but haven't found a clear answer so far)? I want to dip my toe in the Python waters so I figure something simple like this is a good place to start. It seems that I would need to:
load text pasted from MS Word into a variable
run some sort of replace function on the contents
output it
In PHP I would do it like this:
$test = $_POST['pasted_from_Word']; //for example “Going Mobile”
function defangWord($string)
{
$search = array(
(chr(0xe2) . chr(0x80) . chr(0x98)),
(chr(0xe2) . chr(0x80) . chr(0x99)),
(chr(0xe2) . chr(0x80) . chr(0x9c)),
(chr(0xe2) . chr(0x80) . chr(0x9d)),
(chr(0xe2) . chr(0x80) . chr(0x93)),
(chr(0xe2) . chr(0x80) . chr(0x94)),
(chr(0x2d))
);
$replace = array(
"‘",
"’",
"“",
"”",
"–",
"—",
"–"
);
return str_replace($search, $replace, $string);
}
echo defangWord($test);
How would you do it in Python?
EDIT: Hmmm, ok ignore my confusion about UTF-8 and entities for the moment. The input contains text pasted from MS Word. Things like curly quotes are showing up as odd symbols. Various PHP functions I used to try and fix it were not giving me the results I wanted. By viewing those odd symbols in a hex editor I saw that they corresponded to the symbols I used above (0xe2, 0x80 etc.). So I simply swapped out the oddball characters with HTML entities. So if the bit I have above already IS UTF-8, what is being pasted in from MS Word that is causing the odd symbols?
EDIT2: So I set out to learn a bit about Python and found I don't really understand encoding. The problem I was trying to solve can be handled simply by having consistent encoding from end to end. If the input form is UTF-8, the database that stores the input is UTF-8 and the page that outputs it is UTF-8... pasting from Word works fine. No special functions needed. Now, about learning a little Python...
| [
"First of all, those aren't Microsoft Word entities—they are UTF-8. You're converting them to HTML entities.\nThe Pythonic way to write something like:\nchr(0xe2) . chr(0x80) . chr(0x98)\n\nwould be:\n'\\xe2\\x80\\x98'\n\nBut Python already has built-in functionality for the type of conversion you want to do:\ndef defang(string):\n return string.decode('utf-8').encode('ascii', 'xmlcharrefreplace')\n\nThis will replace the UTF-8 codes in a string for characters like ‘ with numeric entities like “.\nIf you want to replace those numeric entities with named ones where possible:\nimport re\nfrom htmlentitydefs import codepoint2name\n\ndef convert_match_to_named(match):\n num = int(match.group(1))\n if num in codepoint2name:\n return \"&%s;\" % codepoint2name[num]\n else:\n return match.group(0)\n\ndef defang_named(string):\n return re.sub('&#(\\d+);', convert_match_to_named, defang(string))\n\nAnd use it like so:\n>>> defang_named('\\xe2\\x80\\x9cHello, world!\\xe2\\x80\\x9d')\n'“Hello, world!”'\n\n\nTo complete the answer, the equivalent code to your example to process a file would look something like this:\n# in Python, it's common to operate a line at a time on a file instead of\n# reading the entire thing into memory\n\nmy_file = open(\"test100.html\")\nfor line in my_file:\n print defang_named(line)\nmy_file.close()\n\nNote that this answer is targeted at Python 2.5; the Unicode situation is dramatically different for Python 3+.\nI also agree with bobince's comment below: if you can just keep the text in UTF-8 format and send it with the correct content-type and charset, do that; if you need it to be in ASCII, then stick with the numeric entities—there's really no need to use the named ones.\n",
"The Python code has the same outline.\nJust replace all of the PHP-isms with Python-isms.\nStart by creating a File object. The result of a file.read() is a string object. Strings have a \"replace\" operation. \n",
"Your best bet for cleaning Word HTML is using HTML Tidy which has a mode just for that. There are a few Python wrappers you can use if you need to do it programmatically.\n",
"As S.Lott said, the Python code would be very, very similar—the only differences would essentially be the function calls/statements.\nI don't think Python has a direct equivalent to file_get_contents(), but since you can obtain an array of the lines in the file, you can then join them by newlines, like this:\nsample = '\\n'.join(open(test, 'r').readlines())\n\nEDIT: Never mind, there's a much easier way: sample = file(test).read()\nString replacing is almost exactly the same as str_replace():\nsample = sample.replace(search, replace)\n\nAnd outputting is as simple as a print statement:\nprint defang_word(sample)\n\nSo as you can see, the two versions look almost exactly the same.\n"
] | [
20,
3,
2,
1
] | [] | [] | [
"html_entities",
"php",
"python",
"replace",
"unicode"
] | stackoverflow_0000754468_html_entities_php_python_replace_unicode.txt |
Q:
can a method call be chained to the 'set()' built-in? (and why not?)
If I try:
mi_list = ['three', 'small', 'words']
mi_set = set(mi_list)
mi_set.remove('small')
print mi_set
I get:
set(['three', 'words'])
which is what I expect. Whereas If I try:
mi_list = ['three', 'small', 'words']
mi_set = set(mi_list).remove('small')
print mi_set
I get:
None
Why?
I suspect there's a clue in that if I try to remove an element that isn't present - eg 'big' - an error gets reported:
KeyError: 'big'
A:
set.remove returns nothing (None).
Your code assigns the return value of set.remove to the variable mi_set. Therefore, mi_set is None.
A:
There is a general convention in python that methods which cause side-effects return None. Examples include list.sort, list.append, set.add, set.remove, dict.update, etc.
This is essentially to help you avoid bugs. Say you had a set called mi_set. If you could write:
mi_set2 = mi_set.remove('small')
then a reader might think: "mi_set2 is different from mi_set". But this would not be the case! And the confusion might lead to subtle bugs caused by mistakenly sharing data structures. So by returning None, python forces you to remember that methods like those I listed above modify objects, rather than creating new ones.
See also long discussion here. Although note that methods like sorted() and reversed() have been added since that thread.
[note that list.pop is an exception to this rule, for historical reasons – it lets you use a list like a stack. And even then, it returns the object removed, rather than the list.]
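A quick illustration of the convention (plain built-ins, nothing project-specific):
nums = [3, 1, 2]
print(nums.sort())        # None: sorts in place, returns nothing
print(nums)               # [1, 2, 3]
print(sorted([3, 1, 2]))  # [1, 2, 3]: builds and returns a new list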
A:
The way to go, in your case, would be to use the difference member:
>>> a = set(["a", "b", "c"])
>>> a = a.difference(["a"])
>>> print a
set(['c', 'b'])
The difference is that remove acts on the current set (python library object member functions that modify an instance usually return None), whereas difference creates and returns a new set.
A:
Why does it return None? Because .remove, .add, etc. return None :) That's it. They do not support chaining.
set is using methods that change it in place. You could create your own version of set that uses chaining, but that can cause some problems:
class chain_set(set):
def chain_add(self, x):
newself = self.copy()
newself.add(x)
return newself
cs = chain_set([1,2,3,4])
cs.chain_add(5)
# chain_set([1, 2, 3, 4, 5])
cs.chain_add(7)
# chain_set([1, 2, 3, 4, 7])
cs.chain_add(7).chain_add(8)
# chain_set([1, 2, 3, 4, 7, 8])
The problem is - do you expect cs itself to change?
Do you always want to modify the original set (might create some hard to find bugs) or do you want to copy the set every time (might be slow with bigger sets). If you know what behaviour you need and you remember about it - just go ahead with your own set implementation.
A:
Are you sure that the remove function returns a value?
A:
remove modifies the original set without returning anything (or rather, it returns None). This example shows what happens to the original object when you call remove on it:
Python 3.0.1 (r301:69561, Feb 13 2009, 20:04:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> lst = [1,2,3]
>>> s1 = set(lst)
>>> s1
{1, 2, 3}
>>> s2 = s1.remove(2) # instead of reassigning s1, I save the result of remove to s2
>>> s1
{1, 3} # *** 2 is not an element in the original set ***
>>> s2 # s2 is not a set at all!
>>>
To answer the other part of your question, the exception indicates that remove tried to remove the argument from the set, but couldn't because the argument is not in the set. Conversely, remove returns None to indicate success.
| can a method call be chained to the 'set()' built-in? (and why not?) | If I try:
mi_list = ['three', 'small', 'words']
mi_set = set(mi_list)
mi_set.remove('small')
print mi_set
I get:
set(['three', 'words'])
which is what I expect. Whereas If I try:
mi_list = ['three', 'small', 'words']
mi_set = set(mi_list).remove('small')
print mi_set
I get:
None
Why?
I suspect there's a clue in that if I try to remove an element that isn't present - eg 'big' - an error gets reported:
KeyError: 'big'
| [
"set.remove returns nothing (None).\nYour code assigns the return value of set.remove to the variable mi_set. Therefore, mi_set is None.\n",
"There is a general convention in python that methods which cause side-effects return None. Examples include list.sort, list.append, set.add, set.remove, dict.update, etc.\nThis is essentially to help you avoid bugs. Say you had a set called mi_set. If you could write:\nmi_set2 = mi_set.remove('small')\n\nthen a reader might think: \"mi_set2 is different from mi_set\". But this would not be the case! And the confusion might lead to subtle bugs caused by mistakenly sharing data structures. So by returning None, python forces you to remember that methods like those I listed above modify objects, rather than creating new ones.\nSee also long discussion here. Although note that methods like sorted() and reversed() have been added since that thread.\n[note that list.pop is an exception to this rule, for historical reasons – it lets you use a list like a stack. And even then, it returns the object removed, rather than the list.]\n",
"The way to go, in your case, would be to use the difference member:\n>>> a = set([\"a\", \"b\", \"c\"])\n>>> a = a.difference([\"a\"])\n>>> print a\nset(['c', 'b'])\n\nThe difference is that remove acts on the current set (python library object member functions that modify an instance usually return None), whereas difference creates and returns a new set.\n",
"Why does it return None? Because .remove, .add, etc. return None :) That's it. They do not support chaining.\nset is using methods that change it in place. You could create your own version of set that uses chaining, but that can cause some problems:\nclass chain_set(set):\n def chain_add(self, x):\n newself = self.copy()\n newself.add(x)\n return newself\n\ncs = chain_set([1,2,3,4])\ncs.chain_add(5)\n# chain_set([1, 2, 3, 4, 5])\ncs.chain_add(7)\n# chain_set([1, 2, 3, 4, 7])\ncs.chain_add(7).chain_add(8)\n# chain_set([1, 2, 3, 4, 7, 8])\n\nThe problem is - do you expect cs itself to change?\nDo you always want to modify the original set (might create some hard to find bugs) or do you want to copy the set every time (might be slow with bigger sets). If you know what behaviour you need and you remember about it - just go ahead with your own set implementation.\n",
"Are you sure that the remove function returns a value?\n",
"remove modifies the original set without returning anything (or rather, it returns None). This example shows what happens to the original object when you call remove on it:\nPython 3.0.1 (r301:69561, Feb 13 2009, 20:04:18) [MSC v.1500 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> lst = [1,2,3]\n>>> s1 = set(lst)\n>>> s1\n{1, 2, 3}\n>>> s2 = s1.remove(2) # instead of reassigning s1, I save the result of remove to s2\n>>> s1\n{1, 3} # *** 2 is not an element in the original set ***\n>>> s2 # s2 is not a set at all!\n>>> \n\nTo answer the other part of your question, the exception indicates that remove tried to remove the argument from the set, but couldn't because the argument is not in the set. Conversely, remove returns None to indicate success.\n"
] | [
19,
8,
1,
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000751457_python.txt |
Q:
How do you Debug/Take Apart/Learn from someone else's Python code (web-based)?
A good example of this is: http://github.com/tav/tweetapp/blob/a711404f2935c3689457c61e073105c1756b62af/app/root.py
In Visual Studio (ASP.net C#) where I come from, the classes are usually split into separate files + I can set break points to understand the code level.
If I run a program like this, do I just do "system.out" to print out where in the code I am?
I read through this https://stackoverflow.com/questions/246546/good-techniques-for-understanding-someone-elses-code which was quite helpful.
A:
You've run into a pretty specific case of code that will be hard to understand. They probably did that for the convenience of having all the code in one file.
I would recommend letting epydoc have a pass at it. It will create HTML documentation of the program. This will show you the class structure and you can even build charts of which functions call which other functions.
http://epydoc.sourceforge.net/manual-usage.html
Your other options are to break it into multiple files yourself (which I think will be tedious and not of much benefit)
A:
If you install Eclipse and PyDev you can set breakpoints in the same way you can in visual studio.
Failing that, printing out information at crucial points is often a good way to see what's going on. I quite often add in debug information that way and leave it in the code but disabled until I change a variable. I find this often helps if you break the code and need to go back and take another look at what's going on. Better still, send your debug information to a logging class and you can start to use the output in unit tests... you do test your code right? ;)
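For instance, a minimal version of that idea with the standard logging module (module name and levels chosen arbitrarily):
import logging

logging.basicConfig(level=logging.DEBUG)  # or filename='debug.log'
log = logging.getLogger(__name__)

def handler(request):
    log.debug('entering handler with request=%r', request)
    # ... the code you are trying to understand ...
    log.debug('leaving handler')

handler('demo-request')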
| How do you Debug/Take Apart/Learn from someone else's Python code (web-based)? | A good example of this is: http://github.com/tav/tweetapp/blob/a711404f2935c3689457c61e073105c1756b62af/app/root.py
In Visual Studio (ASP.net C#) where I come from, the classes are usually split into separate files + I can set break points to understand the code level.
If I run a program like this, do I just do "system.out" to print out where in the code I am?
I read through this https://stackoverflow.com/questions/246546/good-techniques-for-understanding-someone-elses-code which was quite helpful.
| [
"You've run into a pretty specific case of code that will be hard to understand. They probably did that for the convenience of having all the code in one file.\nI would recommend letting epydoc have a pass at it. It will create HTML documentation of the program. This will show you the class structure and you can even build charts of which functions call which other functions.\nhttp://epydoc.sourceforge.net/manual-usage.html\nYour other options are to break it into multiple files yourself (which I think will be tedious and not of much benefit)\n",
"If you install Eclipse and PyDev you can set breakpoints in the same way you can in visual studio.\nFailing that, printing out information at cucial points is often a good way to see what's going on. I quite often add in debug information that way and leave it in the code but disabled until I change a variable. I find this often helps if you break the code and need to go back and take another look at what's going on. Better still, send your debug information to a logging class and you can start to use the output in unit tests... you do test your code right? ;)\n"
] | [
3,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000754481_google_app_engine_python.txt |
Q:
Is there a more pythonic way to build this dictionary?
What is the "most pythonic" way to build a dictionary where I have the values in a sequence and each key will be a function of its value? I'm currently using the following, but I feel like I'm just missing a cleaner way. NOTE: values is a list that is not related to any dictionary.
for value in values:
new_dict[key_from_value(value)] = value
A:
At least it's shorter:
dict((key_from_value(value), value) for value in values)
A:
>>> l = [ 1, 2, 3, 4 ]
>>> dict( ( v, v**2 ) for v in l )
{1: 1, 2: 4, 3: 9, 4: 16}
In Python 3.0 you can use a "dict comprehension" which is basically a shorthand for the above:
{ v : v**2 for v in l }
A:
Py3K:
{ key_from_value(value) : value for value in values }
A:
This method avoids the list comprehension syntax:
dict(zip(map(key_from_value, values), values))
I will never claim to be an authority on "Pythonic", but this way feels like a good way.
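With a concrete (made-up) key function, all of the variants above produce the same mapping:
values = ['apple', 'banana', 'cherry']
key_from_value = lambda v: v[0]  # illustrative key function

print(dict((key_from_value(v), v) for v in values))
# {'a': 'apple', 'c': 'cherry', 'b': 'banana'}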
| Is there a more pythonic way to build this dictionary? | What is the "most pythonic" way to build a dictionary where I have the values in a sequence and each key will be a function of its value? I'm currently using the following, but I feel like I'm just missing a cleaner way. NOTE: values is a list that is not related to any dictionary.
for value in values:
new_dict[key_from_value(value)] = value
| [
"At least it's shorter:\ndict((key_from_value(value), value) for value in values)\n\n",
">>> l = [ 1, 2, 3, 4 ]\n>>> dict( ( v, v**2 ) for v in l )\n{1: 1, 2: 4, 3: 9, 4: 16}\n\nIn Python 3.0 you can use a \"dict comprehension\" which is basically a shorthand for the above:\n{ v : v**2 for v in l }\n\n",
"Py3K:\n{ key_for_value(value) : value for value in values }\n\n",
"This method avoids the list comprehension syntax:\ndict(zip(map(key_from_value, values), values))\n\nI will never claim to be an authority on \"Pythonic\", but this way feels like a good way.\n"
] | [
18,
15,
5,
0
] | [] | [] | [
"python"
] | stackoverflow_0000753986_python.txt |
Q:
finditer hangs when matching against long string
I have a somewhat complex regular expression which I'm trying to match against a long string (65,535 characters). I'm looking for multiple occurrences of the re in the string, and so am using finditer. It works, but for some reason it hangs after identifying the first few occurrences. Does anyone know why this might be? Here's the code snippet:
pattern = "(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)"
matches = re.finditer(pattern, string)
for match in matches:
print "(%d-%d): %s" % (match.start(), match.end(), match.group())
It prints out the first four occurrences, but then it hangs. When I kill it using Ctrl-C, it tells me it was killed in the iterator:
Traceback (most recent call last):
File "code.py", line 133, in <module>
main(sys.argv[1:])
File "code.py", line 106, in main
for match in matches:
KeyboardInterrupt
If I try it with a simpler re, it works fine.
I'm running this on python 2.5.4 running on Cygwin on Windows XP.
I managed to get it to hang with a very much shorter string. With this 50 character string, it never returned after about 5 minutes:
ddddddeddbedddbddddddddddddddddddddddddddddddddddd
With this 39 character string it took about 15 seconds to return (and display no matches):
ddddddeddbedddbdddddddddddddddddddddddd
And with this string it returns instantly:
ddddddeddbedddbdddddddddddddd
A:
Could it be that your expression triggers exponential behavior in the Python RE engine?
This article deals with the problem. If you have the time, you might want to try running your expression in an RE engine developed using those ideas.
A:
Definitely exponential behaviour. You've got so many d* parts to your regexp that it'll be backtracking like crazy when it gets to the long string of d's, but fails to match something earlier. You need to rethink the regexp, so it has fewer possible paths to try.
In particular I think:
([ef]d*b|d*)* and ([ef]|([gh]d*(ad*[gh]d)*b))d*b
Might need rethinking, as they'll force a retry of the alternate match. Plus they also overlap in terms of what they match. They'd both match edb for example, but if one fails and tries to backtrack the other part will probably have the same behaviour.
So in short try not to use the | if you can and try to make sure the patterns don't overlap where possible.
A:
Thanks to all the responses, which were very helpful. In the end, surprisingly, it was easy to speed it up. Here's the original regex:
(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)
I noticed that the |d* near the end was not really what I needed, so I modified it as follows:
(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*bd*)*c)
Now it works almost instantaneously on the 65,536 character string. I guess now I just have to make sure that the regex is really matching the strings I need it to match...
A:
I think you experience what is known as "catastrophic backtracking".
Your regex has many optional/alternative parts, all of which still try to match, so previous sub-expressions give back characters to the following expression on local failure. This leads to a back-and-forth behavior within the regex and exponentially rising execution times.
The third-party regex module supports atomic grouping and possessive quantifiers (the built-in re module only gained them in Python 3.11); you could examine your regex to identify the parts that should match or fail as a whole. Unnecessary backtracking can be brought under control with that.
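A self-contained demonstration of the effect; the pattern here is a textbook pathological one, not the asker's:
import re, time

pattern = re.compile(r'(a+)+$')  # classic catastrophic-backtracking pattern
for n in (18, 20, 22, 24):
    s = 'a' * n + 'b'            # the trailing 'b' forces total failure
    start = time.time()
    pattern.search(s)
    print(n, round(time.time() - start, 2))  # time grows exponentially with n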
A:
catastrophic backtracking!
Regular Expressions can be very expensive. Certain (unintended and intended) strings may cause RegExes to exhibit exponential behavior. We've taken several hotfixes for this. RegExes are so handy, but devs really need to understand how they work; we've gotten bitten by them.
example and debugger:
http://www.codinghorror.com/blog/archives/000488.html
A:
You already gave yourself the answer: The regular expression is too complex and ambiguous.
You should try to find a less complex and more distinct expression that is easier to process. Or tell us what you want to accomplish and we could try to help you to find one.
Edit If you just want to allow ds in every position as you said in a comment to John Montgomery’s answer, you should remove them before testing the pattern:
import re
string = "ddddddeddbedddbddddddddddddddddddddddddddddddddddd"
pattern = "(([ef]|([gh](a[gh])*b))b([ef]b)*c)"
matches = re.finditer(pattern, re.sub("d+", "", string))
for match in matches:
print "(%d-%d): %s" % (match.start(), match.end(), match.group())
| finditer hangs when matching against long string | I have a somewhat complex regular expression which I'm trying to match against a long string (65,535 characters). I'm looking for multiple occurrences of the re in the string, and so am using finditer. It works, but for some reason it hangs after identifying the first few occurrences. Does anyone know why this might be? Here's the code snippet:
pattern = "(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)"
matches = re.finditer(pattern, string)
for match in matches:
print "(%d-%d): %s" % (match.start(), match.end(), match.group())
It prints out the first four occurrences, but then it hangs. When I kill it using Ctrl-C, it tells me it was killed in the iterator:
Traceback (most recent call last):
File "code.py", line 133, in <module>
main(sys.argv[1:])
File "code.py", line 106, in main
for match in matches:
KeyboardInterrupt
If I try it with a simpler re, it works fine.
I'm running this on python 2.5.4 running on Cygwin on Windows XP.
I managed to get it to hang with a very much shorter string. With this 50 character string, it never returned after about 5 minutes:
ddddddeddbedddbddddddddddddddddddddddddddddddddddd
With this 39 character string it took about 15 seconds to return (and display no matches):
ddddddeddbedddbdddddddddddddddddddddddd
And with this string it returns instantly:
ddddddeddbedddbdddddddddddddd
| [
"Could it be that your expression triggers exponential behavior in the Python RE engine?\nThis article deals with the problem. If you have the time, you might want to try running your expression in an RE engine developed using those ideas.\n",
"Definitely exponential behaviour. You've got so many d* parts to your regexp that it'll be backtracking like crazy when it gets to the long string of d's, but fails to match something earlier. You need to rethink the regexp, so it has less possible paths to try.\nIn particular I think:\n([ef]d\\*b|d\\*)*</pre></code> and <code><pre>([ef]|([gh]d\\*(ad\\*[gh]d)\\*b))d\\*b\n\nMight need rethinking, as they'll force a retry of the alternate match. Plus they also overlap in terms of what they match. They'd both match edb for example, but if one fails and tries to backtrack the other part will probably have the same behaviour.\nSo in short try not to use the | if you can and try to make sure the patterns don't overlap where possible.\n",
"Thanks to all the responses, which were very helpful. In the end, surprisingly, it was easy to speed it up. Here's the original regex:\n(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)\n\nI noticed that the |d* near the end was not really what I needed, so I modified it as follows:\n(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*bd*)*c)\n\nNow it works almost instantaneously on the 65,536 character string. I guess now I just have to make sure that the regex is really matching the strings I need it to match...\n",
"I think you experience what is known as \"catastrophic backtracking\".\nYour regex has many optional/alternative parts, all of which still try to match, so previous sub-expressions give back characters to the following expression on local failure. This leads to a back-and-fourth behavior within the regex and exponentially rising execution times.\nPython (2.7+?, I'm not sure) supports atomic grouping and possessive quantifiers, you could examine your regex to identify the parts that should match or fail as a whole. Unnecessary backtracking can be brought under control with that.\n",
"catastrophic backtracking!\n\nRegular Expressions can be very expensive. Certain (unintended and intended) strings may cause RegExes to exhibit exponential behavior. We've taken several hotfixes for this. RegExes are so handy, but devs really need to understand how they work; we've gotten bitten by them. \n\nexample and debugger:\nhttp://www.codinghorror.com/blog/archives/000488.html\n",
"You already gave yourself the answer: The regular expression is to complex and ambiguous.\nYou should try to find a less complex and more distinct expression that is easier to process. Or tell us what you want to accomplish and we could try to help you to find one.\n\nEdit If you just want to allow ds in every position as you said in a comment to John Montgomery’s answer, you should remove them before testing the pattern:\nimport re\n\nstring = \"ddddddeddbedddbddddddddddddddddddddddddddddddddddd\"\npattern = \"(([ef]|([gh](a[gh])*b))b([ef]b)*c)\"\nmatches = re.finditer(pattern, re.sub(\"d+\", \"\", string))\nfor match in matches:\n print \"(%d-%d): %s\" % (match.start(), match.end(), match.group())\n\n"
] | [
5,
5,
3,
2,
2,
1
] | [] | [] | [
"performance",
"python",
"regex"
] | stackoverflow_0000755332_performance_python_regex.txt |
Q:
Python/wxPython: Doing work continuously in the background
I'm making a program for running simulations in Python, with a wxPython interface. In the program, you can create a simulation, and the program renders (=calculates) it for you. Rendering can be very time-consuming sometimes.
When the user starts a simulation, and defines an initial state, I want the program to render the simulation continuously in the background, while the user may be doing different things in the program. Sort of like a YouTube-style bar that fills up: You can play the simulation only up to the point that was rendered.
How should I run the rendering function?
A:
I would use a threading.Thread to run the code in the background and wx.CallAfter to post updates to my window thread to render them to the user.
thread = threading.Thread(target=self.do_work)
thread.setDaemon(True)
thread.start()
...
def do_work(self):
# processing code here
while processing:
# do stuff
wx.CallAfter(self.update_view, args, kwargs)
def update_view(self, args):
# do stuff with args
# since wx.CallAfter was used, it's safe to do GUI stuff here
A:
There's a fair bit of info on the wxPython wiki about long running tasks that might be useful. They basically make use a thread and wx.PostEvent to handle communication between the thread and the main wx event loop.
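Roughly, the wiki's pattern looks like the following sketch (the event class and the worker body are illustrative):
import threading
import wx
import wx.lib.newevent

ResultEvent, EVT_RESULT = wx.lib.newevent.NewEvent()

class RenderThread(threading.Thread):
    def __init__(self, window):
        threading.Thread.__init__(self)
        self.window = window

    def run(self):
        for step in range(100):
            # ... render one chunk of the simulation here ...
            wx.PostEvent(self.window, ResultEvent(progress=step))

# in the frame: self.Bind(EVT_RESULT, self.on_result)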
A:
Launch a new process to render in background and periodically check to see if it has returned.
You can find the documentation for the subprocess module here and the multiprocessing module here. As Jay said, multiprocessing is probably better if you're using Python 2.6. That said, I don't think there would be any performance difference between the two. multiprocessing essentially provides a higher-level API on top of process spawning, making certain things easier to do.
While subprocess/multiprocessing is the standard way to do this, you may also want to take a look at Parallel Python.
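For example, a bare-bones sketch with the standard-library multiprocessing module (Python 2.6+; the render function is a stand-in):
from multiprocessing import Process, Queue

def render(queue):
    result = sum(i * i for i in range(10 ** 6))  # stand-in for real rendering
    queue.put(result)

if __name__ == '__main__':
    queue = Queue()
    worker = Process(target=render, args=(queue,))
    worker.start()
    # the GUI keeps running; poll queue.empty() periodically instead of blocking
    print(queue.get())
    worker.join()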
A:
If you don't mind using a very slightly different approach, you can have a look at stackless python and create a tasklet for your rendering process. I find it very easy to use personally.
| Python/wxPython: Doing work continuously in the background | I'm making a program for running simulations in Python, with a wxPython interface. In the program, you can create a simulation, and the program renders (=calculates) it for you. Rendering can be very time-consuming sometimes.
When the user starts a simulation, and defines an initial state, I want the program to render the simulation continuously in the background, while the user may be doing different things in the program. Sort of like a YouTube-style bar that fills up: You can play the simulation only up to the point that was rendered.
How should I run the rendering function?
| [
"I would use a threading.Thread to run the code in the background and wx.CallAfter to post updates to my window thread to render them to the user.\nthread = threading.Thread(target=self.do_work)\nthread.setDaemon(True)\nthread.start()\n\n...\n\ndef do_work(self):\n # processing code here\n while processing:\n # do stuff\n wx.CallAfter(self.update_view, args, kwargs)\n\ndef update_view(self, args):\n # do stuff with args\n # since wx.CallAfter was used, it's safe to do GUI stuff here\n\n",
"There's a fair bit of info on the wxPython wiki about long running tasks that might be useful. They basically make use a thread and wx.PostEvent to handle communication between the thread and the main wx event loop.\n",
"Launch a new process to render in background and periodically check to see if it has returned.\nYou can find the documentation for the subprocess module here and the multiprocess module here. As Jay said, multiprocess is probably better if you're using Python 2.6. That said, I don't think there would be any performance difference between the two. Multiprocess just seems to be a wrapper around subprocess making certain things easier to do.\nWhile subprocess/multiprocess is the standard way to do this, you may also want to take a look at Parallel Python.\n",
"If you don't mind using a very slightly different approach, you can have a look at stackless python and create a tasklet for your rendering process. I find it very easy to use personally.\n"
] | [
10,
7,
4,
0
] | [] | [] | [
"background",
"multithreading",
"python",
"wxpython"
] | stackoverflow_0000730645_background_multithreading_python_wxpython.txt |
Q:
Can you write a permutation function just as elegantly in C#?
I like this 6 line solution a lot and am trying to replicate it in C#. Basically, it permutes the elements of an array:
def permute(xs, pre=[]):
if len(xs) == 0:
yield pre
for i, x in enumerate(xs):
for y in permute(xs[:i] + xs[i+1:], pre + [x]):
yield y
A:
Well, it probably isn't how I'd write it, but:
static IEnumerable<T[]> Permute<T>(this T[] xs, params T[] pre) {
if (xs.Length == 0) yield return pre;
for (int i = 0; i < xs.Length; i++) {
foreach (T[] y in Permute(xs.Take(i).Union(xs.Skip(i+1)).ToArray(), pre.Union(new[] { xs[i] }).ToArray())) {
yield return y;
}
}
}
Re your comment; I'm not entirely clear on the question; if you mean "why is this useful?" - among other things, there are a range of brute-force scenarios where you would want to try different permutations - for example, for small ordering problems like travelling sales person (that aren't big enough to warrant a more sophisticated solution), you might want to check whether it is best to go {base,A,B,C,base}, {base,A,C,B,base},{base,B,A,C,base}, etc.
If you mean "how would I use this method?" - untested, but something like:
int[] values = {1,2,3};
foreach(int[] perm in values.Permute()) {
WriteArray(perm);
}
void WriteArray<T>(T[] values) {
StringBuilder sb = new StringBuilder();
foreach(T value in values) {
sb.Append(value).Append(", ");
}
Console.WriteLine(sb);
}
If you mean "how does it work?" - iterator blocks (yield return) are a complex subject in themselves - Jon has a free chapter (6) in his book, though. The rest of the code is very much like your original question - just using LINQ to provide the moral equivalent of + (for arrays).
A:
C# has a yield keyword that I imagine works pretty much the same as what your python code is doing, so it shouldn't be too hard to get a mostly direct translation.
However this is a recursive solution, so for all it's brevity it's sub-optimal. I don't personally understand all the math involved, but for good efficient mathematical permutations you want to use factoradics. This article should help:
http://msdn.microsoft.com/en-us/library/aa302371.aspx
[Update]: The other answer brings up a good point: if you're just using permutations to do a shuffle there are still better options available. Specifically, the Knuth/Fisher-Yates shuffle.
A:
While you cannot port it while maintaining the brevity, you can get pretty close.
public static class IEnumerableExtensions
{
public static IEnumerable<IEnumerable<T>> Permutations<T>(this IEnumerable<T> source)
{
if (source == null)
throw new ArgumentNullException("source");
return PermutationsImpl(source, new T[0]);
}
private static IEnumerable<IEnumerable<T>> PermutationsImpl<T>(IEnumerable<T> source, IEnumerable<T> prefix)
{
if (source.Count() == 0)
yield return prefix;
foreach (var x in source)
foreach (var permutation in PermutationsImpl(source.Except(new T[] { x }),
                                                     prefix.Union(new T[] { x })))
yield return permutation;
}
}
| Can you write a permutation function just as elegantly in C#? | I like this 6 line solution a lot and am trying to replicate it in C#. Basically, it permutes the elements of an array:
def permute(xs, pre=[]):
if len(xs) == 0:
yield pre
for i, x in enumerate(xs):
for y in permute(xs[:i] + xs[i+1:], pre + [x]):
yield y
| [
"Well, it probably isn't how I'd write it, but:\nstatic IEnumerable<T[]> Permute<T>(this T[] xs, params T[] pre) {\n if (xs.Length == 0) yield return pre;\n for (int i = 0; i < xs.Length; i++) {\n foreach (T[] y in Permute(xs.Take(i).Union(xs.Skip(i+1)).ToArray(), pre.Union(new[] { xs[i] }).ToArray())) {\n yield return y;\n }\n }\n}\n\n\nRe your comment; I'm not entirely clear on the question; if you mean \"why is this useful?\" - among other things, there are a range of brute-force scenarios where you would want to try different permutations - for example, for small ordering problems like travelling sales person (that aren't big enough to warrant a more sophisticated solution), you might want to check whether it is best to go {base,A,B,C,base}, {base,A,C,B,base},{base,B,A,C,base}, etc.\nIf you mean \"how would I use this method?\" - untested, but something like:\nint[] values = {1,2,3};\nforeach(int[] perm in values.Permute()) {\n WriteArray(perm);\n}\n\nvoid WriteArray<T>(T[] values) {\n StringBuilder sb = new StringBuilder();\n foreach(T value in values) {\n sb.Append(value).Append(\", \");\n }\n Console.WriteLine(sb);\n}\n\nIf you mean \"how does it work?\" - iterator blocks (yield return) are a complex subject in themselves - Jon has a free chapter (6) in his book, though. The rest of the code is very much like your original question - just using LINQ to provide the moral equivalent of + (for arrays).\n",
"C# has a yield keyword that I imagine works pretty much the same as what your python code is doing, so it shouldn't be too hard to get a mostly direct translation.\nHowever this is a recursive solution, so for all it's brevity it's sub-optimal. I don't personally understand all the math involved, but for good efficient mathematical permutations you want to use factoradics. This article should help:\nhttp://msdn.microsoft.com/en-us/library/aa302371.aspx\n[Update]: The other answer brings up a good point: if you're just using permutations to do a shuffle there are still better options available. Specifically, the Knuth/Fisher-Yates shuffle.\n",
"While you cannot port it while maintaining the brevity, you can get pretty close.\npublic static class IEnumerableExtensions\n{\n public static IEnumerable<IEnumerable<T>> Permutations<T>(this IEnumerable<T> source)\n {\n if (source == null)\n throw new ArgumentNullException(\"source\");\n\n return PermutationsImpl(source, new T[0]);\n }\n\n private static IEnumerable<IEnumerable<T>> PermutationsImpl<T>(IEnumerable<T> source, IEnumerable<T> prefix)\n {\n if (source.Count() == 0)\n yield return prefix;\n\n foreach (var x in source)\n foreach (var permutation in PermutationsImpl(source.Except(new T[] { x }),\n prefix.Union(new T[] { x }))))\n yield return permutation;\n }\n}\n\n"
] | [
12,
1,
0
] | [
"Not entirely to the point I must admit after some comments, but the code below can be used to generate a random permutation of a finite sequence. It's a variation of the Fisher-Yates shuffle algorithm. The example uses a sequence of int's but you can use any Enumerable<T> of course.\nvar ints = Enumerable.Range(0, 51);\nvar shuffledInts = ints.OrderBy(a => Guid.NewGuid());\n\nYou order by a random value (in this case a Guid) which essentially permutates your list. Whether NewGuid is a good source of randomness is debatable, but it's an elegant and compact solution (albeit for another problem then the question was actually about).\nTaken from Jeff Atwood (Coding Horror).\n"
] | [
-6
] | [
"algorithm",
"c#",
"python"
] | stackoverflow_0000756223_algorithm_c#_python.txt |
Q:
How to select at the same time from two Listbox?
from Tkinter import *
master = Tk()
listbox = Listbox(master)
listbox.pack()
listbox.insert(END, "a list entry")
for item in ["one", "two", "three", "four"]:
listbox.insert(END, item)
listbox2 = Listbox(master)
listbox2.pack()
listbox2.insert(END, "a list entry")
for item in ["one", "two", "three", "four"]:
listbox2.insert(END, item)
master.mainloop()
The code above creates a tkinter window with two listboxes. But there's a problem if you want to retrieve the values from both because, as soon as you select a value in one, it deselects whatever you selected in the other.
Is this just a limitation developers have to live with?
A:
Short answer: set the value of the exportselection attribute of all listbox widgets to False or zero.
From a pythonware overview of the listbox widget:
By default, the selection is exported
to the X selection mechanism. If you
have more than one listbox on the
screen, this really messes things up
for the poor user. If he selects
something in one listbox, and then
selects something in another, the
original selection is cleared. It is
usually a good idea to disable this
mechanism in such cases. In the
following example, three listboxes are
used in the same dialog:
b1 = Listbox(exportselection=0)
for item in families:
b1.insert(END, item)
b2 = Listbox(exportselection=0)
for item in fonts:
b2.insert(END, item)
b3 = Listbox(exportselection=0)
for item in styles:
b3.insert(END, item)
The definitive documentation for tk widgets is based on the Tcl language rather than python, but it is easy to translate to python. The exportselection attribute can be found on the standard options manual page.
A:
exportselection=0 when defining a listbox seems to take care of this issue.
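Applied to the code in the question, only the two constructor calls change:
listbox = Listbox(master, exportselection=0)
listbox2 = Listbox(master, exportselection=0)
# selecting in one box no longer clears the selection in the other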
| How to select at the same time from two Listbox? | from Tkinter import *
master = Tk()
listbox = Listbox(master)
listbox.pack()
listbox.insert(END, "a list entry")
for item in ["one", "two", "three", "four"]:
listbox.insert(END, item)
listbox2 = Listbox(master)
listbox2.pack()
listbox2.insert(END, "a list entry")
for item in ["one", "two", "three", "four"]:
listbox2.insert(END, item)
master.mainloop()
The code above creates a tkinter window with two listboxes. But there's a problem if you want to retrieve the values from both because, as soon as you select a value in one, it deselects whatever you selected in the other.
Is this just a limitation developers have to live with?
| [
"Short answer: set the value of the exportselection attribute of all listbox widgets to False or zero.\nFrom a pythonware overview of the listbox widget:\n\nBy default, the selection is exported\n to the X selection mechanism. If you\n have more than one listbox on the\n screen, this really messes things up\n for the poor user. If he selects\n something in one listbox, and then\n selects something in another, the\n original selection is cleared. It is\n usually a good idea to disable this\n mechanism in such cases. In the\n following example, three listboxes are\n used in the same dialog:\nb1 = Listbox(exportselection=0)\nfor item in families:\n b1.insert(END, item)\n\nb2 = Listbox(exportselection=0)\nfor item in fonts:\n b2.insert(END, item)\n\nb3 = Listbox(exportselection=0)\nfor item in styles:\n b3.insert(END, item)\n\n\nThe definitive documentation for tk widgets is based on the Tcl language rather than python, but it is easy to translate to python. The exportselection attribute can be found on the standard options manual page. \n",
"exportselection=0 when defining a listbox seems to take care of this issue.\n"
] | [
46,
8
] | [] | [] | [
"listbox",
"python",
"tkinter"
] | stackoverflow_0000756662_listbox_python_tkinter.txt |
Q:
Matching a pair of comments in HTML using regular expressions
I have a mako template that looks something like this:
% if staff:
<!-- begin staff -->
...
<!-- end staff -->
% endif
That way if I pass the staff variable as being True, those comments should appear. I'm trying to test this by using a regular expression that looks like this:
re.search('<!-- begin staff -->.*<!-- end staff -->', text)
I've verified that the comments appear in the HTML output, but the regular expression doesn't match. I've even tried putting the comments (<!-- begin staff --> and <!-- end staff -->) through re.escape, but still no luck. What am I doing wrong?
Or is there a better way to run this test?
A:
By default . doesn't match newlines - you need to add the re.DOTALL option.
re.search('<!-- begin staff -->.*<!-- end staff -->', text, re.DOTALL)
If you have more than one staff section, you might also want to make the match ungreedy:
re.search('<!-- begin staff -->.*?<!-- end staff -->', text, re.DOTALL)
A:
Use an HTML Parser like HTMLParser instead. See Can you provide some examples of why it is hard to parse XML and HTML with a regex? for why.
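A sketch of that approach for this specific test (Python 2's HTMLParser module; it lives at html.parser in Python 3):
from HTMLParser import HTMLParser

class CommentCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.comments = []

    def handle_comment(self, data):
        self.comments.append(data.strip())

text = '<!-- begin staff -->\n...\n<!-- end staff -->'  # stand-in for the rendered page
parser = CommentCollector()
parser.feed(text)
assert 'begin staff' in parser.comments
assert 'end staff' in parser.comments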
| Matching a pair of comments in HTML using regular expressions | I have a mako template that looks something like this:
% if staff:
<!-- begin staff -->
...
<!-- end staff -->
% endif
That way if I pass the staff variable as being True, those comments should appear. I'm trying to test this by using a regular expression that looks like this:
re.search('<!-- begin staff -->.*<!-- end staff -->', text)
I've verified that the comments appear in the HTML output, but the regular expression doesn't match. I've even tried putting the comments (<!-- begin staff --> and <!-- end staff -->) through re.escape, but still no luck. What am I doing wrong?
Or is there a better way to run this test?
| [
"By default . doesn't match newlines - you need to add the re.DOTALL option.\nre.search('<!-- begin staff -->.*<!-- end staff -->', text, re.DOTALL)\n\nIf you have more than one staff section, you might also want to make the match ungreedy:\nre.search('<!-- begin staff -->.*?<!-- end staff -->', text, re.DOTALL)\n\n",
"Use an HTML Parser like HTMLParser instead. See Can you provide some examples of why it is hard to parse XML and HTML with a regex? for why.\n"
] | [
9,
2
] | [] | [] | [
"mako",
"python",
"regex",
"unit_testing"
] | stackoverflow_0000756898_mako_python_regex_unit_testing.txt |
Q:
Difference in regex behavior between Perl and Python?
I have a couple email addresses, 'support@company.com' and '1234567@tickets.company.com'.
In perl, I could take the To: line of a raw email and find either of the above addresses with
/\w+@(tickets\.)?company\.com/i
In python, I simply wrote the above regex as'\w+@(tickets\.)?company\.com' expecting the same result. However, support@company.com isn't found at all and a findall on the second returns a list containing only 'tickets.'. So clearly the '(tickets\.)?' is the problem area, but what exactly is the difference in regular expression rules between Perl and Python that I'm missing?
A:
The documentation for re.findall:
findall(pattern, string, flags=0)
Return a list of all non-overlapping matches in the string.
If one or more groups are present in the pattern, return a
list of groups; this will be a list of tuples if the pattern
has more than one group.
Empty matches are included in the result.
Since (tickets\.) is a group, findall returns that instead of the whole match. If you want the whole match, put a group around the whole pattern and/or use non-grouping matches, i.e.
r'(\w+@(tickets\.)?company\.com)'
r'\w+@(?:tickets\.)?company\.com'
Note that you'll have to pick out the first element of each tuple returned by findall in the first case.
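Used on the addresses from the question, the non-grouping form returns the full matches directly:
import re

to_line = 'To: support@company.com, 1234567@tickets.company.com'
print(re.findall(r'\w+@(?:tickets\.)?company\.com', to_line))
# ['support@company.com', '1234567@tickets.company.com']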
A:
I think the problem is in your expectations of extracted values. Try using this in your current Python code:
'(\w+@(?:tickets\.)?company\.com)'
A:
Two problems jump out at me:
You need to use a raw string to avoid having to escape "\"
You need to escape "."
So try:
r'\w+@(tickets\.)?company\.com'
EDIT
Sample output:
>>> import re
>>> exp = re.compile(r'\w+@(tickets\.)?company\.com')
>>> bool(exp.match("s@company.com"))
True
>>> bool(exp.match("1234567@tickets.company.com"))
True
A:
There isn't a difference in the regexes, but there is a difference in what you are looking for. In both regexes, the capturing group only grabs "tickets." when it is present. You probably want something like this
#!/usr/bin/python
import re
regex = re.compile("(\w+@(?:tickets\.)?company\.com)");
a = [
"foo@company.com",
"foo@tickets.company.com",
"foo@ticketsacompany.com",
"foo@compant.org"
];
for string in a:
print regex.findall(string)
| Difference in regex behavior between Perl and Python? | I have a couple email addresses, 'support@company.com' and '1234567@tickets.company.com'.
In perl, I could take the To: line of a raw email and find either of the above addresses with
/\w+@(tickets\.)?company\.com/i
In python, I simply wrote the above regex as'\w+@(tickets\.)?company\.com' expecting the same result. However, support@company.com isn't found at all and a findall on the second returns a list containing only 'tickets.'. So clearly the '(tickets\.)?' is the problem area, but what exactly is the difference in regular expression rules between Perl and Python that I'm missing?
| [
"The documentation for re.findall:\n\nfindall(pattern, string, flags=0)\n Return a list of all non-overlapping matches in the string.\n\n If one or more groups are present in the pattern, return a\n list of groups; this will be a list of tuples if the pattern\n has more than one group.\n\n Empty matches are included in the result.\n\n\nSince (tickets\\.) is a group, findall returns that instead of the whole match. If you want the whole match, put a group around the whole pattern and/or use non-grouping matches, i.e.\nr'(\\w+@(tickets\\.)?company\\.com)'\nr'\\w+@(?:tickets\\.)?company\\.com'\n\nNote that you'll have to pick out the first element of each tuple returned by findall in the first case.\n",
"I think the problem is in your expectations of extracted values. Try using this in your current Python code:\n'(\\w+@(?:tickets\\.)?company\\.com)'\n\n",
"Two problems jump out at me:\n\nYou need to use a raw string to avoid having to escape \"\\\"\nYou need to escape \".\"\n\nSo try:\nr'\\w+@(tickets\\.)?company\\.com'\n\nEDIT\nSample output:\n>>> import re\n>>> exp = re.compile(r'\\w+@(tickets\\.)?company\\.com')\n>>> bool(exp.match(\"s@company.com\"))\nTrue\n>>> bool(exp.match(\"1234567@tickets.company.com\"))\nTrue\n\n",
"There isn't a difference in the regexes, but there is a difference in what you are looking for. Your regex is capturing only \"tickets.\" if it exists in both regexes. You probably want something like this\n#!/usr/bin/python\n\nimport re\n\nregex = re.compile(\"(\\w+@(?:tickets\\.)?company\\.com)\");\n\na = [\n \"foo@company.com\", \n \"foo@tickets.company.com\", \n \"foo@ticketsacompany.com\",\n \"foo@compant.org\"\n];\n\nfor string in a:\n print regex.findall(string)\n\n"
] | [
7,
4,
2,
1
] | [] | [] | [
"perl",
"python",
"regex"
] | stackoverflow_0000757476_perl_python_regex.txt |
Q:
How do I send large amounts of data from a forked process?
I have a ctypes wrapper for a library. Unfortunately, this library is not 100% reliable (occasional segfaults, etc.). Because of how it's used, I want the wrapper to be reasonably resilient to the library crashing.
The best way to do this seems to be forking a process and sending the results back from the child. I'd like to do something along these lines:
r, w = os.pipe()
pid = os.fork()
if pid == 0:
# child
result = ctypes_fn()
os.write(w, pickle.dumps(result))
os.close(w)
else:
# parent
os.waitpid(pid, 0)
result = os.read(r, 524288) # can be this big
os.close(r)
return pickle.loads(result)
This doesn't quite work, though. The forked process hangs on the write. Am I trying to send too much at once? Is there a simpler solution to this problem?
A:
Probably you are trying to write more data than can fit into the pipe, so it is blocking until someone comes along and reads some of that info out of there. That will never happen, because the only reader is the parent process, which you appear to have written to wait until the child terminates before it reads anything. This is what we call a deadlock.
You might consider taking out that os.waitpid call and see what happens. Another option would be to see if os.pipe has any methods that give it a bigger buffer (I don't know your environment enough to say).
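One way to break that deadlock is to drain the pipe before waiting; a minimal sketch along the lines of the question's code (ctypes_fn is the wrapped call from the question):
import os, pickle

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # child
    os.close(r)
    os.write(w, pickle.dumps(ctypes_fn()))
    os.close(w)
    os._exit(0)
else:
    # parent: read until EOF *before* reaping the child
    os.close(w)
    chunks = []
    while True:
        data = os.read(r, 65536)
        if not data:
            break
        chunks.append(data)
    os.close(r)
    os.waitpid(pid, 0)
    result = pickle.loads(''.join(chunks))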
A:
The basic problem is that there's a 64kB limit on the pipe. A few possible solutions, from the simple to the complex:
Send less data. zlib.compress could help in getting under the limit.
Store the actual data somewhere else (file, mmap, memcache), only using the pipe to send control information.
Continue using the pipe, but chunk the output. Use two sets of pipes so the processes can talk to each other and synchronize their communication. The code is more complex, but is otherwise very effective.
A:
One solution to the deadlock that ted.dennison mentioned is the following pseudocode:
#parent
import time
result = ''
while os.waitpid(pid, os.WNOHANG) == (0, 0):
    result += os.read(r, 1024)
    time.sleep(0.1)  #sleep for a short time
#at this point the child process has ended
#and you need the last bit of data from the pipe
result += os.read(r, 1024)
os.close(r)
Waitpid with the WNOHANG option causes waitpid to return immediately when the child process hasn't exited yet; in that case it returns (0, 0). Note that the loop accumulates into result with +=; overwriting the variable on each pass through the loop would lose data.
| How do I send large amounts of data from a forked process? | I have a ctypes wrapper for a library. Unfortunately, this library is not 100% reliable (occasional segfaults, etc.). Because of how it's used, I want the wrapper to be reasonably resilient to the library crashing.
The best way to do this seems to be forking a process and sending the results back from the child. I'd like to do something along these lines:
r, w = os.pipe()
pid = os.fork()
if pid == 0:
# child
result = ctypes_fn()
os.write(w, pickle.dumps(result))
os.close(w)
else:
# parent
os.waitpid(pid, 0)
result = os.read(r, 524288) # can be this big
os.close(r)
return pickle.loads(result)
This doesn't quite work, though. The forked process hangs on the write. Am I trying to send too much at once? Is there a simpler solution to this problem?
| [
"Probably you are trying to write more data than can fit into the pipe, so it is blocking until someone comes along and reads some of that info out of there. That will never happen, because the only reader is the parent process, which you appear to have written to wait until the child terminates before it reads anything. This is what we call a deadlock.\nYou might consider taking out that os.waitpid call and see what happens. Another option would be to see if os.pipe has any methods that give it a bigger buffer (I don't know your environment enough to say).\n",
"The basic problem is that there's a 64kB limit on the pipe. A few possible solutions, from the simple to the complex:\n\nSend less data. zlib.compress could help in getting under the limit.\nStore the actual data somewhere else (file, mmap, memcache), only using the pipe to send control information.\nContinue using the pipe, but chunk the output. Use two sets of pipes so the processes can talk to each other and synchronize their communication. The code is more complex, but is otherwise very effective.\n\n",
"One solution to the deadlock that ted.dennison mentioned is the following pseudocode:\n#parent\nwhile waitpid(pid, WNOHANG) == (0, 0):\n result = os.read(r, 1024)\n #sleep for a short time\n#at this point the child process has ended \n#and you need the last bit of data from the pipe\nresult = os.read(r, 1024)\nos.close(r)\n\nWaitpid with the WNOHANG option causes waitpid to return immediately when the child process hasn't exited yet. In this case it returns (0,0). You'll need to make sure not to overwrite the result variable each time through the loop like the above code does.\n"
] | [
4,
2,
0
] | [] | [] | [
"fork",
"pipe",
"python"
] | stackoverflow_0000757020_fork_pipe_python.txt |
Q:
"ImportError: No module named dummy" on fresh Django project
I've got the following installed through MacPorts on MacOS X 10.5.6:
py25-sqlite3 @2.5.4_0 (active)
python25 @2.5.4_1+darwin_9+macosx (active)
sqlite3 @3.6.12_0 (active)
python25 is correctly set as my system's default Python.
I downloaded a fresh copy of Django 1.1 beta (I have the same problem with 1.0 and trunk, though) and installed it with "sudo python setup.py install".
Things seem to load correctly through the interactive interpreter:
$ python
Python 2.5.4 (r254:67916, Apr 10 2009, 16:02:52)
[GCC 4.0.1 (Apple Inc. build 5490)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> import sqlite3
>>> ^D
But:
$ django-admin.py startproject foo
$ cd foo/
$ python manage.py runserver
Validating models...
Unhandled exception in thread started by <function inner_run at 0x6c1e70>
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
self.validate(display_num_errors=True)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/base.py", line 246, in validate
num_errors = get_validation_errors(s, app)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/validation.py", line 22, in get_validation_errors
from django.db import models, connection
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/db/__init__.py", line 22, in <module>
backend = __import__('%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
ImportError: No module named dummy.base
If I change DATABASE_ENGINE in settings.py to "sqlite3", I get the following, seemingly related problem:
$ python manage.py runserver
Validating models...
Unhandled exception in thread started by <function inner_run at 0x6c1e70>
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
self.validate(display_num_errors=True)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/base.py", line 246, in validate
num_errors = get_validation_errors(s, app)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/validation.py", line 22, in get_validation_errors
from django.db import models, connection
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/db/__init__.py", line 22, in <module>
backend = __import__('%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
ImportError: No module named base
^C$
I swear this all worked a few days ago and I don't recall changing anything related to Django or Python, installation-wise.
My various Google adventures have turned up nothing useful. So... Any ideas?
Edit: 'syncdb' raises the same exceptions.
A:
I found this thread on the Django Users group:
They suggest that it has something to do with the way MacPorts installs Python. I wish I had more details to help you with, but as a workaround, I recommend you use MacPorts to uninstall this copy of Python and try an alternate method of installing it. If you're looking for a quick and easy install, you might want to try MacPython. Hope this helps!
A:
did you try the intro doc? doc link
If you follow this doc, you can at least say, "at step XXXX it got error YYY". Then someone with some experience (not me) should be able to find a good answer. This link is for the trunk; there's a link for the 1.0 docs at the top.
A:
duh, i'm not thinking. Just run
python manage.py syncdb
this will build your db so you can then run the server.
A:
Re-check your settings.py. In the second case, it looks like your DATABASE_ENGINE is set to the empty string, not 'sqlite3'.
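For Django 1.0/1.1 the relevant lines should look something like this (the path is a placeholder):
DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = '/Users/you/foo/foo.db'   # an absolute path is safest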
A:
This isn't an answer, exactly, but I would try removing the MacPorts install of Django and start over. Then try adding easy_install and using that to install everything. To make things cleaner and easier to start over, you might also want to add virtualenv, which lets you set up multiple self-contained Python environments.
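A rough sketch of the virtualenv route (names and paths here are illustrative, not prescriptive):
sudo easy_install virtualenv
virtualenv ~/djenv                  # self-contained environment
~/djenv/bin/easy_install Django     # installs into ~/djenv only
~/djenv/bin/python manage.py runserver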
A:
You can also try installing the py25-hashlib package if you don't have it already. I found this described on the django bug tracking site.
Normally, this package is part of python, but it's either missing or wrong in the macports version, from what I've read.
I found more info on the macports version of py25-hashlib here.
A:
Try using the full path to python as well as checking the module path
| "ImportError: No module named dummy" on fresh Django project | I've got the following installed through MacPorts on MacOS X 10.5.6:
py25-sqlite3 @2.5.4_0 (active)
python25 @2.5.4_1+darwin_9+macosx (active)
sqlite3 @3.6.12_0 (active)
python25 is correctly set as my system's default Python.
I downloaded a fresh copy of Django 1.1 beta (I have the same problem with 1.0 and trunk, though) and installed it with "sudo python setup.py install".
Things seem to load correctly through the interactive interpreter:
$ python
Python 2.5.4 (r254:67916, Apr 10 2009, 16:02:52)
[GCC 4.0.1 (Apple Inc. build 5490)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> import sqlite3
>>> ^D
But:
$ django-admin.py startproject foo
$ cd foo/
$ python manage.py runserver
Validating models...
Unhandled exception in thread started by <function inner_run at 0x6c1e70>
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
self.validate(display_num_errors=True)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/base.py", line 246, in validate
num_errors = get_validation_errors(s, app)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/validation.py", line 22, in get_validation_errors
from django.db import models, connection
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/db/__init__.py", line 22, in <module>
backend = __import__('%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
ImportError: No module named dummy.base
If I change DATABASE_ENGINE in settings.py to "sqlite3", I get the following, seemingly related problem:
$ python manage.py runserver
Validating models...
Unhandled exception in thread started by <function inner_run at 0x6c1e70>
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
self.validate(display_num_errors=True)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/base.py", line 246, in validate
num_errors = get_validation_errors(s, app)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/core/management/validation.py", line 22, in get_validation_errors
from django.db import models, connection
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/django/db/__init__.py", line 22, in <module>
backend = __import__('%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
ImportError: No module named base
^C$
I swear this all worked a few days ago and I don't recall changing anything related to Django or Python, installation-wise.
My various Google adventures have turned up nothing useful. So... Any ideas?
Edit: 'syncdb' raises the same exceptions.
| [
"I found this thread on the Django Users group:\nThey suggest that it has something to do with the way MacPorts installs Python. I wish I had more details to help you with, but as a workaround, I recommend you use MacPorts to uninstall this copy of Python and try to use alternate method of install it. If you're looking for an quick and easy install, you might want to try MacPython. Hope this helps!\n",
"did you try the intro doc? doc link\nIf you follow this doc, you can at least say, \"at step XXXX it got error YYY\". Then someone with some experience (no me) should be able to find a good answer. This link is for the trunk, there's a link for 1.0 docs at the top.\n",
"duh, i'm not thinking. Just run\npython manage.py syncdb\n\nthis will build you db so you can then run the server.\n",
"Re-check your settings.py. In the second case, it looks like your DATABASE_ENGINE is set to the empty string, not 'sqlite3'.\n",
"This isn't an answer, exactly, but I would try removing the MacPorts install of Django and start over. Then try adding easy_install and using that to install everything. To make things cleaner and easier to start over, you might also want to add virtualenv, which lets you set up multiple self-contained Python environments.\n",
"You can also try installing the py25-hashlib package if you don't have it already. I found this described on the django bug tracking site.\nNormally, this package is part of python, but it's either missing or wrong in the macports version, from what I've read.\nI found more info on the macports version of py25-hashlib here.\n",
"Try using the full path to python as well as checking the module path\n"
] | [
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"database",
"django",
"python",
"sqlite"
] | stackoverflow_0000739191_database_django_python_sqlite.txt |
Q:
Why doesn't the regex match when I add groups?
I have this regex code in python:
if re.search(r"\{\\fad|fade\(\d{1,4},\d{1,4}\)\}", text):
print(re.search(r"\{\\fad|fade\((\d{1,4}),(\d{1,4})\)\}", text).groups())
text is {\fad(200,200)}Épisode 101 : {\i1}The Ghost{\i0}\Nv. 1.03 and read from a file (don't know if that helps).
This returns the following:
(None, None)
When I change the regex in the print to r"\{\\fad\((\d{1,4}),(\d{1,4})\)\}", it returns the correct values:
(200, 200)
Can anyone see why the conditional fad|fade matches the regex in the re.search but doesn't return the correct values of the groups in the print?
Thanks.
A:
Put extra parens around the choice: re.search(r"{(?:\\fad|fade)\((\d{1,4}),(\d{1,4})\)}", text).groups()
Also, escaping {} braces isn't necessary, it just needlessly clutters your regexp.
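For instance (a sketch using a trimmed version of the question's string):
import re

text = r"{\fad(200,200)}Episode 101"
m = re.search(r"{(?:\\fad|fade)\((\d{1,4}),(\d{1,4})\)}", text)
print m.groups()   # ('200', '200') -- note the groups come back as strings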
A:
The bracket is part of the or branch starting with fade, so it's looking for either "{fad" or "fade(...". You need to group the fad|fade part together. Try:
r"\{\\(?:fad|fade)\(\d{1,4},\d{1,4}\)\}"
[Edit]
The reason you do get into the if block is because the regex is matching, but only because it detects it starts with "{\fad". However, that part of the match contains no groups. You need to match with the part that defines the groups if you want to capture them.
A:
Try this:
r"\{\\fade?\(\d{1,4},\d{1,4}\)\}"
A:
I think your conditional is looking for "\fad" or "fade", I think you need to move a \ outside the grouping if you want to look for "\fad" or "\fade".
A:
Try this instead:
r"\{\\fade?\((\d{1,4}),(\d{1,4})\)\}"
The e? is an optional e.
The way you have it now matches {\fad or fade(0000,0000)}
A:
I don't know the python dialect of regular expressions, but wouldn't you need to 'group' the "fad|fade" somehow to make sure it isn't trying to find "fad OR fade(etc..."?
| Why doesn't the regex match when I add groups? | I have this regex code in python :
if re.search(r"\{\\fad|fade\(\d{1,4},\d{1,4}\)\}", text):
print(re.search(r"\{\\fad|fade\((\d{1,4}),(\d{1,4})\)\}", text).groups())
text is {\fad(200,200)}Épisode 101 : {\i1}The Ghost{\i0}\Nv. 1.03 and read from a file (don't know if that helps).
This returns the following:
(None, None)
When I change the regex in the print to r"\{\\fad\((\d{1,4}),(\d{1,4})\)\}", it returns the correct values:
(200, 200)
Can anyone see why the conditional fad|fade matches the regex in the re.search but doesn't return the correct values of the groups in the print?
Thanks.
| [
"Put extra parens around the choice: re.search(r\"{(?:\\\\fad|fade)\\((\\d{1,4}),(\\d{1,4})\\)}\", text).groups()\nAlso, escaping {} braces isn't necessary, it just needlessly clutters your regexp.\n",
"The bracket is part of the or branch starting with fade, so it's looking for either \"{fad\" or \"fade(...\". You need to group the fad|fade part together. Try:\nr\"\\{\\\\(?:fad|fade)\\(\\d{1,4},\\d{1,4}\\)\\}\"\n\n[Edit]\nThe reason you do get into the if block is because the regex is matching, but only because it detects it starts with \"{\\fad\". However, that part of the match contains no groups. You need to match with the part that defines the groups if you want to capture them.\n",
"Try this:\nr\"\\{\\\\fade?\\(\\d{1,4},\\d{1,4}\\)\\}\"\n\n",
"I think your conditional is looking for \"\\fad\" or \"fade\", I think you need to move a \\ outside the grouping if you want to look for \"\\fad\" or \"\\fade\".\n",
"Try this instead:\nr\"\\{\\\\fade?\\((\\d{1,4}),(\\d{1,4})\\)\\}\"\n\nThe e? is an optional e.\nThe way you have it now matches {\\fad or fade(0000,0000)}\n",
"I don't know the python dialect of regular expressions, but wouldn't you need to 'group' the \"fad|fade\" somehow to make sure it isn't trying to find \"fad OR fade(etc...\"?\n"
] | [
6,
4,
2,
1,
1,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000757949_python_regex.txt |
Q:
Whats the easiest and fastest way to measure HD performance using Python?
I need to measure the performance of a hard disk using python. What is the best/fastest/shortest/easiest approach to do it? It doesn't have to be overly accurate, just a ballpark value.
My actual goal is to write a small utility which will adjust the postgres settings to the best configuration for the given hardware.
My naive approach would be to write some files and measure how long that takes. I would try it for several block sizes, and then I would try to access some random positions within a large file. Any other ideas?
A:
I would think your best bet would be using an external tool, Bonnie++ for example, and parse the program output. Even if you're not that concerned with precision there's no reason to reinvent the wheel. Why rewrite what's already there?
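A rough sketch of wrapping it (the bonnie++ flags below are assumptions; check its man page for your version):
import subprocess

# -d: scratch directory, -u: user to run as, -q: quiet/CSV output
p = subprocess.Popen(['bonnie++', '-d', '/tmp', '-u', 'nobody', '-q'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
    if ',' in line:               # the machine-readable summary line
        print line.split(',')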
A:
Start here: http://www.acnc.com/benchmarks.html
Get the source for one you like write something like it in Python.
| Whats the easiest and fastest way to measure HD performance using Python? | I need to measure the performance of a hard disk using python. What is the best/fastest/shortest/easiest approach to do it? It doesn't have to be overly accurate, just a ballpark value.
My actual goal is to write a small utility which will adjust the postgres settings to the best configuration for the given hardware.
My naive approach would be to write some files and measure the time how long it would take. I would try it for several block sizes, and then I would try to access some random positions within a large file. Any other ideas?
| [
"I would think your best bet would be using an external tool, Bonnie++ for example, and parse the program output. Even if you're not that concerned with precision there's no reason to reinvent the wheel. Why rewrite what's already there?\n",
"Start here: http://www.acnc.com/benchmarks.html\nGet the source for one you like write something like it in Python.\n"
] | [
2,
1
] | [] | [] | [
"performance",
"postgresql",
"python"
] | stackoverflow_0000757816_performance_postgresql_python.txt |
Q:
Problem with Python modules
I'm uploading my first Django app to my Dreamhost server. My app uses the xlwt package and since I can't install it in the default location ( /usr/lib/python2.3/site-packages/xlwt ), I installed it in another location by:
python setup.py install --home=$HOME
Then xlwt is installed here:
/home/myuser/lib/python/xlwt/
After that, I added this folder to the env var PYTHONPATH
export PYTHONPATH=$PYTHONPATH:/home/myuser/lib/python
... And in a python prompt I can do this (without problems)
import xlwt
... But if I do the same thing in my app code, I get the following error:
Could not import ISI.restaurante.views. Error was: No module named xlwt
[where ISI.restaurante.views is my code where I do the import]
Could you help me? Thanks!
A:
PYTHONPATH may only be set when you run from the shell; you can set the path programmatically from Python using
import sys
sys.path.append('/home/myuser/lib/python')
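On a host like Dreamhost, the usual place for that call is the top of whatever script dispatches your app (the file name below is an assumption), before anything imports xlwt:
# e.g. at the top of your dispatch.fcgi / passenger_wsgi.py
import sys
sys.path.append('/home/myuser/lib/python')

import xlwt   # now resolvable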
| Problem with Python modules | I'm uploading my first Django app to my Dreamhost server. My app uses xlwt package and since I can't install it in the default location ( /usr/lib/python2.3/site-packages/xlwt ), I installed it on another location by:
python setup.py install --home=$HOME
Then xlwt is installed here:
/home/myuser/lib/python/xlwt/
After that, I add this folder to de env var PYTHONPATH
export PYTHONPATH=$PYTHONPATH:/home/myuser/lib/python
... And in a python promt I can do this (without problems)
import xlwt
... But if I do the same thing in my app code, I have the follow error:
Could not import ISI.restaurante.views. Error was: No module named xlwt
[where ISI.restaurante.views is my code where I do the import]
Could u help me? Thanks!
| [
"PYTHONPATH may only be set when you run from the shell, you can set path programatically from python using\nimport sys\nsys.path.append('/home/myuser/lib/python')\n\n"
] | [
5
] | [] | [] | [
"django",
"dreamhost",
"python"
] | stackoverflow_0000758187_django_dreamhost_python.txt |
Q:
How do you order lists in the same way QuerySets are ordered in Django?
I have a model that has an ordering field under its Meta class. When I perform a query and get back a QuerySet for the model it is in the order specified. However if I have instances of this model that are in a list and execute the sort method on the list the order is different from the one I want. Is there a way to sort a list of instances of a model such that the order is equal to that specified in the model definition?
A:
Not automatically, but with a bit of work, yes. You need to define a comparator function (or a __cmp__ method on the model class) that can compare two model instances according to the relevant attribute. For instance:
class Dated(models.Model):
...
created = models.DateTimeField(default=datetime.now)
class Meta:
ordering = ('created',)
def __cmp__(self, other):
try:
return cmp(self.created, other.created)
except AttributeError:
return cmp(self.created, other)
A:
The answer to your question is varying degrees of yes, with some manual requirements. If by list you mean a queryset that has been formed by some complicated query, then, sure:
queryset.order_by(*ClassName.Meta.ordering)
or
queryset.order_by(*instance._meta.ordering)
or
queryset.order_by("fieldname") #If you like being manual
If you're not working with a queryset, then of course you can still sort, the same way anyone sorts complex objects in python:
Comparators
Specifying keys
Decorate/Sort/Undecorate
See the python wiki for a detailed explanation of all three.
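If you go the key route, sorting a plain list the way ordering = ('created',) from the first answer would is a one-liner:
from operator import attrgetter

ordered = sorted(instances, key=attrgetter('created'))
# or, to honour a reversed field such as '-created':
ordered = sorted(instances, key=attrgetter('created'), reverse=True)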
A:
Building on Carl's answer, you could easily add the ability to use all the ordering fields and even detect the ones that are in reverse order.
class Person(models.Model):
first_name = models.CharField(max_length=50)
last_name = models.CharField(max_length=50)
    birthday = models.DateField()
class Meta:
ordering = ['last_name', 'first_name']
def __cmp__(self, other):
for order in self._meta.ordering:
if order.startswith('-'):
order = order[1:]
mode = -1
else:
mode = 1
if hasattr(self, order) and hasattr(other, order):
result = mode * cmp(getattr(self, order), getattr(other, order))
if result: return result
return 0
| How do you order lists in the same way QuerySets are ordered in Django? | I have a model that has an ordering field under its Meta class. When I perform a query and get back a QuerySet for the model it is in the order specified. However if I have instances of this model that are in a list and execute the sort method on the list the order is different from the one I want. Is there a way to sort a list of instances of a model such that the order is equal to that specified in the model definition?
| [
"Not automatically, but with a bit of work, yes. You need to define a comparator function (or cmp method on the model class) that can compare two model instances according to the relevant attribute. For instance:\nclass Dated(models.Model):\n ...\n created = models.DateTimeField(default=datetime.now)\n\n class Meta:\n ordering = ('created',)\n\n def __cmp__(self, other):\n try:\n return cmp(self.created, other.created)\n except AttributeError:\n return cmp(self.created, other)\n\n",
"The answer to your question is varying degrees of yes, with some manual requirements. If by list you mean a queryset that has been formed by some complicated query, then, sure:\nqueryset.order_by(ClassName.Meta.ordering)\n\nor\nqueryset.order_by(instance._meta.ordering)\n\nor\nqueryset.order_by(\"fieldname\") #If you like being manual\n\nIf you're not working with a queryset, then of course you can still sort, the same way anyone sorts complex objects in python:\n\nComparators\nSpecifying keys\nDecorate/Sort/Undecorate\n\nSee the python wiki for a detailed explanation of all three.\n",
"Building on Carl's answer, you could easily add the ability to use all the ordering fields and even detect the ones that are in reverse order.\nclass Person(models.Model):\n first_name = models.CharField(max_length=50)\n last_name = models.CharField(max_length=50)\n birthday = date = models.DateField()\n\n class Meta:\n ordering = ['last_name', 'first_name']\n\n def __cmp__(self, other):\n for order in self._meta.ordering:\n if order.startswith('-'):\n order = order[1:]\n mode = -1\n else:\n mode = 1\n if hasattr(self, order) and hasattr(other, order):\n result = mode * cmp(getattr(self, order), getattr(other, order))\n if result: return result\n return 0\n\n"
] | [
5,
3,
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000753687_django_django_models_python.txt |
Q:
how to use pycurl if requested data is sometimes gzipped, sometimes not?
I'm doing this to fetch some data:
c = pycurl.Curl()
c.setopt(pycurl.ENCODING, 'gzip')
c.setopt(pycurl.URL, url)
c.setopt(pycurl.TIMEOUT, 10)
c.setopt(pycurl.FOLLOWLOCATION, True)
xml = StringIO()
c.setopt(pycurl.WRITEFUNCTION, xml.write )
c.perform()
c.close()
My urls are typically of this sort:
http://host/path/to/resource-foo.xml
Usually I get back 302 pointing to:
http://archive-host/path/to/resource-foo.xml.gz
Given that I have set FOLLOWLOCATION, and ENCODING gzip, everything works great.
The problem is, sometimes I have a URL which does not result in a redirect to a gzipped resource. When this happens, c.perform() throws this error:
pycurl.error: (61, 'Error while processing content unencoding: invalid block type')
Which suggests to me that pycurl is trying to gunzip a resource that is not gzipped.
Is there some way I can instruct pycurl to figure out the response encoding, and gunzip or not as appropriate? I have played around with using different values for ENCODING, but so far no beans.
The pycurl docs seems to be a little lacking. :/
thx!
A:
If worst comes to worst, you could omit the ENCODING 'gzip', set HTTPHEADER to {'Accept-Encoding' : 'gzip'}, check the response headers for "Content-Encoding: gzip" and if it's present, gunzip the response yourself.
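A sketch of that approach (note that pycurl's HTTPHEADER wants a list of header strings, not a dict):
import gzip, pycurl, StringIO

body, hdrs = StringIO.StringIO(), StringIO.StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, url)
c.setopt(pycurl.FOLLOWLOCATION, True)
c.setopt(pycurl.HTTPHEADER, ['Accept-Encoding: gzip'])
c.setopt(pycurl.WRITEFUNCTION, body.write)
c.setopt(pycurl.HEADERFUNCTION, hdrs.write)
c.perform()
c.close()

data = body.getvalue()
if 'content-encoding: gzip' in hdrs.getvalue().lower():
    data = gzip.GzipFile(fileobj=StringIO.StringIO(data)).read()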
| how to use pycurl if requested data is sometimes gzipped, sometimes not? | I'm doing this to fetch some data:
c = pycurl.Curl()
c.setopt(pycurl.ENCODING, 'gzip')
c.setopt(pycurl.URL, url)
c.setopt(pycurl.TIMEOUT, 10)
c.setopt(pycurl.FOLLOWLOCATION, True)
xml = StringIO()
c.setopt(pycurl.WRITEFUNCTION, xml.write )
c.perform()
c.close()
My urls are typically of this sort:
http://host/path/to/resource-foo.xml
Usually I get back 302 pointing to:
http://archive-host/path/to/resource-foo.xml.gz
Given that I have set FOLLOWLOCATION, and ENCODING gzip, everything works great.
The problem is, sometimes I have a URL which does not result in a redirect to a gzipped resource. When this happens, c.perform() throws this error:
pycurl.error: (61, 'Error while processing content unencoding: invalid block type')
Which suggests to me that pycurl is trying to gunzip a resource that is not gzipped.
Is there some way I can instruct pycurl to figure out the response encoding, and gunzip or not as appropriate? I have played around with using different values for ENCODING, but so far no beans.
The pycurl docs seems to be a little lacking. :/
thx!
| [
"If worst comes to worst, you could omit the ENCODING 'gzip', set HTTPHEADER to {'Accept-Encoding' : 'gzip'}, check the response headers for \"Content-Encoding: gzip\" and if it's present, gunzip the response yourself.\n"
] | [
5
] | [] | [] | [
"gzip",
"http",
"libcurl",
"pycurl",
"python"
] | stackoverflow_0000758243_gzip_http_libcurl_pycurl_python.txt |
Q:
Python Sort Collections.DefaultDict in Descending order
I have this bit of code:
visits = defaultdict(int)
for t in tweetsSQL:
visits[t.user.from_user] += 1
I looked at some examples online that used the sorted method like so:
sorted(visits.iteritems, key=operator.itemgetter(1), reverse=True)
but it is giving me:
"TypeError: 'builtin_function_or_method' object is not iterable"
I am not sure why.
A:
iteritems is a method. You need parentheses to call it: visits.iteritems().
As it stands now, you are passing the iteritems method itself to sorted which is why it is complaining that it can't iterate over a function or method.
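With the parentheses added, the call from the question does what was intended (a list of (name, count) pairs, biggest counts first):
import operator

sorted_visits = sorted(visits.iteritems(),
                       key=operator.itemgetter(1),
                       reverse=True)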
A:
Personally I think one of these forms is a little more succinct as the first argument only needs to be an iterable not an iterator.
sorted_keys = sorted(visits.keys(), reverse=True)
sorted_keys = visits.keys()
sorted_keys.sort(reverse=True)   # list.sort() sorts in place and returns None
| Python Sort Collections.DefaultDict in Descending order | I have this bit of code:
visits = defaultdict(int)
for t in tweetsSQL:
visits[t.user.from_user] += 1
I looked at some examples online that used the sorted method like so:
sorted(visits.iteritems, key=operator.itemgetter(1), reverse=True)
but it is giving me:
"TypeError: 'builtin_function_or_method' object is not iterable"
I am not sure why.
| [
"iteritems is a method. You need parenthesis to call it: visits.iteritems().\nAs it stands now, you are passing the iteritems method itself to sorted which is why it is complaining that it can't iterate over a function or method. \n",
"Personally I think one of these forms is a little more succinct as the first argument only needs to be an iterable not an iterator.\nsorted_keys = sorted(visits.keys(), reverse=True)\nsorted_keys = visits.keys().sort(reverse=True)\n\n"
] | [
12,
2
] | [] | [] | [
"python"
] | stackoverflow_0000758792_python.txt |
Q:
Matplotlib suddenly crashes after reinstalling Xcode?
I was happy in my world of python and matplotlib with a good level of familiarity. I noticed Xcode on my Mac wasn't working so I installed the latest version from Apple and it somehow broke my install of matplotlib (or numpy?)! I'm now getting
...
/sw/lib/python2.5/site-packages/matplotlib-0.91.1-py2.5-macosx-
10.5-i386.egg/matplotlib/numerix/ma/__init__.py in <module>()
14 print "using maskedarray"
15 else:
---> 16 from numpy.core.ma import *
17 #print "using ma"
18 def getmaskorNone(obj):
ImportError: No module named ma
I've tried reinstalling numpy (Ver 1.3.0) and matplotlib (Ver 0.91.1) to no avail. Any ideas?
A:
You're using Python from Fink, so Xcode doesn't have anything to do with it. from numpy.core.ma import * works fine with Apple's bundled Python and NumPy too.
| Matplotlib suddenly crashes after reinstalling Xcode? | I was happy in my world of python and matplotlib with a good level of familiarity. I notied Xcode on my Mac wasn't working so I installed the latest version from Apple and it somehow broke my install of matplotlib (or numpy?)! I'm now getting
...
/sw/lib/python2.5/site-packages/matplotlib-0.91.1-py2.5-macosx-
10.5-i386.egg/matplotlib/numerix/ma/__init__.py in <module>()
14 print "using maskedarray"
15 else:
---> 16 from numpy.core.ma import *
17 #print "using ma"
18 def getmaskorNone(obj):
ImportError: No module named ma
I've tried reinstalling numpy (Ver 1.3.0) and matplotlib (Ver 0.91.1) to no avail. Any ideas?
| [
"You're using Python from Fink, so Xcode doesn't have anything to do with it. from numpy.core.ma import * works fine with Apple's bundled Python and NumPy too.\n"
] | [
3
] | [] | [] | [
"matplotlib",
"numpy",
"python",
"xcode"
] | stackoverflow_0000758980_matplotlib_numpy_python_xcode.txt |
Q:
How can I remove a temporary file (image) that is being displayed by CGI?
I've written a python CGI script that converts files into .jpgs and displays them in a simple HTML page. I don't want to clutter up the folders with these .jpg files, so I used tempfile.NamedTemporaryFile to create a file to store the converted .jpg output. Everything works great, but I want to remove this file after the page is displayed. Currently I have delete=False set, but I can't seem to remove the file without causing a broken img link.
A:
You can't remove the file from your CGI script, because the HTML page is sent to the user only after your script finishes running. Only then does the user's browser parse the HTML and fetch the jpg file.
The simplest option is to write the temporary files to a sub directory and periodically clean that directory (leaving only the files from the last few minutes in it). There are ways to improve this process but they are probably pointless.
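A minimal sketch of such a cleanup (directory and age are assumptions), suitable for running from cron:
import os, time

TMP_DIR = '/path/to/your/tmp_jpgs'   # the sub directory you write into
MAX_AGE = 10 * 60                    # seconds

now = time.time()
for name in os.listdir(TMP_DIR):
    path = os.path.join(TMP_DIR, name)
    if name.endswith('.jpg') and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)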
A more advanced option (which is also probably pointless, depending on your scenario) is to configure the web server to run a script on the "get jpg" request. Then you can stream the jpg through your script, so you know when the jpg was fetched, and in this script you can call a subscript asynchronously to delete the jpg file.
| How can I remove a temporary file (image) that is being displayed by CGI? | I've written a python CGI script that converts files into .jpgs and displays them in a simple HTML page. I don't want to clutter up the folders with these .jpg files, so I used tempfile.NamedTemporaryFile to create a file to store the converted .jpg output. Everything works great, but i want to remove this file after the page is displayed. Currently I have delete=False set, but i can't seem to remove the file without causing a broken img link.
| [
"You can't remove the file from your cgi script. Because the html page is send to the user only after your script finishes to run. And then the users browser parse the html and fetch the jpg file.\nThe simplest option is to write the temporary files to a sub directory and periodically clean that directory (living in it the last few minutes only). There are ways to improve this process but they are probably pointless.\nA more advanced option (which is also probably pointless, depending on your scenario) is to configure the web server to run a script on the \"get jpg\" request. And then you can stream the jpg through your script. That way you will know when the jpg was fetched. And in this script you can call a subscript asynchronically to delete the jpg file.\n"
] | [
3
] | [] | [] | [
"cgi",
"image",
"python"
] | stackoverflow_0000759271_cgi_image_python.txt |
Q:
Unexpected result from sys.getrefcount
When I typed:
>>> astrd = 123
>>> import sys
>>> sys.getrefcount(astrd)
3
>>>
I am not getting where astrd is used 3 times?
A:
It's not astrd that is referenced three times, but the value 123. astrd is simply a name for the (immutable) number 123, which can be referenced however many times. Additionally to that, small integers are usually shared:
>>> astrd = 123
>>> sys.getrefcount(astrd)
4
>>> j = 123
>>> sys.getrefcount(astrd)
5
In the second assignment, no new integer is created, instead j is just a new name for the integer 123.
However, given very large integers, this does not hold:
>>> i = 823423442583
>>> sys.getrefcount(i)
2
>>> j = 823423442583
>>> sys.getrefcount(i)
2
Shared integers are an implementation detail of CPython (among others). Since small integers are instantiated very often, sharing them saves a lot of memory. This is made possible by the fact that integers are immutable in the first place.
For the additional reference in the second example, cf. codeape's answer.
A:
From the getrefcount docstring:
... The count returned is generally one higher than you might expect,
because it includes the (temporary) reference as an argument to getrefcount().
The other two references mean that Python internally is holding two references to the object. Maybe the locals() and globals() dictionaries count as one reference each?
A:
I think it counts the references to 123, try other examples, like
>>> import sys
>>> astrd = 1
>>> sys.getrefcount(astrd)
177
>>> astrd = 9802374987193847
>>> sys.getrefcount(astrd)
2
>>>
The refcount for 9802374987193847 fits codeape's answer.
This is probably because numbers are immutable. If you use a list, for example, it will always be 2 (from a clean prompt, that is).
Btw, I get 2 for 123 as well, perhaps your setup is somewhat different? Or it might be time related or so?
A:
ints are implemented in a special way: they are cached and shared, which is why you don't get 1.
And Python uses reference-counted objects. astrd is itself a reference, so you actually get the number of references to the int '123'. Try with another (user-defined) type and you'll get just 2 (your reference plus getrefcount's own temporary argument).
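For example:
>>> import sys
>>> class Foo(object):
...     pass
...
>>> f = Foo()
>>> sys.getrefcount(f)   # your name plus getrefcount's temporary argument
2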
| Unexpected result from sys.getrefcount | When I typed:
>>> astrd = 123
>>> import sys
>>> sys.getrefcount(astrd)
3
>>>
I am not getting where is astrd used 3 times ?
| [
"It's not astrd that is referenced three times, but the value 123. astrd is simply a name for the (immutable) number 123, which can be referenced however many times. Additionally to that, small integers are usually shared:\n>>> astrd = 123\n>>> sys.getrefcount(astrd)\n4\n>>> j = 123\n>>> sys.getrefcount(astrd)\n5\n\nIn the second assignment, no new integer is created, instead j is just a new name for the integer 123.\nHowever, given very large integers, this does not hold:\n>>> i = 823423442583\n>>> sys.getrefcount(i)\n2\n>>> j = 823423442583\n>>> sys.getrefcount(i)\n2\n\nShared integers are an implementation detail of CPython (among others). Since small integers are instantiated very often, sharing them saves a lot of memory. This is made possible by the fact that integers are immutable in the first place.\nFor the additional reference in the second example, cf. codeape's answer.\n",
"From the getrefcount docstring:\n\n... The count returned is generally one higher than you might expect, \n because it includes the (temporary) reference as an argument to getrefcount().\n\nThe other two references means that python internally is holding two references to the object. Maybe the locals() and globals() dictionaries count as one reference each?\n",
"I think it counts the references to 123, try other examples, like\n>>> import sys\n>>> astrd = 1\n>>> sys.getrefcount(astrd)\n177\n>>> astrd = 9802374987193847\n>>> sys.getrefcount(astrd)\n2\n>>> \n\nThe refcount for 9802374987193847 fits codeape's answer.\nThis is probably because numbers are immutables. If you for example use a list, it will always be 2 (from a clean prompt that is).\nBtw, I get 2 for 123 as well, perhaps your setup is somewhat different? Or it might be time related or so?\n",
"ints are implemented in a special way, they are cached and shared, that why you don't get 1.\nAnd python, uses reference counted objects. astrd is itself a reference, so you actually get the number of references to the int '123'. Try with another (user-defined) type and you'll get 1.\n"
] | [
10,
7,
6,
5
] | [] | [] | [
"garbage_collection",
"python"
] | stackoverflow_0000759740_garbage_collection_python.txt |
Q:
Elixir Event Handler
I want to use the @after_insert decorator of Elixir, but I can't access the Session within the model. Since I have autocommit set to False, I can't commit any changes in the event handler. Is there any best practice for dealing with that?
The code I used to build the model, database connection etc. is mostly taken from the documentation.
The desired method:
class Artefact(Entity):
[...]
@after_insert
def make_signature(self):
self.signature = '%s-%s' % (self.artefact_type.title.upper()[:3], self.id)
All the Session initialization is done in the __init__.py in the same directory.
When I then call:
Session.update(self)
Session.commit()
I get an error that Session is undefined.
Any idea?
A:
Have you imported Session?
from packagename import Session
at the top of your model file should do the trick. Packagename is the directory name.
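In other words, something like this at the top of the model module (here 'myproject' stands in for your actual package name; whether committing from inside the event handler is safe depends on your session setup):
from myproject import Session

class Artefact(Entity):
    # ... fields ...
    @after_insert
    def make_signature(self):
        self.signature = '%s-%s' % (self.artefact_type.title.upper()[:3], self.id)
        Session.update(self)
        Session.commit()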
| Elixir Event Handler | I want to use the @after_insert decorator of Elixir, but i can't access the Session within the model. Since i have autocommit set to False, i can't commit any changes in the event handler. Is there any best practice how to deal with that?
The Code I used to build model, database connection etc. are mostly taken off the documentations.
The desired method:
class Artefact(Entity):
[...]
@after_insert
def make_signature(self):
self.signature = '%s-%s' % (self.artefact_type.title.upper()[:3], self.id)
All the Session initialization is done in the init.py in the same directory.
When I then call:
Session.update(self)
Session.commit()
I get an error that Session is undefined.
Any idea?
| [
"Have you imported Session?\nfrom packagename import Session\nat the top of your model file should do the trick. Packagename is the directory name.\n"
] | [
0
] | [] | [] | [
"pylons",
"python",
"python_elixir"
] | stackoverflow_0000756529_pylons_python_python_elixir.txt |
Q:
Shell: insert a blank/new line two lines above pattern
To add a blank line above every line that matches your regexp, you can use:
sed '/regexp/{x;p;x;}'
But I want to add a blank line, not one line above, but two lines above the line which matches my regexp.
The pattern I'll be matching is a postal code in the address line.
Here is a snippet of the text's formatting:
random info (belongs to previous business)
business name
business address
For example:
Languages Spoken: English
Arnold's Cove, Nfld (sub To Clarenville)
Nile Road, Arnolds Cove, NL, A0B1N0
I'd like to add a new line above the business name:
Languages Spoken: English

Arnold's Cove, Nfld (sub To Clarenville)
Nile Road, Arnolds Cove, NL, A0B1N0
A:
More readable Perl, and handles multiple files sanely.
#!/usr/bin/env perl
use constant LINES => 2;
my @buffer = ();
while (<>) {
/pattern/ and unshift @buffer, "\n";
push @buffer, $_;
print splice @buffer, 0, -LINES;
}
continue {
if (eof(ARGV)) {
print @buffer;
@buffer = ();
}
}
A:
Something a bit like your original approach in sed:
sed '/regexp/i\

$H
x'
The basic idea is to print everything delayed by one line (xchange the hold and pattern spaces - printing is implicit). That needs to be done because until we check whether the next line matches the regexp we don't know whether to insert a newline or not.
(The $H there is just a trick to make the last line print. It appends the last line into the hold buffer so that the final implicit print command outputs it too.)
A:
Simple:
sed '1{x;d};$H;/regexp/{x;s/^/\n/;b};x'
Describe it
#!/bin/sed
# trick is juggling previous and current line in hold and pattern space
1 { # at first line
x # place first line to hold space
d # skip to end and avoid printing
}
$H # append last line to hold space to force print
/regexp/ { # regexp found (in current line - pattern space)
x # swap previous and current line between hold and pattern space
s/^/\n/ # prepend line break before previous line
b # jump at end of script which cause print previous line
}
x # if regexp does not match just swap previous and current line to print previous one
Edit: A little bit simpler version.
sed '$H;/regexp/{x;s/^/\n/;b};x;1d'
A:
perl -ne 'END{print @x} push@x,$_; if(@x>2){splice @x,1,0,"\n" if /[[:alpha:]]\d[[:alpha:]]\s?\d[[:alpha:]]\d/;print splice @x,0,-2}'
If I cat your file into this, I get what you want... it's ugly, but you wanted shell (i.e., one-liner) :-) If I were to do this in full perl, I'd be able to clean it up a lot to make it approach readable. :-)
A:
Here's an approach that works for Python.
import re
import sys

# assume a Canadian postal code like "A0B1N0" marks the address line
regex = re.compile(r'[A-Za-z]\d[A-Za-z]\s?\d[A-Za-z]\d')

def address_change( aFile ):
    address = []
    for line in aFile:
        if regex.search( line ):
            # end of the address: blank line goes after the first buffered line
            sys.stdout.write( address[0] )
            sys.stdout.write( '\n' )
            for buffered in address[1:]:
                sys.stdout.write( buffered )
            sys.stdout.write( line )
            address = []
        else:
            address.append( line )

address_change( sys.stdin )
This allows you to reformat a complete address to your heart's content. You can expand this to define an Address class if your formatting is complex.
A:
I tried
sed '/regexp/a\\n'
but it inserted two newlines. If that does not bother you, take it.
echo -e "a\nb\nc" | sed '/^a$/a\n'
a

b
c
Edit:
Now that you state that you need to insert the blank line two lines above the matching line, the suggested command won't work.
I am not even sure it would work at all with sed, as you need to remember past lines. Sounds like a job for a higher-level language like python or perl :-)
| Shell: insert a blank/new line two lines above pattern | To add a blank line above every line that matches your regexp, you can use:
sed '/regexp/{x;p;x;}'
But I want to add a blank line, not one line above, but two lines above the line which matches my regexp.
The pattern I'll be matching is a postal code in the address line.
Here is a snippet of the text's formatting:
random info (belongs to previous business)
business name
business address
For example:
Languages Spoken: English
Arnold's Cove, Nfld (sub To Clarenville)
Nile Road, Arnolds Cove, NL, A0B1N0
I'd like to add a new line above the business name:
Languages Spoken: English
Arnold's Cove, Nfld (sub To Clarenville)
Nile Road, Arnolds Cove, NL, A0B1N0
| [
"More readable Perl, and handles multiple files sanely.\n#!/usr/bin/env perl\nuse constant LINES => 2;\nmy @buffer = ();\nwhile (<>) {\n /pattern/ and unshift @buffer, \"\\n\";\n push @buffer, $_;\n print splice @buffer, 0, -LINES;\n}\ncontinue {\n if (eof(ARGV)) {\n print @buffer;\n @buffer = ();\n }\n}\n\n",
"Something a bit like your original approach in sed:\nsed '/regexp/i\\\n\n$H\nx'\n\nThe basic idea is to print everything delayed by one line (xchange the hold and pattern spaces - printing is implicit). That needs to be done because until we check whether the next line matches the regexp we don't know whether to insert a newline or not. \n(The $H there is just a trick to make the last line print. It appends the last line into the hold buffer so that the final implicit print command outputs it too.)\n",
"Simple:\nsed '1{x;d};$H;/regexp/{x;s/^/\\n/;b};x'\n\nDescribe it\n#!/bin/sed\n\n# trick is juggling previous and current line in hold and pattern space\n\n1 { # at firs line\n x # place first line to hold space\n d # skip to end and avoid printing\n}\n$H # append last line to hold space to force print\n/regexp/ { # regexp found (in current line - pattern space)\n x # swap previous and current line between hold and pattern space\n s/^/\\n/ # prepend line break before previous line\n b # jump at end of script which cause print previous line\n}\nx # if regexp does not match just swap previous and current line to print previous one\n\nEdit: Little bit simpler version.\nsed '$H;/regexp/{x;s/^/\\n/;b};x;1d'\n\n",
"perl -ne 'END{print @x} push@x,$_; if(@x>2){splice @x,1,0,\"\\n\" if /[[:alpha:]]\\d[[:alpha:]]\\s?\\d[[:alpha:]]\\d/;print splice @x,0,-2}'\n\nIf I cat your file into this, I get what you want... it's ugly, but you wanted shell (i.e., one-liner) :-) If I were to do this in full perl, I'd be able to clean it up a lot to make it approach readable. :-)\n",
"Here's an approach that works for Python.\nimport sys\ndef address_change( aFile ):\n address= []\n for line in aFile:\n if regex.match( line ):\n # end of the address\n print address[0]\n print \n print address[1:]\n print line\n address= []\n else:\n address.append( line )\naddress_change( sys.stdin )\n\nThis allows you to reformat a complete address to your heart's content. You can expand this to create define an Address class if your formatting is complex.\n",
"I tried\nsed '/regexp/a\\\\n'\n\nbut it inserted two newlines. If that does not bother you, take it.\n\necho -e \"a\\nb\\nc\" | sed '/^a$/a\\n'\n a \nb\n c \n\nEdit:\nNow that you state that you need to insert two lines above the matching regexp the suggested regex won't work.\nI am not even sure if it would work at all with sed, as you need to remember past lines. Sounds like a job for a higher level language like python or perl :-)\n"
] | [
7,
5,
3,
2,
1,
0
] | [] | [] | [
"awk",
"perl",
"python",
"sed",
"text"
] | stackoverflow_0000757532_awk_perl_python_sed_text.txt |
Q:
Performance Considerations Using Multiple Layers of Generators in Python?
Are there any performance considerations for using a lot of generators chained together, as opposed to just a single generator.
For example:
def A(self, items):
    for item in self.AB(items):
        if item.A():
            yield item

def AB(self, items):
    for item in self.ABC(items):
        if item.A() or item.B():
            yield item

def ABC(self, items):
    for item in items:
        if item.A() or item.B() or item.C():
            yield item
Clearly calling A(objects) is going to go through three different generators, but in many situations it makes the code re-use better if there are different generators to handle different filtering. Can anyone indicate that there is a significant impact on performance using this technique?
A:
There is nothing wrong with chaining generators, but in this example there is no reason for A to call self.AB, it can just loop over items to get the same result.
You should write your code as clearly as you can and if it's slow then use a profiler to determine where the bottleneck is. Contrived examples such as this one are too far from reality to be useful indicators of performance.
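If you do want hard numbers, the standard profiler makes the comparison easy; a sketch (filter_chain and items stand in for a free-function version of A and a sample list, neither of which is in the question):
import cProfile, pstats

cProfile.run('list(filter_chain(items))', 'chainstats')
pstats.Stats('chainstats').sort_stats('cumulative').print_stats(10)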
| Performance Considerations Using Multiple Layers of Generators in Python? | Are there any performance considerations for using a lot of generators chained together, as opposed to just a single generator.
For example:
def A(self, items):
for item in self.AB(items):
if object.A():
yield item
def AB(self, items):
for object in self.ABC(objects):
if object.A() or object.B():
yield object
def ABC(self, objects):
for object in objects:
if object.A() or object.B() or object.C():
yield object
Clearly calling A(objects) is going to go through three different generators, but in many situations it makes the code re-use better if there are different generators to handle different filtering. Can anyone indicate that there is a significant impact on performance using this technique?
| [
"There is nothing wrong with chaining generators, but in this example there is no reason for A to call self.AB, it can just loop over items to get the same result.\nYou should write your code as clearly as you can and if it's slow then use a profiler to determine where the bottleneck is. Contrived examples such as this one are too far from reality to be useful indicators of performance.\n"
] | [
2
] | [] | [] | [
"generator",
"performance",
"python"
] | stackoverflow_0000759729_generator_performance_python.txt |
Q:
add request to django model method?
I'm keeping track of a user status on a model. For the model 'Lesson' I have the status 'Finished', 'Learning', 'Viewed'. In a view for a list of models I want to add the user status. What is the best way to do this?
One idea: Adding the request to a models method would do the trick. Is that possible?
Edit: I meant in template code: {{ lesson.get_status }}, with get_status(self, request). Is it possible? It does not work (yet).
A:
If your status is a value that changes, you have to break this into two separate parts.
Updating the status. This must be called in a view function. The real work, however, belongs in the model. The view function calls the model method and does the save.
Displaying the status. This is just some string representation of the status.
Model
class MyStatefulModel( models.Model ):
    theState = models.CharField( max_length=64 )
    def changeState( self ):
        if self.theState is None:
            self.theState = "viewed"
        elif self.theState == "viewed":
            self.theState = "learning"
        # ...and so on for the remaining states
View Function
def show( request, object_id ):
    object = MyStatefulModel.objects.get( id=object_id )
    object.changeState()
    object.save()
    return render_to_response( ... )
Template
<p>Your status is {{object.theState}}.</p>
A:
Yes, you can add a method to your model with a request parameter:
class MyModel(models.Model):
fields....
def update_status(self, request):
        # ...do something with the request here...
| add request to django model method? | I'm keeping track of a user status on a model. For the model 'Lesson' I have the status 'Finished', 'Learning', 'Viewed'. In a view for a list of models I want to add the user status. What is the best way to do this?
One idea: Adding the request to a models method would do the trick. Is that possible?
Edit: I meant in templatecode: {{ lesson.get_status }}, with get_status(self, request). Is it possible? It does not work (yet).
| [
"If your status is a value that changes, you have to break this into two separate parts.\n\nUpdating the status. This must be called in a view function. The real work, however, belongs in the model. The view function calls the model method and does the save.\nDisplaying the status. This is just some string representation of the status.\n\nModel\nclass MyStatefulModel( models.Model ):\n theState = models.CharField( max_length=64 )\n def changeState( self ):\n if theState is None:\n theState= \"viewed\"\n elif theState is \"viewed\":\n theState= \"learning\"\n etc.\n\nView Function\n def show( request, object_id ):\n object= MyStatefulModel.objects.get( id=object_id )\n object.changeState()\n object.save()\n render_to_response( ... )\n\nTemplate\n <p>Your status is {{object.theState}}.</p>\n\n",
"Yes, you can add a method to your model with a request paramater:\nclass MyModel(models.Model):\n fields....\n\n def update_status(self, request):\n make something with the request...\n\n"
] | [
2,
1
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000759850_django_django_models_python.txt |
Q:
django-paypal setup
Has anyone set up django-paypal? Here is the link to it here.
I have "myproject" setup, and my folder sturecture looks like this:
myproject > paypal > (stdandard and pro folders)
to my settins.py file I added
INSTALLED_APPS = (
'myproject.paypal.standard',
'myproject.paypal.pro',
)
In my urls.py file for my account app I added:
urlpatterns += patterns('myproject.account.views',
(r'^payment-url/$', 'buy_my_item'),
)
And in my account view I added:
from myproject.paypal.pro.views import PayPalPro
from myproject.paypal.pro.forms import PaymentForm, ConfirmForm
def buy_my_item(request):
item = {'amt':"10.00", # amount to charge for item
'inv':"1111", # unique tracking variable paypal
'custom':"2222", # custom tracking variable for you
'cancelurl':"http://127.0.0.1:8000/", # Express checkout cancel url
'returnurl':"http://127.0.0.1:8000/"} # Express checkout return url
kw = {'item':'item', # what you're selling
'payment_template': 'pro/payment.html', # template to use for payment form
'confirm_template': ConfirmForm, # form class to use for Express checkout confirmation
'payment_form_cls': PaymentForm, # form class to use for payment
'success_url': '/success', # where to redirect after successful payment
}
ppp = PayPalPro(**kw)
return ppp(request)
--- EDIT ---------
Then, I added the pro and standard template folders to my project's template folder.
When I go to http://127.0.0.1:8000/account/payment-url/ and submit the form...
I get a ValueError : "dictionary update sequence element #0 has length 1; 2 is required"
Traceback:
File "...\accounts\views.py" in buy_my_item
655. return ppp(request)
File "...\paypal\pro\views.py" in __call__
115. return self.validate_payment_form()
File "...\paypal\pro\views.py" in validate_payment_form
133. success = form.process(self.request, self.item)
File "...\paypal\pro\forms.py" in process
params.update(item)
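Incidentally, this exact ValueError is what dict.update raises when handed a plain string: iterating the string yields one-character elements instead of (key, value) pairs. That would happen here if form.process received item as the string 'item' (note the 'item':'item' entry in the kw dict above) rather than the item dict. A minimal sketch of the failure, offered as an observation rather than part of the original question:
params = {}
params.update('item')   # each element, e.g. 'i', has length 1, not 2
# ValueError: dictionary update sequence element #0 has length 1; 2 is required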
A:
In your code...
'payment_form_cls': 'payment_form_cls', # form class to use for payment
This must be a Form object that's used for validation.
'payment_form_cls': MyValidationForm, # form class to use for payment
Edit
http://github.com/johnboxall/django-paypal/tree/master
Your request is supposed to include a notify-url, return-url and cancel-return. All three URLs YOU provide to PayPal.
PayPal will send messages to these URLs.
Since PayPal will send messages to these URLs, YOU must put them in your urls.py. You must write view functions for these three URLs; your PayPal responses will be sent to them.
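A hedged sketch of what those three urls.py entries might look like, in the same patterns() style the question already uses (the URL paths and view names are placeholders, not from the original answer):
urlpatterns += patterns('myproject.account.views',
    (r'^paypal-notify/$', 'paypal_notify'),   # PayPal IPN messages arrive here
    (r'^paypal-return/$', 'paypal_return'),   # buyer redirected here on success
    (r'^paypal-cancel/$', 'paypal_cancel'),   # buyer redirected here on cancel
)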
A:
PayPal django Integration post should help you.
 | django-paypal setup | Has anyone set up django-paypal? Here is the link to it.
I have "myproject" set up, and my folder structure looks like this:
myproject > paypal > (standard and pro folders)
to my settings.py file I added
INSTALLED_APPS = (
'myproject.paypal.standard',
'myproject.paypal.pro',
)
in my urls.py file for my account app I added:
urlpatterns += patterns('myproject.account.views',
(r'^payment-url/$', 'buy_my_item'),
)
and in my account view I added:
from myproject.paypal.pro.views import PayPalPro
from myproject.paypal.pro.forms import PaymentForm, ConfirmForm
def buy_my_item(request):
item = {'amt':"10.00", # amount to charge for item
'inv':"1111", # unique tracking variable paypal
'custom':"2222", # custom tracking variable for you
'cancelurl':"http://127.0.0.1:8000/", # Express checkout cancel url
'returnurl':"http://127.0.0.1:8000/"} # Express checkout return url
kw = {'item':'item', # what you're selling
'payment_template': 'pro/payment.html', # template to use for payment form
'confirm_template': ConfirmForm, # form class to use for Express checkout confirmation
'payment_form_cls': PaymentForm, # form class to use for payment
'success_url': '/success', # where to redirect after successful payment
}
ppp = PayPalPro(**kw)
return ppp(request)
--- EDIT ---------
Then, I added the pro and standard template folders to my project's template folder.
When I go to http://127.0.0.1:8000/account/payment-url/ and submit the form...
I get a ValueError: "dictionary update sequence element #0 has length 1; 2 is required"
Traceback:
File "...\accounts\views.py" in buy_my_item
655. return ppp(request)
File "...\paypal\pro\views.py" in __call__
115. return self.validate_payment_form()
File "...\paypal\pro\views.py" in validate_payment_form
133. success = form.process(self.request, self.item)
File "...\paypal\pro\forms.py" in process
params.update(item)
| [
"In your code...\n 'payment_form_cls': 'payment_form_cls', # form class to use for payment\n\nThis must be a Form object that's used for validation.\n 'payment_form_cls': MyValidationForm, # form class to use for payment\n\n\nEdit\nhttp://github.com/johnboxall/django-paypal/tree/master\nYour request is supposed to include a notify-url, return-url and cancel-return. All three url's YOU provide to Paypal.\nPaypal will send messages to these URL's.\nSince Paypal will send messages to these URL's, YOU must put them in your urls.py. You must write view functions for these three urls'. These urls will have your paypal responses sent to them.\n",
"PayPal django Integration post should help you.\n"
] | [
5,
0
] | [] | [] | [
"django",
"paypal",
"python"
] | stackoverflow_0000757809_django_paypal_python.txt |
Q:
Map raw SQL to multiple related Django models
Due to performance reasons I can't use the ORM query methods of Django and I have to use raw SQL for some complex queries. I want to find a way to map the results of a SQL query to several models.
I know I can use the following statement to map the query results to one model, but I can't figure how to use it to be able to map to related models (like I can do by using the select_related statement in Django).
model_instance = MyModel(**dict(zip(field_names, row_data)))
Is there a relatively easy way to be able to map fields of related tables that are also in the query result set?
A:
First, can you prove the ORM is stopping your performance? Sometimes performance problems are simply poor database design, or improper indexes. Usually this comes from trying to force-fit Django's ORM onto a legacy database design. Stored procedures and triggers can have adverse impact on performance -- especially when working with Django where the trigger code is expected to be in the Python model code.
Sometimes poor performance is an application issue. This includes needless order-by operations being done in the database.
The most common performance problem is an application that "over-fetches" data. Casually using the .all() method and creating large in-memory collections. This will crush performance. The Django query sets have to be touched as little as possible so that the query set iterator is given to the template for display.
Once you choose to bypass the ORM, you have to fight out the Object-Relational Impedance Mismatch problem. Again. Specifically, relational "navigation" has no concept of "related": it has to be a first-class fetch of a relational set using foreign keys. To assemble a complex in-memory object model via SQL is simply hard. Circular references make this very hard; resolving FK's into collections is hard.
If you're going to use raw SQL, you have two choices.
Eschew "select related" -- it doesn't exist -- and it's painful to implement.
Invent your own ORM-like "select related" features. A common approach is to add stateful getters that (a) check a private cache to see if they've fetched the related object and if the object doesn't exist, (b) fetch the related object from the database and update the cache.
In the process of inventing your own stateful getters, you'll be reinventing Django's, and you'll probably discover that it isn't the ORM layer, but a database design or an application design issue.
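A minimal sketch of that stateful-getter pattern (the class, table and column names are hypothetical, not the original answer's code):
class Customer(object):
    def __init__(self, row):
        self.id, self.name = row

class Order(object):
    def __init__(self, row):
        self.id, self.customer_id = row
        self._customer = None                    # private cache

    def get_customer(self, cursor):
        if self._customer is None:               # (a) check the cache
            cursor.execute("SELECT id, name FROM app_customer WHERE id = %s",
                           [self.customer_id])
            self._customer = Customer(cursor.fetchone())  # (b) fetch and cache
        return self._customer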
 | Map raw SQL to multiple related Django models | Due to performance reasons I can't use the ORM query methods of Django and I have to use raw SQL for some complex queries. I want to find a way to map the results of a SQL query to several models.
I know I can use the following statement to map the query results to one model, but I can't figure how to use it to be able to map to related models (like I can do by using the select_related statement in Django).
model_instance = MyModel(**dict(zip(field_names, row_data)))
Is there a relatively easy way to be able to map fields of related tables that are also in the query result set?
| [
"First, can you prove the ORM is stopping your performance? Sometimes performance problems are simply poor database design, or improper indexes. Usually this comes from trying to force-fit Django's ORM onto a legacy database design. Stored procedures and triggers can have adverse impact on performance -- especially when working with Django where the trigger code is expected to be in the Python model code.\nSometimes poor performance is an application issue. This includes needless order-by operations being done in the database.\nThe most common performance problem is an application that \"over-fetches\" data. Casually using the .all() method and creating large in-memory collections. This will crush performance. The Django query sets have to be touched as little as possible so that the query set iterator is given to the template for display.\nOnce you choose to bypass the ORM, you have to fight out the Object-Relational Impedance Mismatch problem. Again. Specifically, relational \"navigation\" has no concept of \"related\": it has to be a first-class fetch of a relational set using foreign keys. To assemble a complex in-memory object model via SQL is simply hard. Circular references make this very hard; resolving FK's into collections is hard.\nIf you're going to use raw SQL, you have two choices.\n\nEschew \"select related\" -- it doesn't exist -- and it's painful to implement.\nInvent your own ORM-like \"select related\" features. A common approach is to add stateful getters that (a) check a private cache to see if they've fetched the related object and if the object doesn't exist, (b) fetch the related object from the database and update the cache.\n\nIn the process of inventing your own stateful getters, you'll be reinventing Django's, and you'll probably discover that it isn't the ORM layer, but a database design or an application design issue.\n"
] | [
1
] | [] | [] | [
"django",
"django_models",
"mysql",
"python",
"sql"
] | stackoverflow_0000759797_django_django_models_mysql_python_sql.txt |
Q:
python ctypes and sysctl
I have the following code
import sys
from ctypes import *
from ctypes.util import find_library
libc = cdll.LoadLibrary(find_library("c"))
CTL_KERN = 1
KERN_SHMMAX = 34
sysctl_names = {
'memory_shared_buffers' : (CTL_KERN, KERN_SHMMAX),
}
def posix_sysctl_long(name):
_mem = c_uint64(0)
_arr = c_int * 2
_name = _arr()
_name[0] = c_int(sysctl_names[name][0])
_name[1] = c_int(sysctl_names[name][1])
result = libc.sysctl(_name, byref(_mem), c_size_t(sizeof(_mem)), None, c_size_t(0))
if result != 0:
raise Exception('sysctl returned with error %s' % result)
return _mem.value
print posix_sysctl_long('memory_shared_buffers')
which produces the following result:
Traceback (most recent call last):
File "test.py", line 23, in <module>
print posix_sysctl_long('memory_shared_buffers')
File "test.py", line 20, in posix_sysctl_long
raise Exception('sysctl returned with error %s' % result)
Exception: sysctl returned with error -1
I guess I did something wrong. What would be the correct calling convention? How would I find out what exactly went wrong?
A:
You are not providing the correct values to the sysctl function. Detailed information on the arguments of sysctl() can be found here.
Here are your errors:
You have forgotten the nlen argument (second argument)
The oldlenp argument is a pointer to the size, not directly the size
Here is the correct function (with minor improvement):
def posix_sysctl_long(name):
_mem = c_uint64(0)
_def = sysctl_names[name]
_arr = c_int * len(_def)
_name = _arr()
for i, v in enumerate(_def):
_name[i] = c_int(v)
_sz = c_size_t(sizeof(_mem))
result = libc.sysctl(_name, len(_def), byref(_mem), byref(_sz), None, c_size_t(0))
if result != 0:
raise Exception('sysctl returned with error %s' % result)
return _mem.value
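As an optional extra (my addition, not part of the original answer), declaring argtypes documents the six-argument contract (name, nlen, oldval, oldlenp, newval, newlen) and lets ctypes complain early about mismatched calls:
from ctypes import POINTER, c_int, c_size_t, c_void_p

libc.sysctl.argtypes = [POINTER(c_int), c_int, c_void_p,
                        POINTER(c_size_t), c_void_p, c_size_t]
libc.sysctl.restype = c_int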
 | python ctypes and sysctl | I have the following code
import sys
from ctypes import *
from ctypes.util import find_library
libc = cdll.LoadLibrary(find_library("c"))
CTL_KERN = 1
KERN_SHMMAX = 34
sysctl_names = {
'memory_shared_buffers' : (CTL_KERN, KERN_SHMMAX),
}
def posix_sysctl_long(name):
_mem = c_uint64(0)
_arr = c_int * 2
_name = _arr()
_name[0] = c_int(sysctl_names[name][0])
_name[1] = c_int(sysctl_names[name][1])
result = libc.sysctl(_name, byref(_mem), c_size_t(sizeof(_mem)), None, c_size_t(0))
if result != 0:
raise Exception('sysctl returned with error %s' % result)
return _mem.value
print posix_sysctl_long('memory_shared_buffers')
which produces the following result:
Traceback (most recent call last):
File "test.py", line 23, in <module>
print posix_sysctl_long('memory_shared_buffers')
File "test.py", line 20, in posix_sysctl_long
raise Exception('sysctl returned with error %s' % result)
Exception: sysctl returned with error -1
I guess I did something wrong. What would be the correct calling convention? How would I find out what exactly went wrong?
| [
"You are not providing the correct values to the sysctl function. Detailed information on the arguments of sysctl() can be found here.\nHere are your errors:\n\nYou have forgotten the nlen argument (second argument)\nThe oldlenp argument is a pointer to the size, not directly the size\n\nHere is the correct function (with minor improvement):\ndef posix_sysctl_long(name):\n _mem = c_uint64(0)\n _def = sysctl_names[name]\n _arr = c_int * len(_def)\n _name = _arr()\n for i, v in enumerate(_def):\n _name[i] = c_int(v)\n _sz = c_size_t(sizeof(_mem))\n result = libc.sysctl(_name, len(_def), byref(_mem), byref(_sz), None, c_size_t(0))\n if result != 0:\n raise Exception('sysctl returned with error %s' % result)\n return _mem.value\n\n"
] | [
7
] | [] | [] | [
"c",
"ctypes",
"linux",
"python"
] | stackoverflow_0000759892_c_ctypes_linux_python.txt |
Q:
Generator function getting executed twice?
I'm using a Python generator function to provide me with a list of images in the current directory. However, I see the function is giving out the entire list twice instead of once, and I have no idea why. I'm using the Python PIL library to create batch thumbnails.
Can anyone point me in the right direction?
Script:
import os
import sys
import Image
class ThumbnailGenerator:
def __init__(self, width, height, image_path, thumb_path):
self.width = width
self.height = height
self.image_path = image_path
self.thumb_path = "%s%s%s" % (self.image_path, os.sep, thumb_path)
def __call__(self):
self.__create_thumbnail_dir()
for filename, image in self.__generate_image_list():
try:
thumbnail = "%s%s%s" % (self.thumb_path, os.sep, filename)
image.thumbnail((self.width, self.height))
image.save(thumbnail, 'JPEG')
print "Thumbnail gemaakt voor: %s" % filename
except IOError:
print "Fout: thumbnail kon niet gemaakt worden voor: %s" % filename
def __generate_image_list(self):
for dirpath, dirnames, filenames in os.walk(self.image_path):
count = 0
for filename in filenames:
try:
image = Image.open(filename)
print '=========', count, filename
count += 1
yield (filename, image)
except IOError:
pass
def __create_thumbnail_dir(self):
try:
os.mkdir(self.thumb_path)
except OSError as exception:
print "Fout: %s" % exception
if __name__ == '__main__':
try:
thumbnail_generator = ThumbnailGenerator(80, 80, '.', 'thumbs')
thumbnail_generator()
except KeyboardInterrupt:
print 'Programma gestopt'
The output of the script at this moment (with some test images) is:
========= 0 124415main_image_feature_380a_ys_full.jpg
Thumbnail gemaakt voor: 124415main_image_feature_380a_ys_full.jpg
========= 1 60130main_image_feature_182_jwfull.jpg
Thumbnail gemaakt voor: 60130main_image_feature_182_jwfull.jpg
========= 2 assetImage.jpg
Thumbnail gemaakt voor: assetImage.jpg
========= 3 devcon-c1-image.gif
Fout: thumbnail kon niet gemaakt worden voor: devcon-c1-image.gif
========= 4 image-646313.jpg
Thumbnail gemaakt voor: image-646313.jpg
========= 5 Image-Schloss_Nymphenburg_Munich_CC.jpg
Thumbnail gemaakt voor: Image-Schloss_Nymphenburg_Munich_CC.jpg
========= 6 image1w.jpg
Thumbnail gemaakt voor: image1w.jpg
========= 7 New%20Image.jpg
Thumbnail gemaakt voor: New%20Image.jpg
========= 8 samsung-gx20-image.jpg
Thumbnail gemaakt voor: samsung-gx20-image.jpg
========= 9 samsung-image.jpg
Thumbnail gemaakt voor: samsung-image.jpg
========= 0 124415main_image_feature_380a_ys_full.jpg
Thumbnail gemaakt voor: 124415main_image_feature_380a_ys_full.jpg
========= 1 60130main_image_feature_182_jwfull.jpg
Thumbnail gemaakt voor: 60130main_image_feature_182_jwfull.jpg
========= 2 assetImage.jpg
Thumbnail gemaakt voor: assetImage.jpg
========= 3 devcon-c1-image.gif
Fout: thumbnail kon niet gemaakt worden voor: devcon-c1-image.gif
========= 4 image-646313.jpg
Thumbnail gemaakt voor: image-646313.jpg
========= 5 Image-Schloss_Nymphenburg_Munich_CC.jpg
Thumbnail gemaakt voor: Image-Schloss_Nymphenburg_Munich_CC.jpg
========= 6 image1w.jpg
Thumbnail gemaakt voor: image1w.jpg
========= 7 New%20Image.jpg
Thumbnail gemaakt voor: New%20Image.jpg
========= 8 samsung-gx20-image.jpg
Thumbnail gemaakt voor: samsung-gx20-image.jpg
========= 9 samsung-image.jpg
Thumbnail gemaakt voor: samsung-image.jpg
While it should be:
========= 0 124415main_image_feature_380a_ys_full.jpg
Thumbnail gemaakt voor: 124415main_image_feature_380a_ys_full.jpg
========= 1 60130main_image_feature_182_jwfull.jpg
Thumbnail gemaakt voor: 60130main_image_feature_182_jwfull.jpg
========= 2 assetImage.jpg
Thumbnail gemaakt voor: assetImage.jpg
========= 3 devcon-c1-image.gif
Fout: thumbnail kon niet gemaakt worden voor: devcon-c1-image.gif
========= 4 image-646313.jpg
Thumbnail gemaakt voor: image-646313.jpg
========= 5 Image-Schloss_Nymphenburg_Munich_CC.jpg
Thumbnail gemaakt voor: Image-Schloss_Nymphenburg_Munich_CC.jpg
========= 6 image1w.jpg
Thumbnail gemaakt voor: image1w.jpg
========= 7 New%20Image.jpg
Thumbnail gemaakt voor: New%20Image.jpg
========= 8 samsung-gx20-image.jpg
Thumbnail gemaakt voor: samsung-gx20-image.jpg
========= 9 samsung-image.jpg
Thumbnail gemaakt voor: samsung-image.jpg
As you can see, the generator function is returning the list twice (I verified that it gets called only once).
@heikogerlach:
os.walk cannot find the thumbnails as I'm walking the filenames of the current directory and the thumbnails get written to a sub-folder of the current directory called 'thumbs'. The list is generated before writing the thumbnails to the 'thumbs' dir and I verified (using WinPDB) that the thumbnails are not included in the list.
@S.Lott:
Thanks for the advice. os.path.join fixed the problem.
A:
In your debugging, print the full path. I think you're walking the thumbs subdirectory after you walk the . directory.
Also.
class ThumbnailGenerator( object ):
Usually works out better in the long run.
Please do NOT use __ in front of your method names (generate_image_list and create_thumbnail_dir).
Do not use "%s%s%s" % (self.image_path, os.sep, thumb_path) to make path names, use os.path.join.
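A hedged sketch of the walk loop with those fixes applied: joining dirpath when opening files, plus (my addition) pruning the thumbs directory in place so the walk never revisits the generated copies:
for dirpath, dirnames, filenames in os.walk(self.image_path):
    dirnames[:] = [d for d in dirnames if d != 'thumbs']  # don't descend into thumbs
    for filename in filenames:
        try:
            image = Image.open(os.path.join(dirpath, filename))
            yield (filename, image)
        except IOError:
            pass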
A:
Your thumbnails are in a subdirectory of self.image_path and have the same name as the original image. Can you check if walk finds the thumbnails as you create them? Just print the path of the image together with the name.
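For example (Python 2 print, matching the question's code):
for dirpath, dirnames, filenames in os.walk(self.image_path):
    for filename in filenames:
        print os.path.join(dirpath, filename)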
 | Generator function getting executed twice? | I'm using a Python generator function to provide me with a list of images in the current directory. However, I see the function is giving out the entire list twice instead of once, and I have no idea why. I'm using the Python PIL library to create batch thumbnails.
Can anyone point me in the right direction?
Script:
import os
import sys
import Image
class ThumbnailGenerator:
def __init__(self, width, height, image_path, thumb_path):
self.width = width
self.height = height
self.image_path = image_path
self.thumb_path = "%s%s%s" % (self.image_path, os.sep, thumb_path)
def __call__(self):
self.__create_thumbnail_dir()
for filename, image in self.__generate_image_list():
try:
thumbnail = "%s%s%s" % (self.thumb_path, os.sep, filename)
image.thumbnail((self.width, self.height))
image.save(thumbnail, 'JPEG')
print "Thumbnail gemaakt voor: %s" % filename
except IOError:
print "Fout: thumbnail kon niet gemaakt worden voor: %s" % filename
def __generate_image_list(self):
for dirpath, dirnames, filenames in os.walk(self.image_path):
count = 0
for filename in filenames:
try:
image = Image.open(filename)
print '=========', count, filename
count += 1
yield (filename, image)
except IOError:
pass
def __create_thumbnail_dir(self):
try:
os.mkdir(self.thumb_path)
except OSError as exception:
print "Fout: %s" % exception
if __name__ == '__main__':
try:
thumbnail_generator = ThumbnailGenerator(80, 80, '.', 'thumbs')
thumbnail_generator()
except KeyboardInterrupt:
print 'Programma gestopt'
The output of the script at this moment (with some test images) is:
========= 0 124415main_image_feature_380a_ys_full.jpg
Thumbnail gemaakt voor: 124415main_image_feature_380a_ys_full.jpg
========= 1 60130main_image_feature_182_jwfull.jpg
Thumbnail gemaakt voor: 60130main_image_feature_182_jwfull.jpg
========= 2 assetImage.jpg
Thumbnail gemaakt voor: assetImage.jpg
========= 3 devcon-c1-image.gif
Fout: thumbnail kon niet gemaakt worden voor: devcon-c1-image.gif
========= 4 image-646313.jpg
Thumbnail gemaakt voor: image-646313.jpg
========= 5 Image-Schloss_Nymphenburg_Munich_CC.jpg
Thumbnail gemaakt voor: Image-Schloss_Nymphenburg_Munich_CC.jpg
========= 6 image1w.jpg
Thumbnail gemaakt voor: image1w.jpg
========= 7 New%20Image.jpg
Thumbnail gemaakt voor: New%20Image.jpg
========= 8 samsung-gx20-image.jpg
Thumbnail gemaakt voor: samsung-gx20-image.jpg
========= 9 samsung-image.jpg
Thumbnail gemaakt voor: samsung-image.jpg
========= 0 124415main_image_feature_380a_ys_full.jpg
Thumbnail gemaakt voor: 124415main_image_feature_380a_ys_full.jpg
========= 1 60130main_image_feature_182_jwfull.jpg
Thumbnail gemaakt voor: 60130main_image_feature_182_jwfull.jpg
========= 2 assetImage.jpg
Thumbnail gemaakt voor: assetImage.jpg
========= 3 devcon-c1-image.gif
Fout: thumbnail kon niet gemaakt worden voor: devcon-c1-image.gif
========= 4 image-646313.jpg
Thumbnail gemaakt voor: image-646313.jpg
========= 5 Image-Schloss_Nymphenburg_Munich_CC.jpg
Thumbnail gemaakt voor: Image-Schloss_Nymphenburg_Munich_CC.jpg
========= 6 image1w.jpg
Thumbnail gemaakt voor: image1w.jpg
========= 7 New%20Image.jpg
Thumbnail gemaakt voor: New%20Image.jpg
========= 8 samsung-gx20-image.jpg
Thumbnail gemaakt voor: samsung-gx20-image.jpg
========= 9 samsung-image.jpg
Thumbnail gemaakt voor: samsung-image.jpg
While it should be:
========= 0 124415main_image_feature_380a_ys_full.jpg
Thumbnail gemaakt voor: 124415main_image_feature_380a_ys_full.jpg
========= 1 60130main_image_feature_182_jwfull.jpg
Thumbnail gemaakt voor: 60130main_image_feature_182_jwfull.jpg
========= 2 assetImage.jpg
Thumbnail gemaakt voor: assetImage.jpg
========= 3 devcon-c1-image.gif
Fout: thumbnail kon niet gemaakt worden voor: devcon-c1-image.gif
========= 4 image-646313.jpg
Thumbnail gemaakt voor: image-646313.jpg
========= 5 Image-Schloss_Nymphenburg_Munich_CC.jpg
Thumbnail gemaakt voor: Image-Schloss_Nymphenburg_Munich_CC.jpg
========= 6 image1w.jpg
Thumbnail gemaakt voor: image1w.jpg
========= 7 New%20Image.jpg
Thumbnail gemaakt voor: New%20Image.jpg
========= 8 samsung-gx20-image.jpg
Thumbnail gemaakt voor: samsung-gx20-image.jpg
========= 9 samsung-image.jpg
Thumbnail gemaakt voor: samsung-image.jpg
As you can see, the generator function is returning the list twice (I verified that it gets called only once).
@heikogerlach:
os.walk cannot find the thumbnails as I'm walking the filenames of the current directory and the thumbnails get written to a sub-folder of the current directory called 'thumbs'. The list is generated before writing the thumbnails to the 'thumbs' dir and I verified (using WinPDB) that the thumbnails are not included in the list.
@S.Lott:
Thanks for the advice. os.path.join fixed the problem.
| [
"In your debugging, print the full path. I think you're walking the thumbs subdirectory after you walk the . directory.\nAlso. \nclass ThumbnailGenerator( object ):\n\nUsually works out better in the long run.\nPlease do NOT use __ in front of your method names (generate_image_list and create_thumbnail_dir).\nDo not use \"%s%s%s\" % (self.image_path, os.sep, thumb_path) to make path names, use os.path.join.\n",
"Your thumbnails are in a subdirectory of self.image_path and have the same name as the original image. Can you check if walk finds the thumnails as you create them? Just print the path of the image together with the name.\n"
] | [
3,
0
] | [] | [] | [
"generator",
"python"
] | stackoverflow_0000760647_generator_python.txt |
Q:
How to access Yahoo Enterprise Web Services using Python SOAPpy?
I have a PHP script which works, and I need to write the same in Python, but SOAPpy generates a slightly different request and I'm not sure how to fix it so the server accepts it.
The request generated by the PHP script looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="http://marketing.ews.yahooapis.com/V4"
>
<SOAP-ENV:Header>
<ns1:username>*****</ns1:username>
<ns1:password>*****</ns1:password>
<ns1:masterAccountID>*****</ns1:masterAccountID>
<ns1:accountID>6674262970</ns1:accountID>
<ns1:license>*****</ns1:license>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<ns1:getCampaignsByAccountID>
<ns1:accountID>6674262970</ns1:accountID>
<ns1:includeDeleted>false</ns1:includeDeleted>
</ns1:getCampaignsByAccountID>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When trying to make the same request using SOAPpy, I get this:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/1999/XMLSchema"
>
<SOAP-ENV:Header>
<username xsi:type="xsd:string">*****</username>
<masterAccountID xsi:type="xsd:string">*****</masterAccountID>
<license xsi:type="xsd:string">*****</license>
<accountID xsi:type="xsd:integer">6674262970</accountID>
<password xsi:type="xsd:string">*****</password>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<ns1:getCampaignsByAccountID xmlns:ns1="http://marketing.ews.yahooapis.com/V4">
<includeDeleted xsi:type="xsd:boolean">False</includeDeleted>
<accountID xsi:type="xsd:integer">6674262970</accountID>
</ns1:getCampaignsByAccountID>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
A slightly different request, but I figured it should work; instead I get an error from the server: "Account ID specified in the
header does not match the one specified in the parameter."
But they do match!
The only difference I see is in the namespaces, but I have no idea what to do right now. Please help.
A:
The problem was not the SOAP header format but simply the parameter order. Here's the full explanation and code: http://pea.somemilk.org/2009/04/05/yahoo-search-marketing-python-soap-binding/
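In short: keyword arguments travel through a Python dict, which does not preserve insertion order (before Python 3.7), so the child elements can be serialized in a scrambled order. One way to pin the order with SOAPpy is a structType, which remembers the order in which items are added; this is a sketch based on my reading of SOAPpy's Types module, not code from the linked post:
import SOAPpy

params = SOAPpy.structType()
params._addItem('accountID', 6674262970)    # order of _addItem calls is preserved
params._addItem('includeDeleted', False)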
A:
accountID should be of type xsd:string rather than xsd:integer. (Maybe you're passing an integer instead of a string, and that is why SOAPpy types it that way?)
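If the integer typing is the culprit, the value can be handed to SOAPpy pre-typed so it serializes as xsd:string; again a hedged sketch, assuming SOAPpy's typed wrapper classes:
import SOAPpy

account_id = SOAPpy.stringType('6674262970')  # serialized as xsd:string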
 | How to access Yahoo Enterprise Web Services using Python SOAPpy? | I have a PHP script which works, and I need to write the same in Python, but SOAPpy generates a slightly different request and I'm not sure how to fix it so the server accepts it.
The request generated by the PHP script looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="http://marketing.ews.yahooapis.com/V4"
>
<SOAP-ENV:Header>
<ns1:username>*****</ns1:username>
<ns1:password>*****</ns1:password>
<ns1:masterAccountID>*****</ns1:masterAccountID>
<ns1:accountID>6674262970</ns1:accountID>
<ns1:license>*****</ns1:license>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<ns1:getCampaignsByAccountID>
<ns1:accountID>6674262970</ns1:accountID>
<ns1:includeDeleted>false</ns1:includeDeleted>
</ns1:getCampaignsByAccountID>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When trying to make the same request using SOAPpy, I get this:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/1999/XMLSchema"
>
<SOAP-ENV:Header>
<username xsi:type="xsd:string">*****</username>
<masterAccountID xsi:type="xsd:string">*****</masterAccountID>
<license xsi:type="xsd:string">*****</license>
<accountID xsi:type="xsd:integer">6674262970</accountID>
<password xsi:type="xsd:string">*****</password>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<ns1:getCampaignsByAccountID xmlns:ns1="http://marketing.ews.yahooapis.com/V4">
<includeDeleted xsi:type="xsd:boolean">False</includeDeleted>
<accountID xsi:type="xsd:integer">6674262970</accountID>
</ns1:getCampaignsByAccountID>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
A slightly different request, but I figured it should work; instead I get an error from the server: "Account ID specified in the
header does not match the one specified in the parameter."
But they do match!
The only difference I see is in the namespaces, but I have no idea what to do right now. Please help.
| [
"The problem was not about SOAP headers format but just about the parameter order. Here's the full explanation and code: http://pea.somemilk.org/2009/04/05/yahoo-search-marketing-python-soap-binding/\n",
"accountID should be of type xsd:string rather than xsd:integer. (maybe you're passing a string instead of an integer and that is why SOAPpy does it that way?)\n"
] | [
2,
0
] | [] | [] | [
"python",
"soap",
"soappy"
] | stackoverflow_0000657473_python_soap_soappy.txt |
Q:
How to mark a device in a way that can be retrieved by HAL but does not require mounting or changing the label
I'm trying to find a way to mark a USB flash device in a way that I can programmatically test for without mounting it or changing the label.
Are there any properties I can modify about a device that will not cause it to behave/look differently to the user?
Running Ubuntu Jaunty.
A:
You cannot modify this property, but the tuple (vendor_id, product_id, serial_number) is unique to each device, so you can use this as a mark that is already there.
You can enumerate the devices on the USB bus using lsusb or libusb.
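A small sketch of reading those identifiers with pyusb (my suggestion; the answer itself only names lsusb and libusb, and reading the serial number usually requires sufficient permissions):
import usb.core  # pyusb

for dev in usb.core.find(find_all=True):
    print hex(dev.idVendor), hex(dev.idProduct), dev.serial_number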
A:
Changing the VID/PID might make your device non-usable without custom drivers. HAL isn't supposed to auto-mount your flash drives for you.
That being said, you could always sneak something into the boot sector and/or the beginning part of the drive. There are a lot of spare bytes in there that can be used for custom purposes - both nefarious and otherwise.
 | How to mark a device in a way that can be retrieved by HAL but does not require mounting or changing the label | I'm trying to find a way to mark a USB flash device in a way that I can programmatically test for without mounting it or changing the label.
Are there any properties I can modify about a device that will not cause it to behave/look differently to the user?
Running Ubuntu Jaunty.
| [
"You cannot modify this property, but the tuple (vendor_id, product_id, serial_number) is unique to each device, so you can use this as mark that is already there. \nYou can enumerate the devices on the USB bus using lsusb or usblib.\n",
"Changing the VID/PID might make your device non-usable without custom drivers. HAL isn't supposed to auto-mount your flash drives for you. \nThat being said, you could always sneak something into the boot sector and/or the beginning part of the drive. There are a lot of spare bytes in there that can be used for custom purposes - both nefarious and otherwise.\n"
] | [
1,
0
] | [] | [] | [
"dbus",
"hal",
"hardware",
"mount",
"python"
] | stackoverflow_0000760310_dbus_hal_hardware_mount_python.txt |