Tkinter: Changing a variable within a function
Question: I know this kind of question gets asked all the time, but either I've been
unable to come across the answer I need, or I've been unable to understand it
when I did.
I want to be able to do something like:
spam = StringVar()
spam.set(aValue)

class MyScale(Scale):
    def __init__(self,var,*args,**kwargs):
        Scale.__init__(self,*args,**kwargs)
        self.bind("<ButtonRelease-1>",self.getValue)
        self.set(var.get())
    def getValue(self,event):
        ## spam gets changed to the new value set
        ## by the user manipulating the scale
        var.set(self.get)

eggs = MyScale(spam,*args,**kwargs)
eggs.pack()
Of course, i get back "NameError: global name 'var' is not defined."
How do I get around the inability to pass arguments to getValue? I've been
warned against using global variables, but is that my only option? Is it
setting up a separate scale class for each variable I want to change? I get
the feeling I'm missing something that's right under my nose...
edit: is this what you mean?
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
return self.func(*args)
File "C:\...\interface.py", line 70, in getValue
var.set(self.get)
NameError: global name 'var' is not defined
Sorry, I've only been programming a month and some of the jargon still escapes
me.
Answer: Please give this a shot.
Lots of example code out there generously uses globals, like your "var"
variable.
I have used your var argument to act as a pointer back to the original spam
object; assigned to self.var_pointer within the MyScale class.
The code below will change the value of 'spam' (and 'eggs') on the scale's
ButtonRelease.
You can check out the value by typing eggs.get() or spam.get() to see the
changed value.
from Tkinter import *

root = Tk()
aValue = "5"
spam = StringVar()
spam.set(aValue)

class MyScale(Scale):
    def __init__(self,var,*args,**kwargs):
        self.var_pointer = var
        Scale.__init__(self,*args,**kwargs)
        self.bind("<ButtonRelease-1>",self.getValue)
        self.set(var.get())
    def getValue(self,event):
        ## spam gets changed to the new value set
        ## by the user manipulating the scale
        self.var_pointer.set(self.get())

eggs = MyScale(spam)
eggs.pack(anchor=CENTER)
|
Different styles for Windows forms in Ironpython
Question: I want to change the look of my Ironpython windows forms, Is it possible to
change the style of the form and for example make it more like a Mac?
thank you
Answer: As an interface designer, I think it's important to use as many standard
Windows controls as possible. The more your app looks like the Microsoft ones
users are used to, the more quickly they will become comfortable with it.
A good example is how out of place iTunes and Safari look on Windows. My tip
is to just keep the standard Windows look.
|
Write to UTF-8 file in Python
Question: I'm really confused with the `codecs.open` function. When I do:
file = codecs.open("temp", "w", "utf-8")
file.write(codecs.BOM_UTF8)
file.close()
It gives me the error
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0:
> ordinal not in range(128)
If I do:
file = open("temp", "w")
file.write(codecs.BOM_UTF8)
file.close()
It works fine.
**Question**: why does the first method fail? And how do I insert the BOM?
If the second method is the correct way of doing it, what's the point of using
`codecs.open(filename, "w", "utf-8")`?
Answer: Disclaimer: I'm not a Python programmer.
I believe the problem is that
[`codecs.BOM_UTF8`](https://docs.python.org/library/codecs.html#codecs.BOM_UTF8)
is a byte string, not a Unicode string. I suspect the file handler is trying
to guess what you really mean based on "I'm meant to be writing Unicode as
UTF-8-encoded text, but you've given me a byte string!"
Try writing the Unicode string for the byte order mark (i.e. Unicode U+FEFF)
directly, so that the file just encodes that as UTF-8:
import codecs
file = codecs.open("lol", "w", "utf-8")
file.write(u'\ufeff')
file.close()
(That seems to give the right answer - a file with bytes EF BB BF.)
EDIT: S. Lott's [suggestion](http://stackoverflow.com/a/934203/12892) of using
"utf-8-sig" as the encoding is a better one than explicitly writing the BOM
yourself, but I'll leave this answer here as it explains what was going wrong
before.
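For completeness, a minimal sketch of the "utf-8-sig" approach (available from Python 2.6 on); the codec writes the BOM for you on the first write:
import codecs
f = codecs.open("temp", "w", "utf-8-sig")
f.write(u"some text")   # the BOM is emitted automatically before this
f.close()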
|
Creating a logging handler to connect to Oracle?
Question: So right now I need to create and implement an extension of the Python logging
module that will be used to log to our database. Basically, we have several
Python applications (that all run in the background) that currently log to a
random mishmash of text files, which makes it almost impossible to find out
whether a certain application failed or not.
The problem given to me is to move said logging from text files to an Oracle DB.
The tables have already been defined, as well as where things need to be logged,
but right now I'm looking at adding another logging handler that will log to
the DB.
I am using Python 2.5.4 and cx_Oracle, and the applications in general can be
run either as a service/daemon or as a straight application.
I'm just mainly curious about what would be the best possible way to go about
this. A few questions:
1. If any errors occur with cx_Oracle, where should these errors be logged? If the DB is down, would it be best to have the logger fall back to the default text file?
2. A while back we started enforcing that people use sys.stderr/stdout.write instead of print, so worst-case scenario we wouldn't run into any issues with print becoming deprecated. Is there a way to seamlessly make all of the thousands of sys.std calls be piped directly into the logger, and have the logger pick up the slack?
3. After every logged message, should the script automatically do a commit? (There are going to be several dozen a second.)
4. What is the best way to implement a new handler for the logging system? Inheriting from the basic Handler class seems to be easiest.
Any ideas / suggestions would be great.
Answer: 1. If errors occur with cx_Oracle, it's probably best to log these to a text file.
2. You could try redirecting sys.stdout and sys.stderr to file-like objects which log whatever's written to them to a logger (a small sketch of this follows the list below).
3. I would guess you do want to commit after each event, unless you have strong reasons for not doing this. Alternatively, you can buffer several events and write them all in a single transaction every so often.
4. Below is an example which uses mx.ODBC, you can probably adapt this to cx_Oracle without too much trouble. It's meant to be Python DB-API 2.0 compliant, I think.
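As a rough sketch of the redirection idea in point 2 (the class name and logger name here are illustrative, not a standard API): a small file-like object whose write() forwards to a logger, installed over sys.stdout/sys.stderr.
import logging
import sys

class LoggerWriter(object):
    """Minimal file-like object that forwards writes to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
    def write(self, message):
        message = message.rstrip()
        if message:                      # skip the bare newlines that print emits
            self.logger.log(self.level, message)
    def flush(self):
        pass                             # nothing is buffered here

log = logging.getLogger("redirected")
sys.stdout = LoggerWriter(log, logging.INFO)
sys.stderr = LoggerWriter(log, logging.ERROR)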
The standalone Python logging distribution (before logging was added to
Python) is at <http://www.red-dove.com/python_logging.html> and although the
logging package in Python is much more up to date, the standalone distribution
contains a test directory which has a lot of useful examples of derived
handler classes.
#!/usr/bin/env python
#
# Copyright 2001-2009 by Vinay Sajip. All Rights Reserved.
#
# Permission to use, copy, modify, and distribute this software and its
# documentation for any purpose and without fee is hereby granted,
# provided that the above copyright notice appear in all copies and that
# both that copyright notice and this permission notice appear in
# supporting documentation, and that the name of Vinay Sajip
# not be used in advertising or publicity pertaining to distribution
# of the software without specific, written prior permission.
# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING
# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#
# This file is part of the standalone Python logging distribution. See
# http://www.red-dove.com/python_logging.html
#
"""
A test harness for the logging module. An example handler - DBHandler -
which writes to an Python DB API 2.0 data source. You'll need to set this
source up before you run the test.
Copyright (C) 2001-2009 Vinay Sajip. All Rights Reserved.
"""
import sys, string, time, logging
class DBHandler(logging.Handler):
    def __init__(self, dsn, uid='', pwd=''):
        logging.Handler.__init__(self)
        import mx.ODBC.Windows
        self.dsn = dsn
        self.uid = uid
        self.pwd = pwd
        self.conn = mx.ODBC.Windows.connect(self.dsn, self.uid, self.pwd)
        self.SQL = """INSERT INTO Events (
                        Created,
                        RelativeCreated,
                        Name,
                        LogLevel,
                        LevelText,
                        Message,
                        Filename,
                        Pathname,
                        Lineno,
                        Milliseconds,
                        Exception,
                        Thread
                      )
                      VALUES (
                        %(dbtime)s,
                        %(relativeCreated)d,
                        '%(name)s',
                        %(levelno)d,
                        '%(levelname)s',
                        '%(message)s',
                        '%(filename)s',
                        '%(pathname)s',
                        %(lineno)d,
                        %(msecs)d,
                        '%(exc_text)s',
                        '%(thread)s'
                      );
                      """
        self.cursor = self.conn.cursor()

    def formatDBTime(self, record):
        record.dbtime = time.strftime("#%m/%d/%Y#", time.localtime(record.created))

    def emit(self, record):
        try:
            # use default formatting
            self.format(record)
            # now set the database time up
            self.formatDBTime(record)
            if record.exc_info:
                record.exc_text = logging._defaultFormatter.formatException(record.exc_info)
            else:
                record.exc_text = ""
            sql = self.SQL % record.__dict__
            self.cursor.execute(sql)
            self.conn.commit()
        except:
            import traceback
            ei = sys.exc_info()
            traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
            del ei

    def close(self):
        self.cursor.close()
        self.conn.close()
        logging.Handler.close(self)

dh = DBHandler('Logging')
logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
logger.addHandler(dh)
logger.info("Jackdaws love my big %s of %s", "sphinx", "quartz")
logger.debug("Pack my %s with five dozen %s", "box", "liquor jugs")
try:
    import math
    math.exp(1000)
except:
    logger.exception("Problem with %s", "math.exp")
|
How to Change Mouse Cursor in PythonCard
Question: How do I change the mouse cursor to indicate a waiting state using Python and
PythonCard?
I didn't see anything in the documentation.
Answer: PythonCard builds on top of wx, so if you import wx you should be able to
build a suitable cursor (e.g. with `wx.CursorFromImage`), set it (e.g. with
`wx.BeginBusyCursor`) when your wait begins, and end it (with
`wx.EndBusyCursor`) when your wait ends.
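A minimal sketch of that pattern; the wx calls are real, but the surrounding handler and the slow call are illustrative:
import wx

def on_long_task(event):
    wx.BeginBusyCursor()          # switch to the busy/wait cursor
    try:
        run_slow_operation()      # hypothetical long-running call
    finally:
        wx.EndBusyCursor()        # always restore the normal cursor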
|
How to check for NaN in python?
Question: `float('nan')` results in a thingy simply called nan. But how do I check for
it? It should be very easy, but I cannot find it.
Answer: [math.isnan()](http://docs.python.org/library/math.html#math.isnan)
> Checks if the float x is a NaN (not a number). NaNs are part of the IEEE 754
> standards. Operation like but not limited to inf * 0, inf / inf or any
> operation involving a NaN, e.g. nan * 1, return a NaN.
>
> _New in version 2.6._
>>> import math
>>> x=float('nan')
>>> math.isnan(x)
True
>>>
|
Using Python's list index() method on a list of tuples or objects?
Question: Python's list type has an index() method that takes one parameter and returns
the index of the first item in the list matching the parameter. For instance:
>>> some_list = ["apple", "pear", "banana", "grape"]
>>> some_list.index("pear")
1
>>> some_list.index("grape")
3
Is there a graceful (idiomatic) way to extend this to lists of complex
objects, like tuples? Ideally, I'd like to be able to do something like this:
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> some_list.getIndexOfTuple(1, 7)
1
>>> some_list.getIndexOfTuple(0, "kumquat")
2
getIndexOfTuple() is just a hypothetical method that accepts a sub-index and a
value, and then returns the index of the list item with the given value at
that sub-index. I hope that makes sense.
Is there some way to achieve that general result, using list comprehensions or
lambas or something "in-line" like that? I think I could write my own class
and method, but I don't want to reinvent the wheel if Python already has a way
to do it.
Answer: How about this?
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> [x for x, y in enumerate(tuple_list) if y[1] == 7]
[1]
>>> [x for x, y in enumerate(tuple_list) if y[0] == 'kumquat']
[2]
As pointed out in the comments, this would get all matches. To just get the
first one, you can do:
>>> [y[0] for y in tuple_list].index('kumquat')
2
There is a good discussion in the comments as to the speed difference between
all the solutions posted. I may be a little biased, but I would personally
stick to a one-liner, as the speed we're talking about is pretty insignificant
versus creating functions and importing modules for this problem. However, if
you are planning on doing this to a very large number of elements, you might
want to look at the other answers provided, as they are faster than what I
provided.
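If you do want the hypothetical getIndexOfTuple wrapped up, here is a small sketch of it as a plain function (the name and error message are illustrative):
def index_of_tuple(seq, sub_index, value):
    """Return the index of the first item whose element at sub_index equals value."""
    for i, item in enumerate(seq):
        if item[sub_index] == value:
            return i
    raise ValueError("no item with %r at position %d" % (value, sub_index))

>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> index_of_tuple(tuple_list, 1, 7)
1
>>> index_of_tuple(tuple_list, 0, "kumquat")
2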
|
Why won't python allow me to delete files?
Question: I've created a python script that gets a list of files from a text file and
deletes them if they're empty. It correctly detects empty files but it doesn't
want to delete them. It gives me:
(32, 'The process cannot access the file because it is being used by another process')
I've used two different tools to check whether the files are locked or not and
I'm certain that they are not. I used sysinternals process explorer and
LockHunter. Furthermore, I'm able to just manually delete the files myself. I
obviously don't want to do that for all of them as there are hundreds in
various locations.
The script:
import os.path
import sys
def DeleteFilesFromListIfBlank(PathToListOfFiles):
    ListOfFiles = open(PathToListOfFiles)
    FilesToCheck = [];
    for line in ListOfFiles.readlines():
        if(len(line) > 1):
            line = line.rstrip();
            FilesToCheck.append(line)
    print "Found %s files to check. Starting check." % len(FilesToCheck)
    FilesToRemove = [];
    for line in FilesToCheck:
        #print "Opening %s" % line
        try:
            ActiveFile = open(line);
            Length = len(ActiveFile.read())
            if(Length < 691 and ActiveFile.read() == ""):
                print "Deleting %s" % line
                os.unlink(line);
            else:
                print "Keeping %s" % line
        except IOError,message:
            print "Could not open file: $s" % message
        except Exception as inst:
            print inst.args

DeleteFilesFromListIfBlank("C:\\ListOfResx.txt")
I've tried using both os.unlink and os.remove. I'm running Python 2.6 on
Vista64
Thanks
Answer: You need to call `.close()` on the file object before you try and delete it.
Edit: And really you shouldn't be opening the file at all. `os.stat()` will
tell you the size of a file (and 9 other values) without ever opening the
file.
This (I think) does the same thing but is a little cleaner (IMHO):
import os

_MAX_SIZE = 691

def delete_if_blank(listFile):
    # Make a list of files to check.
    with open(listFile) as listFile:
        filesToCheck = filter(None, (line.rstrip() for line in listFile.readlines()))
    # listFile is automatically closed now because we're out of the 'with' statement.
    print "Found %u files to check. Starting check." % len(filesToCheck)
    # Remove each file.
    for filename in filesToCheck:
        if os.stat(filename).st_size < _MAX_SIZE:
            print "Deleting %s" % filename
            os.remove(filename)
        else:
            print "Keeping %s" % filename
|
Prevent a timer from updating a text box if the key cursor is in the box
Question: Is it possible to check if a
[`TextCtrl`](http://www.wxpython.org/docs/api/wx.TextCtrl-class.html) is under
keyboard focus (blinking cursor in text box) without defining a handler for
[`EVT_SET_FOCUS`](http://www.wxpython.org/docs/api/wx.FocusEvent-class.html)?
I just want to do a quick boolean check to prevent a `wx.Timer` from
overwriting the text box if the user is writing something in the box.
Answer: You can bypass a timer update by finding which window has the focus (using
`FindFocus`) and comparing this to your TextCtrl window. Then, if your
TextCtrl has the focus you can leave it alone. Here's an example:
import wx

class TestFrame(wx.Frame):
    def __init__(self):
        self.count = 0
        wx.Frame.__init__(self, None, -1, "test frame", size=(200, 100))
        self.panel = wx.Panel(self, -1)
        button = wx.Button(self.panel, -1, "b", pos=(10, 40))
        self.text = wx.TextCtrl(self.panel, -1, `self.count`, size=(50, 25))
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.incr_text)
        self.timer.Start(1000)

    def incr_text(self, event):
        self.count += 1
        f = self.FindFocus()
        if not f == self.text:
            self.text.SetValue(`self.count`)

if __name__ == "__main__":
    app = wx.PySimpleApp()
    TestFrame().Show()
    app.MainLoop()
|
Python - simple reading lines from a pipe
Question: I'm trying to read lines from a pipe and process them, but I'm doing something
silly and I can't figure out what. The producer is going to keep producing
lines indefinitely, like this:
producer.py
import time

while True:
    print 'Data'
    time.sleep(1)
The consumer just needs to check for lines periodically:
consumer.py
import sys, time

while True:
    line = sys.stdin.readline()
    if line:
        print 'Got data:', line
    else:
        time.sleep(1)
When I run this in the Windows shell as `python producer.py | python consumer.py`, it just sleeps forever (never seems to get data?) It seems that maybe the problem is that the producer never terminates, since if I send a finite amount of data then it works fine.
How can I get the data to be received and show up for the consumer? In the
real application, the producer is a C++ program I have no control over.
Answer: Some old versions of Windows simulated pipes through files (so they were prone
to such problems), but that hasn't been a problem in 10+ years. Try adding a
sys.stdout.flush()
to the producer after the `print`, and also try to make the producer's stdout
unbuffered (by using `python -u`).
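For illustration, the producer with an explicit flush after each print (a minimal sketch of the same script from the question):
# producer.py, flushing after every line
import sys, time
while True:
    print 'Data'
    sys.stdout.flush()   # push the line through the pipe immediately
    time.sleep(1)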
Of course this doesn't help if you have no control over the producer -- if it
buffers too much of its output you're still going to wait a long time.
Unfortunately - while there are many approaches to solve that problem on Unix-
like operating systems, such as pyexpect,
[pexpect](http://sourceforge.net/projects/pexpect/),
[exscript](http://code.google.com/p/exscript/), and
[paramiko](http://www.lag.net/paramiko/), I doubt any of them works on
Windows; if that's indeed the case, I'd try [Cygwin](http://www.cygwin.com/),
which puts enough of a Linux-like veneer on Windows as to often enable the use
of Linux-like approaches on a Windows box.
|
cx_Oracle And User Defined Types
Question: Does anyone know an easier way to work with user defined types in Oracle using
cx_Oracle?
For example, if I have these two types:
CREATE type my_type as object(
component varchar2(30)
,key varchar2(100)
,value varchar2(4000))
/
CREATE type my_type_tab as table of my_type
/
And then a procedure in package `my_package` as follows:
PROCEDURE my_procedure (param in my_type_tab);
To execute the procedure in PL/SQL I can do something like this:
declare
    l_parms my_type_tab;
    l_cnt   pls_integer;
begin
    l_parms := my_type_tab();
    l_parms.extend;
    l_cnt := l_parms.count;
    l_parms(l_cnt) := my_type('foo','bar','hello');
    l_parms.extend;
    l_cnt := l_parms.count;
    l_parms(l_cnt) := my_type('faz','baz','world');
    my_package.my_procedure(l_parms);
end;
However, I was wondering how I can do it in Python, similar to this code:
import cx_Oracle
orcl = cx_Oracle.connect('foo:bar@mydb.com:5555/blah' + instance)
curs = orcl.cursor()
params = ???
curs.execute('begin my_package.my_procedure(:params)', params=params)
If the parameter were a string I could do it as above, but since it's a
user-defined type, I have no idea how to call it without resorting to pure
PL/SQL code.
Edit: Sorry, I should have said that I was looking for ways to do more in
Python code instead of PL/SQL.
Answer: While cx_Oracle can select user defined types, it does not to my knowledge
support passing in user defined types as bind variables. So for example the
following will work:
cursor.execute("select my_type('foo', 'bar', 'hello') from dual")
val, = cursor.fetchone()
print val.COMPONENT, val.KEY, val.VALUE
However what you can't do is construct a Python object, pass it in as an input
argument and then have cx_Oracle "translate" the Python object into your
Oracle type. So I would say you're going to have to construct your input
argument within a PL/SQL block.
You can pass in Python lists, so the following should work:
components=["foo", "faz"]
values=["bar", "baz"]
keys=["hello", "world"]
cursor.execute("""
declare
type udt_StringList is table of varchar2(4000) index by binary_integer;
l_components udt_StringList := :p_components;
l_keys udt_StringList := :p_keys;
l_values udt_StringList := :p_values;
l_parms my_type_tab;
begin
l_parms.extend(l_components.count);
for i in 1..l_components.count loop
l_parms(i) := my_type(l_components(i), l_keys(i), l_values(i));
end loop;
my_package.my_procedure(l_parms);
end;""", p_components=components, p_values=values, p_keys=keys)
|
Redirecting FORTRAN (called via F2PY) output in Python
Question: I'm trying to figure out how to redirect output from some FORTRAN code for
which I've generated a Python interface by using F2PY. I've tried:
from fortran_code import fortran_function
stdout_holder = sys.stdout
stderr_holder = sys.stderr
sys.stdout = file("/dev/null","w")
fortran_function()
sys.stdout.close()
sys.stderr.close()
sys.stdout = stdout_holder
sys.stderr = stderr_holder
This is the de facto method of redirecting output in Python, but it doesn't
seem to work in this case (i.e., the output is displayed anyway).
I did find [a mailing list post from 2002](http://cens.ioc.ee/pipermail/f2py-
users/2002-November/000352.html) saying that "It is possible to read messages
from pts devices, e.g. ttysnoop does this". Information on ttysnoop seems to
be pretty difficult to find online (I don't think it's been updated in quite a
few years; for example, [the first result on Google for
"ttysnoop"](http://www.linuxhelp.net/guides/ttysnoop/) has only dead links to
tarballs, RPMs, and .deb's), and [this request for a port to OS
X](http://macosx.forked.net/bbarchive/forum5/000008.html) received the
response "No luck, it requires some linux specific utmp functions which I
can't create."
I'm open to any suggestions on how to redirect the output (it doesn't have to
use ttysnoop).
Thanks!
Answer: The stdin and stdout fds are being inherited by the C shared library.
from fortran_code import fortran_function
import os
print "will run fortran function!"
# open 2 fds
null_fds = [os.open(os.devnull, os.O_RDWR) for x in xrange(2)]
# save the current file descriptors to a tuple
save = os.dup(1), os.dup(2)
# put /dev/null fds on 1 and 2
os.dup2(null_fds[0], 1)
os.dup2(null_fds[1], 2)
# *** run the function ***
fortran_function()
# restore file descriptors so I can print the results
os.dup2(save[0], 1)
os.dup2(save[1], 2)
# close the temporary fds
os.close(null_fds[0])
os.close(null_fds[1])
print "done!"
|
Obfuscate strings in Python
Question: I have a password string that must be passed to a method. Everything works
fine but I don't feel comfortable storing the password in clear text. Is there
a way to obfuscate the string or to truly encrypt it? I'm aware that
obfuscation can be reverse engineered, but I think I should at least try to
cover up the password a bit. At the very least it won't be visible to an
indexing program, or to a stray eye giving a quick look at my code.
I am aware of pyobfuscate but I don't want the whole program obfuscated, just
one string and possibly the whole line itself where the variable is defined.
Target platform is GNU Linux Generic (If that makes a difference)
Answer: If you just want to prevent casually glancing at a password, you may want to
consider encoding/decoding the password to/from
[base64](http://docs.python.org/library/base64.html). It's not secure in the
least, but the password won't be casually human/robot readable.
import base64
# Encode password
encoded_pw = base64.b64encode(raw_pw)
# Decode password
decoded_pw = base64.b64decode(encoded_pw)
|
Python 3 doesn't read unicode file on a new server
Question: My webpages are served by a script that dynamically imports a bunch of files
with
try:
    with open(filename, 'r') as f:
        exec(f.read())
except IOError: pass
(actually, can you suggest a better method of importing a file? I'm sure there
is one.)
Sometimes the files have strings in different languages, like
# contents of language.ru
title = "Название"
Those were all saved as UTF-8 files. Python has no problem running the script
in command line or serving a page from my MacBook:
OK: [server command line] python3.0 page.py /index.ru
OK: http://whitebox.local/index.ru
but it throws an error when trying to serve a page from a server we just moved
to:
157 try:
158 with open (filename, 'r') as f:
159 exec(f.read())
160 except IOError: pass
161
/usr/local/lib/python3.0/io.py in read(self=, n=-1)
...
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe1 in position 627: ordinal not in range(128)
All the files were copied from my laptop where they were perfectly served by
Apache. What is the reason?
**Update:** I found out the default encoding for `open()` is platform-
dependent so it was `utf8` on my laptop and `ascii` on server. I wonder if
there is a per-program function to set it in Python 3
(`sys.setdefaultencoding` is used in `site` module and then deleted from the
namespace).
Answer: Use `open(filename, 'r', encoding='utf8')`. See [Python 3 docs for
open](http://docs.python.org/3.0/library/functions.html#open).
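Applied to the snippet from the question, that would look roughly like:
try:
    with open(filename, 'r', encoding='utf8') as f:
        exec(f.read())
except IOError:
    pass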
|
Django : Timestamp string custom field
Question: I'm trying to create a custom timestamp field.
class TimestampKey(models.CharField):
    __metaclass__ = models.SubfieldBase

    def __init__(self, *args, **kwargs):
        import time
        kwargs['unique'] = True
        kwargs['max_length'] = 20
        kwargs['auto_created'] = True
        kwargs['editable'] = False
        super(TimestampKey, self).__init__(*args, **kwargs)

    def to_python(self, value):
        return value

    def get_db_prep_value(self, value):
        try:
            import time
            t = time.localtime()
            value = reduce(lambda a,b: str(a)+str(b), t)
        except ValueError:
            value = {}
        return value

class Table1(models.Model):
    f = TimestampKey(primary_key=True)
    n = ....
It stores the value with the appropriate timestamp in the db, but it doesn't
populate the field 'f' in the object.
Eg:
t1 = Table1(n="some value")
t1.f -> blank
t1.save()
t1.f -> blank.
This is the problem. Am I missing something, so that it doesn't populate the
field? Please shed some light on this.
Thanks.
Answer: Is it wise to use a timestamp as your primary key? If your database uses ISO
8601 or really any time format in which second is the smallest time
interval... Well, anyway, my point is that you have no guarantee, especially
if this is going to be a web-facing application, that two entries won't
resolve within the same minimum time interval. That is, if the smallest
interval is a second, as in ISO 8601, and you get two requests to save in the
same second, you're going to get an error condition. Why not stick to
automatically incrementing integer keys and just make the timestamp its own
field?
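A hedged sketch of that suggestion (the field names are illustrative): keep the default auto-incrementing integer primary key and store the timestamp in its own column.
from django.db import models

class Table1(models.Model):
    # implicit auto-incrementing integer primary key
    created = models.DateTimeField(auto_now_add=True)  # set once, on insert
    n = models.CharField(max_length=100)               # illustrative data field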
|
an error in taking an input in python
Question: 111111111111111111111111111111111111111111111111111111111111
When I take this as input, it appends an L at the end, like this:
111111111111111111111111111111111111111111111111111111111111L
thus affecting my calculations on it. How can I remove it?
import math
t=raw_input()
l1=[]
a=0
while (str(t)!="" and int(t)!= 0):
l=1
k=int(t)
while(k!= 1):
l=l+1
a=(0.5 + 2.5*(k %2))*k + k % 2
k=a
l1.append(l)
t=raw_input()
a=a+1
for i in range(0,int(a)):
print l1[i]
this is my code and it works for every test case except
111111111111111111111111111111111111111111111111111111111111
so I guess something is wrong when Python handles such a huge number
Answer: It's being input as a Long Integer, which should behave just like any other
number in terms of doing calculations. It's only when you display it using
[`repr`](http://docs.python.org/library/functions.html#repr) (or something
that invokes `repr`, like printing a list) that it gets the 'L'.
What exactly is going wrong?
**Edit** : Thanks for the code. As far as I can see, giving it a long or short
number makes no difference, but it's not really clear what it's supposed to
do.
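A quick interactive illustration of the point above: the trailing L is only part of the repr, not of the value itself.
>>> n = int("1" * 60)
>>> print n                     # str(): no trailing L
111111111111111111111111111111111111111111111111111111111111
>>> n                           # repr(): shows the L
111111111111111111111111111111111111111111111111111111111111L
>>> n + 1 == int("1" * 59 + "2")
True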
|
Counting repeated characters in a string in Python
Question: I want to count the number of times each character is repeated in a string. Is
there any particular way to do it apart from comparing each character of the
string from A-Z and incrementing a counter?
**Update** (in reference to [Anthony's
answer](http://stackoverflow.com/questions/991350/counting-repeated-
characters-in-a-string-in-python/991372#991372)): Whatever you have suggested
till now I have to write 26 times. Is there an easier way?
Answer:
import collections

d = collections.defaultdict(int)
for c in thestring:
    d[c] += 1
A `collections.defaultdict` is like a `dict` (subclasses it, actually), but
when an entry is sought and not found, instead of reporting it doesn't have
it, it makes it and inserts it by calling the supplied 0-argument callable.
Most popular are `defaultdict(int)`, for counting (or, equivalently, to make a
multiset AKA bag data structure), and `defaultdict(list)`, which does away
forever with the need to use `.setdefault(akey, []).append(avalue)` and
similar awkward idioms.
So once you've done this `d` is a dict-like container mapping every character
to the number of times it appears, and you can emit it any way you like, of
course. For example, most-popular character first:
for c in sorted(d, key=d.get, reverse=True):
    print '%s %6d' % (c, d[c])
|
Pylons FormEncode with an array of form elements
Question: I have a Pylons app and am using FormEncode and HtmlFill to handle my forms. I
have an array of text fields in my template (Mako)
<tr>
<td>Yardage</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
</tr>
However, I can't seem to figure out how to validate these fields. Here is the
relevant entry from my Schema
`yardage = formencode.ForEach(formencode.validators.Int())`
I'm trying to validate that each of these fields is an Int. However, no
validation occurs for these fields.
**UPDATE** As requested here is the code for the action of this controller. I
know it was working as I can validate other form fields.
def submit(self):
    schema = CourseForm()
    try:
        c.form_result = schema.to_python(dict(request.params))
    except formencode.Invalid, error:
        c.form_result = error.value
        c.form_errors = error.error_dict or {}
        c.heading = 'Add a course'
        html = render('/derived/course/add.html')
        return htmlfill.render(
            html,
            defaults = c.form_result,
            errors = c.form_errors
        )
    else:
        h.redirect_to(controler='course', action='view')
**UPDATE** It was suggested on IRC that I change the name of the elements from
`yardage[]` to `yardage`. No result. They should all be ints, but putting an
'f' into one of the elements doesn't cause it to be invalid. As I said before,
I am able to validate other form fields. Below is my entire schema.
import formencode

class CourseForm(formencode.Schema):
    allow_extra_fields = True
    filter_extra_fields = True
    name = formencode.validators.NotEmpty(messages={'empty': 'Name must not be empty'})
    par = formencode.ForEach(formencode.validators.Int())
    yardage = formencode.ForEach(formencode.validators.Int())
Answer: Turns out what I wanted to do wasn't quite right.
**Template** :
<tr>
<td>Yardage</td>
% for hole in range(9):
<td>${h.text('hole-%s.yardage'%(hole), maxlength=3, size=3)}</td>
% endfor
</tr>
(Should have made it in a loop to begin with.) You'll notice that the name of
the first element will become `hole-0.yardage`. I will then use
[FormEncode.variabledecode](http://www.formencode.org/en/latest/modules/variabledecode.html)
to turn this into a dictionary. This is done in the
**Schema** :
import formencode

class HoleSchema(formencode.Schema):
    allow_extra_fields = False
    yardage = formencode.validators.Int(not_empty=True)
    par = formencode.validators.Int(not_empty=True)

class CourseForm(formencode.Schema):
    allow_extra_fields = True
    filter_extra_fields = True
    name = formencode.validators.NotEmpty(messages={'empty': 'Name must not be empty'})
    hole = formencode.ForEach(HoleSchema())
The HoleSchema will validate that `hole-#.par` and `hole-#.yardage` are both
ints and are not empty. `formencode.ForEach` allows me to apply `HoleSchema`
to the dictionary that I get from passing `variable_decode=True` to the
`@validate` decorator.
Here is the `submit` action from my
**Controller** :
@validate(schema=CourseForm(), form='add', post_only=False, on_get=True,
          auto_error_formatter=custom_formatter,
          variable_decode=True)
def submit(self):
    # Do whatever here.
    return 'Submitted!'
Using the `@validate` decorator allows for a much cleaner way to validate and
fill in the forms. The `variable_decode=True` is very important or the
dictionary will not be properly created.
|
Segmentation fault in custom QAbstractItemModel
Question: I've written my own QAbstractItemModel to show a tree in a TreeView. It shows
the top-level items, but when you expand a directory, the app closes and the
following message is written to the console: "Segmentation fault". What am I
doing wrong that is causing this? Here is a simplified version of my code:
#!/usr/bin/env python
import sys
from PyQt4 import QtCore, QtGui
class TreeModel(QtCore.QAbstractItemModel):
NAME = 0
FILEID = QtCore.Qt.UserRole + 1
horizontalHeaderLabels = ["File Name",]
inventory = None
def set_tree(self, inventory, root_item):
self.emit(QtCore.SIGNAL("layoutAboutToBeChanged()"))
self.inventory = inventory
self.id2fileid = []
self.fileid2id = {}
self.dir_children_ids = {}
self.parent_ids = []
# Create internal ids for all items in the tree for use in
# ModelIndex's.
root_fileid = root_item.file_id
self.append_fileid(root_fileid, None)
remaining_dirs = [root_fileid,]
while remaining_dirs:
dir_fileid = remaining_dirs.pop(0)
dir_id = self.fileid2id[dir_fileid]
dir_children_ids = []
for child in inventory[dir_fileid].children:
id = self.append_fileid(child.file_id, dir_id)
dir_children_ids.append(id)
if child.children:
remaining_dirs.append(child.file_id)
if len(self.id2fileid) % 100 == 0:
QtCore.QCoreApplication.processEvents()
self.dir_children_ids[dir_id] = dir_children_ids
self.emit(QtCore.SIGNAL("layoutChanged()"))
def append_fileid(self, fileid, parent_id):
ix = len(self.id2fileid)
self.id2fileid.append(fileid)
self.parent_ids.append(parent_id)
self.fileid2id[fileid] = ix
return ix
def columnCount(self, parent):
if parent.isValid():
return 0
return len(self.horizontalHeaderLabels)
def rowCount(self, parent):
if self.inventory is None:
return 0
parent_id = parent.internalId()
if parent_id not in self.dir_children_ids:
return 0
return len(self.dir_children_ids[parent_id])
def _index(self, row, column, parent_id):
item_id = self.dir_children_ids[parent_id][row]
return self.createIndex(row, column, item_id)
def index(self, row, column, parent = QtCore.QModelIndex()):
if self.inventory is None:
return self.createIndex(row, column, 0)
parent_id = parent.internalId()
return self._index(row, column, parent_id)
def sibling(self, row, column, index):
sibling_id = child.internalId()
if sibling_id == 0:
return QtCore.QModelIndex()
parent_id = self.parent_ids[child_id]
return self._index(row, column, parent_id)
def parent(self, child):
child_id = child.internalId()
if child_id == 0:
return QtCore.QModelIndex()
item_id = self.parent_ids[child_id]
if item_id == 0 :
return self.createIndex(0, 0, item_id)
parent_id = self.parent_ids[item_id]
row = self.dir_children_ids[parent_id].index(item_id)
return self.createIndex(row, 0, item_id)
def hasChildren(self, parent):
if self.inventory is None:
return False
parent_id = parent.internalId()
return parent_id in self.dir_children_ids
def data(self, index, role):
if not index.isValid():
return QtCore.QVariant()
fileid = self.id2fileid[index.internalId()]
if role == self.FILEID:
return QtCore.QVariant(fileid)
item = self.inventory[fileid]
column = index.column()
if column == self.NAME:
if role == QtCore.Qt.DisplayRole:
return QtCore.QVariant(item.file_name)
return QtCore.QVariant()
def flags(self, index):
if not index.isValid():
return QtCore.Qt.ItemIsEnabled
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
def headerData(self, section, orientation, role):
if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
return QtCore.QVariant(self.horizontalHeaderLabels[section])
return QtCore.QVariant()
inventory = {}
class InventoryItem():
def __init__(self, file_id, file_name, children=[]):
self.file_id = file_id
self.file_name = file_name
self.children = children
global inventory
inventory[file_id] = self
root_item = InventoryItem("root-id", "", [
InventoryItem("dir1-id", "dir1", [
InventoryItem("file1-id", "file1")
]),
InventoryItem("file1-id", "file1")
])
app = QtGui.QApplication(sys.argv)
model = TreeModel()
model.set_tree(inventory, root_item)
tree_view = QtGui.QTreeView()
tree_view.setModel(model)
tree_view.show()
app.exec_()
The full version can be found in this branch:
https://code.launchpad.net/~garyvdm/qbzr/trees, in the file lib/browse.py
Answer: Use the modeltest.py module! It exercises your model for different scenarios.
You can find it in the /contrib/ directory inside the PyQt source code
package.
|
Python - Print on stdout on a "terminal"
Question: Before starting, I ask you all to excuse the question. Maybe it is
stupid, but I cannot find a solution. I am working on a remote machine, and
have no idea what type it is.
My Python code, which seems to work, is the one below. The problem is that I am
trying to print some output on the screen but nothing happens. I have tried
both print and raw_input but nothing happens... Do you know any other way to
do it?
# Set up fields of reply message based on query
def prepareReply():
    global authorReply, authorReplyLen, localConvId, originConvId, blbContentAndUntUnz, linkName
    print "PLOP!"
    raw_input("blabla")
    #print "="*10
Thanks !
Answer:
import sys
print "Hi!"
sys.stdout.flush()
|
Command Line Arguments In Python
Question: I am originally a C programmer. I have seen numerous tricks and "hacks" to
read many different arguments.
What are some of the ways Python programmers can do this?
### Related
* [What’s the best way to grab/parse command line arguments passed to a Python script?](http://stackoverflow.com/questions/20063/whats-the-best-way-to-grab-parse-command-line-arguments-passed-to-a-python-scrip)
* [Implementing a “[command] [action] [parameter]” style command-line interfaces?](http://stackoverflow.com/questions/362426/implementing-a-command-action-parameter-style-command-line-interfaces)
* [How can I process command line arguments in Python?](http://stackoverflow.com/questions/567879/how-can-i-process-command-line-arguments-in-python)
* [How do I format positional argument help using Python’s optparse?](http://stackoverflow.com/questions/642648/how-do-i-format-positional-argument-help-using-pythons-optparse)
Answer:
import sys
print "\n".join(sys.argv)
`sys.argv` is a list that contains all the arguments passed to the script on
the command line.
Basically,
import sys
print sys.argv[1:]
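For anything beyond a couple of positional arguments, the standard library's optparse (the usual choice in the Python 2.x era this question dates from) saves a lot of hand-rolled parsing. A minimal sketch with illustrative options:
from optparse import OptionParser

parser = OptionParser(usage="usage: %prog [options] filename")
parser.add_option("-v", "--verbose", action="store_true", default=False,
                  help="print extra output")
parser.add_option("-n", "--count", type="int", default=1,
                  help="number of repetitions")
options, args = parser.parse_args()

print options.verbose, options.count, args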
|
Bash or Python to go backwards?
Question: I have a text file with a lot of random occurrences of the string @STRING_A,
and I would be interested in writing a short script which removes only some of
them. Particularly one that scans the file and once it finds a line which
starts with this string like
@STRING_A
then checks if 3 lines backwards there is another occurrence of a line
starting with the same string, like
@STRING_A
@STRING_A
and if it happens, to delete the occurrence 3 lines backward. I was thinking
about bash, but I do not know how to "go backwards" with it. So I am sure that
this is not possible with bash. I also thought about python, but then I should
store all information in memory in order to go backwards and then, for long
files it would be unfeasible.
What do you think? Is it possible to do it in bash or python?
Thanks
Answer: Funny that after all these hours nobody's yet given a solution to the problem
as actually phrased (as @John Machin points out in a comment) -- remove just
the leading marker (if followed by another such marker 3 lines down), not the
whole line containing it. It's not hard, of course -- here's a tiny mod as
needed of @truppo's fun solution, for example:
from itertools import izip, chain

f = "foo.txt"
for third, line in izip(chain(" ", open(f)), open(f)):
    if third.startswith("@STRING_A") and line.startswith("@STRING_A"):
        line = line[len("@STRING_A"):]
    print line,
Of course, in real life, one would use `itertools.tee` instead of reading
the file twice, have this code in a function, not repeat the marker constant
endlessly, etc. ;-)
|
Python Authentication with urllib2
Question: So I'm trying to download a file from a site called vsearch.cisco.com with
python
#Connects to the Cisco Server and Downloads files at the URL specified
import urllib2

#Define Useful Variables
url = 'http://vsearch.cisco.com'
username = 'xxxxxxxx'
password = 'xxxxxxxx'
realm = 'CEC'

# Begin Making connection
# Create a Handler -- Also could be where the error lies
handler = urllib2.HTTPDigestAuthHandler()
handler.add_password(realm,url,username,password)

# Create an Opener
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)

try:
    urllib2.urlopen(url)
    print f.read()
except urllib2.HTTPError, e:
    print e.code
    print e.header
My error is: ValueError: AbstractDigestAuthHandler doesn't know about basic
I've tried using Basic HTTP Authorization handlers and even HTTPS handlers.
Nothing gives me access. This error is different from all the other errors,
however; the other errors are simply 401 HTTP errors.
Any suggestions on how to do this?
Answer: A "password manager" might help:
mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, url, user, password)
urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr),
urllib2.HTTPDigestAuthHandler(mgr))
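Then install and use the opener roughly as before (a sketch continuing the snippet above):
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr),
                              urllib2.HTTPDigestAuthHandler(mgr))
urllib2.install_opener(opener)
f = urllib2.urlopen(url)
print f.read()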
|
How many numbers below N are coprimes to N?
Question: ## In short:
Given that **a** is coprime to **b** if **GCD(a,b) = 1** (where GCD stands for
[greatest common
divisor](http://en.wikipedia.org/wiki/Greatest%5Fcommon%5Fdivisor)), how many
positive integers below N are coprime to N?
Is there a clever way?
* * *
## Not necessary stuff
Here is the dumbest way:
def count_coprime(N):
    counter = 0
    for n in xrange(1,N):
        if gcd(n,N) == 1:
            counter += 1
    return counter
It works, but it is slow, and dumb. I'd like to use a clever and faster
algorithm. I tried to use prime factors and divisors of N but I always get
something that doesn't work with larger N.
~~I think the algorithm should be able to count them without calculating all
of them like the dumbest algorithm does :P~~
## Edit
It seems I've found a working one:
def a_bit_more_clever_counter(N):
    result = N - 1
    factors = []
    for factor, multiplicity in factorGenerator(N):
        result -= N/factor - 1
        for pf in factors:
            if lcm(pf, factor) < N:
                result += N/lcm(pf, factor) - 1
        factors += [factor]
    return result
where lcm is least common multiple. Does anyone have a better one?
## Note
I'm using python, I think code should be readable even to who doesn't know
python, if you find anything that is not clear just ask in the comments. I'm
interested in the algorithm and the math, the idea.
Answer: **[Edit]** One last thought, which (IMO) is important enough that I'll put it
at the beginning: if you're collecting a bunch of totients at once, you can
avoid a lot of redundant work. Don't bother starting from large numbers to
find their smaller factors -- instead, iterate over the smaller factors and
accumulate results for the larger numbers.
class Totient:
    def __init__(self, n):
        self.totients = [1 for i in range(n)]
        for i in range(2, n):
            if self.totients[i] == 1:
                for j in range(i, n, i):
                    self.totients[j] *= i - 1
                    k = j / i
                    while k % i == 0:
                        self.totients[j] *= i
                        k /= i
    def __call__(self, i):
        return self.totients[i]

if __name__ == '__main__':
    from itertools import imap
    totient = Totient(10000)
    print sum(imap(totient, range(10000)))
This takes just 8ms on my desktop.
* * *
The Wikipedia page on the [Euler totient
function](http://en.wikipedia.org/wiki/Euler_totient_function) has some nice
mathematical results.
\sum_{d \mid n} \varphi(d) counts the numbers coprime to and smaller than each divisor of
n: this has a trivial* mapping to counting the integers from 1 to n, so
the sum total is n.
_* by the second definition of_ [trivial](http://meshula.net/wordpress/?p=294)
This is perfect for an application of the [Möbius inversion
formula](http://en.wikipedia.org/wiki/M%C3%B6bius_inversion_formula), a clever
trick for inverting sums of this exact form.
\varphi(n) = \sum_{d \mid n} d \cdot \mu\left(\frac{n}{d}\right)
This leads naturally to the code
def totient(n):
    if n == 1: return 1
    return sum(d * mobius(n / d) for d in range(1, n+1) if n % d == 0)

def mobius(n):
    result, i = 1, 2
    while n >= i:
        if n % i == 0:
            n = n / i
            if n % i == 0:
                return 0
            result = -result
        i = i + 1
    return result
There exist better implementations of the [Möbius
function](http://en.wikipedia.org/wiki/M%C3%B6bius_function), and it could be
memoized for speed, but this should be easy enough to follow.
The more obvious computation of the totient function is
\varphi\left(p_1^{k_1} \dots p_r^{k_r}\right) = (p_1 - 1) p_1^{k_1 - 1} \dots (p_r - 1) p_r^{k_r - 1} = p_1^{k_1} \dots p_r^{k_r} \prod_{i=1}^r \left(1 - \frac{1}{p_i}\right)
In other words, fully factor the number into unique primes and exponents, and
do a simple multiplication from there.
from operator import mul

def totient(n):
    return int(reduce(mul, (1 - 1.0 / p for p in prime_factors(n)), n))

def primes_factors(n):
    i = 2
    while n >= i:
        if n % i == 0:
            yield i
            n = n / i
            while n % i == 0:
                n = n / i
        i = i + 1
Again, there exist better implementations of `prime_factors`, but this is
meant for easy reading.
* * *
`# helper functions`
from collections import defaultdict
from itertools import count
from operator import mul
def gcd(a, b):
while a != 0: a, b = b % a, a
return b
def lcm(a, b): return a * b / gcd(a, b)
primes_cache, prime_jumps = [], defaultdict(list)
def primes():
prime = 1
for i in count():
if i < len(primes_cache): prime = primes_cache[i]
else:
prime += 1
while prime in prime_jumps:
for skip in prime_jumps[prime]:
prime_jumps[prime + skip] += [skip]
del prime_jumps[prime]
prime += 1
prime_jumps[prime + prime] += [prime]
primes_cache.append(prime)
yield prime
def factorize(n):
for prime in primes():
if prime > n: return
exponent = 0
while n % prime == 0:
exponent, n = exponent + 1, n / prime
if exponent != 0:
yield prime, exponent
`# OP's first attempt`
def totient1(n):
counter = 0
for i in xrange(1, n):
if gcd(i, n) == 1:
counter += 1
return counter
`# OP's second attempt`
# I don't understand the algorithm, and just copying it yields inaccurate results
`# Möbius inversion`
def totient2(n):
if n == 1: return 1
return sum(d * mobius(n / d) for d in xrange(1, n+1) if n % d == 0)
mobius_cache = {}
def mobius(n):
result, stack = 1, [n]
for prime in primes():
if n in mobius_cache:
result = mobius_cache[n]
break
if n % prime == 0:
n /= prime
if n % prime == 0:
result = 0
break
stack.append(n)
if prime > n: break
for n in stack[::-1]:
mobius_cache[n] = result
result = -result
return -result
`# traditional formula`
def totient3(n):
return int(reduce(mul, (1 - 1.0 / p for p, exp in factorize(n)), n))
`# traditional formula, no division`
def totient4(n):
return reduce(mul, ((p-1) * p ** (exp-1) for p, exp in factorize(n)), 1)
Using this code to calculate the totients of all numbers from 1 to 9999 on my
desktop, averaging over 5 runs,
* `totient1` takes forever
* `totient2` takes 10s
* `totient3` takes 1.3s
* `totient4` takes 1.3s
|
How to replace a column using Python's built-in .csv writer module?
Question: I need to do a find and replace (specific to one column of URLs) in a huge
Excel .csv file. Since I'm in the beginning stages of trying to teach myself a
scripting language, I figured I'd try to implement the solution in python.
I'm having trouble with the "replace" part of the solution. I've read the
[official csv module documentation](http://docs.python.org/library/csv.html)
about how to use the writer, but there isn't really a clear enough example for
me (yes, I'm slow). So, now for the question: how does one iterate through the
rows of a csv file with a writer object?
p.s. apologies in advance for the clumsy code, I'm still learning :)
import csv
csvfile = open("PALTemplateData.csv")
csvout = open("PALTemplateDataOUT.csv")
dialect = csv.Sniffer().sniff(csvfile.read(1024))
csvfile.seek(0)
reader = csv.reader(csvfile, dialect)
writer = csv.writer(csvout, dialect)
total=0;
needchange=0;
changed = 0;
temp = ''
changeList = []
for row in reader:
    total=total+1
    temp = row[len(row)-1]
    if '/?' in temp:
        needchange=needchange+1;
        changeList.append(row.index)

for row in writer: #this doesn't compile, hence the question
    if row.index in changeList:
        changed=changed+1
        temp = row[len(row)-1]
        temp.replace('/?', '?')
        row[len(row)-1] = temp
    writer.writerow(row)
print('Total URLs:', total)
print('Total URLs to change:', needchange)
print('Total URLs changed:', changed)
Answer: The reason you're getting an error is that the writer doesn't have data to
iterate over. You're supposed to give it the data - presumably, you'd have
some sort of list or generator that produces the rows to write out.
I'd suggest just combining the two loops, like so:
for row in reader:
    row[-1] = row[-1].replace('/?', '?')
    writer.writerow(row)
And with that, you don't even need `total`, `needchange`, and `changeList`.
(There are a bunch of optimizations in there that I unfortunately don't have
time to explain, but I'll see if I can edit that info in later)
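Putting it together, a minimal sketch of the whole script; note that in Python 2 the csv module wants the files opened in binary mode (in Python 3 you would pass newline='' instead):
import csv

csvfile = open("PALTemplateData.csv", "rb")
csvout = open("PALTemplateDataOUT.csv", "wb")

dialect = csv.Sniffer().sniff(csvfile.read(1024))
csvfile.seek(0)

reader = csv.reader(csvfile, dialect)
writer = csv.writer(csvout, dialect)

for row in reader:
    row[-1] = row[-1].replace('/?', '?')   # fix the last column (the URL)
    writer.writerow(row)

csvfile.close()
csvout.close()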
|
Looping Fget with fsockopen in PHP 5.x
Question: I have a Python server finally working and responding to multiple commands
with their outputs; however, I'm now having problems with PHP receiving the
full output. I have tried commands such as fgets and fread; the only command
that seems to work is fgets.
However, this only receives one line of data, so I then created a while
statement, shown below:
while (!feof($handle)) {
    $buffer = fgets($handle, 4096);
    echo $buffer;
}
However, it seems the Python server never signals EOF at the end of the
output, so the PHP page times out and does not display anything. Like I said
above, just running echo fgets($handle) works fine and outputs one line;
running the command again underneath it will display the next line, etc.
I have attached the important part of my Python script below:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("", port))
s.listen(5)
print "OK."
print "  Listening on port:", port

import subprocess

while 1:
    con, addr = s.accept()
    while True:
        datagram = con.recv(1024)
        if not datagram:
            break
        print "Rx Cmd:", datagram
        print "Launch:", datagram
        process = subprocess.Popen(datagram+" &", shell=True, stdout=subprocess.PIPE)
        stdout, stderr = process.communicate()
        con.send(stdout)
    con.close()
s.close()
I have also attached the full PHP script:
<?php
$handle = fsockopen("tcp://xxx.xxx.xxx.xxx",12345);
fwrite($handle,"ls");
echo fgets($handle);
fclose($handle);
?>
Thanks, Ashley
Answer: I believe you need to fix your server code a bit. I have removed the inner
while loop. The problem with your code was that the server never closed the
connection, so `feof` never returned true.
I also removed the `+ " &"` bit. To get the output, you need to wait until the
process ends anyway. And I am not sure how the shell would handle the `&` in
this case.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("", port))
s.listen(5)
print "OK."
print "  Listening on port:", port

import subprocess

try:
    while 1:
        con, addr = s.accept()
        try:
            datagram = con.recv(1024)
            if not datagram:
                continue
            print "Rx Cmd:", datagram
            print "Launch:", datagram
            process = subprocess.Popen(datagram, shell=True, stdout=subprocess.PIPE)
            stdout, stderr = process.communicate()
            con.send(stdout)
        finally:
            print "closing connection"
            con.close()
except KeyboardInterrupt:
    pass
finally:
    print "closing socket"
    s.close()
BTW, you do need to use the while loop in your PHP script; `fgets` returns
only a single line at a time.
|
How do I set up a basic website with registration in Python on Dreamhost?
Question: I need to write a basic website on Dreamhost. It needs to be done in Python. I
discovered Dreamhost permits me to write .py files, and read them.
### Example:
#!/usr/bin/python
print "Content-type: text/html\n\n"
print "hello world"
So now I am looking for a basic framework, or a set of files, where the whole
registration flow has already been programmed, so I can kick off the project
in a simple way. By registration I mean the files to register a new account,
log in, verify the email (by sending a mail), and edit the user information.
All this possibly using MySQL.
Answer: Let me share my own experience with django. My prerequisites:
* average knowledge of python
* very weak idea of how web works (no js skills, just a bit of css)
* my day job is filled with coding in C and I just wanted to try something different, so there certainly was a passion to learn (I think this is the most important one)
Why I've chosen django:
* I've already knew bits and pieces of python
* django has excelent documentation, including tutorial, which explained everything in very clear and simple manner
It is worth reading the complete [manual](http://docs.djangoproject.com/en/dev/)
first (it took me two or three weekends). I remember I could not
remember/understand everything at the first pass, but it helped me to learn where
the information can be found when needed. There is also another source of
documentation called [djangobook](http://www.djangobook.com/ "djangobook").
Djangobook contains the same information as the manual, but things are explained
in more detail. It's worth reading as well; it helps to catch up with the MVC
concept, if you have not tried that before.
And finally to answer your question best: there are already also
[OpenId](http://openid.net) modules ready for you. I'm considering to use
[django-authopenid](http://bitbucket.org/benoitc/django-authopenid/wiki/Home)
for my new project. It supports OpenId, while providing fallback to locally
managed users.
There is a certain learning curve if you are going to learn django. The more you
know about the web and Python, the steeper the curve is. I had to also learn
bits and pieces of javascript, and that took me some time as well. If you are able
to spend full time learning django, then you can expect to be able to
deliver first results within 4-6 weeks. It took me 6 months, since I was doing
my django studies in my free time.
|
Formatting cells in Excel with Python
Question: How do I format cells in Excel with python?
In particular I need to change the font of several subsequent rows to be
regular instead of bold.
Thank you,
Alex
Answer: Using [xlwt](http://pypi.python.org/pypi/xlwt):
from xlwt import *
font0 = Font()
font0.bold = False
style0 = XFStyle()
style0.font = font0
wb = Workbook()
ws0 = wb.add_sheet('0')
ws0.write(0, 0, 'myNormalText', style0)
font1 = Font()
font1.bold = True
style1 = XFStyle()
style1.font = font1
ws0.write(0, 1, 'myBoldText', style1)
wb.save('format.xls')
|
Flash Characters on Screen in Linux
Question: I have XFCE 4.6 on kernel 2.6. Is there a quick and easy way to flash a
message on the screen for a few seconds?
My ThinkPad T60 has 3 volume buttons (up, down, mute). When I press the
buttons, I would like to flash the volume on the screen for a second.
Can it be done with Python?
Answer: [notification-daemon-
xfce](http://goodies.xfce.org/projects/applications/notification-daemon-xfce)
allows [libnotify](http://www.galago-project.org/) clients to show brief
messages in XFCE. libnotify has [Python bindings](http://www.galago-
project.org/files/releases/source/notify-python/) available.
As an untested example,
import pynotify
import sys
pynotify.init(sys.argv[0])
notification = pynotify.Notification("Title", "body", "dialog-info")
notification.set_urgency(pynotify.URGENCY_NORMAL)
notification.set_timeout(pynotify.EXPIRES_DEFAULT)
notification.show()
|
Reading binary file in Python and looping over each byte
Question: In Python, how do I read in a binary file and loop over each byte of that
file?
Answer:
f = open("myfile", "rb")
try:
byte = f.read(1)
while byte != "":
# Do stuff with byte.
byte = f.read(1)
finally:
f.close()
As suggested by chrispy:
with open("myfile", "rb") as f:
byte = f.read(1)
while byte != "":
# Do stuff with byte.
byte = f.read(1)
Note that the with statement is not available in versions of Python below 2.5.
To use it in v 2.5 you'll need to import it:
from __future__ import with_statement
In 2.6 this is not needed.
In Python 3, it's a bit different: reading in binary mode no longer returns raw
characters from the stream but bytes objects, so we need to alter the condition:
with open("myfile", "rb") as f:
byte = f.read(1)
while byte != b"":
# Do stuff with byte.
byte = f.read(1)
Or as benhoyt says, skip the not equal and take advantage of the fact that
`b""` evaluates to false. This makes the code compatible between 2.6 and 3.x
without any changes. It would also save you from changing the condition if you
go from byte mode to text or the reverse.
with open("myfile", "rb") as f:
byte = f.read(1)
while byte:
# Do stuff with byte.
byte = f.read(1)
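As a further alternative (not from the original answer), the two-argument form of the built-in `iter()` expresses the same loop without repeating the read call:

    with open("myfile", "rb") as f:
        # iter(callable, sentinel) keeps calling f.read(1) until it returns b"" (EOF).
        for byte in iter(lambda: f.read(1), b""):
            # Do stuff with byte.
            pass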
|
In Python, how do I easily generate an image file from some source data?
Question: I have some some data that I would like to visualize. Each byte of the source
data roughly corresponds to a pixel value of the image.
What is the easiest way to generate an image file (bitmap?) using Python?
Answer: You can create images with a list of pixel values using
[Pillow](http://python-pillow.github.io/):
from PIL import Image
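    # 'width' and 'height' are the image dimensions in pixels; 'my_list' must be
    # a flat list of width*height pixel values (e.g. (R, G, B) tuples), row by row.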
img = Image.new('RGB', (width, height))
img.putdata(my_list)
img.save('image.png')
|
wx.Panel scales to fit entire parent Frame despite giving it a size
Question: Hi, I am a newbie to wxpython. I am trying to have a Frame and within that a small
panel area which I am coloring blue. However, no matter what I do with the wx.Panel's
size attribute, the single panel snaps to the size of its parent
frame. If I add another panel (pane2 in the code below) both panes are drawn in
the correct size.
I know I can control these panels using sizers. But I was trying to
understand why the wx.Panel object behaves the way it does when it's all alone.
Here is the code:
import wx
class PlateGui(wx.Frame):
def __init__(self, *args , **kwds):
self.frame = wx.Frame.__init__(self,*args, **kwds)
print "Made frame"
if __name__ == "__main__":
an_app = wx.PySimpleApp()
aframe = PlateGui(parent=None,id=-1,title="Test Frame",size=(300, 300))
pane = wx.Panel(parent=aframe,size=(100,100),style=wx.RAISED_BORDER)
pane.SetBackgroundColour(wx.Colour(0,0,255))
# pane2 = wx.Panel(parent=aframe,size=(200,100),style=wx.RAISED_BORDER)
# Commenting out the second pane makes the first pane fit
# entire frame regardless of size specified
aframe.Show()
an_app.MainLoop()
Answer: By default, wx.Frame has a sizer that expands its child to fill the frame.
Create your own sizer, add the panel to it (without specifying expand flags)
and set that as the frame's sizer.
import wx
app = wx.PySimpleApp()
frame = wx.Frame(None, -1, 'Test')
sizer = wx.BoxSizer(wx.VERTICAL)
panel = wx.Panel(frame, -1, size=(100,100), style=wx.BORDER_RAISED)
sizer.Add(panel)
frame.SetSizer(sizer)
frame.Show()
app.MainLoop()
|
Django unit testing with date/time-based objects
Question: Suppose I have the following `Event` model:
from django.db import models
import datetime
class Event(models.Model):
date_start = models.DateField()
date_end = models.DateField()
def is_over(self):
return datetime.date.today() > self.date_end
I want to test `Event.is_over()` by creating an Event that ends in the future
(today + 1 or something), and stubbing the date and time so the system thinks
we've reached that future date.
I'd like to be able to stub ALL system time objects as far as python is
concerned. This includes `datetime.date.today()`, `datetime.datetime.now()`,
and any other standard date/time objects.
What's the standard way to do this?
Answer: **EDIT** : Since my answer is the accepted answer here I'm updating it to let
everyone know a better way has been created in the meantime, the freezegun
library: <https://pypi.python.org/pypi/freezegun>. I use this in all my
projects when I want to influence time in tests. Have a look at it.
**Original answer:**
Replacing internal stuff like this is always dangerous because it can have
nasty side effects. So what you indeed want, is to have the monkey patching be
as local as possible.
We use Michael Foord's excellent mock library:
<http://www.voidspace.org.uk/python/mock/> that has a `@patch` decorator which
patches certain functionality, but the monkey patch only lives in the scope of
the testing function, and everything is automatically restored after the
function runs out of its scope.
The only problem is that the internal `datetime` module is implemented in C,
so by default you won't be able to monkey patch it. We fixed this by making
our own simple implementation which _can_ be mocked.
The total solution is something like this (the example is a validator function
used within a Django project to validate that a date is in the future). Mind
you I took this from a project but took out the non-important stuff, so things
may not actually work when copy-pasting this, but you get the idea, I hope :)
First we define our own very simple implementation of `datetime.date.today` in
a file called `utils/date.py`:
import datetime
def today():
return datetime.date.today()
Then we create the unittest for this validator in `tests.py`:
import datetime
import mock
from unittest2 import TestCase
from django.core.exceptions import ValidationError
from .. import validators
class ValidationTests(TestCase):
@mock.patch('utils.date.today')
def test_validate_future_date(self, today_mock):
# Pin python's today to returning the same date
# always so we can actually keep on unit testing in the future :)
today_mock.return_value = datetime.date(2010, 1, 1)
# A future date should work
validators.validate_future_date(datetime.date(2010, 1, 2))
# The mocked today's date should fail
with self.assertRaises(ValidationError) as e:
validators.validate_future_date(datetime.date(2010, 1, 1))
self.assertEquals([u'Date should be in the future.'], e.exception.messages)
# Date in the past should also fail
with self.assertRaises(ValidationError) as e:
validators.validate_future_date(datetime.date(2009, 12, 31))
self.assertEquals([u'Date should be in the future.'], e.exception.messages)
The final implementation looks like this:
from django.utils.translation import ugettext_lazy as _
from django.core.exceptions import ValidationError
from utils import date
def validate_future_date(value):
if value <= date.today():
raise ValidationError(_('Date should be in the future.'))
Hope this helps
|
MS Access library for python
Question: Is there a library for using MS Access database in python? The win32 module is
not as easy as the MySQL library. Is there a simpler way to use MS Access with
Python?
Answer: Depending on what you want to do,
[pyodbc](https://github.com/mkleehammer/pyodbc) might be what you are looking
for.
import pyodbc
db_file = r'''C:\x.mdb'''
user = 'admin'
password = ''
odbc_conn_str = 'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=%s;UID=%s;PWD=%s' %\
(db_file, user, password)
# Or, for newer versions of the Access drivers:
odbc_conn_str = 'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s;UID=%s;PWD=%s' %\
(db_file, user, password)
conn = pyodbc.connect(odbc_conn_str)
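Once connected, usage follows the standard DB-API pattern. As a rough sketch (the table and column names below are made up for illustration, not taken from the question):

    cursor = conn.cursor()
    cursor.execute('SELECT customer_id, customer_name FROM customers')
    for row in cursor.fetchall():
        print row.customer_id, row.customer_name
    cursor.close()
    conn.close()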
|
wxPython SplitterWindow does not expand within a Panel
Question: I'm trying a simple layout and the panel divided by a SplitterWindow doesn't
expand to fill the whole area, what I want is this:
[button] <= (fixed size)
---------
TEXT AREA }
~~~~~~~~~ <= (this is the splitter) } this is a panel
TEXT AREA }
The actual code is:
import wx
app = wx.App()
frame = wx.Frame(None, wx.ID_ANY, "Register Translator")
parseButton = wx.Button(frame, label="Parse")
panel = wx.Panel(frame)
panel.SetBackgroundColour("BLUE")
splitter = wx.SplitterWindow(panel)
inputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
outputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
splitter.SplitHorizontally(inputArea, outputArea)
sizer=wx.BoxSizer(wx.VERTICAL)
sizer.Add(parseButton, 0, wx.ALIGN_CENTER)
sizer.Add(panel, 1, wx.EXPAND | wx.ALL)
frame.SetSizerAndFit(sizer)
frame.SetAutoLayout(1)
frame.Show(True)
app.MainLoop()
I set the panel color to something different, and the panel is actually using the whole
area, so the problem is just the SplitterWindow within the Panel, not the Panel
within the BoxSizer.
Any ideas about why it isn't working? Thanks!
Answer: The Panel is probably expanding but the SplitterWindow within the Panel is
not, because you aren't using a sizer for the panel, only for the frame.
You could also try making the SplitterWindow a child of the frame directly,
without the intermediate panel.
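For example, here is a minimal, untested sketch of the sizer approach, adapted from the code in the question:

    import wx

    app = wx.App()
    frame = wx.Frame(None, wx.ID_ANY, "Register Translator")

    parseButton = wx.Button(frame, label="Parse")
    panel = wx.Panel(frame)

    splitter = wx.SplitterWindow(panel)
    inputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
    outputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
    splitter.SplitHorizontally(inputArea, outputArea)

    # Give the panel its own sizer so the splitter expands to fill it.
    panelSizer = wx.BoxSizer(wx.VERTICAL)
    panelSizer.Add(splitter, 1, wx.EXPAND | wx.ALL)
    panel.SetSizer(panelSizer)

    # The frame's sizer lays out the button and the panel as before.
    frameSizer = wx.BoxSizer(wx.VERTICAL)
    frameSizer.Add(parseButton, 0, wx.ALIGN_CENTER)
    frameSizer.Add(panel, 1, wx.EXPAND | wx.ALL)
    frame.SetSizerAndFit(frameSizer)

    frame.Show(True)
    app.MainLoop()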
|
Importing database data into Joomla
Question: How to import data from a database to Joomla CMS?
I have a database with lots of data I want to use in my new website. An ideal
solution for me would be a Python/Perl/PHP API that would know how to do
Joomla' basic routines:
1. adding/removing a section/category/material/menu/module;
2. changing properties of existing entities
Answer: You could try the following extensions:
1. [Bulk Import](http://extensions.joomla.org/extensions/migration-&-conversion/data-import-&-export/7243/details)
2. [CSV Import](http://extensions.joomla.org/extensions/migration-&-conversion/extensions-migration/4247/details)
If that doesn't work for you, maybe take a look at the [Joomla
API](http://api.joomla.org/)
|
formencode invalid return type
Question: If an exception occurs in formencode, what will be the return type?
Suppose:
if(request.POST):
formvalidate = ValidationRule()
try:
new = formvalidate.to_python(request.POST)
data = Users1( n_date = new['n_date'], heading = new['heading'],
desc = new['desc'], link = new['link'],
module_name = new['module_name'] )
session.add(data)
session.commit()
except formencode.Invalid, e:
errors = e
How can we find the field-wise errors?
Answer: I assume you are using formencode (<http://formencode.org>).
You can use unpack_errors to get per-field errors, e.g.
import formencode
from formencode import validators
class UserForm(formencode.Schema):
first_name = validators.String(not_empty=True)
last_name = validators.String(not_empty=True)
form = UserForm()
try:
form.to_python({})
except formencode.Invalid,e:
print e.unpack_errors()
This will print a dict of errors per field.
You can also use formencode.htmlfill.render to render all the errors in different
ways; see <http://formencode.org/htmlfill.html#errors>
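As a rough sketch of the htmlfill approach (the form HTML here is only a placeholder for illustration):

    from formencode import htmlfill

    form_html = '<form><input type="text" name="first_name"></form>'
    form = UserForm()
    try:
        form.to_python({})
    except formencode.Invalid, e:
        # Re-render the form with the error messages inserted next to each field.
        print htmlfill.render(form_html, defaults={}, errors=e.unpack_errors())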
|
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm
Question: This question relates to those parts of the KenKen Latin Square puzzles which
ask you to find all possible combinations of ncells numbers with values x such
that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested
several of the more promising answers, I'm going to award the answer-prize to
Lennart Regebro, because:
1. his routine is as fast as mine (+-5%), and
2. he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.
chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs
later, sooo, first to the wire gets it.
A remark: Alex Martelli's bare-bones recursive algorithm is an example of
making every possible combination and throwing them all at a sieve and seeing
which go through the holes. This approach takes 20+ times longer than
Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5,
target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not
generating every possible combination is good.
Another remark: Lennart's and my routines generate **the same answers in the
same order**. Are they in fact the same algorithm seen from different angles?
I don't know.
Something occurs to me. If you sort the answers, starting, say, with
(8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8,
n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with
the first ones being "hot" and the last one being "cold" and the greatest
possible number of stages in between. Is this related to "informational
entropy"? What's the proper metric for looking at it? Is there an algorithm
that producs the combinations in descending (or ascending) order of heat?
(This one doesn't, as far as I can see, although it's close over short
stretches, looking at normalized std. dev.)
Here's the Python routine:
#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
               if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
If none found, (that is, either no step or single step of 1), process
done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
Answer: Your algorithm seems pretty good at first blush, and I don't think OO or
another language would improve the code. I can't say if recursion would have
helped but I admire the non-recursive approach. I bet it was harder to get
working and it's harder to read but it likely is more efficient and it's
definitely quite clever. To be honest I didn't analyze the algorithm in detail
but it certainly looks like something that took a long while to get working
correctly. I bet there were lots of off-by-1 errors and weird edge cases you
had to think through, eh?
Given all that, basically all I tried to do was pretty up your code as best I
could by replacing the numerous C-isms with more idiomatic Python-isms. Often
times what requires a loop in C can be done in one line in Python. Also I
tried to rename things to follow Python naming conventions better and cleaned
up the comments a bit. Hope I don't offend you with any of my changes. You can
take what you want and leave the rest. :-)
Here are the notes I took as I worked:
* Changed the code that initializes `tmp` to a bunch of 1's to the more idiomatic `tmp = [1] * n_cells`.
* Changed `for` loop that sums up `tmp_sum` to idiomatic `sum(tmp)`.
* Then replaced all the loops with a `tmp = <list> + <list>` one-liner.
* Moved `raise doneException` to `init_tmp_new_ceiling` and got rid of the `succeeded` flag.
* The check in `init_tmp_new_ceiling` actually seems unnecessary. Removing it, the only `raise`s left were in `make_combos_n_cells`, so I just changed those to regular returns and dropped `doneException` entirely.
* Normalized mix of 4 spaces and 8 spaces for indentation.
* Removed unnecessary parentheses around your `if` conditions.
* `tmp[p2] - tmp[p1] == 0` is the same thing as `tmp[p2] == tmp[p1]`.
* Changed `while True: if new_ceiling_flag: break` to `while not new_ceiling_flag`.
* You don't need to initialize variables to 0 at the top of your functions.
* Removed `combos` list and changed function to `yield` its tuples as they are generated.
* Renamed `tmp` to `combo`.
* Renamed `new_ceiling_flag` to `ceiling_changed`.
And here's the code for your perusal:
def initial_combo(ceiling=5, target_sum=13, num_cells=4):
"""
Returns a list of possible addends, probably to be modified further.
Starts a new combo list, then, starting from left, fills items to ceiling
or intermediate between 1 and ceiling or just 1. E.g.:
Given ceiling = 5, target_sum = 13, num_cells = 4: creates [5,5,2,1].
"""
num_full_cells = (target_sum - num_cells) // (ceiling - 1)
combo = [ceiling] * num_full_cells \
+ [1] * (num_cells - num_full_cells)
if num_cells > num_full_cells:
combo[num_full_cells] += target_sum - sum(combo)
return combo
def all_combos(ceiling, target_sum, num_cells):
# p0 points at the rightmost item and moves left under some conditions
# p1 starts out at rightmost items and steps left
# p2 starts out immediately to the left of p1 and steps left as p1 does
# So, combo[p2] and combo[p1] always point at a pair of adjacent items.
# d combo[p2] - combo[p1]; immediate difference
# cd combo[p2] - combo[p0]; cumulative difference
# The ceiling decreases by 1 each iteration.
while True:
combo = initial_combo(ceiling, target_sum, num_cells)
yield tuple(combo)
ceiling_changed = False
# Generate all of the remaining combos with this ceiling.
while not ceiling_changed:
p2, p1, p0 = -2, -1, -1
while combo[p2] == combo[p1] and abs(p2) <= num_cells:
# 3,3,3,3
if abs(p2) == num_cells:
return
p2 -= 1
p1 -= 1
p0 -= 1
cd = 0
# slide_ptrs_left loop
while abs(p2) <= num_cells:
d = combo[p2] - combo[p1]
cd += d
# 5,5,3,3 or 5,5,4,3
if cd > 1:
if abs(p2) < num_cells:
# 5,5,3,3 --> 5,4,4,3
if d > 1:
combo[p2] -= 1
combo[p1] += 1
# d == 1; 5,5,4,3 --> 5,4,4,4
else:
combo[p2] -= 1
combo[p0] += 1
yield tuple(combo)
# abs(p2) == num_cells; 5,4,4,3
else:
ceiling -= 1
ceiling_changed = True
# Resume at make_combo_same_ceiling while
# and follow branch.
break
# 4,3,3,3 or 4,4,3,3
elif cd == 1:
if abs(p2) == num_cells:
return
p1 -= 1
p2 -= 1
if __name__ == '__main__':
print list(all_combos(ceiling=6, target_sum=12, num_cells=4))
|
Adding Cookie to SOAPpy Request
Question: I'm trying to send a SOAP request using SOAPpy as the client. I've found some
documentation stating how to add a cookie by extending SOAPpy.HTTPTransport,
but I can't seem to get it to work.
I tried to use the example
[here](http://code.activestate.com/recipes/444758/), but the server I'm trying
to send the request to started throwing 415 errors, so I'm trying to
accomplish this without using ClientCookie, or by figuring out why the server
is throwing 415's when I do use it. I suspect it might be because ClientCookie
uses urllib2 & http/1.1, whereas SOAPpy uses urllib & http/1.0
Does someone know how to make ClientCookie use http/1.0, if that is even the
problem, or a way to add a cookie to the SOAPpy headers without using
ClientCookie? I've tried this code with other services, and it only seems to throw
errors when sending requests to Microsoft servers.
I'm still finding my footing with python, so it could just be me doing
something dumb.
import sys, os, string
from SOAPpy import WSDL,HTTPTransport,Config,SOAPAddress,Types
import ClientCookie
Config.cookieJar = ClientCookie.MozillaCookieJar()
class CookieTransport(HTTPTransport):
def call(self, addr, data, namespace, soapaction = None, encoding = None,
http_proxy = None, config = Config):
if not isinstance(addr, SOAPAddress):
addr = SOAPAddress(addr, config)
cookie_cutter = ClientCookie.HTTPCookieProcessor(config.cookieJar)
hh = ClientCookie.HTTPHandler()
hh.set_http_debuglevel(1)
# TODO proxy support
opener = ClientCookie.build_opener(cookie_cutter, hh)
t = 'text/xml';
if encoding != None:
t += '; charset="%s"' % encoding
opener.addheaders = [("Content-Type", t),
("Cookie", "Username=foobar"), # ClientCookie should handle
("SOAPAction" , "%s" % (soapaction))]
response = opener.open(addr.proto + "://" + addr.host + addr.path, data)
data = response.read()
# get the new namespace
if namespace is None:
new_ns = None
else:
new_ns = self.getNS(namespace, data)
print '\n' * 4 , '-'*50
# return response payload
return data, new_ns
url = 'http://www.authorstream.com/Services/Test.asmx?WSDL'
proxy = WSDL.Proxy(url, transport=CookieTransport)
print proxy.GetList()
Answer: Error 415 (Unsupported Media Type) is caused by an incorrect Content-Type header.
Install httpfox for Firefox, or whatever tool you prefer (Wireshark, Charles or Fiddler),
to track what headers you are sending. Try Content-Type: application/xml.
...
t = 'application/xml';
if encoding != None:
t += '; charset="%s"' % encoding
...
If you are trying to send a file to the web server, use
Content-Type: application/x-www-form-urlencoded
|
In Python, Using pyodbc, How Do You Perform Transactions?
Question: I have a username which I must change in numerous (up to ~25) tables. (Yeah, I
know.) An atomic transaction seems to be the way to go for this sort of thing.
However, I do not know how to do this with pyodbc. I've seen various tutorials
on atomic transactions before, but have never used them.
The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used
pyodbc for single SQL statements, but no compound statements or transactions.
Best practices for SQL seem to suggest that creating a stored procedure is
excellent for this. My fears about doing a stored procedure are as follows, in
order of increasing importance: 1) I have never written a stored procedure. 2)
I heard that pyodbc does not return results from stored procedures as of yet.
3) This is most definitely Not My Database. It's vendor-supplied, vendor-
updated, and so forth.
So, what's the best way to go about this?
Answer: By its
[documentation](http://code.google.com/p/pyodbc/wiki/FAQs#Connecting%5Ffails%5Fwith%5Fan%5Ferror%5Fabout%5FSQL%5FATTR%5FAUTOCOMMIT),
pyodbc does support transactions, but only if the odbc driver supports it.
Furthermore, as pyodbc is compliant with [PEP
249](http://www.python.org/dev/peps/pep-0249/), data is stored only when a
manual commit is done.
However, it also supports an autocommit feature, and in that case you cannot have
any transactions.
You can set the autocommit mode when the connection is created:
cnxn = pyodbc.connect(cstring, autocommit=True)
or explicitly turn off the autocommit mode with
cnxn.autocommit = False
Note: you can get more information on the autocommit mode of pyodbc on its
[wiki](http://code.google.com/p/pyodbc/wiki/Features)
Once autocommit is turned off, you have to explicitly commit() the
transaction, or rollback() the entire transaction.
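As a minimal sketch of that pattern (using the same `cstring` as above; the table and column names, and `new_name`/`old_name`, are hypothetical stand-ins):

    cnxn = pyodbc.connect(cstring)          # autocommit is off by default
    cursor = cnxn.cursor()
    try:
        cursor.execute("UPDATE table_a SET username = ? WHERE username = ?", new_name, old_name)
        cursor.execute("UPDATE table_b SET username = ? WHERE username = ?", new_name, old_name)
        # ...repeat for the remaining tables...
        cnxn.commit()       # make all the changes permanent at once
    except Exception:
        cnxn.rollback()     # undo everything since the last commit
        raise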
|
Translating Perl to Python
Question: I found this Perl script while [migrating my SQLite database to
mysql](http://stackoverflow.com/questions/18671/quick-easy-way-to-migrate-
sqlite3-to-mysql/25860)
I was wondering (since I don't know Perl) how could one rewrite this in
Python?
Bonus points for the shortest (code) answer :)
**edit** : sorry I meant shortest code, not strictly shortest answer
#! /usr/bin/perl
while ($line = <>){
if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)){
if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/){
$name = $1;
$sub = $2;
$sub =~ s/\"//g; #"
$line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
}
elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/){
$line = "INSERT INTO $1$2\n";
$line =~ s/\"/\\\"/g; #"
$line =~ s/\"/\'/g; #"
}else{
$line =~ s/\'\'/\\\'/g; #'
}
$line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g; #'
$line =~ s/THIS_IS_TRUE/1/g;
$line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g; #'
$line =~ s/THIS_IS_FALSE/0/g;
$line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
print $line;
}
}
Some additional code was necessary to successfully migrate the sqlite database
(it handles one-line CREATE TABLE statements and foreign keys, and fixes a bug in the
original program that converted empty fields `''` to `\'`).
I [posted the code on the migrating my SQLite database to mysql
Question](http://stackoverflow.com/questions/18671/quick-easy-way-to-migrate-
sqlite3-to-mysql/1067365#1067365)
Answer: Here's a pretty literal translation with just the minimum of obvious style
changes (putting all code into a function, using string rather than re
operations where possible).
import re, fileinput
def main():
for line in fileinput.input():
process = False
for nope in ('BEGIN TRANSACTION','COMMIT',
'sqlite_sequence','CREATE UNIQUE INDEX'):
if nope in line: break
else:
process = True
if not process: continue
m = re.search('CREATE TABLE "([a-z_]*)"(.*)', line)
if m:
name, sub = m.groups()
line = '''DROP TABLE IF EXISTS %(name)s;
CREATE TABLE IF NOT EXISTS %(name)s%(sub)s
'''
line = line % dict(name=name, sub=sub)
else:
m = re.search('INSERT INTO "([a-z_]*)"(.*)', line)
if m:
line = 'INSERT INTO %s%s\n' % m.groups()
line = line.replace('"', r'\"')
line = line.replace('"', "'")
line = re.sub(r"([^'])'t'(.)", r"\1THIS_IS_TRUE\2", line)
line = line.replace('THIS_IS_TRUE', '1')
line = re.sub(r"([^'])'f'(.)", r"\1THIS_IS_FALSE\2", line)
line = line.replace('THIS_IS_FALSE', '0')
line = line.replace('AUTOINCREMENT', 'AUTO_INCREMENT')
print line,
main()
|
Get rid of toplevel tk panewindow while usong tkMessageBox
Question: [link text](http://stackoverflow.com/questions/1052420/tkkinter-message-box)
When I do :
tkMessageBox.askquestion(title="Symbol Display",message="Is the symbol visible on the console")
along with the Symbol Display window, an empty tk root window also appears.
If I press "Yes", the child window returns yes, whereas the tk window remains
there.
Whenever I try to close the tk window, an "End Program - tk" prompt comes up; on
pushing the "End Now" button, a "pythonw.exe" window appears asking whether to
send an error report.
Why is this so? How can I keep the tk window from popping up without affecting my
script execution?
Answer: The _trick_ is to invoke withdraw on the Tk root top-level:
>>> import tkMessageBox, Tkinter
>>> Tkinter.Tk().withdraw()
>>> tkMessageBox.askquestion(
... title="Symbol Display",
... message="Is the symbol visible on the console")
|
Using SimpleXMLTreeBuilder in elementtree
Question: I have been developing an application with django and elementtree, and while
deploying it to the production server I found out it is running python
2.4. I have been able to bundle elementtree, but now I am getting the error:
"No module named expat; use SimpleXMLTreeBuilder instead"
Unfortunately I cannot upgrade python, so I'm stuck with what I've got. How do I
use SimpleXMLTreeBuilder as the parser, and/or will I need to rewrite code?
Answer: If you have third party module that wants to use ElementTree (and
XMLTreeBuilder by dependency) you can change ElementTree's XMLTreeBuilder
definition to the one provided by SimpleXMLTreeBuilder like so:
from xml.etree import ElementTree # part of python distribution
from elementtree import SimpleXMLTreeBuilder # part of your codebase
ElementTree.XMLTreeBuilder = SimpleXMLTreeBuilder.TreeBuilder
Now ElementTree will always use the SimpleXMLTreeBuilder whenever it's called.
See also: <http://groups.google.com/group/google-
appengine/browse_thread/thread/b7399a91c9525c97>
|
Implementing a custom Python authentication handler
Question: The answer to a [previous
question](http://stackoverflow.com/questions/1080179/handling-authentication-
and-proxy-servers-with-httplib2) showed that Nexus implement a [custom
authentication helper](http://svn.sonatype.org/nexus/tags/nexus-1.3.4/nexus-
clients/nexus-rest-client-
java/src/main/java/org/sonatype/nexus/client/rest/HttpNxBasicHelper.java)
called "NxBASIC".
How do I begin to implement a handler in python?
* * *
Update:
Implementing the handler per Alex's suggestion looks to be the right approach,
but fails trying to extract the scheme and realm from the authreq. The
returned value for authreq is:
str: NxBASIC realm="Sonatype Nexus Repository Manager API""
AbstractBasicAuthHandler.rx.search(authreq) is only returning a single tuple:
tuple: ('NxBASIC', '"', 'Sonatype Nexus Repository Manager API')
so scheme,realm = mo.groups() fails. From my limited regex knowledge it looks
like the standard regex from AbstractBasicAuthHandler should match scheme and
realm, but it seems not to.
The regex is:
rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+'
'realm=(["\'])(.*?)\\2', re.I)
* * *
Update 2: From inspection of AbstractBasicAuthHandler, the default processing
is to do:
scheme, quote, realm = mo.groups()
Changing to this works. I now just need to set the password against the
correct realm. Thanks Alex!
Answer: If, as described, name and description are the only differences between this
"NxBasic" and good old "Basic", then you could essentially copy-paste-edit
some code from urllib2.py (which unfortunately doesn't expose the scheme name
as easily overridable in itself), as follows (see
[urllib2.py](http://svn.python.org/view/python/trunk/Lib/urllib2.py?revision=72880&view=markup)'s
online sources):
    import base64
    import urllib2
class HTTPNxBasicAuthHandler(urllib2.HTTPBasicAuthHandler):
def http_error_auth_reqed(self, authreq, host, req, headers):
# host may be an authority (without userinfo) or a URL with an
# authority
# XXX could be multiple headers
authreq = headers.get(authreq, None)
if authreq:
                mo = urllib2.AbstractBasicAuthHandler.rx.search(authreq)
if mo:
                    scheme, quote, realm = mo.groups()
if scheme.lower() == 'nxbasic':
return self.retry_http_basic_auth(host, req, realm)
def retry_http_basic_auth(self, host, req, realm):
user, pw = self.passwd.find_user_password(realm, host)
if pw is not None:
raw = "%s:%s" % (user, pw)
auth = 'NxBasic %s' % base64.b64encode(raw).strip()
if req.headers.get(self.auth_header, None) == auth:
return None
req.add_header(self.auth_header, auth)
return self.parent.open(req)
else:
return None
As you can see by inspection, I've just changed two strings from "Basic" to
"NxBasic" (and the lowercase equivalents) from what's in urrlib2.py (in the
abstract basic auth handler superclass of the http basic auth handler class).
Try using this version -- and if it's still not working, at least having it be
your code can help you add print/logging statements, breakpoints, etc, to
better understand what's breaking and how. Best of luck! (Sorry I can't help
further but I don't have any Nexus around to experiment with).
|
How would you adblock using Python?
Question: I'm slowly building a [web
browser](http://github.com/regomodo/qtBrowser/tree/master) in PyQt4 and like
the speed i'm getting out of it. However, I want to combine easylist.txt with
it. I believe adblock uses this to block http requests by the browser.
How would you go about it using python/PyQt4?
[edit1] OK. I think I've set up Privoxy. I haven't set up any additional filters
and it seems to work. The PyQt4 code I've tried looks like this:
    self.proxyIP = "127.0.0.1"
self.proxyPORT= 8118
proxy = QNetworkProxy()
proxy.setType(QNetworkProxy.HttpProxy)
proxy.setHostName(self.proxyIP)
proxy.setPort(self.proxyPORT)
    QNetworkProxy.setApplicationProxy(proxy)
However, this does absolutely nothing and I cannot make sense of the docs and
can not find any examples.
[edit2] I've just noticed that if I change self.proxyIP to my actual local IP
rather than 127.0.0.1, the page doesn't load. So something is happening.
Answer: I know this is an old question, but I thought I'd try giving an answer for
anyone who happens to stumble upon it. You could create a subclass of
QNetworkAccessManager and combine it with
<https://github.com/atereshkin/abpy>. Something kind of like this:
    from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest
    from PyQt4.QtCore import QUrl
from abpy import Filter
adblockFilter = Filter(file("easylist.txt"))
class MyNetworkAccessManager(QNetworkAccessManager):
def createRequest(self, op, request, device=None):
url = request.url().toString()
doFilter = adblockFilter.match(url)
if doFilter:
return QNetworkAccessManager.createRequest(self, self.GetOperation, QNetworkRequest(QUrl()))
else:
                return QNetworkAccessManager.createRequest(self, op, request, device)
myNetworkAccessManager = MyNetworkAccessManager()
After that, set the following on all your QWebView instances, or make a
subclass of QWebView:
QWebView.page().setNetworkAccessManager(myNetworkAccessManager)
Hope this helps!
|
Running JSON through Python's eval()?
Question: Best practices aside, is there a compelling reason **not** to do this?
I'm writing a post-commit hook for use with a Google Code project, which
provides commit data via a JSON object. GC provides an HMAC authentication
token along with the request (outside the JSON data), so by validating that
token I gain high confidence that the JSON data is both benign (as there's
little point in distrusting Google) and valid.
My own (brief) investigations suggest that JSON happens to be completely valid
Python, with the exception of the `"\/"` escape sequence — which GC doesn't
appear to generate.
So, as I'm working with Python 2.4 (i.e. no `json` module), `eval()` is
looking _really_ tempting.
**Edit:** For the record, I am very much **not** asking if this is a good
idea. I'm _quite_ aware that it isn't, and I very much doubt I'll ever use
this technique for any future projects even if I end up using it for this one.
I just wanted to make sure that I know what kind of trouble I'll run into if I
do. :-)
Answer: If you're comfortable with your script working fine for a while, and then
randomly failing on some obscure edge case, I would go with eval.
If it's important that your code be robust, I would take the time to add
simplejson. You don't need the C portion for speedups, so it really shouldn't
be hard to dump a few .py files into a directory somewhere.
As an example of something that might bite you, JSON uses Unicode and
simplejson returns Unicode, whereas eval returns str:
>>> simplejson.loads('{"a":1, "b":2}')
{u'a': 1, u'b': 2}
>>> eval('{"a":1, "b":2}')
{'a': 1, 'b': 2}
Edit: a better example of where eval() behaves differently:
>>> simplejson.loads('{"X": "\uabcd"}')
{u'X': u'\uabcd'}
>>> eval('{"X": "\uabcd"}')
{'X': '\\uabcd'}
>>> simplejson.loads('{"X": "\uabcd"}') == eval('{"X": "\uabcd"}')
False
Edit 2: saw yet another problem today pointed out by SilentGhost: eval doesn't
handle true -> True, false -> False, null -> None correctly.
>>> simplejson.loads('[false, true, null]')
[False, True, None]
>>> eval('[false, true, null]')
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'false' is not defined
>>>
|
How do I use TLS with asyncore?
Question: An asyncore-based XMPP client opens a normal TCP connection to an XMPP server.
The server indicates it requires an encrypted connection. The client is now
expected to start a TLS handshake so that subsequent requests can be
encrypted.
[tlslite](http://trevp.net/tlslite/readme.txt) integrates with asyncore, but
the sample code is for a server (?) and I don't understand what it's doing.
I'm on Python 2.5. How can I get the TLS magic working?
* * *
Here's what ended up working for me:
from tlslite.api import *
def handshakeTls(self):
"""
Encrypt the socket using the tlslite module
"""
self.logger.info("activating TLS encrpytion")
self.socket = TLSConnection(self.socket)
self.socket.handshakeClientCert()
Answer: Definitely check out twisted and wokkel. I've been building tons of xmpp bots
and components with it and it's a dream.
|
Python remove all lines which have common value in fields
Question: I have lines of data comprising of 4 fields
aaaa bbb1 cccc dddd
aaaa bbb2 cccc dddd
aaaa bbb3 cccc eeee
aaaa bbb4 cccc ffff
aaaa bbb5 cccc gggg
aaaa bbb6 cccc dddd
Please bear with me.
The first and third fields are always the same - but I don't need them; the 4th
field can be the same or different. The thing is, I only want the 2nd and 4th
fields from lines whose 4th field is not shared with any other line. For example,
from the above data I want:
bbb3 eeee
bbb4 ffff
bbb5 gggg
Now I don't mean deduplication as that would leave one of the entries in. If
the 4th field shares a value with another line, I don't want any line which
ever had that value.
humblest apologies once again for asking what is probably simple.
Answer: Here you go:
from collections import defaultdict
LINES = """\
aaaa bbb1 cccc dddd
aaaa bbb2 cccc dddd
aaaa bbb3 cccc eeee
aaaa bbb4 cccc ffff
aaaa bbb5 cccc gggg
aaaa bbb6 cccc dddd""".split('\n')
# Count how many lines each unique value of the fourth field appears in.
d_counts = defaultdict(int)
for line in LINES:
a, b, c, d = line.split()
d_counts[d] += 1
# Print only those lines with a unique value for the fourth field.
for line in LINES:
a, b, c, d = line.split()
if d_counts[d] == 1:
print b, d
# Prints
# bbb3 eeee
# bbb4 ffff
# bbb5 gggg
|
How can I parse marked up text for further processing?
Question: **See updated input and output data at Edit-1.**
What I am trying to accomplish is turning
+ 1
+ 1.1
+ 1.1.1
- 1.1.1.1
- 1.1.1.2
+ 1.2
- 1.2.1
- 1.2.2
- 1.3
+ 2
- 3
into a python data structure such as
[{'1': [{'1.1': {'1.1.1': ['1.1.1.1', '1.1.1.2']}, '1.2': ['1.2.1', '1.2.2']}, '1.3'], '2': {}}, ['3',]]
I've looked at many different wiki markup languages, markdown, restructured
text, etc but they are all extremely complicated for me to understand how it
works since they must cover a large amount of tags and syntax (I would only
need the "list" parts of most of these but converted to python instead of html
of course.)
I've also taken a look at tokenizers, lexers and parsers but again they are
much more complicated than I need and that I can understand.
I have no idea where to begin and would appreciate any help possible on this
subject. Thanks
**Edit-1** : Yes, the character at the beginning of the line matters. From the
required output, before and now, it can be seen that **`*`** denotes
a root node with children, **+** has children, and **-** has no
children (root or otherwise) and is just extra information pertaining to that
node. The **`*`** is not important and can be interchanged with **+** (I can
get root status other ways.)
Therefore the new requirement would be using only **`*`** to denote a node
with or without children and **-** cannot have children. I've also changed it
so the key isn't the text after the **`*`** since that will no doubt change
later to an actual title.
For example
* 1
* 1.1
* 1.2
- Note for 1.2
* 2
* 3
- Note for root
would give
[{'title': '1', 'children': [{'title': '1.1', 'children': []}, {'title': '1.2', 'children': []}]}, {'title': '2', 'children': [], 'notes': ['Note for 1.2', ]}, {'title': '3', 'children': []}, 'Note for root']
Or if you have another idea to represent the outline in python then bring it
forward.
Answer: **Edit** : thanks to the clarification and change in the spec I've edited my
code, still using an explicit `Node` class as an intermediate step for clarity
-- the logic is to turn the list of lines into a list of nodes, then turn that
list of nodes into a tree (by using their indent attribute appropriately),
then print that tree in a readable form (this is just a "debug-help" step, to
check the tree is well constructed, and can of course get commented out in the
final version of the script -- which, just as of course, will take the lines
from a file rather than having them hardcoded for debugging!-), finally build
the desired Python structure and print it. Here's the code, and as we'll see
after that the result is _almost_ as the OP specifies with one exception --
but, the code first:
import sys
class Node(object):
def __init__(self, title, indent):
self.title = title
self.indent = indent
self.children = []
self.notes = []
self.parent = None
def __repr__(self):
return 'Node(%s, %s, %r, %s)' % (
self.indent, self.parent, self.title, self.notes)
def aspython(self):
result = dict(title=self.title, children=topython(self.children))
if self.notes:
result['notes'] = self.notes
return result
def print_tree(node):
print ' ' * node.indent, node.title
for subnode in node.children:
print_tree(subnode)
for note in node.notes:
print ' ' * node.indent, 'Note:', note
def topython(nodelist):
return [node.aspython() for node in nodelist]
def lines_to_tree(lines):
nodes = []
for line in lines:
indent = len(line) - len(line.lstrip())
marker, body = line.strip().split(None, 1)
if marker == '*':
nodes.append(Node(body, indent))
elif marker == '-':
nodes[-1].notes.append(body)
else:
print>>sys.stderr, "Invalid marker %r" % marker
tree = Node('', -1)
curr = tree
for node in nodes:
while node.indent <= curr.indent:
curr = curr.parent
node.parent = curr
curr.children.append(node)
curr = node
return tree
data = """\
* 1
* 1.1
* 1.2
- Note for 1.2
* 2
* 3
- Note for root
""".splitlines()
def main():
tree = lines_to_tree(data)
print_tree(tree)
print
alist = topython(tree.children)
print alist
if __name__ == '__main__':
main()
When run, this emits:
1
1.1
1.2
Note: 1.2
2
3
Note: 3
[{'children': [{'children': [], 'title': '1.1'}, {'notes': ['Note for 1.2'], 'children': [], 'title': '1.2'}], 'title': '1'}, {'children': [], 'title': '2'}, {'notes': ['Note for root'], 'children': [], 'title': '3'}]
Apart from the ordering of keys (which is immaterial and not guaranteed in a
dict, of course), this is _almost_ as requested -- except that here **all**
notes appear as dict entries with a key of `notes` and a value that's a list
of strings (but the notes entry is omitted if the list would be empty, roughly
as done in the example in the question).
In the current version of the question, how to represent the notes is slightly
unclear; one note appears as a stand-alone string, others as entries whose
value is a string (instead of a list of strings as I'm using). It's not clear
what's supposed to imply that the note must appear as a stand-alone string in
one case and as a dict entry in all others, so this scheme I'm using is more
regular; and if a note (if any) is a single string rather than a list, would
that mean it's an error if more than one note appears for a node? In the
latter regard, this scheme I'm using is more general (lets a node have any
number of notes from 0 up, instead of just 0 or 1 as apparently implied in the
question).
Having written so much code (the pre-edit answer was about as long and helped
clarify and change the specs) to provide (I hope) 99% of the desired solution,
I hope this satisfies the original poster, since the last few tweaks to code
and/or specs to make them match each other should be easy for him to do!
|
Finding partial strings in a list of strings - python
Question: I am trying to check if a user is a member of an Active Directory group, and I
have this:
ldap.set_option(ldap.OPT_REFERRALS, 0)
try:
con = ldap.initialize(LDAP_URL)
con.simple_bind_s(userid+"@"+ad_settings.AD_DNS_NAME, password)
ADUser = con.search_ext_s(ad_settings.AD_SEARCH_DN, ldap.SCOPE_SUBTREE, \
"sAMAccountName=%s" % userid, ad_settings.AD_SEARCH_FIELDS)[0][1]
except ldap.LDAPError:
return None
`ADUser` returns a list of strings:
{'givenName': ['xxxxx'],
'mail': ['xxxxx@example.com'],
'memberOf': ['CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
'CN=group2,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
'CN=group3,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
'CN=group4,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'],
'sAMAccountName': ['myloginid'],
'sn': ['Xxxxxxxx']}
Of course in the real world the group names are verbose and of varied
structure, and users will belong to tens or hundreds of groups.
If I get the list of groups out as `ADUser.get('memberOf')[0]`, what is the
best way to check if any members of a separate list exist in the main list?
For example, the check list would be `['group2', 'group16']` and I want to get
a true/false answer as to whether _any_ of the smaller list exist in the main
list.
Answer: If the format example you give is somewhat reliable, something like:
import re
grps = re.compile(r'CN=(\w+)').findall
def anyof(short_group_list, adu):
all_groups_of_user = set(g for gs in adu.get('memberOf',()) for g in grps(gs))
return sorted(all_groups_of_user.intersection(short_group_list))
where you pass your list such as `['group2', 'group16']` as the first
argument, your `ADUser` dict as the second argument; this returns an
alphabetically sorted list (possibly empty, meaning "none") of the groups,
among those in `short_group_list`, to which the user belongs.
It's probably not much faster to just a bool, but, if you insist, changing the
second statement of the function to:
return any(g for g in short_group_list if g in all_groups_of_user)
might possibly save a certain amount of time in the "true" case (since `any`
short-circuits) though I suspect not in the "false" case (where the whole list
must be traversed anyway). If you care about the performance issue, best is to
benchmark both possibilities on data that's realistic for your use case!
If performance isn't yet good enough (and a bool yes/no is sufficient, as you
say), try reversing the looping logic:
def anyof_v2(short_group_list, adu):
gset = set(short_group_list)
return any(g for gs in adu.get('memberOf',()) for g in grps(gs) if g in gset)
`any`'s short-circuit abilities might prove more useful here (at least in the
"true" case, again -- because, again, there's no way to give a "false" result
without examining ALL the possibilities anyway!-).
|
Can python mechanize handle HTTP auth?
Question: Mechanize (Python) is failing with a 401 error when I try to open HTTP digest
URLs. I googled and tried debugging, but with no success.
My code looks like this.
import mechanize
project = "test"
baseurl = "http://trac.somewhere.net"
loginurl = "%s/%s/login" % (baseurl, project)
b = mechanize.Browser()
b.add_password(baseurl, "user", "secret", "some Realm")
b.open(loginurl)
Answer: Mechanize claims that the parameters should be uri, username and password, but
you have passed four parameters. Four parameters are correct for
urllib2's add_password, but then the first parameter should be the realm, not
the uri.
<http://wwwsearch.sourceforge.net/mechanize/>
I'd try to change that first.
Does trac require digest? If not, a next step could be to try using basic auth
as a test to see if that works, since you can add that with just a header:
import base64
from mechanize import Browser
browser = Browser()
browser.addheaders.append(('Authorization', 'Basic %s' % base64.encodestring('%s:%s' % (user, pwd))))
|
IRC Python Bot: Best Way
Question: I want to build a bot that basically does the following:
1. Listens to the room and interacts with users and encourages them to PM the bot.
2. Once a user has PMed the bot, engage with them using various AI techniques.
Should I just use the IRC library or Sockets in python or do I need more of a
bot framework.
What would you do?
Thanks!
Here is the code I'm currently using, however, I haven't gotten it to work.
#!/usr/bin/python
import socket
network = 'holmes.freenet.net'
port = 6667
irc = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
irc.connect ( ( network, port ) )
irc.send ( 'NICK PyIRC\r\n' )
irc.send ( 'USER PyIRC PyIRC PyIRC :Python IRC\r\n' )
irc.send ( 'JOIN #pyirc\r\n' )
irc.send ( 'PRIVMSG #pyirc :Can you hear me?\r\n' )
irc.send ( 'PART #pyirc\r\n' )
irc.send ( 'QUIT\r\n' )
irc.close()
Answer: Use [Twisted](http://twistedmatrix.com) or
[Asynchat](http://docs.python.org/library/asynchat.html) if you want to have a
sane design. It is possible to just do it with sockets but why bother doing it
from scratch?
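To give an idea of what the Twisted route looks like, here is a rough, untested sketch of a bot that nudges the channel and replies to private messages (the server and channel names are just examples):

    from twisted.words.protocols import irc
    from twisted.internet import reactor, protocol

    class ChatBot(irc.IRCClient):
        nickname = "PyIRC"

        def signedOn(self):
            self.join(self.factory.channel)

        def privmsg(self, user, channel, msg):
            nick = user.split('!', 1)[0]
            if channel == self.nickname:
                # Private message: this is where the AI logic would go.
                self.msg(nick, "You said: " + msg)
            else:
                # Channel message: encourage the user to PM the bot.
                self.msg(channel, "%s: send me a private message!" % nick)

    class ChatBotFactory(protocol.ClientFactory):
        protocol = ChatBot

        def __init__(self, channel):
            self.channel = channel

    reactor.connectTCP("irc.freenode.net", 6667, ChatBotFactory("#pyirc"))
    reactor.run()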
|
Tkinter: AttributeError: NoneType object has no attribute get
Question: I have seen a couple of other posts on a similar error message but couldn't find
a solution which would fix it in my case.
I dabbled a bit with Tkinter and created a very simple UI. The code follows:
from string import *
from Tkinter import *
import tkMessageBox
root=Tk()
vid = IntVar()
def grabText(event):
if entryBox.get().strip()=="":
tkMessageBox.showerror("Error", "Please enter text")
else:
print entryBox.get().strip()
root.title("My Sample")
root.maxsize(width=550, height=200)
root.minsize(width=550, height=200)
root.resizable(width=NO, height=NO)
label=Label(root, text = "Enter text:").grid(row=2,column=0,sticky=W)
entryBox=Entry(root,width=60).grid(row=2, column=1,sticky=W)
grabBtn=Button(root, text="Grab")
grabBtn.grid(row=8, column=1)
grabBtn.bind('<Button-1>', grabText)
root.mainloop()
I get the UI up and running. When I click on the `Grab` button, I get the
following error on the console:
C:\Python25>python.exe myFiles\testBed.py
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python25\lib\lib-tk\Tkinter.py", line 1403, in __call__
return self.func(*args)
File "myFiles\testBed.py", line 10, in grabText
if entryBox.get().strip()=="":
AttributeError: 'NoneType' object has no attribute 'get'
The error traces back to `Tkinter.py`.
I'm sure some one might have dealt with this before. Any help is appreciated.
Answer: The `grid` (and `pack`, and `place`) function of the `Entry` object (and of
all other widgets) returns `None`. In python when you do `a().b()`, the result
of the expression is whatever `b()` returns, therefore `Entry(...).grid(...)`
will return `None`.
You should split that onto two lines, like this:
entryBox = Entry(root, width=60)
entryBox.grid(row=2, column=1, sticky=W)
That way, you get your `Entry` reference stored in `entryBox`, and it's laid
out like you expect. This has a bonus side effect of making your layout easier
to understand and maintain, if you collect all of your `grid` and/or `pack`
statements in blocks.
|
Find functions explicitly defined in a module (python)
Question: Ok I know you can use the dir() method to list everything in a module, but is
there any way to see only the functions that are defined in that module? For
example, assume my module looks like this:
from datetime import date, datetime
def test():
return "This is a real method"
Even if I use the inspect module to filter out the builtins, I'm still left with
anything that was imported. E.g. I'll see:
['date', 'datetime', 'test']
Is there any way to exclude imports? Or another way to find out what's defined
in a module?
Answer: Are you looking for something like this?
import sys, inspect
def is_mod_function(mod, func):
return inspect.isfunction(func) and inspect.getmodule(func) == mod
def list_functions(mod):
return [func.__name__ for func in mod.__dict__.itervalues()
if is_mod_function(mod, func)]
print 'functions in current module:\n', list_functions(sys.modules[__name__])
print 'functions in inspect module:\n', list_functions(inspect)
EDIT: Changed variable names from 'meth' to 'func' to avoid confusion (we're
dealing with functions, not methods, here).
|
Python Lambda Problems
Question: What's going on here? I'm trying to create a list of functions:
def f(a,b):
return a*b
funcs = []
for i in range(0,10):
funcs.append(lambda x:f(i,x))
This isn't doing what I expect. I would expect the list to act like this:
funcs[3](3) = 9
funcs[0](5) = 0
But all the functions in the list seem to be identical, and be setting the
fixed value to be 9:
funcs[3](3) = 27
funcs[3](1) = 9
funcs[2](6) = 54
Any ideas?
Answer: Lambdas in Python are closures: the free variable `i` is not looked up until the
lambda is actually called. At that time, i = 9 regardless of which lambda you call,
because your iteration has already finished.
The behavior you're looking for can be achieved with functools.partial:
import functools
def f(a,b):
return a*b
funcs = []
for i in range(0,10):
funcs.append(functools.partial(f,i))
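Another common workaround, for what it's worth, is to bind the current value of `i` as a default argument, since default values are evaluated when the lambda is defined rather than when it is called:

    def f(a, b):
        return a * b

    funcs = []
    for i in range(0, 10):
        # i=i freezes the current value of i for this particular lambda
        funcs.append(lambda x, i=i: f(i, x))

    print funcs[3](3)   # 9
    print funcs[0](5)   # 0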
|
List of installed fonts OS X / C
Question: I'm trying to programmatically get a list of installed fonts in C or Python. I
need to be able to do this on OS X; does anyone know how?
Answer: Python with PyObjC installed (which is the case for Mac OS X 10.5+, so this
code will work without having to install anything):
import Cocoa
manager = Cocoa.NSFontManager.sharedFontManager()
font_families = list(manager.availableFontFamilies())
(based on htw's answer)
|
Is it possible to pass a variable out of a pdb session into the original interactive session?
Question: I am using pdb to examine a script having called `run -d` in an ipython
session. It would be useful to be able to plot some of the variables but I
need them in the main ipython environment in order to do that.
So what I am looking for is some way to make a variable available back in the
main interactive session after I quit pdb. If you set a variable in the
topmost frame it does seem to be there in the ipython session, but this
doesn't work for any frames further down.
Something like `export` in the following:
ipdb> myvar = [1,2,3]
ipdb> p myvar
[1, 2, 3]
ipdb> export myvar
ipdb> q
In [66]: myvar
Out[66]: [1, 2, 3]
Answer: Per ipython's [docs](http://ipython.scipy.org/moin/Documentation), and also a
`run?` command from the ipython prompt,
> after execution, the IPython interactive namespace gets updated with all
> variables defined in the program (except for `__name__` and sys.argv)
By "defined in the program" (a slightly sloppy use of terms), it doesn't mean
"anywhere within any nested functions found there" -- it means "in the
`globals()` of the script/module you're `run`ning. If you're within any kind
of nesting, `globals()['myvar'] = [1,2,3]` should still work fine, just like
your hoped-for `export` would if it existed.
**Edit** : If you're in a different module, you need to set the name in the
globals of your original one -- after an `import sys` if needed,
`sys.modules["originalmodule"].myvar = [1, 2, 3]` will do what you desire.
|
PyQt: event is not triggered, what's wrong with my code?
Question: I'm a Python newbie and I'm trying to write a trivial app with an event
handler that gets activated when an item in a custom QTreeWidget is clicked.
For some reason it doesn't work. Since I'm only at the beginning of learning
it, I can't figure out what I'm doing wrong. Here is the code:
#!/usr/bin/env python
import sys
from PyQt4.QtCore import SIGNAL
from PyQt4.QtGui import QApplication
from PyQt4.QtGui import QMainWindow
from PyQt4.QtGui import QTreeWidget
from PyQt4.QtGui import QTreeWidgetItem
class MyTreeItem(QTreeWidgetItem):
def __init__(self, s, parent = None):
super(MyTreeItem, self).__init__(parent, [s])
class MyTree(QTreeWidget):
def __init__(self, parent = None):
super(MyTree, self).__init__(parent)
self.setMinimumWidth(200)
self.setMinimumHeight(200)
for s in ['foo', 'bar']:
MyTreeItem(s, self)
self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*, column)'), self.onClick)
def onClick(self, item, column):
print item
class MainWindow(QMainWindow):
def __init__(self, parent = None):
super(MainWindow, self).__init__(parent)
self.tree = MyTree(self)
def main():
app = QApplication(sys.argv)
win = MainWindow()
win.show()
app.exec_()
if __name__ == '__main__':
main()
My initial goal is to make MyTree.onClick() print something when I click a
tree item (and have access to the clicked item in this handler).
Answer: You should have said
self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*, int)'), self.onClick)
Notice it says **int** rather than _column_ in the first argument to `SIGNAL`.
You also only need to do the `connect` call once for the tree widget, not once
for each node in the tree.
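If your PyQt4 is 4.5 or newer, the new-style signal syntax sidesteps the signature string entirely; a sketch of the changed line (everything else stays the same):

    # inside MyTree.__init__, replacing the self.connect(...) call:
    self.itemClicked.connect(self.onClick)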
|
How can I find path to given file?
Question: I have a file, for example "something.exe", and I want to find the path to this
file.
How can I do this in Python?
Answer: Perhaps `os.path.abspath()` would do it:
import os
print os.path.abspath("something.exe")
If your `something.exe` is not in the current directory, you can pass any
relative path and `abspath()` will resolve it.
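If instead you need to search for the file because you don't know which directory it lives in, a minimal sketch using `os.walk` would be (the starting directory here is just an assumption):
import os

def find_file(name, start="C:\\"):
    # walk the directory tree and return the full path of the first match
    for root, dirs, files in os.walk(start):
        if name in files:
            return os.path.join(root, name)
    return None

print find_file("something.exe")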
|
Is it possible to go into ipython from code?
Question: For my debugging needs, `pdb` is pretty good. However, it would be _much_
cooler (and helpful) if I could go into `ipython`. Is this thing possible?
Answer: There is an `ipdb` project which embeds iPython into the standard pdb, so you
can just do:
import ipdb; ipdb.set_trace()
It's installable via the usual `easy_install ipdb`.
`ipdb` is pretty short, so instead of easy_installing you can also create a
file `ipdb.py` somewhere on your Python path and paste the following into the
file:
import sys
from IPython.Debugger import Pdb
from IPython.Shell import IPShell
from IPython import ipapi
shell = IPShell(argv=[''])
def set_trace():
ip = ipapi.get()
def_colors = ip.options.colors
Pdb(def_colors).set_trace(sys._getframe().f_back)
|
How can I make setuptools ignore subversion inventory?
Question: When packaging a Python package with a setup.py that uses the setuptools:
from setuptools import setup
...
the source distribution created by:
python setup.py sdist
not only includes, as usual, the files specified in MANIFEST.in, but it also,
gratuitously, includes all of the files that Subversion lists as being version
controlled beneath the package directory. This is vastly annoying. Not only
does it make it difficult to exercise any sort of explicit control over what
files get distributed with my package, but it means that when I build my
package following an "svn export" instead of an "svn checkout", the contents
of my package might be quite different, since without the .svn metadata
setuptools will make different choices about what to include.
My question: how can I turn off this terrible behavior, so that "setuptools"
treats my project the same way whether I'm using Subversion, or version
control it's never heard of, or a bare tree created with "svn export" that
I've created at the end of my project to make sure it builds cleanly somewhere
besides my working directory?
The best I have managed so far is an ugly monkey-patch:
from setuptools.command import sdist
del sdist.finders[:]
But this is Python, not the jungle, so of course I want a better solution that
involves no monkeys at all. How can I tame setuptools, turn off its magic, and
have it behave sensibly by looking at the visible, predictable rules in my
MANIFEST.in instead?
Answer: I know you know much of this, Brandon, but I'll try to give as a complete
answer as I can (although I'm no setuptools guru) for the benefit of others.
The problem here is that setuptools itself involves quite a lot of black
magic, including an entry point called setuptools.file_finders where
you can add plugins to find files to include. I am, however, at a complete
loss as to how to REMOVE plugins from it...
* Quick workaround: svn export your package to a temporary directory and run the setup.py from there. That means you have no svn, so the svn finder finds no files to include. :)
* Longer workaround: Do you really need setuptools? Setuptools has a lot of features, so the answer is likely yes, but mainly those features are dependencies (so your dependencies get installed by easy_install), namespace packages (foo.bar), and entry points. Namespace packages can actually be created without setuptools as well. But if you use none of these you might actually get away with just using distutils.
* Ugly workaround: The monkeypatch you gave to sdist in your question, which simply makes the plugin not have any finders, and exit quickly.
So as you see, this answer, although as complete as I can make it, is still
embarrassingly incomplete. I can't actually answer your question, though I
think the answer is "You can't".
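For the record, the quick workaround from the first bullet could look something like this on the command line (the paths are placeholders):
svn export . /tmp/mypackage-export
cd /tmp/mypackage-export
python setup.py sdist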
|
easy, programmable data plotting
Question: I spend most of my time plotting data, but unfortunately I haven't found a
decent solution for my plotting needs. At the moment, the most powerful and
pleasant library I found that performs plotting is matplotlib. The results are
stunning, but I mostly spend my time fighting with the library when trying to
do simple things like having an arrow as I want. Similar programs like R and
gnuplot produce visually less appealing results, and they are not GUI based.
On the other hand, programs like xmgrace (or better) allow direct manipulation
of the plotted objects and direct feedback, but they fail on two important
points:
1. if my dataset (normally stored in csv files) changes for some reason, I have to reimport it and perform the manipulations again, by hand
2. once I obtain a nice plot setup, the only way I have to recreate the plot is to use a graphical, interactive program. I would like to have the possibility to run a command line utility on my csv files and get the .pdf as a result, with no human intervention.
I still have to find something that provides me both worlds, and it has an
affordable price. Ideally, I would need an interactive GUI program (a la
Origin) to generate matplotlib-based python scripts.
Does anyone have any hints on software that could address my needs on OSX
(preferably) or Linux ?
Answer: You may want to check out [Igor Pro](http://www.wavemetrics.com/). It's quite
old, and quirky but it provides the most advanced plotting system I've found
yet on the Mac. You can modify anything graphically, at a command line or in
script files. The most powerful feature (IMO) is the ability to automatically
generate a script to recreate a figure or to use a figure to create a script
that generates figures _like_ (in style etc.) a particular figure. I use Igor
for all publication figures I produce.
Data is stored in "waves" (translation: vectors) which encapsulate data and
information about the delta between data points (e.g. time step). Figures
reference waves as their data source. When you update a wave (e.g. by re-
importing a CSV file and specifying that the data overwrite specific waves),
all figures that reference that wave are automatically updated.
You can create "layouts" which are page-layouts containing multiple graphs.
These layouts are also automatically updated whenever any of the figures in
the layout are updated (see above). You can add drawing/text/annotations to
either graphs or layouts.
Be warned: Igor Pro's scripting language is something like the bastard child
of VB and Matlab. It makes my eyes bleed. It makes me pray to whatever God
that the pain just end. **But** the entire system is so powerful that it's
worth it.
|
MySQL db problem in Python
Question: For me, MySQLdb has been successfully installed in my system. I verified through
the following code that it is successfully installed without any errors.
C:\Python26>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
>>>
But when I import MySQLdb in my script, it gives **No module named
MySQLdb.**
Kindly let me know the problem and the solution.
I am using Python 2.6 and MySQL 4.0.3 on Windows XP.
Thanks in advance...
Answer: 1) Try using your package manager to download **[python-
mysql](http://www.novell.com/products/linuxpackages/server10/i386/python-
mysql.html)** which includes MySQLdb.
2) Ensure `/usr/lib/python2.4/site-packages/` is in your
[PYTHONPATH](http://docs.python.org/using/cmdline.html#envvar-PYTHONPATH),
e.g.:
>>> import sys
>>> from pprint import pprint
>>> pprint(sys.path)
['',
'/usr/lib/python2.4',
'/usr/lib/python2.4/plat-linux2',
'/usr/lib/python2.4/lib-tk',
'/usr/lib/python2.4/site-packages']
3) You seem to be using the correct capitalization in your example, but it
bears mentioning that the module name is case-sensitive, i.e. MySQLdb
(correct) != mysqldb (incorrect).
**Edit** : Looks like nilamo has found the problem. As mentioned in a comment:
you might be running your script with Python 2.6, but MySQLdb is installed in
2.4's site-packages directory.
|
python + Spreadsheet
Question: Can anybody please tell me whether there is any way to connect to a spreadsheet
from Python? I want to store some data from a form and submit it to a Google
spreadsheet. Please help on this issue. What steps do I have to follow?
Thanks in advance...
Answer: The easiest way to connect to Google Spreadsheet is by using this [spreadsheet
library](http://burnash.github.com/gspread). These are the steps you need to
follow:
import gspread
# Login with your Google account
gc = gspread.login('account@gmail.com','password')
# Spreadsheets can be opened by their title in Google Docs
spreadsheet = gc.open("_YOUR_TARGET_SPREADSHEET_")
# Select worksheet by index
worksheet = spreadsheet.get_worksheet(0)
# Update cell with your form value
worksheet.update_cell(1, 2, form_value_1)
|
Create NTFS junction point in Python
Question: Is there a way to create an NTFS junction point in Python? I know I can call
the `junction` utility, but it would be better not to rely on external tools.
Answer: You can use the Python win32 API modules, e.g.
import win32file
win32file.CreateSymbolicLink(srcDir, targetDir, 1)
see
<http://docs.activestate.com/activepython/2.5/pywin32/win32file__CreateSymbolicLink_meth.html>
for more details
if you do not want to rely on that either, you can always use ctypes and directly
call the CreateSymbolicLink win32 API, which is anyway a simple call.
Here is an example call using ctypes (note the raw strings, so the backslashes are
not treated as escape sequences; the first argument is the link to create, the
second is the target, and the flag 1 means the target is a directory):
import ctypes
kdll = ctypes.windll.LoadLibrary("kernel32.dll")
kdll.CreateSymbolicLinkA(r"d:\testdir_link", r"d:\testdir", 1)
[MSDN](http://msdn.microsoft.com/en-us/library/aa363866%28VS.85%29.aspx) says
Minimum supported client Windows Vista
|
Hacking JavaScript Array Into JSON With Python
Question: I am fetching a .js file from a remote site that contains data I want to
process as JSON using the simplejson library on my Google App Engine site. The
.js file looks like this:
var txns = [
{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]
I have no control over the format of this file. What I did at first just to
hack through it was to chop the `"var txns = "` bit off of the string and then
do a series of `.replace(old, new, [count])` on the string until it looked
like standard JSON:
cleanJSON = malformedJSON.replace("'", '"').replace('apples:', '"apples":').replace('oranges:', '"oranges":').replace('type:', '"type":').replace('{', '{"transaction":{').replace('}', '}}')
So that it now looks like:
[{ "transaction" : { "apples": "100", "oranges": "20", "type": "SELL"} },
{ "transaction" : { "apples": "200", "oranges": "10", "type": "BUY"} }]
How would you tackle this formatting issue? Is there a known way (library,
script) to format a JavaScript array into JSON notation?
Answer: It's not too difficult to write your own little parser for that using
[PyParsing](http://pyparsing.wikispaces.com/).
import json
from pyparsing import *
data = """var txns = [
{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]"""
def js_grammar():
key = Word(alphas).setResultsName("key")
value = QuotedString("'").setResultsName("value")
pair = Group(key + Literal(":").suppress() + value)
object_ = nestedExpr("{", "}", delimitedList(pair, ","))
array = nestedExpr("[", "]", delimitedList(object_, ","))
return array + StringEnd()
JS_GRAMMAR = js_grammar()
def parse(js):
return JS_GRAMMAR.parseString(js[len("var txns = "):])[0]
def to_dict(object_):
return dict((p.key, p.value) for p in object_)
result = [
{"transaction": to_dict(object_)}
for object_ in parse(data)]
print json.dumps(result)
This is going to print
[{"transaction": {"type": "SELL", "apples": "100", "oranges": "20"}},
{"transaction": {"type": "BUY", "apples": "200", "oranges": "10"}}]
You could also add the assignment to the grammar itself. That said, given that
there are already off-the-shelf parsers for this kind of input, you may be
better off using one of those.
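For instance, if pulling in a dependency is acceptable: this particular input appears to be valid YAML flow syntax (unquoted keys and single-quoted strings are both allowed there), so PyYAML might be able to do the heavy lifting. A sketch, assuming the `var txns = ` prefix is always present:
import json
import yaml

def parse_js(js):
    # strip the JavaScript assignment and let PyYAML parse the rest
    parsed = yaml.safe_load(js[len("var txns = "):])
    return [{"transaction": obj} for obj in parsed]

print json.dumps(parse_js(data))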
|
Python Printing StdOut As It Received
Question: I'm trying to wrap a simple (Windows) command line tool up in a PyQt GUI
app that I am writing. The problem I have is that the command line tool throws
its progress out to stdout (it's a server reset command, so you get
"Attempting to stop" and "Restarting" type output).
What I am trying to do is capture the output so I can display it as part of my
app. I assumed it would be quite simple to do something like the following :
import os
import subprocess as sub
cmd = "COMMAND LINE APP NAME -ARGS"
proc = sub.Popen(cmd, shell=True, stdout=sub.PIPE).stdout
while 1:
line = proc.readline()
if not line:
break
print line
This partially works in that I do get the contents of stdout, but instead of
receiving the progress messages as they are sent, I get everything at once when
the command line application exits; it seems to flush stdout in one go.
Is there a simple answer?
Answer: Interactive communication through stdin/stdout is a common problem.
You're in luck though, with PyQt you can use QProcess, as described here:
[http://diotavelli.net/PyQtWiki/Capturing_Output_from_a_Process](http://diotavelli.net/PyQtWiki/Capturing%5FOutput%5Ffrom%5Fa%5FProcess)
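A minimal sketch of that approach (the command string and slot name are placeholders, and new-style signal connections are assumed):
from PyQt4 import QtCore

class Runner(QtCore.QObject):
    def __init__(self, parent=None):
        QtCore.QObject.__init__(self, parent)
        self.proc = QtCore.QProcess(self)
        # emitted every time the child process writes to stdout
        self.proc.readyReadStandardOutput.connect(self.on_output)
        self.proc.start("COMMAND LINE APP NAME -ARGS")

    def on_output(self):
        text = str(self.proc.readAllStandardOutput())
        print text  # or append it to a QTextEdit, a label, etc.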
|
Is it possible to fetch a https page via an authenticating proxy with urllib2 in Python 2.5?
Question: I'm trying to add authenticating proxy support to an existing script, as it is
the script connects to a https url (with urllib2.Request and urllib2.urlopen),
scrapes the page and performs some actions based on what it has found.
Initially I had hoped this would be as easy as simply adding a
urllib2.ProxyHandler({"http": MY_PROXY}) as an arg to urllib2.build_opener
which in turn is passed to urllib2.install_opener. Unfortunately this doesn't
seem to work when attempting to do a urllib2.Request(ANY_HTTPS_PAGE). Googling
around lends me to believe that the proxy support in urllib2 in python 2.5
does not support https urls. This surprised me to say the least.
There appear to be solutions floating around the web, for example
<http://bugs.python.org/issue1424152> contains a patch for `urllib2` and
`httplib` which purports to solve the issue (when I tried it the issue I began
to get the following error instead: `urllib2.URLError: <urlopen error (1,
'error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol')>`).
There is a cookbook recipe here <http://code.activestate.com/recipes/456195>
which I am planning to try next. All in all though I'm surprised this isn't
supported "out of the box", which makes me wonder if I'm simply missing out on
an obvious solution, so in short — has anyone got a simple method for
fetching https pages using an authenticating proxy with urllib2 in Python 2.5?
Ideally this would work:
import urllib2
#perhaps the dictionary below needs a corresponding "https" entry?
#That doesn't seem to work out of the box.
proxy_handler = urllib2.ProxyHandler({"http": "http://user:pass@myproxy:port"})
urllib2.install_opener( urllib2.build_opener( urllib2.HTTPHandler,
urllib2.HTTPSHandler,
proxy_handler ))
request = urllib2.Request(A_HTTPS_URL)
response = urllib2.urlopen( request)
print response.read()
Many Thanks
Answer: You may want to look into [httplib2](http://code.google.com/p/httplib2/). One
of the [examples](http://code.google.com/p/httplib2/wiki/Examples) claims
support for SOCKS proxies if the [socks](http://socksipy.sourceforge.net/)
module is installed.
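For reference, the proxy setup with httplib2 looks roughly like the sketch below; treat the exact keyword arguments as assumptions taken from the httplib2/socksipy examples rather than something verified against your proxy:
import httplib2
import socks

proxy_info = httplib2.ProxyInfo(
    proxy_type=socks.PROXY_TYPE_HTTP,
    proxy_host='myproxy',
    proxy_port=8080,
    proxy_user='user',
    proxy_pass='pass')

h = httplib2.Http(proxy_info=proxy_info)
response, content = h.request("https://example.com/")
print content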
|
Listing builtin functions and methods (Python)
Question: I have come up with this:
[a for a in dir(__builtins__) if str(type(getattr(__builtins__,a))) == "<type 'builtin_function_or_method'>"]
I know its ugly. Can you show me a better/more pythonic way of doing this?
Answer: There is the [`inspect` module](http://docs.python.org/library/inspect.html):
import inspect
filter(inspect.isbuiltin, (member for name, member in inspect.getmembers(__builtins__)))
Edit: reading the documentation a little more closely, I came up with this
variant that doesn't use `getattr`
import inspect
members = (member for name, member in inspect.getmembers(__builtins__))
filter(inspect.isbuiltin, members)
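Since `inspect.getmembers` also accepts a predicate, the same thing can be written even more compactly:
import inspect
# a list of (name, function) pairs for every builtin function/method
builtins = inspect.getmembers(__builtins__, inspect.isbuiltin)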
|
Composable Regexp in Python
Question: Often, I would like to build up complex regexps from simpler ones. The only
way I'm currently aware of doing this is through string operations, e.g.:
Year = r'[12]\d{3}'
Month = r'Jan|Feb|Mar'
Day = r'\d{2}'
HourMins = r'\d{2}:\d{2}'
Date = r'%s %s, %s, %s' % (Month, Day, Year, HourMins)
DateR = re.compile(Date)
Is anybody aware of a different method or a more systematic approach (maybe a
module) in Python to have composable regexps? I'd rather compile each regexp
individually (e.g. for using individual compile options), but then there
doesn't seem to be a way of composing them anymore!?
Answer: You can use Python's formatting syntax for this:
types = {
"year": r'[12]\d{3}',
"month": r'(Jan|Feb|Mar)',
"day": r'\d{2}',
"hourmins": r'\d{2}:\d{2}',
}
import re
Date = r'%(month)s %(day)s, %(year)s, %(hourmins)s' % types
DateR = re.compile(Date)
(Note the added grouping around Jan|Feb|Mar.)
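A nice side effect of building the pattern from a dictionary is that you can also wrap each piece in a named group and pull the parts back out after matching; a small sketch along the same lines:
Date = (r'(?P<month>%(month)s) (?P<day>%(day)s), '
        r'(?P<year>%(year)s), (?P<hourmins>%(hourmins)s)') % types
DateR = re.compile(Date)

m = DateR.match('Feb 03, 2009, 12:30')
if m:
    print m.groupdict()  # {'month': 'Feb', 'day': '03', 'year': '2009', 'hourmins': '12:30'}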
|
How to use a custom site-package using pth-files for Python 2.6?
Question: I'm trying to setup a custom site-package directory (Python 2.6 on Windows
Vista). For example the directory should be '~\lib\python2.6' (
C:\Users\wierob\lib\python2.6). Hence calling 'setup.py install' should copy
packages to C:\Users\wierob\lib\python2.6.
Following the instructions
[here](http://peak.telecommunity.com/DevCenter/EasyInstall#administrator-
installation):
I've created a pth-file in site-packages directory of the Python installation
(C:\Python26\Lib\site-packages). This file contains a single line:
import os, site; site.addsitedir(os.path.expanduser('~/lib/python2.6'))
Additionally I have a pydistutils.cfg my home directory (C:\Users\wierob) that
contains:
[install]
install_lib = ~/lib/python2.6
install_scripts = ~/bin
When I run 'setup.py install' I get the following error message:
C:\Users\wierob\Documents\Python\workspace\rsreader>setup.py install
running install
Checking .pth file support in C:\Users\wierob\lib\python2.6\
C:\Python26\pythonw.exe -E -c pass
TEST FAILED: C:\Users\wierob\lib\python2.6\ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
C:\Users\wierob\lib\python2.6\
So it seems that the pth-file does not work. Although, if I enter
site.addsitedir(os.path.expanduser('~/lib/python2.6'))
in an interactive python session the directory is succesfully added to
sys.path.
Any ideas? Thanks.
Answer: The pth-file seems to be ignored if encoded in UTF-8 with BOM.
Saving the pth-file in ANSI or UTF-8 without BOM works.
|
Need assistance with wxPython (newbie)
Question: I need to create what I _think_ should be a simple GUI. I have very little
experience with building GUI's. I'm a visual learner and 'wxPython In Action'
isn't helping me out. I don't learn well by books written by Ph.D.'s. I'm
using Python 2.6. Many of the examples on the Internet don't work in Python
2.6.
I need to create a GUI with 3 columns and some buttons on the bottom.
On the first pass, each of the columns will be just multi-line text input.
I've created a GUI that did have 3 columns using 3 panels but I couldn't get
the multi-line text input to fill the entire panel. I tried with boxsizer and
flexgridsizer with one panel but again, I couldn't get the multi-line text
input to fill the entire column.
Somewhere, I saw an example of almost exactly what I was looking for but I
either didn't bookmark it or it was in an example and I forget where it is.
This example had 3 columns where each of the columns could be width adjusted
like in a spreadsheet.
I've been at this for quite a few days and I haven't made any progress. What
I'm looking for is something akin to a Sashwindow but with 3 columns.
I've tried multiple panels, boxsizers with flexgridsizers but no luck. I've
gone through all of the wxPython demos and nothing comes close. Perhaps
because what I looking for is too simple and not worthy of a demo. Some of the
columns in the real program will use selectable lists and grids but first I
need to start with the simplistic possible case.
Can anyone provide a minimalistic program that shows 3 columns with multi-line
text input filling the entire column? I'll figure out how to add the buttons
on the bottom.
Thank you,
Answer: You should take a look at [wxGlade](http://wxglade.sourceforge.net/). It's a
handy little GUI builder you can use to create your UI. After that, you can
also look at the code it generates and go from there.
**Edit:** Okay, here goes:
In wxGlade, create a new frame. Add a horizontal sizer with three slots. Add a
TextCtrl to the first slot. On the Layout page for the text control, check
wxEXPAND and set Proportion to 1; on the Widget page, check wxTE_MULTILINE.
Copy the text control to the clipboard and paste it into the two remaining
slots.
Here's the code that wxGlade generates:
#!/usr/bin/env python
# -*- coding: iso-8859-15 -*-
# generated by wxGlade 0.6.3 on Tue Jul 21 20:00:54 2009
import wx
# begin wxGlade: extracode
# end wxGlade
class MyFrame(wx.Frame):
def __init__(self, *args, **kwds):
# begin wxGlade: MyFrame.__init__
kwds["style"] = wx.DEFAULT_FRAME_STYLE
wx.Frame.__init__(self, *args, **kwds)
self.text_ctrl_1 = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE)
self.text_ctrl_1_copy = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE)
self.text_ctrl_1_copy_1 = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE)
self.__set_properties()
self.__do_layout()
# end wxGlade
def __set_properties(self):
# begin wxGlade: MyFrame.__set_properties
self.SetTitle("frame_1")
# end wxGlade
def __do_layout(self):
# begin wxGlade: MyFrame.__do_layout
sizer_1 = wx.BoxSizer(wx.VERTICAL)
sizer_2 = wx.BoxSizer(wx.HORIZONTAL)
sizer_2.Add(self.text_ctrl_1, 1, wx.EXPAND, 0)
sizer_2.Add(self.text_ctrl_1_copy, 1, wx.EXPAND, 0)
sizer_2.Add(self.text_ctrl_1_copy_1, 1, wx.EXPAND, 0)
sizer_1.Add(sizer_2, 1, wx.EXPAND, 0)
self.SetSizer(sizer_1)
sizer_1.Fit(self)
self.Layout()
# end wxGlade
# end of class MyFrame
if __name__ == "__main__":
app = wx.PySimpleApp(0)
wx.InitAllImageHandlers()
frame_1 = MyFrame(None, -1, "")
app.SetTopWindow(frame_1)
frame_1.Show()
app.MainLoop()
Hope that helps :-)
|
how do I read everything currently in a subprocess.stdout pipe and then return?
Question: I'm using python's subprocess module to interact with a program via the stdin
and stdout pipes. If I call the subprocess's readline() on stdout, it hangs
because it is waiting for a newline.
How can I do a read of all the characters in the stdout pipe of a subprocess
instance? If it matters, I'm running in Linux.
Answer: Someone else appears to have had the same problem, you can see the related
discussion [here](http://stackoverflow.com/questions/375427/non-blocking-read-
on-a-stream-in-python).
If you are running on Linux you can use select to wait for input on the
process' stdout. Alternatively you change the mode of the process' stdout to
non-blocking using
import fcntl, os
fcntl.fcntl(your_process.stdout, fcntl.F_SETFL, os.O_NONBLOCK)
after which you can loop using read() until you encounter a newline character
(if you want to process the output one line at a time).
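A rough sketch of the `select`-based variant (the one-second timeout is arbitrary):
import os
import select

while True:
    # wait up to 1 second for data to become available on the child's stdout
    ready, _, _ = select.select([your_process.stdout], [], [], 1.0)
    if ready:
        chunk = os.read(your_process.stdout.fileno(), 4096)
        if not chunk:  # EOF: the child closed its end of the pipe
            break
        print chunk,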
|
Why can't I import this Zope component in a Python 2.4 virtualenv?
Question: I'm trying to install Plone 3.3rc4 with plone.app.blob and repoze but nothing
I've tried has worked so far. For one attempt I've pip-installed repoze.zope2,
Plone, and plone.app.blob into a virtualenv. I have [this version of
DocumentTemplate](http://svn.zope.org/Zope/trunk/lib/python/DocumentTemplate/?rev=96249)
in the virtualenv's site-packages directory and I'm trying to get it running
in RHEL5.
For some reason when I try to run `paster serve etc/zope2.ini` in this
environment way Python gives the message `ImportError: No module named
DT_Util`? `DT_Util.py` exists in the directory, `__init__.py` is there too,
and the C module it depends on is there. I suspect there's some circular
dependency or failure when importing the C extension. Of course this module
would work in a normal Zope install...
>>> import DocumentTemplate
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "DocumentTemplate/__init__.py", line 21, in ?
File ".../lib/python2.4/site-packages/DocumentTemplate/DocumentTemplate.py", line 112, in ?
from DT_String import String, File
File ".../lib/python2.4/site-packages/DocumentTemplate/DT_String.py", line 19, in ?
from DocumentTemplate.DT_Util import ParseError, InstanceDict
ImportError: No module named DT_Util
Answer: I must say I doubt DocumentTemplate from Zope will work standalone. You are
welcome to try though. :-)
Note that [DT_Util imports C
extensions](https://github.com/zopefoundation/DocumentTemplate/blob/master/src/DocumentTemplate/DT_Util.py#L32-L34):
from DocumentTemplate.cDocumentTemplate import InstanceDict, TemplateDict
from DocumentTemplate.cDocumentTemplate import render_blocks, safe_callable
from DocumentTemplate.cDocumentTemplate import join_unicode
You'll need to make sure those are compiled. My guess is that importing the
`cDocumentTemplate` module fails and thus the import of `DT_Util` fails.
|
python docstrings
Question: OK, so I decided to learn Python (Perl, C, C++, Java, Objective-C, Ruby and a
bit of Erlang and Scala under my belt), and I keep on getting the following
error when I try executing this:
Tue Jul 21{stevenhirsch@steven-hirschs-macbook-pro-2}/projects/python:-->./apache_logs.py
File "./apache_logs.py", line 17
print __doc__
^
SyntaxError: invalid syntax
#!/usr/local/bin/python
"""
USAGE:
apache_logs.py
"""
import sys
import os
if __name__ == "__main__":
if not len(sys.argv) > 1:
print __doc__
sys.exit(1)
infile_name = sys.argv[1]
I know it must be something really stupid but I've googled and read the
documentation without finding anything. The docs all seem to state that what
I've coded should work.
Many thanks in advance for your help!!
Answer: What version of Python do you have? In Python 3, [`print` was changed to work
like a function](http://docs.python.org/3.1/whatsnew/3.0.html#print-is-a-
function) rather than a statement, i.e. `print('Hello World')` instead of
`print 'Hello World'`
I can recommend you to keep using Python 2.6 unless you're doing some brand
new production development. Python 3 is still pretty new.
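If you do want the same file to run under both 2.6 and 3.x, you can opt into the function form explicitly (a small sketch; the `__future__` import has to be the first statement after the docstring):
from __future__ import print_function
print(__doc__)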
|
Image resizing with django?
Question: I'm new to Django (and Python) and I have been trying to work out a few things
myself, before jumping into using other people's apps. I'm having trouble
understanding where things 'fit' in the Django (or Python's) way of doing
things. What I'm trying to work out is how to resize an image, once it's been
uploaded. I have my model setup nicely and plugged into the admin, and the
image uploads fine to the directory:
from django.db import models
# This is to list all the countries
# For starters though, this will be just United Kingdom (GB)
class Country(models.Model):
name = models.CharField(max_length=120, help_text="Full name of country")
code = models.CharField(max_length=2, help_text="This is the ISO 3166 2-letter country code (see: http://www.theodora.com/country_digraphs.html)")
flag = models.ImageField(upload_to="images/uploaded/country/", max_length=150, help_text="The flag image of the country.", blank=True)
class Meta:
verbose_name_plural = "Countries"
def __unicode__(self):
return self.name
The thing I'm now having trouble with is taking that file and making a new
file into a thumbnail. Like I say, I'd like to know how to do it without using
others' apps (for now). I have got this code from DjangoSnippets:
from PIL import Image
import os.path
import StringIO
def thumbnail(filename, size=(50, 50), output_filename=None):
image = Image.open(filename)
if image.mode not in ('L', 'RGB'):
image = image.convert('RGB')
image = image.resize(size, Image.ANTIALIAS)
# get the thumbnail data in memory.
if not output_filename:
output_filename = get_default_thumbnail_filename(filename)
image.save(output_filename, image.format)
return output_filename
def thumbnail_string(buf, size=(50, 50)):
f = StringIO.StringIO(buf)
image = Image.open(f)
if image.mode not in ('L', 'RGB'):
image = image.convert('RGB')
image = image.resize(size, Image.ANTIALIAS)
o = StringIO.StringIO()
image.save(o, "JPEG")
return o.getvalue()
def get_default_thumbnail_filename(filename):
path, ext = os.path.splitext(filename)
return path + '.thumb.jpg'
...but this has ultimately confused me... As I don't know how this 'fits in'
to my Django app? And really, is it the best solution for simply making a
thumbnail of an image that has been successfully uploaded? Can anyone possibly
show me a good, solid, decent way that a beginner like me can learn to do this
properly? As in, knowing where to put that sort of code (models.py? forms.py?
...) and how it would work in context? ... I just need a bit of help
understanding and working this problem out.
Thank you!
Answer: If it's OK for you, there is a Django application ready, doing exactly what
you want: <https://github.com/sorl/sorl-thumbnail>
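If you would rather wire the snippet above in yourself, a common place for it is the model's `save()` method in models.py, so the thumbnail gets (re)generated whenever a flag image is saved. A rough sketch, assuming the `thumbnail()` helper is importable and the default file-system storage is used (so `self.flag.path` is available):
# in models.py, extending the Country model shown above
class Country(models.Model):
    # ... existing fields (name, code, flag) ...

    def save(self, *args, **kwargs):
        # write the uploaded file to disk first, then derive the thumbnail
        super(Country, self).save(*args, **kwargs)
        if self.flag:
            # creates e.g. images/uploaded/country/foo.thumb.jpg next to the original
            thumbnail(self.flag.path, size=(50, 50))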
|
Randomness in Jython
Question: When using (pseudo) random numbers in Jython, would it be more efficient to
use the Python random module or Java's random class?
Answer: Python's version is much faster in a simple test on my Mac:
jython -m timeit -s "import random" "random.random()"
1000000 loops, best of 3: 0.266 usec per loop
vs
jython -m timeit -s "import java.util.Random; random=java.util.Random()" "random.nextDouble()"
1000000 loops, best of 3: 1.65 usec per loop
Jython version 2.5b3 and Java version 1.5.0_19.
|
Iterative find/replace from a list of tuples in Python
Question: I have a list of tuples, each containing a find/replace value that I would
like to apply to a string. What would be the most efficient way to do so? I
will be applying this iteratively, so performance is my biggest concern.
More concretely, what would the innards of processThis() look like?
x = 'find1, find2, find3'
y = [('find1', 'replace1'), ('find2', 'replace2'), ('find3', 'replace3')]
def processThis(str,lst):
# Do something here
return something
>>> processThis(x,y)
'replace1, replace2, replace3'
Thanks, all!
Answer: You could consider using `re.sub`:
import re
REPLACEMENTS = dict([('find1', 'replace1'),
('find2', 'replace2'),
('find3', 'replace3')])
def replacer(m):
return REPLACEMENTS[m.group(0)]
x = 'find1, find2, find3'
r = re.compile('|'.join(REPLACEMENTS.keys()))
print r.sub(replacer, x)
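One caveat: if any of the search strings could contain regex metacharacters, escape them when building the pattern, e.g.:
r = re.compile('|'.join(re.escape(k) for k in REPLACEMENTS))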
|
How to exit from Python without traceback?
Question: I would like to know how to exit from Python without having a traceback
dump on the output.
I still want to be able to return an error code but I do not want to
display the traceback log.
I want to be able to exit using `exit(number)` without trace but in case of an
Exception (not an exit) I want the trace.
Answer: You are presumably encountering an exception and the program is exiting
because of this (with a traceback). The first thing to do therefore is to
catch that exception, before exiting cleanly (maybe with a message, example
given).
Try something like this in your `main` routine:
import sys, traceback
def main():
try:
        pass  # do main program stuff here ...
except KeyboardInterrupt:
print "Shutdown requested...exiting"
except Exception:
traceback.print_exc(file=sys.stdout)
sys.exit(0)
if __name__ == "__main__":
main()
|
Good or bad practice in Python: import in the middle of a file
Question: Suppose I have a relatively long module, but need an external module or method
only once.
Is it considered OK to import that method or module in the middle of the
module?
Or should `import`s only be in the first part of the module.
Example:
import string, pythis, pythat
...
...
...
...
def func():
blah
blah
blah
from pysomething import foo
foo()
etc
etc
etc
...
...
...
**Please justify your answer and add links
to [PEP](http://en.wikipedia.org/wiki/Python_%28programming_language%29#Development)s
or relevant sources**
Answer: [PEP 8](http://www.python.org/dev/peps/pep-0008/) authoritatively states:
> Imports are always put at the top of the file, just after any module
> comments and docstrings, and before module globals and constants.
PEP 8 should be the basis of any "in-house" style guide, since it summarizes
what the core Python team has found to be the most effective style, overall
(and with individual dissent of course, as on any other language, but
consensus and the BDFL agree on PEP 8).
|
Hello World from cython wiki not working
Question: I'm trying to follow this tutorial from Cython:
<http://docs.cython.org/docs/tutorial.html#the-basics-of-cython> and I'm
having a problem.
The files are very simple. I have a helloworld.pyx:
print "Hello World"
and a setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
cmdclass = {'build_ext': build_ext},
ext_modules = [Extension("helloworld", ["helloworld.pyx"])]
)
and I compile it with the standard command:
python setup.py build_ext --inplace
I got the following error:
running build
running build_ext
building 'helloworld' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c helloworld.c -o build/temp.linux-x86_64-2.6/helloworld.o
helloworld.c:4:20: error: Python.h: No such file or directory
helloworld.c:5:26: error: structmember.h: No such file or directory
helloworld.c:34: error: expected specifier-qualifier-list before ‘PyObject’
helloworld.c:121: error: expected specifier-qualifier-list before ‘PyObject’
helloworld.c:139: error: expected ‘)’ before ‘*’ token
helloworld.c:140: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsLongLong’
helloworld.c:141: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsUnsignedLongLong’
helloworld.c:142: error: expected ‘)’ before ‘*’ token
helloworld.c:147: error: expected ‘)’ before ‘*’ token
helloworld.c:148: error: expected ‘)’ before ‘*’ token
helloworld.c:149: error: expected ‘)’ before ‘*’ token
helloworld.c:150: error: expected ‘)’ before ‘*’ token
helloworld.c:151: error: expected ‘)’ before ‘*’ token
helloworld.c:152: error: expected ‘)’ before ‘*’ token
helloworld.c:153: error: expected ‘)’ before ‘*’ token
helloworld.c:154: error: expected ‘)’ before ‘*’ token
helloworld.c:155: error: expected ‘)’ before ‘*’ token
helloworld.c:156: error: expected ‘)’ before ‘*’ token
helloworld.c:157: error: expected ‘)’ before ‘*’ token
helloworld.c:172: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
helloworld.c:173: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
helloworld.c:174: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
helloworld.c:181: error: expected ‘)’ before ‘*’ token
helloworld.c:198: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
helloworld.c:200: error: array type has incomplete element type
helloworld.c:221: error: ‘__pyx_kp_1’ undeclared here (not in a function)
helloworld.c:221: warning: excess elements in struct initializer
helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’)
helloworld.c:221: warning: excess elements in struct initializer
helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’)
helloworld.c:221: warning: excess elements in struct initializer
helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’)
helloworld.c:221: warning: excess elements in struct initializer
helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’)
helloworld.c:221: warning: excess elements in struct initializer
helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’)
helloworld.c:221: warning: excess elements in struct initializer
helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’)
helloworld.c:222: warning: excess elements in struct initializer
helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’)
helloworld.c:222: warning: excess elements in struct initializer
helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’)
helloworld.c:222: warning: excess elements in struct initializer
helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’)
helloworld.c:222: warning: excess elements in struct initializer
helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’)
helloworld.c:222: warning: excess elements in struct initializer
helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’)
helloworld.c:222: warning: excess elements in struct initializer
helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’)
helloworld.c:237: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘inithelloworld’
helloworld.c:238: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘inithelloworld’
helloworld.c:305: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token
helloworld.c:313: error: expected ‘)’ before ‘*’ token
helloworld.c:379:21: error: compile.h: No such file or directory
helloworld.c:380:25: error: frameobject.h: No such file or directory
helloworld.c:381:23: error: traceback.h: No such file or directory
helloworld.c: In function ‘__Pyx_AddTraceback’:
helloworld.c:384: error: ‘PyObject’ undeclared (first use in this function)
helloworld.c:384: error: (Each undeclared identifier is reported only once
helloworld.c:384: error: for each function it appears in.)
helloworld.c:384: error: ‘py_srcfile’ undeclared (first use in this function)
helloworld.c:385: error: ‘py_funcname’ undeclared (first use in this function)
helloworld.c:386: error: ‘py_globals’ undeclared (first use in this function)
helloworld.c:387: error: ‘empty_string’ undeclared (first use in this function)
helloworld.c:388: error: ‘PyCodeObject’ undeclared (first use in this function)
helloworld.c:388: error: ‘py_code’ undeclared (first use in this function)
helloworld.c:389: error: ‘PyFrameObject’ undeclared (first use in this function)
helloworld.c:389: error: ‘py_frame’ undeclared (first use in this function)
helloworld.c:392: warning: implicit declaration of function ‘PyString_FromString’
helloworld.c:399: warning: implicit declaration of function ‘PyString_FromFormat’
helloworld.c:412: warning: implicit declaration of function ‘PyModule_GetDict’
helloworld.c:412: error: ‘__pyx_m’ undeclared (first use in this function)
helloworld.c:415: warning: implicit declaration of function ‘PyString_FromStringAndSize’
helloworld.c:420: warning: implicit declaration of function ‘PyCode_New’
helloworld.c:429: error: ‘__pyx_empty_tuple’ undeclared (first use in this function)
helloworld.c:440: warning: implicit declaration of function ‘PyFrame_New’
helloworld.c:441: warning: implicit declaration of function ‘PyThreadState_GET’
helloworld.c:448: warning: implicit declaration of function ‘PyTraceBack_Here’
helloworld.c:450: warning: implicit declaration of function ‘Py_XDECREF’
helloworld.c: In function ‘__Pyx_InitStrings’:
helloworld.c:458: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’
helloworld.c:460: error: ‘__Pyx_StringTabEntry’ has no member named ‘is_unicode’
helloworld.c:460: error: ‘__Pyx_StringTabEntry’ has no member named ‘is_identifier’
helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’
helloworld.c:461: warning: implicit declaration of function ‘PyUnicode_DecodeUTF8’
helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’
helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘n’
helloworld.c:461: error: ‘NULL’ undeclared (first use in this function)
helloworld.c:462: error: ‘__Pyx_StringTabEntry’ has no member named ‘intern’
helloworld.c:463: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’
helloworld.c:463: warning: implicit declaration of function ‘PyString_InternFromString’
helloworld.c:463: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’
helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’
helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’
helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘n’
helloworld.c:476: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’
helloworld.c: At top level:
helloworld.c:485: error: expected ‘)’ before ‘*’ token
helloworld.c:494: error: expected ‘)’ before ‘*’ token
helloworld.c:500: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsLongLong’
helloworld.c:516: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsUnsignedLongLong’
helloworld.c:538: error: expected ‘)’ before ‘*’ token
helloworld.c:553: error: expected ‘)’ before ‘*’ token
helloworld.c:568: error: expected ‘)’ before ‘*’ token
helloworld.c:583: error: expected ‘)’ before ‘*’ token
helloworld.c:598: error: expected ‘)’ before ‘*’ token
helloworld.c:613: error: expected ‘)’ before ‘*’ token
helloworld.c:628: error: expected ‘)’ before ‘*’ token
helloworld.c:643: error: expected ‘)’ before ‘*’ token
helloworld.c:658: error: expected ‘)’ before ‘*’ token
helloworld.c:673: error: expected ‘)’ before ‘*’ token
helloworld.c:688: error: expected ‘)’ before ‘*’ token
error: command 'gcc' failed with exit status 1
I have python and cython installed from Ubuntu 9.04 repositories. I can't
figure why the compiler can't find Python.h.
I tried doing:
cython helloworld.pyx
and then compiling the result manually with gcc:
gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I/usr/include/python2.5 -o helloworld.so helloworld.c
and got the same exact error message.
Any clues?
Answer: Looks like you're missing some package like `python-dev` or the like -- Debian
and derivatives (including Ubuntu) have long preferred to isolate everything
that could possibly be of "developer"'s use from the parts of a package that
are for "everybody"... a philosophical stance I could debate against (and
_have_ debated against, without much practical success, in many fora), but one
that, sadly, can't just be ignored:-(
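On Ubuntu the headers normally come from the python-dev package (python2.6-dev for this setup), so something along these lines should be enough before re-running the build:
sudo apt-get install python-dev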
|
python stdout flush and tee
Question: The following code ends with a broken pipe when piped into tee, but behaves
correctly when not piped:
#!/usr/bin/python
import sys
def testfun():
while 1:
try :
s = sys.stdin.readline()
except(KeyboardInterrupt) :
print('Ctrl-C pressed')
sys.stdout.flush()
return
print s
if __name__ == "__main__":
testfun()
sys.exit()
Expected output :
./bug.py
Ctrl-C pressed
What is observed when piped into tee is either a broken pipe or no output at
all, i.e. nothing on tee's stdout, and nothing in bug.log:
./bug.py | tee bug.log
Traceback (most recent call last):
File "./bug.py", line 14, in <module>
sys.stdout.flush()
IOError: [Errno 32] Broken pipe
What can be the reason for this ?
Answer: Nope, hitting Ctrl-C does NOT terminate both processes. It terminates the tee
process only; the end of the tee process closes the pipe between your script
and tee, and hence your script dies with the broken pipe message.
To fix that, tee has the -i option, which makes it ignore the interrupt
signal, so it keeps running and your script gets to handle the Ctrl-C itself.
try: man tee
./bug.py
^CCtrl-C pressed
./bug.py | tee log
^CTraceback (most recent call last):
File "./bug.py", line 14, in <module>
testfun()
File "./bug.py", line 9, in testfun
sys.stdout.flush()
IOError: [Errno 32] Broken pipe
./bug.py | tee -i log
^CCtrl-C pressed
|
combine javascript files at deployment in python
Question: I'm trying to reduce the number of scripts included in our website and we use
buildout to handle deployments. Has anybody successfully implemented a method
of combining and compressing scripts with buildout?
Answer: Here's a Python script I made that I use with all my heavy JavaScript
projects. I'm using YUICompressor, but you can change the code to use another
compressor.
import os, os.path, shutil
YUI_COMPRESSOR = 'yuicompressor-2.4.2.jar'
def compress(in_files, out_file, in_type='js', verbose=False,
temp_file='.temp'):
temp = open(temp_file, 'w')
for f in in_files:
fh = open(f)
data = fh.read() + '\n'
fh.close()
temp.write(data)
print ' + %s' % f
temp.close()
options = ['-o "%s"' % out_file,
'--type %s' % in_type]
if verbose:
options.append('-v')
os.system('java -jar "%s" %s "%s"' % (YUI_COMPRESSOR,
' '.join(options),
temp_file))
org_size = os.path.getsize(temp_file)
new_size = os.path.getsize(out_file)
print '=> %s' % out_file
print 'Original: %.2f kB' % (org_size / 1024.0)
print 'Compressed: %.2f kB' % (new_size / 1024.0)
print 'Reduction: %.1f%%' % (float(org_size - new_size) / org_size * 100)
print ''
#os.remove(temp_file)
I use it like this (the below is just a code snippet, and assumes that the
`compress` function exists in the current namespace):
SCRIPTS = [
'app/js/libs/EventSource.js',
'app/js/libs/Hash.js',
'app/js/libs/JSON.js',
'app/js/libs/ServiceClient.js',
'app/js/libs/jquery.hash.js',
'app/js/libs/Application.js',
'app/js/intro.js',
'app/js/jquery-extras.js',
'app/js/settings.js',
'app/js/api.js',
'app/js/game.js',
'app/js/user.js',
'app/js/pages.intro.js',
'app/js/pages.home.js',
'app/js/pages.log-in.js',
'app/js/pages.log-out.js',
'app/js/pages.new-command.js',
'app/js/pages.new-frame.js',
'app/js/pages.not-found.js',
'app/js/pages.register.js',
'app/js/pages.outro.js',
'app/js/outro.js',
]
SCRIPTS_OUT_DEBUG = 'app/js/multifarce.js'
SCRIPTS_OUT = 'app/js/multifarce.min.js'
STYLESHEETS = [
'app/media/style.css',
]
STYLESHEETS_OUT = 'app/media/style.min.css'
def main():
print 'Compressing JavaScript...'
compress(SCRIPTS, SCRIPTS_OUT, 'js', False, SCRIPTS_OUT_DEBUG)
print 'Compressing CSS...'
compress(STYLESHEETS, STYLESHEETS_OUT, 'css')
if __name__ == '__main__':
main()
|
slow sqlite insert using the jdbc drivers in java
Question: I just inserted 1 million records into a simple sqlite table with five columns.
It took a whopping 18 hours in Java using the JDBC drivers! I did the same
thing in python2.5 and it took less than a minute. The speed for select
queries seem fine. I think this is an issue with the jdbc drivers.
Is there a faster driver for sqlite3 in java?
Speed of inserting large numbers of rows is important for my schema migration
script, and I'd rather not have to use an external script to do the migrations
if I don't have to.
EDIT: fixed with connection.setAutoCommit(false); thanks Mark Rushakoff for
hinting me to the solution :)
Answer: Did you have your queries autocommitted? That could explain why it took so
long. Try wrapping them in a begin / end so that it doesn't have to do a full
commit for every insert.
[This page](http://www.sqlite.org/lang%5Ftransaction.html) explains begin/end
transaction, while the [FAQ](http://www.sqlite.org/faq.html#q19) touches on
inserts/autocommits.
|
What is the most compatible way to install python modules on a Mac?
Question: I'm starting to learn python and loving it. I work on a Mac mainly as well as
Linux. I'm finding that on Linux (Ubuntu 9.04 mostly) when I install a python
module using apt-get it works fine. I can import it with no trouble.
On the Mac, I'm used to using Macports to install all the Unixy stuff.
However, I'm finding that most of the python modules I install with it are not
being seen by python. I've spent some time playing around with PATH settings
and using python_select . Nothing has really worked and at this point I'm not
really understanding, instead I'm just poking around.
I get the impression that Macports isn't universally loved for managing python
modules. I'd like to start fresh using a more "accepted" (if that's the right
word) approach.
**So, I was wondering, what is the method that Mac python developers use to
manage their modules?**
Bonus questions:
Do you use Apple's python, or some other version? Do you compile everything
from source or is there a package manager that works well (Fink?).
Any tips or suggestions here are greatly appreciated. Thanks for your time. :)
Answer: The most popular way to manage python packages (if you're not using your
system package manager) is to use setuptools and easy_install. It is probably
already installed on your system. Use it like this:
easy_install django
easy_install uses the [Python Package Index](http://pypi.python.org) which is
an amazing resource for python developers. Have a look around to see what
packages are available.
A better option is [pip](http://pypi.python.org/pypi/pip), which is gaining
traction, as it attempts to [fix a lot of the
problems](http://www.b-list.org/weblog/2008/dec/14/packaging/) associated with
easy_install. Pip uses the same package repository as easy_install, it just
works better. Really the only time use need to use easy_install is for this
command:
easy_install pip
After that, use:
pip install django
At some point you will probably want to learn a bit about
[virtualenv](http://pypi.python.org/pypi/virtualenv). If you do a lot of
python development on projects with conflicting package requirements,
virtualenv is a godsend. It will allow you to have completely different
versions of various packages, and switch between them easily depending your
needs.
Regarding which python to use, sticking with Apple's python will give you the
least headaches, but If you need a newer version (Leopard is 2.5.1 I believe),
I would go with the
[macports](http://trac.macports.org/browser/trunk/dports/lang/python26/Portfile)
python 2.6.
|
Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages
Question: Can someone please explain to me what is going on with python in ubuntu 9.04?
I'm trying to spin up `virtualenv`, and the `--no-site-packages` flag seems to
do nothing with ubuntu. I installed `virtualenv 1.3.3` with `easy_install`
(which I've upgraded to `setuptools 0.6c9`) and everything seems to be
installed to `/usr/local/lib/python2.6/dist-packages`
I _assume_ that when installing a package using apt-get, it's placed in
`/usr/lib/python2.6/dist-packages/` ?
The issue is, there is a `/usr/local/lib/python2.6/site-packages` as well that
just sits there being empty. It would seem (by looking at the `path` in a
`virtualenv`) that this is the folder virtualenv uses as backup. Thus even
though I omit `--no-site-packages`, I can't access my local system's packages
from any of my virtualenvs.
So my questions are:
1. How do I get `virtualenv` to point to one of the `dist-packages`?
2. _Which_ dist-packages should I point it to? `/usr/lib/python2.6/dist-packages` or `/usr/local/lib/python2.6/dist-packages/`
3. What is the point of `/usr/lib/python2.6/site-packages`? There's nothing in there!
4. Is it first come first serve on the path? If I have a newer version of package XYZ installed in `/usr/local/lib/python2.6/dist-packages/` and and older one (from ubuntu repos/apt-get) in `/usr/lib/python2.6/dist-packages`, which one gets imported when I `import xyz`? I'm assuming this is based on the path list, yes?
5. Why the hell is this so confusing? Is there something I'm missing here?
6. Where is it defined that `easy_install` should install to `/usr/local/lib/python2.6/dist-packages`?
7. Will this affect `pip` as well?
Thanks to anyone who can clear this up!
Answer: I believe Mike Orr's answer from [the virtual-env mailing
list](http://groups.google.com/group/python-
virtualenv/browse_thread/thread/1412994580f0a2c6/17271fcd6a6e747e?lnk=gst&q=ubuntu+virtualenv#17271fcd6a6e747e)
seems to be the best. Note the OP published this question in both places.
**Original content of mail:**
Years ago Debian created /usr/local/lib/pythonVERSION/site-packages, and
compiled the Python binary to include it in the default search path. Ubuntu
followed Debian's lead as it normally does. The Python developers did not like
this because you'd get interference with a locally-installed
/usr/local/bin/python using the same site-packages directory. Ubuntu finally
decided to abandon site-packages and use dist-packages instead, a name which
they invented so it wouldn't interfere with anything. The loing story is out
there somewhere if you google it, somewhere in the Python bug tracker or
distutils SIG or such.
The system works, at least if you use the Ubuntu virtualenv package. Some
people have had problems using a locally-installed virtualenv on Ubuntu
because the magic sys.path entries weren't being added or something. I'm not
sure about --no-site-packages because I never use that option: I run PIL and
mysqldb from the Ubuntu packages because it can be hard to compile their C
dependencies sometimes. (Need the right header files, Python is ignoring the
header files, etc.)
So Ubuntu Python packages go into /usr/lib/pythonVERSION/dist-packages. Or
that python-support directory for some reason. Locally-installed Python
packages go into /usr/local/lib/pythonVERSION/dist-packages by default.
Whenever I install an Ubuntu 9.04 system I run:
$ sudo apt-get install python-setuptools (6.0c9) $ sudo apt-get install
python-virtualenv (1.3.3) $ sudo easy_install pip $ sudo pip install
virtualenvwrapper
The virtualenvs work fine this way, although I haven't tried --no-site-
packages.
> I'm trying to spin up virtualenv, and the --no-site-packages flag seems to
> do nothing with ubuntu. I installed virtualenv 1.3.3 with easy_install
> (which I've upgraded to setuptools 0.6c9)
These versions are both in Ubuntu 9.04, so you're making it harder on yourself
by installing them locally.
> and everything seems to be installed to /usr/local/lib/python2.6/dist-
> packages
Yes
> I assume that when installing a package using apt-get, it's placed in /
> usr/lib/python2.6/dist-packages/ ?
Yes
> 4. Is it first come first serve on the path? If I have a newer version of
> package XYZ installed in /usr/local/lib/python2.6/dist- packages/ and and
> older one (from ubuntu repos/apt-get) in /usr/lib/ python2.6/dist-packages,
> which one gets imported when I import xyz? I'm assuming this is based on the
> path list, yes?
>
sys.path is scanned in order. The only funny thing is that .pth eggs get put
earlier or later in the path than some people expect. But if you're using pip
for everything it can do (i.e. except to install pip itself, precompiled eggs,
and a snapshot of a local directory that's a copy rather than an egg link),
you won't have many .pth eggs anyway.
> 5. Why the hell is this so confusing? Is there something I'm missing here?
>
It's not well documented. I figured it out by scanning the web.
> 7. Will this affect pip as well?
>
Yes, pip will automatically install to /usr/local/lib/pythonVERSION/site-
packages. Use "pip install -E $VIRTUAL_ENV packagename" to install into a
virtualenv.
|
Django newbie deployment question - ImportError: Could not import settings 'settings'
Question: The app runs fine using django internal server however when I use apache +
mod_python I get the below error
* * *
File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 75, in __init__
raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'settings' (Is it on sys.path? Does it have syntax errors?): No module named settings
* * *
Here is the needed information
1) Project directory: /root/djangoprojects/mysite
2) directory listing of /root/djangoprojects/mysite
ls -ltr
total 28
-rw-r--r-- 1 root root 546 Aug 1 08:34 manage.py
-rw-r--r-- 1 root root 0 Aug 1 08:34 __init__.py
-rw-r--r-- 1 root root 136 Aug 1 08:35 __init__.pyc
-rw-r--r-- 1 root root 2773 Aug 1 08:39 settings.py
-rw-r--r-- 1 root root 1660 Aug 1 08:53 settings.pyc
drwxr-xr-x 2 root root 4096 Aug 1 09:04 polls
-rw-r--r-- 1 root root 581 Aug 1 10:06 urls.py
-rw-r--r-- 1 root root 314 Aug 1 10:07 urls.pyc
3) App directory : /root/djangoprojects/mysite/polls
4) directory listing of /root/djangoprojects/mysite/polls
ls -ltr
total 20
-rw-r--r-- 1 root root 514 Aug 1 08:53 tests.py
-rw-r--r-- 1 root root 57 Aug 1 08:53 models.py
-rw-r--r-- 1 root root 0 Aug 1 08:53 __init__.py
-rw-r--r-- 1 root root 128 Aug 1 09:02 views.py
-rw-r--r-- 1 root root 375 Aug 1 09:04 views.pyc
-rw-r--r-- 1 root root 132 Aug 1 09:04 __init__.pyc
5) Anywhere in the filesystem running import django in python interpreter
works fine
6) content of httpd.conf
<Location "/mysite">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE settings
PythonOption django.root /mysite
PythonPath "['/root/djangoprojects/', '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls', '/var/www'] + sys.path"
PythonDebug On
</Location>
7) PYTHONPATH variable is set to
echo $PYTHONPATH
/root/djangoprojects/mysite
8) DJANGO_SETTINGS_MODULE is set to
echo $DJANGO_SETTINGS_MODULE
mysite.settings
9) content of sys.path is
import sys
>>> sys.path
['', '/root/djangoprojects/mysite', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/local/lib/python2.6/dist-packages']
How do I add the settings location to sys.path so that it is persistent across
sessions?
I have read umpteen posts from people having the same issue and have tried a
lot; it completely beats me as to what I need to do.
Looking for some help.
Thanks in advance Ankur Gupta
Answer: Your apache configuration should look like this:
<Location "/mysite">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonOption django.root /mysite
PythonPath "['/root/djangoprojects/', '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls', '/var/www'] + sys.path"
PythonDebug On
</Location>
Note that the sole difference is the "mysite.settings". Don't forget to
restart apache once the config has changed (apache2ctl restart). See
<http://docs.djangoproject.com/en/dev/howto/deployment/modpython/> for more
info.
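If you want to double-check from a shell, independently of Apache, that the settings module resolves with that path (paths taken from the question; printing DEBUG is just an arbitrary smoke test), something like this should work:
$ PYTHONPATH=/root/djangoprojects DJANGO_SETTINGS_MODULE=mysite.settings python -c "from django.conf import settings; print settings.DEBUG"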
|
Error drawing text on NSImage in PyObjC
Question: I'm trying to overlay an image with some text using PyObjC, while striving to
answer my question, ["Annotate images using tools built into OS
X"](http://stackoverflow.com/questions/1229171/annotate-images-using-tools-
built-into-os-x). By referencing [CocoaMagic](http://www.rubycocoa.com/cocoa-
magic-for-gruff-graphs/2/), a RubyObjC replacement for
[RMagick](http://rmagick.rubyforge.org/), I've come up with this:
#!/usr/bin/env python
from AppKit import *
source_image = "/Library/Desktop Pictures/Nature/Aurora.jpg"
final_image = "/Library/Desktop Pictures/.loginwindow.jpg"
font_name = "Arial"
font_size = 76
message = "My Message Here"
app = NSApplication.sharedApplication() # remove some warnings
# read in an image
image = NSImage.alloc().initWithContentsOfFile_(source_image)
image.lockFocus()
# prepare some text attributes
text_attributes = NSMutableDictionary.alloc().init()
font = NSFont.fontWithName_size_(font_name, font_size)
text_attributes.setObject_forKey_(font, NSFontAttributeName)
text_attributes.setObject_forKey_(NSColor.blackColor, NSForegroundColorAttributeName)
# output our message
message_string = NSString.stringWithString_(message)
size = message_string.sizeWithAttributes_(text_attributes)
point = NSMakePoint(400, 400)
message_string.drawAtPoint_withAttributes_(point, text_attributes)
# write the file
image.unlockFocus()
bits = NSBitmapImageRep.alloc().initWithData_(image.TIFFRepresentation)
data = bits.representationUsingType_properties_(NSJPGFileType, nil)
data.writeToFile_atomically_(final_image, false)
When I run it, I get this:
Traceback (most recent call last):
File "/Users/clinton/Work/Problems/TellAtAGlance/ObviouslyTouched.py", line 24, in <module>
message_string.drawAtPoint_withAttributes_(point, text_attributes)
ValueError: NSInvalidArgumentException - Class OC_PythonObject: no such selector: set
Looking in the docs for drawAtPoint:withAttributes:, it says, "You should only
invoke this method when an NSView has focus." NSImage is not a subclass of
NSView, but I would hope this would work, and something very similar seems to
work in the Ruby example.
What do I need to change to make this work?
* * *
I rewrote the code, converting it faithfully, line for line, into an
Objective-C Foundation tool. It works, without problems. [I would be happy to
post it here if there is a reason to do so.]
The question then becomes, how does:
[message_string drawAtPoint:point withAttributes:text_attributes];
differ from
message_string.drawAtPoint_withAttributes_(point, text_attributes)
? Is there a way to tell which "OC_PythonObject" is raising the
NSInvalidArgumentException?
Answer: Here are the problems in the above code:
text_attributes.setObject_forKey_(NSColor.blackColor, NSForegroundColorAttributeName)
->
text_attributes.setObject_forKey_(NSColor.blackColor(), NSForegroundColorAttributeName)
bits = NSBitmapImageRep.alloc().initWithData_(image.TIFFRepresentation)
data = bits.representationUsingType_properties_(NSJPGFileType, nil)
->
bits = NSBitmapImageRep.imageRepWithData_(image.TIFFRepresentation())
data = bits.representationUsingType_properties_(NSJPEGFileType, None)
Minor typos indeed.
Note that the middle portion of the code can be replaced with this more-
readable variant:
# prepare some text attributes
text_attributes = {
NSFontAttributeName : NSFont.fontWithName_size_(font_name, font_size),
NSForegroundColorAttributeName : NSColor.blackColor()
}
# output our message
NSString.drawAtPoint_withAttributes_(message, (400, 400), text_attributes)
I learned this by looking at the source code to [NodeBox](http://nodebox.net),
the twelve lines of
[psyphography.py](http://dev.nodebox.net/browser/nodebox/trunk/play/psyphography.py)
and
[cocoa.py](http://dev.nodebox.net/browser/nodebox/trunk/src/nodebox/graphics/cocoa.py),
particularly the save and _getImageData methods.
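Putting the corrections together, the tail of the script would read as below; note also that the lowercase false in the question's last line needs to be Python's False:
# write the file
image.unlockFocus()
bits = NSBitmapImageRep.imageRepWithData_(image.TIFFRepresentation())
data = bits.representationUsingType_properties_(NSJPEGFileType, None)
data.writeToFile_atomically_(final_image, False)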
|
MySQL LOAD DATA LOCAL INFILE example in python?
Question: I am looking for a syntax definition, example, sample code, wiki, etc. for
executing a LOAD DATA LOCAL INFILE command from python.
I believe I can use mysqlimport as well if that is available, so any feedback
(and code snippet) on which is the better route is welcome. A Google search
is not turning up much in the way of current info.
The goal in either case is the same: Automate loading hundreds of files with a
known naming convention & date structure, into a single MySQL table.
David
Answer: Well, using python's MySQLdb, I use this:
connection = MySQLdb.Connect(host='**', user='**', passwd='**', db='**')
cursor = connection.cursor()
query = "LOAD DATA INFILE '/path/to/my/file' INTO TABLE sometable FIELDS TERMINATED BY ';' ENCLOSED BY '\"' ESCAPED BY '\\\\'"
cursor.execute( query )
connection.commit()
replacing the host/user/passwd/db as appropriate for your needs. This is based
on the MySQL docs [here](http://dev.mysql.com/doc/refman/5.1/en/load-
data.html), The exact LOAD DATA INFILE statement would depend on your specific
requirements etc (note the FIELDS TERMINATED BY, ENCLOSED BY, and ESCAPED BY
statements will be specific to the type of file you are trying to read in).
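If you specifically need the LOCAL variant (the file lives on the client machine rather than on the database server), a sketch, assuming both your MySQLdb build and the server permit local-infile, would be:
import MySQLdb
# local_infile=1 asks the client library to permit LOAD DATA LOCAL;
# the server must also have local-infile enabled.
connection = MySQLdb.connect(host='**', user='**', passwd='**', db='**',
                             local_infile=1)
cursor = connection.cursor()
cursor.execute("LOAD DATA LOCAL INFILE '/path/to/my/file' INTO TABLE sometable "
               "FIELDS TERMINATED BY ';' ENCLOSED BY '\"' ESCAPED BY '\\\\'")
connection.commit()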
|
How can I use a VB6 COM 'reference' in IronPython?
Question: I'm currently developing what is more or less a script that needs to get some
data from a VB 6 COM dll. This dll is currently used in a MS Word VBA project,
and it exports classes, etc. to the VBA code. It is added in the Tools ->
References menu in the VBA editor, and I can see its classes in the object
browser of VBA.
From my reading, it is possible to use a VB6 COM library in VB.NET (or at
least, it is supposed to be possible). As it should be possible in VB.NET, and
since .NET runs on the CLR, and since IronPython does too, logically, can't I
access this ancient DLL from IronPython?
I have tried `import clr; clr.AddReferenceToFileAndPath(dllpath)` in
IronPython, but keep getting 'IOError: file does not exist', which is clearly
false.
If anyone can point me to using a VB6 COM object in /any/ .NET language, that
would be greatly appreciated.
Thanks!
PS: No, I can't edit / view the source of the COM DLL, it's 3rd party proprietary
stuff. PPS: Is there any way I can get the GUID / COM 'name' of the dll?
Answer: You need to generate a Runtime COM Wrapper assembly using the `tlbimp` tool,
and add a reference to that; languages which support .net attributes can do
the interoperation explicitly, but even there, autogenerating the wrapper is
far simpler.
Inspecting the wrapper assembly in `ildasm` will show exactly how the
conversion has been performed.
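A rough sketch of the two steps, with placeholder names throughout (ThirdParty.dll, Interop.ThirdParty and SomeComClass are illustrations, not the real third-party library). First, from an SDK / Visual Studio command prompt:
tlbimp ThirdParty.dll /out:Interop.ThirdParty.dll
Then, in IronPython, reference the generated wrapper rather than the raw COM dll:
import clr
clr.AddReferenceToFileAndPath(r"C:\path\to\Interop.ThirdParty.dll")
from Interop.ThirdParty import SomeComClass   # whatever classes the wrapper exposes
obj = SomeComClass()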
|
How do you USE Fortran 90 module data
Question: Let's say you have a Fortran 90 module containing _lots_ of variables,
functions and subroutines. In your `USE` statement, which convention do you
follow:
1. explicitly declare which variables/functions/subroutines you're using with the `, only :` syntax, such as `USE [module_name], only : variable1, variable2, ...`?
2. Insert a blanket `USE [module_name]`?
On the one hand, the `only` clause makes the code a bit more verbose. However,
it forces you to repeat yourself in the code and if your module contains
_lots_ of variables/functions/subroutines, things begin to look unruly.
Here's an example:
module constants
implicit none
real, parameter :: PI=3.14
real, parameter :: E=2.71828183
integer, parameter :: answer=42
real, parameter :: earthRadiusMeters=6.38e6
end module constants
program test
! Option #1: blanket "use constants"
! use constants
! Option #2: Specify EACH variable you wish to use.
use constants, only : PI,E,answer,earthRadiusMeters
implicit none
write(6,*) "Hello world. Here are some constants:"
write(6,*) PI, &
E, &
answer, &
earthRadiusMeters
end program test
**Update** Hopefully someone says something like _"Fortran? Just recode it in
C#!"_ so I can down vote you.
* * *
**Update**
I like [Tim Whitcomb's answer](http://stackoverflow.com/questions/1240510/how-
do-you-use-fortran-90-module-data/1240696#1240696), which compares Fortran's
`USE modulename` with Python's `from modulename import *`. A topic which has
been on Stack Overflow before:
* [‘import module’ or ‘from module import’](http://stackoverflow.com/questions/710551/import-module-or-from-module-import)
* [In an answer](http://stackoverflow.com/questions/710551/import-module-or-from-module-import/710603#710603), Mark Roddy mentioned:
> don't use 'from module import *'. For any reasonably large set of code, if
> you 'import *' you will likely be cementing it into the module, unable to
> be removed. This is because it is difficult to determine what items used in
> the code are coming from 'module', making it easy to get to the point where
> you think you don't use the import anymore but it's extremely difficult to be
> sure.
* [What are good rules of thumb for python imports?](http://stackoverflow.com/questions/193919/what-are-good-rules-of-thumb-for-python-imports)
* [dbr's answer](http://stackoverflow.com/questions/193919/what-are-good-rules-of-thumb-for-python-imports/193979#193979) contains
> don't do from x import * - it makes your code very hard to understand, as
> you cannot easily see where a method came from (from x import *; from y
> import *; my_func() - where is my_func defined?)
So, I'm leaning towards a consensus of explicitly stating all the items I'm
using in a module via
USE modulename, only : var1, var2, ...
And as [Stefano Borini
mentions](http://stackoverflow.com/questions/1240510/how-do-you-use-
fortran-90-module-data/1240583#1240583),
> [if] you have a module so large that you feel compelled to add ONLY, it
> means that your module is too big. Split it.
Answer: I used to just do `use modulename` - then, as my application grew, I found it
more and more difficult to find the source of functions (without turning to
grep) - some of the other code floating around the office still uses a
one-subroutine-per-file layout, which has its own set of problems, but it makes
it much easier to use a text editor to move through the code and quickly track
down what you need.
After experiencing this, I've become a convert to using `use`...`only`
whenever possible. I've also started picking up Python, and view it the same
way as `from modulename import *`. There's a lot of great things that modules
give you, but I prefer to keep my global namespace tightly controlled.
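For readers coming at this from the Python side of the analogy, the same trade-off looks roughly like this (constants standing in for any module of your own):
# Blanket import: names appear out of nowhere, their origin is invisible
from constants import *
print PI                     # where did PI come from?
# Explicit import: every name is accounted for at the top of the file
from constants import PI, E
print PI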
|
Generating a graph with multiple (sets of multiple sets of multiple) X-axis data sets
Question: I am looking for a way to generate a graph with multiple sets of data on the
X-axis, each of which is divided into multiple sets of multiple sets. I
basically want to take [this graph](http://gdgraph.com/samples/sample1A.html)
and place similar graphs side by side with it. I am trying to build a graph
of the duration (Y-axis) of the same jobs (0-3) with different
configurations (0-1) on multiple servers (each group with the same 8 jobs).
Hopefully the following diagram will illustrate what I am trying to accomplish
(smaller groupings are separated by pipes, larger groupings by double pipes):
|| 0 1 | 0 1 | 0 1 | 0 1 || 0 1 | 0 1 | 0 1 | 0 1 || 0 1 | 0 1 | 0 1 | 0 1 ||
|| 0 | 1 | 2 | 3 || 0 | 1 | 2 | 3 || 0 | 1 | 2 | 3 ||
|| Server 1 || Server 2 || Server 3 ||
Is this possible with either the GD::Graph Perl module or the matplotlib
Python module? I can't find examples or documentation on this subject for
either.
Answer: Here's some Python code that will produce what you're looking for. (The
example uses 3 configurations rather than 2 to make sure the code was fairly
general.)
import matplotlib.pyplot as plt
import random
nconfigs, njobs, nservers = 3, 4, 4
width = .9/(nconfigs*njobs)
job_colors = [(0,0,1), (0,1,0), (1,0,0), (1,0,1)]
def dim(color, fraction=.5):
return tuple([fraction*channel for channel in color])
plt.figure()
x = 0
for iserver in range(nservers):
for ijob in range(njobs):
for iconfig in range(nconfigs):
color = dim(job_colors[ijob], (iconfig+2.)/(nconfigs+1))
plt.bar(x, 1.+random.random(), width, color=color)
x += width
x += .1
plt.show()
This code is probably fairly transparent. The odd term
`(iconfig+2.)/(nconfigs+1)` is just to dim the colors for the different
configurations, but keep them bright enough so the colors can be
distinguished.
The output looks like:
![alt text](http://i28.tinypic.com/dgrnzk.png)
|
can't edit line in python's command line in Linux
Question: I'm running the Python CLI under Linux:
bla:visualization> python
Python 2.1.1 (#18, Nov 1 2001, 11:15:13)
[GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>>
For some reason the arrow keys and the delete key don't work:
**delete:**
>>> x^H^H^H
**up arrow:**
>>> x^[[A^[[A
etc...
How can I make these work?
Answer: The basic problem is that your Python installation was likely not compiled
with the `readline` library. You can confirm this by trying to import the
`readline` module:
import readline
You should get an error when you import if `readline` is not present.
If this is the case, there's not much you can do other than recompile Python
with the `readline` library, if you can.
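If rebuilding Python isn't practical, one stopgap (assuming the rlwrap utility is available in your distribution's packages) is to wrap the interpreter externally, which at least gives you arrow-key history and basic line editing:
$ rlwrap python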
|
Is the default configuration of re incorrect on macbooks? Or have I simply misunderstood something?
Question: Python came pre-installed on my macbook and I have been slowly getting
acquainted with the language. However, it seems that my configuration of the
re library is incorrect, or I simply misunderstand something and things are
amiss. Whenever I run a python script with "import re", I receive the
following error:
Traceback (most recent call last):
File "regex.py", line 2, in <module>
import re
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/re.py", line 4, in <module>
# re-compatible interface for the sre matching engine
AttributeError: 'module' object has no attribute 'compile'
What gives!
Answer: Pretty mysterious problem, given that line 4 in that file (and many other
lines around that line number) is a comment (indeed the error msg itself shows
that comment line!-) so even with the worst misconfiguration I'd be hard put
to reproduce the problem as given.
Let's try to simplify things and check how they may (or may not) break. Please
open a Terminal, mkdir a new empty directory somewhere and cd into it (so we
know there's no filename conflict wrt modules etc), at the bash prompt `unset
PYTHONPATH` (so we know for sure that isn't interfering), `unset
PYTHONSTARTUP` (ditto); then type the command:
$ python -c'import re; print re.__file__'
It should emit the line:
/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/re.pyc
does it? If so, then we can keep rooting around to understand what name clash
(or whatever) caused your original problem. If the problem persists under such
"clean" conditions then your system is jinxed and I would reinstal Mac OS X
Leopard if I were in your shoes!
|
Simple python/Regex problem: Removing all new lines from a file
Question: I'm becoming acquainted with python and am creating problems in order to help
myself learn the ins and outs of the language. My next problem comes as
follows:
I have copied and pasted a huge slew of text from the internet, but the copy
and paste added several new lines to break up the huge string. I wish to
programatically remove all of these and return the string into a giant blob of
characters. This is obviously a job for regex (I think), and parsing through
the file and removing all instances of the newline character sounds like it
would work, but it doesn't seem to be going over all that well for me.
Is there an easy way to go about this? It seems rather simple.
Answer: The two main alternatives: read everything in as a single string and remove
newlines:
clean = open('thefile.txt').read().replace('\n', '')
or, read line by line, removing the newline that ends each line, and join it
up again:
clean = ''.join(l[:-1] for l in open('thefile.txt'))
The former alternative is probably faster, but, as always, I strongly
recommend you MEASURE speed (e.g., use `python -mtimeit`) in cases of your
specific interest, rather than just assuming you know how performance will be.
REs are probably slower, but, again: don't guess, MEASURE!
So here are some numbers for a specific text file on my laptop:
$ python -mtimeit -s"import re" "re.sub('\n','',open('AV1611Bible.txt').read())"
10 loops, best of 3: 53.9 msec per loop
$ python -mtimeit "''.join(l[:-1] for l in open('AV1611Bible.txt'))"
10 loops, best of 3: 51.3 msec per loop
$ python -mtimeit "open('AV1611Bible.txt').read().replace('\n', '')"
10 loops, best of 3: 35.1 msec per loop
The file is a version of the KJ Bible, downloaded and unzipped from
[here](http://printkjv.ifbweb.com/AV%5Ftxt.zip) (I do think it's important to
run such measurements on one easily fetched file, so others can easily
reproduce them!).
Of course, a few milliseconds more or less on a file of 4.3 MB, 34,000 lines,
may not matter much to you one way or another; but as the fastest approach is
also the simplest one (far from an unusual occurrence, especially in
Python;-), I think that's a pretty good recommendation.
|
How to include and use .eggs/pkg_resources within a project directory targeting python 2.5.1
Question: I have python .egg files that are stored in a relative location to some .py
code. The problem is, I am targeting python 2.5.1 computers which require my
project be self contained in a folder (hundreds of thousands of OLPC XO 8.2.1
release laptops running Sugar). This means I cannot just ./ez_install to
perform a system-wide setuptools/pkg_resources installation.
Example directory structure:
My Application/
My Application/library1.egg
My Application/libs/library2.egg
My Application/test.py
I am wondering how best to import and use library1 and library2 from within
test.py with no pkg_resources system-wide installation. Is my best option
simply to unzip the .egg files?
Thanks for any tips.
Answer: If you want to be able to use pkg_resources, just copy pkg_resources.py
alongside your application's main script. It's designed to be able to be used
this way as a standalone runtime.
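A minimal sketch of what test.py could then do, using the egg locations from the question (the top-level package name inside the egg is an assumption, and the eggs need to be zip-safe for this to work as-is):
import os
import sys
import pkg_resources   # the standalone copy sitting next to this script
here = os.path.dirname(os.path.abspath(__file__))
# Make the zipped eggs importable and register them with pkg_resources.
for egg in (os.path.join(here, 'library1.egg'),
            os.path.join(here, 'libs', 'library2.egg')):
    sys.path.insert(0, egg)
    pkg_resources.working_set.add_entry(egg)
import library1   # assumed top-level package name inside library1.egg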
|
get_allowed_auths() in paramiko for authentication types
Question: I am trying to get supported authentication types/methods from a running SSH
server in Python.
I found the method get_allowed_auths() in the ServerInterface class in
Paramiko, but I can't tell whether it is usable in a simple client-like
snippet of code (I am writing something that accomplishes ONLY this task).
Can anyone suggest some links with examples, other than the distribution
documentation? Or maybe other ideas on how to do this?
Thanks.
Answer: You can try to authenticate using no authentication, which should always fail,
but the server will then send back the auth types that can continue. There is
an `auth_none()` method provided by `paramiko.Transport` to do this.
import paramiko
import socket
s = socket.socket()
s.connect(('localhost', 22))
t = paramiko.Transport(s)
t.connect()
try:
t.auth_none('')
except paramiko.BadAuthenticationType, err:
print err.allowed_types
|
Multipart form post to google app engine not working
Question: I am trying to post a multipart form using httplib to a URL hosted on Google
App Engine. The POST fails with "Method Not Allowed", though the same post
using urllib2 works. A full working example is attached.
My question is: what is the difference between the two, and why does one work
but not the other?
1. Is there a problem in my multipart form post code?
2. Or is the problem with Google App Engine?
3. Or something else?
* * *
import httplib
import urllib2, urllib
# multipart form post using httplib fails, saying
# 405, 'Method Not Allowed'
url = "http://mockpublish.appspot.com/publish/api/revision_screen_create"
_, host, selector, _, _ = urllib2.urlparse.urlsplit(url)
print host, selector
h = httplib.HTTP(host)
h.putrequest('POST', selector)
BOUNDARY = '----------THE_FORM_BOUNDARY'
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
h.putheader('content-type', content_type)
h.putheader('User-Agent', 'Python-urllib/2.5,gzip(gfe)')
content = ""
L = []
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="test"')
L.append('')
L.append("xxx")
L.append('--' + BOUNDARY + '--')
L.append('')
content = '\r\n'.join(L)
h.putheader('content-length', str(len(content)))
h.endheaders()
h.send(content)
print h.getreply()
# post using urllib2 works
data = urllib.urlencode({'test':'xxx'})
request = urllib2.Request(url)
f = urllib2.urlopen(request, data)
output = f.read()
print output
Edit: After changing putrequest to request (on Nick Johnson's suggestion), it
works
url = "http://mockpublish.appspot.com/publish/api/revision_screen_create"
_, host, selector, _, _ = urllib2.urlparse.urlsplit(url)
h = httplib.HTTPConnection(host)
BOUNDARY = '----------THE_FORM_BOUNDARY'
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
content = ""
L = []
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="test"')
L.append('')
L.append("xxx")
L.append('--' + BOUNDARY + '--')
L.append('')
content = '\r\n'.join(L)
h.request('POST', selector, content,{'content-type':content_type})
res = h.getresponse()
print res.status, res.reason, res.read()
So now the question remains: what is the difference between the two approaches,
and can the first one be made to work?
Answer: [Nick Johnson's](http://stackoverflow.com/users/12030/nick-johnson) answer
Have you tried sending the request with httplib using .request() instead of
.putrequest() etc, supplying the headers as a dict?
it works!
|
Reimport a module in python while interactive
Question: I know it can be done, but I never remember how.
How can you reimport a module in python? The scenario is as follows: I import
a module interactively and tinker with it, but then I face an error. I fix the
error in the .py file and then I want to reimport the fixed module without
quitting python. How can I do it ?
Answer: This should work:
reload(my.module)
From the [Python docs](http://docs.python.org/library/functions.html#reload)
> Reload a previously imported module. The argument must be a module object,
> so it must have been successfully imported before. This is useful if you
> have edited the module source file using an external editor and want to try
> out the new version without leaving the Python interpreter.
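One caveat worth remembering: reload() re-executes the module in place, but any names you pulled in with from module import name still point at the old objects, so re-run those imports afterwards (mymodule and some_function are placeholders):
import mymodule                      # first import
# ... edit mymodule.py in your editor ...
reload(mymodule)                     # re-executes the module's code
from mymodule import some_function   # re-bind names that were imported individually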
|
Can you get more information about the online file?
Question: I have an online file: <http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe>
(please don't download it). I want to determine whether the software version has
changed, so I want more information about it. For example, using Python, I can
get this:
import urllib2,urllib
req = urllib2.Request('http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe')
response = urllib2.urlopen(req)
print response.info()
print response.geturl()
Content-Length: 16868680
Server: qqdlsrv(1.84 for linux)
Connection: close
Content-Disposition: attachment; filename=TM2009Beta_chs.exe
Accept-Ranges: bytes
Content-Type: application/octet-stream
http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe
Can you get more information to let me determine whether the software version
has changed?
Answer: 1. Download the first thousand bytes or so of the file using the Range header (see the sketch after this list).
2. Use [pefile](http://code.google.com/p/pefile/) to parse the PE header and extract version information.
3. With the data, [extract useful information](http://books.google.com/books?id=Pas0YoxygnkC&pg=PA40&lpg=PA40&dq=pe+header+layout&source=bl&ots=QmWK2D3niq&sig=FEWTlTHlaukrlQ-mHybNqW81BjY&hl=en&ei=LuGASpnPH42YsgOj5JH9CA&sa=X&oi=book%5Fresult&ct=result&resnum=3#v=onepage&q=pe%20header%20layout&f=false) such as the time date stamp and other goodies that let you find changes in files without reading the whole thing.
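A sketch of the first two steps with urllib2 and pefile (the 4 KB figure is arbitrary; if pefile complains that the headers are truncated, fetch a larger range):
import urllib2
import pefile
url = 'http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe'
req = urllib2.Request(url)
req.add_header('Range', 'bytes=0-4095')      # only the first 4 KB
data = urllib2.urlopen(req).read()
# Parse just the PE headers from the partial download.
pe = pefile.PE(data=data, fast_load=True)
print pe.FILE_HEADER.TimeDateStamp           # link timestamp changes with each build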
|