Python script to see if a web page exists without downloading the whole page? I am trying to write a script to test for the existence of a web page; it would be nice if it would check without downloading the whole page. This is my jumping-off point. I have seen multiple examples use httplib in the same way; however, every site I check simply returns false.

````
import httplib
from httplib import HTTP
from urlparse import urlparse

def checkUrl(url):
    p = urlparse(url)
    h = HTTP(p[1])
    h.putrequest('HEAD', p[2])
    h.endheaders()
    return h.getreply()[0] == httplib.OK

if __name__ == "__main__":
    print checkUrl("http://www.stackoverflow.com")  # True
    print checkUrl("http://stackoverflow.com/notarealpage.html")  # False
````

Any ideas? Edit: someone suggested this, but their post was deleted. Does urllib2 avoid downloading the whole page?

````
import urllib2
try:
    urllib2.urlopen(some_url)
    return True
except urllib2.URLError:
    return False
````
How about this:

````
import httplib
from urlparse import urlparse

def checkUrl(url):
    p = urlparse(url)
    conn = httplib.HTTPConnection(p.netloc)
    conn.request('HEAD', p.path)
    resp = conn.getresponse()
    return resp.status < 400

if __name__ == '__main__':
    print checkUrl('http://www.stackoverflow.com')   # True
    print checkUrl('http://stackoverflow.com/notarealpage.html')  # False
````

This will send an HTTP HEAD request and return True if the response status code is < 400. Notice that StackOverflow's root path returns a redirect (301), not a 200 OK.
easy_install extra build arguments. I am trying to install <a href="http://www.reportlab.com/software/opensource/rl-toolkit/" rel="nofollow">ReportLab</a>, but I have no C compiler available to compile the rl_accel library. Using setup.py I would add an extra argument like so:

````
python setup.py --rl_accel=no install
````

Is it possible to add extra arguments to easy_install so that I can reproduce the same effect?

````
easy_install reportlab [something here?]
````
Not via the command line itself, but you can write the configuration file: <a href="http://packages.python.org/distribute/easy_install.html#controlling-build-options" rel="nofollow">http://packages.python.org/distribute/easy_install.html#controlling-build-options</a>
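The linked page describes putting build options in a `setup.cfg` or `~/.pydistutils.cfg` file that easy_install's build step will read. As a rough sketch, the `[build_ext]` section is standard distutils; whether ReportLab's setup script exposes an rl_accel switch through it is an assumption:

````
# ~/.pydistutils.cfg (or a setup.cfg next to the project)
[build_ext]
# standard distutils build_ext options, for example:
include_dirs = /usr/local/include
library_dirs = /usr/local/lib
````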
Who never shot at the queen?
null
Access django dict through shell after aggregate/annotate. Simple question: I am trying to get the result of an aggregate to use in my view to make a simple calculation. I have written the following:

````
sms_raised = SmsBacker.objects.values('amount').annotate(Sum('amount'))
sms_raised
[{'amount': 150L, 'amount__sum': 600}]
````

How do I access those values in the shell? I have tried:

````
sms_raised_amount
sms_raised__amount
sms_raised.amount
````

All with no luck.
The dict is the first item in the list, so:

````
sms_raised[0]['amount']
sms_raised[0]['amount__sum']
````
How to convert EST/EDT to GMT? I have a few records inside a column which represent either EST or EDT time. I need to convert these times to GMT time. The format of the times is:

````
10/1/2010 0:0:0
10/1/2010 0:6:0
10/1/2010 23:54:0
10/3/2010 0:0:0
````

Can someone help me out here? Thanks.
Without an associated time, the time zone does not matter, nor can the date be translated to a different time zone. Is there a related time in another column? EDIT: Alright, now that there IS a time, I will let the Python gurus take over ;]
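Picking up where the answer leaves off, a minimal sketch of the conversion with pytz; it assumes the strings are in the US/Eastern zone (which covers both EST and EDT automatically) and that pytz is installed:

````
from datetime import datetime
import pytz

eastern = pytz.timezone('US/Eastern')

def to_gmt(s):
    # Parse the naive timestamp, attach the Eastern zone (localize
    # picks EST or EDT based on the date), then convert to UTC/GMT.
    naive = datetime.strptime(s, '%m/%d/%Y %H:%M:%S')
    return eastern.localize(naive).astimezone(pytz.utc)

print(to_gmt('10/1/2010 0:0:0'))  # 2010-10-01 04:00:00+00:00 (EDT is UTC-4)
````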
Flask - Jinja environment magic in a Blueprint. When inside a jinja template:

- How is the string provided to {% extends xxx_string %} and {% include xxx_string %} resolved?
- Is this relative to the actual filesystem, or to a generated namespace (such as when using the Flask url_for function)?

Ultimately I would like to use relative imports when inside my templates (I do not want to have to update filesystem locations INSIDE each and every template with respect to the Blueprint). I would like to be able to:

- Store the actual Blueprint package and its nested static/template resources under an arbitrary filesystem path ('/bob/projects/2013_07/marketanalysis').
- Within the python Blueprint package, define a separate 'slugname' to reference the blueprint instance and all of its resources. Register this slugname on the application for global references (without global name collisions or race-conditions).
- Have generic view functions that provide 'cookie-cutter' layouts depending on how the blueprint is being used (headlines, cover, intro, fullstory, citations).
- Internally, within the filesystem of the blueprint package, use relative pathnames when resolving extends()/include() inside templates (akin to `url_for` shortcuts when referencing relative blueprint views).

The idea is that when the blueprint package is bundled with all of its resources, it has no idea where it will be deployed and may be relocated several times under different <em>slug-names</em>. The python <em>interface</em> should be the same for every "bundle", but the html content, css, javascript and images/downloads will be unique for each bundle.

<hr>

I have sharpened the question quite a bit. I think this is as far as it should go on this thread.
Using folders instead of prefixes makes it a bit cleaner, in my opinion. Example application structure:

````
yourapplication
|- bp143
   |- templates
      |- bp143
         |- index.jinja
         |- quiz.jinja
         |- context.jinja
|- templates
   |- base.jinja
   |- index.jinja
   |- bp143
      |- context.jinja
````

With the above structure you can refer to templates as follows:

````
base.jinja          -> comes from the application package
index.jinja         -> comes from the application package
bp143/index.jinja   -> comes from the blueprint
bp143/context.jinja -> comes from the application package (the app overrides
                       the template of the same name in the blueprint)
````
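For completeness, a minimal sketch of how such a blueprint might be created and registered; the names bp143 and the routes are taken from the structure above, everything else is a plain Flask pattern rather than something mandated by the question:

````
from flask import Flask, Blueprint, render_template

# template_folder is relative to the blueprint package, so files under
# templates/bp143/ resolve as 'bp143/index.jinja' and so on.
bp143 = Blueprint('bp143', __name__, template_folder='templates')

@bp143.route('/')
def index():
    return render_template('bp143/index.jinja')

app = Flask(__name__)
app.register_blueprint(bp143, url_prefix='/bp143')
````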
In what space at the V&A is the Architecture Gallery located?
Room 128
Adding triples in 4store. Here url_add is a link that contains the rdf triples that I want to store in 4store, but if I pass url_add as an argument it generates a RelativeURIError. What is the way in which I can pass url_add as an argument?

````
response = store.add_from_uri('url_add')
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.7/dist-packages/django_gstudio-0.3.dev-py2.7.egg/gstudio/testing1.py", line 152, in
    response = store.add_from_uri('url_add')
  File "/usr/local/lib/python2.7/dist-packages/django_gstudio-0.3.dev-py2.7.egg/gstudio/HTTP4Store/HTTP4Store.py", line 74, in add_from_uri
    r_obj = self.rh.GET(uri, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/django_gstudio-0.3.dev-py2.7.egg/gstudio/HTTP4Store/utils.py", line 53, in GET
    return self._request("%s" % (path), method="GET", headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/django_gstudio-0.3.dev-py2.7.egg/gstudio/HTTP4Store/utils.py", line 92, in _request
    resp, content = self.h.request(path, method, headers=headers, body=data)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1297, in request
    (scheme, authority, request_uri, defrag_uri) = urlnorm(uri)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 204, in urlnorm
    raise RelativeURIError("Only absolute URIs are allowed, uri = %s" % uri)
RelativeURIError: Only absolute URIs are allowed, uri = url_add
````
What is the value of your URL? It is throwing an exception because you are passing a relative url instead of an absolute one (you probably have something like "../directory/filename.rdf"). If your url is an HTTP url, e.g. (http://host/filename.rdf), and <strong>dereferenceable</strong>, then you can also use the LOAD directive as part of SPARQL Update. That simply means executing a SPARQL query (in the same way you execute a SPARQL query using 4store) using the following expression:

````
LOAD <http://host/filename.rdf> INTO GRAPH <http://optional-name-of-graph>
````
How to force pprint to print one list/tuple/dict element per line? How can I force pprint() to print one list/tuple/dict element per line?

````
>>> from pprint import pprint
>>> my_var = ['one', 'two', ('red', 'green'), {'state': 'Oregon', 'city': 'Portland'}]
>>> pprint(my_var)
['one', 'two', ('red', 'green'), {'city': 'Portland', 'state': 'Oregon'}]
````

I would like the output to be something like:

````
['one',
 'two',
 ('red', 'green'),
 {'city': 'Portland',
  'state': 'Oregon'}]
````
Use a width=1 argument to pprint():

````
>>> from pprint import pprint
>>> my_var = ['one', 'two', ('red', 'green'), {'state': 'Oregon', 'city': 'Portland'}]
>>> pprint(my_var, width=1)
['one',
 'two',
 ('red',
  'green'),
 {'city': 'Portland',
  'state': 'Oregon'}]
>>>
````

<a href="http://docs.python.org/2/library/pprint.html" rel="nofollow">"pprint - Data pretty printer" documentation</a>
Looping through a list in python to create multiple files. I have been messing around with lists and creating files from a list. The below works fine, but I am sure that there is a better and cleaner way of doing this. I understand the concept of a loop but cannot find a specific example which I could remodel to fit what I am doing. Please could someone point me in the right direction of looping my items list through the f.write code only the once, to generate the files that I am after.

````
items = ["one", "two", "three"]

f = open(items[0] + " hello_world.txt", "w")
f.write("This is my first line of code")
f.write("\nThis is my second line of code with " + items[0] + " the first item in my list")
f.write("\nAnd this is my last line of code")

f = open(items[1] + " hello_world.txt", "w")
f.write("This is my first line of code")
f.write("\nThis is my second line of code with " + items[1] + " the first item in my list")
f.write("\nAnd this is my last line of code")

f = open(items[2] + " hello_world.txt", "w")
f.write("This is my first line of code")
f.write("\nThis is my second line of code with " + items[2] + " the first item in my list")
f.write("\nAnd this is my last line of code")
f.close()
````
You can use a `for` loop and a <a href="http://docs.python.org/2/reference/compound_stmts.html#the-with-statement" rel="nofollow">with</a> statement, like this. The advantage of using a `with` statement is that you do not have to explicitly close the files or worry about the cases where there is an exception.

````
items = ["one", "two", "three"]
for item in items:
    with open("{} hello_world.txt".format(item), "w") as f:
        f.write("This is my first line of code")
        f.write("\nThis is my second line of code with {} the first item in my list".format(item))
        f.write("\nAnd this is my last line of code")
````
What happens in a channel transport system?
several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria
Find the powers of 2 from 2^0 to 2^99 using list comprehension.

````
def lst_comp():
    list = []
    list = (2**i for i in range(100))
    print list
    return list
````

I get this when I do this: `<generator object <genexpr> at 0x00000000034C48B8>`
You created a <a href="http://docs.python.org/2/tutorial/classes.html#generator-expressions" rel="nofollow">generator expression</a>, <strong>not</strong> a list comprehension. Use `[ ]` square brackets around the expression instead of `( )` round parentheses:

````
def lst_comp():
    lst = [2**i for i in range(100)]
    print lst
    return lst
````

A generator expression is very closely related to a list comprehension; it produces the exact same values, but lazily, as you iterate over the object. A list comprehension produces all values in one go and returns a list object containing them. You could also have done:

````
generator = (2**i for i in range(100))
lst = list(generator)
````

to consume the whole generator object in one go and produce a list object from the results.
Passing an optional argument in python. I am trying to append to a url by passing the parameters as command-line arguments. Here is how I am trying:

````
import argparse
parser = argparse.ArgumentParser(description='Arguments')
parser.add_argument('input', metavar='input', type=str)
parser.add_argument('output', metavar='text', type=str)
args = parser.parse_args()
url = 'https://example.com/?z=12&text=' + args.output + '&loc{}'
print url
````

When I execute

````
python url.py text.csv hello
````

it appends the second passed argument to the url. I want to know how to make the second argument optional, so that even without providing the second argument the url is printed, concatenating nothing to it. Here is the output I am expecting. When both arguments are given:

````
python url.py text.csv hello
````

the output should be

````
https://example.com/?z=12&text=hello&loc{}
````

When a single argument is given:

````
python url.py text.csv
````

the output should be

````
https://example.com/?z=12&text=&loc{}
````
As I see it, you have two options. Either you avoid using `argparse` altogether and go for something like:

````
import sys

args_input = sys.argv[1]
args_output = sys.argv[2] if len(sys.argv) > 2 else ''
url = 'https://example.com/?z=12&text=' + args_output + '&loc{}'
print url
````

Or you add the `-` to your optional argument and, as <a href="http://stackoverflow.com/questions/37857567/passing-optional-argument-in-python#comment63172773_37857567">Ignacio's reply</a> suggests, you set a default empty value for the `output` argument:

````
import argparse
parser = argparse.ArgumentParser(description='Arguments')
parser.add_argument('input', metavar='input', type=str)
parser.add_argument('-output', metavar='text', type=str, default='')
args = parser.parse_args()
url = 'https://example.com/?z=12&text=' + args.output + '&loc{}'
print url
````

With this second option you will have to call it like:

````
python url.py text.csv -output hello
````

or

````
python url.py text.csv
````

but it is more extensible if you want to add more arguments afterwards.
Where were the "black" areas?
large townships located away from the cities
How to plot a square wave with matplotlib. I have a list of values that alternate between 0 and 1, e.g. [0, 1, 0, 1, 0], and I want to graph them so they appear as a square wave using matplotlib for python. I have this so far:

````
input_amp = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
plt.plot(input_amp, marker='d', color='blue')
plt.title("Waveform")
plt.ylabel('Amplitude')
plt.xlabel("Time")
plt.savefig("waveform.png")
plt.show()
````

This gives me an output like <img src="http://i.imgur.com/ppyGobq.png?1" alt="this">. How do I make it so that instead of going on an angle between the points, the line stays flat? I found this <a href="http://stackoverflow.com/questions/28567520/how-do-you-generate-an-animated-square-wave-from-binary-number-for-the-respectiv">post</a> but it deals more with an animation and not just plotting the function.
The relevant bit from that post you reference is the `drawstyle`:

````
plt.plot(input_amp, marker='d', color='blue', drawstyle='steps-pre')
````
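Put together with the question's own snippet, a minimal runnable version might look like this; only the drawstyle argument is new, everything else comes from the question:

````
import matplotlib.pyplot as plt

input_amp = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
# drawstyle='steps-pre' draws horizontal segments between points,
# giving the flat-topped square-wave look instead of slanted lines.
plt.plot(input_amp, marker='d', color='blue', drawstyle='steps-pre')
plt.title("Waveform")
plt.ylabel('Amplitude')
plt.xlabel("Time")
plt.show()
````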
What does orthodoxy reject as a means of establishing Jewish identity?
patrilineal descent
Can I close the current open window (python)? I would like to know if there is a command that closes the current file. By that I mean: I opened Hello.py -> in the middle of the file, the window shuts down (not by an error). Is that possible? Thank you.
If I managed to understand the question correctly, then you can use `sys.exit()`:

````
import sys
import time

for i in range(100):
    time.sleep(0.3)
    print i
    if i == 5:
        sys.exit()
````
Pandas resample bug? Trying to downsample 8 weekly time points to 2 points, each representing the average over 4 weeks, I use resample(). I started by defining the rule using (60*60*24*7*4) seconds and saw I ended up with 3 time points, the latest one being a dummy. Checking it, I noticed that if I define the rule as 4W or 28D it is fine, but going down to 672H or smaller units (minutes, seconds, ...) the extra faked column appears. This is the testing code:

````
import numpy as np
import pandas as pd

d = np.arange(16).reshape(2, 8)
res = []
for month in range(1, 13):
    start_date = str(month) + '/1/2014'
    df = pd.DataFrame(data=d, index=['A', 'B'],
                      columns=pd.date_range(start_date, periods=8, freq='7D'))
    print(df, '\n')
    dfw = df.resample(rule='4W', how='mean', axis=1, closed='left', label='left')
    print('4 Weeks:\n', dfw, '\n')
    dfd = df.resample(rule='28D', how='mean', axis=1, closed='left', label='left')
    print('28 Days:\n', dfd, '\n')
    dfh = df.resample(rule='672H', how='mean', axis=1, closed='left', label='left')
    print('672 Hours:\n', dfh, '\n')
    dfm = df.resample(rule='40320T', how='mean', axis=1, closed='left', label='left')
    print('40320 Minutes:\n', dfm, '\n')
    dfs = df.resample(rule='2419200S', how='mean', axis=1, closed='left', label='left')
    print('2419200 Seconds:\n', dfs, '\n')
    res.append(([start_date], dfh.shape[1] == dfd.shape[1]))
    print('\n\n--------------------------\n\n')

[print(res[i]) for i in range(12)]
pass
````

which prints (I pasted here only the printout of the last iteration):

````
      2014-12-01  2014-12-08  2014-12-15  2014-12-22  2014-12-29  2015-01-05  \
A              0           1           2           3           4           5
B              8           9          10          11          12          13

      2015-01-12  2015-01-19
A              6           7
B             14          15

4 Weeks:
      2014-11-30  2014-12-28
A            1.5         5.5
B            9.5        13.5

28 Days:
      2014-12-01  2014-12-29
A            1.5         5.5
B            9.5        13.5

672 Hours:
      2014-12-01  2014-12-29  2015-01-26
A            1.5         5.5         NaN
B            9.5        13.5         NaN

40320 Minutes:
      2014-12-01  2014-12-29  2015-01-26
A            1.5         5.5         NaN
B            9.5        13.5         NaN

2419200 Seconds:
      2014-12-01  2014-12-29  2015-01-26
A            1.5         5.5         NaN
B            9.5        13.5         NaN

--------------------------

(['1/1/2014'], False)
(['2/1/2014'], True)
(['3/1/2014'], True)
(['4/1/2014'], True)
(['5/1/2014'], False)
(['6/1/2014'], False)
(['7/1/2014'], False)
(['8/1/2014'], False)
(['9/1/2014'], False)
(['10/1/2014'], False)
(['11/1/2014'], False)
(['12/1/2014'], False)
````

So there is an error for date_range starting at the beginning of 9 months, and no error for 3 months (February-April). Either I am missing something or it is a bug. Is it?
Thanks @DSM and @Andy; indeed I had pandas 0.15.1, and upgrading to the latest 0.15.2 solved it.
object is not JSON serializable. I am trying to load data from a database into an html page. Basically, when a user accesses their profile and clicks on "Purchase History", it should query the database and display all the products the user has purchased. I am trying to do this using ajax and json, but I get an error:

````
TypeError: <gluon.dal.Field object at 0x091CCD90> is not JSON serializable
````

Below is the code:

````
def prodHistoryJson():
    productName = db.sale.title
    productCost = db.sale.price
    prodShipAdd = db.sale.shipping_address
    prodShipCity = db.sale.shipping_city
    prodShipState = db.sale.shipping_state
    prodShipZipCode = db.sale.shipping_zip_code
    myproducts = {'prodName': productName, 'cost': productCost,
                  'shipAdd': prodShipAdd, 'shipCity': prodShipCity,
                  'shipState': prodShipState, 'shipZipCode': prodShipZipCode}
    import gluon.contrib.simplejson as json
    returnData = json.dumps(myproducts)
    return returnData
````

Below is the jquery:

````
$.ajax({
    url: '/suzannecollins/onlineStore/prodHistoryJson',
    data: {
        message: "Your products purchase history is listed below"
    },
    success: function(message) {
        try {
            myproducts = JSON.parse(message);
        } catch (err) {
            console.log("error");
        }
        // place returned value in the DOM
        $('#returnData').html(myproducts.title + myproducts.price +
            myproducts.shipping_address + myproduct.shipping_state +
            myproducts.shipping_city + myproducts.shipping_zip_code);
    }
});
````

What am I doing wrong? I can get this all to work if I just do it in a simpler way, where a user hits the purchase_History button and it queries the database and displays the products purchased. How do I do the same thing with the code above?
So, thanks to @Amadan, last night we finally figured out how to query the database using json and serialize the python objects to display the results:

````
def pruchaseHistoryJson():
    if auth.user:
        rows = db(db.sale.auth_id == auth.user.id).select(
            db.sale.title, db.sale.price, db.sale.shipping_address,
            db.sale.shipping_state, db.sale.shipping_city,
            db.sale.shipping_zip_code)
    else:
        redirect(URL('default', 'user/login'))
    import gluon.contrib.simplejson as json
    prodHistory = json.dumps([{'name': i.title, 'prodValue': i.price,
                               'shipAdd': i.shipping_address,
                               'shipCity': i.shipping_city,
                               'shipState': i.shipping_state,
                               'shipCode': i.shipping_zip_code}
                              for i in rows])
    return prodHistory
````

This code works fine and displays the results as expected. Now back to writing up the jquery function for this.
Who said that Albania is the most pro-American country in Europe?
Edi Rama
Finding rows in a Pandas DataFrame with the same values. I currently have a large DataFrame consisting of over 10,000 rows and 600 columns. The table is indexed on the left by identity, and each column is a position. The value of each point in the grid is either a 0 or a 1. I would like to be able to fish out and group the identities by determining which ones have identical patterns of 0's and 1's within their rows. For example:

````
print df

ID#1  0 1 0 1 0 0 1 0 1
ID#2  0 0 1 0 1 0 1 0 1
ID#3  1 0 0 0 1 0 1 1 0
ID#4  0 1 0 1 0 0 1 0 1
ID#5  1 0 0 0 1 0 1 1 0
ID#6  0 0 1 0 1 0 1 0 1

df.table['GROUP'] returns [(ID#1, ID#4), (ID#2, ID#6), (ID#3, ID#5)]
````
````
In [39]: data = """ID#1 0 1 0 1 0 0 1 0 1
ID#2 0 0 1 0 1 0 1 0 1
ID#3 1 0 0 0 1 0 1 1 0
ID#4 0 1 0 1 0 0 1 0 1
ID#5 1 0 0 0 1 0 1 1 0
ID#6 0 0 1 0 1 0 1 0 1
"""

In [40]: df = read_csv(StringIO(data), header=None, sep='\s+', index_col=0)

In [41]: df['compressed'] = df.apply(lambda x: ''.join([str(v) for v in x]), 1)

In [42]: df
Out[42]:
      1  2  3  4  5  6  7  8  9 compressed
0
ID#1  0  1  0  1  0  0  1  0  1  010100101
ID#2  0  0  1  0  1  0  1  0  1  001010101
ID#3  1  0  0  0  1  0  1  1  0  100010110
ID#4  0  1  0  1  0  0  1  0  1  010100101
ID#5  1  0  0  0  1  0  1  1  0  100010110
ID#6  0  0  1  0  1  0  1  0  1  001010101

In [43]: df.groupby('compressed').apply(lambda x: x.index.tolist())
Out[43]:
compressed
001010101    [ID#2, ID#6]
010100101    [ID#1, ID#4]
100010110    [ID#3, ID#5]
dtype: object
````

Here are 2 more reshapings you can do (do this before you add the 'compressed' column). Create a Series with the values being a tuple of the 1 positions:

````
In [45]: pd.concat([ Series([ tuple(x[x.astype(bool)].index.tolist()) ], index=[row])
                     for (row, x) in df.iterrows() ])
Out[45]:
ID#1    (2, 4, 7, 9)
ID#2    (3, 5, 7, 9)
ID#3    (1, 5, 7, 8)
ID#4    (2, 4, 7, 9)
ID#5    (1, 5, 7, 8)
ID#6    (3, 5, 7, 9)
dtype: object
````

Create a frame that has a column for each 1 position:

````
In [46]: DataFrame(dict([ (row, x[x.astype(bool)].index.tolist())
                          for (row, x) in df.iterrows() ])).T
Out[46]:
      0  1  2  3
ID#1  2  4  7  9
ID#2  3  5  7  9
ID#3  1  5  7  8
ID#4  2  4  7  9
ID#5  1  5  7  8
ID#6  3  5  7  9
````
Summation by class label without a loop in numpy. I have a matrix which represents the distances to the k-nearest neighbours of a set of points, and there is a matrix of class labels of the nearest neighbours (both N-by-k matrices). What is the best way, WITHOUT an explicit python loop (actually I want to implement this in theano, where those loops are not going to work), to build an (N-by-#classes) matrix whose (i, j) element is the sum of distances from the i-th point to its k-NN points with class label 'j'? Example:

````
# N = 2
# k = 5
# number of classes = 3
K_val = np.array([[1, 2, 3, 4, 6], [2, 4, 5, 5, 7]])
l_val = np.array([[0, 1, 2, 0, 1], [2, 0, 1, 2, 0]])
"""
result ->
[[5, 8, 3],
 [11, 5, 7]]
"""
````
You can compute this with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow">numpy.bincount</a>. It has a `weights` parameter which allows you to count the items in `l_val` but weight the items according to `K_val`. The only little snag is that each row of `K_val` and `l_val` needs to be treated independently. So add a shift to `l_val` so each row has values which are distinct from every other row.

<hr>

````
import numpy as np

num_classes = 3
K_val = np.array([[1, 2, 3, 4, 6], [2, 4, 5, 5, 7]])
l_val = np.array([[0, 1, 2, 0, 1], [2, 0, 1, 2, 0]])

def label_distance(l_val, K_val):
    nrows, ncols = l_val.shape
    shift = (np.arange(nrows)*num_classes)[:, np.newaxis]
    result = (np.bincount((l_val+shift).ravel(), weights=K_val.ravel(),
                          minlength=num_classes*nrows)
              .reshape(nrows, num_classes))
    return result

print(label_distance(l_val, K_val))
````

yields

````
[[  5.   8.   3.]
 [ 11.   5.   7.]]
````

<hr>

Although senderle's method is really elegant, using bincount is faster:

````
def using_extradim(l_val, K_val):
    return (K_val[:, :, None] * (l_val[:, :, None] == numpy.arange(3)[None, None, :])).sum(axis=1)

In [34]: K2 = np.tile(K_val, (1000, 1))

In [35]: L2 = np.tile(l_val, (1000, 1))

In [36]: %timeit using_extradim(L2, K2)
1000 loops, best of 3: 584 µs per loop

In [40]: %timeit label_distance(L2, K2)
10000 loops, best of 3: 67.7 µs per loop
````
UnicodeDecodeError in Beautifulsoup. I am trying to parse a <a href="http://www.google.com/finance?q=NYSE%3AF&ei=LvflU_itN8zbkgW0i4GABQ" rel="nofollow">web page</a> using beautifulsoup, and used the following code:

````
def parse():
    gHeader = {'User-Agent': 'Mozilla/5.0'}
    gNewsLinkUrl = "http://www.google.com/finance?q=NYSE%3AF&ei=LvflU_itN8zbkgW0i4GABQ"
    lPrevLinkReq = urllib2.Request(gNewsLinkUrl, headers=gHeader)
    lPrevPage = urllib2.urlopen(lPrevLinkReq)
    lPrevPageSoup = BeautifulSoup(lPrevPage)
````

When I execute the above function `parse()` I get the following error:

````
Traceback (most recent call last):
  File "google_finance_news.py", line 42, in <module>
    FetchNewsDataFromWeb()
  File "google_finance_news.py", line 33, in FetchNewsDataFromWeb
    lPrevPage = BeautifulSoup(lPrevPage)
  File "C:\C42\Finance\bs4\__init__.py", line 172, in __init__
    self._feed()
  File "C:\C42\Finance\bs4\__init__.py", line 185, in _feed
    self.builder.feed(self.markup)
  File "C:\C42\Finance\bs4\builder\_lxml.py", line 195, in feed
    self.parser.close()
  File "parser.pxi", line 1283, in lxml.etree._FeedParser.close (src\lxml\lxml.etree.c:98846)
  File "parser.pxi", line 1313, in lxml.etree._FeedParser.close (src\lxml\lxml.etree.c:98695)
  File "parsertarget.pxi", line 142, in lxml.etree._TargetParserContext._handleParseResult (src\lxml\lxml.etree.c:112853)
  File "parsertarget.pxi", line 130, in lxml.etree._TargetParserContext._handleParseResult (src\lxml\lxml.etree.c:112677)
  File "lxml.etree.pyx", line 327, in lxml.etree._ExceptionContext._raise_if_stored (src\lxml\lxml.etree.c:10196)
  File "saxparser.pxi", line 499, in lxml.etree._handleSaxData (src\lxml\lxml.etree.c:107747)
UnicodeDecodeError: 'utf8' codec cannot decode byte 0xc2 in position 719: invalid continuation byte
````

Please help me resolve the issue.
You will need to set the default encoding in your function and use your site's actual encoding. So in your function you need to set:

````
sys.setdefaultencoding('cp1251')
gNewsLinkUrl = gNewsLinkUrl.encode('cp1251')
````

where cp1251 is your website's encoding; all standard encodings you can find <a href="https://docs.python.org/2.4/lib/standard-encodings.html" rel="nofollow">here</a>.
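As an aside, another approach (a sketch, not part of the original answer) is to skip the default-encoding change and instead tell BeautifulSoup the source encoding explicitly. The `from_encoding` constructor argument does exist in bs4, but whether cp1251 is the right charset for this particular page is an assumption; check the page's Content-Type header first:

````
# Read the raw bytes and let bs4 decode them with a known charset.
lPrevPageSoup = BeautifulSoup(lPrevPage.read(), from_encoding='cp1251')
````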
Do I have a bug in my grammar or the parser-generation tool? The following is an EBNF-format (mostly - the actual syntax is documented <a href="https://pypi.python.org/pypi/grako/3.4.3#expressions" rel="nofollow">here</a>) grammar that I am attempting to generate a parser for:

````
expr = lambda_expr_list $;
lambda_expr_list = [ lambda_expr_list "," ] lambda_expr;
lambda_expr = conditional_expr [ ">" lambda_expr ];
conditional_expr = boolean_or_expr [ "if" conditional_expr "else" conditional_expr ];
boolean_or_expr = [ boolean_or_expr "or" ] boolean_xor_expr;
boolean_xor_expr = [ boolean_xor_expr "xor" ] boolean_and_expr;
boolean_and_expr = [ boolean_and_expr "and" ] boolean_not_expr;
boolean_not_expr = [ "not" ] relation;
relation = [ relation ( "==" | "!=" | ">" | "<=" | "<" | ">=" | [ "not" ] "in" | "is" [ "not" ] ) ] bitwise_or_expr;
bitwise_or_expr = [ bitwise_or_expr "|" ] bitwise_xor_expr;
bitwise_xor_expr = [ bitwise_xor_expr "^" ] bitwise_and_expr;
bitwise_and_expr = [ bitwise_and_expr "&" ] bitwise_shift_expr;
bitwise_shift_expr = [ bitwise_shift_expr ( "<<" | ">>" ) ] subtraction_expr;
subtraction_expr = [ subtraction_expr "-" ] addition_expr;
addition_expr = [ addition_expr "+" ] division_expr;
division_expr = [ division_expr ( "/" | "\\" ) ] multiplication_expr;
multiplication_expr = [ multiplication_expr ( "*" | "%" ) ] negative_expr;
negative_expr = [ "-" ] positive_expr;
positive_expr = [ "+" ] bitwise_not_expr;
bitwise_not_expr = [ "~" ] power_expr;
power_expr = slice_expr [ "**" power_expr ];
slice_expr = member_access_expr { subscript };
subscript = "[" slice_defn_list "]";
slice_defn_list = [ slice_defn_list "," ] slice_defn;
slice_defn = lambda_expr | [ lambda_expr ] ":" [ [ lambda_expr ] ":" [ lambda_expr ] ];
member_access_expr = [ member_access_expr "." ] function_call_expr;
function_call_expr = atom { parameter_list };
parameter_list = "(" [ lambda_expr_list ] ")";
atom = identifier | scalar_literal | nary_literal;
identifier = /[_A-Za-z][_A-Za-z0-9]*/;
scalar_literal = float_literal | integer_literal | boolean_literal;
float_literal = point_float_literal | exponent_float_literal;
point_float_literal = /[0-9]+?\.[0-9]+|[0-9]+\./;
exponent_float_literal = /([0-9]+|[0-9]+?\.[0-9]+|[0-9]+\.)[eE][+-]?[0-9]+/;
integer_literal = dec_integer_literal | oct_integer_literal | hex_integer_literal | bin_integer_literal;
dec_integer_literal = /[1-9][0-9]*|0+/;
oct_integer_literal = /0[oO][0-7]+/;
hex_integer_literal = /0[xX][0-9a-fA-F]+/;
bin_integer_literal = /0[bB][01]+/;
boolean_literal = "true" | "false";
nary_literal = tuple_literal | list_literal | dict_literal | string_literal | byte_string_literal;
tuple_literal = "(" [ lambda_expr_list ] ")";
list_literal = "[" [ ( lambda_expr_list | list_comprehension ) ] "]";
list_comprehension = lambda_expr "for" lambda_expr_list "in" lambda_expr [ "if" lambda_expr ];
dict_literal = "{" [ ( dict_element_list | dict_comprehension ) ] "}";
dict_element_list = [ dict_element_list "," ] dict_element;
dict_element = lambda_expr ":" lambda_expr;
dict_comprehension = dict_element "for" lambda_expr_list "in" lambda_expr [ "if" lambda_expr ];
string_literal = /[uU]?[rR]?(\u0027(\\.|[^\\\r\n\u0027])*\u0027|\u0022(\\.|[^\\\r\n\u0022])*\u0022)/;
byte_string_literal = /[bB][rR]?(\u0027(\\[\u0000-\u007F]|[\u0000-\u0009\u000B-\u000C\u000E-\u0026\u0028-\u005B\u005D-\u007F])*\u0027|\u0022(\\[\u0000-\u007F]|[\u0000-\u0009\u000B-\u000C\u000E-\u0021\u0023-\u005B\u005D-\u007F])*\u0022)/;
````

The tool I am using to generate the parser is
<a href="https://pypi python org/pypi/grako/3 4 3" rel="nofollow">Grako</a> which generates a modified Packrat parser that claims to support both direct and indirect left recursion When I run the generated parser on this string: ````input filter(e > e[0] in ['t' 'T']) map(e > (e len() str() e)) map(e > '(Line length: ' e[0] ') ' e[1]) list() ```` I get the following error: ````grako exceptions FailedParse: (1:13) Expecting end of text : input filter(e > e[0] in ['t' 'T']) map(e > (e len() str() e)) map(e > '(Line length: ' e[0] ') ' e[1]) list() ^ expr ```` Debugging has shown that the parser seems to get to the end of the first `e[0]` then never backtracks to/reaches a point where it will try to match the `in` token Is there some issue with my grammar such that a left recursion-supporting Packrat parser would fail on it? Or should I file an issue on the Grako issue tracker?
It may be a bug in the grammar, but the error message is not telling you where it actually happens. What I always do after finishing a grammar is to embed <em>cut</em> (`~`) elements throughout it (after keywords like <em>if</em>, operators, opening parentheses, everywhere it seems reasonable). The <em>cut</em> element makes the Grako-generated parser commit to the option taken in the closest choice in the parse tree. That way, instead of having the parser fail at the start on an <em>if</em>, it will report failure at the expression it actually could not parse. Some bugs in grammars are difficult to spot, and for that I just go through the parse trace to find out how far into the input the parser went, and why it decided it could not go further. I will not use left-recursion on a PEG parser for professional work, though it may be fine for simpler academic work:

````
boolean_or_expr = boolean_xor_expr {"or" boolean_xor_expr};
````

The associativity can then be handled in a semantic action. Also see the discussion under <a href="https://bitbucket.org/apalala/grako/issue/49/bug-in-left-recursion" rel="nofollow">issue 49</a> against Grako. It says that the algorithm used to support left recursion will not always produce the expected associativity in the resulting AST.
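To make the "semantic action" remark concrete, a rough sketch of a left-fold over the repetition is shown below; Grako lets you pass a semantics object whose methods are named after grammar rules, but the exact AST shape a method receives depends on the grammar, so the indexing here is an assumption:

````
class ExprSemantics(object):
    def boolean_or_expr(self, ast):
        # With `boolean_xor_expr {"or" boolean_xor_expr}`, assume the ast
        # arrives as [first, [second, third, ...]] and fold it left to
        # right, so `a or b or c` nests as (('or', ('or', a, b), c)).
        if not isinstance(ast, list):
            return ast
        node = ast[0]
        for operand in ast[1]:
            node = ('or', node, operand)
        return node
````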
Managing dictionary memory size in python. I have a program which imports a text file through standard input and aggregates the lines into a dictionary. However, the input file is very large (1Tb order) and I will not have enough space to store the whole dictionary in memory (running on a 64Gb ram machine). Currently I have got a very simple clause which outputs the dictionary once it has reached a certain length (in this case 100) and clears the memory. The output can then be aggregated at a later point. So I want to output the dictionary once memory is full. What is the best way of managing this? Is there a function which gives me the current memory usage? Is it costly to keep on checking? Am I using the right tactic?

````
import sys

X_dic = dict()

# Used to print the dictionary in the required format
def print_dic(dic):
    for key, value in dic.iteritems():
        print "{0}\t{1}".format(key, value)

for line in sys.stdin:
    value, key = line.strip().split(" ")
    if (not key in X_dic):
        X_dic[key] = []
    X_dic[key].append(value)
    # Limit size of dic
    if (len(X_dic) == 100):
        print_dic(X_dic)  # Print and clear dictionary
        X_dic = dict()

# Now output
print_dic(X_dic)
````
The module `resource` provides some information on how many resources (memory, etc.) you are using. See <a href="http://pythonforbiologists.com/index.php/measuring-memory-usage-in-python/" rel="nofollow">here</a> for a nice little usage example. On a Linux system (I do not know where you are) you can watch the contents of the file `/proc/meminfo`. As part of the <em>proc</em> file system it is updated automatically. But I object to the whole strategy of monitoring the memory and using it up as much as possible, actually. I would rather propose to dump the dictionary regularly (after 1M entries have been added or such). It probably will speed up your program to keep the dict smaller than possible; also it presumably will have advantages for later processing if all dumps are of similar size. If you dump a huge dict which fit into your whole memory when nothing else was using memory, then you later will have trouble re-reading that dict if something else is currently using some of your memory. So then you would have to create a situation in which nothing else is using memory (e.g. reboot or similar). Not very convenient.
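For reference, a minimal sketch of the `resource` approach mentioned above, combined with the fixed-count dumping the answer recommends; on Linux, ru_maxrss is reported in kilobytes, and `print_dic` is the function from the question:

````
import resource

def dump_if_needed(X_dic, max_entries=1000000):
    # Peak resident set size of this process so far, for monitoring only.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("peak memory: %d kB, entries: %d" % (peak_kb, len(X_dic)))
    if len(X_dic) >= max_entries:
        print_dic(X_dic)   # dump the current batch
        return dict()      # start over with a fresh dictionary
    return X_dic
````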
When did Seibu, located in Jakarta, re-close?
null
Accessing elements in simplejson.load() output. I have a dictionary(?) of data returned from the simplejson.load() function. It looks like this:

````
{'status': 'OK', 'results': [{'geometry': {'location_type': 'APPROXIMATE', 'bounds': {'northeast': {'lat': 53.86121, 'lng': -2.045072}, 'southwest': {'lat': 53.80570600000001, 'lng': -2.162588}}, 'viewport': {'northeast': {'lat': 53.8697753, 'lng': -2.0725853}, 'southwest': {'lat': 53.81711019999999, 'lng': -2.2006447}}, 'location': {'lat': 53.84345099999999, 'lng': -2.136615}}, 'address_components': [{'long_name': 'Trawden', 'types': ['sublocality', 'political'], 'short_name': 'Trawden'}, {'long_name': 'Colne', 'types': ['locality', 'political'], 'short_name': 'Colne'}, {'long_name': 'Lancashire', 'types': ['administrative_area_level_2', 'political'], 'short_name': 'Lancs'}, {'long_name': 'United Kingdom', 'types': ['country', 'political'], 'short_name': 'GB'}], 'formatted_address': 'Trawden, Colne, Lancashire, UK', 'types': ['sublocality', 'political']}]}
````

How do I get at, e.g., results->geometry->location->lat? Is this structure a regular python dictionary? EDIT: please could someone also explain the simplejson.dumps() function; I do not find the docs very enlightening. Thanks.

Edit by non-OP: here is the JSON pretty-printed:

````
{
  "status": "OK",
  "results": [
    {
      "geometry": {
        "location": {
          "lat": 53.843450999999988,
          "lng": -2.1366149999999999
        },
        "location_type": "APPROXIMATE",
        "viewport": {
          "northeast": {
            "lat": 53.869775300000001,
            "lng": -2.0725853000000001
          },
          "southwest": {
            "lat": 53.817110199999988,
            "lng": -2.2006446999999998
          }
        },
        "bounds": {
          "northeast": {
            "lat": 53.86121,
            "lng": -2.0450719999999998
          },
          "southwest": {
            "lat": 53.805706000000008,
            "lng": -2.162588
          }
        }
      },
      "address_components": [
        {
          "long_name": "Trawden",
          "short_name": "Trawden",
          "types": ["sublocality", "political"]
        },
        {
          "long_name": "Colne",
          "short_name": "Colne",
          "types": ["locality", "political"]
        },
        {
          "long_name": "Lancashire",
          "short_name": "Lancs",
          "types": ["administrative_area_level_2", "political"]
        },
        {
          "long_name": "United Kingdom",
          "short_name": "GB",
          "types": ["country", "political"]
        }
      ],
      "formatted_address": "Trawden, Colne, Lancashire, UK",
      "types": ["sublocality", "political"]
    }
  ]
}
````
Yes, it is. If you store it in a variable named `d` then you would use

````
d['results'][0]['geometry']['location']
````

et cetera. Notice the `[0]` there, due to the fact that the dict with key `'geometry'` is inside a list. `simplejson.load()` maps JSON objects to Python `dict`s and JSON lists to `list`s. Very straightforward; do not overthink it.

<hr>

`simplejson.dumps()` simply does the opposite of `simplejson.loads()` - it takes any standard Python object and dumps it to a string which is a JSON representation of that object. For instance:

````
>>> q = {}
>>> q['foo'] = 'bar'
>>> q[1] = 'baz'
>>> simplejson.dumps(q)
'{"1": "baz", "foo": "bar"}'
````
How do I create a python list with a negative index? I am new to python and need to create a list with a negative index, but have not been successful so far. I am using this code:

````
a = []
for i in xrange(-20, 0, -1):
    a[i] = -(i)
    log.info('a[{i}]={v}'.format(i=i, v=a[i]))
else:
    log.info('end')
````

and getting the log output as

````
end
````

Incidentally, I am using a site called quantopian, so the log.info is from their infrastructure and just prints the output into a web console. What am I doing wrong? Thanks in advance for your help.
The only thing that strikes me is that

````
for i in xrange(-20, 0, -1):
````

seems very wrong, since the third argument is the step size: you will go -1 per step, and starting at -20 means the next number would be -21, so the range is empty. And the following raises an error, because you cannot assign to an index that does not exist yet:

````
a = []
a[0] = 5
````

You should do `a = [None]*20`.
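Building on that, a minimal sketch of the question's loop, fixed so the negative indices land in a pre-sized list; note the step must be the default +1 to count from -20 up to -1:

````
a = [None] * 20
for i in xrange(-20, 0):
    a[i] = -i   # a[-20] = 20, a[-19] = 19, ..., a[-1] = 1
print(a)        # [20, 19, 18, ..., 1]
````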
Easier way to do column division in pandas. I have 16 columns. I would like to divide each `count` column by its respective `dc(uid)` column:

````
| count: interaction_eis | count: interaction_eis_reply | count: interaction_match | count: interaction_single_message_ | count: interaction_single_message_1 | count: interaction_yes | count: revenue_sale | dc(uid): interaction_eis | dc(uid): interaction_eis_reply | dc(uid): interaction_match | dc(uid): interaction_single_message_ | dc(uid): interaction_single_message_1 | dc(uid): interaction_yes | dc(uid): revenue_sale |
````

I know that I can do this:

````
pre_purch_m['interaction_eis_rate'] = pre_purch_m['count: interaction_eis'] / pre_purch_m['dc(uid): interaction_eis']
pre_purch_m['interaction_eis_reply_rate'] = pre_purch_m['count: interaction_eis_reply'] / pre_purch_m['dc(uid): interaction_eis_reply']
````

But it seems redundant and laborious to do this 8 times. Is there a pandas function or paradigm to accomplish something like this in a more efficient manner?
Let us assume your columns are consistent. Here is one approach. Get the columns from dataframe `df`:

````
cols = df.columns
````

Get the unique columns by stripping away `count:` and `dc(uid):` and taking the unique list:

````
uniq_cols = list(set([x.split(': ')[1] for x in cols]))
````

Now loop through, creating the new columns:

````
for col in uniq_cols:
    df[col + '_rate'] = df['count: ' + col] / df['dc(uid): ' + col]
````

And it would have been much easier if the dataframe had been populated initially by storing these `uniq_cols`.
PySpark broadcast variables from local functions. I am attempting to create broadcast variables from within Python methods (trying to abstract some utility methods I am creating that rely on distributed operations). However, I cannot seem to access the broadcast variables from within the Spark workers. Let us say I have this setup:

````
def main():
    sc = SparkContext()
    SomeMethod(sc)

def SomeMethod(sc):
    someValue = rand()
    V = sc.broadcast(someValue)
    A = sc.parallelize().map(worker)

def worker(element):
    element *= V.value  ### NameError: global name 'V' is not defined ###
````

However, if I instead eliminate the `SomeMethod()` middleman, it works fine:

````
def main():
    sc = SparkContext()
    someValue = rand()
    V = sc.broadcast(someValue)
    A = sc.parallelize().map(worker)

def worker(element):
    element *= V.value  # works just fine
````

I would rather not have to put all my Spark logic in the main method if I can. Is there any way to broadcast variables from within local functions and have them be globally visible to the Spark workers? Alternatively, what would be a good design pattern for this kind of situation, e.g. I want to write a method specifically for Spark which is self-contained and performs a specific function I would like to re-use?
I am not sure I completely understood the question, but if you need the `V` object inside the worker function then you definitely should pass it as a parameter; otherwise the method is not really self-contained:

````
def worker(V, element):
    element *= V.value
````

Now, in order to use it in map functions you need to use a partial, so that map only sees a 1-parameter function:

````
from functools import partial

def SomeMethod(sc):
    someValue = rand()
    V = sc.broadcast(someValue)
    A = sc.parallelize().map(partial(worker, V=V))
````
NaNs in pandas DataFrame not printing to Excel. I have a `pandas.DataFrame` which contains `numpy.nan` floats. When using the Excel writer, however, the fields where there should be `numpy.nan` floats are blank. I would have expected at least a string representation instead of nothing. Any thoughts on why this might be? Writer code is as follows:

````
writer = pandas.ExcelWriter('output.xls')
frame.to_excel(writer, 'tab name')
writer.save()
````

Where `frame` looks something like this (note the NaN on 2013-01-01):

````
            Series ID  Risk Bucket   Contract  PX Last  Contract Value (Local)  Currency X  Contract Value (USD)    Currency
2013-01-01  Future_ES           EQ  ES1 Index      NaN                     NaN           1                   NaN  USD Curncy
2013-01-02  Future_ES           EQ  ES1 Index  1447.16                 72362.5           1               72362.5  USD Curncy
2013-01-03  Future_ES           EQ  ES1 Index  1443.68                 72187.5           1               72187.5  USD Curncy
2013-01-04  Future_ES           EQ  ES1 Index  1447.90                 72400.0           1               72400.0  USD Curncy
````

But the Excel file has blanks (see attached image). <img src="http://i.stack.imgur.com/Bg0bK.png" alt="enter image description here">
From the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="nofollow">documentation</a>, you should set the option `na_rep` in `to_excel` with a string of your liking, e.g.:

````
frame.to_excel(writer, 'tab name', na_rep='NA')
````
Abstract functions in python without using lambda. I just have a question: how to use abstract functions without lambda? Say I have two lists:

````
a = [1, 2, 3, 4, 5]
b = [2, 4, 6]
````

If I want to print all the elements that appear in both a and b, with lambda:

````
def f():
    print reduce(list.__add__, map(lambda x: filter(lambda y: x == y, b), a))
````

How do I do it without lambda? I mean, I would just be using helper functions instead of lambda.
How do you get the intersection of two sets? If you use the `set` data type, you can just use `&` for intersection:

````
print list(set(a) & set(b))
````
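If the order of `a` matters (sets do not preserve it), an equivalent list comprehension, again without lambda, would be:

````
b_set = set(b)                       # O(1) membership tests
print [x for x in a if x in b_set]   # [2, 4]
````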
What kind of glass are containers made from?
soda-lime glass
Iterating over a list in parallel with Cython. How does one iterate in parallel over a (Python) list in Cython? Consider the following simple function:

````
def sumList():
    cdef int n = 1000
    cdef int sum = 0

    ls = [i for i in range(n)]

    cdef Py_ssize_t i
    for i in prange(n, nogil=True):
        sum += ls[i]

    return sum
````

This gives a lot of compiler errors, because a parallel section without the GIL apparently cannot work with any Python object:

````
Error compiling Cython file:
------------------------------------------------------------
...
    ls = [i for i in range(n)]

    cdef Py_ssize_t i
    for i in prange(n, nogil=True):
        sum += ls[i]
            ^
------------------------------------------------------------

src/parallel.pyx:42:6: Coercion from Python not allowed without the GIL
src/parallel.pyx:42:6: Operation not allowed without gil
src/parallel.pyx:42:6: Converting to Python object not allowed without gil
src/parallel.pyx:42:11: Indexing Python object not allowed without gil
````
I am not aware of any way to do this. A list is a Python object, so using its `__getitem__` method requires the GIL. If you are able to use a NumPy array in this case, it will work. For example, if you wanted to iterate over an array `A` of double precision floating point values you could do something like this:

````
cimport cython
from numpy cimport ndarray as ar
from cython.parallel import prange

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef cysumpar(ar[double] A):
    cdef double tot = 0
    cdef int i, n = A.size
    for i in prange(n, nogil=True):
        tot += A[i]
    return tot
````

On my machine, for this particular case, prange does not make it any faster than a normal loop, but it could work better in other cases. For more on how to use prange, see the documentation at <a href="http://docs.cython.org/src/userguide/parallelism.html" rel="nofollow">http://docs.cython.org/src/userguide/parallelism.html</a>. Whether or not you can use an array depends on how much you are changing the size of the array. If you need a lot of flexibility with the size, the array will not work. You could also try interfacing with the `vector` class in C++. I have never done that myself, but there is a brief description of how to do that here: <a href="http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#nested-class-declarations" rel="nofollow">http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#nested-class-declarations</a>
Session parse from django template. I have got this in my django template:

````
{{ request.session[0] }}
````

And I got this error:

````
Could not parse the remainder: '[0]' from 'request.session[0]'
````

When I used `{{ request.session }}` in the template it also showed the object hash, so I guess the data passing is ok, and since I can print session[0] without any trouble, why would it possibly not work in the template?
You can access array elements with `.` in templates:

````
{{ request.session.0 }}
````

From the <a href="http://www.djangobook.com/en/2.0/chapter04.html" rel="nofollow">wiki</a>:

<blockquote>Dot lookups can be summarized like this: when the template system encounters a dot in a variable name, it tries the following lookups, in this order:</blockquote>

````
Dictionary lookup (e.g. foo["bar"])
Attribute lookup (e.g. foo.bar)
Method call (e.g. foo.bar())
List-index lookup (e.g. foo[2])
````
How to use QueryXML with SUDS and the Autotask API. I have been trying to use the query() method of the Autotask API, which uses QueryXML. In order to do this, I understand I have to use the ATWSResponse type to receive results. This is my code:

````
class ConnectATWS():
    def __init__(self):
        # Connect to server with the credentials
        app_config = Init()
        self.username = app_config.data["Username"]
        self.password = app_config.data["Password"]
        self.login_id = app_config.data["LoginID"]
        self.url = app_config.data["AutotaskUpdateTicketEstimatedHours_net_autotask_webservices5_ATWS"]

        strCurrentID = "0"
        strCriteria = "<condition><field>Status<expression op=""NotEqual"">5</expression></field></condition>"
        strQuery = "<queryxml><entity>Ticket</entity><query>" \
            "<condition><field>id<expression op=""greaterthan"">" + strCurrentID + "</expression></field></condition>" + strCriteria + \
            "<condition><field>EstimatedHours<expression op=""isnull""></expression></field></condition>" \
            "</query></queryxml>"

        client = Client(self.url + "?WSDL", username=self.login_id, password=self.password)
        response = client.service.query(strQuery)
````

Trying this returns the following error:

````
No handlers could be found for logger "suds.client"
Traceback (most recent call last):
  File "/Users/AAAA/Documents/Aptana/AutotaskUpdateTicketEstimatedHours/Main.py", line 46, in <module>
    handler = ConnectATWS()
  File "/Users/AAAA/Documents/Aptana/AutotaskUpdateTicketEstimatedHours/Main.py", line 40, in __init__
    response = client.service.query(strQuery)
  File "/Library/Python/2.7/site-packages/suds/client.py", line 542, in __call__
    return client.invoke(args, kwargs)
  File "/Library/Python/2.7/site-packages/suds/client.py", line 602, in invoke
    result = self.send(soapenv)
  File "/Library/Python/2.7/site-packages/suds/client.py", line 649, in send
    result = self.failed(binding, e)
  File "/Library/Python/2.7/site-packages/suds/client.py", line 708, in failed
    raise Exception((status, reason))
Exception: (307, u'Temporary Redirect')
````

I know I am not properly calling the method. How can I call the Autotask API with the QueryXML and ATWSResponse type?
For reference, this is the Autotask API documentation: <a href="https://support.netserve365.com/help/Content/Userguides/T_WebServicesAPIv1_5.pdf" rel="nofollow">https://support.netserve365.com/help/Content/Userguides/T_WebServicesAPIv1_5.pdf</a>

UPDATE: Using Bodsda's suggestion, my complete code looks like this:

````
import os, sys
import xml.etree.ElementTree as ET
from suds.client import Client
from suds.sax.element import Element

class Init():
    def __init__(self):
        # Search the app config file for all data to be used
        script_dir = os.path.dirname(__file__)
        file_path = "app.config"
        abs_file_path = os.path.join(script_dir, file_path)
        tree = ET.parse(abs_file_path)
        root = tree.getroot()
        sites = root.iter('AutotaskUpdateTicketEstimatedHours.My.MySettings')
        self.data = {}
        for site in sites:
            apps = site.findall('setting')
            for app in apps:
                self.data[app.get('name')] = app.find('value').text

class ConnectATWS():
    def __init__(self):
        # Connect to server with the credentials
        app_config = Init()
        self.username = app_config.data["Username"]
        self.password = app_config.data["Password"]
        self.login_id = app_config.data["LoginID"]
        self.url = app_config.data["AutotaskUpdateTicketEstimatedHours_net_autotask_webservices5_ATWS"]

        strQuery = """
        <queryxml>
            <entity>Ticket</entity>
            <query>
                <condition>
                    <field>Id
                        <expression op="GreaterThan">0</expression>
                    </field>
                </condition>
                <condition>
                    <field>Status
                        <expression op="NotEqual">5</expression>
                    </field>
                </condition>
                <condition>
                    <field>EstimatedHours
                        <expression op="IsNull"></expression>
                    </field>
                </condition>
            </query>
        </queryxml>"""

        client = Client(self.url + "?WSDL", username=self.login_id, password=self.password)
        #obj = client.factory.create('ATWSResponse')
        response = client.service.query(strQuery)
        if response.ReturnCode != 1:
            print "Error code: %s" % response.ReturnCode
            print "Error response: %s" % response.Errors
            sys.exit(1)
        else:
            os.system("clear")
            print "Query successful."
            print "============================="
            print response.EntityResults

if __name__ == '__main__':
    handler = ConnectATWS()
````
Did you paste all of your code? I do not see how it could work; for a start, you never define Init(), so it should error when you try to instantiate your class. I also do not see where you add information to the app_config.data dictionary. Anyway, here is some sample code that I have been using today:

````
#! /usr/bin/env python

import sys
import os
from suds.client import Client

at_username = "foo@bar.com"
at_password = "foobar"
at_url = "https://webservices4.autotask.net/atservices/1.5/atws.wsdl"

def main():
    # Instantiate a suds.client.Client instance and connect to the webservices URL
    client = Client(at_url, username=at_username, password=at_password)
    # Status value '5' == Complete
    q = """
    <queryxml>
        <entity>Ticket</entity>
        <query>
            <condition>
                <field>Id
                    <expression op="GreaterThan">0</expression>
                </field>
            </condition>
            <condition>
                <field>Status
                    <expression op="NotEqual">5</expression>
                </field>
            </condition>
            <condition>
                <field>EstimatedHours
                    <expression op="IsNull"></expression>
                </field>
            </condition>
        </query>
    </queryxml>"""
    response = client.service.query(q)
    if response.ReturnCode != 1:
        print "Error code: %s" % response.ReturnCode
        print "Error response: %s" % response.Errors
        sys.exit(1)
    else:
        os.system("clear")
        print "Query successful."
        print "============================="
        print response.EntityResults

main()
````
When did Washington University secure its charter?
1853
Can I save a numpy array as a 16-bit image using "normal" (Enthought) python? Is there any way to save a numpy array as a 16-bit image (tif, png) using any of the commonly available python packages? <a href="http://stackoverflow.com/questions/24240039/save-numpy-array-as-image-with-high-precision-16-bits-with-scikit-image">This</a> is the only way that I could get to work in the past, but I needed to install the FreeImage package, which is a little annoying. This seems like a pretty basic task, so I would expect that it should be covered by scipy, but scipy.misc.imsave only does 8-bits. Any ideas?
One alternative is to use <a href="https://pypi.python.org/pypi/pypng" rel="nofollow">pypng</a>. You will still have to install another package, but it is pure Python, so that should be easy. (There is actually a Cython file in the pypng source, but its use is optional.) Here is an example of using pypng to write numpy arrays to PNG:

````
import png
import numpy as np

# The following import is just for creating an interesting array
# of data. It is not necessary for writing a PNG file with PyPNG.
from scipy.ndimage import gaussian_filter

# Make an image in a numpy array for this demonstration.
nrows = 240
ncols = 320
np.random.seed(12345)
x = np.random.randn(nrows, ncols, 3)

# y is our floating point demonstration data.
y = gaussian_filter(x, (16, 16, 0))

# Convert y to 16 bit unsigned integers.
z = (65535*((y - y.max())/y.ptp())).astype(np.uint16)

# Use pypng to write z as a color PNG.
with open('foo_color.png', 'wb') as f:
    writer = png.Writer(width=z.shape[1], height=z.shape[0], bitdepth=16)
    # Convert z to the Python list of lists expected by
    # the png writer.
    z2list = z.reshape(-1, z.shape[1]*z.shape[2]).tolist()
    writer.write(f, z2list)

# Here is a grayscale example.
zgray = z[:, :, 0]

# Use pypng to write zgray as a grayscale PNG.
with open('foo_gray.png', 'wb') as f:
    writer = png.Writer(width=z.shape[1], height=z.shape[0], bitdepth=16, greyscale=True)
    zgray2list = zgray.tolist()
    writer.write(f, zgray2list)
````

Here is the color output: <img src="http://i.stack.imgur.com/x4KCS.png" alt="foo_color.png"> and here is the grayscale output: <img src="http://i.stack.imgur.com/BENdo.png" alt="foo_gray.png">

<hr>

<em>Update</em>: I recently created a github repository for a module called <a href="https://github.com/WarrenWeckesser/numpngw" rel="nofollow">`numpngw`</a> that provides a function for writing a numpy array to a PNG file. The repository has a `setup.py` file for installing it as a package, but the essential code is in a single file, `numpngw.py`, that could be copied to any convenient location. The only dependency of `numpngw` is numpy. Here is a script that generates the same 16 bit images as those shown above:

````
import numpy as np
import numpngw

# The following import is just for creating an interesting array
# of data. It is not necessary for writing a PNG file.
from scipy.ndimage import gaussian_filter

# Make an image in a numpy array for this demonstration.
nrows = 240
ncols = 320
np.random.seed(12345)
x = np.random.randn(nrows, ncols, 3)

# y is our floating point demonstration data.
y = gaussian_filter(x, (16, 16, 0))

# Convert y to 16 bit unsigned integers.
z = (65535*((y - y.max())/y.ptp())).astype(np.uint16)

# Use numpngw to write z as a color PNG.
numpngw.write_png('foo_color.png', z)

# Here is a grayscale example.
zgray = z[:, :, 0]

# Use numpngw to write zgray as a grayscale PNG.
numpngw.write_png('foo_gray.png', zgray)
````
Does Python Timer have some limitation for supported longest time?
````import threading
import os

def shutdown():
    os.system("shutdown -s")

# user setting zone!!!
hour = 0
minute = 20
sec = 0
# user setting zone!!!

total_sec = hour*3600.0 + minute*60.0 + sec - 60.0
if total_sec < 0:
    total_sec = 0

print("The computer will be Shut Down in (%d hour, %d minute, %d second).\n" % (hour, minute, sec))

if total_sec >= 120:
    temp_sec = total_sec - 120
    threading.Timer(temp_sec, lambda: print("Last 3 minutes before shutting down the computer!!\n")).start()
else:
    print("Less than 3 minutes before shutting down the computer!!\n")

threading.Timer(total_sec, shutdown).start()
````
The code is shown above. When I set a short time like 10 min, 20 min, or a little longer, the script works normally. But if I set the waiting time to a long time like 4 hours or 5 hours, the script does NOT work at all: nothing happens when the time is up. Could you please point out why the error happens and guide me to fix it? Thanks in advance.
Have you actually timed it? You say it works normally, but does it actually shut the computer off in 10 minutes when set for 10 minutes, and not, say, 15+ minutes? I ask because it looks like you have it set to give you a 3 minute warning. However, then the timer resets, because you use total_sec in `threading.Timer(total_sec, shutdown).start()`. So when you set it for, say, 60 minutes, it gives you a warning at 57 minutes, then runs for another 60 minutes. Therefore I suspect if you let it run for 11 hours when you set it for 5 hours, it would actually shut off the computer.
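One way to check this empirically is to timestamp when each timer actually fires — a minimal sketch using only the standard library (the short delays here are placeholders for the real values):
````import threading
from datetime import datetime

start = datetime.now()

def stamp(label):
    # Print how long after `start` this callback actually ran
    print("%s fired after %s" % (label, datetime.now() - start))

total_sec = 10            # placeholder for the real delay
warn_sec = total_sec - 5  # placeholder for the warning offset

threading.Timer(warn_sec, stamp, args=("warning",)).start()
threading.Timer(total_sec, stamp, args=("shutdown",)).start()
````
Comparing the printed offsets against what you expect makes it obvious whether the shutdown timer is measuring from the moment it was started or from the warning.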
json dump object, rename attribute before the dump I am currently serializing my custom object into a JSON string by using `json.dumps()`:
````j = json.dumps(object, sort_keys=True, indent=4, separators=(',', ': '), default=lambda o: o.__dict__)
````
My object has an attribute called `_machines`. So when we turn the object into a string, one of the properties in the string is called `_machines`. Is there any way to tell `json.dump()` that we want this property to be called `machines` rather than `_machines`?
You will have to use a more elaborate default:
````json.dumps(object, sort_keys=True, indent=4, separators=(',', ': '),
           default=lambda o: {'machines' if k == '_machines' else k: v
                              for k, v in o.__dict__.iteritems()})
````
It might be a good idea, for the sake of readability, to make that a separate function:
````def serialize_custom_object(o):
    res = o.__dict__.copy()
    res['machines'] = res['_machines']
    del res['_machines']
    return res

json.dumps(object, sort_keys=True, indent=4, separators=(',', ': '),
           default=serialize_custom_object)
````
Here `serialize_custom_object()` is a little more explicit that you are renaming one key in the result.
Django Admin Static Resources I am new to Django and I am trying to learn how to use it. I have hit a brick wall trying to launch the admin site. I am working out of `/opt/django/mysite.com/proj1`. I am just trying to use the built-in webserver, so `python manage.py runserver`. The admin page will load, but it is missing base.css and dashboard.css. These files are located in `/usr/lib/python2.7/site-packages/django/contrib/admin/media/`. Seeing that it seems like ADMIN_MEDIA_PREFIX typically needs to be set, I have tried `ADMIN_MEDIA_PREFIX = '/usr/lib/python2.7/site-packages/django/contrib/admin/media/'` to no effect. Could someone help me fix this problem? Thanks. Edit: The GET requests where I am seeing errors are <a href="http://localhost:8000/admin/media/css/base.css" rel="nofollow">http://localhost:8000/admin/media/css/base.css</a> and http://localhost:8000/admin/media/css/dashboard.css
It looks like your admin media files are located in the default directory. Try this:
````ADMIN_MEDIA_PREFIX = '/media/'
````
For more information, look to the Django docs: <a href="https://docs.djangoproject.com/en/1.3/ref/settings/#admin-media-prefix" rel="nofollow">https://docs.djangoproject.com/en/1.3/ref/settings/#admin-media-prefix</a>
Attempting to deploy a python pyramid application with uWSGI I am attempting to deploy a pyramid application using uWSGI. The application works fine when served with the included pyramid development server. Also, I have set this up before and I swear it worked at one time. However, putting in the magic phrases right now is resulting in "This webpage is not available". I am trying to keep all of the configuration parameters as similar as possible to what I have currently, so I do not have to worry about firewall issues. The uWSGI section in development.ini looks like this (from: <a href="http://stackoverflow.com/questions/16351559/setup-uwsgi-as-webserver-with-pyramid-no-nginx">Setup uWSGI as webserver with pyramid (no NGINX)</a>):
````[uwsgi]
socket = localhost:8080
virtualenv = /var/www/finance/finance-env
die-on-term = 1
master = 1
#logto = /var/log/wsgi/uwsgi.log
enable-threads = true
offload-threads = N
py-autoreload = 1
wsgi-file = /var/www/finance/wsgi.py
````
wsgi.py looks like this:
````from pyramid.paster import get_app, setup_logging
ini_path = '/var/www/finance/corefinance/development.ini'
setup_logging(ini_path)
application = get_app(ini_path, 'main')
````
Here is the output right now. Everything seems to be listening just fine on port 8080.
````user1@finance1:~$ sudo /var/www/finance/finance-env/bin/uwsgi --ini-paste-logg /var/www/finance/corefinance/development.ini
[uWSGI] getting INI configuration from /var/www/finance/corefinance/development.ini
*** Starting uWSGI 2.0.11.2 (64bit) on [Fri Jan 15 21:13:31 2016] ***
compiled with version: 4.7.2 on 16 November 2015 20:13:35
os: Linux-4.1.5-x86_64-linode61 #7 SMP Mon Aug 24 13:46:31 EDT 2015
nodename: finance1
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/user1
detected binary path: /var/www/finance/finance-env/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your processes number limit is 3934
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address localhost:8080 fd 3
Python version: 3.2.3 (default, Feb 20 2013, 14:49:46) [GCC 4.7.2]
Set PythonHome to /var/www/finance/finance-env
Python main interpreter initialized at 0xfd0a10
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145536 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0xfd0a10 pid: 6275 (default app)
mountpoint  already configured. skip.
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 6275)
spawned uWSGI worker 1 (pid: 6282, cores: 1)
Python auto-reloader enabled
````
Unless you are behind a proxy such as nginx, you need to use the internal http routing support in uwsgi. Change
````socket = localhost:8080
````
to
````http = 0.0.0.0:8080
````
Here are the <a href="http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html" rel="nofollow">uwsgi http support docs</a>.
Authenticate imaplib IMAP4_SSL against an Exchange imap server with AUTH=NTLM Yesterday the IT department made changes to the Exchange server. I was previously able to use `imaplib` to fetch messages from the server, but now it seems they have turned off the authentication mechanism I was using. From the output below it looks as if the server now supports NTLM authentication only.
````>>> from imaplib import IMAP4_SSL
>>> s = IMAP4_SSL("my.imap.server")
>>> s.capabilities
('IMAP4', 'IMAP4REV1', 'IDLE', 'LOGIN-REFERRALS', 'MAILBOX-REFERRALS', 'NAMESPACE', 'LITERAL+', 'UIDPLUS', 'CHILDREN', 'AUTH=NTLM')
>>> s.login("username", "password")
imaplib.error: Clear text passwords have been disabled for this protocol.
````
Questions:
- How do I authenticate to the imap server using NTLM with imaplib? I assume I need to use IMAP4_SSL.authenticate("NTLM", authobject) to do this? How do I set up the authobject callback?
- Since SSL/TLS is the only way to connect to the server, re-enabling clear text password authentication should not be a security risk. Correct?
The process that connects to the imap server is running on Linux, by the way, so I am not able to use pywin32.
<strong>Edit:</strong> I was able to figure out 1 myself. But how about 2: clear text passwords in IMAP over SSL are not a security problem, are they?
I was able to use the <a href="http://code.google.com/p/python-ntlm/" rel="nofollow">python-ntlm</a> project. `python-ntlm` implements NTLM authentication for HTTP. It was easy to add NTLM authentication for IMAP by extending this project. I submitted <a href="http://code.google.com/p/python-ntlm/issues/detail?id=15" rel="nofollow">a patch for the project</a> with my additions.
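For reference, the glue between imaplib and python-ntlm looks roughly like the following untested sketch. The host and credentials are placeholders, and the base64 round-trips are needed because imaplib hands the authobject decoded bytes while the python-ntlm helpers work with base64 strings:
````import base64
from imaplib import IMAP4_SSL
from ntlm import ntlm  # from the python-ntlm project

def make_authobject(user, password):
    state = {"step": 0}
    def authobject(response):
        # imaplib passes the base64-decoded server response and
        # base64-encodes whatever we return
        if state["step"] == 0:
            state["step"] = 1
            return base64.b64decode(ntlm.create_NTLM_NEGOTIATE_MESSAGE(user))
        challenge, flags = ntlm.parse_NTLM_CHALLENGE_MESSAGE(
            base64.b64encode(response))
        domain, _, username = user.partition("\\")
        return base64.b64decode(ntlm.create_NTLM_AUTHENTICATE_MESSAGE(
            challenge, username, domain, password, flags))
    return authobject

s = IMAP4_SSL("my.imap.server")  # placeholder host
s.authenticate("NTLM", make_authobject("DOMAIN\\username", "password"))
````
The linked patch is the authoritative version; this is only to show the shape of the state machine.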
Where was the Condemnation of 1277 enacted?
University of Paris
Tkinter Disable several Entry with checkbutton With Python 2.7, I would like to turn the state of an "Entry" widget to normal/disabled thanks to a checkbutton. With the help of this question <a href="http://stackoverflow.com/questions/6129936/disable-widget-with-checkbutton">Disable widget with checkbutton?</a> I can do it with 1 checkbutton and 1 Entry:
````#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
import Tkinter as tk

root = tk.Tk()

class Principal(tk.Tk):
    def __init__(self, *args, **kwargs):
        self.foo = tk.StringVar()
        self.nac = tk.IntVar()
        self.ck1 = tk.Checkbutton(root, text='test', variable=self.nac, command=self.naccheck)
        self.ck1.pack()
        self.ent1 = tk.Entry(root, width=20, background='white', textvariable=self.foo, state='disabled')
        self.ent1.pack()

    def naccheck(self):
        print "check"
        if self.nac.get() == 0:
            self.ent1.configure(state='disabled')
        else:
            self.ent1.configure(state='normal')

app = Principal()
root.mainloop()
````
Problem comes when I want to have 2 or more pairs (checkbutton/entry). In my final interface I may have 20 or more of these pairs, so I would like to avoid having 20 or more of the same "naccheck" method. I tried this:
````#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
import Tkinter as tk

root = tk.Tk()

class Principal(tk.Tk):
    def __init__(self, *args, **kwargs):
        self.foo = tk.StringVar()
        self.nac = {}
        self.ent = {}

        self.ent["test"] = tk.Entry(root, width=20, background='white', textvariable=self.foo, state='disabled')
        self.ent["test"].pack()
        self.ent["image"] = tk.Entry(root, width=20, background='white', textvariable=self.foo, state='disabled')
        self.ent["image"].pack()

        self.nac["test"] = tk.IntVar()
        self.ck1 = tk.Checkbutton(root, text='test', variable=self.nac["test"], command=self.naccheck("test"))
        self.ck1.pack()
        self.nac["image"] = tk.IntVar()
        self.ck1 = tk.Checkbutton(root, text='image', variable=self.nac["image"], command=self.naccheck("image"))
        self.ck1.pack()

    def naccheck(self, item):
        print "check " + item
        print self.nac[item].get()
        if self.nac[item].get() == 0:
            self.ent[item].configure(state='disabled')
        else:
            self.ent[item].configure(state='normal')

app = Principal()
root.mainloop()
````
Unfortunately, when I launch this code the method "naccheck" is called immediately for each checkbutton and never after when I click on one. What have I done wrong?
There are many ways to solve this problem. One way is to pass the entry and checkbutton variable into your check function. Create the entry widget and variable first, then create the checkbutton and pass in the variable and entry to your callback:
````ent = tk.Entry(...)
var = tk.IntVar()
chk = tk.Checkbutton(..., command=lambda e=ent, v=var: self.naccheck(e, v))
````
Notice the use of lambda, which is a simple technique for creating anonymous functions. This enables you to pass arguments to a callback without having to create named functions. Another option is to use <a href="http://docs.python.org/2/library/functools.html#functools.partial" rel="nofollow">functools.partial</a>. There are no doubt dozens of examples of this on StackOverflow, as this is a very common question. Next, you need to modify your function to accept arguments:
````def naccheck(self, entry, var):
    if var.get() == 0:
        entry.configure(state='disabled')
    else:
        entry.configure(state='normal')
````
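Applied to the twenty-plus pairs mentioned in the question, a minimal standalone sketch (the label names here are made up) could look like:
````import Tkinter as tk

def naccheck(entry, var):
    # Enable the entry when its checkbutton is ticked, disable otherwise
    entry.configure(state='normal' if var.get() else 'disabled')

root = tk.Tk()
for label in ('test', 'image', 'sound'):  # hypothetical labels
    ent = tk.Entry(root, width=20, background='white', state='disabled')
    var = tk.IntVar()
    chk = tk.Checkbutton(root, text=label, variable=var,
                         command=lambda e=ent, v=var: naccheck(e, v))
    chk.pack()
    ent.pack()
root.mainloop()
````
The default arguments `e=ent, v=var` bind the current loop values at definition time, which is what makes one callback work for every pair.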
Pythonic way of finding duplicate maps in a list while ignoring certain keys and combining the duplicate maps to make a new list I want to write code which takes the following inputs:
````list (list of maps)
request_keys (list of strings)
operation (add, subtract, multiply, concat)
````
The code would look at the list for the maps having the same value for all keys except the keys given in request_keys. Upon finding two maps for which the values in the search keys match, the code would do the operation (add, multiply, subtract, concat) on the two maps and combine them into one map. This combination map would basically replace the other two maps. I have written the following piece of code to do this. The code only does the add operation; it can be extended to do the other operations.
````In [83]: list
Out[83]:
[{'a': 2, 'b': 3, 'c': 10},
 {'a': 2, 'b': 3, 'c': 3},
 {'a': 2, 'b': 4, 'c': 4},
 {'a': 2, 'b': 3, 'c': 2},
 {'a': 2, 'b': 3, 'c': 3}]

In [84]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D
:def func(list, request_keys):
:    new_list = []
:    found_indexes = []
:    for i in range(0, len(list)):
:        new_item = list[i]
:        if i in found_indexes:
:            continue
:        for j in range(0, len(list)):
:            if i != j and {k: v for k, v in list[i].iteritems() if k not in request_keys} == {k: v for k, v in list[j].iteritems() if k not in request_keys}:
:                found_indexes.append(j)
:                for request_key in request_keys:
:                    new_item[request_key] += list[j][request_key]
:        new_list.append(new_item)
:    return new_list
:--

In [85]: func(list, ['c'])
Out[85]: [{'a': 2, 'b': 3, 'c': 18}, {'a': 2, 'b': 4, 'c': 4}]

In [86]:
````
What I want to know is: is there a faster, more memory efficient, cleaner and a more pythonic way of doing the same? Thank you
You manually generate all the combinations and then compare each of those combinations. This is pretty wasteful. Instead, I suggest grouping the dictionaries in another dictionary by their matching keys, then adding the "same" dictionaries. Also, you forgot the `operator` parameter.
````import collections, operator, functools

def func(lst, request_keys, op=operator.add):
    matching_dicts = collections.defaultdict(list)
    for d in lst:
        key = tuple(sorted(((k, d[k]) for k in d if k not in request_keys)))
        matching_dicts[key].append(d)
    for group in matching_dicts.values():
        merged = dict(group[0])
        merged.update({key: functools.reduce(op, (g[key] for g in group))
                       for key in request_keys})
        yield merged
````
What this does: First, it creates a dictionary mapping the key-value pairs that have to be equal for two dictionaries to match to all those dictionaries that have those key-value pairs. Then it iterates the dicts from those groups, using one of that group as a prototype and updating it with the sum (or product, or whatever, depending on the operator) of all the dicts in that group for the `required_keys`. Note that this returns a generator. If you want a list, just call it like `list(func(...))` or accumulate the `merged` dicts in a list and return that list.
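Using the sample data from the question, usage would look something like:
````lst = [{'a': 2, 'b': 3, 'c': 10},
       {'a': 2, 'b': 3, 'c': 3},
       {'a': 2, 'b': 4, 'c': 4},
       {'a': 2, 'b': 3, 'c': 2},
       {'a': 2, 'b': 3, 'c': 3}]

print(list(func(lst, ['c'])))
# [{'a': 2, 'b': 3, 'c': 18}, {'a': 2, 'b': 4, 'c': 4}]  (group order may vary)
````
Since the groups come out of a dict, the relative order of the merged maps is not guaranteed; sort the result afterwards if the order matters.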
Sort a dictionary by values that are instance attributes I am currently working on a program written in Python 2.6.6. It uses a dictionary that looks like:
````{
 'somekeystring': someobject("a name", 1, 3),
 'anotherkey': someobject("another name", 2, 2),
 'keythree': someobject("third name", 3, 1)
}
````
The object has some attributes like:
````name
startOrder
stopOrder
````
What I am trying to accomplish is to get the dictionary sorted: once by someobject.startOrder, once by someobject.stopOrder. I have tried
````sortedByStartOrder = sorted(mydict.iteritems(), key=lambda x: x[1].startOrder)
````
but this does not seem to work. The list items are sorted in the same order no matter if I use startOrder or stopOrder in the example above. Any hints?
This example seems to work for me:
````class P:
    def __init__(self, x):
        self.x = x

d = {'how': P(3), 'hi': P(2), 'you': P(5), 'are': P(4)}

print list(d.iteritems())
print sorted(d.iteritems(), key=lambda x: x[1].x)
````
produces
````>> [('how', <__main__.P instance at 0x7f92028e52d8>), ('you', <__main__.P instance at 0x7f92028e5368>), ('hi', <__main__.P instance at 0x7f92028e5320>), ('are', <__main__.P instance at 0x7f92028e53b0>)]
>> [('hi', <__main__.P instance at 0x7fc210e6c320>), ('how', <__main__.P instance at 0x7fc210e6c2d8>), ('are', <__main__.P instance at 0x7fc210e6c3b0>), ('you', <__main__.P instance at 0x7fc210e6c368>)]
````
I would guess the problem is not in the sort itself; there might be something wrong in the structures you are trying to sort.
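For the exact structures in the question (class and attribute names taken from there, values made up), the two sorts really do give different orders, which suggests checking that startOrder and stopOrder actually differ on your instances:
````class SomeObject(object):
    def __init__(self, name, start, stop):
        self.name, self.startOrder, self.stopOrder = name, start, stop

mydict = {'somekeystring': SomeObject("a name", 1, 3),
          'anotherkey':    SomeObject("another name", 2, 2),
          'keythree':      SomeObject("third name", 3, 1)}

by_start = sorted(mydict.iteritems(), key=lambda kv: kv[1].startOrder)
by_stop  = sorted(mydict.iteritems(), key=lambda kv: kv[1].stopOrder)

print [k for k, v in by_start]  # ['somekeystring', 'anotherkey', 'keythree']
print [k for k, v in by_stop]   # ['keythree', 'anotherkey', 'somekeystring']
````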
Retrieve Design Matrix from DMatrix Instance in xgboost In xgboost I am doing something like
````import numpy as np
import xgboost as xgb

y = np.arange(10)
X = np.arange(20).reshape(10, 2)
dtrain = xgb.DMatrix(X, y, feature_names=["x1", "x2"])
````
If I want to extract the y values as an array from dtrain, I can do
````y = dtrain.get_label()
````
Is there any way to extract the X values as an array from dtrain?
I do not think so. With your `DMatrix` `dtrain` you can see:
````dir(dtrain)

['__class__', '__del__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_feature_names', '_feature_types', '_init_from_csc', '_init_from_csr', '_init_from_npy2d', 'feature_names', 'feature_types', 'get_base_margin', 'get_float_info', 'get_label', 'get_uint_info', 'get_weight', 'handle', 'num_col', 'num_row', 'save_binary', 'set_base_margin', 'set_float_info', 'set_group', 'set_label', 'set_uint_info', 'set_weight', 'slice']
````
The best I can find is
````dtrain.feature_names
````
which will return your `["x1", "x2"]`. `dtrain.feature_types` is somewhat helpful, and you can take slices like `dtrain.slice(range(3))`, but this still is not what you are looking for.
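Since the raw matrix is not recoverable from the handle, the usual workaround is simply to keep the numpy array you built the DMatrix from, e.g.:
````import numpy as np
import xgboost as xgb

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
dtrain = xgb.DMatrix(X, y, feature_names=["x1", "x2"])

# keep X around; dtrain only exposes metadata such as its shape
assert dtrain.num_row() == X.shape[0]
assert dtrain.num_col() == X.shape[1]
````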
Some help understanding my own Python code I am starting to learn Python and I have written the following Python code (some of it omitted), and it works fine, but I would like to understand it better. So I do the following:
````html_doc = requests.get('[url here]')
````
Followed by:
````if html_doc.status_code == 200:
    soup = BeautifulSoup(html_doc.text, 'html.parser')
    line = soup.find('a', class_="some_class")
    value = re.search('[regex]', str(line))
    print (value.group(0))
````
My questions are:
- What does `html_doc.text` really do? I understand that it makes "text" (a string?) out of `html_doc`, but why is not it text already? What is it? Bytes? Maybe a stupid question, but why does not `requests.get` create a really long string containing the HTML code?
- The only way that I could get the result of `re.search` was by `value.group(0)`, but I have literally no idea what this does. Why cannot I just look at `value` directly? I am passing it a string, there is only one match, why is the resulting `value` not a string?
`requests.get()` return value, as stated in the docs, is a <a href="http://docs.python-requests.org/en/latest/api/#requests.Response" rel="nofollow">Response</a> object. `re.search()` return value, as stated in the docs, is a <a href="https://docs.python.org/2/library/re.html#re.MatchObject" rel="nofollow">MatchObject</a> object. Both objects are introduced because they contain much more information than simply response bytes (e.g. HTTP status code, response headers etc.) or a simple found string value (e.g. it includes positions of first and last matched characters). For more information you will have to study the docs. FYI, to check the type of a returned value you may use the built-in `type` function:
````response = requests.get('[url here]')
print type(response)
# <class 'requests.models.Response'>
````
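As a quick illustration of why the match has to be unwrapped with `.group()` — the match object carries extra information a plain string could not:
````import re

m = re.search(r'\d+', 'order 42 shipped')
print type(m)     # <type '_sre.SRE_Match'> on Python 2
print m.group(0)  # '42' - the whole match as a string
print m.span()    # (6, 8) - where in the input the match was found
````
If nothing matches, `re.search` returns `None`, which is another reason the result cannot simply be a string.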
Unsure of the behavior for opening a file with mode "wrb" I have a socket that I am sending data to through a file created using the makefile method of a socket. However, the mode of the file created using makefile is 'wrb'. I understand that 'w' = write, 'r' = read, and 'b' = binary. I also understand that you can combine them in a number of different ways, see <a href="http://stackoverflow.com/questions/16208206/confused-by-python-file-mode-w">Confused by python file mode &quot;w+&quot;</a>, which contains a list of possible combinations. However, I have never seen 'w' and 'r' together. What is their behavior when together? For example, 'r+' allows reading and writing, and 'w+' does the same except that it truncates the file beforehand. But what does 'wr' do?
The description in the <a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">Python 2.x docs</a> suggests you would be able to both read and write to the file without closing it. However, the behavior is not so. Example:
````f = open('myfile', 'wr')
f.write('THIS IS A TEST')
f.read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
````
It will write, however not read. If we open the file with the options reversed:
````f = open('myfile', 'rw')
f.read()
f.write('THIS IS ALSO A TEST')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
````
Observed behavior is that the open() function only takes the first character as the file opening option and disregards the rest, except if it ends in a 'b', which would denote that it should be opened in binary mode.
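If the goal is genuine read/write access, the unambiguous spelling is 'r+b' (or 'w+b' to truncate first) — a minimal sketch:
````# create a file first so 'r+b' has something to open
with open('myfile', 'wb') as f:
    f.write(b'THIS IS A TEST')

with open('myfile', 'r+b') as f:
    print f.read()    # 'THIS IS A TEST'
    f.seek(0)
    f.write(b'that')  # overwrite the first four bytes in place
````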
How to change password of user in AbstractBaseUser In the creation of MyUser I was following: <a href="https://docs.djangoproject.com/en/1.6/topics/auth/customizing/#a-full-example" rel="nofollow">https://docs.djangoproject.com/en/1.6/topics/auth/customizing/#a-full-example</a>. I would like to change the password of a user. I try:
````In [55]: i = FairsUser.objects.get(email="admin@andilabs.com")

In [56]: i.__dict__
Out[56]:
{'_state': <django.db.models.base.ModelState at 0x110a53490>,
 'email': u'admin@andilabs.com',
 'id': 8,
 'is_active': True,
 'is_admin': False,
 'is_superuser': False,
 'last_login': datetime.datetime(2014, 4, 2, 16, 0, 59, 109673),
 'mail_sent': False,
 'password': u'pbkdf2_sha256$12000$XgszxXkXbroY$PEEf3vqszclGcf7iQXeZRWDTYcCsvlGh0jH15f6rKR8='}

In [57]: i.set_password("abcd")

In [58]: i.save()
````
Then I check:
````In [59]: i_updated = FairsUser.objects.get(email="admin@andilabs.com")

In [60]: i_updated.__dict__
Out[60]:
{'_state': <django.db.models.base.ModelState at 0x110a53590>,
 'email': u'admin@andilabs.com',
 'id': 8,
 'is_active': True,
 'is_admin': False,
 'is_superuser': False,
 'last_login': datetime.datetime(2014, 4, 2, 16, 0, 59, 109673),
 'mail_sent': False,
 'password': u'pbkdf2_sha256$12000$8VCDlzTuVfHF$ldwqbXo/axzMFLasqOKkddz8o1yW9d5r7gUxD3qH4sU='}
````
The values for the hashed password differ, but I cannot login using "abcd". What is the reason? OK, in this case it "started" working, but in my logic in the admin form it still does not:
````def clean(self, commit=True):
    super(UserChangeForm, self).clean()
    cleaned_data = self.cleaned_data
    old_status = FairsUser.objects.get(email=cleaned_data['email'])
    if (old_status.mail_sent is False or old_status.mail_sent is None) and cleaned_data['mail_sent'] is True:
        usr = FairsUser.objects.get(email=cleaned_data['email'])
        new_pass = ''.join(
            random.choice(
                string.ascii_uppercase + string.digits
            ) for _ in range(DESIRED_PASSWORD_LENGTH))
        usr.set_password(new_pass)
        usr.save()
        mail_content = 'Hi! Below are your login credentials.\n e-mail: {0} \n password: {1} \n Have a nice time at the fairs!\n'.format(usr.email, new_pass)
        message = EmailMessage('Your login data', mail_content, 'from@andilabs.com', [usr.email])
        message.send()
    return cleaned_data
````
The password delivered by email does not allow me to login as well. I cannot authenticate in the console.
The reason WAS not connected with sending email, but with WRONG use of the form methods by me. This is the working solution in basic form (without sending email and without generating a new password in some random way), to make it clear for others who end up here with the same problem of set_password while calling save() of the form in the case of AbstractBaseUser:
````def save(self, commit=True):
    user = super(UserChangeForm, self).save(commit=False)
    cleaned_data = self.cleaned_data
    old_status = FairsUser.objects.get(email=cleaned_data['email'])
    if (old_status.mail_sent is False or old_status.mail_sent is None) and cleaned_data['mail_sent'] is True:
        new_pass = "SOME_NEW_VALUE_OF_PASSWORD"
        user.set_password(new_pass)
        # here sending email can be initiated
    if commit:
        user.save()
    return user
````
Make a SQL Alchemy Expression return value from a different Table's column for filtering How can you use the sqlalchemy expression language to make sqlalchemy's `filter_by` look through a hybrid property that returns a value from a column in another table?
<h2>Example Code</h2>
(Using flask-sqlalchemy, so you will see stuff like `Device.query.get(203)`)
````class Service(Model):
    id = Column(Integer)
    client_id = Column(Integer)

class Device(Model):
    id = Column(Integer)
    owner = Column(Integer)

    @hybrid_property
    def client_id(self):
        return Service.query.get(self.owner).client_id

    @client_id.expression
    def client_id(self):
        # ???
        # Make this return a useful query

Device.query.filter(client_id=124)
````
<h2>SQL QUERY</h2>
This is the SQL that returns the proper values:
````SELECT service.clientid
FROM device
INNER JOIN service ON device.owner = service.id;
````
Not the desired `sql`, but should produce the same result:
````@client_id.expression
def client_id(cls):
    return (
        select([Service.client_id])
        .where(Service.id == cls.owner)
        .as_scalar()
    )
````
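With that expression in place, filtering should work through a correlated subquery, e.g. (Flask-SQLAlchemy style, matching the question — a sketch, untested against your models):
````# both spellings now compile to a subquery against the service table
Device.query.filter(Device.client_id == 124).all()
Device.query.filter_by(client_id=124).all()
````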
Pycharm cannot resolve some Python Standard Libraries like time and sys I just downloaded Pycharm Community Edition 3.4 today and this problem annoys me. Pycharm does not detect stdlibs that do not have a `__file__` attribute, thus incorrectly marking those as errors, like in the snapshot below.
<img src="http://i.stack.imgur.com/uE0Kc.jpg" alt="Annoying highlights of unresolved modules">
I had a similar issue using Pycharm Professional 3.4 running on Arch Linux. To fix the problem, I removed the Python Interpreter configuration for python 2.7 and re-added it, as described in the comments section of the bug report: <a href="https://youtrack.jetbrains.com/issue/PY-13176#" rel="nofollow">https://youtrack.jetbrains.com/issue/PY-13176#</a>
Creating a debian package for my python application from a system running fedora I have created a small python application to be used internally in my organization. I wrote the code on my primary development machine running Fedora 17, and I would like to create a deb in order to make it easy for my colleagues to install my program. Is it possible to create debian packages for a python application from a system running fedora? If yes, how?
It would be possible to do it manually, but it would be quicker and less painful to create it on a debian-based distribution (you could use a virtual machine if you do not want to install one). Following <a href="http://ghantoos.org/2008/10/19/creating-a-deb-package-from-a-python-setuppy/" rel="nofollow">this guide</a> is probably the best way forward if you are already using an `install.py`.
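On the Debian-based side, one low-effort route (assuming the project already ships a setup.py, as in the linked guide) is the stdeb package, roughly:
````# on the Debian/Ubuntu build machine or VM
pip install stdeb
python setup.py --command-packages=stdeb.command bdist_deb
# the resulting .deb lands in deb_dist/
````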
Unfathomable (for me) issue when searching a string for a substring in python I have a piece of code that iterates through a list and searches a string; the list is football players' names. It works for almost every player, but randomly will not recognise a player called ashley westwood. I have checked the list and he is definitely in there, everything is lower case, and the script functions, because it is recognising every other player (so far). Basically, I am asking: what problems can occur when using 'in'? The DB entries I get from this make no sense at all. I have included the code, although it is a bit dirty and not really relevant. I am a relative noob too.
Code:
````if 'corner' in text3[:50] or ('inswinging corner' in text3) or ('outswinging corner' in text3):
    print text3
    print time
    for player in away_players_names:
        this_player = player[0].lower()
        upper = player[0]
        if this_player in segment:
            player_id = away_team_dict[upper]
            player_id = int(player_id[0])
            etype = 10
            team = 2
            cur.execute("""INSERT INTO football.match_events(type, player, time, game_id, team)
                           VALUES (%s, %s, %s, %s, %s)
                           ON DUPLICATE KEY UPDATE game_id = game_id""",
                        (etype, player_id, time, game_id, team))
            db.commit()
    for player in home_players_names:
        this_player = player[0].lower()
        print this_player
        upper = player[0]
        if this_player in segment:
            player_id = home_team_dict[upper]
            player_id = int(player_id[0])
            etype = 10
            print player_id
            team = 1
            cur.execute("""INSERT INTO football.match_events(type, player, time, game_id, team)
                           VALUES (%s, %s, %s, %s, %s)
                           ON DUPLICATE KEY UPDATE game_id = game_id""",
                        (etype, player_id, time, game_id, team))
            db.commit()
````
Here is an example of a printed statement and a failure:
````corner taken right-footed by ashley westwood to the near post
38 22
bradley guzan ron vlaar ciaran clark nathan baker matthew lowton charles n'zogbia ashley westwood fabian delph christian benteke jordan bowery andreas weimann shay given joe bennett yacouba sylla simon dawkins barry bannan darren bent brett holman
````
This has not recognised the name and I have no idea why. Anyone?
In
````if this_player in segment:
````
what is the value of `segment`?
How to speed up append to an existing dataframe I am trying to append or add rows to an existing dataframe which has around 7 million rows. Now the challenge I am facing is that I am able to do the same using `iterrows` in the following manner:
````for key, value in df.iterrows():
    if value['col3'] > 0:
        df.loc[len(df), ['col1', 'col2', 'col3', 'col4', 'col5']] = [value['col1']+value['col3'], value['col2'], value['col3'], value['col4'], 'blah']
````
and using `itertuples` in the following manner:
````for tup in df.itertuples(index=False):
    if tup[4] > 0:
        df.loc[len(df), ['col1', 'col2', 'col3', 'col4', 'col5']] = [tup[1]+tup[3], tup[2], tup[3], tup[4], 'blah']
````
Both of these do the job, but it is hitting on processing time. Can someone please suggest me a better way to do this in Python pandas?
You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#the-where-method-and-masking" rel="nofollow">np.where</a> if you need to append rows by conditions:
````import pandas as pd
import numpy as np
import io

#original data
temp = u"""col1,col2,col3,col4,col5
1,2,3,4,5
3,4,-5,6,7
5,6,-7,8,9"""
df1 = pd.read_csv(io.StringIO(temp))
print df1
#   col1  col2  col3  col4  col5
#0     1     2     3     4     5
#1     3     4    -5     6     7
#2     5     6    -7     8     9

#copy data for next processing
df = df1.copy()

#set data by conditions
df1['col1'] = np.where(df1['col3'] > 0, df1['col1'] + df1['col3'], np.NaN)
df1['col5'] = np.where(df1['col3'] > 0, 'blah', np.NaN)

#drop rows with NaN
df1 = df1.dropna()

#append to original df
result = pd.concat([df, df1], ignore_index=True)
print result
#   col1  col2  col3  col4  col5
#0     1     2     3     4     5
#1     3     4    -5     6     7
#2     5     6    -7     8     9
#3     4     2     3     4  blah
````
Accept parameters only from POST request in python Is there a way to accept parameters only from a POST request? If I use cgi.FieldStorage() from the cgi module, it accepts parameters from both GET and POST requests.
From the documentation, I think you can do the following:
````form = cgi.FieldStorage()
if isinstance(form["key"], cgi.FieldStorage):
    pass  # handle field
````
This code is untested.
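Another common approach (a sketch, separate from the check above) is to look at the CGI environment and only read the form when the request was actually a POST:
````import os
import cgi

if os.environ.get('REQUEST_METHOD') == 'POST':
    form = cgi.FieldStorage()     # for POST, only the request body is parsed
    value = form.getfirst('key')  # None if 'key' was not posted
````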
(unicode error) 'unicodeescape' codec cannot decode bytes - string with '\u' Writing my code for Python 2.6, but with Python 3 in mind, I thought it was a good idea to put
````from __future__ import unicode_literals
````
at the top of some modules. In other words, I am asking for troubles (to avoid them in the future), but I might be missing some important knowledge here. I want to be able to pass a string representing a filepath and instantiate an object as simply as `MyObject('H:\unittests')`. In **Python 2.6** this works just fine, no need to use double backslashes or a raw string, even for a directory starting with `'\u'`, which is exactly what I want. In the `__init__` method I make sure all single `\` occurrences are interpreted as '`\\`', including those before special characters as in `\a`, `\b`, `\f`, `\n`, `\r`, `\t` and `\v` (only `\x` remains a problem). Also, decoding the given string into unicode using the (local) encoding works as expected.
Preparing for **Python 3.x**, simulating my actual problem in an editor (starting with a clean console in Python 2.6), the following happens:
````>>> '\u'
'\\u'
>>> r'\u'
'\\u'
````
(OK until here: `'\u'` is encoded by the console using the local encoding)
````>>> from __future__ import unicode_literals
>>> '\u'
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: end of string in escape sequence
````
In other words, the (unicode) string is not interpreted as unicode at all, nor does it get decoded automatically with the local encoding. Even so for a raw string:
````>>> r'\u'
SyntaxError: (unicode error) 'rawunicodeescape' codec can't decode bytes in position 0-1: truncated \uXXXX
````
Same for `u'\u'`:
````>>> u'\u'
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: end of string in escape sequence
````
Also I would expect `isinstance(str(''), unicode)` to return `True` (which it does not), because importing unicode_literals should make all string-types unicode. **(edit:)** Because <a href="http://diveintopython3.org/strings.html#divingin">in Python 3 all strings are sequences of Unicode characters</a>, I would expect `str('')` to return such a unicode-string, and `type(str(''))` to be both `<type 'unicode'>` and `<type 'str'>` (because all strings are unicode), but also realise that `<type 'unicode'> is not <type 'str'>`. Confusion all around.
**Questions**
- how can I best pass strings containing '`\u`'? (without writing '`\\u`')
- does `from __future__ import unicode_literals` really implement all Python 3 related unicode changes, so that I get a complete Python 3 string environment?
edit: In Python 3, **<a href="http://farmdev.com/talks/unicode/">`<type 'str'>` is a Unicode object</a>** and `<type 'unicode'>` simply does not exist. In my case I want to write code for Python 2(.6) that will work in Python 3. But when I `import unicode_literals`, I cannot check if a string is of `<type 'unicode'>` because:
- I assume `unicode` is not part of the namespace
- if `unicode` is part of the namespace, a literal of `<type 'str'>` is still unicode when it is created in the same module
- `type(mystring)` will always return `<type 'str'>` for unicode literals in Python 3
My modules use to be encoded in 'utf-8' by a `# coding: UTF-8` comment at the top, while my `locale.getdefaultlocale()[1]` returns 'cp1252'. So if I call `MyObject('çça')` from my console, it is encoded as 'cp1252' in Python 2, and in 'utf-8' when calling `MyObject('çça')` from the module. In Python 3 it will not be encoded, but will be a unicode literal.
edit: I gave up hope about being allowed to avoid using '\' before a `u` (or `x` for that matter). Also I understand the limitations of importing `unicode_literals`. However, the many possible combinations of passing a string from a module to the console and vice versa, with each different encoding, and on top of that importing `unicode_literals` or not, and Python 2 vs Python 3, made me want to create an overview by actual testing. Hence the table below.
<img src="http://i.stack.imgur.com/sHQSx.gif" alt="enter image description here">
In other words: `type(str(''))` does not return `<type 'str'>` in Python 3, but `<class 'str'>`, and all of the Python 2 problems seem to be avoided.
AFAIK, all that `from __future__ import unicode_literals` does is to make all <strong>string literals</strong> of unicode type, instead of string type. That is:
````>>> type('')
<type 'str'>
>>> from __future__ import unicode_literals
>>> type('')
<type 'unicode'>
````
But `str` and `unicode` are still different types, and they behave just like before.
````>>> type(str(''))
<type 'str'>
````
Always is of `str` type. About your `r'\u'` issue, it is by design, as it is equivalent to `ur'\u'` without `unicode_literals`. From the docs:
<blockquote>
When an 'r' or 'R' prefix is used in conjunction with a 'u' or 'U' prefix, then the \uXXXX and \UXXXXXXXX escape sequences are processed, while all other backslashes are left in the string.
</blockquote>
Probably from the way the lexical analyzer worked in the python2 series. In python3 it works as you (and I) would expect. You can type the backslash twice, and then the `\u` will not be interpreted, but you will get two backslashes!
<blockquote>
Backslashes can be escaped with a preceding backslash; however, both remain in the string
</blockquote>
````>>> ur'\\u'
u'\\\\u'
````
So IMHO, you have two simple options:
- Do not use raw strings, and escape your backslashes (compatible with python3): `'H:\\unittests'`
- Be too smart and take advantage of unicode codepoints (<strong>not</strong> compatible with python3): `r'H:\u005cunittests'`
Smartcard PKCS11 AES Key Gen Failure I am attempting to create an AES 256 key on an ACOS5-64 smartcard and OMNIKEY 3121 card reader, using PKCS11 in python (using the PyKCS11 library). So far, all the "standard" operations seem to work with regards to asymmetric crypto. I have run plenty of code samples and pkcs11-tool commands to initialize the token, set/change PINs, create RSA keypairs etc. So the drivers are all functional (pcscd, CCID, PKCS11 middleware). The following code is causing a problem:
````from PyKCS11 import *
import getpass

libacospkcs = '/usr/lib/libacospkcs11.so'

def createTokenAES256(lbl):
    pkcs11 = PyKCS11Lib()
    pkcs11.load(libacospkcs)
    theOnlySlot = pkcs11.getSlotList()[0]
    session = pkcs11.openSession(theOnlySlot, CKF_SERIAL_SESSION | CKF_RW_SESSION)
    PIN = getpass.getpass('Enter User PIN to login:')
    session.login(PIN)
    t = pkcs11.getTokenInfo(theOnlySlot)
    print t.label
    print t.model
    print t.serialNumber
    template = (
        (CKA_CLASS, CKO_SECRET_KEY),
        (CKA_KEY_TYPE, CKK_AES),
        (CKA_VALUE_LEN, 32),
        (CKA_LABEL, "A"),
        (CKA_PRIVATE, True),
        (CKA_SENSITIVE, True),
        (CKA_ENCRYPT, True),
        (CKA_DECRYPT, True),
        (CKA_TOKEN, True),
        (CKA_WRAP, True),
        (CKA_UNWRAP, True),
        (CKA_EXTRACTABLE, False))
    ckattr = session._template2ckattrlist(template)
    m = LowLevel.CK_MECHANISM()
    m.mechanism = LowLevel.CKM_AES_KEY_GEN
    key = LowLevel.CK_OBJECT_HANDLE()
    returnValue = pkcs11.lib.C_GenerateKey(session.session, m, ckattr, key)
    if returnValue != CKR_OK:
        raise PyKCS11Error(returnValue)

# Now run the method to create the key
createTokenAES256('TestAESKey')
````
However, I get an error when running it:
````~/projects/smartcard $ python testpkcs11again.py
Enter User PIN to login:
Token #A
ACOS5-64
30A740C8704A
Traceback (most recent call last):
  File "testcreateaes.py", line 43, in <module>
    createTokenAES256('TestAESKey')
  File "testcreateaes.py", line 40, in createTokenAES256
    raise PyKCS11Error(returnValue)
PyKCS11.PyKCS11Error: CKR_ATTRIBUTE_VALUE_INVALID (0x00000013)
````
The thing is, that if I switch the CKA_TOKEN line to False, then it "works". Of course, by setting that to False it makes the key a session object instead of a token object (i.e. after I logout, the key is wiped). Using pkcs11-tool with --list-objects, the key is not there. I can use the ACSCMU (GUI tool for token admin): I can create an AES key in the "Secret Key Manager" and it does create a persistent key. But I have no way to see what the ACSCMU is doing to make it persistent (it may not be using PKCS11 at all). If I had to guess the problem, I would guess that it has to do with the session. If CKA_TOKEN=True is invalid, then it seems the token is not actually in RW mode (as suggested by the CKF_RW_SESSION in the 9th line). So far I am not sure what else to try or how to debug this.
I am afraid there is nothing you can do about it but contact the producer of `libacospkcs11.so` and ask for an explanation. You will most likely be directed to the documentation, which will state that symmetric keys can be created only as session objects and all operations with such keys are performed in SW (not in the card) - this is rather a common practice with most of the commercially available cards and middleware suites. By the way, you can also try to call `C_GetMechanismInfo` for the `CKM_AES_KEY_GEN` mechanism (and also other AES mechanisms you are planning to use) and check whether the `CKF_HW` flag is set in the response. This flag indicates whether the mechanism is performed by the device or in the software.
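With PyKCS11, that check might look roughly like this (a sketch, untested against this particular middleware):
````from PyKCS11 import PyKCS11Lib, CKF_HW

pkcs11 = PyKCS11Lib()
pkcs11.load('/usr/lib/libacospkcs11.so')
slot = pkcs11.getSlotList()[0]

info = pkcs11.getMechanismInfo(slot, 'CKM_AES_KEY_GEN')
if info.flags & CKF_HW:
    print("AES key generation is performed by the card")
else:
    print("AES key generation is performed in software by the middleware")
````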
how to calculate monthly portfolio shares and dividends I have a simple app that should track users' stock portfolios. Basically, a user may buy stocks like JNJ, AAPL or MCD. He can also sell some/all of them. Dividends might be reinvested instantly as they are paid out (same as if the user had bought the same stock for its dividend value). I need to calculate this portfolio value on a monthly basis.
<strong>Easy example</strong>:
Transactions:
````+----------+--------+------------+-------+
| buy/sell | amount | date       | price |
+----------+--------+------------+-------+
| buy      | 5      | 2015-01-01 | $60   |
| sell     | 1      | 2015-03-01 | $70   |
+----------+--------+------------+-------+
````
From these transactions I would like to get this dictionary of shares:
````{u'JNJ': {
    datetime.date(2015, 6, 1): Decimal('5.00000'),
    datetime.date(2015, 7, 1): Decimal('5.00000'),
    datetime.date(2015, 8, 1): Decimal('4.00000'),
    datetime.date(2015, 9, 1): Decimal('4.00000'),
    datetime.date(2015, 10, 1): Decimal('4.00000')}
}
````
These are my shares by month. Let's say there was a $0.75 dividend on 2015-08-21 and on that same day I bought partial shares of JNJ:
<strong>Example with Dividends</strong>:
Transactions:
````+----------+--------+------------+-------+
| buy/sell | amount | date       | price |
+----------+--------+------------+-------+
| buy      | 5      | 2015-01-01 | $60   |
| sell     | 1      | 2015-03-01 | $70   |
+----------+--------+------------+-------+
````
Dividends:
````+------------+--------+-------+
| date       | amount | price |
+------------+--------+-------+
| 2015-08-21 | 0.75   | 64    |
+------------+--------+-------+
````
When the dividend was paid, I was holding 4 shares. For 4 shares I received 4*$0.75 and I bought 0.031393889 shares of JNJ.
Result:
````{u'JNJ': {
    datetime.date(2015, 6, 1): Decimal('5.00000'),
    datetime.date(2015, 7, 1): Decimal('5.00000'),
    datetime.date(2015, 8, 1): Decimal('4.031393889'),
    datetime.date(2015, 9, 1): Decimal('4.031393889'),
    datetime.date(2015, 10, 1): Decimal('4.031393889')}
}
````
So this is what I have to calculate. There might be any number of transactions and dividends. There must be at least one Buy transaction, but dividends may not exist.
<strong>These are my classes in models.py:</strong>
Stock model representing a stock, for example JNJ:
````class Stock(models.Model):
    name = models.CharField("Stock's name", max_length=200, default="")
    symbol = models.CharField("Stock's symbol", max_length=20, default="", db_index=True)
    price = models.DecimalField(max_digits=30, decimal_places=5, null=True, blank=True)
````
Then I have StockTransaction, which represents an object for one stock for one portfolio. Transactions are linked to StockTransaction because drip applies to all Transactions:
````class StockTransaction(models.Model):
    stock = models.ForeignKey('stocks.Stock')
    portfolio = models.ForeignKey(Portfolio, related_name="stock_transactions")
    drip = models.BooleanField(default=False)
````
Transaction class:
````BUYCHOICE = [(True, 'Buy'), (False, 'Sell')]

class Transaction(models.Model):
    amount = models.DecimalField(max_digits=20, decimal_places=5, validators=[MinValueValidator(Decimal('0.0001'))])
    buy = models.BooleanField(choices=BUYCHOICE, default=True)
    date = models.DateField('buy date')
    price = models.DecimalField('price per share', max_digits=20, decimal_places=5, validators=[MinValueValidator(Decimal('0.0001'))])
    stock_transaction = models.ForeignKey(StockTransaction, related_name="transactions", null=False)
````
and lastly the Dividend class:
````class Dividend(models.Model):
    date = models.DateField('pay date', db_index=True)
    amount = models.DecimalField(max_digits=20, decimal_places=10)
    price = models.DecimalField('price per share', max_digits=20, decimal_places=10)
    stock_transaction = models.ManyToManyField('portfolio.StockTransaction', related_name="dividends", blank=True)
    stock = models.ForeignKey(Stock, related_name="dividends")
````
I have coded my method, but I think there is a better way. My method is too long and takes too much time for a portfolio with 106 stocks (each with 5 transactions). Here is my method:
````def get_portfolio_month_shares(portfolio_id):
    """
    Return number of dividends and shares per month, respectively, in dict
    {symbol: {year: decimal, year: decimal},...}
    :param portfolio: portfolio object for which to calculate shares and dividends
    :return: total dividends and amount of shares, respectively
    """
    total_shares, total_dividends = {}, {}
    for stock_transaction in StockTransaction.objects.filter(portfolio_id=portfolio_id)\
            .select_related('stock').prefetch_related('dividends', 'transactions', 'stock__dividends'):
        shares = 0  # number of shares
        monthly_shares, monthly_dividends = {}, {}
        transactions = list(stock_transaction.transactions.all())
        first_transaction = transactions[0]
        for dividend in stock_transaction.stock.dividends.all():
            if dividend.date < first_transaction.date:
                continue
            try:
                # transactions that are older than the last dividend
                while transactions[0].date < dividend.date:
                    if transactions[0].buy:
                        shares = shares + transactions[0].amount
                    else:  # transaction is a sell
                        shares = shares - transactions[0].amount
                    monthly_shares[date(transactions[0].date.year, transactions[0].date.month, 1)] = shares
                    transactions.remove(transactions[0])
            except IndexError:  # no more transactions
                pass
            if dividend in stock_transaction.dividends.all():  # if drip is active for dividend
                if dividend.price != 0:
                    shares += (dividend.amount * shares / dividend.price)
                    monthly_shares[date(dividend.date.year, dividend.date.month, 1)] = shares
            try:
                monthly_dividends[date(dividend.date.year, dividend.date.month, 1)] += shares * dividend.amount
            except KeyError:
                monthly_dividends[date(dividend.date.year, dividend.date.month, 1)] = shares * dividend.amount
        # fill blank months with 0
        if monthly_shares != {}:
            for dt in rrule.rrule(rrule.MONTHLY, dtstart=first_transaction.date,
                                  until=datetime.now() + relativedelta.relativedelta(months=1)):
                try:
                    monthly_shares[date(dt.year, dt.month, 1)]
                except KeyError:
                    # keyerror on dt.year
                    dt_previous = dt - relativedelta.relativedelta(months=1)
                    monthly_shares[date(dt.year, dt.month, 1)] = monthly_shares[date(dt_previous.year, dt_previous.month, 1)]
                try:
                    monthly_dividends[date(dt.year, dt.month, 1)]
                except KeyError:
                    monthly_dividends[date(dt.year, dt.month, 1)] = 0
        # for each transaction not covered by the dividend for-cycle
        if transactions:
            for transaction in transactions:
                for dt in rrule.rrule(rrule.MONTHLY, dtstart=transaction.date,
                                      until=datetime.now() + relativedelta.relativedelta(months=1)):
                    if transaction.buy:
                        try:
                            monthly_shares[date(dt.year, dt.month, 1)] += transaction.amount
                        except KeyError:
                            monthly_shares[date(dt.year, dt.month, 1)] = transaction.amount
                    else:  # sell
                        monthly_shares[date(dt.year, dt.month, 1)] -= transaction.amount
        total_dividends[stock_transaction.stock.symbol] = monthly_dividends
        total_shares[stock_transaction.stock.symbol] = monthly_shares
    return total_dividends, total_shares
````
<strong>Description</strong>:
First for-cycle - for each stock in the portfolio. Second for-cycle - for each stock's dividend. The line `if dividend in stock_transaction.dividends.all()` checks if dividends are reinvested (there exists an m2m relation between the stock_transaction and dividend objects if they are). The for-cycle with rrule fills up blank months to the previous month's value.
<hr>
<strong>EDIT1:</strong> I already optimized the number of sql queries with django-debug-toolbar (4 sql queries needed). My code is slow probably because of the many objects and large dictionaries.
Just a shot in the dark here (I am not familiar with stock dividends, so I cannot comment on the math). It looks like this could be your bottleneck:
````for dividend in stock_transaction.stock.dividends.all():
````
You select_related on `stock` and you prefetch_related on `dividends`, but you do not grab `stock__dividends`. You can check whether or not this is the bottleneck using the <a href="https://github.com/django-debug-toolbar/django-debug-toolbar" rel="nofollow">Django Debug Toolbar</a>. If this repeating query is the root problem, then you may try adding it in:
````.select_related('stock', 'stock__dividends')
````
What group are older than the Dominican Friars?
Dominican nuns
Numpy: Should I use newaxis or None? In numpy, one can use the 'newaxis' object in the slicing syntax to create an axis of length one, e.g.:
````import numpy as np
print np.zeros((3, 5))[:, np.newaxis, :].shape
# shape will be (3, 1, 5)
````
The <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis">documentation states</a> that one can also use `None` instead of `newaxis`; the effect is exactly the same. Is there any reason to choose one over the other? Is there any general preference or style guide? My impression is that `newaxis` is more popular, probably because it is more explicit. So is there any reason why `None` is allowed?
`None` is allowed because `numpy.newaxis` is merely an alias for `None`.
````In [1]: import numpy

In [2]: numpy.newaxis is None
Out[2]: True
````
The authors probably chose it because they needed a convenient constant, and `None` was available. As for why you should prefer `newaxis` over `None`: mainly it is because it is more explicit, and partly because someday the `numpy` authors might change it to something other than `None`. (They are not planning to, and probably will not, but there is no good reason to prefer `None`.)
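Because they are the same object, the two spellings are interchangeable in any indexing expression:
````import numpy as np

a = np.zeros((3, 5))
print a[:, np.newaxis, :].shape  # (3, 1, 5)
print a[:, None, :].shape        # (3, 1, 5) - identical, since newaxis is None
````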
How To Bundle jar Files with Pyinstaller How do you get <a href="http://www.pyinstaller.org" rel="nofollow">pyinstaller</a> to bundle jar files as archives for a python project that utilizes them? For instance, to make an exe with (I am using <a href="http://pyjnius.readthedocs.org/en/latest/" rel="nofollow">pyjnius</a> for handling the <a href="https://code.google.com/p/sikuli-api/" rel="nofollow">sikuli-standalone jar</a>):
````# test.py
import os
import sys

# set the classpath so java can find the code I want to work with
sikuli_jar = '/sikuli-api.standalone-1.0.3-Pre-1.jar'
jarpath = os.path.dirname(os.path.realpath(__file__)) + sikuli_jar
os.environ['CLASSPATH'] = jarpath

# now load a java class
from jnius import autoclass
API = autoclass('org.sikuli.api.API')
````
Pyinstaller creates the (<em>one folder</em>) exe with: `pyinstaller -d test.py`. But the jar, to the best of my knowledge, is not bundled and is inaccessible to the exe <em>unless</em> you manually place it in the folder generated by Pyinstaller. According to the <a href="http://www.pyinstaller.org/export/d3398dd79b68901ae1edd761f3fe0f4ff19cfb1a/project/doc/Manual.html?format=raw" rel="nofollow">Pyinstaller manual</a>:
<blockquote>
"CArchive contains whatever you want to stuff into it. It is very much like a zip file."
</blockquote>
I then try editing the <em>previously auto-generated</em> `test.spec` file with:
````jar = 'sikuli-api.standalone-1.0.3-Pre-1.jar'
jar_path = 'C:\\Python27\\Lib\\site-packages\\sikuli-0.1-py2.7.egg\\sikuli\\' + jar

coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas + [('sikulijar', jar_path, 'PKG')],
               strip=None,
               upx=True,
               name='test')
````
And I try building the exe based on this spec file with:
````python C:\workspace\code\PyInstaller-2.1\PyInstaller\build.py --onefile test.spec
````
But nothing happens and no error returns. Can someone provide a simple step by step tutorial how this could be done? Many thanks!
````coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas + [('sikulijar', jar_path, 'PKG')],
               strip=None,
               upx=True,
               name='test')
````
Change 'sikulijar' in the tuple to just <em>jar</em> (the variable that you have already defined); you need to reference the same name that you have used in code. However, I am still trying to get the JVM to initialize properly. I will post that if I figure it out.
Printing Linked List I have the following Linked List implementation. There is a problem with the printlist() function: the while loop is returning an error that there is no attribute next for self. Is there a better way to write this function? Thank you!!!
````class Node:
    def __init__(self, data, next=None):
        self.data = data

    def _insert(self, data):
        self.next = Node(data)

    def _find(self, data):
        if self.data == data:
            return self
        if self.next is None:
            return None
        while self.next is not None:
            if self.next.data == data:
                return self.next
        return None

    def _delete(self, data):
        if self.next.data == data:
            temp = self.next
            self.next = self.next.next
            temp = None

    def _printtree(self):
        while self:
            print self.data
            self = self.next

class LinkedList:
    def __init__(self):
        self.head = None

    def insert(self, data):
        if self.head:
            self.head._insert(data)
        else:
            self.head = Node(data)

    def find(self, data):
        if self.head.data == data:
            return self.head
        return self.head._find(data)

    def delete(self, data):
        if self.head.data == data:
            head = None
        return self.head._delete(data)

    def printtree(self):
        self.head._printtree()
````
- add a `next` attribute to Node's `__init__` method
- you should define printtree of LinkedList this way:
````def printtree(self):
    current_node = self.head
    print current_node.data
    while current_node.next is not None:
        print current_node.next.data
        current_node = current_node.next
````
Adding a `__repr__` method will make your code nicer.
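For instance, a minimal `__repr__` along these lines (a sketch) lets you print a whole list at once:
````class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

    def __repr__(self):
        # chain the node values into a readable arrow diagram
        rest = ' -> %r' % self.next if self.next else ''
        return '%r%s' % (self.data, rest)

print Node(1, Node(2, Node(3)))  # 1 -> 2 -> 3
````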
python create class methods on the fly I have a rather complex base class for some sqlalchemy models and I want to create rails style setters, but since I am fairly new to python, I am stuck with a problem that I cannot seem to bypass. I create the setters in the <strong>__new__</strong> method so I can have them triggered both on new instances and on queries, but no matter what setter I define and execute, it always picks the last setter to execute. An example serves better:
````class Test(object):
    columns = ['email', 'username']

    def __new__(cls, *args, **kwargs):
        for column in cls.columns:
            setattr(cls, "set%s" % column.capitalize(),
                    lambda cls, v: cls.setAttribute(cls, column, v))
        return super(Test, cls).__new__(cls, *args, **kwargs)

    @staticmethod
    def setAttribute(cls, attribute, value):
        print "Setting attribute %s with value %s" % (attribute, value)
        setattr(cls, attribute, value)

test = Test()
test.setEmail('test@test.com')
````
As you can see, I am setting the email, but when executed, the code tries to set the username, which is the last column. Any idea why that is?
This happens because your `lambda` function references `column` but does not pass it in as an argument:
````lambda cls, v: cls.setAttribute(cls, column, v)
````
When this function is executed it will look for the name `column` in a containing or global scope and always find the value `'username'`, because that is what `column` was set to last. Here is a straightforward way to fix this using a default argument value:
````def __new__(cls, *args, **kwargs):
    for column in cls.columns:
        setattr(cls, "set%s" % column.capitalize(),
                lambda cls, v, column=column: cls.setAttribute(cls, column, v))
    return super(Test, cls).__new__(cls, *args, **kwargs)
````
Another alternative would be to use a closure (in a way, the mutable default argument is a type of closure):
````def __new__(cls, *args, **kwargs):
    def make_setter(column):
        return lambda cls, v: cls.setAttribute(cls, column, v)
    for column in cls.columns:
        setattr(cls, "set%s" % column.capitalize(), make_setter(column))
    return super(Test, cls).__new__(cls, *args, **kwargs)
````
Display foreign key value in django template I have looked through the similar questions and was unable to find a solution that fits, or am I missing something? I have two models (SafetyCourse and SafetyCourseTaken). I have a foreign key relationship that points from "safety courses taken" to safety course, shown below:
models.py
````class SafetyCourse(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    name = models.CharField(max_length=128, unique=True)

    def __str__(self):
        return self.name

class SafetyCoursesTaken(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    profile = models.ForeignKey(EmployeeProfile, on_delete=models.CASCADE)
    course = models.ForeignKey(SafetyCourse, on_delete=models.CASCADE, related_name='course_name')
    conducted_date = models.DateTimeField(null=True, blank=True)
    expiration_date = models.DateTimeField(null=True, blank=True)

    class Meta:
        verbose_name_plural = 'Safety Courses Taken'
````
views.py
````class ManageSafetyCourseTakenView(LoginRequiredMixin, generic.ListView):
    login_url = reverse_lazy('users:login')
    model = SafetyCoursesTaken
    template_name = 'ppm/courses-taken.html'
    paginate_by = 10

    # override get_queryset to only show training related to employee profile
    def get_queryset(self):
        pk = self.kwargs['pk']
        return SafetyCoursesTaken.objects.filter(profile_id=pk)
````
courses-taken.html (template)
````{% for course_taken in object_list %}
<tr>
    <td>{{ course_taken.course_id }}</td>
</tr>
{% endfor %}
````
I have tried a number of solutions to similar questions but was unable to find a correct one. I have tried: course_taken.course_name_set.select_related, course_taken.course_name_set and a few others. What I want to do is just display the name of the course instead of the course id. What am I doing wrong?
Looking at your schema I think it should be this in the template: ````{% for course_taken in object_list %} <tr&gt; <td&gt;{{ course_taken course name }}</td&gt; </tr&gt; {% endfor %} ````
Is there a way to give a sequence to initializer of Union in Python typing I have code that looks like this: ````MY_SPECIAL_CLASSES = (Foo Bar Spam Eggs) ```` I have many functions that take any of those as the input parameter so their signature is e g : ````def whatever(obj: Union[Foo Bar Spam Eggs]) > None: ```` To me the obvious solution - just use the common superclass - is explicitly stated as a no-no in PEP484 (emphasis mine) <blockquote> By default type variables are considered invariant which means that arguments for arguments annotated with types like `List[Employee]` must exactly match the type annotation -- <strong>no subclasses or superclasses of the type parameter</strong> (in this example `Employee` ) are allowed </blockquote> Since ````def something(obj: Union[*MY_SPECIAL_CLASSES]) > None: ```` does not work I have to do something like: ````UNION_OF_MY_SPECIAL_CLASSES = Union[Foo Bar Spam Eggs] def another(obj: UNION_OF_MY_SPECIAL_CLASSES) > UNION_OF_MY_SPECIAL_CLASSES: ```` To me that is ugly - and a clear violation of DRY To avoid DRY I cand use the dunder-attribute `__union_set_params__` in my definition but to me this way looks a bit unhygienic and backwards e g ````UNION_OF_MY_SPECIAL_CLASSES = Union[Foo Bar Spam Eggs] MY_SPECIAL_CLASSES = tuple(UNION_OF_MY_SPECIAL_CLASSES __union_set_params__) ```` Is there a more elegant way of doing this?
You are misreading the PEP The default invariance of type variables means that it is an error to pass a `Foo[Subclass]` where a `Foo[Superclass]` is expected but it is fine to pass a `Subclass` to a parameter annotated as `SuperClass` As long as your function accepts any instance of the common superclass (so there is not some 5th subclass you want to reject or anything) you should be fine to annotate those parameters with the superclass
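A minimal sketch of the distinction (class names invented for the example); running it through a type checker such as mypy shows which call is actually rejected:
````python
from typing import List

class Animal: pass
class Dog(Animal): pass

def greet(a: Animal) -> None:
    pass  # a plain annotation: any Animal subclass instance is accepted

def greet_all(animals: List[Animal]) -> None:
    pass  # but the List type parameter is invariant

greet(Dog())             # fine - ordinary subtyping
greet_all([Dog()])       # fine - the literal is inferred as List[Animal]
dogs: List[Dog] = [Dog()]
greet_all(dogs)          # rejected by mypy - List[Dog] is not List[Animal]
````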
Python Spliting extracted CSV Data I have some data (taken from a CSV file) in the format: ```` MyValues = [[2 2 2 1 1] [2 2 2 2 1] [1 2 2 1 1] [2 1 2 1 2] [2 1 2 1 2] [2 1 2 1 2] [2 1 2 1 2] [2 2 2 1 1] [1 2 2 1 1]] ```` I would like to split this data into 2/3 and 1/3 and be able to distinguish between them For example ````twoThirds = [[2 2 2 1 1] [2 2 2 2 1] [1 2 2 1 1] [2 1 2 1 2] [2 1 2 1 2] [2 1 2 1 2]] oneThird = [[2 1 2 1 2] [2 2 2 1 1] [1 2 2 1 1]] ```` I have tried to use the following code to achieve this but am unsure if i have gone about this the correct way? ```` twoThirds = (MyValues * 2) / 3 #What does this code provide me? ````
It is just a list use the slice notation And read the <a href="http://docs python org/2/tutorial/introduction html#lists" rel="nofollow">docs</a>: ````In [59]: l = range(9) In [60]: l[:len(l)/3*2] Out[60]: [0 1 2 3 4 5] In [61]: l[len(l)/3*2:] Out[61]: [6 7 8] ````
django logging "No handlers could be found for logger" I have searched over all the similar questions and nothing helped I have created the 'universal' logger like this: ````'': { 'handlers': ['logfile' 'console'] 'level': 'WARNING' 'propagate': True } ```` in order to be able to write ````import logging log = logging getLogger(__name__) ```` and get logger in any file of my django-application(seen this approach somewhere on SO) and several days ago it was working for me but not now and I could not understand why There is my whole logging-settings: ````LOGGING = { 'version': 1 'disable_existing_loggers': False 'formatters': { 'standard': { 'format' : "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s" 'datefmt' : "%d/%b/%Y %H:%M:%S" } } 'handlers': { 'mail_admins': { 'level': 'ERROR' 'class': 'django utils log AdminEmailHandler' } 'logfile': { 'level':'WARNING' 'class':'logging handlers RotatingFileHandler' 'filename': "/opt/telefacer_1/var/log/inapplog" 'maxBytes': 50000 'backupCount': 2 'formatter': 'standard' } 'console':{ 'level':'WARNING' 'class':'logging StreamHandler' 'formatter': 'simple' } } 'loggers': { 'django request': { 'handlers': ['mail_admins'] 'level': 'ERROR' 'propagate': True } '': { 'handlers': ['logfile' 'console'] 'level': 'WARNING' 'propagate': True } } } ````
I see references to both a `simple` and a `standard` formatter though only `standard` is defined - your `console` handler asks for the missing `simple` formatter which can make the whole logging config fail to load so you end up with no handlers at all
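A minimal sketch of the missing piece - defining a `simple` formatter next to the existing `standard` one (the format string here is just an example, adjust to taste):
````python
'formatters': {
    'standard': {
        'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
        'datefmt': "%d/%b/%Y %H:%M:%S",
    },
    'simple': {
        # hypothetical minimal format so the console handler's reference resolves
        'format': "%(levelname)s %(message)s",
    },
},
````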
Prohibit unknown values? Can I raise an error with colander if values are in the payload that are not in the schema? Thus allowing only whitelisted fields? This is a sample: ````# coding=utf-8 from colander import MappingSchema String Length from colander import SchemaNode class SamplePayload(MappingSchema): name = SchemaNode(String()) foo = SchemaNode(Int()) class Sample(MappingSchema): type = SchemaNode(String() validator=Length(max=32)) payload = SamplePayload() # This json should not be accepted (and should yield something like: Unknown field in payload: bar { "type":"foo" "payload":{ "name":"a name" "foo":123 "bar":false } } ````
Yes see <a href="http://docs pylonsproject org/projects/colander/en/latest/api html#colander Mapping" rel="nofollow">the docs of `colander Mapping`</a> Creating a mapping with `colander Mapping(unknown='raise')` will cause a `colander Invalid` exception to be raised when unknown keys are present in the cstruct during deserialization According to <a href="https://github com/Pylons/colander/issues/116" rel="nofollow">issue 116 in the tracker</a> the way to apply this to a Schema object is to override the `schema_type` method: ````class StrictMappingSchema(MappingSchema): def schema_type(self **kw): return colander Mapping(unknown='raise') class SamplePayload(StrictMappingSchema): name = SchemaNode(String()) foo = SchemaNode(Int()) ````
Python does not find custom PyQt5 As the offical pyqt5-installation in the ubuntu repositories seem to lack support for QtQuick I tried to install pyqt5 from source The installation itself seems to work correctly but when running a python script that uses PyQt5 python complains that it cannot find that PyQt After building sip 4 15 5 I downloaded PyQt5 2 It should be compatible to my version of Qt (output of `qmake --version`): ````QMake version 3 0 Using Qt version 5 2 0 in /opt/qt5 1 1/5 2 0/gcc_64/lib ```` I ran The output of configure py of pyqt can be found here: <a href="https://gist github com/Mitmischer/8677889" rel="nofollow">https://gist github com/Mitmischer/8677889</a> The installation output of pyqt can be found here: <a href="https://gist github com/Mitmischer/8677780" rel="nofollow">https://gist github com/Mitmischer/8677780</a> After `sudo make install` I can see a folder `PyQt5` in `/usr/lib/python3 3/site-packages` which is quite nice However if I run cat `PyQt5/__init__ py` there is no actual code inside: ````# Copyright (c) 2014 Riverbank Computing Limited <info@riverbankcomputing com&gt; # # This file is part of PyQt5 # # This file may be used under the terms of the GNU General Public License # version 3 0 as published by the Free Software Foundation and appearing in # the file LICENSE included in the packaging of this file Please review the # following information to ensure the GNU General Public License version 3 0 # requirements will be met: http://www gnu org/copyleft/gpl html # # If you do not wish to use this file under the terms of the GPL version 3 0 # then you may purchase a commercial license For more information contact # info@riverbankcomputing com # # This file is provided AS IS with NO WARRANTY OF ANY KIND INCLUDING THE # WARRANTY OF DESIGN MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ```` Yep that is all what is inside that file I do not know whether it is supposed to be that way but it looks strange to me Furthermore (`ls PyQt5`): ````__init__ py QtCore so QtGui so QtMultimediaWidgets so QtPositioning so QtQuick so Qt so QtTest so QtX11Extras so _QOpenGLFunctions_2_0 so QtDBus so QtHelp so QtNetwork so QtPrintSupport so QtSensors so QtSql so QtWebKit so QtXmlPatterns so QtBluetooth so QtDesigner so QtMultimedia so QtOpenGL so QtQml so QtSerialPort so QtSvg so QtWidgets so uic/ ```` Does not look that pythonic As suggested elsewhere I (hopefully) set my pythonpath appropriately: ````&gt; echo $PYTHONPATH /usr/lib/python3 3/site-packages/ ```` Now if I start an interactive `python3 3`-session (or a script) PyQt5 cannot be found: ````Python 3 3 2+ (default Oct 9 2013 14:50:09) [GCC 4 8 1] on linux Type "help" "copyright" "credits" or "license" for more information &gt;&gt;&gt; from PyQt5 import * Traceback (most recent call last): File "<stdin&gt;" line 1 in <module&gt; ImportError: No module named 'PyQt5' &gt;&gt;&gt; ```` Has anyone else tried to install PyQt5 from source? What can I do to make PyQt work?
You probably messed up PYTHONPATH in some way I have successfully built installed and used PyQt using a virtual environment So here is how to install it using virtualenv There are tons of tutorials so please read about it So install `python-virtualenv` and `virtualenvwrapper` (at least that is what they are called on Debian) ````mkvirtualenv -p /path/to/python3 3 name
workon name
cd PyQtSource
configure
make
make install
```` To use this environment do: ````workon name
python
````
Designate specific CPU for a process - python multiprocessing I am using Redis as my queue for a producer/consumer relationship in a multiprocessing setup My problem is that my producers are overloading my consumer then stealing it is CPU My question can I allocate an entire processor to specific function/process (IE: the consumer) in this setup
It is not something Python does out of the box It is also somewhat OS-specific See this answer on doing it under Linux: <a href="http://stackoverflow com/a/9079117/4822566">http://stackoverflow com/a/9079117/4822566</a>
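If you are on Linux with Python 3.3+, the standard library can do this directly - a minimal sketch (the CPU number is arbitrary); call it at the top of the consumer's target function:
````python
import os

# Linux-only, Python 3.3+: restrict the calling process (pid 0 = self) to CPU 0
os.sched_setaffinity(0, {0})
````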
"Bad" Python Install I have been having issues with Python recently such as compatibility with anaconda When I ran Homebrew's `brew doctor` I think I came across the problem as laid out below How can I wipe these files and do a clean install of Python? ````Warning: "config" scripts exist outside your system or Homebrew directories ` /configure` scripts often look for *-config scripts to determine if software packages are installed and what additional flags to use when compiling and linking Having additional scripts in your path can confuse software installed via Homebrew if the config script overrides a system or Homebrew provided script of the same name We found the following "config" scripts: /Library/Frameworks/Python framework/Versions/3 4/bin/python3-config /Library/Frameworks/Python framework/Versions/3 4/bin/python3 4-config /Library/Frameworks/Python framework/Versions/3 4/bin/python3 4m-config /opt/local/bin/curl-config /opt/local/bin/freetype-config /opt/local/bin/libpng-config /opt/local/bin/libpng16-config /opt/local/bin/nc-config /opt/local/bin/ncurses5-config /opt/local/bin/ncursesw5-config /opt/local/bin/pcre-config /opt/local/bin/python2 7-config /opt/local/bin/xml2-config /Users/adamg/anaconda/bin/freetype-config /Users/adamg/anaconda/bin/libdynd-config /Users/adamg/anaconda/bin/libpng-config /Users/adamg/anaconda/bin/libpng15-config /Users/adamg/anaconda/bin/llvm-config /Users/adamg/anaconda/bin/nc-config /Users/adamg/anaconda/bin/python-config /Users/adamg/anaconda/bin/python2-config /Users/adamg/anaconda/bin/python2 7-config /Users/adamg/anaconda/bin/xml2-config /Users/adamg/anaconda/bin/xslt-config /Library/Frameworks/Python framework/Versions/2 7/bin/python-config /Library/Frameworks/Python framework/Versions/2 7/bin/python2-config /Library/Frameworks/Python framework/Versions/2 7/bin/python2 7-config Warning: Python is installed at /Library/Frameworks/Python framework Homebrew only supports building against the System-provided Python or a brewed Python In particular Pythons installed to /Library can interfere with other software installs ````
To uninstall using brew try this command `brew uninstall <package&gt;` Also OS X has python preinstalled so there is no need to `brew install python` <strong>Edit</strong> Even though python is preinstalled like jgritty said it should not be used for development So you should `brew uninstall` then `brew install` <h2>Update</h2> To remove the preinstalled python (2 7) you need to do this in terminal 1) `sudo rm -rf /Library/Frameworks/Python framework/Versions/2 7` 2) `sudo rm -rf "/Applications/Python 2 7"` 3) remove the symbolic links in `/usr/local/bin` that point to this python version see `ls -l /usr/local/bin | grep ' /Library/Frameworks/Python framework/Versions/2 7'` 4) if necessary edit your shell profile file(s) to remove adding `/Library/Frameworks/Python framework/Versions/2 7` to your PATH environment variable Depending on which shell you use any of the following files may have been modified: `~/bash_login ~/bash_profile ~/cshrc ~/profile ~/tcshrc and/or ~/zprofile` Thanks to `Ned Deily` for showing how to do it at <a href="http://stackoverflow com/a/3819829">this link</a> <strong>Edit:</strong> Thanks to `Tim Smyth` for this update <strong>Note</strong> This will only remove a downloaded version of python
Split a user input I would like to take a user input about the delimiter and use it to split the data I thought the line of code should look something like this: ````my_delimiter = raw_input("write down the delimiter of your file(e g ' '): ") for line in file: line2 = line split(my_delimiter) print line2 main_data append(line2) ```` Then my sample input should be like <blockquote> write down the delimiter of your file(e g ' '): '\t' </blockquote> and the output should be like ````['age' 'prescript' 'astigmatic' 'tearRate\n'] ['young' 'myope' 'no' 'reduced' 'no lenses\n'] ```` but it remains the same But it does not work it is not delimited by tab or comma as I hope it to be Please help me figure this out
If the user inputs `\t` it arrives as `\\t` - the backslash is escaped so it gets interpreted as a literal backslash followed by `t` not as a `tab` character To input the `tab` character (`\t`) you should press the tab key and then press the `return` key Example - ````&gt;&gt;&gt; raw_input("Please input :")
Please input :\t
'\\t'
&gt;&gt;&gt; raw_input("Please input :")
Please input :
'\t'
```` Note in the second case I pressed the tab key and then the return key
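Alternatively, if you want a typed `\t` to be understood as a tab, you can decode the escape sequence - a sketch for Python 2 (which the `raw_input` call implies):
````python
my_delimiter = raw_input("write down the delimiter of your file: ")
# Python 2 only: turn the typed two characters '\' + 't' into a real tab
my_delimiter = my_delimiter.decode('string_escape')
````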
"EOFError: Ran out of input" Keep getting this error while trying to pickle I am writing a quiz program I am trying to give the user the opportunity to write and add their own question I have wrote functions to ask and add questions I am trying to pickle the list of questions so I can auto load new questions anytime somebody adds one This is the code I am using to load the pickled file ````sciIn = open('sciList txt' 'rb') sci = pickle load(sciIn) sciIn close() ```` I have this code in the function that adds questions ````sciOut = open("sciList txt" 'wb') sci append(dicQ) pickle dump(sci sciOut) sciOut close() ```` When I run the code I get EOFError: Ran out of input which points to the loading of the pickle I am not sure what I am doing wrong I am using Python 3 Thanks for your help! full code <a href="http://pastebin com/HEp0KhRA" rel="nofollow">http://pastebin com/HEp0KhRA</a>
Try this: ````with open('sciList txt' 'rb') as sciIn:
    sci = pickle load(sciIn)
```` The `with` statement closes the file for you so no explicit `close()` call is needed EDIT: If that does not work check your pickled files to make sure none of them are length 0 - unpickling an empty file raises exactly this `EOFError` To do this: ````ls -l your_directory_path ````
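If the cause is an empty file (e.g. on the very first run, before any question was saved), a guard like this sidesteps the `EOFError` - a sketch reusing the filename from the question:
````python
import os
import pickle

sci = []
# only unpickle if the file exists and is non-empty, otherwise keep the default
if os.path.exists('sciList.txt') and os.path.getsize('sciList.txt') > 0:
    with open('sciList.txt', 'rb') as sciIn:
        sci = pickle.load(sciIn)
````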
What is written from left to right?
null
Library for SVG path calculations I am looking for a library with Python bindings that can do calculations on SVG paths such as calculating the length and finding the coordinates of a point on the paths (ie say the coordinates of the point 24 4% the length of the path) Is there something around already? A C-library would be acceptable as well as I can easily make my own Python bindings
Try 'Inkscape' (in my opinion the best SVG editor out there): look at their source code and see how they do it - and possibly you can reuse their libraries (they have an embedded Python scripting engine too) without much rework
How do I create dynamically and manage new instances of common data types in Python? I want to make a program which will ask the user for a number (let us say 3) and create three 3x3 lists or 3 sets of 3 members or some other complex data type (times 3) Python already knows of Many programs create new objects without strict programming declaration of their instances For example in Cinema4D (3D graphics software) i can push a button and create as many cubes I want But I do not know the programming mechanics of this automatic instance creation without a written code declaration like: ````cubeobj cube_1 cube_1 name("Cube 1") ```` In C++ something like that would require the operator `new` and the function `malloc()` Are there any equivalents for them in Python? I have searched among many Python books and did not find anything what kind of Python topic would discuss something like that?
Try this: ````num = input()
lst = [[0 for __ in range(num)] for _ in range(num)]
```` For a specific element type you should use numpy arrays or the `array` module
overwrite settings and package defaults I have created a reusable application for Django that requires you to add settings in the settings py file i e ````version = '1' url = 'api domain com' ```` However these settings rarely change and I would much prefer them to be defaults in my package but allow the developer to overwrite them should they wish in their own settings py My package is like an app so it does not have a settings py so how do I go about added these setting as defaults in my package while still allowing them to be overwritten in developers settings py of a project? I hope that makes sense
A common thing that django folks do is to include a `local_settings py` in your local copy for stuff that you only want on your local copy Then at the <strong>end</strong> of your settings py file put: ````try:
    from local_settings import *
except ImportError:
    pass
```` Be sure to add `local_settings py` to your ` gitignore` (or equivalent for your VCS) so people are not stepping on each other's feet by accidentally committing local stuff
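For package-level defaults that a project's settings py can override, a common pattern is to resolve each setting with `getattr` inside the reusable app - a sketch with made-up setting and module names:
````python
# e.g. in a (hypothetical) myapp/conf.py inside the reusable app
from django.conf import settings

VERSION = getattr(settings, 'MYAPP_VERSION', '1')
URL = getattr(settings, 'MYAPP_URL', 'api.domain.com')
````
The app imports these module-level names everywhere; a project only has to define `MYAPP_VERSION` or `MYAPP_URL` in its own settings when it wants something other than the default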
regex deal with double backslash Ths string is `G:\\one\\two\\three htm` the result I want is `G/one/two/three htm` how can I get the desired result with python and regex ? I have tried `re sub('\\\\\\\\' '/' string)` it worked but left `:` I do not want to do with regex again to remove `:` who can help me with the above question ?
Without knowing python I would suggest ````re sub(r':?\\+' '/' string)
```` The pattern matches an optional colon followed by one or more backslashes - note the raw string prefix `r` so the backslashes reach the regex engine intact
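A quick demonstration with the string from the question (doubled again in source form so the literal contains two backslashes):
````python
import re

s = 'G:\\\\one\\\\two\\\\three.htm'   # i.e. G:\\one\\two\\three.htm
print(re.sub(r':?\\+', '/', s))        # -> G/one/two/three.htm
````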
calculate mod using pow function python So If i would like to calculate the value of 6^8 mod 5 using the pow function what should I put in a line?? In the assumption that You do not need to import it first I know that pow is used like pow (x y) = pow (6 8) = 6^8 and My guess is ````mod pow(6 8) ```` Thank you!
It is simple: `pow` takes an optional 3rd argument for the modulus From the <a href="https://docs python org/3/library/functions html#pow" rel="nofollow">docs</a>: <blockquote> `pow(x y[ z])` Return x to the power y; if z is present return x to the power y modulo z (computed more efficiently than `pow(x y) % z)` The two-argument form pow(x y) is equivalent to using the power operator: `x**y` </blockquote> So you want: ````pow(6 8 5) ```` Not only is `pow(x y z)` faster &amp; more efficient than `(x ** y) % z` it can easily handle large values of `y` without using arbitrary precision arithmetic assuming `z` is a simple machine integer
Python inheritance: init is having an issue with the number of params I am developing a basic python Class-Subclass system for my Django project but i am experiencing a strange issue First of all the definition of the classes: file classes py ````class BaseAd(object): """ base class for all the ads with common parameters """ def __init__(self dom web loc cc a c date desc hl **kwargs): self domain = self doDomain(dom) self url = self doUrl(web) self description = self doDescription(desc hl) self location = self doLocation(a c loc) self date = self doDate(date) ```` file jobs py ````class JobAd(BaseAd): """ extends BaseAd with more parameters """ def __init__(self domain url location countrycode area city index_date description contract_multivalue salary_min company job_title **kwargs): self contract_type = self doContract(contract_multivalue) self salary = self doSalary(salary_min) self company = self doCompany(company) self title = self doTitle(job_title) """ Super constructor call """ super(JobAd self) __init__( domain url location countrycode area city index_date description **kwargs ) ```` Both of the classes have their respective methods (doDomain doSalary etc) which are irrelevant now since they just return the string they get as input (will be implemented better in the future now is just not needed) The kwargs is just used to store some non-useful but still returned params of the original dict (otherwise i will get an error) The JobAd class is used as a constructor parameter for our python-to-solr interface <a href="https://github com/tow/sunburnt" rel="nofollow">sunburnt</a> After you define a class and pass it to the method it translates fields defined in the solr response (which is simply a dict) into the class So params defined in JobAd's init must have the same name as their definition in the solr schema this is the actual constructor call: ````/path/to/myapp/resultsets/views_json py in job_search_json #lines splitted for better reading #res is a solr search object items = res paginate(start=start rows=res_per_page) sort_by("-index_date") sort_by("-score") sort_by("-md5") sort_by("-posted_date") execute(constructor=JobAd) ```` next in the stacktrace there is: ````/path/to/sunburnt-0 6-py2 7 egg/sunburnt/search py in execute return self transform_result(result constructor) ▼ Local vars Variable Value self sunburnt search SolrSearch object at 0x7f8136e78450 result sunburnt schema SolrResponse object at 0x7f8136e783d0 constructor class 'myapp models jobs JobAd' ```` and finally ````/path/to/sunburnt-0 6-py2 7 egg/sunburnt/search py in transform_result result result docs = [constructor(**d) for d in result result docs] ```` inside the last "local vars" tab there is the result dictionary (just the structure not the full dict with values): ````self sunburnt search SolrSearch object at 0x7f8136e78450 d {'area': 'city': 'contract_multivalue': 'country': 'countrycode': 'currency': 'description': 'district': 'domain': 'fileName': 'index_date': 'job_experience': 'job_field_multivalue': 'job_position_multivalue': 'job_title': 'job_title_fac': 'latitude': 'location': 'longitude': 'md5': 'salary_max': 'salary_min': 'study': 'url': 'urlPage': } constructor class 'tothego_frontend sito_maynard models jobs JobAd' ```` In the django log file there is no other error except the DogSlow trap telling nothing other than the trapped line This is the error i am getting: ````TypeError at /jobs/us/search/ __init__() takes exactly 13 arguments (12 given) ```` The behaviour i am expecting is not the 
behaviour i am actually experiencing: instead of having my class call its parent's constructor (10 arguments) it is using its own init (14 arguments) I have been trying also with the old python class definition: no "object" in the superclass; inside the subclass' init the parent class is initialized as BaseAd <strong>init</strong>(self ); also i have been trying to call the super method as the first statement inside the subclass' init (a la java) but nothing seems to change What am i doing wrong here? EDIT: i fixed the length of the second init line was a bit too much! <strong>ADDED INFORMATION FROM DJANGO'S STACKTRACE AS ASKED</strong> Latest thought: i am starting to assume that sunburnt does not support class inheritance even if there is nothing about it in the docs <strong>NEW EDIT</strong>: after some tests today this is what i have discovered (so far) - sunburnt allows inheritance - i had 3 parameters out of sync updated the code and the error Now it is always missing an argument The "self" maybe? I really do not know where to look anymore the error is the same as before (same stacktrace) just different wrong parameters <strong>FOUND THE PROBLEM</strong> actually adding some default values to the init parameters helped me spot the real error: missing fields in the input Sorry guys for wasting your time and thank you again for counseling
I have taken your code (removing the `do*` methods from the `__init__`s) and turned it into a simpler example to try to recreate your problem as you state it ````class BaseAd(object):
    """ base class for all the ads with common parameters """
    def __init__(self dom web loc cc a c date desc hl **kwargs):
        self domain = dom
        self url = web
        self description = desc
        self location = loc
        self date = date

class JobAd(BaseAd):
    """ extends BaseAd with more parameters """
    def __init__(self domain url location countrycode area city index_date description solr_highlights contract_type salary company job_title **kwargs):
        self contract_type = contract_type
        self salary = salary
        self company = company
        self title = job_title
        """ Super constructor call """
        super(JobAd self) __init__(
            domain
            url
            location
            countrycode
            area
            city
            index_date
            description
            solr_highlights
            **kwargs
        )

j = JobAd(1 2 3 4 5 6 7 8 9 10 11 12 13 kwarg1="foo" kwarg2="bar")
```` When running python 2 7 2 this executes fine with no errors I suggest that perhaps the `__init__` being referred to in the error is `JobAd`'s not the super's as `JobAd`'s init actually has 14 arguments which is what the error is complaining about I suggest trying to find a place where JobAd's `__init__` is called with an insufficient number of arguments As others have said posting the full stack trace and showing how JobAd is used is invaluable to determining the root cause
One colorbar for several subplots in symmetric logarithmic scaling I need to share the same colorbar for a row of subplots Each subplot has a symmetric logarithmic scaling to the color function Each of these tasks has a nice solution explained here on stackoverflow: <a href="https://stackoverflow com/a/38940369/6418786">For sharing the color bar</a> and <a href="https://stackoverflow com/a/39256959/6418786">for nicely formatted symmetric logarithmic scaling</a> However when I combine both tricks in the same code the colorbar "forgets" that is is supposed to be symmetric logarithmic Is there a way to work around this problem? Testing code is the following for which I combined the two references above in obvious ways: ````import numpy as np import matplotlib pyplot as plt from mpl_toolkits axes_grid1 import ImageGrid from matplotlib import colors ticker # Set up figure and image grid fig = plt figure(figsize=(9 75 3)) grid = ImageGrid(fig 111 # as in plt subplot(111) nrows_ncols=(1 3) axes_pad=0 15 share_all=True cbar_location="right" cbar_mode="single" cbar_size="7%" cbar_pad=0 15 ) data = np random normal(size=(3 10 10)) vmax = np amax(np abs(data)) logthresh=4 logstep=1 linscale=1 maxlog=int(np ceil(np log10(vmax))) #generate logarithmic ticks tick_locations=([-(10**x) for x in xrange(-logthresh maxlog+1 logstep)][::-1] [0 0] [(10**x) for x in xrange(-logthresh maxlog+1 logstep)] ) # Add data to image grid for ax z in zip(grid data): print z i am = ax imshow(z vmin=-vmax vmax=vmax norm=colors SymLogNorm(10**-logthresh linscale=linscale)) # Colorbar ax cax colorbar(i am ticks=tick_locations format=ticker LogFormatter()) ax cax toggle_label(True) #plt tight_layout() # Works but may still require rect paramater to keep colorbar labels visible plt show() ```` The generated output is the following: <a href="http://i stack imgur com/9TzpH png" rel="nofollow"><img src="http://i stack imgur com/9TzpH png" alt="enter image description here"></a>
Based on the solution by Erba Aitbayev I found that it suffices to replace the line ````ax cax colorbar(i am ticks=tick_locations format=ticker LogFormatter()) ```` in the example code originally posted by the line ````fig colorbar(i am ticks=tick_locations format=ticker LogFormatter() cax = ax cax) ```` and everything works without the need to specify explicit dimensions for the colorbar I have no idea why one works and the other does not though It would be good to add a corresponding comment <a href="https://stackoverflow com/a/38940369/6418786">in the post on sharing colorbars</a> I checked and the linear color scale in that example still works if colorbar is called as in the second of the two alternatives above (I do not have sufficient reputation to add a comment there )
Who believes that clear-text doesn't give enough prominence?
null
Accessing URLs from a list in Python I am trying to search a HTML document for links to articles store them into a list and then use that list to search each one individually for their titles
Just use `Beautiful Soup` to parse the HTML and find the title tag in each page: ````import urllib
from bs4 import BeautifulSoup

read = [urllib urlopen(link) read() for link in article_links]
data = [BeautifulSoup(i) find('title') getText() for i in read]
````
Can Selenium RC tests written in Python be integrated into PHPUnit? I am working on large project in PHP and I am running phpundercontrol with PHPUnit for my unit tests I would like to use Selenium RC for running acceptance tests Unfortunately the only person I have left to write tests only knows Python Can Selenium tests written in Python be integrated into PHPUnit? Thanks!
The only thing that comes to my mind is running them through the shell It would be: ````<?php
$output = shell_exec('python testScript py');
echo $output;
?&gt;
```` It is not too integrated with phpunit but once you get the output in a variable ($output) you can then parse the text inside it to see if you have "E" or " " ("E" stands for errors in pyunit and " " stands for pass) This is the best thing I could think of hope it helps
Outside of promoting diversity, was does Eastland believe the other reason to be in favor of affirmative action is?
remedying past discrimination
What was the White House stance?
the results were clear, the people of Puerto Rico want the issue of status resolved
The Dharmagupta schol spread in what century to Kashmir?
3rd century
What rebellion took place in 1962?
null
how to issue a "show dbs" from pymongo I am using pymongo and I cannot figure out how to execute the mongodb interactive she will equivalent of "show dbs"
````from pymongo import MongoClient # Assuming you are running mongod on 'localhost' with port 27017 c = MongoClient('localhost' 27017) c database_names() ````
Resolve FB GraphAPI picture call to the final URL I am developing an application that displays a users/friends photos For the most part I can pull photos from the album however for user/album cover photos all that is given is the object ID for which the following URL provides the image: `https://graph facebook com/1015146931380136/picture?access_token=ABCDEFG&amp;type=picture` Which when viewed redirects the user to the image file itself such as: `https://fbcdn-photos-a akamaihd net/hphotos-ak-ash4/420455_1015146931380136_78924167_s jpg` My question is is there a Pythonic or GraphAPI method to resolve the final image path an avoid sending the Access Token to the end user?
Make a Graph API call like this and you get the real URL: <blockquote> https://graph facebook com/[fbid]?fields=picture </blockquote> By the way you don't need an access token for this
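An alternative sketch, if you prefer to keep the redirect approach: follow the `/picture` redirect server-side and hand only the final URL to the client (the id and token values are the question's own placeholders):
````python
import urllib2

# urlopen follows the redirect for us, so the token never leaves the server
resp = urllib2.urlopen('https://graph.facebook.com/1015146931380136/picture'
                       '?access_token=ABCDEFG&type=picture')
real_url = resp.geturl()   # the final fbcdn URL - safe to send to the client
````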
What is the name of the modern art museum located in new Delhi?
National Gallery of Modern Art
What is wrong with my python classes? Can anyone tell me what is wrong with my class code? When I executed it the program just pop up "Indentation Error: unindent does not match any outer indentation level" The following is my code: ````class Student: totalStudents = 0 def __init__(self name year): self name = name self year = 0 self grade = [] self attend = 0 print("Add {0} to the classroom" format(self name) ) Student totalStudents = 1 def addGrade(self grade): self grade append(grade) def attendDay(self): self attend = 1 def classAverage(self grade): return sum(self grade) / len(self grade) def __str__(self name grade): return "{0} is a {1} grader studnet" format(self name self year) ````
While programming with Python you should take care of the following: - Do not use TAB characters at all or - Configure your editor to convert TAB characters to space characters Mixing tabs and spaces in indentation is exactly what produces the "unindent does not match any outer indentation level" error
process stop working while queue is not empty I try to write a script in python to convert url into its corresponding ip Since the url file is huge (nearly 10GB) so I am trying to use multiprocessing lib I create one process to write output to file and a set of processes to convert url Here is my code: ````import multiprocessing as mp import socket import time num_processes = mp cpu_count() sentinel = None def url2ip(inqueue output): v_url = inqueue get() print 'v_url '+v_url try: v_ip = socket gethostbyname(v_url) output_string = v_url+'|||'+v_ip+'\n' except: output_string = v_url+'|||-1'+'\n' print 'output_string '+output_string output put(output_string) print output full() def handle_output(output): f_ip = open("outputfile" "a") while True: output_v = output get() if output_v: print 'output_v '+output_v f_ip write(output_v) else: break f_ip close() if __name__ == '__main__': output = mp Queue() inqueue = mp Queue() jobs = [] proc = mp Process(target=handle_output args=(output )) proc start() print 'run in %d processes' % num_processes for i in range(num_processes): p = mp Process(target=url2ip args=(inqueue output)) jobs append(p) p start() for line in open('inputfile' 'r'): print 'ori '+line strip() inqueue put(line strip()) for i in range(num_processes): # Send the sentinal to tell Simulation to end inqueue put(sentinel) for p in jobs: p join() output put(None) proc join() ```` However it did not work It did produce several outputs (4 out of 10 urls in the test file) but it just suddenly stops while queues are not empty (I did check queue empty()) Could anyone suggest what is wrong?Thanks
Your workers exit after processing a single url each - they need to loop internally until they get the sentinel However you should probably just look at <a href="http://docs python org/2/library/multiprocessing html#module-multiprocessing pool" rel="nofollow">multiprocessing pool</a> instead as that does the bookkeeping for you
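A minimal sketch of the looping worker, reusing the names from the question:
````python
def url2ip(inqueue, output):
    # assumes `import socket` as in the question
    while True:
        v_url = inqueue.get()
        if v_url is None:          # the sentinel - stop this worker
            break
        try:
            v_ip = socket.gethostbyname(v_url)
            output.put(v_url + '|||' + v_ip + '\n')
        except socket.error:
            output.put(v_url + '|||-1\n')
````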
Python: check if an object is a list or tuple (but not string) This is what I normally do in order to ascertain that the input is a `list`/`tuple` - but not a `str` Because many times I stumbled upon bugs where a function passes a `str` object by mistake and the target function does `for x in lst` assuming that `lst` is actually a `list` or `tuple` ````assert isinstance(lst (list tuple)) ```` My question is: is there a better way of achieving this?
I think ````assert not isinstance(lst basestring)
```` is actually what you want - otherwise you will miss out on a lot of things which act like lists but are not subclasses of `list` or `tuple`
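If you want to accept anything iterable except strings (Python 2, to match the `basestring` check), a sketch with a hypothetical helper name:
````python
import collections

def is_listy(obj):
    # accepts lists, tuples, generators, sets, ... but rejects strings
    return (isinstance(obj, collections.Iterable)
            and not isinstance(obj, basestring))
````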
CGI python and setgid I am running a compiled python cgi script (using cxfreeze) in Apache The script among other things calls ````os setuid(some_uid) os setgid(some_gid) ```` Obviously some_uid/gid are legal and I set the sticky bit for both user and group and verified it is indeed set However on every call i get an error ````os setgid(int(self gid)) OSError: [Errno 1] Operation not permitted ```` As you may notice setuid() is successful setgid is not Which is very weird at least for me though I admit I have little experience with permissions in Linux Any thoughts/ideas are welcome I am using apache 2 2 15 python 2 6 5 RHEL 5 4 (kernel 2 6 18) Thank you
The setuid call drops the privileges you need to call setgid so your calls occur in the wrong order But why not use a <a href="http://pypi python org/pypi/privilege/1 0" rel="nofollow">library</a> that is designed for dropping privileges?
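Concretely, the fix is just to swap the two calls - a sketch with the names from the question:
````python
import os

os.setgid(some_gid)   # group first, while the process still has privileges
os.setuid(some_uid)   # then drop the user id
````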
Merging whilst keeping order in R and pandas I use this (somewhat kludgy) R function to merge data frames and keep the order of one of them: ````MergeMaintainingOrder = function(Ordered Unordered ByWhatColumn){ Ordered$TEMPINDEX = 1:length(Ordered[ 1]) MergedData = merge(Ordered Unordered by=ByWhatColumn) MergedData = MergedData[order(MergedData$TEMPINDEX) ] MergedData$TEMPINDEX = NULL return(MergedData) } ```` How can I accomplish the same thing in pandas? Is there a less kludgy way or should I just rewrite the same function? Thanks -N
In pandas a merge resets the index but you can easily work around this by resetting the index <em>before</em> doing the merge Resetting the index will create a new column called "index" which you can then use to re-create your index after the merge For example: ````Ordered reset_index() merge(Ordered Unordered on=ByWhatColumn) set_index('index') ```` See this <a href="http://stackoverflow com/questions/11976503/how-to-keep-index-when-using-pandas-merge">question/answer</a> for more discussion (hat tip to @WouterOvermeire)
Django 1 6 editing userprofile model with existing users I have a django database with a custom user profile for the user model The problem is that when I want to add new fields to my userprofile class I get the error `no such column` regarding the new field when trying to access the users table Any way then to update the userprofile model without having to rebuild the database? Thank you
Here is how to do it First install `south` ````pip install south ```` Then add it to your settings `INSTALLED_APPS` list as - ```` 'south' ```` What is south? Well south is a django app that helps you update your database without having to rebuild it Now initialise your models with - ````python manage py syncdb # this will create the south tables for migrations
python manage py schemamigration <appname&gt; --initial # (once per app) this will create the initial model files in a migrations package
python manage py migrate
```` Then every time you update your models you just need to perform an update with - ````python manage py schemamigration <appname&gt; --auto
python manage py migrate <appname&gt;
```` You can find the full documentation here - <a href="http://south readthedocs org/en/latest/tutorial/" rel="nofollow">http://south readthedocs org/en/latest/tutorial/</a>
Scraping of website that present a choice after login before retrieve the searched page I try to scrape a website that has a strange behavior I point as URL the page I want to retrieve as normal website present me login page I submit the form elements and then I want to scrape the page but after I submit the form the website present me a page with a choice (two links) to choose my profile after the click on a chosed profile I can access the page I want In mechanize I cannot click on a link to retrieve the page I want to read This is my code: ````from bs4 import BeautifulSoup as bs import urllib3 import mechanize import cookielib cj = cookielib CookieJar() br = mechanize Browser() br set_handle_robots(False) br set_cookiejar(cj) br open("the_url_I_want_scrape") br select_form(nr=2) br form set_all_readonly(False) br form['username'] = "my_user" br form["password"] = "my_pass" br form["button submit"] = "entra" br submit() html = br response() read() ```` Now if i iterate in a links I have two objects: ````for link in br links(): print link ```` That it is look like follow lines: ````Link(base_url='https://www sito com/internal/login' url='/internal/sessionProperty?sessid=1111' text='Profile1' tag='a' attrs=[('href' '/nternal/sessionProperty?sessid=1111')]) Link(base_url='https://www sito com/internal/login' url='/shres/internal/sessionProperty?sessid=3333' text='Profile2' tag='a' attrs=[('href' '/internal/sessionProperty?sessid=3333')]) ```` How can I simulate a click on it and the parse the result page? I have tried to add abolute_url to the link and then use follow_link but it hangs and not respond anymore The code I use is: ````for link in br links(): link absolute_url = mechanize urljoin(link base_url link url) br follow_link(link) ```` Someone can help me? Thank you Alex
I had a similar experience when I needed to scrape a website with heavy Javascript use (like hidden menus) and had to use <a href="https://selenium-python readthedocs org/" rel="nofollow">Selenium</a> to simulate browser behaviour instead of mechanize You could try that You could also track the POST request as stated in <a href="http://stackoverflow com/questions/1418000/how-to-click-a-link-that-has-javascript-dopostback-in-href?rq=1">this</a> question and try to simulate it
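If you do stay with mechanize, you can also select one specific link rather than looping over all of them - a sketch using the profile text shown in the question:
````python
# follow the link whose anchor text matches, then read the page it leads to
resp = br.follow_link(text='Profile1')
html = resp.read()
````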