Trying to timeit my sqrt function in Python

So I have tried to code a simple square root function and I wanted to compare it with Python's original one. Here is my code:

````
from math import sqrt
import timeit

def sqrt2(number):
    i = 1
    while i**2 < number:
        i += 1
    left = float(i - 1)
    right = float(i)
    temp = left + (right - left) / 2
    while str(temp**2) != str(number):
        if temp**2 > number:
            right = temp
            temp = left + (right - left) / 2
        elif temp**2 < number:
            left = temp
            temp = left + (right - left) / 2
        else:
            return temp
    return temp

print timeit.timeit("sqrt2(12)", number=10000, setup='from __main__ import sqrt2')
print timeit.timeit("sqrt(12)", number=10000, setup='from math import sqrt')
````

It simply takes the number and notices when integer**2 is lower and (integer+1)**2 is higher than our number, then divides the gap between those two by 2 and tries the middle number, and so on. It works for me, but when I try to timeit it, it just stops: the cursor in the console shows that it is still working, but nothing happens.
As mentioned in the comments, the issue is that you get into an infinite loop here: `while str(temp**2) != str(number):`. Your problem is that `temp**2` is a `float` while `number` is an `int`. Since you compare their string representations, the first one will be, for example, `12.0` and the second one `12`. You can either fix the condition in your `while` loop (using `str()` seems very odd indeed) or pass a `float` to your function.

And by the way, your implementation is very slow. A quick run with a float returns:

Yours: `0.461127996445`
Python's: `0.000571012496948`
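As a sketch of the fix, a bisection that compares with a small numeric tolerance instead of string equality terminates for `int` and `float` arguments alike (the `eps` value here is an arbitrary choice, not from the original post):

```python
def sqrt2(number, eps=1e-12):
    """Bisection square root; compares with a tolerance, not str()."""
    number = float(number)
    i = 1
    while i * i < number:
        i += 1
    left, right = float(i - 1), float(i)
    mid = (left + right) / 2
    while abs(mid * mid - number) > eps:
        if mid * mid > number:
            right = mid
        else:
            left = mid
        mid = (left + right) / 2
    return mid

print(sqrt2(12))  # close to math.sqrt(12)
```

This still will not beat `math.sqrt`, which is a single C-level call, but at least it finishes.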
Python script to convert csv to dump into database

I have exported a file as a CSV. In Excel the data is all in the first column (four columns per row) and it looks like this:

````
"like","name","email","message"
"yes","John Smith","John@gmail.com","My message"
etc
````

When I import the file, this is what I get:

````
'"""like"",""name"",""email"",""message"""\r""yes"",""John Smith"",""John@gmail.com"",""My message"""'
````

How would I convert this string in Python so that I can insert it into a database? I imagine that I would need to separate each row into a list and run a for loop to zip the first row with all the others, then create a dict from the list of tuples and insert that into the db. Does this method seem correct? How would I convert this odd string into that? I was having difficulty trying to do so. Thank you.

<strong>Update</strong>

Granted, this is not the most efficient/practical method to do this. Going in reverse from the end:

````
{'like': 'yes', 'name': 'John Smith', 'email': 'John@gmail.com', 'message': 'My message'}
````

This could be arrived at by doing:

````
zip(('like', 'name', 'email', 'message'), ('yes', 'John Smith', 'John@gmail.com', 'My message'))
````

And using a for loop, so the zip would be performed on `(tuple[0], tuple[n])`. So how would I convert the raw Python string into a list of tuples such that I could add it to the database? Or is there a better way to do this (excluding the use of python modules to accomplish this easily)?
Use Python's standard library to parse the CSV file and extract the data you need to put into your database:

````
import csv

with open('some.csv', 'rb') as f:
    reader = csv.reader(f, delimiter=',')
    for row in reader:
        print row
````

Source: <a href="http://docs.python.org/library/csv.html" rel="nofollow">http://docs.python.org/library/csv.html</a>
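To go from the parsed rows to the database in one step, `executemany` on a cursor works directly with the row lists. The sketch below uses inline sample data and an in-memory SQLite database as stand-ins for the real file and the real database connection:

```python
import csv
import io
import sqlite3

# Stand-in for the exported file; the real code would use open('some.csv').
raw = 'like,name,email,message\nyes,John Smith,John@gmail.com,My message\n'

reader = csv.reader(io.StringIO(raw))
header = next(reader)        # first row is the column names
rows = list(reader)          # remaining rows are the data

conn = sqlite3.connect(':memory:')
# "like" is quoted because it is an SQL keyword.
conn.execute('CREATE TABLE messages ("like" TEXT, name TEXT, email TEXT, message TEXT)')
conn.executemany('INSERT INTO messages VALUES (?, ?, ?, ?)', rows)

fetched = conn.execute('SELECT name FROM messages').fetchone()[0]
print(fetched)
```

The same shape works with other DB-API drivers; only the placeholder style (`?` vs `%s`) changes.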
pychef api pychef ChefServerNotFoundError

I am using a chef server configuring a few different nodes/environments. When asking for `env` attributes using the pychef api a few times in a row (when refreshing a web page using a python server calling the chef server), I am getting `ChefServerNotFoundError` (the first few times are fine, and on the third an exception is raised). I guess that there is some kind of firewall / anti-ddos protection on this server, but I cannot figure out how to edit these settings. Anyone have any idea?

This is a part of the method (that is called 3 times and throws an exception):

````
env_nodes = Search('node').query('chef_environment:{0}'.format(env_name))
nodes_dict = {}
for n in env_nodes:
    node = Node(n['name'])
    nodes_dict[node.name] = node['ipaddress']
````

And this is the traceback:

````
File "C:\env\lib\site-packages\chef\search.py", line 91, in __getitem__
    row_value = self.data['rows'][value]
File "C:\env\lib\site-packages\chef\search.py", line 59, in data
    self._data = self.api[self.url]
TypeError: 'NoneType' object is not subscriptable
````
When using PyChef in a webapp or other multi-threaded system, you should really pass in the API object explicitly. There is a system to track a default API target in a threadlocal for the purposes of making simple scripts easier, but in retrospect this was probably a mistake, as it leads to these confusing issues. This would be a better version of that code (also faster):

````
nodes_dict = {row.object.name: row.object['ipaddress']
              for row in Search('node', 'chef_environment:{}'.format(env_name), api=api)}
````

Where `api` is the return value of `chef.autoconfigure()` or some other ChefAPI object.
Sorting a list of lists in Python

````
c2 = []
row1 = [1, 22, 53]
row2 = [14, 25, 46]
row3 = [7, 8, 9]
c2.append(row2)
c2.append(row1)
c2.append(row3)
````

`c2` is now:

````
[[14, 25, 46], [1, 22, 53], [7, 8, 9]]
````

How do I sort `c2` in such a way that, for example:

````
for row in c2:
    sort on row[2]
````

the result would be:

````
[[7, 8, 9], [14, 25, 46], [1, 22, 53]]
````

The other question is: how do I first sort by row[2] and within that set by row[1]?
Well, your desired example seems to indicate that you want to sort by the last index in the list, which could be done with a comparison function (Python 2 only):

````
sorted_c2 = sorted(c2, cmp=lambda l1, l2: l1[-1] - l2[-1])
````
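On Python 3 the comparison-function style is gone entirely, so the `key` form is the safer choice; it also answers the follow-up about sorting by `row[2]` first and `row[1]` within ties, since a key can be a tuple:

```python
c2 = [[14, 25, 46], [1, 22, 53], [7, 8, 9]]

# Sort by the last element of each row.
by_last = sorted(c2, key=lambda row: row[-1])

# Sort by row[2] first, then by row[1] within equal row[2] values.
by_both = sorted(c2, key=lambda row: (row[2], row[1]))

print(by_last)
```

Tuple keys compare element by element, which is exactly the "sort by A, then by B" behaviour.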
Counting items in a List used as a Dictionary Value

I have the following dictionary of lists working fine, but the printing at the end is borked. I cannot get the counter to return the correct number of items in each list as soon as I get more than one key in my dictionary! It is supposed to tell me how many people have chosen a particular dessert as their favourite.

````
desserts = {}
name_vote = input('Name:vote ')
while name_vote != '':
    no_colon_name_vote = name_vote.replace(":", " ")
    listed_name_vote = no_colon_name_vote.split()
    name = listed_name_vote[0]
    vote = ' '.join(listed_name_vote[1:])
    if vote not in desserts:
        desserts[vote] = [name]
    else:
        desserts[vote].append(name)
    name_vote = input('Name:vote ')

for dessert in desserts:
    count = sum(len(entries) for entries in desserts.values())
    print(dessert, count, 'vote(s):', ', '.join(desserts[dessert]))
````

Desired output:

````
apple pie 1 vote(s): Georgina
gelato 2 vote(s): Sophia, Kirsten
chocolate 3 vote(s): Greg, Will, James
````

But instead I get all three values set to 6!
`count` is based on the `sum` over the entire `desserts` dict, not `desserts[dessert]`. Try this:

````
count = len(desserts[dessert])
````

Also consider using <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow">`defaultdict`</a>.

<hr>

Let us take a step back and try this instead:

````
import collections

desserts = collections.defaultdict(list)
while True:
    name_vote = input('Name:vote ')
    if not name_vote:
        break
    name, vote = name_vote.split(':')
    desserts[vote].append(name)

for dessert in desserts:
    count = len(desserts[dessert])
    print(dessert, count, 'vote(s):', ', '.join(desserts[dessert]))
````
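A self-contained version of the `defaultdict` tally (taking the votes as a list instead of reading them from `input()`, so it can run non-interactively) might look like this:

```python
import collections

def tally(votes):
    """votes is a list of 'Name:dessert' strings; returns dessert -> list of names."""
    desserts = collections.defaultdict(list)
    for entry in votes:
        name, vote = entry.split(':', 1)
        desserts[vote].append(name)
    return desserts

results = tally(['Georgina:apple pie', 'Sophia:gelato', 'Kirsten:gelato'])
for dessert, names in results.items():
    print(dessert, len(names), 'vote(s):', ', '.join(names))
```

Because `len()` is taken per list, each dessert reports only its own votes.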
Pandas: Feeding index values to apply

I am having problems when trying to use apply on the result of a groupby operation. I have the following 2 dataframes:

````
>>> df1.head()
         col1  col2  col3
id1
2001991     0   0.0     0
1501102     3   0.1     1
1701072     0   0.0     0
2001022     0   1.2    50
2001212     3   0.2     4

>>> df2.head()
     id2  value      id1
0  24400   6.28  2001022
1  24400   3.40  2001011
2  24037  12.30  2002011
3  24037   3.00  2001382
4  24037  20.00  1701071
````

I first did a groupby and a sum on df2:

````
>>> df2 = df2.groupby(['id2', 'id1']).sum()
>>> df2.head()
              value
id2 id1
81  1701071   49.94
88  1701071  759.22
    2001011   73.26
    2001382  199.70
    2003071   25.00
````

And I now would like to use apply, but I need to feed it id1, which is part of the index, so I get an error when I try to do the following:

````
df2['new'] = df2.apply(lambda row: min(row['value'], df1.loc[row['id1'], 'col1']), axis=1)
````

What is the right way of doing this?

[BY THE WAY, I have also tried to merge df1 and df2 in one table (so that each row in df2 has a field with the corresponding col1, col2 and col3 from df1), but when I do the groupby and sum() it aggregates the col1, col2 and col3 values (which I do not want).]
You can reset the index; then it will be an ordinary column:

````
df2.reset_index(level='id1')
````

Or provide `as_index=False` to the groupby call:

````
df2.groupby(['id2', 'id1'], as_index=False).sum()
````
In 1540 Europeans first encountered Native Americans in what state?
null
Just installed BeautifulSoup, Python 3.3.0. Does anyone know how to fix it? I am using Mac OS 10.8.2.

````
>>> from bs4 import BeautifulSoup
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/bs4/__init__.py", line 359
    print soup.prettify()
             ^
SyntaxError: invalid syntax
````
In Python 3, `print` is a function; that line should be:

````
print(soup.prettify())
````

Install `bs4` correctly, or use a newer version if it is a bug; `beautifulsoup4==4.1.3` works fine on Python 3.3.
How do I deploy my Google App Engine project when not behind a proxy, in Python 2.7, despite this urllib2 error?

From this question (<a href="http://stackoverflow.com/questions/5520603/cannot-deploy-my-app-to-google-app-engine">Can&#39;t deploy my app to Google App Engine</a>) I see that perhaps some imports are not allowed on production GAE. I have cut down to the following:

````
import webapp2  # Comes with latest GAE w/ Python 2.7
import os       # for loading appropriate files on the server
from google.appengine.ext import db
from google.appengine.api import channel
from google.appengine.ext.webapp import template
````

I am not directly linking to a website in my Python script. This is the only line where I refer to a file:

````
path = os.path.join(os.path.dirname(__file__), 'myfile.html')
````

I am working at home, not behind a proxy. Despite all these things that I have seen as factors in other questions on SO and in various search results, I continue getting the following error:

````
2012-10-11 13:22:01,890 ERROR appcfg.py:2182 An error occurred processing file '': <urlopen error [Errno 11004] getaddrinfo failed>. Aborting.
Traceback (most recent call last):
  File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 171, in <module>
    run_file(__file__, globals())
  File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 167, in run_file
    execfile(script_path, globals_)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4191, in <module>
    main(sys.argv)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4182, in main
    result = AppCfgApp(argv).Run()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2579, in Run
    self.action(self)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3927, in __call__
    return method()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3041, in Update
    self.UpdateVersion(rpcserver, self.basepath, appyaml)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3023, in UpdateVersion
    lambda path: self.opener(os.path.join(basepath, path), 'rb'))
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2152, in DoUpload
    self.resource_limits = GetResourceLimits(self.rpcserver, self.config)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 357, in GetResourceLimits
    resource_limits.update(GetRemoteResourceLimits(rpcserver, config))
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 328, in GetRemoteResourceLimits
    version=config.version)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 383, in Send
    f = self.opener.open(req)
  File "C:\Python27\lib\urllib2.py", line 400, in open
  File "C:\Python27\lib\urllib2.py", line 418, in _open
  File "C:\Python27\lib\urllib2.py", line 378, in _call_chain
  File "C:\Python27\lib\urllib2.py", line 1215, in https_open
  File "C:\Program Files (x86)\Google\google_appengine\lib\fancy_urllib\fancy_urllib\__init__.py", line 383, in do_open
    raise url_error
urllib2.URLError: <urlopen error [Errno 11004] getaddrinfo failed>
2012-10-11 13:22:01 (Process exited with code 1)
````

So there is the error, and I really do not think it is on my end because of how minimal I have made my code. To recap:

- Unlike other questions and their unmarked solutions, I am not behind a proxy;
- I do not have any internet URLs in my python script;
- I am using Python 2.7 and the Deploy button on the Google App Engine launcher.

One solution I have not tried is setting an environment variable (http_proxy, https_proxy) to my proxy, because I do not know what proxy to set it to; I do not have one. In my project settings I have <a href="https://myappname.appspot.com" rel="nofollow">https://myappname.appspot.com</a> as the Deployment Server. How do I proceed? Any red flags you can identify immediately from what I have described?

Note:

- Since starting to write this and continually clicking on more suggestions from StackOverflow, I have tried setting a new environment variable http_proxy to my IP, my IP:8080, and "host_or_ip:port" (from the question <a href="http://stackoverflow.com/questions/6468191/google-app-engine-launcher-slow-to-launch-behind-proxy-possibly-because-of-ch">Google App Engine Launcher Slow to Launch Behind Proxy, Possibly because of check for updates?</a>)
- I will be trying the same values with https_proxy while you good folks read and put out your ideas.
Make sure you are using the correct deployment server. It should be: <a href="http://appengine.google.com" rel="nofollow">http://appengine.google.com</a>
int to string concatenation

Guys, probably this is a silly question. I searched on many sites but my program is not working :( I read some values from an MCP3008 connected to a Raspberry Pi and then I send them to PureData using this python script:

````
os.system("echo '" + str(value) + ";' | pdsend 3000 localhost")
````

where "value" contains the readings from the sensor. But the script is too slow, so I decided to move to C:

````
int main() {
    for ( ; ; ) {
        int value = mcp3008_value(0, 18, 23, 24, 25);
        char message[] = "";
        char str[50];
        sprintf(str, "%d", value);
        strcpy(message, "echo '");
        strcat(message, str);
        strcat(message, ";' | pdsend 3000 localhost");
        printf(message);
    }
    return 0;
}
````

but when I execute it I get: Segmentation fault. Is there a simple method to concatenate ints and strings like in Python? And do you think that it will be faster than python? Thanks guys ;)
Your line

````
char message[] = "";
````

is wrong: you need to allocate memory for the buffer, because as written it only allocates 1 byte (the terminating NUL), so the following `strcpy`/`strcat` calls write past the end of the array. You can use the following instead:

````
char message[1000] = "";
````
Oauth aware flow issue

Problem statement: calling methods on the oauth_aware decorator causes a NoneType error.

I looked at this page: <a href="https://developers.google.com/api-client-library/python/guide/google_app_engine" rel="nofollow">https://developers.google.com/api-client-library/python/guide/google_app_engine</a>. Specifically, the code in the guide under "In the following code snippet, the OAuth2DecoratorFromClientSecrets class is used to create an oauth_aware decorator, and the decorator is applied to a function that accesses the Google Tasks API". I tried to create something similar (included below). My app is giving this error:

````
Traceback (most recent call last):
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
    rv = self.handle_exception(request, response, e)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
    rv = self.router.dispatch(request, response)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
    return route.handler_adapter(request, response)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
    return handler.dispatch()
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
    return self.handle_exception(e, self.app.debug)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
    return method(*args, **kwargs)
  File "/base/data/home/apps/APPNAME/DIRECTORYNAME/main.py", line 39, in get
    url = decorator.authorize_url()
  File "/base/data/home/apps/APPNAME/DIRECTORYNAME/oauth2client/appengine.py", line 798, in authorize_url
    url = self.flow.step1_get_authorize_url()
AttributeError: 'NoneType' object has no attribute 'step1_get_authorize_url'
````

When I do `logging.info(decorator)` it returns, and when I do `logging.info(dir(decorator))` I get:

['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_auth_uri', '_callback_path', '_client_id', '_client_secret', '_create_flow', '_credentials_class', '_credentials_property_name', '_display_error_message', '_in_error', '_kwargs', '_message', '_revoke_uri', '_scope', '_storage_class', '_tls', '_token_response_param', '_token_uri', '_user_agent', 'authorize_url', 'callback_application', 'callback_handler', 'callback_path', 'credentials', 'flow', 'get_credentials', 'get_flow', 'has_credentials', 'http', 'oauth_aware', 'oauth_required', 'set_credentials', 'set_flow']

but any of those methods, like `decorator.http()` or `decorator.has_credentials()`, triggers a NoneType error.

My code:

````
import webapp2
import logging
import jinja2
import pprint
import os
import json
import time
import httplib2

from apiclient.discovery import build
from apiclient.errors import HttpError
from google.appengine.ext.webapp.util import run_wsgi_app
from oauth2client.appengine import OAuth2DecoratorFromClientSecrets
from oauth2client.client import AccessTokenRefreshError
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import run
from google.appengine.api import urlfetch

decorator = OAuth2DecoratorFromClientSecrets(
    os.path.join(os.path.dirname(__file__), 'client_secrets.json'),
    scope='https://www.googleapis.com/auth/bigquery')

# Google App Engine project ID
PROJECT_NUMBER = 'XXXXXXXXXXXXX'

bigquery_service = build('bigquery', 'v2')


class MainHandler(webapp2.RequestHandler):

    def get(self):
        if decorator.has_credentials():
            logging.info('has credentials')
        else:
            logging.info('bouncing credentials')
            logging.info(decorator)
            url = decorator.authorize_url()
            return self.redirect(url)
        jinja_environment = self.jinja_environment
        template = jinja_environment.get_template("/index.html")
        self.response.out.write(template.render())

    @property
    def jinja_environment(self):
        jinja_environment = jinja2.Environment(
            loader=jinja2.FileSystemLoader('views')
        )
        return jinja_environment


app = webapp2.WSGIApplication([
    ('/', MainHandler),
    (decorator.callback_path, decorator.callback_handler())
], debug=True)
````
From perusing the source, I think the "decorator" should be used to decorate something before you use it to do other things. In this case you probably want to decorate `get` with <a href="https://code.google.com/p/google-api-python-client/source/browse/oauth2client/appengine.py#752" rel="nofollow">`oauth_aware`</a>:

````
class MainHandler(webapp2.RequestHandler):

    @decorator.oauth_aware
    def get(self):
        if decorator.has_credentials():
            ...
        else:
            url = decorator.authorize_url()
            ...
````
When is {!s} meaningful in format()?

````
>>> import math
>>> print "{} {!s} {!r}".format(math.pi, math.pi, math.pi)
3.14159265359 3.14159265359 3.141592653589793
````

When is the `!s` inside `{}` meaningful? Or, in other words, when is calling `str()` on a `format()` argument meaningful?
Whenever the type to be printed implements `__format__` in a way that is unsuitable for your purpose. In that case you can use `!s` or `!r` (and, in Python 3, `!a` for `ascii()`) to specify an alternate conversion. You certainly will not need it for printing floating-point numbers, but probably for custom types.
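For example, a custom type whose `__format__` is deliberately different from its `__str__` (an illustrative class, not from the question):

```python
class Celsius:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        # Human-readable form, used by str() and by {!s}.
        return '{} degrees C'.format(self.value)

    def __format__(self, spec):
        # Bare number honoring the format spec, used by a plain {}.
        return format(self.value, spec)

t = Celsius(21.5)
print('{}'.format(t))    # goes through __format__
print('{!s}'.format(t))  # forces str() first
```

Here `{!s}` is meaningful precisely because it bypasses `__format__` and falls back to the string form.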
Can the order of code make this program faster?

Hi, this is my first post. I am learning how to write code, so technically I am a newbie. I am learning python and I am still at the very basics. I was getting to know the if statement and I tried to mix it with other concepts (function definition, input, variables) in order to get a wider vision of python. I wrote some code without a specific idea of what I wanted to do; I just wanted to mix everything that I have learned so far, so probably I overdid it and it is not practical, but it "works" when I run it.

The question that I have is not about how to do it more efficiently or with less code; it is about the order of code in programming in general. Here I will show 2 different orderings that give the same result with exactly the same code: in (1) I define a function on the first line; in (2) I define the same function closer to where I use it, on line 5. Which one is faster? Is defining a function "closer" to where I need it impractical for the complexity of larger programs (but does it make them faster)? Or does defining a function "far" from where I need it make a larger program slower when running (but is also more practical)?

(1)

````
def t(n1, n2):
    v = n1 - n2
    return abs(v)

a = int(input('how old are you? \n'))
b = int(input('how old is your best friend? \n'))
c = t(a, b)
if a == b:
    print('you are both the same age')
else:
    print('you are not the same age\nthe difference of years is %s year(s)' % c)
input()
````

(2)

````
a = int(input('how old are you? \n'))
b = int(input('how old is your best friend? \n'))

def t(n1, n2):
    v = n1 - n2
    return abs(v)

c = t(a, b)
if a == b:
    print('you are both the same age')
else:
    print('you are not the same age\nthe difference of years is %s year(s)' % c)
input()
````
It does not make the slightest bit of difference. A `def` statement is executed exactly once, when the interpreter reaches it, so moving it earlier or later only changes when that one cheap step happens. Do whatever is most readable.
Data model dependencies: can I simplify these objects?

Essentially I have 6 object models: 3 for my regular objects and 3 for the dependencies of those objects. Individually, these objects can be dependent on one or multiple instances of each of the 3 regular objects.

<strong>My Question:</strong> Is this best practice? I essentially want to be able to add a new dependency to a regular object when needed. For example:

````
a = A.objects.get(id=1)
adependency = ADependencies.objects.get(dependentTo=a)
````

Then I would have an object with all of the dependencies for `a`.

<hr>

Alternatively, I did think of a way to merge the 3 dependency objects into one; however, I am unsure whether it is good practice:

````
class Dependencies(models.Model):
    id = models.CharField(max_length=16)
    dependentTo = models.CharField(max_length=16)
    a = models.ManyToManyField(A)
    b = models.ManyToManyField(B)
    c = models.ManyToManyField(C)
````

In this scenario I do not use a ForeignKey to map the dependentTo object. Instead, I would use the object id to pull the object, allowing me to be class agnostic; however, this would require unique ids throughout the 3 regular objects.

````
a = A.objects.get(id=1)
adependency = ADependencies.objects.get(dependentTo=a.id)
````

<hr>

<strong>One more idea!</strong> Is it possible to still use ForeignKeys but pass in a string with the class name instead?

````
class Dependencies(models.Model):
    id = models.CharField(max_length=16)
    type = models.CharField(max_length=16)
    dependentTo = models.ForeignKey(type)
    a = models.ManyToManyField(A)
    b = models.ManyToManyField(B)
    c = models.ManyToManyField(C)
````

<hr>

Object Models:

````
class A(models.Model):
    id = models.CharField(max_length=16)
    title = models.CharField(max_length=32)
    summary = models.CharField(max_length=256)

class B(models.Model):
    id = models.CharField(max_length=16)
    title = models.CharField(max_length=32)
    summary = models.CharField(max_length=256)
    a = models.ForeignKey(A)

class C(models.Model):
    id = models.CharField(max_length=16)
    title = models.CharField(max_length=32)
    summary = models.CharField(max_length=256)
    b = models.ForeignKey(B)

class ADependencies(models.Model):
    id = models.CharField(max_length=16)
    dependentTo = models.ForeignKey(A)
    a = models.ManyToManyField(A)
    b = models.ManyToManyField(B)
    c = models.ManyToManyField(C)

class BDependencies(models.Model):
    id = models.CharField(max_length=16)
    dependentTo = models.ForeignKey(B)
    a = models.ManyToManyField(A)
    b = models.ManyToManyField(B)
    c = models.ManyToManyField(C)

class CDependencies(models.Model):
    id = models.CharField(max_length=16)
    dependentTo = models.ForeignKey(B)
    a = models.ManyToManyField(A)
    b = models.ManyToManyField(B)
    c = models.ManyToManyField(C)
````

Thanks!
I think inheritance may simplify your data structures a lot. Let us leave model `A` as-is:

````
class A(models.Model):
    id = models.CharField(max_length=16)
    title = models.CharField(max_length=32)
    summary = models.CharField(max_length=256)
````

Your classes `B` and `C` have those fields of `A` plus one additional each, so we may rewrite them as:

````
class B(A):
    a = models.ForeignKey(A)

class C(A):
    b = models.ForeignKey(B)
````

Now, as we have one base class, we only need one dependency class:

````
class ADependencies(models.Model):
    id = models.CharField(max_length=16)
    dependentTo = models.ForeignKey(A)
    dependents = models.ManyToManyField(A)
````

Now you can set any of `A`, `B` and `C` objects as `dependentTo` and `dependents`. If you need to know the true type of the main object of a dependency, the object of type `A` will have either attribute `b`, attribute `c`, or none of them. You may also query on those attributes:

````
ADependencies.objects.filter(dependentTo__b__isnull=False)
````

This structure is more scalable and more easily maintainable, because if you need to add one more model you only need to write the unique code for it and do not have to deal with dependency classes.

One more way to simplify your models is to have just one model:

````
class A(models.Model):
    id = models.CharField(max_length=16)
    title = models.CharField(max_length=32)
    summary = models.CharField(max_length=256)
    a = models.ForeignKey('self', null=True)
````

This way you only have model `A`, and you may leave field `a` empty (an indication that it is just a simple `A` instance) or set the value of `a` (meaning the object is of type `B` or `C`). Then your dependency class is the same as in the previous example, but you do not need to deal with those backwards relations to test for the true object type. If you really need to distinguish between `B` and `C` objects, you may write your `A` class like this:

````
class A(models.Model):
    A = 0
    B = 1
    C = 2
    TYPE_CHOICES = (
        (A, "A"),
        (B, "B"),
        (C, "C"),
    )
    id = models.CharField(max_length=16)
    title = models.CharField(max_length=32)
    summary = models.CharField(max_length=256)
    a = models.ForeignKey('self', null=True)
    obj_type = models.IntegerField(choices=TYPE_CHOICES)
````

This way you have just one model class and one dependency class, and can tell what type the object is by checking `obj_type`. You should also implement some check to prevent cases where `a` is not null while `obj_type` is A, and similar. Let me know if this solution is what you need.
Mr. Rubenstein was honored in 1994 by which university?
null
Inserting data in postgres using a python list

I want to enter a python list into an insert query and enter data into postgres. I am using executemany but still getting the error "TypeError: not all arguments converted during string formatting".

````
twitter_data = ['eOc6ZEQiNCo', u'TYihcUfotnM', u'7vneoA-vY8U', '73RRIEXsdyE']
query = "INSERT INTO videograbber (video_id) VALUES (%s)"
cur.executemany("INSERT INTO videograbber (video_id) VALUES (%s)", [twitter_data])
con.commit()
````

Can someone please tell me what I am doing wrong here?
Your list must be a list of <em>tuples</em>. You want something like:

````
twitter_data = [
    ('video_id_1',),
    ('video_id_2',),
]
````

Note the trailing commas, making these 1-tuples instead of just parentheses around an expression. This is a 2-element list of 1-tuples each containing a `str`, so when psycopg2 iterates the tuples within the list it gets the expected one parameter per tuple.

If you just wrote `[('video_id_1'), ('video_id_2')]` you would be producing a 2-element list containing two `str` objects. Strings are iterable, so `psycopg2` will attempt to iterate them, treating each <em>character</em> as a query parameter. It will have fewer placeholders than characters, so it will produce the error you showed.

You can convert a list of `str` into a list of 1-tuples of `str` with a list comprehension, e.g.:

````
twitter_data_tuples = [(x,) for x in twitter_data]
````
Unexpected output when filtering Python list: What am I doing wrong?

I am trying to filter a list. Here is the code:

````
test = ['aaa', 'bbb', 'ccc', 'ddd', 'footer', 'header']

def rm_hf(x):
    return x != 'footer'

filter(rm_hf, test)
````

Results in:

````
>>> ['aaa', 'bbb', 'ccc', 'ddd', 'header']
````

That is the expected result: it finds 'footer' in the list and removes it. Now I want to remove both 'header' and 'footer', so I do this:

````
test2 = ['aaa', 'bbb', 'ccc', 'ddd', 'footer', 'header']

def rm_hf2(x):
    return x != 'footer' or x != 'header'

filter(rm_hf2, test2)
````

Results in:

````
>>> ['aaa', 'bbb', 'ccc', 'ddd', 'footer', 'header']
````

Now that is weird: it just gives out both 'footer' and 'header' instead of filtering them. What did I do wrong? I think my logic is correct.
Your logic is correct because you think like a human. Your computer does not. He reads every element from your list, then stumbles upon 'footer'. "Is footer different than footer?" he says. "NO! It is the same string! It evaluates to False. Let us see the next condition." "Is footer different than header? YES!" The condition is therefore `False or True`, which obviously evaluates to `True`. You want an `and`, not an `or`:

````
def rm_hf2(x):
    return x != 'footer' and x != 'header'
````

You could also use a tuple and the `in` keyword, which is more readable:

````
def rm_hf2(x):
    return x not in ('footer', 'header')
````

It is important that you understand what is really going on with "and" and "or", though. And let us be honest: if something is not working as you think it should, the problem most likely lies in your own code and not in the Python language itself.
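To see both points in action, here is the corrected filter plus a check that the original `or` condition really is `True` for every element:

```python
test2 = ['aaa', 'bbb', 'ccc', 'ddd', 'footer', 'header']

def rm_hf2(x):
    return x not in ('footer', 'header')

kept = list(filter(rm_hf2, test2))
print(kept)

# 'footer' != 'footer' is False, but 'footer' != 'header' is True,
# so the buggy `or` condition never rejects anything:
always_true = [x != 'footer' or x != 'header' for x in test2]
```

Since no single string can equal both 'footer' and 'header' at once, `x != 'footer' or x != 'header'` is a tautology.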
Searching two of the three characters in python

````
info = ('x', 'y', 'z')
info2 = ('x', 'Bob', 'y')

match = False
if any(all x in info for x in info2):
    match = True
    print("True")
else:
    print("False")
````

The question is: is there a way I can make this work so that it only prints `True` when `x` and either `y` or `z` are in `info2`?
The way I read this, you want the first element of `info` (`info[0]`) and at least one other element of `info` to be in `info2`:

````
if info[0] in info2 and any(i in info2 for i in info[1:]):
    # do stuff
````
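Wrapped up as a runnable sketch (the helper name `matches` is just illustrative):

```python
def matches(info, info2):
    """True when info[0] and at least one other element of info appear in info2."""
    return info[0] in info2 and any(i in info2 for i in info[1:])

print(matches(('x', 'y', 'z'), ('x', 'Bob', 'y')))  # x and y present
print(matches(('x', 'y', 'z'), ('x', 'Bob', 'q')))  # x alone is not enough
```

`any()` short-circuits, so it stops scanning `info[1:]` as soon as one element is found.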
Sqlalchemy Does Not Emit Correct SQL for MSSQL GETDATE() server default

I have the following declarative SQL class:

````
class Test(Base):
    ID = Column(Integer, primary_key=True)
    SYSTEM_TIMESTAMP = Column(DateTime, nullable=False, server_default='GETDATE()')
````

But using `Base.metadata.create_all(engine)` causes the following SQL to be emitted for SYSTEM_TIMESTAMP:

````
[SYSTEM_TIMESTAMP] DATETIME NOT NULL DEFAULT 'GETDATE()'
````

An example of how GETDATE should be emitted, from <a href="http://www.w3schools.com/sql/func_getdate.asp" rel="nofollow">http://www.w3schools.com/sql/func_getdate.asp</a>:

````
OrderDate datetime NOT NULL DEFAULT GETDATE()
````

I have tried using `func.getdate` instead of `'GETDATE()'` as the server_default but get the following error:

````
ArgumentError: Argument 'arg' is expected to be one of type '<type 'basestring'>' or '<class 'sqlalchemy.sql.expression.ClauseElement'>' or '<class 'sqlalchemy.sql.expression.TextClause'>', got '<class 'sqlalchemy.sql.expression._FunctionGenerator'>'
````

SQLAlchemy version: 0.9.6
Pyodbc version: 3.0.5
Python version: 2.7

How do I get SQLAlchemy to emit the correct SQL to set the server default?
The solution is to define the server default using the `text()` function of SQLAlchemy: ````import sqlalchemy as sa SYSTEM_TIMESTAMP = Column(DateTime, nullable=False, server_default=sa.text("GETDATE()")) ```` This will then correctly emit the server default: ````[SYSTEM_TIMESTAMP] DATETIME NOT NULL DEFAULT GETDATE() ```` This is documented here: <a href="http://docs.sqlalchemy.org/en/latest/core/defaults.html#server-side-defaults" rel="nofollow">http://docs.sqlalchemy.org/en/latest/core/defaults.html#server-side-defaults</a>
Who created a book about comics from a philosophical point of view?
David Carrier
Pandas backfilling values based on a datetime index and a column I have a `Pandas` data frame with two sets of dates a `DatetimeIndex`for the index and a column named `date2` containing datetime objects a value and an id For some id's I am missing values where `date2` is equal to the index in this case I want to fill the row/values with the values of the previous DatetimeIndex and id's values The `date1` represents the current point in time and `date2` represents the last date Each `df[df id == id]` can be treated as its own dataframe however the data is stored in one giant dataframe 500k rows <strong>Example: Given</strong> ```` date2 id value index 2006-01-24 2006-01-26 3 3 2006-01-25 2006-01-26 1 1 2006-01-25 2006-01-26 2 2 2006-01-26 2006-01-26 2 2 1 2006-01-27 2006-02-26 4 4 ```` In this example were missing a `index == date2` row for id 1 id 2 and for id3 I would like to backfill each missing row with the previous index value respective to it is id <strong>I would like to return:</strong> ```` date2 id value index 2006-01-24 2006-01-26 3 3 2006-01-25 2006-01-26 1 1 2006-01-25 2006-01-26 2 2 2006-01-26 2006-01-26 1 1 #<---- row added 2006-01-26 2006-01-26 2 2 1 2006-01-26 2006-01-26 3 3 #<---- row added 2006-01-27 2006-02-26 4 4 2006-02-26 2006-02-26 4 4 #<---- row added ````
This is not very clean, but it is a possible solution. First I moved the index into a column `date1`: ````In [228]: df Out[228]: date1 date2 id value 0 2006-01-24 2006-01-26 3 3.0 1 2006-01-25 2006-01-26 1 1.0 2 2006-01-25 2006-01-26 2 2.0 3 2006-01-26 2006-01-26 2 2.1 ```` Then I grouped by each pair of dates, adding ids to those pairs that match. This involves breaking the DataFrame into a list of subframes and using `concat` to stick it back together. ````In [229]: dfs = [] ...: for (date1, date2), df_gb in df.groupby(['date1', 'date2']): ...: if date1 == date2: ...: to_add = list(set([1, 2, 3]) - set(df_gb['id'])) ...: df_gb = df_gb.append(pd.DataFrame({'id': to_add, 'date1': date1, 'date2': date2, 'value': np.nan}), ignore_index=True) ...: dfs.append(df_gb) In [231]: df = pd.concat(dfs, ignore_index=True) In [232]: df Out[232]: date1 date2 id value 0 2006-01-24 2006-01-26 3 3.0 1 2006-01-25 2006-01-26 1 1.0 2 2006-01-25 2006-01-26 2 2.0 3 2006-01-26 2006-01-26 2 2.1 4 2006-01-26 2006-01-26 1 NaN 5 2006-01-26 2006-01-26 3 NaN ```` Finally I sorted and filled the missing values ````In [233]: df = df.sort(['id', 'date1', 'date2']) In [234]: df = df.fillna(method='ffill') In [236]: df.sort(['date1', 'date2']) Out[236]: date1 date2 id value 0 2006-01-24 2006-01-26 3 3.0 1 2006-01-25 2006-01-26 1 1.0 2 2006-01-25 2006-01-26 2 2.0 4 2006-01-26 2006-01-26 1 1.0 3 2006-01-26 2006-01-26 2 2.1 5 2006-01-26 2006-01-26 3 3.0 ````
Python requests/urllib3 NewConnectionError only when script runs on a cronjob from the office Weird issue I cannot figure out. I have a script that uses Python's requests library and is run on a cronjob. When I am at home over a VPN it works fine. If I am at the office, the cronjob returns a connection error, more specifically a NewConnectionError [error 60: connection timeout] (which is raised by urllib3). The weird thing is, if I run the script manually from the command line, it does not have a problem. I only have a high level understanding of how requests/urllib3/cron works. I am guessing the connection is cached in some way, but I am not sure. Does anyone know what could be causing this? The script itself is a sync utility that creates a connection to bitbucket's api. I created an api-wrapper to achieve this, which is essentially just an object to build queries with. Here is a snippet from the wrapper: ````def __init__(self, username, password): s = requests.Session() s.auth = (username, password) self._bitbucket_session = s def _get_context(self, url, paging): try: r = self._bitbucket_session.get(url) if r.status_code == 403: raise self.BitbucketAPIError('BitbucketAPIError: {}'.format(r.reason)) if 'error' in r.json(): raise self.BitbucketAPIError('BitbucketAPIError: {}'.format(r.json()['error']['message'])) except HTTPError as e: print("HTTP Error: {}".format(e)) except ConnectTimeout as e: print("The request timed out while trying to connect to the remote server: {}".format(e)) except ConnectionError as e: print("Connection Error: {}".format(e)) except Timeout as e: print("Connection Timed out: {}".format(e)) except RequestException as e: print("Unhandled exception: {}".format(e)) ```` And here is a simplified version of the sync client that is being "cronned": ````bapi = BitbucketApi(username, password) # blah blah blah update_members() update_repository() bapi.close() ```` Here is the close method: ````def close(self): self._bitbucket_session.close() ````
Probably there is a proxy involved. When the script is run from your home there is no proxy, or the proxy is properly configured, so there is no problem. When run from the command line at your office, the shell environment is properly configured to set an HTTP/S proxy via environment variables: ````export http_proxy="http://proxy.com:3128" export https_proxy="https://proxy.com:3128" ```` (upper case variables are also effective, i.e. HTTP_PROXY, HTTPS_PROXY) However, when the script is run from `cron`, the environment does not have the proxy variables set, and the connection request times out. Create a wrapper for the script and then execute the wrapper script from cron, e.g. ````#!/bin/sh export HTTP_PROXY="http://proxy:1234" export HTTPS_PROXY="https://proxy:1234" python your_script.py ````
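A small illustration of why this happens: requests-style clients pick their proxy up from the process environment, and cron's environment is nearly empty. The helper `effective_proxies` below is hypothetical, just to make the difference visible:

```python
def effective_proxies(environ):
    """Return the proxy variables a requests-style client would honour."""
    keys = ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY')
    return {k: environ[k] for k in keys if k in environ}

# Interactive shell: proxy exported, so the request gets through.
shell_env = {'HTTP_PROXY': 'http://proxy:1234', 'PATH': '/usr/bin'}
print(effective_proxies(shell_env))  # {'HTTP_PROXY': 'http://proxy:1234'}

# Typical cron environment: no proxy variables, so the connection times out.
cron_env = {'PATH': '/usr/bin'}
print(effective_proxies(cron_env))   # {}
```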
Creating DataTable from Protocol Buffer Good evening everyone. I am stuck and cannot figure out what to do. I have an application where I am storing data from protocol buffer messages in a DataTable for each message. To do this I need to get the field names for each of the columns. For the fields that are enums, I create another DataTable with those values and put all of these tables into a data set for a single message. Below is my working code to do this: ````def makeMessageTables(self, messageType): # Instance variable for the message DataSet self.messageDataSet = DataSet() ################################################################### # Create a table in the DataSet to hold the message structure ################################################################### messageTable = DataTable() messageTable.TableName = 'messageTable' # Construct columns in the table to correspond with the message fields messageFields = messageType.DESCRIPTOR.fields for field in messageFields: messageTable.Columns.Add(field.name) # Add the table to the DataSet self.messageDataSet.Tables.Add(messageTable) ################################################################### # Make a table for each field that has an enum type associated # with it The table has the display values and the storage # values in it for setting up the ComboBox in the DataGridView ################################################################### for field in messageFields: if field.enum_type != None: tableName = '{}_enumTable'.format(field.enum_type.name) enumTable = DataTable() enumTable.TableName = tableName enumTable.Columns.Add('enumDisplay') enumTable.Columns.Add('enumValue') for value in field.enum_type.values: newRow = enumTable.NewRow() newRow['enumDisplay'] = value.name newRow['enumValue'] = value.number enumTable.Rows.Add(newRow) self.messageDataSet.Tables.Add(enumTable) ```` Now today I hit a message that has sub-messages. I want a separate table for each of the sub-messages. The problem I am having is
that I do not know how to break down the message to get the fields names and other information from the sub-messages I have provided a representative sample message like I am trying to deal with below ````message SystemOneStatusDetails { required int32 field1 = 1; required int32 field1 = 1; required int32 field1 = 1; required int32 field1 = 1; } message SystemTwoStatusDetails { required int32 field1 = 1; required int32 field1 = 1; required int32 field1 = 1; required int32 field1 = 1; } message StatusMessage { repeated SystemOneStatusDetails sysOneStatus = 1; repeated SystemTwoStatusDetails sysTwoStatus = 2; } ```` I am using IronPython 2 7 inside Visual Studio If anyone could lead me in the right direction I would greatly appreciate it Thanks Robert Hix
I have only used protocol buffers in Java, so I can only give high-level hints. Hopefully someone knows more about protocol buffers in Python. There will be a definition for each message, and inside each message each field is defined. It is all very much like the proto file. Here is the proto file I used: ````message SystemOneStatusDetails { required int32 field1 = 1; required int32 field2 = 2; required int32 field3 = 3; required int32 field4 = 4; } message SystemTwoStatusDetails { required int32 field1 = 1; required int32 field2 = 2; required int32 field3 = 3; required int32 field4 = 4; } message StatusMessage { repeated SystemOneStatusDetails sysOneStatus = 1; repeated SystemTwoStatusDetails sysTwoStatus = 2; } ```` Following is the Field-Definition (used in Java): <img src="http://i.stack.imgur.com/q6Z5r.png" alt="enter image description here"> <hr> You seem to be writing a general purpose utility. So for future reference, protocol messages can be a lot more complicated, e.g. messages can be defined in messages ````message outer { message inner { } required inner myInner = 1; } ```` plus you can also have <strong>extensions</strong> to messages like: ````message Message { extensions 100 to max; required uint64 A = 1; required string name = 2; } message Event { extensions 100 to max; required uint64 B = 1; required string eventName = 2; } message Note { required string text = 1; } extend Message { optional Event ext = 101; repeated Note notes = 103; } ````
Specify the connection_factory to SQLAlchemy's create_engine() I have a custom connection factory class (which inherits from `psycopg2 extensions connection`) that I would like SQLAlchemy to use From the `create_engine()` documentation <blockquote> **kwargs takes a wide variety of options which are routed towards their appropriate components Arguments may be specific to the Engine the underlying Dialect as well as the Pool Specific dialects also accept keyword arguments that are unique to that dialect </blockquote> When I try to specify a connection_factory parameter like this: `engine = create_engine(dsn engine_info() connection_factory=ConnectionEx)` I get this traceback: ````Traceback (most recent call last): File "foo py" line 8 in <module&gt; from user import test_user File "/vagrant/workspace/panel/panel/user py" line 18 in <module&gt; from panel helpers import intval File "/vagrant/workspace/panel/panel/__init__ py" line 51 in <module&gt; import panel views File "/vagrant/workspace/panel/panel/views py" line 13 in <module&gt; from panel api import api_functions File "/vagrant/workspace/panel/panel/api/api_functions py" line 27 in <module&gt; from panel targeting import SavedTargetSet File "/vagrant/workspace/panel/panel/targeting py" line 19 in <module&gt; from panel database import panelists_tbl us_cities_tbl income_buckets_tbl File "/vagrant/workspace/panel/panel/database py" line 39 in <module&gt; engine = create_engine(dsn engine_info() connection_factory=ConnectionEx) File "/home/vagrant/ virtualenvs/project/lib/python2 6/site-packages/sqlalchemy/engine/__init__ py" line 331 in create_engine return strategy create(*args **kwargs) File "/home/vagrant/ virtualenvs/project/lib/python2 6/site-packages/sqlalchemy/engine/strategies py" line 141 in create engineclass __name__)) TypeError: Invalid argument(s) 'connection_factory' sent to create_engine() using configuration PGDialect_psycopg2/QueuePool/Engine Please check that the keyword arguments are 
appropriate for this combination of components ````
When the documentation is talking about "appropriate components", it's referring to the components of the SQLAlchemy API rather than the various drivers. Since `connection_factory` is a parameter that needs to be sent to `connect()`, you should use the keyword `connect_args` in your call to `create_engine` (<a href="http://docs.sqlalchemy.org/en/latest/core/engines.html#sqlalchemy.create_engine" rel="nofollow">documentation</a>, also mentioned <a href="http://docs.sqlalchemy.org/en/latest/core/engines.html#custom-dbapi-connect-arguments" rel="nofollow">here</a>). Thus: ````engine = create_engine( dsn engine_info(), connect_args={'connection_factory': ConnectionEx}) ````
Why did universities have these gardens?
facilitated the academic study of plants
Fast way to split an int into bytes If I have an int that fits into 32 bits, what is the fastest way to split it up into four 8-bit values in python? My simple timing test suggests that bit masking and shifting is moderately faster than `divmod()`, but I am pretty sure I have not thought of everything ````>>> timeit.timeit("x=15774114513484005952; y1, x = divmod(x, 256); y2, x = divmod(x, 256); y3, y4 = divmod(x, 256)") 0.5113952939864248 >>> timeit.timeit("x=15774114513484005952; y1=x&255; x >>= 8; y2=x&255; x>>=8; y3=x&255; y4= x>>8") 0.41230630996869877 ```` Before you ask: this operation will be used a lot I am using python 3.4
If you are doing it a lot, the fastest approach is to create a specialized <a href="https://docs.python.org/3/library/struct.html#struct.Struct" rel="nofollow">`Struct` instance</a> and pre-bind the `pack` method: ````# Done once int_to_four_bytes = struct.Struct('<I').pack # Done many times (you need to mask here because your number is >32 bits) y1, y2, y3, y4 = int_to_four_bytes(x & 0xFFFFFFFF) ```` Using `struct.pack` directly would use a cached `Struct` object after the first use, but you would pay cache lookup costs to go from format string to cached `Struct` every time, which is suboptimal. By creating and prebinding the `pack` of a `Struct` object (which is implemented in C in CPython), you bypass all Python byte code execution beyond the actual function call and spend no time on cache lookups. On my machine this runs in about 205 ns, vs 267 ns for shift and mask (without reassigning `x`). An alternate approach (for more general, not `struct` compatible sizes) is using <a href="https://docs.python.org/3/library/stdtypes.html#int.to_bytes" rel="nofollow">`int.to_bytes`</a>; for example, in this case: ````y1, y2, y3, y4 = (x & 0xFFFFFFFF).to_bytes(4, 'big') ```` which takes about the same amount of time as the manual shifting and masking approach (it took 268 ns per loop), but scales to larger numbers of bytes better
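The pre-bound `Struct.pack` from the answer above, with a cross-check against the question's mask-and-shift version (Python 3):

```python
import struct

# Done once: '<I' is a little-endian unsigned 32-bit integer,
# so the first byte out is the low byte, matching the question's y1.
int_to_four_bytes = struct.Struct('<I').pack

x = 15774114513484005952
packed = int_to_four_bytes(x & 0xFFFFFFFF)  # low 32 bits as 4 bytes
y1, y2, y3, y4 = packed                     # iterating bytes yields ints in Python 3

# Same result as masking and shifting by hand:
assert (y1, y2, y3, y4) == (x & 255, (x >> 8) & 255, (x >> 16) & 255, (x >> 24) & 255)
print(y1, y2, y3, y4)
```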
What color flag did the Czech Republic adopt in 2004?
null
What word means using a single unaccompanied vocal melody line?
monophonic
Truncate a decimal value in Python I am trying to truncate a decimal value in Python. I do not want to round it, but instead just display the decimal values up to the specified accuracy. I tried the following: ````d = 0.989434 '{:.{prec}f}'.format(d, prec=2) ```` This rounds it to 0.99. But I actually want the output to be 0.98. Obviously `round()` is not an option. Is there any way to do this? Or should I go back to the code and change everything to `decimal`? Thanks
Also with math: ````d = 0.989434 x = int(d * 100.0) / 100.0 print "{0:0.2f}".format(x) ````
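The same idea wrapped in a small helper (the name `truncate` is just for illustration). Note that `int()` truncates toward zero, and binary floats can bite: `0.29 * 100` is `28.999999999999996`, so truncating 0.29 to two places yields 0.28 — the `decimal` module is the exact alternative when that matters:

```python
def truncate(value, places):
    """Drop (not round) the digits beyond `places` decimal places."""
    factor = 10 ** places
    return int(value * factor) / float(factor)

print(truncate(0.989434, 2))  # 0.98
print(truncate(0.999999, 2))  # 0.99
print(truncate(0.29, 2))      # 0.28 -- float representation artifact
```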
Django: Push app from local server to production server via FTP This is a bit embarrassing, but I am a Django noob and I could not find a simple solution to this: I have written a Django app in a local VM that I now want to deploy to a "production" server. App works like a charm locally. Now my IT colleague has set up the server with Django, and that also works fine. I can open it via the Web and I get the usual "Congratulations on your first Django-powered page". I can also log into the admin interface. The project has been created. This is a very low-key mini project and I am not too familiar with git, so we have decided to just push files via FTP. (And I want to stick with that if at all possible.) So I uploaded the app folder into the project folder and also adjusted the project's settings.py and urls.py. However, nothing seems to be happening on the server's end. The welcome page is the same, the app does not show up in the admin interface, and the URLs will not be resolved as hoped. Any suggestions what I should have done / done differently?
You need to restart apache, or whatever is running your django project. Your changes to .py files are cached when you first load your server config (settings). <blockquote> Any suggestions what I should have done / done differently? </blockquote> You should be using git/jenkins/deployment techniques. I know you said you have decided not to use them, but you are going to be missing out on important things like being able to keep track of changes and unit testing
What is BSAR-1 a strain of?
Sphingomonas sp.
How much content does the Community Tool Box offer?
more than 7,000 pages
ipython qtconsole on Windows printing error with no apparent side-effects When I launch ipython qtconsole on Windows (by punching `ipython qtconsole` into a CMD window) the following error gets printed quite frequently to CMD: ````QTextCursor::setPosition: Position '4022' out of range ```` Everything in the ipython qtconsole is working to the best of my knowledge So I am curious what does this error mean and is it indicative of any underlying problems?
This is an IPython <a href="https://github.com/ipython/ipython/issues/1392" rel="nofollow">bug</a> that has been known for a few years. However, it seems to be a <a href="https://bugreports.qt.io/browse/QTBUG-38178" rel="nofollow">Qt error</a> more than an IPython one. Neither has been fixed, and the issue seems to be very minor. So yes, there is an underlying problem, but it is not with your code
Working with a non-encodable mp4 tag name in utf-8 Python code For reasons that are not clear to me, some of the fields that mp4 files use as tag names contain non-printable characters, <a href="http://mutagen.readthedocs.org/en/latest/api/mp4.html" rel="nofollow">at least the way mutagen sees them</a>. The one that is causing me trouble is `'\xa9wrt'`, which is the tag name for the composer field (!?). If I run `'\xa9wrt'.encode('utf-8')` from a Python console I get ````UnicodeDecodeError: 'utf8' codec can't decode byte 0xa9 in position 0: invalid start byte ```` I am trying to access this value from a Python file that uses some future-proofing, including: ````# -*- coding: utf-8 -*- from __future__ import unicode_literals ```` I cannot even figure out how to enter the string `'\xa9wrt'` into my code file, since everything in that file is interpreted as utf-8, and the string I am interested in evidently cannot be written in utf-8. Also, when I get the string `'\xa9wrt'` into a variable (say from mutagen), it is hard to work with. For example, `"{}".format(the_variable)` fails because `"{}"` is interpreted as `u"{}"`, which once again tries to encode the string as utf-8. Just naively entering `'\xa9wrt'` gives me `u'\xa9wrt'`, which is not the same, and none of the other stuff I have tried has worked either: ````>>> u'\xa9wrt' == '\xa9wrt' False >>> str(u'\xa9wrt') '\xc2\xa9wrt' >>> str(u'\xa9wrt') == '\xa9wrt' False ```` Note this output is from the console, where it does seem that I can enter non-Unicode literals. I am using Spyder on Mac OS with `sys.version = 2.7.6 |Anaconda 1.8.0 (x86_64)| (default, Nov 11 2013, 10:49:09)\n[GCC 4.0.1 (Apple Inc. build 5493)]` How can I work with this string in a Unicode world? Is utf-8 incapable of doing so?
<strong>Update:</strong> Thank you @tsroten for the answer. It sharpened my understanding, but I am still unable to achieve the effect I am looking for. Here is a sharper form of the question: how could I reach the two lines with '??' on them without resorting to the kinds of tricks I am using? <em>Note that the `str` that I am working with is handed to me by a library</em>, I have to accept it as that type ````# -*- coding: utf-8 -*- from __future__ import unicode_literals tagname = 'a9777274'.decode('hex') # This value comes from a library as a str, not a unicode if u'\xa9wrt' == tagname: # ??: What test could I run that would get me here without resorting to writing my string in hex? print("You found the tag you are looking for!") else: print("Keep looking!") print(str("This will work: {}").format(tagname)) try: print("This will throw an exception: {}".format(tagname)) # ??: Can I reach this line without resorting to converting my format string to a str? except UnicodeDecodeError: print("Threw exception") ```` <strong>Update 2:</strong> I do not think that any of the strings that you (@tsroten) construct are equal to the one that I am getting from mutagen. That string still seems to cause problems: ````>>> u = u'\xa9wrt' >>> s = u.encode('utf-8') >>> s2 = '\xa9wrt' >>> s3 = 'a9777274'.decode('hex') >>> s2 == s False >>> s2 == s3 True >>> match_tag(s) We have a match! tagname == ©wrt Look! We printed tagname and no exception was raised >>> match_tag(s2) Traceback (most recent call last): UnicodeDecodeError: 'utf8' codec can't decode byte 0xa9 in position 0: invalid start byte ````
I have finally found a way to express the string in question in a utf-8 file with unicode_literals. I convert the string to hex and then back. Specifically, in the console (which is apparently not in unicode_literals mode) I run ````"".join(["{0:x}".format(ord(c)) for c in '\xa9wrt']) ```` and then in my source file I can create the string I want with ````'a9777274'.decode('hex') ```` But this cannot be the right way, can it? For one thing, if my console were running in full unicode, I do not know that I could enter the string `'\xa9wrt'` in the first place to get Python to tell me the hex sequence that represents the byte string
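For what it's worth, the same round-trip translates directly to Python 3's bytes API (shown here because `str.decode('hex')` is Python 2 only). The `©wrt` tag name is plain Latin-1, not UTF-8, which is why `encode('utf-8')`/`decode('utf-8')` choke on it:

```python
tag = b'\xa9wrt'            # the raw mutagen tag name as a byte string

hex_form = tag.hex()        # same digits as the Python 2 hex trick above
assert bytes.fromhex(hex_form) == tag

# The byte 0xa9 is the copyright sign in Latin-1; decoding that way works fine.
print(hex_form)
print(tag.decode('latin-1'))
```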
Opening multiple CSV files I am trying to open multiple excel files. My program throws the error message "FileNotFoundError". The file is present in the directory. Here is the code: ````import os import pandas as pd path = "C:\\GPA Calculations for CSM\\twentyfourteen" files = os.listdir(path) print (files) df = pd.DataFrame() for f in files: df = pd.read_excel(f, 'Internal', skiprows = 7) print ("file name is ", f) print (df.loc[0][1]) print (df.loc[1][1]) print (df.loc[2][1]) ```` Program gives error on `df = pd.read_excel(f, 'Internal', skiprows = 7)`. I opened the same file on another program (which opens a single file) and that worked fine. Any suggestions or advice would be highly appreciated
`os.listdir` lists the filenames <em>relative</em> to the directory (path) you are giving as argument. Thus you need to join the path and filename together to get the absolute path for each file. Thus in your loop: ````for filename in files: abspath = os.path.join(path, filename) <etc, replace f by abspath> ````
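A self-contained demonstration of the fix, using a temporary directory (and a made-up filename) so it can run anywhere:

```python
import os
import tempfile

# Create a scratch directory with one file in it.
tmpdir = tempfile.mkdtemp()
open(os.path.join(tmpdir, 'grades.xlsx'), 'w').close()

# os.listdir returns bare names; join them with the directory before use.
for filename in os.listdir(tmpdir):
    abspath = os.path.join(tmpdir, filename)
    print(abspath, os.path.isfile(abspath))  # the joined path resolves
```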
Django Multiple Foreign Key Model Here is my code, is there a more efficient way of writing it? I am not cool with it. Basically both Company and Supplier models should be able to have several contacts with several phone numbers ````class Contact(models.Model): company = models.ForeignKey(Company, related_name='contact', blank=True, null=True) supplier = models.ForeignKey(Supplier, related_name='contact', blank=True, null=True) name = models.CharFields(max_length=50, blank=True, null=True) class Phone(models.Model): contact = models.ForeignKey(Contato, related_name='phone') number = models.CharFields(max_length=50, blank=True, null=True) ````
There are at least four approaches to the "both companies of type X and companies of type Y have contacts" problem: - Two tables: Companies and Contacts There is an enumerated property on Companies which has values X and Y and every contact has one foreign key to a company - Three tables: one table X for X-companies another table Y for Y-companies and one table for contacts C where C has foreign keys to both X and Y The foreign keys can be nullable - Four tables: X Y Cx and Cy tracking the two different contacts for the two different sorts of companies separately (So Cx has a foreign key to X and Cy has a foreign key to Y) - Five tables: you start with these three tables X Y and C but instead of adding nullable pointers to C you add two many-to-many joining tables XC and YC These have different demands of the underlying data You are right now using the three-table solution `(X Y C) = (Company Supplier Contact)` This is great if some contacts will be shared between companies and suppliers so that you sometimes need to ask "who is the contact who sits between this company and that supplier?" I maintain a database where the two-table solution is used and when it was initially adopted it was a good solution (why duplicate all of that logic about addresses and contacts when we do not have to?) 
but today it seems clunky (because the "Company" table contains fields which only make sense for X's and Y's separately). Probably the easiest to deal with in my case, if we migrated, would be the four-table solution: keep contacts for X-type companies totally separate from contacts for Y-type companies. If you start with your current approach, then the five-table solution would be the obvious generalization if you face similar growing pains in your application. As for tracking phone numbers, you have some limited options: - Store a bunch of columns in the Contacts table, one for each separate phone number. This gets ugly real fast, but it is the quick-and-easy way to do it. This is called "denormalized" data. - Store JSON in a text field in the Contacts table. Phone numbers are unlikely to be searched over much; it is just not very common to say "I have this number, who does it belong to?", so you can easily denormalize. This also lets you do things like `{"mon thru thurs": 12025551234, "fri sat": 12025554321}`, storing simple custom annotations for the numbers. - Create a phone table like you have done now. This is the most general way to do this, and if you need those sorts of annotations you can add another text field to that table. If you mix option 3 here with option 3 above (four tables plus an explicit phone table), then you will probably want to have separate phone tables as well as separate contact tables: Px and Py, each with a foreign key to Cx and Cy
Python application not recognizing updates made in python file I have written python code in file.py, which is in the directory containing the python application. Python version is Python 2.7.4. Platform is Windows 7. I imported file.py into the python application and made changes to file.py while keeping the python application window open. But the changes are not reflected. Every time changes are made in file.py, I have to close the python application and import file.py again for the changes to be reflected. Is there a way to solve this problem?
Yes, call `reload(file)` in your application code every time after you change something. It will reflect the change in your application code
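A sketch of that workflow using Python 3's `importlib.reload` (the bare builtin `reload` is Python 2 only, matching the question's 2.7.4; the module name `mymod` is made up for the demo):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # avoid stale .pyc caching in this demo
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)

# First version of the module on disk.
with open(os.path.join(tmpdir, 'mymod.py'), 'w') as f:
    f.write('VALUE = 1\n')

import mymod
print(mymod.VALUE)                      # 1

# Edit the source file, as you would in your editor.
with open(os.path.join(tmpdir, 'mymod.py'), 'w') as f:
    f.write('VALUE = 2  # edited\n')

importlib.reload(mymod)                 # pick up the change without restarting
print(mymod.VALUE)                      # 2
```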
Hydroelectricity accounts for what percentage of global electricity generation?
Hydroelectricity
How do you reload your Python source into the console window in Eclipse/Pydev? In other Python IDEs (PythonWin and Idle) it is possible to hit a key and have your current source file window reloaded into the console. I find this useful when experimenting with a piece of code; you can call functions from the console interactively and inspect data structures there. Is there a way to do this with Eclipse/Pydev? So far I have been making do with this hack in my source file: ````def relo(): execfile("/Path/To/Source.py", __builtins__) ```` I call `relo()` in the console after I save changes to the source. But I would much rather just tap a key. I am using pydev 1.4.7.2843. This is somewhat related to <a href="http://stackoverflow.com/questions/323581/eclipse-pydev-is-it-possible-to-assign-a-shortcut-to-send-selection-to-the-pyt">this</a> question, but I want to just reload the whole source file
Use the revert option on the File menu. You can bind a key to it in Windows > Preferences > General > Keys. Edit: The reload(module) function will update packages in the interactive console. It is built in for python 2.x and in the imp module for 3.x. Python docs link: <a href="http://docs.python.org/3.1/library/imp.html?#imp.reload" rel="nofollow">http://docs.python.org/3.1/library/imp.html?#imp.reload</a> Couldn't find a way to run it by hotkey. I would like to know if you find a way
All taxis in Delhi were ordered to switch to what type of fuel by March 1, 2016?
compressed natural gas
How do I check a string using python against some specific ABNF rules? I need to check if a string is in conformance with these rules: <a href="http://www.w3.org/TR/widgets/#zip-rel-path" rel="nofollow">http://www.w3.org/TR/widgets/#zip-rel-path</a> ````Zip-rel-path = [locale-folder] *folder-name file-name / [locale-folder] 1*folder-name locale-folder = %x6C %x6F %x63 %x61 %x6C %x65 %x73 "/" lang-tag "/" folder-name = file-name "/" file-name = 1*allowed-char allowed-char = safe-char / zip-UTF8-char zip-UTF8-char = UTF8-2 / UTF8-3 / UTF8-4 safe-char = ALPHA / DIGIT / SP / "$" / "%" / "'" / "-" / "_" / "@" / "~" / "(" / ")" / "&" / "+" / "," / "=" / "[" / "]" / "." UTF8-2 = %xC2-DF UTF8-tail UTF8-3 = %xE0 %xA0-BF UTF8-tail / %xE1-EC 2( UTF8-tail ) / %xED %x80-9F UTF8-tail / %xEE-EF 2( UTF8-tail ) UTF8-4 = %xF0 %x90-BF 2( UTF8-tail ) / %xF1-F3 3( UTF8-tail ) / %xF4 %x80-8F 2( UTF8-tail ) UTF8-tail = %x80-BF lang-tag = primary-subtag *( "-" subtag ) primary-subtag = 1*8low-alpha subtag = 1*8(alphanum) alphanum = low-alpha / DIGIT low-alpha = %x61-7a ```` A code example exactly on the rules above would help; I am not familiar with ABNF. I do not need a way to parse the ABNF, I just need the above rules translated manually, by someone who is used to or understands ABNF, into python code with regular expressions or any other way. Practically, just input a string and verify it against the above mentioned rules, eventually as a function that takes a string and returns true or false depending on whether the rules are matched or not. So to put it in the form of a question: How would this be implemented in python?
I see from the UTF-8 documentation that much of the rules above just check whether the string is UTF-8: <a href="http://tools.ietf.org/html/rfc3629" rel="nofollow">http://tools.ietf.org/html/rfc3629</a> ````UTF8-char = UTF8-1 / UTF8-2 / UTF8-3 / UTF8-4 UTF8-1 = %x00-7F UTF8-2 = %xC2-DF UTF8-tail UTF8-3 = %xE0 %xA0-BF UTF8-tail / %xE1-EC 2( UTF8-tail ) / %xED %x80-9F UTF8-tail / %xEE-EF 2( UTF8-tail ) UTF8-4 = %xF0 %x90-BF 2( UTF8-tail ) / %xF1-F3 3( UTF8-tail ) / %xF4 %x80-8F 2( UTF8-tail ) UTF8-tail = %x80-BF ````
You should probably try <a href="http://pyparsing wikispaces com/" rel="nofollow">pyparsing</a> Here is a <a href="http://pyparsing wikispaces com/file/view/ebnf py" rel="nofollow">quick example</a> from pyparsing website which you can easily modify to suit your purpose
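If pulling in pyparsing is overkill, a partial hand translation into a regular expression is possible. This hedged sketch covers only the ASCII `safe-char` / `file-name` / `folder-name` productions of the grammar in the question (no locale-folder, no multibyte UTF-8 ranges), so it under-accepts relative to the full spec:

```python
import re

# safe-char: ALPHA / DIGIT / SP plus the punctuation listed in the grammar.
SAFE_CHAR = r"[A-Za-z0-9 $%'\-_@~()&+,=\[\].]"
FILE_NAME = SAFE_CHAR + "+"
# Zip-rel-path (simplified): zero or more folder-names, then a file-name.
ZIP_REL_PATH = re.compile("^(?:{0}/)*{0}$".format(FILE_NAME))

def is_zip_rel_path(s):
    return ZIP_REL_PATH.match(s) is not None

print(is_zip_rel_path("icons/icon.png"))  # True
print(is_zip_rel_path("bad|name"))        # False: '|' is not a safe-char
```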
3D pcolor/pcolormesh plot in matplotlib <h3>1. My attempt</h3> - A 2-d numpy array represents some value of an area, showing like this: <img src="http://i8.tietuku.com/90d045a6c2375474.png" alt=""> And I want to plot the value on the Z-axis. For example, if I want to plot the altitude of this area using a 3d pcolor plot, I can get a figure just like the real terrain. <h3>2. My result</h3> For now I can plot a 3-d plot of this area using <em>Code</em> ````import mpl_toolkits.mplot3d.axes3d as axes3d fig, ax = plt.subplots(subplot_kw=dict(projection='3d'), figsize=(12, 8)) ax.plot_surface(xx, yy, value, rstride=1, cstride=1, \ cmap=plt.cm.Spectral_r, linewidth=0) ```` <em>Result</em> <img src="http://i4.tietuku.com/b2e650d0d23d5756.png" alt=""> Can someone offer some advice on how to plot 3-d bars based on each grid value (like a 3-d histogram)? <h3>Update</h3> A little question added here: how to change the xticks and yticks in 3-d axes?
Is something like this <a href="http://matplotlib org/examples/mplot3d/bars3d_demo html" rel="nofollow">http://matplotlib org/examples/mplot3d/bars3d_demo html</a> what you are looking for?
What did Baker say Schwarzenegger didn't understand when she first met him?
polite society
What 2003 movie featuring Julia Roberts was filmed in New Haven?
Mona Lisa Smile
extract data between html tags using BeautifulSoup in python I want to extract the data between the html tag 'title' and in the 'meta' tag I want to extract value of URL attribute and that too the text just before the '?' ````<html lang="en" id="facebook" class="no_js"&gt; <head&gt; <meta charset="utf-8" /&gt; <script&gt; function envFlush(a) {function b(c){for(var d in)c[d]=a[d];}if(window requireLazy){window requireLazy(['Env'] b);}else{window Env=window Env||{};b(window Env);}}envFlush({"ajaxpipe_token":"AXjbmsNXDxPlvhrf" "lhsh":"4AQFQfqrV" "khsh":"0`sj`e`rm`s-0fdu^gshdoer-0gc^eurf-3gc^eurf;1;enbtldou;fduDmdldourCxO`ld-2YLMIuuqSdptdru;qsnunuxqd;rdoe"}); </script&gt; <script&gt;CavalryLogger=false;</script&gt; <noscript&gt; <meta http-equiv="refresh" content="0; URL=/notes/kursus-belajar-bahasa-inggris/bahasa-inggris-siapa-takut-/685004288208871?_fb_noscript=1" /&gt; </noscript&gt; <meta name="referrer" content="default" id="meta_referrer" /&gt; <title id="pageTitle"&gt; &amp;quot; CARA CEPAT BELAJAR BAHASA INGGRIS MUDAH DAN MENYENANGKAN &amp;quot; </title&gt; <link rel="shortcut icon" href="https://fbstatic-a akamaihd net/rsrc php/yl/r/H3nktOa7ZMg ico" /&gt; ```` i e CARA CEPAT BELAJAR BAHASA INGGRIS MUDAH DAN MENYENANGKAN and 685004288208871 I tried the following code : ````&gt;&gt;&gt; soup title contents ```` output is ````[you'" CARA CEPAT BELAJAR BAHASA INGGRIS MUDAH DAN MENYENANGKAN "'] ```` In this I do not want the characters '[]' 'you' and single quotes Also on implementing the following : ````&gt;&gt;&gt; soup meta contents ```` I get the o/p as : ````[] ```` Please help me at this I am new to BeautifulSoup
The <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#contents-and-children" rel="nofollow">`.contents`</a> attribute of Beautiful Soup objects returns a list In this case it has only one element which is a Unicode string You should find that the expression you want is actually ````&gt;&gt;&gt; soup.title.contents[0] ```` Note that the single quotes only appear because you are asking the interactive interpreter to display a string value You will find that ````&gt;&gt;&gt; print(soup.title.contents[0]) ```` displays ````" CARA CEPAT BELAJAR BAHASA INGGRIS MUDAH DAN MENYENANGKAN " ```` and that is actually the contents of the title tag You will observe that Beautiful Soup has converted the `&amp;quot;` HTML entities into the required double-quote characters To lose the quotes and adjacent spaces you can use ````soup.title.contents[0][2:-2] ```` The meta tag is a little trickier I make the assumption that there is only one `<meta&gt;` tag with an `http-equiv` attribute whose value is `"refresh"` so the retrieval returns a list of one element You retrieve that element like so: ````&gt;&gt;&gt; meta = soup.findAll("meta", {"http-equiv": "refresh"})[0] &gt;&gt;&gt; meta <meta content="0; URL=/notes/kursus-belajar-bahasa-inggris/bahasa-inggris-siapa-takut-/685004288208871?_fb_noscript=1" http-equiv="refresh"/&gt; ```` Note by the way that meta is not a string but a soup element: ````&gt;&gt;&gt; type(meta) <class 'bs4.element.Tag'&gt; ```` You can retrieve attributes of a soup element using indexing just like Python dicts so you can get the value of the `content` attribute as follows: ````&gt;&gt;&gt; content = meta["content"] &gt;&gt;&gt; content u'0; URL=/notes/kursus-belajar-bahasa-inggris/bahasa-inggris-siapa-takut-/685004288208871?_fb_noscript=1' ```` In order to extract the URL value you <them>could</them> just look for the first equals sign and take the rest of the string I prefer to use a rather more disciplined approach splitting at the semicolon and
then splitting the right-hand element of that split on (only one) equals sign ````&gt;&gt;&gt; url = content.split(";")[1].split("=", 1)[1] &gt;&gt;&gt; url u'/notes/kursus-belajar-bahasa-inggris/bahasa-inggris-siapa-takut-/685004288208871?_fb_noscript=1' ````
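The final split chain can be checked in isolation without Beautiful Soup at all; this is just the string handling from the answer above applied to the raw attribute value, plus one extra step to pull out the numeric id asked about in the question:

```python
content = ("0; URL=/notes/kursus-belajar-bahasa-inggris/"
           "bahasa-inggris-siapa-takut-/685004288208871?_fb_noscript=1")

# Split at the semicolon, take the "URL=..." part, then split on the
# first "=" only (the query string may contain further "=" signs).
url = content.split(";")[1].split("=", 1)[1]

# The id requested in the question is the path segment before the "?".
note_id = url.split("?")[0].rsplit("/", 1)[1]
```

Printing `note_id` gives the text just before the "?" that the question asks for.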
Issues to split the string in Python I would like to split the searched string so that I can search that string in a file and get output ````#!/usr/bin/python f = open('t2.txt') for line in f: if '2613' in line: words = line.split("/dev") print words ```` output of this script is: `['', '/dm-40\t\t\t3402613000\n']` I would like to get `dm-40` from this list and then search it again in a file
````#!/usr/bin/python f = open('t2.txt') for line in f: if '2613' in line: words = line.split()[0].replace('/dev/', '') print words ````
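A minimal, self-contained version of that extraction (the sample line mimics the one printed in the question; the helper name is mine):

```python
def device_from_line(line):
    """Extract the device name (e.g. 'dm-40') from a line such as
    '/dev/dm-40\t\t\t3402613000'."""
    # The first whitespace-separated field is the path;
    # strip the '/dev/' prefix to get the bare device name.
    return line.split()[0].replace('/dev/', '')

sample = "/dev/dm-40\t\t\t3402613000\n"
```

`device_from_line(sample)` returns `'dm-40'`, which can then be searched for in the second file.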
Pandas pivot and create extra columns for duplicates So I have some data with duplicate indexes and what I want as columns example ````df = pd.DataFrame({ "id": [1, 1, 1, 2, 2, 3, 3, 3], "contact_type": ["email", "phone", "phone", "email", "mobile", "email", "phone", "mobile"], "contact": ["a@a.ca", "123", "456", "b@b.com", "78432", "c@c.ca", "12", "12"] }) ```` what I am trying to do is make it so each ID is a single row My ideal output would be ````ID email phone phone 1 mobile 1 a@a.ca 123 456 NaN 2 b@b.com NaN NaN 78432 3 c@c.ca 12 NaN 12 ```` Trying to use df.pivot("id", "contact_type", "contact") gives me an error "Index contains duplicate entries cannot reshape" Problem is it does not seem to like that ID 1 has 2 phones in contact_type So is there another way I can go to get the data into this format?
I think you have to assemble the final dataframe piece by piece (`pd.concat`) as you do not know in advance how many different phone numbers an ID may have at the most Assuming each ID has at most only 1 email or mobile number: ````In [130]: df_mail = df.ix[df.contact_type=='email', ['contact', 'id']].set_index('id') In [131]: df_mobile = df.ix[df.contact_type=='mobile', ['contact', 'id']].set_index('id') In [132]: df_phone = df.ix[df.contact_type=='phone', ['contact', 'id']] In [133]: # make a column that stores 'Phone0', 'Phone1' and so on: df_phone['field'] = 'Phone' + df_phone.groupby('id').transform(lambda x: range(len(x))).contact.map(str) In [134]: df_phone = df_phone.pivot('id', 'field', 'contact') In [135]: df_mail.columns = ['Email']; df_mobile.columns = ['Mobile'] In [136]: print pd.concat((df_mail, df_phone, df_mobile), axis=1) Email Phone0 Phone1 Mobile id 1 a@a.ca 123 456 NaN 2 b@b.com NaN NaN 78432 3 c@c.ca 12 NaN 12 ````
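The key trick in the answer is numbering repeated contact types per id so that each occurrence gets its own column. The same bookkeeping can be sketched without pandas using plain dicts (the suffixed column names such as `phone1` are my own convention, not pandas output):

```python
from collections import defaultdict

rows = [
    (1, "email", "a@a.ca"), (1, "phone", "123"), (1, "phone", "456"),
    (2, "email", "b@b.com"), (2, "mobile", "78432"),
    (3, "email", "c@c.ca"), (3, "phone", "12"), (3, "mobile", "12"),
]

table = defaultdict(dict)   # id -> {column_name: value}
seen = defaultdict(int)     # (id, contact_type) -> occurrences so far

for id_, ctype, value in rows:
    n = seen[(id_, ctype)]
    seen[(id_, ctype)] += 1
    # First occurrence keeps the bare name, duplicates get a numeric suffix.
    column = ctype if n == 0 else "%s%d" % (ctype, n)
    table[id_][column] = value
```

After the loop, `table[1]` holds one row per id with `phone` and `phone1` as separate keys, which is exactly the shape the pivot needs.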
VIM comment hashtag error? I thought that text editors like vim took the # to indicate a comment and pass over those lines Here is the content of my ~/ vimrc file: ````# Testing comments set nocompatible # Testing comments set number ```` But when I run vim I get this: Error detected while processing /home/user/ vimrc: ````line 1: E488: Trailing characters: # Testing comments line 4: E488: Trailing characters: # Testing comments ```` When I remove the comment lines it works fine but I should like to be able to add comments to my text files without it fouling things up Does anyone know what I am doing wrong? Also I read that I should not use """ as this is not really commenting That and it is also for blocks
In a vimrc comments are written with a double quote (") not # Your file would look like ````" Testing comments set nocompatible " Testing comments set number ````
generate jwt when signing in with allauth How would you generate a token with <a href="https://github com/GetBlimp/django-rest-framework-jwt">django-rest-framework-jwt</a> and pass it to a template that can store the token in localstorage when signing in with allauth? I know django-rest-framework-jwt let us you generate tokens via POST: ````$ curl -X POST -d "username=admin&amp;password=abc123" http://localhost:8000/api-token-auth/ ```` But how would you implement this in the login/signup flow of allauth?
(I have not used JWT but I do not believe there is anything special about JWT compared to regular tokens other than the extra security and more importantly not having to keep a database table of tokens So my answer is for regular tokens assuming/hoping you can adjust to JWT) I am assuming you are trying to write a stand-alone client in which case the problem is that django-allauth is not really intended for use with clients/APIs so a lot of the magic cannot be used through an API See this somewhat old issue which I believe is still valid: <a href="https://github.com/pennersr/django-allauth/issues/360" rel="nofollow">3rd party REST/JSON APIs</a> If you scroll to the end you will see somebody recommending the use of <a href="https://github.com/Tivix/django-rest-auth" rel="nofollow">django-rest-auth</a> to handle the social login for the API while keeping the main django-allauth handling the native django web site side of things I have not yet used them both together (I am currently not supporting social login on the API side so have not had to deal with it) <a href="https://thinkster.io/django-angularjs-tutorial" rel="nofollow">This post</a> shows an excellent example for developing an Angular client using django-rest-framework You will see how it creates its own APIs for registering and logging in You should be able to replace that part with django-rest-auth but the point is that django-allauth will not really play a big role on anything that comes via the API (unfortunately) Finally you may also want to check my own implementation <a href="https://github.com/dkarchmer/aws-eb-docker-django" rel="nofollow">here</a> Look at the 'authentication' app and look at the tests for how it is used which is my version of link <a href="https://thinkster.io/django-angularjs-tutorial" rel="nofollow">3</a>
Importing matplotlib on Ubuntu So I downloaded and installed matplotlib The weird things is that I can run the examples fine when they were placed in home/user/Desktop but when I moved them to home/user/Documents they stopped working and I get the below message Is there something special about the Documents folder that they prevent matplotlib from importing? <pre class="lang-none prettyprint-override">`Traceback (most recent call last): File "contour_manual py" line 4 in <module&gt; import matplotlib pyplot as plt File "/usr/local/lib/python2 7/dist-packages/matplotlib/pyplot py" line 23 in <module&gt; from matplotlib figure import Figure figaspect File "/usr/local/lib/python2 7/dist-packages/matplotlib/figure py" line 18 in <module&gt; from axes import Axes SubplotBase subplot_class_factory File "/usr/local/lib/python2 7/dist-packages/matplotlib/axes py" line 8454 in <module&gt; Subplot = subplot_class_factory() File "/usr/local/lib/python2 7/dist-packages/matplotlib/axes py" line 8446 in subplot_class_factory new_class = new classobj("%sSubplot" % (axes_class __name__) AttributeError: 'module' object has no attribute 'classobj' ````
Do you have a file `new.py` in your `Documents` folder by any chance? If you have try renaming it to something else The matplotlib module `axes.py` imports `new` and if you have a file `new.py` lying around in your Documents folder that will cause Python to load it instead of the built-in `new` module
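A quick way to confirm this kind of shadowing is to look for a same-named .py file in the directory you run your script from. The helper below is my own, not a standard API:

```python
import os

def shadowing_file(module_name, directory):
    """Return the path of a local file that would shadow an importable
    module of the same name, or None if there is no such file."""
    candidate = os.path.join(directory, module_name + ".py")
    return candidate if os.path.isfile(candidate) else None
```

In the question's setup, `shadowing_file("new", os.path.expanduser("~/Documents"))` would have pointed straight at the offending file.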
How to send data from a python file to an html file that are in separate projects via a url (not having html as a template) How do I send data from a python file to an html file (that has js code) when they are in separate projects via a url (not having the html as a template) and vice versa? I think I need to use an AJAX function and a json object but I do not know how
You will want to encode your data as JSON (or something else but JSON is the easiest) within your Python function then make that Python function visible to the web via something like <a href="http://flask.pocoo.org/" rel="nofollow">Flask</a> and as you correctly surmised pull the data with AJAX in javascript on the web app's side In the end it will look something like this ````import json from flask import Flask app = Flask(__name__) @app.route("/get-data/") def get_data(): data = my_function() data = json.dumps(data) return data if __name__ == "__main__": app.run() ```` Where `my_function()` is the current function you have written that returns the data needed Your web app can then get the data at `http://your-machines-web-address.com/get-data/`
Who formed a university in Germany?
null
Why does the regex not capture the initial word? Python Why does my regex pattern not capture the word before the preposition? My regex pattern is trying to capture Proper Nouns that have prepositions after them For instance: • Academy of Management --> Academy of • McGraw Hill Foundation of Books --> Foundation of For the following text: <blockquote> 'The Academy of Management Entrepreneurship Division and McGraw Hill present the annual award to individuals who develop and implement an innovation in entrepreneurship pedagogy for either graduate or undergraduate education ' </blockquote> ````pp = r'[A-Z][A-Za-z]+\s+\b(for|of|in|by)\b(?=\s+[A-Z][A-Za-z]+)' x2 = re.findall(pp, test) ```` `x2` outputs: 'of' Why does it not output 'Academy of'?
Just put a capture group for the word before the preposition: `pp = r'([A-Z][A-Za-z]+)\s+\b(for|of|in|by)\b(?=\s+[A-Z][A-Za-z]+)'` Or if you want to capture the whole word/preposition string: `pp = r'([A-Z][A-Za-z]+\s+\b(?:for|of|in|by))\b(?=\s+[A-Z][A-Za-z]+)'`
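A runnable check of both variants against the question's sentence. Note that `re.findall` returns tuples when the pattern has more than one group, and plain strings when it has exactly one:

```python
import re

test = ("The Academy of Management Entrepreneurship Division and "
        "McGraw Hill present the annual award to individuals.")

# Two groups: findall yields (word, preposition) tuples.
pp = r'([A-Z][A-Za-z]+)\s+\b(for|of|in|by)\b(?=\s+[A-Z][A-Za-z]+)'
pairs = re.findall(pp, test)

# One group around the whole phrase, preposition made non-capturing:
# findall then yields the full "Academy of" strings directly.
pp2 = r'([A-Z][A-Za-z]+\s+\b(?:for|of|in|by))\b(?=\s+[A-Z][A-Za-z]+)'
phrases = re.findall(pp2, test)
```

The lookahead `(?=\s+[A-Z][A-Za-z]+)` ensures the preposition is followed by another capitalised word, so "of Management" qualifies but a lowercase continuation would not.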
Near what Oklahoma city is the Spindletop oil field located?
null
What factors limit performance of servo motors?
winding inductance and rotor inertia
Adding a cms plugin to a placeholder from code I am trying to add a plugin to a PlaceholderField from code I have a model (Question) with a few fields one of them is a PlaceholderField What I want to do is adding a TextPugin (or any other generic cms_plugin) to that Placeholder Field This is needed as I do not want people to add the TextPlugin manually from the frontend edit mode of the cms but rather creating it myself so they can just add the right content after I know there is add_plugin from cms api but still I would need to figure out a way to convert the PlaceholderField to Placeholder for it to work This is the code I have right now <strong>models py</strong> ````from django utils translation import ugettext as _ from django db import models from djangocms_text_ckeditor cms_plugins import TextPlugin from cms models fields import PlaceholderField from cms api import add_plugin class Question(models Model): topic = models ForeignKey('Topic') question = models CharField(_("Question") max_length=256) answer = PlaceholderField ('Answer plugin') priorityOrder = models IntegerField(_("Priority Order")) def save(self *args **kwargs): # Here is the critical point: I can cast self answer to PlaceholderField # but I cannot cast it to a Placeholder or add a placeholder to it add_plugin( ???? plugin_type='TextPlugin' language='us' ) super(Question self) save(*args **kwargs) # set the correct name of a django model object in the admin site def __unicode__(self): return self question class Topic(models Model): title = models CharField(_("Topic title") max_length=256) priorityOrder = models IntegerField(_("Priority Order")) # set the correct name of a django model object in the admin site def __unicode__(self): return self title ```` Any help (including alternative ways of doing this) is really welcome!
A `PlaceholderField` is nothing but a `ForeignKey` that auto-creates the relation to a new `Placeholder` object when a new instance is created As a result you cannot use `add_plugin` on a `PlaceholderField` of an unsaved instance You need to call `super().save()` <them>first</them> then call `add_plugin(self.answer, ...)`
Viewing object attributes with tab completion in sublimerepl python In the IDLE interpreter in Python you can see a drop-down list of an object's attributes by typing the object's name then period then hitting TAB Is it possible to get similar functionality with sublimerepl? I have tried the different autocomplete packages but they do not appear to make this happen
So there are many different packages Andy's package being my favorite None provide every method for a given function but try hitting "ctrl-space" after the period to see what is available! Link: <a href="https://sublime.wbond.net/packages/AndyPython" rel="nofollow">https://sublime.wbond.net/packages/AndyPython</a>
When was STOP.THINK.CONNECT created?
null
Finding which rows have all elements as zeros in a matrix with numpy I have a large `numpy` matrix `M` Some of the rows of the matrix have all of their elements as zero and I need to get the indices of those rows The naive approach I am considering is to loop through each row in the matrix and then check each elements However I think there is a better and a faster approach to accomplish this using `numpy` I hope you can help!
Here is one way I assume numpy has been imported using `import numpy as np` ````In [20]: a Out[20]: array([[0, 1, 0], [1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0]]) In [21]: np.where(~a.any(axis=1))[0] Out[21]: array([2, 4]) ```` It is a slight variation of this answer: <a href="http://stackoverflow.com/questions/16092557/how-to-check-that-a-matrix-contains-a-zero-column/16092714#16092714">How to check that a matrix contains a zero column?</a> Here is what is going on: The `any` method returns True if any value in the array is "truthy" Nonzero numbers are considered True and 0 is considered False By using the argument `axis=1` the method is applied to each row For the example `a` we have: ````In [32]: a.any(axis=1) Out[32]: array([ True, True, False, True, False], dtype=bool) ```` So each value indicates whether the corresponding row contains a nonzero value The `~` operator is the binary "not" or complement: ````In [33]: ~a.any(axis=1) Out[33]: array([False, False, True, False, True], dtype=bool) ```` (An alternative expression that gives the same result is `(a == 0).all(axis=1)` ) To get the row indices we use the `where` function It returns the indices where its argument is True: ````In [34]: np.where(~a.any(axis=1)) Out[34]: (array([2, 4]),) ```` Note that `where` returned a tuple containing a single array `where` works for n-dimensional arrays so it always returns a tuple We want the single array in that tuple ````In [35]: np.where(~a.any(axis=1))[0] Out[35]: array([2, 4]) ````
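The same logic works without numpy when the matrix is a plain list of lists, since `any(row)` is False exactly when every element of the row is zero (the function name here is mine):

```python
def zero_row_indices(matrix):
    """Return the indices of rows whose elements are all zero."""
    # any(row) is False only when the whole row is zeros (or empty).
    return [i for i, row in enumerate(matrix) if not any(row)]

a = [[0, 1, 0],
     [1, 0, 1],
     [0, 0, 0],
     [1, 1, 0],
     [0, 0, 0]]
```

`zero_row_indices(a)` returns `[2, 4]`, matching the numpy result above; the vectorised numpy version will of course be much faster on a large matrix.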
The problem of the Middle East was published by who?
Thomas Edward Gordon
Sending email attachment python <blockquote> I am trying to send an email through python but I am getting an unexpected unindent error on message addpayload(part) I am getting the error on the same message attach(part) when I copy paste other peoples code as well </blockquote> ````def sendemail(logfile password='somepassword'): # Initialize email sender/receiver/servers email_subject = logfile email_receiver = 'email@gmail com' email_sender = 'someemail@gmail com' gmail_smtp = 'smtp gmail com' gmail_smtp_port = 587 text_subtype = 'plain' filepath = os path abspath(logfile) # Create the message message = MIMEMultipart() message['From'] = email_sender message['To'] = email_receiver message['Date'] = formatdate(localtime=True) message['Subject'] = str(email_subject) part = MIMEBase('application' 'octet-stream') part set_payload( open(logfile 'rb') read() ) Encoders encode_base64(part) part add_header('Content-Disposition' 'attachment' filename=filepath) # Attach file to message message add_payload(part) # try: server_gmail = smtplib SMTP(gmail_smtp gmail_smtp_port) # Identify self to gmail server server_gmail ehlo() # Put SMTP connection in TLS mode and call ehlo again server_gmail starttls() #server_gmail ehlo() # Login to service server_gmail login(email_sender password) # Send email server_gmail sendmail(email_sender email_receiver message as_string()) # Close connection server_gmail close() print("mail sent") # except: # print("failed to send mail") sendemail('logtest csv' 'somepassword') ````
You are mixing tabs and spaces in your indentation This confuses Python <a href="http://i.stack.imgur.com/zhuEM.png" rel="nofollow"><img src="http://i.stack.imgur.com/zhuEM.png" alt="enter image description here"></a> Only use one or the other not both Spaces are preferable
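A small helper can flag exactly which lines mix tabs and spaces in their leading whitespace, which is usually the fastest way to locate the offender in a script like the one above (the helper name is my own):

```python
def mixed_indent_lines(source):
    """Return 1-based line numbers whose leading whitespace
    contains both tabs and spaces."""
    bad = []
    for n, line in enumerate(source.splitlines(), 1):
        # Leading whitespace is the prefix removed by lstrip().
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent and " " in indent:
            bad.append(n)
    return bad
```

Run it on `open("yourscript.py").read()` and fix the reported lines by converting them to spaces.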
POST request via requests (python) not returning data I have another question about posts This post should be almost identical to one referenced on stack overflow using this question 'Using request post to post multipart form data via python not working' but for some reason I cannot get it to work The website is <a href="http://www camp bicnirrh res in/predict/" rel="nofollow">http://www camp bicnirrh res in/predict/</a> I want to post a file that is already in the FASTA format to this website and select the 'SVM' option using requests in python This is based on what @NorthCat gave me previously which worked like a charm: ````import requests import urllib file={'file':(open('Bishop/newdenovo2 txt' 'r') read())} url = 'http://www camp bicnirrh res in/predict/hii php' payload = {"algo[]":"svm"} raw = urllib urlencode(payload) response = session post(url files=file data=payload) print(response text) ```` Since it is not working I assumed the payload was the problem I have been playing with the payload but I cannot get any of these to work ````payload = {'S1':str(data) 'filename':'' 'algo[]':'svm'} # where I tried just reading the file in called 'data' payload = {'svm':'svm'} # not actually in the headers but I tried this too) payload = {'S1': '' 'algo[]':'svm' 'B1': 'Submit'} ```` None of these payloads resulted in data Any help is appreciated Thanks so much!
You need to set the file post variable name to "userfile" i e ````file={'userfile':(open('Bishop/newdenovo2 txt' 'r') read())} ```` Note that the `read()` is unnecessary but it does not prevent the file upload succeeding Here is some code that should work for you: ````import requests session = requests session() response = session post('http://www camp bicnirrh res in/predict/hii php' files={'userfile': ('fasta txt' open('fasta txt') 'text/plain')} data={'algo[]':'svm'}) ```` `response text` contains the HTML results save it to a file and view it in your browser or parse it with something like <a href="http://www crummy com/software/BeautifulSoup/" rel="nofollow">Beautiful Soup</a> and extract the results In the request I have specified a mime type of "text/plain" for the file This is not necessary but it serves as documentation and might help the receiving server The content of my `fasta txt` file is: ````&gt;24 6jsd2 Tut GGTGTTGATCATGGCTCAGGACAAACGCTGGCGGCGTGCTTAATACATGCAAGTCGAACGGGCTACCTTCGGGTAGCTAGTGGCGGACGGGTGAGTAACACGTAGGTTTTCTGCCCAATAGTGGGGAATAACAGCTCGAAAGAGTTGCTAATACCGCATAAGCTCTCTTGCGTGGGCAGGAGAGGAAACCCCAGGAGCAATTCTGGGGGCTATAGGAGGAGCCTGCGGCGGATTAGCTAGATGGTGGGGTAAAGGCCTACCATGGCGACGATCCGTAGCTGGTCTGAGAGGACGGCCAGCCACACTGGGACTGAGACACGGCCCAGACTCCTACGGGAGGCAGCAGTAAGGAATATTCCACAATGGCCGAAAGCGTGATGGAGCGAAACCGCGTGCGGGAGGAAGCCTTTCGGGGTGTAAACCGCTTTTAGGGGAGATGAAACGCCACCGTAAGGTGGCTAAGACAGTACCCCCTGAATAAGCATCGGCTAACTACGTGCCAGCAGCCGCGGTAATACGTAGGATGCAAGCGTTGTCCGGATTTACTGGGCGTAAAGCGCGCGCAGGCGGCAGGTTAAGTAAGGTGTGAAATCTCCCTGCTCAACGGGGAGGGTGCACTCCAGACTGACCAGCTAGAGGACGGTAGAGGGTGGTGGAATTGCTGGTGTAGCGGTGAAATGCGTAGAGATCAGCAGGAACACCCGTGGCGAAGGCGGCCACCTGGGCCGTACCTGACGCTGAGGCGCGAAGGCTAGGGGAGCGAACGGGATTAGATACCCCGGTAGTCCTAGCAGTAAACGATGTCCACTAGGTGTGGGGGGTTGTTGACCCCTTCCGTGCCGAAGCCAACGCATTAAGTGGACCGCCTGGGGAGTACGGTCGCAAGACTAAAACTCAAAGGAATTGACGGGGACCCGCACAAGCAGCGGAGCGTGTGGTTTAATTCGATGCGACGCGAAGAACCTTACCTGGGCTTGACATGCTATCGCAACACCCTGAAAGGGGTGCCTCCTTCGGGACGGTAGCACAGATGCTGCATGGCTGTCGTCAGCTCGTGTCGTGA
GATGTTGGGTTAAGTCCCGCAACGAGCGCAACCCCTGTCCTTAGTTGTATATCTAAGGAGACTGCCGGAGACAAACCGGAGGAAGGTGGGGATGACGTCAAGTCAGCATGGCTCTTACGTCCAGGGCTACACATACGCTACAATGGCCGTTACAGTGAGATGCCACACCGCGAGGTGGAGCAGATCTCCAAAGGCGGCCTCAGTTCAGATTGCACTCTGCAACCCGAGTGCATGAAGTCGGAGTTGCTAGTAACCGCGTGTCAGCATAGCGCGGTGAATATGTTCCCGGGTCTTGTACACACCGCCCGTCACGTCATGGGAGCCGGCAACACTTCGAGTCCGTGAGCTAACCCCCCCTTTCGAGGGTGTGGGAGGCAGCGGCCGAGGGTGGGGCTGGTGACTGGGACGAAGTCGTAACAAGGT ````
Python - read csv file of unicode substitutions I need to replace unicode according to a custom set of substitutions The custom substitutions are defined by someone else's API and I basically just have to deal with it As it stands I have extracted all the required substitutions into a csv file Here is a sample: ````\u0020 \u0021 ! \u0023 # \u0024 $ \u0025 % \u0026 &amp; \u0028 ( \u0029 ) \u002a * \u002b + \u002c " " \u002d - \u002e \u002f / \u03ba kappa ```` I generated this in MS Excel by hacking up the java program the API owners use for themselves when they need to do conversions (and no they will not just run the converter when the API receives input ) There are ~1500 substitutions defined When I generate output (from my Django application) to send to their API as input I want to handle the substitutions Here is how I have been trying to do it: ````class UTF8Converter(object): def __init__(self): #create replacement mapper full_file_path = os path join(os path dirname(__file__) CONVERSION_FILE) with open(full_file_path) as csvfile: reader = csv reader(csvfile) mapping = [] for row in reader: #remove escape-y slash mapping append( (row[0] row[1]) ) # here is the problem self mapping = mapping def replace_UTF8(self string): for old new in self mapping: print new string replace(old new) return string ```` <strong>The problem is that the unicode codes in the csv file are appearing as for example </strong> `self mapping[example][0] = '\\u00e0'` Ok well that is wrong so let us try: ````mapping append( (row[0] decode("string_escape") row[1]) ) ```` No change How about: ````mapping append( (row[0] decode("unicode_escape") row[1]) ) ```` Ok now `self mapping[example][0] = you'\xe0'` So yeah that is the character that I need to replace but the string that I need to call the replace_UTF8() function on looks like `you'\u00e0'` I have also tried `row[0] decode("utf-8")` `row[0] encode("utf-8")` `unicode(row[0] "utf-8")` I also tried <a href="http://stackoverflow 
com/questions/1846135">this</a> but I do not have unicode characters in the csv file I have unicode code points (not sure if that is the correct terminology or what) So how do I turn the string that I read in from the csv file into a unicode string that I can use with mythingthatneedsconverted replace( )? Or do I need to do something else with the csv file to use a more sensible approach?
I do not think your problem actually exists: <blockquote> Ok now self mapping[example][0] = you'\xe0' So yeah that is the character that I need to replace but the string that I need to call the replace_UTF8() function on looks like you'\u00e0' </blockquote> Those are just different representations of the exact same string You can test it yourself: ````&gt;&gt;&gt; you'\xe0' == you'\u00e0' True ```` <hr> The actual problem is that you are not doing any replacing In this code: ````def replace_UTF8(self string): for old new in self mapping: print new string replace(old new) return string ```` You are just calling `string replace` over and over which returns a new string but does nothing to `string` itself (It <them>cannot</them> do anything to `string` itself; strings are immutable ) What you want is: ````def replace_UTF8(self string): for old new in self mapping: print new string = string replace(old new) return string ```` <hr> However if `string` really is a UTF-8-encoded `str` as the function name implies this still will not work When you UTF-8-encode `you'\u00e0'` what you get is `'\xce\xa0'` There is no `\u00e0` in there to be replaced So what you really need to do is decode it do the replaces then re-encode Like this: ````def replace_UTF8(self string): you = string decode('utf-8') for old new in self mapping: print new you = you replace(old new) return you encode('utf-8') ```` Or even better keep things as `unicode` instead of encoded `str` throughout your program except at the very edges so you do not have to worry about this stuff <hr> Finally this is a very slow and complicated way to do the replacing when strings (both `str` and `unicode`) have a built-in <a href="http://docs python org/2 7/library/stdtypes html#str translate" rel="nofollow">`translate`</a> method to do exactly what you want Instead of building your table as a list of pairs of Unicode strings build it as a dict mapping ordinals to ordinals: ````mapping = {} for row in reader: 
mapping[ord(row[0].decode("unicode_escape"))] = ord(row[1]) ```` And now the whole thing is a one-liner even with your encoding mess: ````def replace_UTF8(self, string): return string.decode('utf-8').translate(self.mapping).encode('utf-8') ````
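The `translate` approach is easy to verify on its own. In Python 3 terms, `str.translate` accepts a dict mapping ordinals to ordinals or to replacement strings, so a table built from the CSV can be applied directly (the two entries below are made-up stand-ins for the real ~1500-row table):

```python
# Hypothetical substitution table: ordinal -> replacement string,
# as the question's CSV rows would define them.
table = {ord(u"\u00e0"): u"a", ord(u"\u03ba"): u"kappa"}

def substitute(text):
    """Apply the custom substitutions to a (unicode) string."""
    # translate walks the string once, so this beats looping
    # over 1500 replace() calls.
    return text.translate(table)
```

Characters without an entry in the table pass through unchanged.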
Sum multiple values in one single loop This is what I want: ````cr.execute(...) # some query lines = self.cr.dictfetchall() total_qty = 0.00 total_weight = 0.00 total_weight_net = 0.00 total_volume = 0.00 for line in lines: total_qty += line['product_qty'] total_weight += line['weight'] total_volume += line['volume'] ```` is there any better way to do this kind of sum? Thanks
You are probably trying to find an alternative looping method: ````lines = [{"a": 10, "b": 22}, {"a": 11, "b": 22}, {"a": 12, "b": 20}, {"a": 15, "b": 15}, {"a": 15, "b": 12}] sumA = sum(itm["a"] for itm in lines) sumB = sum(itm["b"] for itm in lines) print sumA print sumB ```` prints ````63 91 ```` Anyway as @MartijnPieters suggested getting the sums by sql query is very likely to be faster
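If you specifically want a single pass over the rows (for example because `lines` is a cursor or generator you cannot iterate twice), the totals can be accumulated together in one loop; this sketch gives the same totals as the two `sum()` calls above:

```python
lines = [{"a": 10, "b": 22}, {"a": 11, "b": 22}, {"a": 12, "b": 20},
         {"a": 15, "b": 15}, {"a": 15, "b": 12}]

# Accumulate every total in one traversal of the data.
sum_a = sum_b = 0.0
for itm in lines:
    sum_a += itm["a"]
    sum_b += itm["b"]
```

For query results, though, summing in SQL (`SELECT SUM(...)`) remains the fastest option since the database never ships the rows at all.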
Django BigInteger auto-increment field as primary key? I am currently building a project which involves a lot of collective intelligence Every user visiting the web site gets created a unique profile and their data is later used to calculate best matches for themselves and other users By default Django creates an INT(11) `id` field to handle models primary keys I am concerned with this being overflown very quickly (i e ~2 4b devices visiting the page without prior cookie set up) How can I change it to be represented as BIGINT in MySQL and long() inside Django itself? I have found I could do the following (http://docs djangoproject com/en/dev/ref/models/fields/#bigintegerfield): ````class MyProfile(models Model): id = BigIntegerField(primary_key=True) ```` But is there a way to make it autoincrement like usual `id` fields? Additionally can I make it unsigned so that I get more space to fill in? Thanks!
You could alter the table afterwards. That may be a better solution.
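Altering the table by hand would look something like this in MySQL. The table name `myapp_myprofile` is just Django's default naming convention for a model `MyProfile` in an app called `myapp`; adjust it to your own schema:

```sql
-- Widen the auto-increment primary key to an unsigned BIGINT.
ALTER TABLE myapp_myprofile
    MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
```

Note the Django model would still declare a plain `AutoField`; only the column type in MySQL changes, so Django is unaware of the wider range when validating values.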
Django uwsgi nginx CentOS 7 : connection refused on 8001 port Django uwsgi nginx CentOS 7 : connection refused on 8001 port I get http 520 error when I tried to go to <a href="http://domain com:8000" rel="nofollow">http://domain com:8000</a> <strong>nginx conf</strong> ```` upstream django { # connect to this socket # server unix:///tmp/uwsgi sock; # for a file socket server 127 0 0 1:8001; # for a web port socket } server { # the port your site will be served on listen 8000; # the domain name it will serve for server_name domain com; # substitute your machine's IP address or FQDN #root /home/mysite; charset utf-8; #Max upload size client_max_body_size 75M; # adjust to taste # Finally send all non-media requests to the Django server location / { uwsgi_pass django; include /home/mysite/uwsgi_params; # or the uwsgi_params you installed manually } } ```` error message on /var/log/nginx/error log ````2015/04/09 12:28:07 [error] 23235#0: *1 connect() failed (111: Connection refused) while connecting to upstream client: 118 131 206 235 server: domain com request: "GET / HTTP/1 1" upstream: "uwsgi://127 0 0 1:8001" host: "domain com:8000" 2015/04/09 12:28:08 [error] 23235#0: *1 connect() failed (111: Connection refused) while connecting to upstream client: 118 131 206 235 server: domain com request: "GET /favicon ico HTTP/1 1" upstream: "uwsgi://127 0 0 1:8001" host: "domain com:8000" ```` I have tried everything but could not find any clue that it gives me http 502 error
SELinux may block this type of connection by default You need to check the log file /var/log/audit/audit.log to be sure about it Or use the following command to put SELinux in permissive mode for this session: ````setenforce 0 ````
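If the audit log confirms an SELinux denial, the usual permanent fix (rather than disabling enforcement entirely) is to allow the web server to make outbound network connections; these are standard CentOS/SELinux commands run as root:

```shell
# Look for denials involving nginx in the audit log
grep nginx /var/log/audit/audit.log | grep denied

# Persistently allow the web server to connect to upstream sockets
setsebool -P httpd_can_network_connect 1
```

After setting the boolean, restart nginx and retry the request on port 8000.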
Store BLOB data with django and sqlite First of all I am aware that there are many similar question like this but the other solutions do not cover my specific case: In my sqlite-database are existing binary data (SHA1 and similar hashes) With googling and reading the <a href="https://docs djangoproject com/en/dev/howto/custom-model-fields/" rel="nofollow" title="docs">django-docs</a> i came up with the following: ````import base64 class BlobField(models Field): """ Stores raw binary data """ description = 'Stores raw binary data' __metaclass__ = models SubfieldBase def __init__(self *args **kwds): super(BlobField self) __init__(*args **kwds) def get_internal_type(self): return "BlobField" def get_db_prep_value(self value connection=None prepared=False): return base64 decodestring(value) def to_python(self value): return base64 encodestring(value) ```` which does what I want the value is encoded and decoded at the right moment but on saving the model into the database it gives me the following error: <blockquote> DatabaseError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str) It is highly recommended that you instead just switch your application to Unicode strings </blockquote> How can I fix this? 
(Possibly without breaking all my unicode-compatibility in the rest of the app) I cannot change the format of the db-columns since the data is used by another application <hr> EDIT: As suggested by @filip-dupanovic I adopted the BinaryField class as follows: class BinaryField(models Field): description = _("Raw binary data") ````def __init__(self *args **kwargs): kwargs['editable'] = False super(BinaryField self) __init__(*args **kwargs) if self max_length is not None: self validators append(validators MaxLengthValidator(self max_length)) def get_internal_type(self): return "BinaryField" def get_default(self): if self has_default() and not callable(self default): return self default default = super(BinaryField self) get_default() if default == '': return b'' return default def get_db_prep_value(self value connection prepared=False): #value = super(BinaryField self # ) get_db_prep_value(value prepared connection=connection) #if value is not None: # return connection Database Binary(value) return value ```` Notice the comments I had to insert at `get_db_prep_value()` like this it works as expected if I uncomment the lines I get an error <blockquote> TypeError: get_db_prep_value() got multiple values for keyword argument 'connection' </blockquote> I could live with this but do not fully understand the implications of leaving it out Will it still work even without the call to `super()`?
Ticket <a href="https://code djangoproject com/ticket/2417" rel="nofollow">#2417</a> closed not long ago added a `BinaryField` model field You should look at <a href="https://github com/django/django/commit/8ee1eddb7e148de89aebde9e68da495633fc1ec9" rel="nofollow">this commit</a> specifically the changes related to how you can map your internal `BlobField` field type to an appropriate type supported by your database
How are Mexican wrestlers different?
more agile and perform more aerial maneuvers
What are on each side of the three points argent of Elizabeth's current arms since she ascended to the throne?
null
Android and Python different SHA1 checksum of the same file The scenario is the next: I want to upload image to the server But before uploading the file I have to send the `SHA1` checksum of that file so the server could check if the file is already uploaded so I do not upload it again The problem is that for the same file I do not get the same `SHA1` checksum in my app and on the server side Here is the code in my Android app: ````public static String getSHA1FromFileContent(String filename) throws NoSuchAlgorithmException IOException { final MessageDigest messageDigest = MessageDigest getInstance("SHA-1"); InputStream is = new BufferedInputStream(new FileInputStream(filename)); final byte[] buffer = new byte[1024]; for (int read = 0; (read = is read(buffer)) != -1;) { messageDigest update(buffer 0 read); } is close(); // Convert the byte to hex format Formatter formatter = new Formatter(); for (final byte b : messageDigest digest()) { formatter format("%02x" b); } String res = formatter toString(); formatter close(); return res; } ```` And here is the code on the server side: ````def hashFile(f): sha1 = hashlib sha1() if hasattr(f 'multiple_chunks') and f multiple_chunks(): for c in f chunks(): sha1 update(c) else: try: sha1 update(f read()) finally: f close() return sha1 hexdigest() ```` What is the problem and why do I get different `SHA1` checksums?
Turned out there was some server side image editing before generating the `sha1` sum that was not meant to be done in this scenario They made changes on the server side and now this is working perfectly
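As a sanity check on the hashing itself: feeding the digest in chunks (as the Android code does) produces exactly the same result as hashing everything at once, so a mismatch can only come from the bytes actually differing — for example, server-side editing as here. A minimal sketch:

```python
import hashlib

def sha1_of_bytes(data, chunk_size=1024):
    # Feed the digest in chunks, like the Android code does
    sha1 = hashlib.sha1()
    for i in range(0, len(data), chunk_size):
        sha1.update(data[i:i + chunk_size])
    return sha1.hexdigest()

payload = b'x' * 3000
# Chunked and one-shot hashing agree byte for byte
assert sha1_of_bytes(payload) == hashlib.sha1(payload).hexdigest()
```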
How do you pass arguments from one function to another? Sorry for the newbie question guys but I am relatively new to python I want to write a function that passes keyword and value arguments into another function: e g ````def function_that_passes_arguments(arguments): some_other_function(arguments) ```` so when I call the first function they are passed into the second e g ````function_that_passes_arguments(arg1=1 arg2=2) ```` is effectively ````some_other_function(arg1=1 arg2=2) ```` The argument names will change so it is important that I pass both keyword and value from one function to another
Accept `*args **kwargs` and pass those to the called function: ````def function_that_passes_arguments(*args **kwargs): some_other_function(*args **kwargs) ```` In both places you can also use regular arguments - the only requirement is that the `*` and `**` arguments are the last ones
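A runnable sketch of the pattern (the function names mirror the question):

```python
def some_other_function(arg1, arg2):
    # The inner function sees the keywords exactly as they were passed in
    return (arg1, arg2)

def function_that_passes_arguments(*args, **kwargs):
    # Forward both positional and keyword arguments unchanged
    return some_other_function(*args, **kwargs)

print(function_that_passes_arguments(arg1=1, arg2=2))  # (1, 2)
```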
Add string to another string I currently encountered a problem: I want to handle adding strings to other strings very efficiently so I looked up many methods and techniques and I figured the "fastest" method But I quite can not understand how it actually works: ````def method6(): return '' join([`num` for num in xrange(loop_count)]) ```` From <a href="http://www skymind com/~ocrow/python_string/" rel="nofollow"><them>source</them> (Method 6)</a> Especially the `([`num` for num in xrange(loop_count)])` confused me totally
it is a <a href="http://docs python org/tutorial/datastructures html#list-comprehensions" rel="nofollow">list comprehension</a> that uses backticks for <a href="http://docs python org/library/functions html#repr" rel="nofollow">`repr`</a> conversion Do not do this Backticks are deprecated and removed in py3k and more efficient and pythonic way is not to build intermediate list at all but to use generator expression: ````'' join(str(num) for num in xrange(loop_count)) # use range in py3k ````
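In Python 3 (where the backticks are gone entirely) the same idea looks like this:

```python
loop_count = 5
# Generator expression: converts each number with str() and joins,
# without building an intermediate list first
result = ''.join(str(num) for num in range(loop_count))
print(result)  # 01234
```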
Python - How can I return a list for each xml node I am iterating through using xml etree ElementTree? I am using the xml etree ElementTree module to parse an XML file returning the attributes into lists and then entering those lists in a MySQL database (this last step I am not worried about so there is no need to cover it here) Simple enough and I am currently able to do so but only for one child node at a time The goal is to do so with multiple child nodes regardless of how many there are Here is a sample file: ````<?xml version="1 0"?&gt; <catalog&gt; <book id="bk101" type="hardcover"&gt; <info author="Gambardella Matthew" title="XML Developer's Guide" genre="Computer" price="44 95" publish_date="2000-10-01" description="An in-depth look at creating applications with XML " /&gt; </book&gt; <book id="bk102" type="softcover"&gt; <info author="Ralls Kim" title="Midnight Rain" genre="Fantasy" price="5 95" publish_date="2000-10-01" description="A former architect battles corporate zombies an evil sorceress and her own childhood to become queen of the world " /&gt; </book&gt; <book id="bk101" type="softcover"&gt; <info author="Corets Eva" title="Maeve Ascendant" genre="Fantasy" price="5 95" publish_date="2000-11-17" description="After the collapse of a nanotechnology society in England the young survivors lay the foundation for a new society " /&gt; </book&gt; </catalog&gt; ```` I am able to parse the correct attributes for the first book node where id="bk101" or the last book node where id="bk103" by returning a list with the correct attributes However I am only returning one list per file when I need to return multiple lists (one for each book node and info node so in this case 6 total lists) Here is my code: ````import xml etree ElementTree book_attribute = ['id' 'type'] info_attribute = ['author' 'title' 'genre' 'price' 'publish_date' 'description'] class ApplicationClass(object): # define the only class in this file def __init__(self): self ET = xml etree 
ElementTree parse('file xml') getroot() self bookNodes = self ET findall('book') self book_values_list = [] self info_values_list = [] def get_book(self): for bookNode in self bookNodes: self book_values_list = [bookNode get(i) for i in book_attribute] return self book_values_list def get_info(self): for bookNode in self bookNodes: for infoNode in bookNode findall('info'): self info_values_list = [infoNode get(i) for i in info_attribute] return self info_values_list a = ApplicationClass() a get_book() print(a book_values_list) a get_info() print(a info_values_list) ```` So I know my problem is that I am only returning one list per function because I am returning the list at the end of the function and then calling the function at the end of my script I just cannot find the proper way to achieve my desired outcome If I do not run my functions at the end of the script then how can I return the multiple lists that I am looking for?
This line is your problem: ````self book_values_list = [bookNode get(i) for i in book_attribute] ```` That line will replace your existing list with a new list But you have this line inside a loop which means on each pass through the loop you lose what was previously processed I think you want this instead: ````self book_values_list append([bookNode get(i) for i in book_attribute]) ```` Using ` append()` instead of `=` will make it so that your variable will have more stuff inserted into it Ultimately you will end up with a list of lists like this: ````[['bk101' 'hardcover'] ['bk102' 'softcover'] ['bk101' 'softcover']] ```` Your other method/loop has the same problem in it - you assign a new list to the variable instead of inserting a new list into the existing list
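A small self-contained illustration of the difference, using plain dicts in place of the ElementTree nodes (both support `get()`):

```python
book_attribute = ['id', 'type']
books = [{'id': 'bk101', 'type': 'hardcover'},
         {'id': 'bk102', 'type': 'softcover'}]

book_values_list = []
for book in books:
    # append() keeps each pass's result; plain assignment would
    # throw away everything but the last book
    book_values_list.append([book.get(i) for i in book_attribute])

print(book_values_list)  # [['bk101', 'hardcover'], ['bk102', 'softcover']]
```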
Decorators versus inheritance How do you decide between using decorators and inheritance when both are possible? E g <a href="http://stackoverflow com/questions/6394511/python-functools-wraps-equivalent-for-classes">this problem</a> has two solutions I am particularly interested in Python
If both are equivalent I would prefer decorators, since you can use the same decorator for many classes, whereas inheritance applies to only one specific class
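A sketch of the reuse argument: one class decorator applied to two unrelated classes, where inheritance would have tied the behaviour to a single base class:

```python
def add_greeting(cls):
    # Attach a method to whatever class the decorator wraps
    cls.greet = lambda self: "hello from " + type(self).__name__
    return cls

@add_greeting
class Dog(object):
    pass

@add_greeting
class Robot(object):
    pass

print(Dog().greet())    # hello from Dog
print(Robot().greet())  # hello from Robot
```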
Where was their court in Kuaiji removed to?
Jianye
Tornado Error In database py line 58 - NameError global name 'CONVERSIONS' is not defined I am trying to run the Tornado Demo 'Blog' I have changed the database options in the blog py file but when i try to run the blog py file i get the following error!!<img src="http://i stack imgur com/aNhci png" alt="enter image description here"> <a href="https://github com/facebook/tornado/blob/master/demos/blog/blog py">blog py file link</a> <a href="https://github com/facebook/tornado/blob/master/tornado/database py">database py file link</a> What needs to be done to get past this error??
Install <a href="http://pypi python org/pypi/MySQL-python" rel="nofollow">MySQLdb</a> (required by tornado database)
How to write a python script (on linux) that executes another script and exits? I want script a py to execute script B y then exit immediately script B y is then to continue running indefinitely and regularly as if run from the command line Target system is Linux Centos if it makes any difference
I guess <a href="https://docs python org/2/library/subprocess html" rel="nofollow">Popen subprocess</a> is what you are looking for i e : For windows something like: ````import sys subprocess subprocess Popen(["C:/Python27/python exe" "C:/path/to/script py"]) sys exit(0) ```` <hr> For linux just change the path: ````import sys subprocess subprocess Popen(["/usr/local/bin/python" "/path/to/script py"]) sys exit(0) ```` Note: To find python location on linux you can use `which python`
How to do a 'groupby' by multilevel index in Pandas I have a dataframe 'RPT' indexed by (STK_ID RPT_Date) contains the accumulated sales of stocks for each qurter: ```` sales STK_ID RPT_Date 000876 20060331 798627000 20060630 1656110000 20060930 2719700000 20061231 3573660000 20070331 878415000 20070630 2024660000 20070930 3352630000 20071231 4791770000 600141 20060331 270912000 20060630 658981000 20060930 1010270000 20061231 1591500000 20070331 319602000 20070630 790670000 20070930 1250530000 20071231 1711240000 ```` I want to calculate the single qurterly sales using 'groupby' by STK_ID &amp; RPT_Yr such as : `RPT groupby('STK_ID' 'RPT_Yr')['sales'] transform(lambda x: x-x shift(1))` how to do that ? suppose I can get the year by `lambda x : datetime strptime(x '%Y%m%d') year`
Assuming here that RPT_Data is a string any reason why not to use Datetime? It is possible to groupby using functions but only on a non MultiIndex-index Working around this by resetting the index and set 'RPT_Date' as index to extract the year (note: pandas toggles between object and int as dtype for 'RPT_Date') ````In [135]: year = lambda x : datetime strptime(str(x) '%Y%m%d') year In [136]: grouped = RPT reset_index() set_index('RPT_Date') groupby(['STK_ID' year]) In [137]: for key df in grouped: : print key : print df : (876 2006) STK_ID sales RPT_Date 20060331 876 798627000 20060630 876 1656110000 20060930 876 2719700000 20061231 876 3573660000 (876 2007) STK_ID sales RPT_Date 20070331 876 878415000 20070630 876 2024660000 20070930 876 3352630000 20071231 876 4791770000 (600141 2006) STK_ID sales RPT_Date 20060331 600141 270912000 20060630 600141 658981000 20060930 600141 1010270000 20061231 600141 1591500000 (600141 2007) STK_ID sales RPT_Date 20070331 600141 319602000 20070630 600141 790670000 20070930 600141 1250530000 20071231 600141 1711240000 ```` Other option is to use a tmp column ````In [153]: RPT_tmp = RPT reset_index() In [154]: RPT_tmp['year'] = RPT_tmp['RPT_Date'] apply(year) In [155]: grouped = RPT_tmp groupby(['STK_ID' 'year']) ```` <strong>EDIT</strong> Reorganising your frame make it much easier ````In [48]: RPT Out[48]: sales STK_ID RPT_Year RPT_Quarter 876 2006 0 798627000 1 1656110000 2 2719700000 3 3573660000 2007 0 878415000 1 2024660000 2 3352630000 3 4791770000 600141 2006 0 270912000 1 658981000 2 1010270000 3 1591500000 2007 0 319602000 1 790670000 2 1250530000 3 1711240000 In [49]: RPT groupby(level=['STK_ID' 'RPT_Year'])['sales'] apply(sale_per_q) Out[49]: STK_ID RPT_Year RPT_Quarter 876 2006 0 798627000 1 857483000 2 1063590000 3 853960000 2007 0 878415000 1 1146245000 2 1327970000 3 1439140000 600141 2006 0 270912000 1 388069000 2 351289000 3 581230000 2007 0 319602000 1 471068000 2 459860000 3 460710000 ````
What term describes the number and variety of plants, animals and other organisms in existence?
biodiversity
python3 : Cannot Import cv2 : DLL load failed after installing opencv_contrib modules I had been using cv2 in python on Windows 7 for basic image processing but wanted xfeatured2d from the opencv_contrib package I followed the instructions given in <a href="https://www youtube com/watch?v=vp0AbhXXTrw" rel="nofollow">https://www youtube com/watch?v=vp0AbhXXTrw</a> up till the point of building the `ALL_BUILD` and `INSTALL` on release and debug mode and creating the system variable I could not make the sample projects since I use the express version of Visual Studio 2012 and the UI was different from what has been shown in the video but I think that is just for testing and is not essential to use the modules in python Now the problem I am facing is that my python 3 4 in which earlier I had earlier used cv2 is longer identifying cv2 package and throws the following error ````&gt;&gt;&gt; import cv2 Traceback (most recent call last): File "<pyshell#0&gt;" line 1 in <module&gt; import cv2 ````
I installed the latest OpenCV from Git master on Ubuntu 12.10 with Python 3.2 and 3.3 bindings. If you are on Windows, check the link below: <a href="http://sourceforge net/projects/opencvlibrary/files/opencv-win/" rel="nofollow">Open CV Package Installation</a> If you are on Ubuntu, make sure you have the *-dev package installed for your version of Python, as I did for Linux: <strong>sudo apt-get install python3.3-dev</strong>
How to flush plots in IPython Notebook sequentially? ````for i in range(3): print("Info " i) plt figure() plt plot(np arange(10)*(i+1)) ```` In an IPython Notebook this will first print out the three info messages and afterwards plot the three figures Which command can I use to enforce the sequential display of prints and plots? That is print "Info 0" plot "Figure 0" print "Info 1" plot "Figure 1" etc This a simple bare-bones example In my case it is much more complicated and it is important to get the behavior correctly
`IPython` first evaluates all code in your cell When this is done open figures are plotted to the output area If that is not what you want you can display your figures manually However you have to be sure to close all newly created figure objects before the evaluation of the cell ends This is a short example: ````%matplotlib inline import matplotlib pyplot as plt import numpy as np from IPython display import display for i in range(3): print("Info " i) fig ax = plt subplots() ax plot(np arange(10)*(i+1)) display(fig) plt close() ````
A Default Dict that default's to a dictionary with pre-generated keys If there a better way to accomplish this? ````from functool import partial from collections import defaultdict dict_factory = partial(dict {'flag_name' : False 'flag_name2' : False 'flag_name3' : True etc } self ids_with_flags_dictionary = defaultdict(dict_factory) ```` The goal here being a dictionary of keys(the keys being id's of some kind) that autogenerates the list of default flag states if I call an ID that has not been called before
There is nothing wrong with it exactly, but using `partial` seems a bit overkill just to return a static value Why not just: ````defaultFlags = {'flag_name' : False 'flag_name2' : False 'flag_name3' : False # etc } self ids_with_flags_dictionary = defaultdict(lambda: defaultFlags copy()) ````
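The `copy()` is what keeps entries independent — without it every id would share (and mutate) one and the same dict. A quick demonstration:

```python
from collections import defaultdict

defaultFlags = {'flag_name': False, 'flag_name2': False}

ids_with_flags = defaultdict(lambda: defaultFlags.copy())

ids_with_flags['id1']['flag_name'] = True  # flip a flag for one id only

print(ids_with_flags['id1']['flag_name'])  # True
print(ids_with_flags['id2']['flag_name'])  # False -- fresh copy, unaffected
print(defaultFlags['flag_name'])           # False -- template untouched
```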
The Banská Akadémia was originally intended for training workers in what two precious metals?
silver and gold
Buyers were often deceived when precious metals were mixed with?
less valuable substances
Trying to understand Python loop using underscore and input One more tip - if anyone is learning Python on HackerRank knowing this is critical for starting out I am trying to understand this code: ```` stamps = set() for _ in range(int(raw_input())): print 'underscore is' _ stamps add(raw_input() strip()) print stamps ```` Output: ```` &gt;&gt;&gt;2 underscore is 0 &gt;&gt;&gt;first set(['first']) underscore is 1 &gt;&gt;&gt;second set(['second' 'first']) ```` - I put 2 as the first raw input How does the function know that I am only looping twice? This is throwing me off because it is not the typical for i in xrange(0 2) structure - At first my thinking was the underscore repeats the last command in she will So I added print statements in the code to see the value of underscore but the values just show the 0 and 1 like the typical loop structure I read through this post already and I still cannot understand which of those 3 usages of underscore is being used <a href="http://stackoverflow com/questions/5893163/what-is-the-purpose-of-the-single-underscore-variable-in-python">What is the purpose of the single underscore &quot;_&quot; variable in Python?</a> I am just starting to learn Python so easy explanations would be much appreciated!
<a href="http://stackoverflow com/a/5893946/918959">ngoghlan's answer</a> lists 3 conventional uses for `_` in Python: <blockquote> - To hold the result of the last executed statement in an interactive interpreter session This precedent was set by the standard CPython interpreter and other interpreters have followed suit - For translation lookup in i18n (imported from the corresponding C conventions I believe) as in code like: ````raise forms ValidationError(_("Please enter a correct username"))` ```` - As a general purpose "throwaway" variable name to indicate that part of a function result is being deliberately ignored as in code like: ```` label has_label _ = text partition(':') ```` </blockquote> <hr> Your question is which one of these is being used in the example in your code The answer would be that is a throwaway variable (case 3) but its contents are printed <strong>here</strong> for debugging purposes It is however not a general Python convention to use `_` as a loop variable if its value is used in any way Thus you regularly might see: ```` for _ in range(10): print("Hello world") ```` where `_` immediately signals the reader that the value is not important and it the loop is just repeated 10 times However in a code such as ```` for i in range(10): do_something(i) ```` where the value of the loop variable is used it is the convention to use a variable name such as `i` `j` instead of `_`
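A minimal illustration of the throwaway use — the value exists either way; the name `_` just tells the reader it is intentionally unused:

```python
# Repeat an action without caring about the loop counter
messages = []
for _ in range(3):
    messages.append("Hello world")

print(len(messages))  # 3
```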
Import function manually I am beginner in python and i tried to understand the <strong>from import </strong> statement I surprised to found that we can import function manually and i want to ask you if it is legal My module: ````**my_module py** def func1(): print 10 def func2(): print 20 ```` My program: ````import sys from my_module import func1 func2() ==&gt; ERROR! globals()["func2"] = sys modules["my_module"] func2 func2() ==&gt; Printing 20! ````
Do not do that; some Pythonistas will chew your leg off if they see it. It is very non-pythonic. `import` is a runtime statement (as everything in Python is), so there is no reason at all to use the `sys.modules` version
Multiple characters in Python ord function Programming beginner here (Python 2 7) Is there a work around for using more than a single character for Python's ord function? For example I have a hex string '\xff\x1a' which I would like the decimal value for so that I can sum it with other hex strings However ord only accepts a single hex string character Thank you!
Strings are iterable so you can loop through the string use `ord` and add the results: ````your_sum = sum([ord(i) for i in '\xff\x1a']) ````
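For example, `'\xff'` is 255 and `'\x1a'` is 26, so the sum is 281 (the brackets are optional — `sum` accepts a generator expression directly):

```python
data = '\xff\x1a'
# ord() gives the integer code of each one-character string
total = sum(ord(ch) for ch in data)
print(total)  # 281
```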
Django JOIN eliminates desired columns I am trying to join two tables with django related to each other with a foreign key field ````class Question(models Model): description = models TextField('Description' blank=True null=True) class Vote(models Model): question = models ForeignKey(Question) profile = models ForeignKey(UserProfile) value = models IntegerField('Value') creator = models ForeignKey(User) ```` I tried to create a queryset by using ````questions = Question objects filter(vote__creator=3) values() ```` which results in a set like this ````+----+-------------+ | id | description | ----+-------------+ ```` If I run a slightly similar query by hand in mysql with ````select * from questions as t1 join votes as t2 on t1 id=question_id where creator_id=3; ```` it results in a set like this ````+----+-------------+------+-------------+------------+-------+------------+ | id | description | id | question_id | profile_id | value | creator_id | ```` How can I prevent django from cutting columns from my resulting queryset? I would really wish to retrieve a fully joined table
use objects select_related(): ````questions = Question objects select_related() filter(vote__creator=3) values() ````
HTML Extraction with Python The issue being tackled is being unable to click and go to the next page on an HTML page An HTML page is being accessed which displays results after your search query At the bottom of the page there is a line of numbers to select from the page of your query i e "1 2 3 4 next" - clicking "2" shows you the results on the second page If you are on a different page number i e 2 or 3 the line at the bottom looks like: "previous 1 2 3 4 next" I am using Python and Webdriver to click to the next page to scroll through my results The first time I press it it takes me to the next page The SECOND time I click it it takes me to the previous page Meaning I am stuck on the first two pages and cannot see results for 3 and 4 I noticed that the reason this was happening was because of the <strong>li class="arrow"</strong> tag being present twice in the HTML code That when the second call was made the first tag that appears is the one with the "arrow" class How do I go about clicking this? 
HTML Notes: - the "li" tag defines a list item HTML Code: BEFORE CLICKING NEXT: ````<div class="list"&gt; <ul class="line"&gt; <li class="current page"&gt;<a href&gt;1</a&gt;</li&gt; <li&gt;<a href="/search_text=&amp;&amp;page=1"&gt;2</a&gt;</li&gt; <li&gt;<a href="/search_text=&amp;&amp;page=2"&gt;3</a&gt;</li&gt; <li&gt;<a href="/search_text=&amp;&amp;page=3"&gt;4</a&gt;</li&gt; <li class="arrow"&gt;<a href="/search_text=&amp;&amp;page=1"&gt;next</a&gt;</li&gt; </ul&gt; </div&gt; ```` AFTER CLICKING "NEXT" HTML CODE looks like this: ````<div class="list"&gt; <ul class="line"&gt; <li class="arrow"&gt;<a href="/search_text=&amp;"&gt;previous</a&gt;</li&gt; <li&gt;<a href="/search_text=&amp;"&gt;1</a&gt;</li&gt; <li class="current page"&gt;<a href&gt;2</a&gt;</li&gt; <li&gt;<a href="/search_text=&amp;&amp;page=2"&gt;3</a&gt;</li&gt; <li&gt;<a href="/search_text=&amp;&amp;page=3"&gt;4</a&gt;</li&gt; <li class="arrow"&gt;<a href="/search_text=&amp;&amp;page=2"&gt;next</a&gt;</li&gt; </ul&gt; </div&gt; ```` Python Code: ````chromedriver = "C:\temp\chromedriver exe" os environ["webdriver chrome driver"] = chromedriver driver = webdriver Chrome(executable_path=r"C:\temp\chromedriver exe") driver implicitly_wait(3) driver get(urlLink) driver find_element_by_css_selector("li arrow") click() #Takes me to the next page driver find_element_by_css_selector("li arrow") click() #Takes me to the previous page ````
You can use the method driver <strong>find_element_by_link_text</strong>('next') to find the element and then call click()
How to use django_socketio on a django project I would like to use real-time application on my website on django I installed django_socketio but I do not found a tutorial witch talk about how can I use it Have you an idea how can I do this?
<a href="http://jpadilla com/post/74391616727/the-easiest-way-to-add-websockets-to-django" rel="nofollow">This</a> might be a good place to start It is not exactly socketio but it is a great place to start Here is the <a href="http://dwr-example herokuapp com/" rel="nofollow">demo</a>
Pycharm IDE tries to load django script from sh instead of bash If i try to compilemessages in Django I get erros like this: ````sh: msgfmt: command not found ```` while from bash (terminal) all these commands work fine How do I change Pycharm to run these commands from bash instead of sh ? Thanks
Actually PyCharm itself does not use either sh or bash to run these commands; it runs the manage py Python script using the interpreter configured in Settings | Python Interpreters The compilemessages command uses os system() Python function to run the msgfmt command The os system() function runs the system() C function which always uses /bin/sh Therefore you need to configure your OS in such a way that msgfmt could be run from /bin/sh
How to iterate in a cartesian product of lists I would like to iterate in a for loop using 3 (or any number of) lists with any number of elements for example: ````from itertools import izip for x in izip(["AAA" "BBB" "CCC"] ["M" "Q" "S" "K" "B"] ["00:00" "01:00" "02:00" "03:00"]): print x ```` but it gives me: ````('AAA' 'M' '00:00') ('BBB' 'Q' '01:00') ('CCC' 'S' '02:00') ```` I want: ````('AAA' 'M' '00:00') ('AAA' 'M' '01:00') ('AAA' 'M' '02:00') ('CCC' 'B' '03:00') ```` Actually I want this: ````for word letter hours in [cartesian product of 3 lists above] if myfunction(word letter hours): var_word_letter_hours = 1 ````
You want to use the <a href="http://docs python org/library/itertools html#itertools product">product</a> of the lists: ````from itertools import product for word letter hours in product(["AAA" "BBB" "CCC"] ["M" "Q" "S" "K" "B"] ["00:00" "01:00" "02:00" "03:00"]): ```` Demo: ````&gt;&gt;&gt; from itertools import product &gt;&gt;&gt; for word letter hours in product(["AAA" "BBB" "CCC"] ["M" "Q" "S" "K" "B"] ["00:00" "01:00" "02:00" "03:00"]): print word letter hours AAA M 00:00 AAA M 01:00 AAA M 02:00 AAA M 03:00 CCC B 00:00 CCC B 01:00 CCC B 02:00 CCC B 03:00 ````
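Putting it together with the filtering from the question — `myfunction` below is a hypothetical stand-in, since the real predicate is not shown:

```python
from itertools import product

words = ["AAA", "BBB", "CCC"]
letters = ["M", "Q"]
hours = ["00:00", "01:00"]

def myfunction(word, letter, hours):
    # hypothetical predicate standing in for the asker's real check
    return word == "AAA" and letter == "M"

# Keep only the combinations the predicate accepts
matches = [combo for combo in product(words, letters, hours)
           if myfunction(*combo)]
print(matches)  # [('AAA', 'M', '00:00'), ('AAA', 'M', '01:00')]
```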
Parse iframe with blank src using bs4 Good time of day SO community Here is the problem I recently encountered: I got this HTML source code on main page: ```` <div id="contents_layout"&gt; <iframe name="contentsFrame" id="contentsFrameID" src="" width="100%" height="100%" scrolling="no" frameborder="0" marginheight="0" marginwidth="0"&gt;</iframe&gt; </div&gt; ```` And I have read a lot of materials on parsing the iframe but all they do is getting the src attribute from iframe and make another request afterwards I cannot do same trick here as the src attribute is blank and web logic lies underneath I am using Python 3 5 bs4 and requests Page source code - <a href="http://collabedit com/kqp88" rel="nofollow">http://collabedit com/kqp88</a> Frame source code - <a href="http://collabedit com/hwuj7" rel="nofollow">http://collabedit com/hwuj7</a> I do not know if it is okay if I share the original webpage
The <them>iframe</them> had an id so just use that: ````h= """<div id="contents_layout"&gt; <iframe name="contentsFrame" id="contentsFrameID" src="" width="100%" height="100%" scrolling="no" frameborder="0" marginheight="0" marginwidth="0"&gt;</iframe&gt; </div&gt; """ from bs4 import BeautifulSoup soup = BeautifulSoup(h) iframe = soup select_one("#contentsFrameID") ```` which would give you: ````<iframe frameborder="0" height="100%" id="contentsFrameID" marginheight="0" marginwidth="0" name="contentsFrame" scrolling="no" src="" width="100%"&gt;</iframe&gt; ```` You could also use the empty <them>src</them> attribute (note the outer single quotes so the inner double quotes survive): ````ifr = soup select_one('iframe[src=""]') ```` Use the name: ```` ifr = soup select_one("iframe[name=contentsFrame]") ```` In the actual site you are scraping the content inside `contentsFrameID` is dynamically created so you will need something like <a href="http://selenium-python readthedocs io/" rel="nofollow">selenium</a> an example below getting the dynamically created form: ````from selenium import webdriver from bs4 import BeautifulSoup dr = webdriver PhantomJS() dr get("http://encykorea aks ac kr/Contents/Index?contents_id=E0000089") soup = BeautifulSoup(dr page_source) print(soup select_one("#contentFrameForm")) ````
Having trouble insert data to table with default values I have a table that has 4 attributes ```` --------------+--------------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | --------------+--------------+------+-----+-------------------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | url | varchar(255) | YES | | NULL | | | mail | varchar(255) | YES | | NULL | | | date_entered | timestamp | NO | | CURRENT_TIMESTAMP | | | active | tinyint(1) | NO | | 1 | | --------------+--------------+------+-----+-------------------+----------------+ ```` Now i want to insert only the data_entered and other attributes to get default values I am doing it for the id field which I need to be exact to another id I insereted to different table this is the code: ```` tx execute(\ "insert into company_career (date_entered) " "values (%s)" (time time()) ) ```` This is the error: ```` query = query % db literal(args) exceptions TypeError: not all arguments converted during string formatting ```` How to fix it?
Try this — pass the parameters as a one-element tuple (note the trailing comma) and convert the UNIX timestamp with `FROM_UNIXTIME` so MySQL can store it in the timestamp column: ````tx execute( "insert into company_career (date_entered) " "values (FROM_UNIXTIME(%s))", (time.time(),) ) ````
What is the discipline not taught in mantiq courses?
null
10 000+ Point 3D Scatter Plots in Python (with Quick Rendering) Performance-wise the following code snippet works perfectly fine for me when plotting in `mayavi` ````import numpy as np from mayavi import mlab n = 5000 x = np random rand(n) y = np random rand(n) z = np random rand(n) s = np sin(x)**2 np cos(y) mlab points3d(x y z s colormap="RdYlBu" scale_factor=0 02 scale_mode='none') ```` But `mayavi` begins to choke once `n &gt;= 10000` The analogous 3d plotting routine in `matplotlib` (`Axes3D scatter`) similarly struggles with data sets of this size (why I started looking into `mayavi` in the first place) First is there something in `mayavi` (trivial or nontrivial) that I am missing that would make 10 000+ point scatter plots much easier to render? Second if the answer above is no what other options (either in `mayavi` or a different python package) do I have to plot datasets of this magnitude? I tagged ParaView simply to add that rendering my data in ParaView goes super smoothly leading me to believe that I am not trying to do anything unreasonable <strong>Update:</strong> Specifying the mode as a 2D glyph goes a long way towards speeding things up E g ````mlab points3d(x y z s colormap="RdYlBu" scale_factor=0 02 scale_mode='none' mode='2dcross') ```` can easily support up to 100 000 points <a href="http://i stack imgur com/cDUbH png" rel="nofollow"><img src="http://i stack imgur com/cDUbH png" alt="enter image description here"></a> It would still be nice if anyone could add some info about how to speed up the rendering of 3D glyphs
<a href="http://www.pyqtgraph.org/" rel="nofollow">PyQtGraph</a> is a much more performant plotting package, although not as "beautiful" as matplotlib or mayavi. It is made for number crunching and should therefore easily render points on the order of tens of thousands. As for `mayavi` and `matplotlib`: I think with that number of points you have reached what is possible with those packages. Edit: <a href="http://vispy.org/" rel="nofollow">VisPy</a> seems to be the successor to PyQtGraph and some other visualization packages. It might be a bit overkill, but it can display a few hundred thousand points easily by offloading computation to the GPU.
When did the BBC cease broadcasts due to World War II?
September 1939
python read text from table scraping I need a value from a table on a website. My Python code: ````import sys
import getopt
import linecache
import string
import ftplib
import os
import requests
from lxml import html
import datetime

page = requests.get(URL)
tree = html.fromstring(page.content)
all_id = tree.xpath('//td[@style="display:none&gt; &amp;gt;"]/text()')
print 'Wszystkie ID:', all_id
```` website code ````<td style="display:none"&gt;id&amp;gt;277918954 id32&amp;gt;c14f940e3eed6a3871e1e3376048303f level&amp;gt;0 key_left&amp;gt;0 key_right&amp;gt;0 name&amp;gt;file png type&amp;gt;File png size&amp;gt;139 27 KB hash&amp;gt;538dd38791b76170ab71feec9ef6fed5</td&gt; ```` I only get an error. Where is the problem?
<blockquote>
  `//td[@style="display:none&gt; &amp;gt;"]/text()`
</blockquote>

This would not match the presented element; this would: ````//td[@style="display:none"]/text()
```` Though this expression does not seem reliable. To provide you with a reliable locator we would need to see the complete HTML of the page, but given what we have, how about checking that the text <em>starts with</em> `id`: ````//td[starts-with(., "id")]/text()
````
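To sanity-check the corrected attribute predicate without the full page, here is a small self-contained sketch; it uses the stdlib `xml.etree.ElementTree` instead of lxml, and a made-up snippet standing in for the real markup:

```python
import xml.etree.ElementTree as ET

snippet = ("<table><tr>"
           "<td style='display:none'>id&gt;277918954</td>"
           "<td>visible cell</td>"
           "</tr></table>")

tree = ET.fromstring(snippet)
# match on the exact value of the style attribute, then take the text
hidden = [td.text for td in tree.findall(".//td[@style='display:none']")]
print(hidden)  # ['id>277918954']
```

lxml additionally supports `starts-with(., ...)` in `xpath()`, which ElementTree's limited path syntax does not.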
Feature selection algorithms in Scikit-learn I am using Scikit-learn for a binary classification task and I am trying to perform feature selection using one of these methods: Correlation-based Feature Selection (CFS) Subset Evaluator and the Best First search algorithm. I could not find either of them in the feature selection module! Are they listed under another name in Scikit-learn, or is there any equivalent? I want to repeat an experiment, and they used these algorithms to reduce their data.
Not sure whether Scikit-Learn has FCBF (Fast Correlation-Based Filter) or not. However, you can use the implementation (MATLAB) from the feature selection package from ASU (Arizona State University). Download link: <blockquote>
  <a href="http://featureselection.asu.edu/algorithms/fs_sup_fcbf.zip" rel="nofollow">http://featureselection.asu.edu/algorithms/fs_sup_fcbf.zip</a>
</blockquote> Corresponding paper: <blockquote>
  <a href="http://www.cs.binghamton.edu/~lyu/publications/Yu-Liu03ICML.pdf" rel="nofollow">http://www.cs.binghamton.edu/~lyu/publications/Yu-Liu03ICML.pdf</a>
</blockquote>
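For reference, FCBF ranks features by their symmetrical uncertainty with the class label. This is a small pure-Python sketch of that score, my own illustration on toy discrete data, not code from the ASU package:

```python
from collections import Counter
from math import log2

def entropy(xs):
    # Shannon entropy of a discrete sequence, in bits
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    # SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), the score FCBF ranks features by
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))       # joint entropy H(X, Y)
    ig = hx + hy - hxy                   # information gain I(X; Y)
    return 2 * ig / (hx + hy) if hx + hy else 0.0

# toy binary data: f1 copies the label, f2 is independent of it
label = [0, 0, 1, 1]
f1 = [0, 0, 1, 1]
f2 = [0, 1, 0, 1]
print(symmetrical_uncertainty(f1, label))  # 1.0 (perfectly informative)
print(symmetrical_uncertainty(f2, label))  # 0.0 (independent)
```

FCBF then keeps features whose SU with the label exceeds a threshold and drops any feature that is more strongly correlated with an already-kept feature than with the label.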
Which constitution details the role and abilities of the prime minister?
Bangladesh's