input_text | target_text |
---|---|
Calling methods on an already running python script I want to call a method in a running python script from another script. For e.g.: <strong>script1.py</strong> is continuously running. In this script there are methods. From another <strong>script2.py</strong> I want to call a method of <strong>script1.py</strong>. | You can import script1.py and use functions from it, because script1.py is cached by the interpreter, if I understand the question correctly. |
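A minimal sketch of that suggestion (the file and function names here are made up, and note the caveat: importing shares script1's code, not the live state of an already-running process — for that you would need some form of IPC):

```python
import importlib
import os
import sys

# Stand-in for an existing script1.py, so this sketch is self-contained:
with open("script1.py", "w") as f:
    f.write("def do_work(x):\n    return x * 2\n")

sys.path.insert(0, os.getcwd())  # make sure the current directory is importable
script1 = importlib.import_module("script1")  # what "import script1" does
print(script1.do_work(21))  # 42
```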
What contestant came in fourth on season two of American Idol? | Josh Gracin |
What did Sir Isaac Newton believe time and space were the product of? | null |
Cassandra or MongoDB for good scaling and a big amount of queries I have the following situation. There are a lot of queries to the database (commonly writing comments, reading profiles, etc.) and I think there will be more reads than writes. I want to have a good possibility of scaling the db to several servers. So, I like nosql :) As I understand by reading blogs and answers to questions on StackOverflow (for example <a href="http://stackoverflow.com/questions/2892729/mongodb-vs-cassandra">this one</a>), the best choice in this situation is Cassandra. So the question is: is Cassandra more suitable for my purposes? Why? And the second question is about async client libraries for <a href="https://github.com/facebook/tornado/wiki/Links">Tornado</a> - do you know some implementations of this? As you can see on the wiki page linked above, there are async clients only for mongodb and couchdb, and this fact also stops me. Maybe I can use MongoDB now (because an async library exists, and maybe at first it will be faster than Cassandra on several servers but without async) and after some time convert the data in mongo to cassandra. What do you think about it? | Half answer - since it is not about suitability. Tornado 2.1 supports twisted as an async pattern, which means that you can use the telephus Cassandra library (twisted+Cassandra) to have async Cassandra access:

````
import tornado.platform.twisted
from telephus.pool import CassandraClusterPool

tornado.platform.twisted.install()
from twisted.internet import reactor

pool = CassandraClusterPool([HOST], keyspace='XXXX', reactor=reactor)
pool.startService()
reactor.run()  # this calls tornado.ioloop.IOLoop.instance().start()
````

That said, I am using MongoDB and mongoengine (non-async) for some personal projects at the moment, and Cassandra+telephus for work projects. I am making a tradeoff in terms of flexible data models versus fixed data models and performance. |
What does the term Stimulus diffusion mean? | the sharing of ideas |
How can I treat double quoted data and single quoted data the same in Python 2.x? I have a hardcoded <strong>metadata</strong> file whose data is wrapped in single quotes, like 'Australia', 'USA'. This metadata is compared against <strong>new data</strong> which can be wrapped in either double quotes like "USA" (problem in comparing) or single quotes (where I have no problem). Also, I cannot compare 'USA' against "USA". Since the new files are large, ~700 MB, I do not want a performance-intensive replacement of the data using the replace function. How can I compare the metadata with the new data? | Try it:

````
import ast

a = 'USA'
b = '"USA"'
if a == ast.literal_eval(b):
    print(a)
# USA
````
 |
Python: Changing the font size of a string variable I have a variable that gets sent to an email as text, but the text is all pretty much a standard size, with everything the same. I would like to add some emphasis to it, as well as make it bigger and make it bold if possible. Here is the code I would like to edit:

````
final_name = "Changes by " + str(name) + " ***"
````

I know it is not much, but I would like to know if it is possible to make a variable string bold and with a slightly bigger font size. | Strings do not have a font size. Strings store sequences of characters (unicode strings) or bytes to be interpreted as characters (byte strings). Font size is an element of presentation and is a function of whatever presentation and rendering system you are using. As you mention an email, you could create a multipart email with an HTML part and format it accordingly in that HTML document. |
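As a sketch of that suggestion (the name value and the inline styling here are made up), the stdlib `email.mime` classes can build such a multipart message with a plain-text fallback and a styled HTML part:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

name = "Alice"  # hypothetical value
final_name = "Changes by " + str(name) + " ***"

msg = MIMEMultipart("alternative")
msg["Subject"] = "Changes report"

# plain-text fallback first, then the HTML part that carries the formatting
msg.attach(MIMEText(final_name, "plain"))
msg.attach(MIMEText(
    '<html><body><p style="font-size:16px"><b>{}</b></p></body></html>'.format(final_name),
    "html"))

print("font-size:16px" in msg.as_string())  # True
```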
installing mysql for python 2.7

````
D:\PythonPack\MySQL-python-1.2.3>python setup.py install
Traceback (most recent call last):
  File "setup.py", line 15, in <module>
    metadata, options = get_config()
  File "D:\PythonPack\MySQL-python-1.2.3\setup_windows.py", line 7, in get_config
    serverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key'])
WindowsError: [Error 2] The system cannot find the file specified
````

I am trying to install mysqldb for python on Windows 7 and run into that error message. Do you have an idea how I can make it work? I already have python setuptools installed. | Just install with the exe from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">here</a>. And maybe you can find more information about this in previously asked questions like <a href="http://stackoverflow.com/questions/645943/mysql-for-python-in-windows">this</a>. |
According to 2015 data, how many visitors gave London its ranking as the number one visited city in the world? | 65 million |
wxpython rearrange bitmaps on resize I have this wxpython code where I am displaying images along with text. I am using a FlexGridSizer for layout management. I want that when I resize the window, the images should reshuffle as per the size, i.e. if there is more room then the columns and rows should expand. Also, I see flicker while regenerating the images. Please suggest if there is another, better way to do this.

````
import wx

ID_MENU_REFRESH = wx.NewId()

# dummy descriptions of images
imglist = ['One', 'Two', 'Three', 'Four', 'Five',
           'Six', 'Seven', 'Eight', 'Nine', 'Ten']

class Example(wx.Frame):
    def __init__(self, *args, **kwargs):
        super(Example, self).__init__(*args, **kwargs)
        mb = wx.MenuBar()
        fMenu = wx.Menu()
        fMenu.Append(ID_MENU_REFRESH, 'Refresh')
        mb.Append(fMenu, '&Action')
        self.SetMenuBar(mb)
        self.Bind(wx.EVT_MENU, self.refreshApps, id=ID_MENU_REFRESH)
        # storing the thumb image in memory
        self.bmp = wx.Image('img/myimg.png', wx.BITMAP_TYPE_PNG).ConvertToBitmap()
        self.panelOne = wx.Panel(self)
        sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.panelOne.SetSizer(sizer)
        self.panelOne.Layout()
        self.myimg_holder = {}
        self.showThumb(imglist)
        self.SetTitle('Example')
        self.Centre()
        self.Show(True)

    def refreshApps(self, event=None):
        # remove
        self.showThumb()
        # repaint
        self.showThumb(imglist)

    def showThumb(self, thumblist=None):
        if not thumblist:
            for child in self.panelOne.GetChildren():
                child.Destroy()
            self.myimg_holder.clear()
            self.panelOne.Layout()
            return
        vbox = wx.BoxSizer(wx.VERTICAL)
        hbox = wx.BoxSizer()
        gs = wx.FlexGridSizer(8, 6, 10, 20)
        # blank text holder for padding
        gs.Add(wx.StaticText(self.panelOne), flag=wx.ALL | wx.EXPAND, border=2)
        vzis = []
        for num, app in enumerate(thumblist):
            vzis.append(wx.BoxSizer(wx.VERTICAL))
            appid = wx.StaticBitmap(self.panelOne, wx.ID_ANY, self.bmp, (5, 5),
                                    (self.bmp.GetWidth() + 5, self.bmp.GetHeight()),
                                    name=app.strip())
            vzis[num].Add(appid, 0, wx.ALIGN_CENTER)
            self.myimg_holder[appid] = app
            vzis[num].Add(wx.StaticText(self.panelOne, -1, app), 0,
                          wx.ALIGN_CENTER, border=1)
        for i in range(len(thumblist)):
            if i in [4, 8, 12]:
                gs.Add(wx.StaticText(self.panelOne), flag=wx.ALL, border=2)
                gs.Add(wx.StaticText(self.panelOne), flag=wx.ALL, border=2)
                gs.Add(vzis[i], flag=wx.ALL, border=1)
            else:
                gs.Add(vzis[i], flag=wx.ALL, border=1)
        vbox.Add(wx.StaticText(self.panelOne), flag=wx.ALL, border=4)
        vbox.Add(gs, proportion=1, flag=wx.ALL)
        vbox.Add(wx.StaticText(self.panelOne), flag=wx.ALL, border=4)
        self.panelOne.SetSizer(vbox)
        self.panelOne.Layout()

def main():
    ex = wx.App()
    frame = Example(None)
    frame.Show()
    ex.MainLoop()

if __name__ == '__main__':
    main()
````
 | If you want the images to rearrange themselves, then you do not want to use the FlexGridSizer. What you want is the WrapSizer: - <a href="http://wxpython.org/Phoenix/docs/html/WrapSizer.html" rel="nofollow">http://wxpython.org/Phoenix/docs/html/WrapSizer.html</a> - <a href="http://www.blog.pythonlibrary.org/2014/01/22/wxpython-wrap-widgets-with-wrapsizer/" rel="nofollow">http://www.blog.pythonlibrary.org/2014/01/22/wxpython-wrap-widgets-with-wrapsizer/</a> If you want the bitmaps themselves to resize while you are resizing the frame, then you will have to do that yourself: the sizers do not do that automatically. They do resize normal widgets, but not images. You would probably need to catch EVT_SIZE and resize the bitmaps as appropriate if you wanted to go that route. To reduce flicker, you will probably want to take a look at the Freeze and Thaw methods. |
What artist issued the album known as "the first true Adult Contemporary album of the decade"? | Linda Ronstadt |
Django Flatpages: "Site matching query does not exist" I am a noob at Django. I have created a flatpages object in my admin console; when I visit the url, which I run on a local server, I get a "Site matching query does not exist" error. Can someone help me?

````
urlpatterns = [
    url(r'^', include('main.urls')),
    url(r'^home/', include('django.contrib.flatpages.urls')),
    url(r'^admin/', admin.site.urls),
]
````

Here is the error log:

````
Traceback:
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/core/handlers/exception.py" in inner
  39. response = get_response(request)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/core/handlers/base.py" in _get_response
  187. response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/core/handlers/base.py" in _get_response
  185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/contrib/flatpages/views.py" in flatpage
  35. site_id = get_current_site(request).id
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/contrib/sites/shortcuts.py" in get_current_site
  15. return Site.objects.get_current(request)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/contrib/sites/models.py" in get_current
  67. return self._get_site_by_request(request)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/contrib/sites/models.py" in _get_site_by_request
  44. SITE_CACHE[host] = self.get(domain__iexact=host)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/db/models/manager.py" in manager_method
  85. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.10-py2.7.egg/django/db/models/query.py" in get
  385. self.model._meta.object_name

Exception Type: DoesNotExist at /default/
Exception Value: Site matching query does not exist
````
 | Perhaps you have not defined <a href="https://docs.djangoproject.com/en/1.10/ref/settings/#std:setting-SITE_ID" rel="nofollow">SITE_ID</a> in settings. |
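A minimal sketch of that fix, assuming the default site created by the sites framework still has id 1 in the `django_site` table:

```python
# settings.py
SITE_ID = 1  # must match the id of a row in the django_site table
```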
Python equivalent of R's head and tail functions I want to preview a Pandas dataframe. I would use head(mymatrix) in R, but I do not know how to do this in Pandas/Python. When I type df.head(10) I get:

````
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 9
Data columns (total 14 columns):
#Book_Date            10 non-null values
Item_Qty              10 non-null values
Item_id               10 non-null values
Location_id           10 non-null values
MFG_Discount          10 non-null values
Sale_Revenue          10 non-null values
Sales_Flg             10 non-null values
Sell_Unit_Cost        5 non-null values
Store_Discount        10 non-null values
Transaction_Id        10 non-null values
Unit_Cost_Amt         10 non-null values
Unit_Received_Cost    5 non-null values
Unnamed: 0            10 non-null values
Weight                10 non-null values
````
 | Suppose you want to output the first and last 10 rows of the iris data set. In R:

````
data(iris)
head(iris, 10)
tail(iris, 10)
````

In Python (scikit-learn required to load the iris data set):

````
import pandas as pd
from sklearn import datasets
iris = pd.DataFrame(datasets.load_iris().data)
iris.head(10)
iris.tail(10)
````

Now, as <a href="http://stackoverflow.com/questions/13085709/df-head-sometimes-doesnt-work-in-pandas-python">previously answered</a>, if your data frame is too large for the display you use in the terminal, a summary is output. To visualize your data in a terminal, you could either expand the terminal or reduce the number of columns to display, as follows:

````
iris.ix[:, 1:2].head(10)
````
 |
Repeat a while loop

````
years = int(input("How many years?: "))
i = 0
temperaturer = {}
monthnumber = 1
nummer = 1
while i <= years:
    print("Which is " + str(nummer) + ":a year?: ")
    for i in range(0, 13):
        temp = input("Month " + str(monthnumber) + ": ")
        monthnumber += 1
        if monthnumber == 13:
            break
        temperaturer.append(temp)
````

Is there a simple way to make this repeat itself as many times as asked in "How many years?" at the top? |

````
years = int(input("How many years?: "))
numeral = {1: 'first', 2: 'second'}  # and so on
data = {}
for year in range(1, years + 1):
    cur_year = input("Which is the " + numeral[year] + " year?: ")
    data[cur_year] = {}
    for month in range(1, 13):
        d = input("Month " + str(month) + ": ")
        data[cur_year][month] = d

print data
{2012: {1: 22, 2: 1, 3: 42, 4: 22, 5: 3, 6: 22, 7: 11, 8: 23, 9: 42, 10: 1, 11: 223, 12: 23},
 2018: {1: 23, 2: 2, 3: 4, 4: 1, 5: 52, 6: 235, 7: 2, 8: 52, 9: 25, 10: 25, 11: 25, 12: 25}}
````
 |
global name error in python I have a `searchengine.py` file and I also created an index for this `searchengine.py`:

````
import sqlite3
import urllib2
from bs4 import BeautifulSoup
from urlparse import urljoin

# Create a list of words to ignore
ignorewords = set(['the', 'of', 'to', 'and', 'a', 'in', 'is', 'it'])

class crawler:
    # Initialize the crawler with the name of the database
    def __init__(self, dbname):
        self.con = sqlite3.connect(dbname)

    def __del__(self):
        self.con.close()

    def dbcommit(self):
        pass

    # Auxilliary function for getting an entry id and
    # adding it if not present
    def getentryid(self, table, field, value, createnew=True):
        cur = self.con.execute(
            "select rowid from %s where %s='%s'" % (table, field, value))
        res = cur.fetchone()
        if res == None:
            cur = self.con.execute(
                "insert into %s (%s) values ('%s')" % (table, field, value))
            return cur.lastrowid
        else:
            return res[0]

    # Index an individual page
    def addtoindex(self, url, soup):
        if self.isindexed(url):
            return
        print 'Indexing %s' % url
        # Get the individual words
        text = self.gettextonly(soup)
        words = self.separatewords(text)
        # Get the URL id
        urlid = self.getentryid('urllist', 'url', url)
        # Link each word to this url
        for i in range(len(words)):
            word = words[i]
            if word in ignorewords:
                continue
            wordid = self.getentryid('wordlist', 'word', word)
            self.con.execute("insert into wordlocation(urlid, wordid, location) \
                values (%d, %d, %d)" % (urlid, wordid, i))

    # Extract the text from an HTML page (no tags)
    def gettextonly(self, soup):
        v = soup.string
        if v == None:
            c = soup.contents
            resulttext = ''
            for t in c:
                subtext = self.gettextonly(t)
                resulttext += subtext + '\n'
            return resulttext
        else:
            return v.strip()

    # Separate the words by any non-whitespace character
    def separatewords(self, text):
        splitter = re.compile('\\W*')
        return [s.lower() for s in splitter.split(text) if s != '']

    # Return true if this url is already indexed
    def isindexed(self, url):
        u = self.con.execute(
            "select rowid from urllist where url='%s'" % url).fetchone()
        if u != None:
            # Check if it has actually been crawled
            v = self.con.execute(
                'select * from wordlocation where urlid=%d' % u[0]).fetchone()
            if v != None:
                return True
        return False

    # Add a link between two pages
    def addlinkref(self, urlFrom, urlTo, linkText):
        pass

    # Starting with a list of pages, do a breadth first search to
    # the given depth, indexing pages as we go
    def crawl(self, pages, depth=2):
        pass

    # Create the database tables
    def createindextables(self):
        pass

    def crawl(self, pages, depth=2):
        for i in range(depth):
            newpages = set()
            for page in pages:
                try:
                    c = urllib2.urlopen(page)
                except:
                    print "Could not open %s" % page
                    continue
                soup = BeautifulSoup(c.read())
                self.addtoindex(page, soup)
                links = soup('a')
                for link in links:
                    if ('href' in dict(link.attrs)):
                        url = urljoin(page, link['href'])
                        if url.find("'") != -1:
                            continue
                        url = url.split('#')[0]  # remove location portion
                        if url[0:4] == 'http' and not self.isindexed(url):
                            newpages.add(url)
                        linkText = self.gettextonly(link)
                        self.addlinkref(page, url, linkText)
                self.dbcommit()
            pages = newpages

    # Creating index tables
    def createindextables(self):
        self.con.execute('create table urllist(url)')
        self.con.execute('create table wordlist(word)')
        self.con.execute('create table wordlocation(urlid, wordid, location)')
        self.con.execute('create table link(fromid integer, toid integer)')
        self.con.execute('create table linkwords(wordid, linkid)')
        self.con.execute('create index wordid on wordlist(word)')
        self.con.execute('create index urlid on urllist(url)')
        self.con.execute('create index wordurlidx on wordlocation(wordid)')
        self.con.execute('create index urltoidx on link(toid)')
        self.con.execute('create index urlfromidx on link(fromid)')
        self.dbcommit()
````

I created the index - searchindex.db - using the python shell:

````
>>> reload(searchengine)
>>> crawler = searchengine.crawler('searchindex.db')
>>> crawler.createindextables()
````

I tried using it like this, but it is raising an error:

````
>>> reload(searchengine)
>>> crawler = searchengine.crawler('searchindex.db')
>>> pages = ['http://kiwitobes.com/wiki/Categorical_list_of_programming_languages.html']
>>> crawler.crawl(pages)
Indexing http://www.tartarus.org/~martin/PorterStemmer/index.html
Traceback (most recent call last):
  File "<pyshell#22>", line 1, in <module>
    crawler.crawl(pages)
  File "C:/Users/dj/Desktop\searchengine.py", line 103, in crawl
    self.addtoindex(page, soup)
  File "C:/Users/dj/Desktop\searchengine.py", line 38, in addtoindex
    words = self.separatewords(text)
  File "C:/Users/dj/Desktop\searchengine.py", line 68, in separatewords
    splitter = re.compile('\\W*')
NameError: global name 're' is not defined
````

python version: 2.7; OS: Windows 8 | You use the `re` module in your code:

````
def separatewords(self, text):
    splitter = re.compile('\\W*')  # here --^
    return [s.lower() for s in splitter.split(text) if s != '']
````

But not once did I see you have:

````
import re
````

which loads the `re` module into memory. Trying to use a module that is not loaded into memory generates a `NameError`. So, to fix your problem, simply add `import re` to the top of your script with all your other imports. |
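To illustrate the fix, here is a self-contained variant of `separatewords` with the missing import added. One hedge: a pattern like `\W*` can split on empty matches in Python 3.7+, so this sketch uses `\W+` instead of the original pattern:

```python
import re  # the import the traceback says is missing

def separatewords(text):
    # split on runs of non-word characters and lowercase the results
    splitter = re.compile(r'\W+')
    return [s.lower() for s in splitter.split(text) if s != '']

words = separatewords("Hello, World of Python!")
print(words)  # ['hello', 'world', 'of', 'python']
```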
Eliminating Words Based On Letters I have a dictionary and an alphabet:

````
import string
alphabet = list(string.ascii_lowercase)
dictionary = [line.rstrip('\n') for line in open("dictionary.txt")]
````

In a function I remove a letter from the alphabet:

````
alphabet.remove(letter)
````

Now I want to filter through the dictionary to eliminate words if they contain a letter not in the alphabet. I tried for loops:

````
for term in dictionary:
    for char in term:
        print term, char
        if char not in alphabet:
            dictionary.remove(term)
            break
````

However, this skips over certain words. I tried filter:

````
dictionary = filter(term for term in dictionary for char in term if char not in alphabet)
````

But I get the error:

````
SyntaxError: Generator expression must be parenthesized if not sole argument
````
 | You do not want to modify a list (or really any container) while you are iterating over it. This can result in errors where it seems like some items are being skipped. If you make a copy (`dictionary[:]`) it should work out:

````
for term in dictionary[:]:
    for char in term:
        print term, char
        if char not in alphabet:
            dictionary.remove(term)
            break
````

We can probably do better here too:

````
alphabet_set = set(alphabet)  # set membership testing is faster than string/list
new_dictionary = [
    term for term in dictionary
    if all(c in alphabet_set for c in term)]
````

<sup><sup>Also, it is probably wise to avoid the name `dictionary` for a `list` instance, since `dict` is actually a builtin type.</sup></sup> |
How to read multiple records from a CSV file? I have a csv file `l__cyc.csv` that contains this:

````
trip_id, time, O_lat, O_lng, D_lat, D_lng
130041910101, 1300, 51.5841153671, 0.134444590094, 51.5718053872, 0.134878021928
130041910102, 1335, 51.5718053872, 0.134878021928, 51.5786920389, 0.180940040247
130041910103, 1600, 51.5786920389, 0.180940040247, 51.5841153671, 0.134444590094
130043110201, 1500, 51.5712712038, 0.138532882664, 51.5334949484, 0.130489470325
130043110202, 1730, 51.5334949484, 0.130489470325, 51.5712712038, 0.138532882664
````

And I am trying to pull out separate values using:

````
with open('./l__cyc.csv', 'rU') as csvfile:
    reader = csv.DictReader(csvfile)
    origincoords = ['{O_lat} {O_lng}'.format(**row) for row in reader]

with open('./l__cyc.csv', 'rU') as csvfile:
    reader = csv.DictReader(csvfile)
    trip_id = ['{trip_id}'.format(**row) for row in reader]

with open('./l__cyc.csv', 'rU') as csvfile:
    reader = csv.DictReader(csvfile)
    destinationcoords = ['{D_lat} {D_lng}'.format(**row) for row in reader]
````

Where `origincoords` should be `51.5841153671 0.134444590094`, `trip_id` should be `130041910101`, and `destinationcoords` should be `51.5718053872 0.134878021928`. However, I get a `KeyError`:

````
KeyError: 'O_lat'
````

Is this something simple, and is there something fundamental I am misunderstanding? | You just need to avoid the spaces between the headers:

````
trip_id,time,O_lat,O_lng,D_lat,D_lng
````

<strong>OR</strong>

````
reader = csv.DictReader(csvfile, skipinitialspace=True)
````
 |
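A self-contained sketch of the `skipinitialspace=True` fix (the data here is just the first row from the question, fed in via `io.StringIO` instead of a file):

```python
import csv
import io

# header fields are padded with spaces, exactly the situation in the question
raw = ("trip_id, time, O_lat, O_lng, D_lat, D_lng\n"
       "130041910101, 1300, 51.5841153671, 0.134444590094, "
       "51.5718053872, 0.134878021928\n")

# skipinitialspace strips the space after each delimiter, so the keys
# become 'O_lat' instead of ' O_lat'
reader = csv.DictReader(io.StringIO(raw), skipinitialspace=True)
row = next(reader)
origincoords = '{O_lat} {O_lng}'.format(**row)
print(origincoords)  # 51.5841153671 0.134444590094
```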
Where is the final resting place of Tito? | Belgrade |
searching an item in a multidimensional array in python I have this multidimensional array in python:

````
hello = [(['b', 'y', 'e'], 3), (['h', 'e', 'l', 'l', 'o'], 5), (['w', 'o', 'r', 'l', 'd'], 5)]
````

and I wanted to find the index of the number 3. I tried using hello.index(3) but it will not work. Any solutions? | Try like this:

````
>>> [i for i in hello if i[1] == 3]
[(['b', 'y', 'e'], 3)]
````
 |
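If it is the position (rather than the matching element) that is needed, `enumerate` gives that as well — a small addition beyond the original answer:

```python
hello = [(['b', 'y', 'e'], 3), (['h', 'e', 'l', 'l', 'o'], 5), (['w', 'o', 'r', 'l', 'd'], 5)]

# positions of every tuple whose second element is 3
matches = [idx for idx, (letters, count) in enumerate(hello) if count == 3]
print(matches)  # [0]
```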
Filter on the proxy model by a field of an inherited model of the proxy model's parent I am using `Django 1.6` and model inheritance. The title might be confusing; here is the explanation:

````
class ParentModel(models.Model):
    class Meta:
        db_table = "parent_model"
    my_field = ...

class ProxyModelOfParentModel(ParentModel):
    class Meta:
        proxy = True
    objects = CustomManager()

class InheritedModel(ParentModel):
    class Meta:
        db_table = "inherited_model"
    my_extra_field = ...
````

Assume that these are our models. When I try to filter by <strong>my_extra_field</strong> on ParentModel, I would do it like this:

````
ParentModel.objects.filter(inheritedmodel__my_extra_field='test')
````

But I want to filter on the proxy model like:

````
ProxyModelOfParentModel.objects.filter(inheritedmodel__my_extra_field='test')
````

When I run this, it will not be able to find the field <strong>inheritedmodel</strong> in the proxy model. This could also be a bug in Django, I do not know. Somehow django does not build the query set properly when I try to filter on the proxy model. The reason I am using a proxy model instead of the parent model is using proxy models in different admins. When I put the key in the admin's `list_filter`, I met the `FieldDoesNotExists` error. Is there a way to filter as I mention? Thank you!
 | As I mentioned, this is because of the django implementation of proxy model initialization. I do not know if this is a bug or not, but I needed to fix it somehow. I found that it was caused by a part of the model _meta initialization. This part in `django/db/models/options.py` — the part which I commented out — was causing the problem:

````
def _fill_related_objects_cache(self):
    cache = SortedDict()
    parent_list = self.get_parent_list()
    for parent in self.parents:
        for obj, model in parent._meta.get_all_related_objects_with_model(include_hidden=True):
            # THIS PART WAS CAUSING THE PROBLEM
            # if (obj.field.creation_counter < 0 or obj.field.rel.parent_link) and obj.model not in parent_list:
            #     continue
            if not model:
                cache[obj] = parent
            else:
                cache[obj] = model
    # Collect also objects which are in relation to some proxy child/parent of self.
    proxy_cache = cache.copy()
    for klass in get_models(include_auto_created=True, only_installed=False):
        if not klass._meta.swapped:
            for f in klass._meta.local_fields:
                if f.rel and not isinstance(f.rel.to, six.string_types) and f.generate_reverse_relation:
                    if self == f.rel.to._meta:
                        cache[f.related] = None
                        proxy_cache[f.related] = None
                    elif self.concrete_model == f.rel.to._meta.concrete_model:
                        proxy_cache[f.related] = None
    self._related_objects_cache = cache
    self._related_objects_proxy_cache = proxy_cache
````

I just overrode the Options class and the metaclass of my parent model, instead of overriding Django itself, like:

````
class CustomProxyModelOptions(Options):
    def _fill_related_objects_cache(self):
        cache = SortedDict()
        parent_list = self.get_parent_list()
        for parent in self.parents:
            for obj, model in parent._meta.get_all_related_objects_with_model(include_hidden=True):
                if not model:
                    cache[obj] = parent
                else:
                    cache[obj] = model
        # Collect also objects which are in relation to some proxy child/parent of self.
        proxy_cache = cache.copy()
        for klass in get_models(include_auto_created=True, only_installed=False):
            if not klass._meta.swapped:
                for f in klass._meta.local_fields:
                    if f.rel and not isinstance(f.rel.to, six.string_types) and f.generate_reverse_relation:
                        if self == f.rel.to._meta:
                            cache[f.related] = None
                            proxy_cache[f.related] = None
                        elif self.concrete_model == f.rel.to._meta.concrete_model:
                            proxy_cache[f.related] = None
        self._related_objects_cache = cache
        self._related_objects_proxy_cache = proxy_cache


class ProxyModelMeta(ModelBase):
    def __new__(cls, *args, **kwargs):
        model = super(ProxyModelMeta, cls).__new__(cls, *args, **kwargs)
        model._meta.__class__ = CustomProxyModelOptions
        return model


class ParentModel(models.Model):
    class Meta:
        db_table = "parent_model"
    my_field = ...


class ProxyModelOfParentModel(ParentModel):
    __metaclass__ = ProxyModelMeta

    class Meta:
        proxy = True

    objects = CustomManager()


class InheritedModel(ParentModel):
    class Meta:
        db_table = "inherited_model"
    my_extra_field = ...
````

Now I can filter:

````
ProxyModelOfParentModel.objects.filter(inheritedmodel__my_extra_field='test')
````
 |
The nation of Congo shortens university to what? | null |
I keep getting an attribute error: 'int' object has no attribute 'dollars' I am working on this Money class and everything worked fine up until the multiplication. I keep getting an attribute error and cannot figure out where I am going wrong. The multiplication is of type float.

````
class Money:
    def __init__(self, d, c):
        self.dollars = d
        self.cents = c

    def __str__(self):
        return '${}.{:02d}'.format(self.dollars, self.cents)

    def __repr__(self):
        return 'Money({}, {})'.format(repr(self.dollars), self.cents)

    def add(self, other):
        d = self.dollars + other.dollars
        c = self.cents + other.cents
        while c > 99:
            d += 1
            c -= 100
        return Money(d, c)

    def subtract(self, other):
        d = self.dollars - other.dollars
        c = self.cents - other.cents
        while c < 0:
            d -= 1
            c += 100
        return Money(d, c)

    def times(self, mult):
        d = self.dollars * mult.dollars
        c = self.cents * mult.cents
        while c > 99:
            d *= 1
            c *= 100
        return Money(d, c)

>>> m2 = Money(10, 10)
>>> m2.times(3)
Traceback (most recent call last):
  File "<pyshell#51>", line 1, in <module>
    m2.times(3)
  File "/Users/kylerbolden/Desktop/hw2.py", line 67, in times
    d = float(self.dollars) * float(mult.dollars)
AttributeError: 'int' object has no attribute 'dollars'
````
 | In `m2.times(3)` you are passing the `int` `3` to the `times` method. In the times method, though, you are trying to multiply by `mult.dollars` and not by the `dollars` (`3`) that you actually passed. `mult.dollars` does not work like `self.dollars` would; in fact, it is not a valid construct at all. Try:

````
>>> class Money:
        def __init__(self, d, c):
            self.dollars = d
            self.cents = c

        def times(self, mult):
            d = self.dollars * mult
            c = self.cents * mult
            while c > 99:
                d *= 1
                c *= 100
            return Money(d, c)
````

You will obviously have to modify the rest of your code as well. It seems you want to return a new `Money` object instead of a balance with each of these methods, but to demonstrate the point I made above:

````
>>> class Money:
        def __init__(self, d, c):
            self.dollars = d
            self.cents = c

        def times(self, mult):
            d = self.dollars * mult
            c = self.cents * mult
            while c > 99:
                d *= 1
                c *= 100
            return (d, c)

>>> m2 = Money(10, 10)
>>> m2.times(3)
(30, 30)
````

Edit: Okay, the above does not seem to be what you are looking for, but I will leave it for people running into a similar error. What you need to fix in your code is the `mult` object that you are trying to pass. Your `add` and `subtract` methods both have the same parameters: `self` and `other`, where `other` is another instance of the `Money` class, I presume. So you are trying to multiply, add or subtract different balances, basically? In that case, change `mult.dollars` and `mult.cents` to `other.dollars` and `other.cents` so that you can access those attributes of another `Money` object. After changing that:

````
>>> class Money:
        def __init__(self, d, c):
            self.dollars = d
            self.cents = c

        def times(self, other):
            d = self.dollars * other.dollars
            c = self.cents * other.cents
            while c > 99:
                d *= 1
                c *= 100
            return Money(d, c)

>>> m2 = Money(2, 3)
>>> m3 = Money(4, 5)
>>> m2.times(m3)
Money(8, 15)
````

Also, you might want to look into the `d *= 1` and `c *= 100` lines, but that should answer your initial question. |
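For reference, a small sketch of the carry logic the answer alludes to with the `d *= 1` / `c *= 100` lines: using `+=`/`-=` makes extra cents actually roll over into dollars (this keeps the question's cents-times-cents semantics, which is itself a simplification, not real currency arithmetic):

```python
class Money:
    def __init__(self, d, c):
        self.dollars = d
        self.cents = c

    def __repr__(self):
        return 'Money({}, {})'.format(self.dollars, self.cents)

    def times(self, other):
        d = self.dollars * other.dollars
        c = self.cents * other.cents
        while c > 99:
            d += 1    # carry one dollar...
            c -= 100  # ...for every full 100 cents
        return Money(d, c)

print(Money(2, 60).times(Money(3, 2)))  # 60 * 2 = 120 cents -> Money(7, 20)
```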
using the Frame() class to create frames I am quite new to tkinter and I wonder how I should create frames. If one is preferred over the other, why is it so? Is it better like this:

````
import tkinter as tk

class MainApplication(tk.Frame):
    def __init__(self, parent):
        main_frame = tk.Frame(parent)

root = tk.Tk()
main_app = MainApplication(root)
root.mainloop()
````

or like this:

````
import tkinter as tk

class MainApplication(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self, parent)

root = tk.Tk()
main_app = MainApplication(root)
root.mainloop()
````
 | The first one creates <em>two</em> frames -- the instance itself (`main_app`) is a frame, and it contains a child frame (`main_frame`). Though, because you do not call the `__init__` of `Frame`, the first frame is not properly constructed. If you are immediately going to create an internal frame and put everything inside it, it is pointless to inherit from `Frame`. By the way, your code does not quite work: if you want to actually see the application widget (and its children), then you are going to need to call `pack`, `place` or `grid` on the widget. For example:

````
root = tk.Tk()
main_app = MainApplication(root)
main_app.pack(fill="both", expand=True)
root.mainloop()
````
 |
soundcloud-python: uploading a track is extremely slow I am uploading tracks to SoundCloud using the soundcloud-python api wrapper (<a href="https://github.com/soundcloud/soundcloud-python">https://github.com/soundcloud/soundcloud-python</a>). Everything is working correctly, but uploading a track to SoundCloud takes about 1 minute per megabyte (e.g. invoking client.post('/tracks', track=data) with a 45 MB mp3 file is taking about 45 minutes to upload). I have verified that I can push data at Gbps speed from this server (it is a worker dyno on Heroku) to other servers over the internet. Does SoundCloud throttle the API for uploading data, or are there settings I can tweak that will help improve the speed? Here is sample code:

````
import soundcloud

client = soundcloud.Client(access_token='OAUTH2_ACCESS_TOKEN')
res = client.post('/tracks', track={
    'title': 'my title',
    'asset_data': open('file.mp3', 'rb'),
})
````
 | Is it possible that your code is sending packets that are being selectively filtered out by your router? As Niklas B mentioned above, the remote host could be limiting the rate. Have you contacted Soundcloud? Try uploading a file from your server through the standard Soundcloud interface and clocking that transfer rate. Also, found this <a href="http://stackoverflow.com/questions/9668311/soundcloud-api-throttling">question</a> asked Mar 12 at 13:52 by user1264242. |
In 2011 Liberia's exports were considered what? | null |
SQLAlchemy: delete related element depending on a flag I have a new question about SQLAlchemy, and I broke my brain while trying to find a good solution. So, I have the following tables:

````
import sqlalchemy.orm.session
# other import statements

Session = sqlalchemy.orm.session.Session

class Tempable(Base):
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, unique=True)
    temporary = Column(Boolean, nullable=False)

class Generic(Base):
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, unique=True)
    tempable_id = Column(Integer, ForeignKey(Tempable.id))
````

The `Tempable` table has a field named `temporary`. When this field is True, then only one `Generic` can relate to this `Tempable` table row, and when the related `Generic` row is deleted, the `Tempable` must also be deleted. Otherwise, many `Generic` rows can be connected with a `Tempable`, and removing one of them does not affect the `Tempable`. After some research I figured out that the most convenient way to do this is using events. The code expanded to the following:

````
class Generic(Base):
    # ...
    def before_delete(self, session):
        """:type session: Session"""
        condition = and_(Tempable.id == self.tempable_id, Tempable.temporary == 1)
        # I have tried to use bulk session deletion:
        #     session.query(Tempable).filter(condition).delete()
        # but if the Tempable table has relationships, then the related objects
        # are not deleted. I do not understand such behaviour.
        # But this works fine:
        for obj in session.query(Tempable).filter(condition):
            session.delete(obj)

@event.listens_for(Session, 'before_flush')
def _database_flush(session, flush_context, instances):
    for p_object in session.deleted:
        if hasattr(p_object, "before_delete"):
            p_object.before_delete(session)
    for p_object in session.dirty:
        if hasattr(p_object, "before_update"):
            p_object.before_update(session)
    for p_object in session.new:
        if hasattr(p_object, "before_insert"):
            p_object.before_insert(session)
````

But some troubles occurred. When a Generic object is deleted, the corresponding GUI must also be updated. For this purpose the `deleted` property of the Session object can be used. But there is a problem for me: the deleted `Tempable` row does not appear in this property's list.

````
class Database(object):
    # ...
    def remove(self, name):
        # ...
        # before commit I need to obtain a list of all objects that will be
        # deleted; that is required to update the GUI views
        try:
            this = self.__session.query(orm.Generic).filter(orm.Generic.name == name).one()
        except orm.NoResultFound:
            pass
        else:
            logging.info("Remove object: %s" % this)
            self.__session.delete(this)
        deleted = [obj for obj in self.__session.deleted]
        # At this point the list of deleted objects, of course, does not
        # contain any Tempable objects
        print(deleted)
        self.__session.commit()
        # And here the list is empty
        print([obj for obj in self.__session.deleted])
        return deleted
````

So the question: what is the right way to obtain the deleted objects, or maybe the whole approach is totally wrong?
| The bulk delete system does not handle your relationships because it emits a single DELETE statement for all rows without attempting to load and reconcile what those rows refer to This is the first "Caveat" listed in <a href="http://docs sqlalchemy org/en/latest/orm/query html?highlight=delete#sqlalchemy orm query Query delete" rel="nofollow">the documentation for query delete()</a>: <blockquote> The method does not offer in-Python cascading of relationships - it is assumed that ON DELETE CASCADE/SET NULL/etc is configured for any foreign key references which require it otherwise the database may emit an integrity violation if foreign key references are being enforced </blockquote> as far as "session deleted" that list is only relevant before the flush occurs commit() implies flush() and after that flush all of session new session dirty session deleted are cleared You would need to either copy session deleted before it is cleared to another list or probably more robust is to gather up session deleted within before_flush() as you are doing and copy those objects which you care about to another list perhaps within session info; e g `session info['deleted_things'] = my_list_of_objects` |
Why is this threaded code not running concurrently? I am attempting to thread some gui code to get rid of the lag in a combo box The option the user selects does a bit of web scraping in the background and depending on which option they select the lag can sometimes be substantial as the program seeks the data it needs For the <a href="http://docs python org/library/threading html#thread-objects" rel="nofollow">docs</a> it seems that to start a thread I only need call `threading Thread` with the appropriate arguments So my attempt of this in code is as follows: ````def EvtComboBox(self event): self statusBox AppendText('Fetching ' event GetString() ' data \n') loadingThread = threading Thread(target=self doStuff(event) name='i_am_a_thread') loadingThread start() threading Thread(target=self checker(loadingThread)) start() def doStuff(self event): self statusBox AppendText(event GetString() ':\n') self assets = self scraper loadConferencePage(event GetString()) self statusBox AppendText(str(self scraper getNumOfNewHandouts(self assets)) ' new handout(s) in Asset Manager\n' '-------------------------\n\n') def checker(self loadingThread): while threading Thread isAlive(loadingThread): print True ```` The behavior I am expecting is that the `loadingthread` starts and begins fetching the data it needs from the web While it is doing that I thought the second thread I create would monitor that first thread and my final assumption was that since those two threads are running "concurrently" the EvtComboBox method would finish and the combobox in the gui would close However none of that happens The `EvtComboBox` method fires and then the code seems to run completely sequentially If I stick a print statement below the two threads (just to see when it gets executed ````def EvtComboBox(self event): self statusBox AppendText('Fetching ' event GetString() ' data \n') loadingThread = threading Thread(target=self doStuff(event) name='i_am_a_thread') loadingThread start() threading 
Thread(target=self checker(loadingThread)) start() print 'EvtComboBox method finished' ```` It only gets executed <them>after</them> the code the threads call gets run so I am unsure why it is behaving in this manner Why is it not executing concurrently? | In the `target` part of `threading.Thread` you should not be calling the function: `target=self.checker(loadingThread)` invokes `checker` immediately in the current thread and hands its return value to `Thread`, which is why everything runs sequentially. Instead give it the function object and an argument tuple, such as `threading.Thread(target=self.checker, args=(loadingThread,)).start()`; note that `args` must be a tuple, so the trailing comma matters.
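The difference between passing a callable and calling it can be demonstrated in a few lines; the append order proves that `target=worker("sync")` runs eagerly in the main thread before any thread starts (the names here are illustrative, not taken from the question):

```python
import threading

order = []

def worker(tag):
    order.append(tag)

# Wrong: worker("sync") is called *here*, in the main thread, and its
# return value (None) is what actually gets handed to Thread as target.
t_wrong = threading.Thread(target=worker("sync"))
order.append("wrong-constructed")

# Right: pass the callable itself; note that args must be a tuple.
t_right = threading.Thread(target=worker, args=("threaded",))
order.append("right-constructed")
t_right.start()
t_right.join()

print(order)  # ['sync', 'wrong-constructed', 'right-constructed', 'threaded']
```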
How to compare with other json objects's value I have a list : `a_list` : ````[{'score': 60 'credit': 5 'class': 'A01' 'subject': 'math'} {'score': 70 'credit': 5 'class': 'A01' 'subject': 'music'} {'score': 65 'credit': 5 'class': 'B01' 'subject': 'science'} {'score': 35 'credit': 5 'class': 'C02' 'subject': 'math'}] ```` And I query(django) from db get a list each `subject` has its own `pass_mark` ````pass_list = ClassData objects values('subject' 'pass_mark') ```` `pass_list`: ````[{'pass_mark': 50 'subject_all': u'math'} {'pass_mark':70 'subject_all': u'science'} {'pass_mark': 70 'subject_all': u'music'}] ```` And I have to compare `a_list` to check the `pass_mark` is higher than `pass_list` So the result list would be: ````[{'score': 60 'credit': 5 'class': 'A01' 'subject': 'math'} {'score': 70 'credit': 5 'class': 'A01' 'subject': 'music'}] ```` Here is my method : ````result_list = [] for a in a_list: check = [x for x in pass_list if x['subject_all'] == a['subject']] if a['score'] >= check[0]['pass_mark']: result_list append(a) print result_list ```` I want to know is there more faster or better method Because the `a_list` will be a large one in the future | Make a dict where the `keys` are subjects and the values are `scores`: ````l = [{'score': 60 'credit': 5 'class': 'A01' 'subject': 'math'} {'score': 70 'credit': 5 'class': 'A01' 'subject': 'music'} {'score': 65 'credit': 5 'class': 'B01' 'subject': 'science'} {'score': 35 'credit': 5 'class': 'C02' 'subject': 'math'}] l2 = [{'pass_mark': 50 'subject_all': u'math'} {'pass_mark': 70 'subject_all': u'science'} {'pass_mark': 70 'subject_all': u'music'}] sub = {dct["subject_all"]: dct["pass_mark"] for dct in l2} result = [dct for dct in l if dct["score"] >= sub[dct["subject"]]] print(result) ```` output: ````[{'credit': 5 'score': 60 'class': 'A01' 'subject': 'math'} {'credit': 5 'score': 70 'class': 'A01' 'subject': 'music'}] ```` You just do one pass
over `l` with constant work elsewhere so the solution is `O(n)` as opposed to your own quadratic approach Ideally if you could create the `sub` dict in the first place it would be the best approach |
requests library and http error I am currently using the python requests library to interact with an external api which uses json Each endpoint works via a method (of the api class) and uses the collect_data method However I want the scraper to continue running whenever it encounters a http error (and ideally output this to a log) What is the best way to do this as currently it just breaks when I use http raise_for_status() It seems like I should be using a try/except in someway but not sure how best to do this here? ````def scrape_full_address(self house_no postcode): address_path = '/api/addresses' address_url = self api_source address_path payload = { 'houseNo': house_no 'postcode': postcode } return self collect_data(url=address_url method='get' payload=payload) def collect_data(self url method payload=None): if method == 'get': data = None params = payload elif method == 'post': params = None data = payload response = getattr(requests method)(url=url params=params json=data headers=self headers) if response status_code == 200: return response json() else: return response raise_for_status() ```` | When you call scrape_full_address() elsewhere in your code wrap that in a try statement For more info see: <a href="https://wiki python org/moin/HandlingExceptions" rel="nofollow">https://wiki python org/moin/HandlingExceptions</a> ````try: scrape_full_address(659 31052) except HTTPError: print "Oops! That caused an error Try again " ```` |
Xpath explanation in code In the company I am working I had to dive in this part of code One of your parses is failing and there is a suspicion that there is a fault in this part of code but I due to my inexperience cannot figure exactly what its supposed to do ````from lxml import etree uni = etree tounicode def results(self): return [(e attrib['href'] uni(e)) for e in self doc xpath('//li[@class="g" and not(class="localbox")]//a[@class="l"]')] ```` | The XPath expression does the following: ````//li[@class="g" and not(class="localbox")]//a[@class="l"] ^ ^ ^ ^ ^ 1 2 3 4 5 ```` - find all occurences of `<li>` elements - that have an attribute named `class` with value `g` (example `<li class="g">`) - that do not have a subelement `class` with string-value `localbox` (will explain this later) - afterwards it finds all `<a>` elements "inside" those `<li>` elements - that have an attribute name `class` with value `1` (example `<a class="1">`) The fun part is 3 Probably there is a `@` missing in front of `class` In that case the statement would have been: 3 that do not have an attribute name `class` with value `localbox` Implicit string-value conversion and comparision of node elements is error prone to say the least I do not think you want something like that Hope it helps |
On what date did World War II start? | null |
Python PySerial with Auto RTS through Half-Duplex RS-485 breakout board using Beaglebone Black Angstrom I am trying to use a Beaglebone Black running Angstrom (3 8 kernel) to communicate with devices on a half-duplex RS-485 network at 9600-N-8-1 I am trying to use an RS-485 Breakout board similar to this one: <a href="https://www sparkfun com/products/10124" rel="nofollow">https://www sparkfun com/products/10124</a> except the chip is a MAX3485 <a href="http://www maximintegrated com/datasheet/index mvp/id/1079" rel="nofollow">http://www maximintegrated com/datasheet/index mvp/id/1079</a> I bought the board pre-assembled with pins and a terminal strip A friend of mine tested it with an oscilloscope and declared that the RS-485 board does work The board has five pins that connect to the BBB 3-5V (Power) RX-I TX-O RTS and GND I have disabled HDMI support on the BBB so that the `UART4_RTSn` and `UART4_CTSn` pins will be available ```` mkdir /mnt/boot mount /dev/mmcblk0p1 /mnt/boot nano /mnt/boot/uEnv txt #change contents of uEnv txt to the following: optargs=quiet capemgr disable_partno=BB-BONELT-HDMI BB-BONELT-HDMIN ```` Then I have found an overlay to enable UART-4 with RTS/CTS control: ```` /* * Modified version of /lib/firmware/BB-UART4-00A0 dtbo to add RTS so we can reset Arduinos */ /dts-v1/; /plugin/; / { compatible = "ti beaglebone" "ti beaglebone-black"; part-number = "BB-UART4-RTS"; version = "00A0"; exclusive-use = "P9 13" "P9 11" "P9 15" "P8 33" "P8 35" "uart4"; fragment@0 { target = <0xdeadbeef>; __overlay__ { pinmux_bb_uart4_pins { pinctrl-single pins = < 0x070 0x26 /* P9_11 = UART4_RXD = GPIO0_30 MODE6 */ 0x074 0x06 /* P9_13 = UART4_TXD = GPIO0_31 MODE6 */ /* need to enable both RTS and CTS if we only turn on RTS then driver gets confused */ 0x0D0 0x26 /* P8_35 = UART4_CTSN = lcd_data12 MODE6 */ 0x0D4 0x06 /* P8_33 = UART4_RTSN = lcd_data13 MODE6 */ /* 0x040 0x0F /* P9_15 = GPIO1_16 = GPIO48 MODE7 failed attempt to put DTR on gpio */ >; linux phandle = 
<0x1>; phandle = <0x1>; }; }; }; fragment@1 { target = <0xdeadbeef>; __overlay__ { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <0x1>; }; }; __symbols__ { bb_uart4_pins = "/fragment@0/__overlay__/pinmux_bb_uart4_pins"; }; __fixups__ { am33xx_pinmux = "/fragment@0:target:0"; uart5 = "/fragment@1:target:0"; /* Not a mistake: UART4 is named uart5 */ }; __local_fixups__ { fixup = "/fragment@1/__overlay__:pinctrl-0:0"; }; }; ```` Compiled and Enabled the overlay: ```` cd /lib/firmware dtc -O dtb -o BB-UART4-RTS-00A0 dtbo -b 0 -@ BB-UART4-RTS-00A0 dts echo BB-UART4-RTS:00A0 > /sys/devices/bone_capemgr */slots ```` Hooked up the 485 board to the BB like this ```` 3-5V to P9_05 (VDD_5V) RX-I to P9_13 (UART4_TXD) TX-O to P9_11 (UART4_RXD) RTS to P8_33 (UART4_RTSn) GND to P9_01 (DGND) ```` In python I am trying to use the serial port like this: ```` import serial ser = serial Serial('/dev/ttyO4' baudrate=9600 rtscts=True) ser write(list_of_byte_dat) ```` I know the program works because when I use a USB to RS-485 converter on `/dev/ttyUSB0` and set `rtscts=False` the communication works in both directions just fine But I cannot get communication to work correctly using the RS-485 board I have two issues with the RS-485 board both deal with RTS - The RTS on the board works backwards from the way I expect it to When I apply voltage on the RTS pin of the rs485 board the RTS led on the board goes off and the board will not transmit When I remove voltage from the RTS pin the RTS led turns on and the board will transmit How do I reverse the polarity of the UART_RTSn pin on the BBB? 
`Temporary solution: I have made a small bone script program that uses UART4_RTSn pin as input It turns on a different GPIO when the UART4_RTSn pin is off and turns off that same GPIO pin when the UART4_RTSn pin is on Then hooked up the RTS pin on the rs485 board to the GPIO pin instead of the UART4_RTSn pin ` This seems to be a poor solution but it does make the RTS on the RS485 board come on at the correct time when echoing to the `/dev/ttyO4` from the command line How can I change the polarity of the `UART4_RTSn` pin either by adjusting the hardware configuration or by changing the configuration in pyserial? This brings me to the second issue - As I stated in problem 1 the `UART4_RTSn` pin will work automatically (but backwards) for me when echoing a value to the tty port like this: ````echo -en '\x02\xFD\xCD ' > /dev/ttyO4 ```` This will make the `UART4_RTSn` led blink while the data is being transmitted If I have it setup without the bonescript mentioned above then it will be on normally and blink off while transmitting If I use my bonescript hack then it will be off normally and blink on while transmitting (which is what I want) However this only works when using echo from the command line When I use python and setup the serial port the `UART4_RTSn` pin becomes inactive It will not blink while transmitting As soon as I make the statement in python: ````ser = serial Serial('/dev/ttyO4' baudrate=9600 rtscts=True) ```` The `UART4_RTSn` pin shuts off and stays off It does not blink when sending information using `ser write(stuff)` As a result the rs485 board is not enabled for transmission How do I get the `UART4_RTSn` pin to work automatically in pyserial? 
I have tried setting `rtscts=False` and it did not work I am able to use `ser setRTS(True)` or `ser setRTS(False)` to manually toggle the pin value so I know I am using the correct pin and that it is being recognized But I do not want to toggle the UART4_RTSn pin directly I want it to work automatically when the serial port is transmitting data and it does when using echo but not in Python Any help would be greatly appreciated | - you can use pnp transistor/ p channel mosfet/ logic gate not - inverter like 7404 - maybe you have to flush write buffer after write operation ser write( ) ser flush() |
django 1 3 admin site problem itself I have got a really strange issue with the admin site When I enter to the site the address is:http://127 0 0 1:8000/admin/ and I see the front-admin-site hmm looks fine But when I click to any "+Add button" or any link on the site I see still the admin site:p but with address f e <a href="http://127 0 0 1:8000/admin/category/category/add/" rel="nofollow">http://127 0 0 1:8000/admin/category/category/add/</a> I can play as many times as I want I see all the time the front-admin-site with a list of my models but with address f e <a href="http://127 0 0 1:8000/admin/category/category/add/category/category/add/category/category/add/category/category/add/" rel="nofollow">http://127 0 0 1:8000/admin/category/category/add/category/category/add/category/category/add/category/category/add/</a> :pp What is wrong? Best regards nykon | It is a problem with your urls py configuration the way to have admin in there is: ````from django contrib import admin admin autodiscover() ```` ```` url(r'^admin/' include(admin site urls)) ```` 99% of the time that is what that is |
Where does pyodbc get its user and pwd from when none are provided in the connection string I inherited a project and am having what seems to be a permissions issue when trying to interact with the database Basically we have a two step process of detach and then delete Does anyone know where the user would come from if the connection string only has driver server and database name EDIT I am on Windows Server 2008 standard EDIT "DRIVER={%s};SERVER=%s;DATABASE=%s;" Where driver is "SQL Server" | Since you are on Windows a few things you should know: - Using the `Driver={SQL Server}` only enables features and data types supported by SQL Server 2000 For features up through 2005 use `{SQL Native Client}` and for features up through 2008 use `{SQL Server Native Client 10 0}` - To view your ODBC connections go to Start and search for "ODBC" and bring up `Data Sources (ODBC)` This will list User System and File DSNs in a GUI You should find the DSN with username and password filled in there |
What are two agents that are commonly believed to cause colophony? | null |
How to remove initial wx RadioBox selection? I have the following code : ````myList =['a' 'b'] rb=wx RadioBox(self panel -1 "Options :" (0 0) wx DefaultSize myList 2 wx RA_SPECIFY_COLS) ```` When it renders first time I see that a choice has been made how can I change the code that when this radibox rendered first time there are no option has been chosen | The use of a <a href="http://www useit com/alertbox/20040927 html" rel="nofollow">radio box</a> implies "there is a list of two or more options that are mutually exclusive and the user must select exactly one choice " The radio box never exists in a state with no choice made If that is not the case then do not use a radio box If you do not want any of the current radio box options to be selected as default add another option for "N/A" or "No choice" |
Method difference between languages (Python->C#) I am trying to reproduce a sequence of code from a Python program in C# In Python I have: ````element1 element2 = struct unpack('!hh' data[2:6]) ```` The above statement unpacks from a "substring" of data in short-short (network byte order) format The values resulted (element1 element2) are: <strong>96</strong> and <strong>16</strong> My attempt in C# is: ````byte[] bytesOfInterval = ASCIIEncoding ASCII GetBytes (data Substring (2 4)); using (MemoryStream stream = new MemoryStream(bytesOfInterval)) { using (BinaryReader reader = new BinaryReader(stream)) { Logger Trace (reader ReadInt16() ToString()); Logger Trace (reader ReadInt16() ToString()); } } ```` It outputs: <strong>24576</strong> and <strong>4096</strong> As you can see the output from the Python program is slightly different from the C# one To verify the "substrings" (input) I have encoded them in hex format to see if there is any difference They were both equal to <strong>00600010</strong> hence the input is the same the output is different <strong><them>Why?</them></strong> Notes: - <them>Substring(2 4)</them> is the equivalent of the "sliced" string in Python <them>[2:6]</them> - Visit <a href="http://docs python org/2/library/struct html#format-characters" rel="nofollow">http://docs python org/2/library/struct html#format-characters</a> for format characters (for pack/unpack in Python) | I think it is an <a href="http://en wikipedia org/wiki/Endianness" rel="nofollow">endianness</a> problem try this for example ````Int16 x1 = 4096; var x2 = IPAddress HostToNetworkOrder(x1); ```` x2 will be <strong>16</strong> (same for <strong>24576</strong> => <strong>96</strong>) So you can use `IPAddress HostToNetworkOrder` method |
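The mismatch can be reproduced entirely on the Python side: decoding the same four bytes in network order (`!`) versus little-endian (`<`, which is what `BinaryReader.ReadInt16()` uses on x86) yields exactly the two result pairs from the question:

```python
import struct

data = b"\x00\x60\x00\x10"  # the hex dump 00600010 from the question

big = struct.unpack("!hh", data)     # network byte order (big-endian)
little = struct.unpack("<hh", data)  # little-endian, as BinaryReader reads

print(big)     # (96, 16)      -- the Python program's output
print(little)  # (24576, 4096) -- the C# program's output
```

So the C# fix is to swap byte order, for example with `IPAddress.HostToNetworkOrder` as the answer suggests.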
Inconsistency of Python plus operator ````In [26]: l=[] In [27]: s="asdsad" In [28]: l+=s In [29]: l Out[29]: ['a' 's' 'd' 's' 'a' 'd'] ```` However ````In [30]: l+s --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/xiaohan/code/proteincrawler/id_crawler py in <module>() ---> 1 2 3 4 5 TypeError: can only concatenate list (not "str") to list ```` So the <them>+</them> operators in '+=' and '+' are different But I think they should be the same because they are all <them>plus</them> Am I wrong or something happens behind the scene? | This is explained <a href="http://old nabble com/Add-vs-in-place-add-of-str-to-list-td19774277 html">here</a> Copied from the link: <blockquote> It is because the `+=` operator is doing the equivalent of calling the `extend` method which treats its argument as a generic sequence and does not enforce type </blockquote>
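The quoted explanation can be verified directly: `l += s` behaves like `l.extend(s)`, which accepts any iterable and so consumes the string character by character, while `l + s` builds a new list and insists both operands are lists:

```python
l = []
s = "asdsad"

l += s          # in-place: equivalent to l.extend(s), iterates the string
print(l)        # ['a', 's', 'd', 's', 'a', 'd']

l2 = []
l2.extend(s)    # same result as +=
print(l2 == l)  # True

try:
    l + s       # concatenation requires a list on both sides
except TypeError as exc:
    print(exc)  # can only concatenate list (not "str") to list
```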
to insert different amounts of silence at different positions in an audio file in python I want to insert different amounts of silence like say for 50 millisec 100 millisec and so on at say 2nd sec of the audio 10th sec of the audio and so on respectively in python My audio file is in wav format I found the following code : ````def silence () file = pickAFile() sound = makeSound(file) samples = getSamples(sound) for sampNum in range(44100*2 44100 * 4): sample = getSampleObjectAt(sound sampNum) setSampleValue(sample 0); play(sound) ```` this is present on the following link : <a href="http://www cs bu edu/courses/cs101b1/slides/CS101 Lect28 Python Audio ppt pdf" rel="nofollow">link</a> This link is using JES framework But i am not able to understand as to how to achieve the functionality using the same piece of code that I want to implement Can Anybody guide me on this ? | An audio sample is just a single number which defines the height of the audio curve at a particular point in time In your code function "silence" loads the audio curve from a file into a datastructure (array(s)) "samples" Your sampNum loop traverses a segment of index values retrieves the sample value for each sample index (getSampleObjectAt) then sets the sample value to 0 (setSampleValue) silence is canonically defined as value 0 Audio typically is sampled 44100 times per second so ignoring multi-channel and bit depth which I assume is handled by getSampleObjectAt this loop retrieves samples from : ````(44100*2 44100 * 4) ```` which means it generates a list of sample indices from 2 seconds to 4 seconds To achieve your goal just calculate the needed range given where your silence starts and ends ````goal A - start at second 2 silence for 50 millisec - start at 2000 millisec until 2050 millisec - from 2000 millisec to 2500 millisec start_sample_index = sample_rate * 2000 / 1000 start_sample_index = 44100 * 2000 / 1000 start_sample_index = 88200 end_sample_index = sample_rate * 2050 / 1000 
end_sample_index = 44100 * 2050 / 1000 end_sample_index = 90405 ```` so in pseudo code (untested) ````set_silence_from_to_in_millisec(start_time end_time sample_rate sound_obj) : start_index = sample_rate * start_time / 1000 end_index = sample_rate * end_time / 1000 for sampNum in range(start_index end_index): sample = getSampleObjectAt(sound_obj sampNum) setSampleValue(sample 0); ```` so for goal A the call would be ````set_silence_from_to_in_millisec(2000 2050 44100 my_sound_obj) ```` |
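The pseudo code above can be turned into a runnable, framework-free sketch; here the JES sample objects are replaced by plain list indexing, and the sample rate and values are toy numbers rather than real audio (an assumption for the sake of a quick check):

```python
def set_silence_ms(samples, start_ms, end_ms, sample_rate):
    """Zero out the samples between start_ms and end_ms (milliseconds)."""
    start_index = sample_rate * start_ms // 1000
    end_index = sample_rate * end_ms // 1000
    for i in range(start_index, end_index):
        samples[i] = 0
    return samples

# a 1 kHz "signal" of constant value 7, two seconds long
sample_rate = 1000
samples = [7] * (2 * sample_rate)

# insert 50 ms of silence starting at 500 ms
set_silence_ms(samples, 500, 550, sample_rate)

print(samples[499], samples[500], samples[549], samples[550])  # 7 0 0 7
```

For 44100 Hz audio the function is called the same way, only `sample_rate` changes.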
Python GUI - Linking one GUI in a class to another class What I am trying to do is to link a GUI from one class in a separate file to another My first class is a main menu which will display a few buttons that will link to another window The second class displays a different window but the problem I am having at the moment is that I do not know how to link the button in the first class to call the second class Here is the code I have so far: First file the main menu: ````from tkinter import * import prac2_link class main: def __init__(self master): frame = Frame(master width=80 height=50) frame pack() self hello = Label(frame text="MAIN MENU") grid() self cont = Button(frame text="Menu option 1" command=prac2_link main2) grid(row=1) root = Tk() application = main(root) root mainloop() ```` second file: ````from tkinter import * class main2: def __init__(self): frame1 = Frame(self width=80 height=50) frame1 pack() self hello = Label(frame1 text="hello its another frame") grid(row=0 column=0) ```` | To create a new window you have to use a `Toplevel` widget You can use it as a superclass for your `main2` class: ````class main2(Toplevel): def __init__(self): Toplevel __init__(self) self frame= Frame(self width=80 height=50) self label = Label(self frame text='this is another frame') self frame grid() self label grid() ```` Then you only have to create an instance in the event handler of the `Button` in the other class: ````class main1: def __init__(self master): # self cont = Button(frame text="Menu option 1" command=self open_main2) grid(row=1) def open_main2(self): prac2_link main2() ```` |
Union-within-structure syntax in ctypes Quick question about ctypes syntax as documentation for Unions is not clear for a beginner like me Say I want to implement an INPUT structure (see <a href="http://msdn microsoft com/en-us/library/ms646270%28v=VS 85%29 aspx">here</a>): ````typedef struct tagINPUT { DWORD type; union { MOUSEINPUT mi; KEYBDINPUT ki; HARDWAREINPUT hi; } ; } INPUT *PINPUT; ```` Should I or do I need to change the following code? ````class INPUTTYPE(Union): _fields_ = [("mi" MOUSEINPUT) ("ki" KEYBDINPUT) ("hi" HARDWAREINPUT)] class INPUT(Structure): _fields_ = [("type" DWORD) (INPUTTYPE)] ```` Not sure I can have an unnamed field for the union but adding a name that is not defined in the Win32API seems dangerous Thanks Mike | Your Structure syntax is not valid: ````AttributeError: '_fields_' must be a sequence of pairs ```` I believe you want to use the <a href="http://docs python org/library/ctypes html#ctypes Structure _anonymous_">anonymous</a> attribute in your ctypes Structure It looks like the ctypes documentation creates a <a href="http://msdn microsoft com/en-us/library/ms221162 aspx">TYPEDESC</a> structure (which is very similar in construction to the tagINPUT) Also note that you will have to define DWORD as a base type for your platform |
solve the determinent using python As a simple example let us say you have this matrix: M = [omega 1; 2 omega]; and you need to solve for the values of omega that satisfy the condition det M = 0 How do you do this in python? | Use `sympy` library You can <a href="http://docs sympy org/0 7 1/tutorial html#linear-algebra" rel="nofollow">create</a> the Matrix Then <a href="http://docs sympy org/0 7 1/modules/matrices html#sympy matrices matrices Matrix det" rel="nofollow">calculate</a> the determinant And then <a href="http://docs sympy org/0 7 1/modules/solvers/solvers html#algebraic-equations" rel="nofollow">solve</a> the equation against `omega` |
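For this particular 2x2 matrix the determinant can also be expanded by hand, det M = w*w - 1*2, so the roots can be checked numerically without any extra library (sympy's `Matrix(...).det()` plus `solve` does the same thing symbolically):

```python
import math

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# det([[w, 1], [2, w]]) = w*w - 2, so det M = 0 at w = +/- sqrt(2)
roots = [math.sqrt(2), -math.sqrt(2)]
for w in roots:
    print(det2(w, 1, 2, w))  # ~0, up to floating-point rounding
```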
When did the North American natives begin farming? | approximately 4,000 years ago |
Display filename being processed by xlrd python No filename attribute of Book class I am trying to do something that I feel should be very straight forward but does not seem to exist as an attribute to the xlrd Book Class While parsing all of the xlsx files in a directory I want to log which errors exist in which file In order to do this I need to print the filename being processed GOAL: Print name of file being processed by xlrd ie: "filename xlsx" in example below Example code: ````Wb = xlrd open_workbook ( " /data/excel_files/filename xlsx" ) print "File being processed is: %s" % Wb name_obj_list[0] name ```` This outputs "_xlnm _FilterDatabase" I want to print "filename xlsx" The documentation of the Book Class does not have a simple way to do this <a href="http://www lexicon net/sjmachin/xlrd html#xlrd Book-class" rel="nofollow">http://www lexicon net/sjmachin/xlrd html#xlrd Book-class</a> Any advice? | Try the simple approach and keep the filename yourself since the `Book` object does not store it: ````from glob import glob for filename in glob('*.xls*'): print 'Processing {}'.format(filename) try: wb = xlrd.open_workbook(filename) except xlrd.XLRDError: print 'Problem processing {}'.format(filename) ````
Stepwise Regression in Python How to perform <strong>stepwise regression</strong> in <strong>python</strong>? There are methods for OLS in SCIPY but I am not able to do stepwise Any help in this regard would be a great help Thanks Edit: I am trying to build a linear regression model I have 5 independent variables and using forward stepwise regression I aim to select variables such that my model has the lowest p-value Following link explains the objective: <a href="https://www google co in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness fullerton edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv 47244034 d bmk">https://www google co in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness fullerton edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv 47244034 d bmk</a> Thanks again | Statsmodels has additional methods for regression: <a href="http://statsmodels sourceforge net/devel/examples/generated/example_ols html" rel="nofollow">http://statsmodels sourceforge net/devel/examples/generated/example_ols html</a> I think it will help you to implement stepwise regression |
KL divergence of continuous pdfs Say I have two pdfs e g : ````from scipy import stats pdf_y = stats beta(5 9) pdf pdf_x = stats beta(9 5) pdf ```` I would like to compute their <a href="http://en wikipedia org/wiki/KL_divergence" rel="nofollow">KL divergence</a> Before I reinvent the wheel are there any builtins in the PyData eco-system for doing this? | It looks like the package `nimfa` has what you are looking for <a href="http://nimfa biolab si" rel="nofollow">http://nimfa biolab si</a> ````V = np matrix([[1 2 3] [4 5 6] [6 7 8]]) fctr = nimfa mf(V method = "lsnmf" max_iter = 10 rank = 3) fctr_res = nimfa mf_run(fctr) # Print the loss function according to Kullback-Leibler divergence By default Euclidean metric is used print "Distance Kullback-Leibler: %5 3e" % fctr_res distance(metric = "kl") ```` This is not exactly what you are looking for since it appears to only take one input but it may be a place to start Additionally this link could be useful Seems to have some code (not with numpy) to compute the same thing <a href="https://code google com/p/tackbp2011/source/browse/TAC-KBP2011/src/python-utils/LDA/kullback-leibler-divergence py?r=100" rel="nofollow">https://code google com/p/tackbp2011/source/browse/TAC-KBP2011/src/python-utils/LDA/kullback-leibler-divergence py?r=100</a> |
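nimfa is a matrix-factorization library, so it is a roundabout fit for this question; when the two densities can be evaluated on a grid, the definition of KL divergence is easy to approximate directly with a Riemann sum (a rough sketch needing no SciPy, with simple hand-made densities standing in for the beta pdfs):

```python
import math

def kl_divergence(p, q, dx):
    """Approximate KL(p || q) for densities sampled on a uniform grid."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0 and qi > 0:
            total += pi * math.log(pi / qi) * dx
    return total

# two simple densities on [0, 1), sampled at midpoints
n = 1000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]
p = [2 * x for x in xs]   # triangular density
q = [1.0 for _ in xs]     # uniform density

print(kl_divergence(p, p, dx))  # 0.0 -- KL of a density with itself
print(kl_divergence(p, q, dx))  # ~0.193, the analytic value log(2) - 1/2
```

For the beta pdfs in the question, `p` and `q` would be `pdf_x(xs)` and `pdf_y(xs)` evaluated on the same grid.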
Why do assertions in unittest use TestCase assertEqual not the assert keyword? Python's built-in unittest module makes assertions with `TestCase assert*` methods: ````class FooTest(TestCase): def test_foo(self): self assertEqual(1 1) self assertNotEqual(1 2) self assertTrue(True) ```` I have generally used a testrunner such as <a href="http://somethingaboutorange com/mrl/projects/nose/1 0 0/usage html?highlight=assert">nose</a> or <a href="http://doc pytest org/en/latest/assert html#assert-with-the-assert-statement">py test</a> which allow use of the built-in `assert` keyword when making assertions: ````assert 1 == 1 assert 1 != 2 assert True ```` What is the motivation for unittest's `TestCase assert*` approach and what are the strengths and weaknesses of this vs asserting with the built-in assert keyword? Are there reasons why unittest's syntax should be favoured? | The problem with the `assert` keyword is that it is optimized out and thus <them>ignored</them> when Python is run in 'optimized' mode (with the `-O` argument or with the `PYTHONOPTIMIZE` environment variable set ) If tests were to use `assert` then testing with `-O` would be impossible Additionally the use of the assert methods makes it trivial to report about what the values involved actually were without having to dig into the stack and the source and figure out what they were supposed to be (which I believe is the technique `nose` and `py test` use for this ) |
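A second practical difference worth noting: the assert* methods build failure messages that already contain both values, while a bare `assert` carries no information at all. This can be checked without a test runner, since on Python 3 a `TestCase` can be instantiated directly:

```python
import unittest

tc = unittest.TestCase()

tc.assertEqual(1, 1)  # passes silently, just like a bare assert

try:
    tc.assertEqual(1, 2)
except AssertionError as exc:
    unittest_msg = str(exc)

try:
    assert 1 == 2
except AssertionError as exc:
    plain_msg = str(exc)

print(repr(unittest_msg))  # '1 != 2' -- the values come for free
print(repr(plain_msg))     # ''       -- nothing to go on
```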
cloudinary django database error I am developing an app that uses Cloudinary, with Django 1.8. I downloaded the sample project from <a href="https://github.com/cloudinary/cloudinary-django-sample" rel="nofollow">https://github.com/cloudinary/cloudinary-django-sample</a> This line: image = CloudinaryField('image') causes an error when running "manage.py migrate", saying <blockquote> django.db.utils.OperationalError: near "None": syntax error </blockquote> Tried adding "null=True" and "blank=True" to the field definition ````image = CloudinaryField('image', null=True, blank=True) ```` but I am getting the same result. I import CloudinaryField like this <blockquote> from cloudinary.models import CloudinaryField </blockquote> When I comment out the line with CloudinaryField there are no errors. What can be the reason for this error? | Well, there is no such field type as the one you used. For handling the image you can either use a `CharField` to store the url or use a `FileField` to store the file key which will link the url. You can find detailed configuration on <a href="http://cloudinary.com/documentation/django_integration" rel="nofollow">this</a> page
Passing List Values as Parameters `list = ['12345' '23456']` I have a script `"test py"` I need to pass the values in a given list above as parameters to this script with `"pick"` as option can anyone provide input on how this can be done? Final goal is to run the script like the following: ```` test py pick 12345 23445 ```` | You should parse the arguments with sys argv <a href="http://docs python org/2/library/sys html#sys argv" rel="nofollow">http://docs python org/2/library/sys html#sys argv</a> If you want to run the script from another script you can use <a href="http://docs python org/2/library/os html#os system" rel="nofollow">os system</a> ````os system("script2 py 1") ```` |
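A minimal sketch of the `sys.argv` route for the `test py pick 12345 23445` invocation in the question (the parsing helper is mine, not from the answer):

```python
import sys

def parse_pick(argv):
    # argv has the same shape as sys.argv: [script, option, value, value, ...]
    if len(argv) < 3 or argv[1] != "pick":
        raise SystemExit("usage: test.py pick <number> [<number> ...]")
    return argv[2:]

# Inside test.py you would write: values = parse_pick(sys.argv)
# Here we simulate the command line from the question:
values = parse_pick(["test.py", "pick", "12345", "23456"])
print(values)
```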
What is the average GDP of SACU member countries? | null |
gif Image as source runs nicely in Windows kivy program Running via kivy Launcher shows background of gif image I am running a program where I am showing a gif image in a widget and it works perfectly well; however, when I run this app using kivy Launcher the gif image comes with a square box, even when the image is without a background. Anyone have any ideas why this is behaving differently on android and windows? Please see the kv code below as an example of how I used the gif image. I am using this gif image as a button ````<ButImage@ButtonBehavior+AsyncImage> canvas.before: Color: rgb: (0, 0, 1) PushMatrix Rotate: axis: 0, 0, 1 angle: 20 origin: self.center source: "images/butterflybluex.gif" canvas.after: PopMatrix ```` <img src="http://i.stack.imgur.com/zpo0D.gif" alt="Attached GIF"> | First, make sure that you package pil/pillow [just add it to one of the requirements while building the apk] for gif loading, otherwise a pure python loader that is very slow for android would be used. Second, please elaborate what you mean by the square box?
Update: your updated example shows that you are using AsyncImage with a local source. AsyncImage is meant to be used with a remote url; for local sources you can just use an `Image` class. Second: if you are getting a white background instead of an image, you gave it the wrong path. Make sure your image is present in the directory, or that your directory is present in the right place on the launcher. Update 3: The issue, as stated earlier, is with the gif image loader using pil. Not all images work with it. It works on your desktop because pil is not installed and a pure python gif loader is used instead. This loader would not be usable on android because of speed issues. One workaround is to use gimp to open and re-save the image; it should work properly then. One other way is to contribute and fix the loader using pil for gif (I must warn there are so many different gifs on the web, each with its own slight variations, that making sure one works could lead to others getting broken). To reproduce your issue on desktop, just install pillow. There are many artifacts that can come up while using gifs for animation. I would recommend you use images (png/jpg) in a `.zip` and set that to the source. That way you get rid of the artifacts. Please make sure that gif or zip animation provided by the Image class is only used for situations where you do not need to control the animation a lot, like for static animations that do not change. If your animation needs go beyond this, then you should manage your animation manually by loading a sprite sheet in an Atlas
set "user-data-dir" in selenium python for chrome driver in Ubuntu I am using <strong>selenium</strong> <strong>python</strong> in <strong>Ubuntu</strong> ````chromedriver = '/usr/local/bin/chromedriver' chrome_options = webdriver ChromeOptions() driver = webdriver Chrome(executable_path=chromedriver chrome_options=chrome_options) ```` How to set <strong>user-data-dir</strong> in chrome_options in <strong>selenium python</strong> in ubuntu ? | You mean with "user-data-dir" that you want to load your own custom profiles? You can look here how to do this: <a href="https://sites google com/a/chromium org/chromedriver/capabilities" rel="nofollow">https://sites google com/a/chromium org/chromedriver/capabilities</a> You can use chrome options for this purpose like: ````ChromeOptions options = new ChromeOptions(); options addArguments("user-data-dir=/path/to/your/custom/profile"); ChromeDriver driver = new ChromeDriver(options); ```` |
Make python script exit after x seconds of inactivity I have a script for my raspberry pi that continually takes readings and saves data to a file However I know this is dangerous as there is a real risk of the SD card becoming corrupted if the power is pulled while data is being saved Is there a way I can have the script terminate itself if the computer is inactive for a set amount of time? Sorry if this question is vague but I have no idea where to even begin with this so I cannot show any code that I have tried | That is a naive watchdog implementation: ````import os import signal import threading class Watchdog(): def __init__(self, timeout=10): self.timeout = timeout self._t = None def do_expire(self): os.kill(os.getpid(), signal.SIGKILL) def _expire(self): print("\nWatchdog expire") self.do_expire() def start(self): if self._t is None: self._t = threading.Timer(self.timeout, self._expire) self._t.start() def stop(self): if self._t is not None: self._t.cancel() self._t = None def refresh(self): if self._t is not None: self.stop() self.start() ```` Build it with `wd = Watchdog()` and every time you get something that feeds your work, call `wd.refresh()`. If you do not call refresh before the timeout ends, it will call `os.kill(os.getpid(), signal.SIGKILL)`. You cannot use just `sys.exit()` because it raises just a `SystemExit` exception: `kill` works as you want. Now you can use something to poll the system and use the answer to decide whether or not to refresh the watchdog. For instance `xprintidle` tells you the X idle time, but it all depends on what you need to monitor. Use example ````timeout = 10 wd = Watchdog(timeout) wd.start() while True: a = str(raw_input('Tell me something or I will die in {} seconds: '.format(timeout))) wd.refresh() print("You wrote '{}', you win another cycle".format(a)) ````
Python: What is the fastest way to split and join these strings? I have a script where a global constant ZETA is input by a user ZETA = alpha * A user will supply an input like: ````alpha aaa_xxx alpha bbb_yyy alpha abc_xyz etc ```` The alpha * arguments are representative of integers and there are so many of them that building a class function to assign outputs to every known integer value would be messy Later in my code from the input given to ZETA I need outputs that look like this: ````beta aaa beta xxx beta bbb beta yyy beta abc beta xyz ```` I would like to use a list split and join method to get these outputs but I have been running into an error: when I try to list(ZETA) I was hoping for: ````a l p h a a a a _ x x x ```` I get: ````TypeError: 'int' object is not iterable ```` I understand why but it throws a kink in my plan for sure Thoughts? Thanks! ETA: so I think I can get to where I need to be if I require the inputs to be in quotes: 'alpha aaa_xxx' instead of alpha aaa_xxx is there any way around that? 
| Ok, let us say you have your data in one list ````sample = ['alpha 111_222', 'alpha 222_444', 'alpha 433_cvx'] # for example ```` You can read your file into a list, but (beauty of generators) you do not have to; you can easily build a generator yourself to read the file. As far as I understood, you want to cut off the alpha part (split by ' ') first. We can do this with a comprehension ````[y.split(' ') for y in sample] # list of [['alpha', 'xxx_yyy'], ...] ```` Now we want to get the second member of each list and split it by '_'. Same idea ````[x.split('_') for x in [y.split(' ')[1] for y in sample]] ```` Now that we have a list of pairs ['xxx', 'yyy'], all we need is to form new lines like this ````result = ["beta %s beta %s" % (x[0], x[1]) for x in [x.split('_') for x in [y.split(' ')[1] for y in sample]]] # ['beta xxx beta yyy', ...] ```` Or if you dislike nested comprehensions you can write code like this: ````k = [x.split(' ')[1] for x in sample] l = [x.split('_') for x in k] result = ["beta %s beta %s" % (x[0], x[1]) for x in l] ```` Looks a bit clearer; more variables though. Now all we need is print, or output to a file ````for item in result: print item ```` I hope it helps
float required?? Error with round() in Python could not find this question answered by search I am trying to learn some Python and need your help with this function: ````def roundtest(): i = round(raw_input("call a number: "), 2) print i ```` My input & the error I get: ````call a number: 1.2222 TypeError: a float is required ```` Thank you for your help | `raw_input` returns a string, which you then have to parse into a `float`, like so: ````def roundtest(): i = round(float(raw_input("call a number: ")), 2) print i ````
the session is unavailable because no secret key was set Set the secret_key on the application to something unique and secret Flask/Heroku The flask app can login and register all fine on localhost But this becomes an issue when i push it to heroku It shows the above mentioned error Here is the app py code ````from flask import Flask render_template request redirect jsonify url_for flash from sqlalchemy import create_engine asc desc from sqlalchemy orm import sessionmaker from database_setup import Base User BlogPost from flask import session as login_session import random import string from wtforms import Form BooleanField TextField PasswordField validators from passlib hash import sha256_crypt app = Flask(__name__) #Connecting to database engine = create_engine('sqlite:///travellerdata db') Base metadata bind = engine DBSession = sessionmaker(bind=engine) session = DBSession() ```` And ends with ````if __name__ == "__main__": app secret_key = 'some secret key' app debug = True app run() ```` | It is likely that when your HTTP server is loading your application `__name__` is not equal to `'main'` Try moving the line `app secret_key = 'some secret key'` outside the if block It is not a good idea to put your secret key in source code because if anyone gets it they can malevolently gain access to your system Try storing it in a file in the application's instance directory (<a href="http://flask pocoo org/snippets/104/" rel="nofollow">snippet here</a>) or putting it in an environment variable (<a href="http://flask pocoo org/docs/0 10/config/#development-production" rel="nofollow">explanation here</a>) |
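A minimal sketch of the environment-variable option mentioned in this answer (the variable name `FLASK_SECRET_KEY` is my choice; in a real deployment you would export it in the shell or service config rather than set it in code):

```python
import os
from flask import Flask

# Set here only so the sketch is self-contained; normally this comes from the environment.
os.environ.setdefault("FLASK_SECRET_KEY", "replace-me-with-a-long-random-value")

app = Flask(__name__)
# Fail loudly at startup if the key is missing, instead of failing on first session use.
app.secret_key = os.environ["FLASK_SECRET_KEY"]
```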
How to catch the error in each time in python? I have got a problem with my code I am trying to get pass on the error but the error are jumping on this line: ````program_stop_time = time strptime(prog_stop_clock '%I:%M%p') ```` The error I am getting is: (data_string format)) ValueError: time data '' does not match format '%I:%M%p' I understand that it can be used on try and expect function but I want to catch the error I get in each time as I want run the code on below in each time when the error get catch: ````if program_stop_time == epg_time_1: if program_finished == '00': if program_width >= 342: programs_width = 342 self getControl(int(program_id)) setWidth(programs_width) self getControl(int(program_id)) setVisible(False) self getControl(int(program_id)) setPosition(int(pos_X) - 350 int(pos_Y)) self getControl(int(nextprogram)) setPosition(375 int(pos_Y)) ```` Here is the full code: ````for prog_stop_clock prog_start_clock program_id in izip_longest(self program_stop_clock program_start_clock prog_id_list fillvalue=''): current_time = int(time strftime("%M")) program_stop_time = time strptime(prog_stop_clock '%I:%M%p') program_finished = prog_stop_clock split(':')[1] replace('PM' '') replace(' '') if program_stop_time == epg_time_1: if program_finished == '00': if program_width >= 342: programs_width = 342 self getControl(int(program_id)) setWidth(programs_width) self getControl(int(program_id)) setVisible(False) self getControl(int(program_id)) setPosition(int(pos_X) - 350 int(pos_Y)) self getControl(int(nextprogram)) setPosition(375 int(pos_Y)) ```` I have already tried this: ````try: for prog_stop_clock prog_start_clock program_id in izip_longest(self program_stop_clock program_start_clock prog_id_list fillvalue=''): current_time = int(time strftime("%M")) program_stop_time = time strptime(prog_stop_clock '%I:%M%p') program_finished = prog_stop_clock split(':')[1] replace('PM' '') replace(' '') except: if program_stop_time == epg_time_1: if 
program_finished == '00': if program_width >= 342: programs_width = 342 self getControl(int(program_id)) setWidth(programs_width) self getControl(int(program_id)) setVisible(False) self getControl(int(program_id)) setPosition(int(pos_X) - 350 int(pos_Y)) self getControl(int(nextprogram)) setPosition(375 int(pos_Y)) ```` I have tried to use `try:` and `except`; it only lets me get past the error once. I want to catch the error every time it occurs. How can I do that? | You will want to move your `try`/`except` inside your `for` loop if you want to continue with the ones that do not fail, such as below ````for prog_stop_clock, prog_start_clock, program_id in izip_longest(self.program_stop_clock, program_start_clock, prog_id_list, fillvalue=''): try: current_time = int(time.strftime("%M")) program_stop_time = time.strptime(prog_stop_clock, '%I:%M%p') program_finished = prog_stop_clock.split(':')[1].replace('PM', '').replace(' ', '') if program_stop_time == epg_time_1: if program_finished == '00': if program_width >= 342: programs_width = 342 self.getControl(int(program_id)).setWidth(programs_width) self.getControl(int(program_id)).setVisible(False) self.getControl(int(program_id)).setPosition(int(pos_X) - 350, int(pos_Y)) self.getControl(int(nextprogram)).setPosition(375, int(pos_Y)) except ValueError as e: pass # handle or log the error here ````
What did Al Gore announce was impossible for the Democrats to do to win in 2000? | null |
SPEED UP Django Form to Upload large (500k obs) CSV file to MySQL DB The Django table is approximately 430 000 obs and 230mb file;\ and comes from a flat CSV file outlined by details below in\ MODELS PY I have considered using chunks for the CSV Reader but I think the Processor\ function I have that populates the MySQL table is my hangup; it takes 20 hours+\ HOW CAN I SPEED THIS UP?? ````class MastTable(models Model): evidence = models ForeignKey(Evidence blank=False) var2 = models CharField(max_length=10 blank=True null=True) var3 = models CharField(max_length=10 blank=True null=True) var4 = models CharField(max_length=10 blank=True null=True) var5 = models CharField(max_length=10 blank=True null=True) var6 = models DateTimeField(blank=True null=True) var7 = models DateTimeField(blank=True null=True) var8 = models DateTimeField(blank=True null=True) var9 = models DateTimeField(blank=True null=True) var10 = models DateTimeField(blank=True null=True) var11 = models DateTimeField(blank=True null=True) var12 = models DateTimeField(blank=True null=True) var13 = models DateTimeField(blank=True null=True) var14 = models CharField(max_length=500 blank=True null=True) var15 = models CharField(max_length=500 blank=True null=True) var16 = models CharField(max_length=50 blank=True null=True) var17 = models CharField(max_length=500 blank=True null=True) var18 = models CharField(max_length=500 blank=True null=True) var19 = models CharField(max_length=500 blank=True null=True) var20 = models CharField(max_length=500 blank=True null=True) var21 = models CharField(max_length=500 blank=True null=True) var22 = models CharField(max_length=500 blank=True null=True) var23 = models DateTimeField(blank=True null=True) var24 = models DateTimeField(blank=True null=True) var25 = models DateTimeField(blank=True null=True) var26 = models DateTimeField(blank=True null=True) ```` This helper function will create a reader object for the CSV\ and also decode any funky codecs in the file before 
MySQL upload ````def unicode_csv_reader(utf8_data dialect=csv excel **kwargs): csv_reader = csv reader(utf8_data dialect=dialect **kwargs) for row in csv_reader: yield [unicode(cell 'ISO-8859-1') for cell in row] ```` A function in a UTILS PY File will then access a DB table (named 'extract_properties') which\ contains the file header to identify which processor function to go to\ the processor function will look like this below ````def processor_table(extract_properties): #Process the table into MySQL evidence_obj created = Evidence objects get_or_create(case=case_obj evidence_number=extract_properties['evidence_number']) #This retrieves the Primary Key reader = unicode_csv_reader(extract_properties['uploaded_file'] dialect='pipes') #CSVfunction for idx row in enumerate(reader): if idx <= (extract_properties['header_row_num'])+3: #Header is not always 1st row of file pass else: try: obj created = MastTable objects create( #I was originally using 'get_or_create' evidence=evidence_obj var2=row[0] var3=row[1] var4=row[2] var5=row[3] var6=date_convert(row[4] row[5]) #funct using 'dateutil parser parse' var7=date_convert(row[6] row[7]) var8=date_convert(row[8] row[9]) var9=date_convert(row[10] row[11]) var10=date_convert(row[12] row[13]) var11=date_convert(row[14] row[15]) var12=date_convert(row[16] row[17]) var13=date_convert(row[18] row[19]) var14=row[20] var15=row[21] var16=row[22] var17=row[23] var18=row[24] var19=row[25] var20=row[26] var21=row[27] var22=row[28] var23=date_convert(row[29] row[30]) var24=date_convert(row[31] row[32]) var25=date_convert(row[33] row[34]) var26=date_convert(row[35] row[36]) ) except Exception as e: #This logs any exceptions to a custom DB table print "Error" e print "row" row print "idx:" idx SystemExceptionLog objects get_or_create(indexrow=idx errormsg=e args[0] timestamp=datetime datetime now() uploadedfile=extract_properties['uploaded_file']) continue return True ```` Finally the VIEWS PY Form below to accept file and Call the 
processor above to populate DB Checks for valid form data and passes any files to the file handler if valid ````def upload_file(request): if request method == 'POST': form = UploadFileForm(request POST request FILES) if form is_valid(): for _file in request FILES getlist('file'): extract_properties = get_file_properties(_file) if extract_properties: for property in extract_properties: #File is found and processor kicked off print "starting parser" try: property['evidence_number'] = request POST get('evidence_number') result = process_extract(property) if result is None: print 'Unable to get determine extract properties!' except Exception as e: print "!!!!!!!" print "Error could not upload" e pass else: print 'Unable to identify file uploaded!' return HttpResponseRedirect('') print form else: form = UploadFileForm() return render_to_response('nettop/upload_file html' # The web frontend Page for Upload {'form': form} context_instance=RequestContext(request)) ```` | The most basic and effective optimization in Django is to reduce the number of queries to the database That is true for 100 queries and that is most certainly true for 500 000 queries Instead of using `MastTable objects create()` you should construct a list of unsaved model instances and use `MastTable objects bulk_create(list_of_models)` to create them all in as few round-trips to the database as possible This should speed it up tremendously If you are using MySQL you can increase the `max_allowed_packet` setting to allow for larger batches Its default of 1MB is quite low PostGRESQL has no hardcoded limit If you are still running into performance issues you can switch to <a href="https://docs djangoproject com/en/1 7/topics/db/sql/#executing-custom-sql-directly" rel="nofollow">raw SQL statements</a> Creating 500 000 python objects can be a bit of an overhead In one of my recent tests executing the exact same query with `connection cursor` was about 20% faster It can be a good idea to leave the actual 
processing of the file to a background process using e.g. Celery, or using a `StreamingHttpResponse` to provide feedback during the process
Getting deprecation warning in Sklearn over 1d array despite not having a 1D array I am trying to use SKLearn to run an SVM model I am just trying it out now with some sample data Here is the data and the code: ````import numpy as np from sklearn import svm import random as random A = np array([[random randint(0 20) for i in range(2)] for i in range(10)]) lab = [0 1 0 1 0 1 0 1 0 1] clf = svm SVC(kernel='linear' C=1 0) clf fit(A lab) ```` FYI when I run ````import sklearn sklearn __version__ ```` It outputs 0 17 Now when I run `print(clf predict([1 1]))` I get the following warning: ````C:\Users\me\AppData\Local\Continuum\Anaconda2\lib\site-packages\sklearn\ut ils\validation py:386: DeprecationWarning: Passing 1d arrays as data is deprecat ed in 0 17 and willraise ValueError in 0 19 Reshape your data either using X re shape(-1 1) if your data has a single feature or X reshape(1 -1) if it contain s a single sample DeprecationWarning) ```` It does give me a prediction which is great However I find this weird for a few reasons I do not have a 1d array If you print A you get ````array([[ 9 12] [ 2 16] [14 14] [ 4 2] [ 8 4] [12 3] [ 0 0] [ 3 13] [15 17] [15 16]]) ```` Which appears to me to be 2 dimensional But okay let us just say that what I have is in fact a 1D array Let us try to change it using `reshape` as suggested by the error Same code as above but now we have ````A = np array([[random randint(0 20) for i in range(2)] for i in range(10)]) reshape(-1 1) ```` But then this outputs an array of length 20 which makes no sense and is not what I want I also tried it with `reshape(1 -1)` but then this gives me a single observation / list with 20 items in it How can I reshape my data in numpy arrays so that I do not get this warning? 
<hr> I looked at two answers on SO and neither worked for me <a href="http://stackoverflow com/questions/35082140/preprocessing-in-scikit-learn-single-sample-depreciation-warning">Question 1</a> and <a href="http://stackoverflow com/questions/35166146/sci-kit-learn-reshape-your-data-either-using-x-reshape-1-1">Question 2</a> It seems that Q1 was actually 1D data and was solved using `reshape` which I tried and failed at Q2 has an answer about how to track warnings and errors which is not what I want The other answer is again an instance of a 1D array | The error is coming from the predict method Numpy will interpret [1 1] as a 1d array So this should avoid the warning: `clf predict(np array([[1 1]]))` Notice that: ````In [14]: p1 = np array([1 1]) In [15]: p1 shape Out[15]: (2 ) In [16]: p2 = np array([[1 1]]) In [17]: p2 shape Out[17]: (1 2) ```` Also note that you cannot use an array of shape (2 1) ````In [21]: p3 = np array([[1] [1]]) In [22]: p3 shape Out[22]: (2 1) In [23]: clf predict(p3) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-e4070c037d78> in <module>() ---> 1 clf predict(p3) /home/juan/anaconda3/lib/python3 5/site-packages/sklearn/svm/base py in predict(self X) 566 Class labels for samples in X 567 """ -> 568 y = super(BaseSVC self) predict(X) 569 return self classes_ take(np asarray(y dtype=np intp)) 570 /home/juan/anaconda3/lib/python3 5/site-packages/sklearn/svm/base py in predict(self X) 303 y_pred : array shape (n_samples ) 304 """ -> 305 X = self _validate_for_predict(X) 306 predict = self _sparse_predict if self _sparse else self _dense_predict 307 return predict(X) /home/juan/anaconda3/lib/python3 5/site-packages/sklearn/svm/base py in _validate_for_predict(self X) 472 raise ValueError("X shape[1] = %d should be equal to %d " 473 "the number of features at training time" % -> 474 (n_features self shape_fit_[1])) 475 return X 476 ValueError: X shape[1] 
= 1 should be equal to 2 the number of features at training time ```` |
For string with lower then UPPER Give position when UPPER starts Python If I have a string ````ex = 'aaatttgggatgaATG' ```` and I want to find the index where the lowercase ends, so in this case it would be ````indx_lower = 13 ```` how would I get that value? Would I have to do a for loop where I check the boolean for each element in the string? Like this? ````total_indx = range(0, len(ex)) for p, k in zip(ex, total_indx): if upper: print k ```` Yeah, I do not know how I would do this | ````next(i for i, j in enumerate(ex) if j.isupper()) ```` This returns the index of the first uppercase character (13 for the example string), i.e. the position where the lowercase run ends.
When should we use split() in Python? So I would like to remove the white space from my integer input and store them into a list ````t = raw_input().split() numbers = [int(x.strip()) for x in t] numbers = sorted(numbers) print numbers ```` However the output's still the same when I do not use strip(). Can somebody please explain to me why we should use strip()? I saw a couple of posts on the forum and people also frequently use strip(). I understand split() returns all the numbers by eliminating the white space, and strip() also does the same job. Thanks! | I do not understand the confusion. The `split()` function returns a list of all subparts of your string, splitting on every occurrence of the given argument. For example, if you have the following string: "Hello world!" and split this one with split("o"), then your output will be: ["Hell", " w", "rld!"] With code: ````str = "Hello world!" split_str = str.split("o") print "str has type", type(str), "with the value", str, "\n" print "split_str has type", type(split_str), "with the value", split_str ```` Then the output will be: <blockquote> str has type string with the value Hello world! split_str has type list with the value ["Hell", " w", "rld!"] </blockquote> So if you have a string that represents a sequence of different integers separated by spaces, you could use this solution ````input_integers = raw_input().split(" ") # splits the given input string numbers = [int(x) for x in input_integers] # iteration to convert from string to int numbers = sorted(numbers) # makes a sort on the integer list print numbers # display ```` It is a very basic use of strings, so next time have the reflex to read the doc. It is the first tool that may lead you to your solution
Where does Dr. Carlos Moore work? | "Brazil's University of the State of Bahia"
(Tkinter) Image will not show up in new window I just started using python tkinter and I have a button that opens a new window On the new window there is an image but the image will not show up Can you please help me solve my problem? ````from tkinter import * def nwindow(): nwin = Toplevel() nwin title("New Window") btn config(state = 'disable') photo2 = PhotoImage(file = 'funny gif') lbl2 = Label(nwin image = photo2) lbl2 pack() def quit(): nwin destroy() btn config(state = 'normal') qbtn = Button(nwin text = 'Quit' command = quit) qbtn pack() main = Tk() main title("Main Window") main geometry("750x750") photo = PhotoImage(file = 'funny gif') lbl = Label(main image = photo) lbl pack() btn = Button(main text = "New Winodw" command = nwindow) btn pack() main mainloop() ```` | Your code does not work as posted, but adding mainloop() should fix your issue ````def nwindow(): nwin = Toplevel() nwin title("New Window") btn config(state = 'disable') photo2 = PhotoImage(file = 'funny gif') lbl2 = Label(nwin image = photo2) lbl2 pack() nwin mainloop() ````
Create a csv file with values from a Python list I am trying to create a csv file with the values from a Python list. When I print the values in the list they are all unicode (?), i.e. they look something like this ````[u'value 1', u'value 2', ...] ```` If I iterate through the values in the list, i.e. `for v in mylist: print v`, they appear to be plain text. And I can put a `,` between each with `print ','.join(mylist)`. And I can output to a file, i.e. ````myfile = open( ) print >>myfile, ','.join(mylist) ```` But I want to output to a CSV and have delimiters around the values in the list, e.g. ````"value 1","value 2" ```` I cannot find an easy way to include the delimiters in the formatting, e.g. I have tried through the `join` statement. How can I do this? | You could use the string `join` method in this case. Split over a few lines for clarity; here is an interactive session ````>>> a = ['a', 'b', 'c'] >>> first = '", "'.join(a) >>> second = '"%s"' % first >>> print second "a", "b", "c" ```` Or as a single line ````>>> print ('"%s"') % '", "'.join(a) "a", "b", "c" ```` However, you may have a problem if your strings have embedded quotes. If this is the case, you will need to decide how to escape them. The <a href="http://docs.python.org/library/csv.html">CSV module</a> can take care of all of this for you, allowing you to choose between various quoting options (all fields; only fields with quotes and separators; only non-numeric fields; etc.) and how to escape control characters (double quotes or escaped strings). If your values are simple, string join will probably be OK, but if you are having to manage lots of edge cases, use the module available
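Here is a concrete sketch of the csv-module route this answer recommends, which handles the quoting and embedded-quote escaping automatically (the file name and sample values are illustrative; on Python 2 you would open the file in 'wb' mode instead):

```python
import csv

mylist = ['value 1', 'value 2', 'has "quotes" too']

# QUOTE_ALL wraps every field in double quotes and doubles any embedded quotes.
with open('out.csv', 'w', newline='') as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(mylist)

with open('out.csv', newline='') as f:
    print(f.read())  # "value 1","value 2","has ""quotes"" too"
```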
Call Python Script from Javascript I am on a dreamhost server and have some HTML that calls some javascript when a button is pressed I am trying to call a python script when this button is tapped First off to my knowledge as I am on a shared host I cannot use AJAX as it is not supported so I need to do this without AJAX Right now I am trying to do a `XMLHttpRequest` which is working I also realize doing an `XMLHttpRequest` is not the best way since the files are both on the server there must be a way to just call the file directly? So if someone call tell me how to call it directly or help me fix this error in the browser console that would be great Thanks for the help <strong>EDIT</strong> I have an HTML file when a user taps a button on this file it calls some javascript that is in the HTML file This javascript currently makes a POST Request to a python script that is on the same server and the HTML file What I want instead of making a post request to the python file that is on the server I want to just directly call the python file from the javascript that runs when the button the clicked in the HTML file Both the HTML file which contains the javascript and the python file are on the same server And I do not want the python to run in the browser I want it to run in the background on the server How can I use the Javascript to call this python file? 
| As far as I understand your question what you are looking to do is called a "remote procedure call" or some sort of Service Oriented Architecture (SOA) You are on a right track in making a `POST` request to the server You can setup a middleware like flask or cherrypy to run the script when you send a GET PUT POST request And inside of the middleware controller you can call your script Basically you have started to create a RESTful api and its a pretty standard way these days to run logic on the backend Some examples of different frameworks for doing url routing: Python: - CherryPy: <a href="http://docs cherrypy org/en/latest/tutorials html#tutorial-9-data-is-all-my-life" rel="nofollow">http://docs cherrypy org/en/latest/tutorials html#tutorial-9-data-is-all-my-life</a> - Flask: <a href="http://flask pocoo org/docs/0 10/quickstart/" rel="nofollow">http://flask pocoo org/docs/0 10/quickstart/</a> - a longer list of lower level python: <a href="http://wsgi readthedocs org/en/latest/libraries html" rel="nofollow">http://wsgi readthedocs org/en/latest/libraries html</a> NodeJs: - Express: <a href="http://expressjs com/guide/routing html" rel="nofollow">http://expressjs com/guide/routing html</a> - Koa: <a href="https://github com/koajs/route" rel="nofollow">https://github com/koajs/route</a> - Hapi: <a href="http://hapijs com/tutorials/routing" rel="nofollow">http://hapijs com/tutorials/routing</a> Also very good is this question: <a href="http://stackoverflow com/questions/16626021/json-rest-soap-wsdl-and-soa-how-do-they-all-link-together">JSON REST SOAP WSDL and SOA: How do they all link together</a> Another way that you could do this from the browser would be to use sockets which opens a connection between the client and the server Inside the javscript you could use socketio: ````<script src='/socket io/socket io js'></script> <script> var socket = io(); socket connect('http://localhost:8000') socket emit('run_a_script_event' {arg1: 'hello' arg2: 'world'}); 
</script> ```` And in your python code you could use the socketio client for python (<a href="https://pypi python org/pypi/socketIO-client" rel="nofollow">https://pypi python org/pypi/socketIO-client</a>): ````from your_module import thescript from socketIO_client import SocketIO LoggingNamespace def run_a_script(*args): print('arg1 and arg2' args) thescript() socketIO = SocketIO('localhost' 8000 LoggingNamespace) socketIO on('run_a_script_event' run_a_script) ```` Looks like there is also a version specifically for flask: <a href="https://flask-socketio readthedocs org/en/latest/" rel="nofollow">https://flask-socketio readthedocs org/en/latest/</a> Or you could run the python directly in the browser by converting it to javascript with a transpiler: - <a href="http://www skulpt org/" rel="nofollow">http://www skulpt org/</a> Or you could use node javascript to spawn a child process which runs the python script: - <a href="http://stackoverflow com/a/20973067/2026508">http://stackoverflow com/a/20973067/2026508</a> Or you can import the python script into node like: - <a href="http://stackoverflow com/questions/21655552/use-of-node-python-to-execute-python-scripts-from-web-application-hosted-under-p">Use of node-python to execute python scripts from web application hosted under python?</a> |
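Either way the request ultimately has to make the server execute the file. As a minimal stdlib-only sketch (framework-agnostic; the helper name, script path and wiring are assumptions, not taken from the answer above), the route handler bound to the button's POST could shell out to the interpreter:

```python
import subprocess
import sys

def run_script(script_path, args=()):
    # Called from whatever POST/GET route the framework maps to the button.
    # Runs the target file in a separate interpreter on the server and
    # returns its stdout; raises if the script exits non-zero.
    result = subprocess.run(
        [sys.executable, script_path] + list(args),
        capture_output=True, text=True, timeout=30,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```

In Flask or CherryPy, for example, this would be the body of the view function bound to the URL the javascript POSTs to.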
Python | Is it possible to have multiple outcomes of one IF statement? Making an IT quiz for school and I do not want it to be case sensitive so I would have 2 if statements if I want there to be 2 possible answers My problem is that only one answer works May sound confusing ````a = input("And the answer is ") if a == "answer1": print("Correct!") if a == "answer2": print("Correct!") else: print("Incorrect!") ```` The outcome if you enter "answer1" is: ````Correct! Incorrect! ```` However "answer2" will give you: ````Correct! ```` Basically I want both "answer1" and "answer2" to have the outcome of "Correct!" | A couple of different ways: ````if a == "answer1" or a == "answer2": print("Correct!") else: print("Incorrect!") ```` or ````if a in ["answer1" "answer2"]: print("Correct!") else: print("Incorrect!") ```` or ````if a == "answer1": print("Correct!") elif a == "answer2": print("Correct!") else: print("Incorrect!") ```` You can read more about in the <a href="https://docs python org/3 3/tutorial/controlflow html" rel="nofollow">"Flow Control"</a> section of the Python tutorial |
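The question also mentions wanting the quiz to be case-insensitive, which none of the variants above address by themselves; a small sketch that normalizes the input before comparing:

```python
def check_answer(a):
    # Lower-case and strip the input once, so "Answer1", " ANSWER2"
    # and "answer1" all compare equal to the stored answers.
    if a.strip().lower() in ("answer1", "answer2"):
        return "Correct!"
    return "Incorrect!"
```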
What was the limit of its high speed memory? | ENIAC |
Django/Python URL Regex of form "string#1" I would like to match URL string "anything#1" in django My regular expression goes like this: ````r'^(?P<title>\w+[_]*)?/#(?P<id>\d+)/$' ```` The "#1" part should be optional so I put question mark there However it shows me an error What am I doing bad? | Django does not control signet (or fragment) you can get it in your view with <a href="https://docs djangoproject com/en/dev/ref/request-response/" rel="nofollow">`HttpRequest get_full_path()`</a> : Get it only with : ```` >>> request get_full_path() split('#')[1] '1' ```` |
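Note that browsers do not normally send the fragment (`#1`) to the server at all, so splitting on `#` only helps when the full URL is available as a plain string (for example, posted by javascript). For that case the standard library already has a helper; a small sketch using Python 3 names:

```python
from urllib.parse import urldefrag

# Split a full URL string into the part before '#' and the fragment.
base, fragment = urldefrag("http://example.com/anything#1")
# base is "http://example.com/anything", fragment is "1"
```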
How can I parse a list of dictionaries into JSON Format using Javascript? I am trying to parse a list of dictionaries in JSON format so I can use their data to create a set of list items where the text and id are generated using this data I am passing the following to my web page and stashing it in a hidden div before serving it: ```` [{'text': 'org1' 'id': 'org1ID'} {'text': 'org2' 'id': 'org2ID'} {'text': 'org3' 'id': 'org3ID'}] ```` I am doing it this way because the data can be added to on the client by performing a lookup using ajax and the same ui elements will be able to view different data sets using a subscription based observer-observable model Firebug shows it is successfully rendered as: ````<div id="selection">[{'text': 'org1' 'id': 'org1ID'} {'text': 'org2' 'id': 'org2ID'} {'text': 'org3' 'id': 'org3ID'}]</div> ```` The problem starts when I try a JSON parse() on the contents of this div I get back the following message: ````JSON parse: expected property name or '}' ```` If I remove JSON parse() I get a list of li elements but with Undefined as values and the number of them does not match expected either: ````<li id="Undefined">Undefined</> <!-- Repeated a number of times equal to char length of div html --> ```` Please be aware if the above has not already told you that I am trying to iterate over my list and extract the information from each dict I also tried JSON stringify() on the data before parsing it but that only seems to convert the data into a string Looking like this when I step through: ````"[{'text': 'org1' 'id': 'org1ID'} {'text': 'org2' 'id': 'org2ID'} {'text': 'org3' 'id': 'org3ID'}]" <!-- The difference is that double quotes have been injected --> ```` To make matters even more irritating for me at least is that if I actually hard code the associative array part of this data (as Javascript would see it) then it works correctly However in order to do this I have to compromise a lot of the flexibility of my current design The most
prominent drawback aside from more client-side processing is the inability to just pass my data structure back from the server Can anyone out there offer some insight or perhaps a solution I may have overlooked? Is this a common gotcha with working with JSON in js? UPDATE I have discovered that the reason why hard coding works differently is that either python or Jinja2 is converting my data structure to use single quotes instead of double but entering it on the client bypasses python At this point I believe it is python as stepping through on eclipse shows a single quoted list of dicts | A valid JSON string directly represents a javascript array/object and does not require any parsing at all Just output this directly to a javascript variable inside script tags: ````<script> var json_array = [{'text': 'org1' 'id': 'org1ID'} {'text': 'org2' 'id': 'org2ID'} {'text': 'org3' 'id': 'org3ID'}]; </script> ```` Your data will automatically be in a javascript array no need to parse |
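Following up on the update above: the single quotes come from Python's repr(), which is not valid JSON. A sketch of the server-side fix (independent of Jinja2) is to serialize with the json module before handing the value to the template, so that `JSON.parse` or direct embedding works on the client:

```python
import json

orgs = [{'text': 'org1', 'id': 'org1ID'},
        {'text': 'org2', 'id': 'org2ID'}]

# str(orgs)/repr(orgs) emits single quotes and breaks JSON.parse;
# json.dumps emits standards-compliant double-quoted JSON.
payload = json.dumps(orgs)
```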
How many students attended the Bronx public and private schools as of 2000? | 333,100 |
How to calculate simulated values while plotting discrepancy plot for goodness of fit? I am trying to make the discrepancy plot for testing goodness-of-fit after obtaining best fit values by MCMC using pymc My code goes as: ````import pymc import numpy as np import matplotlib pyplot as plt seaborn as sns # Seeding np random seed(55555) # x-data x = np linspace(1 50 50) # Gaussian function def gaus(x A x0 sigma): return A*np exp(-(x-x0)**2/(2*sigma**2)) # y-data f_true = gaus(x 10 25 10 ) noise = np random normal(size=len(f_true)) * 0 2 f = f_true noise # y_error f_err = f*0 05 # Defining the model def model(x f): A = pymc Uniform('A' 0 50 value = 12) x0 = pymc Uniform('x0' 0 50 value = 20) sigma = pymc Uniform('sigma' 0 30 value=8) @pymc deterministic(plot=False) def gaus(x=x A=A x0=x0 sigma=sigma): return A*np exp(-(x-x0)**2/(2*sigma**2)) y = pymc Normal('y' mu=gaus tau=1 0/f_err**2 value=f observed=True) return locals() MDL = pymc MCMC(model(x f)) MDL sample(20000 10000 1) # Extract best-fit parameters A_bf A_unc = MDL stats()['A']['mean'] MDL stats()['A']['standard deviation'] x0_bf x0_unc = MDL stats()['x0']['mean'] MDL stats()['x0']['standard deviation'] sigma_bf sigma_unc = MDL stats()['sigma']['mean'] MDL stats()['sigma']['standard deviation'] # Extract and plot results y_fit = MDL stats()['gaus']['mean'] plt clf() plt errorbar(x f yerr=f_err color='r' marker=' ' label='Observed') plt plot(x y_fit 'k' ls='-' label='Fit') plt legend() plt show() ```` So far so good and gives the following plot:<a href="http://i stack imgur com/ezT9p png" rel="nofollow"><img src="http://i stack imgur com/ezT9p png" alt="Best fit plot using MCMC"></a> Now I want to test the goodness-of-fit using method as described in section 7 3 in <a href="https://pymc-devs github io/pymc/modelchecking html" rel="nofollow">https://pymc-devs github io/pymc/modelchecking html</a> For this I have to find f_sim first so I wrote following code after above lines: ````# GOF plot f_sim = pymc 
Normal('f_sim' mu=gaus(x A_bf x0_bf sigma_bf) tau=1 0/f_err**2 size=len(f)) pymc Matplot gof_plot(f_sim f name='f') plt show() ```` This gives error saying <strong>AttributeError: 'Normal' object has no attribute 'trace'</strong> I am trying to use gof_plot before doing the discrepancy plot I do not think using other distribution instead of Normal would be a good idea because of gaussian nature of the function I would really appreciate if someone could let me know what I am doing wrong Also Normal distribution in pymc does not have Normal_expval to get the expected values Is there any other way that f_exp can be calculated? Thanks | I realized that f_sim is actually y values defined during the main fit since simulated values are the backbone of montecarlo method So I extracted y values for last 10000 iterations and used gof_plot as follows: ````f_sim = MDL trace('gaus' chain = None)[:] pymc Matplot gof_plot(f_sim f name='f') plt show() ```` Works great now! Still not sure how to get f_exp though |
Boolean Converted To decimal Decimal object in Boto DynamoDB Requests When I send a put item request to dynamoDB using the boto dynamodb2 table Table method with a boolean value in one of the item's attributes and then make a get item request on that same item the attribute value returns as Decimal('1') ````>>> Table put_item(data={ 'id': 'sjx7MQrKNqD7uQ6Xc2UepQkBY7xbJxvcGViP' 'active': True }) >>> response = Table get_item(id='sjx7MQrKNqD7uQ6Xc2UepQkBY7xbJxvcGViP') >>> print(response) {'active': Decimal('1') 'id': 'sjx7MQrKNqD7uQ6Xc2UepQkBY7xbJxvcGViP'} ```` Although there has been a lot of discussion in the boto Github repo about converting <a href="https://github com/boto/boto/pull/1885" rel="nofollow">python float values to decimal Decimal objects</a> or strings before sending them to dynamoDB to maintain data integrity I have been unable to find any discussion about booleans being converted <a href="https://docs aws amazon com/amazondynamodb/latest/developerguide/DataModel html" rel="nofollow">AWS documentation</a> indicates that a boolean is an acceptable datatype and does not mention anything about it being converted to a string like numbers are But there is a cryptic method for Table called <a href="https://boto readthedocs org/en/latest/ref/dynamodb2 html#boto dynamodb2 table Table use_boolean" rel="nofollow">use_boolean()</a> with no documentation So I am confused Is this a problem other people are experiencing? If so is there any explanation for it? If not any clue why my build would be doing this? 
| I have just run into this very same problem myself and dug into the `use_boolean()` a bit further with commentary below and have a solution It seems that support for the `Boolean` type in dynamoDB was only added to boto on the 17th of January 2015 in this pull request: <a href="https://github com/boto/boto/pull/2667" rel="nofollow">https://github com/boto/boto/pull/2667</a> This patch is relatively new (if you consider new <1 year) so may explain why the `use_boolean()` method is not too well documented but the approach seems to be similar to the `use_decimals()` method which is well documented in the plain old `boto dynamodb` tutorial: <a href="http://boto readthedocs org/en/latest/dynamodb_tut html#working-with-decimals" rel="nofollow">http://boto readthedocs org/en/latest/dynamodb_tut html#working-with-decimals</a> In the pull request you can see there is discussion around the importance of and how to maintain backwards compatibility with those users of boto who have already had their boolean types coerced to int The pull request introduced a `NonBooleanDynamizer` that is declared as the default `_dynamizer`; unless you call the `use_boolean()` method on your table object The most relevant part of the patch is as follows: <a href="https://github com/kain-jy/boto/commit/886c4bf1877538a6acc28dd5f9bdd1c8f1c30dd9#diff-454bd7ad5c48dd01834d852f01e4b573R114" rel="nofollow">https://github com/kain-jy/boto/commit/886c4bf1877538a6acc28dd5f9bdd1c8f1c30dd9#diff-454bd7ad5c48dd01834d852f01e4b573R114</a> The following should illustrate how to use `boto dynamodb2` more clearly (I have not fully investigated what the equivalent would be for plain old `boto dynamodb` but the sprinkling of use_boolean function parameters in the codebase suggests there is a way if needs be): ````>>> from boto dynamodb2 fields import HashKey >>> from boto dynamodb2 table import Table >>> data = {'true': True 'false': False 'one': 1 'zero': 0} >>> table = Table create('q32109154-test1'
schema=[HashKey('hkey')]) >>> data['hkey'] = 'test1' >>> table put_item(data=data) >>> result = table get_item(hkey='test1') >>> print [(k v) for k v in result items()] [(u'hkey' u'test1') (u'zero' Decimal('0')) (u'true' Decimal('1')) (u'false' Decimal('0')) (u'one' Decimal('1'))] >>> table = Table create('q32109154-test2' schema=[HashKey('hkey')]) >>> data['hkey'] = 'test2' >>> table use_boolean() >>> table put_item(data=data) >>> result = table get_item(hkey='test2') >>> print [(k v) for k v in result items()] [(u'hkey' u'test2') (u'zero' Decimal('0')) (u'true' True) (u'false' False) (u'one' Decimal('1'))] ```` |
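For items that were written before `use_boolean()` was turned on, the attribute still comes back as `Decimal('0')`/`Decimal('1')`. A small stdlib-only helper (a sketch, not part of boto; the name is mine) can normalize both representations when reading:

```python
from decimal import Decimal

def as_bool(value):
    # Items stored with use_boolean() give real booleans; legacy items
    # give Decimal('0')/Decimal('1'). Accept both.
    if isinstance(value, bool):
        return value
    if isinstance(value, Decimal):
        return bool(int(value))
    raise TypeError("not a boolean-like value: %r" % (value,))
```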
Different serializers for serializing/deserializing using Django REST framework I have got a model with a recursive relationship to itself: ````class Tweet(models Model): text = models CharField(max_length=140) original = models ForeignKey("self" null=True blank=True) ```` And a serializer that renders the original Tweet inline: ````class TweetSerializer(serializers ModelSerializer): class Meta: model = Tweet fields = ('id' 'text' 'original' 'original_id') original_id = serializers IntegerField(source='original_id' required=False) def to_native(self obj): ret = super(TweetSerializer self) to_native(obj) del ret['original_id'] return ret TweetSerializer base_fields['original'] = TweetSerializer(source='original' read_only=True) ```` As you can see I have also got an original_id field that is removed in `to_native` The purpose of `original_id` is to allow me to set the original_id of a new tweet rather than having to supply a full blown Tweed object to the `original` field You could say that I am using it as a write only field This seems a bit clunky though Is there a better way to do it? | OK two points: - Have you tried using <a href="http://django-rest-framework org/api-guide/relations html#primarykeyrelatedfield" rel="nofollow">PrimaryKeyRelatedField</a> for your `original_id`? It would seem to target your use-case specifically Combined with the <a href="http://django-rest-framework org/api-guide/serializers html#specifying-nested-serialization" rel="nofollow">depth option</a> it may give you everything you need - You can switch serializers (e g based on request method) by overriding <a href="http://django-rest-framework org/api-guide/generic-views html#genericapiview" rel="nofollow">`get_serializer_class()`</a> on your view Not sure if you will get the exact behaviour you want here though |
Flask-admin editing relationship giving me object representation of Foreign Key object I have a flask project and I am getting started learning the flask-admin module SqlAlchemy schema for the required tables ````import datetime import sqlalchemy from sqlalchemy ext declarative import declarative_base from sqlalchemy orm import backref relationship Base = declarative_base() class Workgroup(Base): __tablename__ = 'workgroups' id = sqlalchemy Column(sqlalchemy Integer primary_key=True autoincrement=True ) name = sqlalchemy Column(sqlalchemy String(16)) shorthand = sqlalchemy Column(sqlalchemy String(4)) def __unicode__(self): return self name class Drive(Base): """ A drive in an edit station """ __tablename__ = 'drives' id = sqlalchemy Column(sqlalchemy Integer primary_key=True autoincrement=True ) name = sqlalchemy Column(sqlalchemy String(64)) computer_id = sqlalchemy Column(sqlalchemy Integer sqlalchemy ForeignKey(Computer id) ) computer = relationship('Computer' backref='drives') is_active = sqlalchemy Column(sqlalchemy Boolean) free_space = sqlalchemy Column(sqlalchemy BigInteger) used_space = sqlalchemy Column(sqlalchemy BigInteger) total_space = sqlalchemy Column(sqlalchemy BigInteger) percentage_full = sqlalchemy Column(sqlalchemy Float) boot_time = sqlalchemy Column(sqlalchemy DateTime) last_changed_workgroup = sqlalchemy Column(sqlalchemy DateTime) last_checked_in = sqlalchemy Column(sqlalchemy DateTime) last_notified = sqlalchemy Column(sqlalchemy DateTime) image_version = sqlalchemy Column(sqlalchemy String(64)) image_date = sqlalchemy Column(sqlalchemy DateTime) current_workgroup_id = sqlalchemy Column(sqlalchemy Integer sqlalchemy ForeignKey(Workgroup id) ) workgroup = relationship('Workgroup' backref='drives') ```` <h2>Admin Test</h2> ````class DriveAdmin(sqla ModelView): column_display_pk = True column_hide_backrefs = False column_display_all_relations = True form_columns = [ 'computer_id' 'workgroup name' ] column_list = ('computer name' 'name' 
'workgroup' 'computer short_description' 'computer notes' 'computer station_type description' 'computer room name') class WorkgroupAdmin(sqla ModelView): column_display_pk = True # optional but I like to see the IDs in the list column_hide_backrefs = False column_list = ('id' 'name' 'shorthand') # Create admin admin = admin Admin(app name='Example: SQLAlchemy2' template_mode='bootstrap3') admin add_view(WorkgroupAdmin(schema Workgroup db)) admin add_view(DriveAdmin(schema Drive db)) ```` replacing form columns for 'workgroup' with 'workgroup name' gives me an invalid model property name even though I have successfully used schema workgroup name elsewhere in code The resulting admin form looks like this <a href="http://i stack imgur com/iQsPF png" rel="nofollow"><img src="http://i stack imgur com/iQsPF png" alt="enter image description here"></a> How do I go about getting the workgroup name value to appear as opposed to the object representation? Thanks for reading! | You need to get the workgroup class to return its name via the <strong>repr</strong> function That way it will show in the field ````class Workgroup(Base): __tablename__ = 'workgroups' id = sqlalchemy Column(sqlalchemy Integer primary_key=True autoincrement=True ) name = sqlalchemy Column(sqlalchemy String(16)) shorthand = sqlalchemy Column(sqlalchemy String(4)) def __unicode__(self): return self name def __repr__(self): return '<Workgroup %r>' % (self name) ```` |
What did Liu Shankun do with the pictures he took of collapsed schools? | put them online |
Scrapy nested selectors with conditions I am trying to run the following code on the node that can be found here: <a href="http://pastebin com/r0GaCVLh" rel="nofollow">http://pastebin com/r0GaCVLh</a> Basically: if my node is a `table fielded` I want to do a specific processing on the table otherwise if it is a `table substitutes` I want a different one For a reason completely beyond me the conditions: ````if table xpath(' /*[@class="table fielded"]'): ```` and ````if table xpath(' /*[@class="table substitutes"]'): ```` seem to fail And the variable `fielded` is never set Here is the snippet of the code that does not work (the `print` are only there for debut purposes): ````for table in sel xpath(' //table[@class="table fielded" or @class="table substitutes"]'): print table extract() if table xpath(' /*[@class="table fielded"]'): fielded = True print fielded if table xpath(' /*[@class="table substitutes"]'): fielded = False print fielded ```` | Let us take a look at this line: ````if table xpath(' /*[@class="table fielded"]') ```` Here you are searching for any element that has a `table fielded` class <them>inside the already found `table`</them> Obviously there are no elements matching this locator Instead extract the `@class` value and make a decision based on it is value: ````for table in sel xpath(' //table[@class="table fielded" or @class="table substitutes"]'): table_class = table xpath("@class") extract_first() if "fielded" in table_class: # do smth ```` |
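The same extract-then-branch pattern can be checked outside Scrapy with the standard-library parser (the markup below is made up purely for illustration):

```python
import xml.etree.ElementTree as ET

snippet = """<root>
  <table class="table fielded"><tr><td>a</td></tr></table>
  <table class="table substitutes"><tr><td>b</td></tr></table>
</root>"""

flags = []
for table in ET.fromstring(snippet).iter("table"):
    # Read the element's own @class instead of searching for a
    # descendant with that class (which is why the original locator
    # never matched anything).
    flags.append("fielded" in table.get("class", ""))
# flags == [True, False]
```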
python tarfile error: struct error: unpack requires a string argument of length 4 I have a bunch of files on which I do the following processing : ````for file_name in filelist: tar_file = tarfile open(file_name) tar_file extractall(" /" "location") ```` For a particular file I am getting this error immediately after the `tarfile open` call: ````$ file file_name tgz file_name tgz: gzip compressed data from Unix last modified: Mon Dec 16 16:33:34 2013 ```` Traceback (most recent call last): ```` File "<stdin>" line 1 in <module> File "/usr/lib/python2 7/tarfile py" line 1660 in open return func(name "r" fileobj **kwargs) File "/usr/lib/python2 7/tarfile py" line 1727 in gzopen **kwargs) File "/usr/lib/python2 7/tarfile py" line 1705 in taropen return cls(name mode fileobj **kwargs) File "/usr/lib/python2 7/tarfile py" line 1574 in __init__ self firstmember = self next() File "/usr/lib/python2 7/tarfile py" line 2319 in next tarinfo = self tarinfo fromtarfile(self) File "/usr/lib/python2 7/tarfile py" line 1239 in fromtarfile buf = tarfile fileobj read(BLOCKSIZE) File "/usr/lib/python2 7/gzip py" line 256 in read self _read(readsize) File "/usr/lib/python2 7/gzip py" line 320 in _read self _read_eof() File "/usr/lib/python2 7/gzip py" line 339 in _read_eof isize = read32(self fileobj) # may exceed 2GB File "/usr/lib/python2 7/gzip py" line 25 in read32 return struct unpack("<I" input read(4))[0] struct error: unpack requires a string argument of length 4 ```` Question: How can I avoid this? 
| You want some exception handling:
````
for file_name in filelist:
    try:
        tar_file = tarfile.open(file_name)
        tar_file.extractall("./", "location")
    except struct.error as e:  # on Python < 2.6 write: except struct.error, e:
        print "Corrupt:", file_name
    except tarfile.TarError as e:
        print "Tar error (%s): %s" % (str(e), file_name)
````
This way you see the error, document it, but keep going <a href="http://docs python org/2 7/library/tarfile html" rel="nofollow">http://docs python org/2 7/library/tarfile html</a> documents the exceptions from the tarfile module
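If you would rather detect truncated archives up front, test-reading the gzip stream reaches the corrupt trailer (where the struct error in the traceback comes from) without extracting anything. A sketch (the helper name is mine, written for Python 3):

```python
import gzip
import tarfile
import zlib

def is_valid_tgz(path):
    # Decompress the whole stream so the gzip trailer is actually
    # checked; truncated/corrupt files raise before any extraction.
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1 << 16):
                pass
    except (OSError, EOFError, zlib.error):
        return False
    return tarfile.is_tarfile(path)
```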
Minimize overhead in Python multiprocessing Pool with numpy/scipy I have spent several hours on different attempts to parallelize my number-crunching code but it only gets slower when I do so Unfortunately the problem disappears when I try to reduce it to the example below and I do not really want to post the whole program here So the question is: what pitfalls should I avoid in this type of program? (Note: follow-up after Unutbu's answer is at the bottom ) Here are the circumstances: - It is about a module that defines a class `BigData` with a lot of internal data In the example there is one list `ff` of interpolation functions; in the actual program there are more e g `ffA[k]` `ffB[k]` `ffC[k]` - The calculation would be classified as "embarrassingly parallel": the work can be done on smaller chunks of data at a time In the example that is `do_chunk()` - The approach shown in the example would result in my actual program in the worst performance: about 1 second per chunk (on top of 0 1 second or so of actual calculation time when done in a single thread) So for n=50 `do_single()` would run in 5 seconds and `do_multi()` would run in 55 seconds - I also tried to split up the work by slicing the `xi` and `yi` arrays into contiguous blocks and iterating over all `k` values in each chunk That worked a bit better Now there was no difference in total execution time whether I used 1 2 3 or 4 threads But of course I want to see an actual speedup! 
- This may be related: <a href="http://stackoverflow com/questions/15414027/multiprocessing-pool-makes-numpy-matrix-multiplication-slower">Multiprocessing Pool makes Numpy matrix multiplication slower</a> However elsewhere in the program I used a multiprocessing pool for calculations that were much more isolated: a function (not bound to a class) that looks something like `def do_chunk(array1 array2 array3)` and does numpy-only calculations on that array There there was a significant speed boost - The CPU usage scales with the number of parallel processes as expected (300% CPU usage for three threads) ````#!/usr/bin/python2 7 import numpy as np time sys from multiprocessing import Pool from scipy interpolate import RectBivariateSpline _tm=0 def stopwatch(message=''): tm = time time() global _tm if _tm==0: _tm = tm; return print("%s: % 2f seconds" % (message tm-_tm)) _tm = tm class BigData: def __init__(self n): z = np random uniform(size=n*n*n) reshape((n n n)) self ff = [] for i in range(n): f = RectBivariateSpline(np arange(n) np arange(n) z[i] kx=1 ky=1) self ff append(f) self n = n def do_chunk(self k xi yi): s = np sum(np exp(self ff[k] ev(xi yi))) sys stderr write(" ") return s def do_multi(self numproc xi yi): procs = [] pool = Pool(numproc) stopwatch('Pool setup') for k in range(self n): p = pool apply_async( _do_chunk_wrapper (self k xi yi)) procs append(p) stopwatch('Jobs queued (%d processes)' % numproc) sum = 0 0 for k in range(self n): # Edit/bugfix: replaced p get by procs[k] get sum = np sum(procs[k] get(timeout=30)) # timeout allows ctrl-C interrupt if k == 0: stopwatch("\nFirst get() done") stopwatch('Jobs done') pool close() pool join() return sum def do_single(self xi yi): sum = 0 0 for k in range(self n): sum = self do_chunk(k xi yi) stopwatch('\nAll in single process') return sum def _do_chunk_wrapper(bd k xi yi): # must be outside class for apply_async to chunk return bd do_chunk(k xi yi) if __name__ == "__main__": stopwatch() n = 50 bd = 
BigData(n) m = 1000*1000 xi yi = np random uniform(0 n size=m*2) reshape((2 m)) stopwatch('Initialized') bd do_multi(2 xi yi) bd do_multi(3 xi yi) bd do_single(xi yi) ```` The output: ````Initialized: 0 06 seconds Pool setup: 0 01 seconds Jobs queued (2 processes): 0 03 seconds First get() done: 0 34 seconds Jobs done: 7 89 seconds Pool setup: 0 05 seconds Jobs queued (3 processes): 0 03 seconds First get() done: 0 50 seconds Jobs done: 6 19 seconds All in single process: 11 41 seconds ```` Timings are on an Intel Core i3-3227 CPU with 2 cores 4 threads running 64-bit Linux For the actual program the multi-processing version (pool mechanism even if using only one core) was a factor 10 slower than the single-process version <strong>Follow-up</strong> Unutbu's answer got me on the right track In the actual program `self` was pickled into a 37 to 140 MB object that needed to be passed to the worker processes Worse Python pickling is very slow; the pickling itself took a few seconds which happened for each chunk of work passed to the worker processes Other than pickling and passing big data objects the overhead of `apply_async` in Linux is very small; for a small function (adding a few integer arguments) it takes only 0 2 ms per `apply_async`/`get` pair So splitting up the work in very small chunks is not a problem by itself So I transmit all big array arguments as indices to global variables I keep the chunk size small for the purpose of CPU cache optimization The global variables are stored in a global `dict`; the entries are immediately deleted in the parent process after the worker pool is set up Only the keys to the `dict` are transmitted to the worker procesess The only big data for pickling/IPC is the new data that is created by the workers ````#!/usr/bin/python2 7 import numpy as np sys from multiprocessing import Pool _mproc_data = {} # global storage for objects during multiprocessing class BigData: def __init__(self size): self blah = np random uniform(0 1 
size=size) def do_chunk(self k xi yi): # do the work and return an array of the same shape as xi yi zi = k*np ones_like(xi) return zi def do_all_work(self xi yi num_proc): global _mproc_data mp_key = str(id(self)) _mproc_data['bd'+mp_key] = self # BigData _mproc_data['xi'+mp_key] = xi _mproc_data['yi'+mp_key] = yi pool = Pool(processes=num_proc) # processes have now inherited the global variable; clean up in the parent process for v in ['bd' 'xi' 'yi']: del _mproc_data[v+mp_key] # setup indices for the worker processes (placeholder) n_chunks = 45 n = len(xi) chunk_len = n//n_chunks i1list = np arange(0 n chunk_len) i2list = i1list chunk_len i2list[-1] = n klist = range(n_chunks) # placeholder procs = [] for i in range(n_chunks): p = pool apply_async( _do_chunk_wrapper (mp_key i1list[i] i2list[i] klist[i]) ) sys stderr write(" ") procs append(p) sys stderr write("\n") # allocate space for combined results zi = np zeros_like(xi) # get data from workers and finish for i p in enumerate(procs): zi[i1list[i]:i2list[i]] = p get(timeout=30) # timeout allows ctrl-C handling pool close() pool join() return zi def _do_chunk_wrapper(key i1 i2 k): """All arguments are small objects """ global _mproc_data bd = _mproc_data['bd'+key] xi = _mproc_data['xi'+key][i1:i2] yi = _mproc_data['yi'+key][i1:i2] return bd do_chunk(k xi yi) if __name__ == "__main__": xi yi = np linspace(1 100 100001) np linspace(1 100 100001) bd = BigData(int(1e7)) bd do_all_work(xi yi 4) ```` Here are the results of a speed test (again 2 cores 4 threads) varying the number of worker processes and the amount of memory in the chunks (total bytes of the `xi` `yi` `zi` array slices) The numbers are in "million result values per second" but that does not matter so much for the comparison The row for "1 process" is a direct call to `do_chunk` with the full input data without any subprocesses
````
#Proc   125K   250K   500K   1000K   unlimited
1                                    0.82
2       4.28   1.96   1.3    1.31
3       2.69   1.06   1.06   1.07
4       2.17   1.27   1.23   1.28
````
The impact of data size in memory is quite significant The CPU has 3 MB shared L3 cache plus 256 KB L2 cache per core Note that the calculation also needs access to several MB of internal data of the `BigData` object Hence what we learn from this is that it is useful to do this kind of speed test For this program 2 processes is fastest followed by 4 and 3 is the slowest | Try to reduce interprocess communication In the `multiprocessing` module all (single-computer) interprocess communication is done through Queues Objects passed through a Queue are pickled So try to send fewer and/or smaller objects through the Queue - Do not send `self` the instance of `BigData` through the Queue It is rather big and gets bigger as the amount of data in `self` grows: ````In [6]: import pickle In [14]: len(pickle dumps(BigData(50))) Out[14]: 1052187 ```` Every time `pool apply_async( _do_chunk_wrapper (self k xi yi))` is called `self` is pickled in the main process and unpickled in the worker process The size of `len(pickle dumps(BigData(N)))` grows as `N` increases - Let the data be read from a global variable On Linux you can take advantage of Copy-on-Write As <a href="http://stackoverflow com/a/15415690/190597">Jan-Philip Gehrcke explains</a>: <blockquote> After fork() parent and child are in an equivalent state It would be stupid to copy the entire memory of the parent to another place in the RAM That is [where] the copy-on-write principle [comes] in As long as the child does not change its memory state it actually accesses the parent's memory Only upon modification the corresponding bits and pieces are copied into the memory space of the child </blockquote> Thus you can avoid passing instances of `BigData` through the Queue by simply defining the instance as a global `bd = BigData(n)` (as you are already doing) and referring to its values in the worker processes (e g `_do_chunk_wrapper`) It basically amounts to removing `self` from the
call to `pool.apply_async`:

````
p = pool.apply_async(_do_chunk_wrapper, (k_start, k_end, xi, yi))
````

and accessing `bd` as a global, and making the necessary attendant changes to `do_chunk_wrapper`'s call signature.

- Try to pass longer-running functions `func` to `pool.apply_async`. If you have many quickly-completing calls to `pool.apply_async`, then the overhead of passing arguments and return values through the Queue becomes a significant part of the overall time. If instead you make fewer calls to `pool.apply_async` and give each `func` more work to do before returning a result, then interprocess communication becomes a smaller fraction of the overall time. Below, I modified `_do_chunk_wrapper` to accept `k_start` and `k_end` arguments, so that each call to `pool.apply_async` computes the sum for many values of `k` before returning a result.

<hr>

````
import math
import numpy as np
import time
import sys
import multiprocessing as mp
import scipy.interpolate as interpolate

_tm = 0
def stopwatch(message=''):
    tm = time.time()
    global _tm
    if _tm == 0: _tm = tm; return
    print("%s: %.2f seconds" % (message, tm - _tm))
    _tm = tm

class BigData:
    def __init__(self, n):
        z = np.random.uniform(size=n*n*n).reshape((n, n, n))
        self.ff = []
        for i in range(n):
            f = interpolate.RectBivariateSpline(
                np.arange(n), np.arange(n), z[i], kx=1, ky=1)
            self.ff.append(f)
        self.n = n

    def do_chunk(self, k, xi, yi):
        n = self.n
        s = np.sum(np.exp(self.ff[k].ev(xi, yi)))
        sys.stderr.write(".")
        return s

    def do_chunk_of_chunks(self, k_start, k_end, xi, yi):
        s = sum(np.sum(np.exp(self.ff[k].ev(xi, yi)))
                for k in range(k_start, k_end))
        sys.stderr.write(".")
        return s

    def do_multi(self, numproc, xi, yi):
        procs = []
        pool = mp.Pool(numproc)
        stopwatch('\nPool setup')
        ks = list(map(int, np.linspace(0, self.n, numproc+1)))
        for i in range(len(ks)-1):
            k_start, k_end = ks[i:i+2]
            p = pool.apply_async(_do_chunk_wrapper, (k_start, k_end, xi, yi))
            procs.append(p)
        stopwatch('Jobs queued (%d processes)' % numproc)
        total = 0.0
        for k, p in enumerate(procs):
            total += np.sum(p.get(timeout=30))  # timeout allows ctrl-C interrupt
            if k == 0: stopwatch("\nFirst get() done")
        print(total)
        stopwatch('Jobs done')
        pool.close()
        pool.join()
        return total

    def do_single(self, xi, yi):
        total = 0.0
        for k in range(self.n):
            total += self.do_chunk(k, xi, yi)
        stopwatch('\nAll in single process')
        return total

def _do_chunk_wrapper(k_start, k_end, xi, yi):
    return bd.do_chunk_of_chunks(k_start, k_end, xi, yi)

if __name__ == "__main__":
    stopwatch()
    n = 50
    bd = BigData(n)
    m = 1000*1000
    xi, yi = np.random.uniform(0, n, size=m*2).reshape((2, m))
    stopwatch('Initialized')
    bd.do_multi(2, xi, yi)
    bd.do_multi(3, xi, yi)
    bd.do_single(xi, yi)
````

yields

````
Initialized: 0.15 seconds

Pool setup: 0.06 seconds
Jobs queued (2 processes): 0.00 seconds

First get() done: 6.56 seconds
83963796.0404
Jobs done: 0.55 seconds

Pool setup: 0.08 seconds
Jobs queued (3 processes): 0.00 seconds

First get() done: 5.19 seconds
83963796.0404
Jobs done: 1.57 seconds

All in single process: 12.13 seconds
````

compared to the original code:

````
Initialized: 0.10 seconds

Pool setup: 0.03 seconds
Jobs queued (2 processes): 0.00 seconds

First get() done: 10.47 seconds
Jobs done: 0.00 seconds

Pool setup: 0.12 seconds
Jobs queued (3 processes): 0.00 seconds

First get() done: 9.21 seconds
Jobs done: 0.00 seconds

All in single process: 12.12 seconds
```` |
Boston is split between how many congressional districts? | two |
Programmatically read GSM modem port number Is there any way to read the GSM modem's port number programmatically, using Python, when I connect a mobile to a Windows XP machine? | Sorry, I don't know the Python syntax; this is just an idea to follow. You can use `SerialPort.GetPortNames();` to get the list of available ports on your system, and then send an `AT` command to each port. Whichever port responds with an `OK` is the port your modem is connected to.
What can alkaline ions in glass harm, in addition to electrical resistance? | loss tangent |
After a four-week period, who could refer a proposed measure to the Supreme Court? | Counsel General for Wales or the Attorney General |
Who can the Policy Committee call the meetings of? | null |
Horace de Saussure built the first box cooker in what year? | 1767 |
How to filter by field of field (multi level relationship) in django admin? With a models design like this:

````
class School(models.Model):
    name = models.CharField(max_length=50)

class Grade(models.Model):
    name = models.CharField(max_length=10)
    school = models.ForeignKey(School)

class Student(models.Model):
    name = models.CharField(max_length=100)
    grade = models.ForeignKey(Grade)
````

Since the `Student` object doesn't have a direct relation to the `School` object, how can I filter the django admin to only display students from a selected `school`? | Easy:

````
school_name = "your school name"
Students_in_school = Student.objects.filter(grade__school__name=school_name)
````

Sorry, I wrote this on the bus.
Python: else ValueError: (Specifically ValueError In This Case) I have a question which is unrelated to my code; I am just curious. Why is it that I (I don't know about you) can only use a ValueError with a try/except block? For example:

````
print("What is 1 + 1?")
while True:
    try:
        UserInput = int(input("Your answer here:"))
        if UserInput == 2:
            print("Congratulations, you are correct!")
            break
        else:
            print("That is incorrect. Try again!")
    except ValueError:
        print("That is not a number. Try again!")
````

This works perfectly fine (or at least it should), but why would this next piece of code not work?

````
print("What is 1 + 1?")
while True:
    UserInput = int(input("Your answer here:"))
    if UserInput == 2:
        print("Congratulations, you are correct!")
        break
    elif UserInput != 2:
        print("That is incorrect. Try again!")
    else(ValueError):
        print("That is not a number. Try again!")
````

When I run this I get this error:

````
Traceback (most recent call last):
  File "python", line 9
    else(ValueError):
                    ^
SyntaxError: invalid syntax
````

I know it is because ValueError only works (I think) with try/except, but why can't it work in the above scenario? I assume they would give the same results, but I don't know everything. Maybe one of you amazingly smart people can tell me why that won't work, or an alternative. Thank you for trying to clarify this for me :) | The SyntaxError in the second example comes from the fact that `else` takes no condition. The first example is totally OK; still, it is better to keep the try-block as short as possible:

````
print("What is 1 + 1?")
while True:
    try:
        UserInput = int(input("Your answer here:"))
    except ValueError:
        print("That is not a number. Try again!")
    else:
        if UserInput == 2:
            print("Congratulations, you are correct!")
            break
        else:
            print("That is incorrect. Try again!")
````
Avoid a Newline at End of File - Python I would like to avoid writing a newline character to the end of a text file in Python. This is a problem I run into a lot, and I am sure it can be fixed easily. Here is an example:

````
fileout = open('out.txt', 'w')
list = ['a', 'b', 'c', 'd']
for i in list:
    fileout.write('%s\n' % (i))
````

This prints a \n character at the end of the file. How can I modify my loop to avoid this? | ````
fileout = open('out.txt', 'w')
list = ['a', 'b', 'c', 'd']
fileout.write('\n'.join(list))
````
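To make the trailing-newline behavior concrete, here is a self-contained sketch of the join-based approach (the temporary file name is arbitrary, not from the question):

```python
import os
import tempfile

items = ['a', 'b', 'c', 'd']

# '\n'.join puts the separator only *between* items,
# so no '\n' is written after the last one
path = os.path.join(tempfile.gettempdir(), 'out.txt')
with open(path, 'w') as fileout:
    fileout.write('\n'.join(items))

with open(path) as f:
    data = f.read()

print(repr(data))  # 'a\nb\nc\nd'
```

The loop version writes `'%s\n'` per item and therefore always ends the file with a newline; the join version cannot.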
Google App Engine: Storing Data & querying it I am wondering what the best way to store data is for my situation. I want to store a list of properties, like phone numbers, for one person. For example: name: John Doe, numbers: 0998234, 23443145, 2341234, 3425425. The problem is: given a number, how can I search through the 'numbers' property to locate John Doe? From what I have read, there is no LIKE statement in GAE? Thanks. EDIT: I am going to use Python to scrape data and insert it into a datastore. Then I am going to query this data using <a href="https://developers.google.com/eclipse/docs/cloudsql-jpatools" rel="nofollow">Google Cloud Endpoints</a> (Java JPA queries). At the minute I cannot search through multiple values in a property and return the corresponding entity, as there is no LIKE statement available in the GAE datastore. | You can use the repeated property. Here is an example:

````
class User(ndb.Model):
    name = ndb.StringProperty(required=True)
    numbers = ndb.StringProperty(repeated=True)

# save a user
john_doe = User(name='John Doe',
                numbers=['0998234', '23443145', '2341234', '3425425'])
john_doe.put()

# query with a number '0998234'
query = User.query(User.numbers == '0998234')
for matched_user in query:
    # do something with matched_user
````
Updating a line of text in Python 3 I want to update a line of text so that I do not have too many lines of output. Look at an installer program for an example. Instead of

<blockquote>
Installation status: 1%
Installation status: 2%
Installation status: 3%
</blockquote>

I want the same line to update every time the percentage changes. I already found a way to do so (well, it is actually tricking the user), but it is kind of bad because all the lines from above disappear. I am talking about importing 'os' and then doing 'os.system("clear")'. Is there a better way of doing so? BY THE WAY: I am talking about a few hundred changes per second. The installer is just an example. | Use the appendage `\r` and then `sys.stdout.flush()`. To continue the installer example:

````
import sys
import time

for i in range(100):
    sys.stdout.write("\rInstallation Progress: %d percent" % i)
    time.sleep(.05)
    sys.stdout.flush()
````

Happy coding! EDIT: I used % to represent percent completed; the result was an incomplete placeholder. My apologies!
Trouble using python's gzip/"How do I know what compression is being used?" Ok, so I have got an open source Java client/server program that uses packets to communicate. I am trying to write a Python client for said program, but the contents of the packets seem to be compressed. A quick perusal through the source code suggested gzip as the compression scheme (since that was the only compression module imported in the code that I could find), but when I saved the data from one of the packets out of Wireshark and tried to do

````
import gzip
f = gzip.open('compressed_file')
f.read()
````

it told me that this was not a gzip file because the header was wrong. Can someone advise me what I have done wrong here? Did I change or mess up the format when I saved it out? Do I need to strip away some of the extraneous data from the packet before I try running this block on it?

````
if (zipped) {
    // XML encode the data and GZIP it
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    Writer zipOut = new BufferedWriter(new OutputStreamWriter(
        new GZIPOutputStream(baos)));
    PacketEncoder.encodeData(packet, zipOut);
    zipOut.close();
    // Base64 encode the compressed data
    // Please note I could not get anything other than a
    // straight stream-to-stream encoding to work
    byte[] zipData = baos.toByteArray();
    ByteArrayOutputStream base64 = new ByteArrayOutputStream(
        (4 * zipData.length + 2) / 3);
    Base64.encode(new ByteArrayInputStream(zipData), base64, false);
````

EDIT: Ok, sorry, I have the information requested here. This was gathered using Wireshark to listen in on communication between two running copies of the original program on different computers. To get the hex stream below, I used the "Copy -> Hex (Byte Stream)" option in Wireshark:
001321cdc68ff4ce46e4f00d0800450000832a85400080061e51ac102cceac102cb004f8092a9909b32c10e81cb25018f734823e00000100000000000000521f8b08000000000000005bf39681b59c85818121a0b4884138da272bb12c512f27312f5dcf3f292b35b9c47ac2b988f902c59a394c0c0c150540758c250c5c2ea5b9b9950a2e89258900aa4c201a3f000000
I know this will contain the string "Dummy Data" in it. I believe it should also contain "Jonathanb" (the player name I used to send the message) and the integer 80 (80 is the command # for "Chat", as far as I can gather from the code). | You could try using the standard library module <a href="http://docs.python.org/library/zlib.html?highlight=zlib#module-zlib" rel="nofollow">zlib</a> directly -- that is what <a href="http://docs.python.org/library/gzip.html?highlight=gzip#module-gzip" rel="nofollow">gzip</a> uses for the compress/decompress part. If the whole packet is not liked by the <a href="http://docs.python.org/library/zlib.html?highlight=zlib#zlib.decompress" rel="nofollow">decompress</a> function, you can try using different values of `wbits` and/or slicing a few bytes off the packet's front (if you could "reverse engineer" exactly <strong>how</strong> the Java code is compressing that packet -- even just understand how many `wbits` it is using, or whether it is putting out any prefix before the compressed data -- that would help immensely, of course). The only likely "damage" you might have done to the file itself would be on Windows, if you had written it without specifying `'wb'` to use binary mode -- writing it in "text mode" on Windows would make the file unusable. Just saying !-)
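As a hedged sketch of the suggested approach (the payload below is fabricated gzip-plus-base64 data for illustration, not the actual capture): `zlib.decompress` with `wbits = 16 + zlib.MAX_WBITS` accepts a gzip header, while a negative `wbits` would mean a raw deflate stream with no header.

```python
import base64
import gzip
import io
import zlib

# Stand-in for the packet payload: base64 text wrapping gzip-compressed XML.
# The player name and command number here are taken from the question, but
# the XML structure is invented.
original = b'<chat player="Jonathanb" command="80">Dummy Data</chat>'
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
    gz.write(original)
payload = base64.b64encode(buf.getvalue())

# Decode: strip the base64 layer, then let zlib handle the gzip header.
# wbits = 16 + zlib.MAX_WBITS tells zlib to expect a gzip wrapper;
# wbits = -zlib.MAX_WBITS would mean a headerless raw deflate stream.
compressed = base64.b64decode(payload)
decoded = zlib.decompress(compressed, 16 + zlib.MAX_WBITS)
print(decoded)
```

If the real packet still will not decompress, the bytes before the `1f8b` gzip magic in the hex dump (Ethernet/IP/TCP headers and any length prefix the protocol adds) would need to be sliced off first.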
Who advanced the idea of a final stage between the PPNA and PPNB? | null |
How can I make my python code run on the AWS slave nodes using Apache-Spark? I am learning Apache-Spark as well as its interface with AWS. I have already created a master node on AWS with 6 slave nodes. I also have the following Python code written with Spark:

````
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("print_num").setMaster("AWS_master_url")
sc = SparkContext(conf=conf)

# Make the list be distributed
rdd = sc.parallelize([1, 2, 3, 4, 5])

# Just want each of 5 slave nodes to do the mapping work
temp = rdd.map(lambda x: x + 1)

# Also want another slave node to do the reducing work
for x in temp.sample(False, 1).collect():
    print x
````

My question is how I can set up the 6 slave nodes in AWS such that 5 slave nodes do the mapping work as I mentioned in the code, and the other slave node does the reducing work. I would really appreciate it if anyone could help. | From what I understand, you cannot specify that five nodes serve as map nodes and one as a reduce node within a single Spark cluster. You could have two clusters running: one with five nodes for running the map tasks, and one for the reduce tasks. Then you could break your code into two different jobs and submit them to the two clusters sequentially, writing the results to disk in between. However, this might be less efficient than letting Spark handle shuffle communication. In Spark, the call to <em>map()</em> is "lazy" in the sense that it does not execute until an "action" is called. In your code this would be the call to <em>collect()</em>. See <a href="https://spark.apache.org/docs/latest/programming-guide.html" rel="nofollow">https://spark.apache.org/docs/latest/programming-guide.html</a>. Out of curiosity, is there a reason you want one node to handle all reductions? Also, based on the documentation, the <em>sample()</em> function takes three parameters. Could you post stderr and stdout from this code?
William Howard Russell wrote for what newspaper at the time? | The Times |
dev_appserver.py does not load appengine_config.py I have an App Engine app running locally using `dev_appserver.py`. In the app directory I have the standard `appengine_config.py`, which is supposed to execute on every request made to the app. In the past it used to execute the module, but suddenly it stopped doing so. In another app that runs on the same machine it works fine. I checked with Process Monitor to see if the file is loaded from another location, but it is not (I can see the other app's file being loaded). Any ideas why `appengine_config.py` is not executed? | I had the same problem before. I solved it by changing the loading method in `app.yaml` to WSGI, for example from:

<blockquote>
script: my_app/main.py
</blockquote>

To:

<blockquote>
script: my_app.main.application
</blockquote>

Let me know if it works for you.
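For reference, a minimal sketch of how the two handler styles look in `app.yaml` (the module and application names are placeholders taken from the answer, not a complete config):

````
handlers:
- url: /.*
  # CGI style: script points at a file path
  # script: my_app/main.py
  # WSGI style: script points at a WSGI application object
  script: my_app.main.application
````

With the WSGI style, the value is an importable dotted path to the application object rather than a filesystem path.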
Having an excess of 500ppm of zinc in soil helps with what? | null |
python: when defining a function, why does the value change without using return?

````
>>> lst = [1]
>>> def f(lst):
...     lst[0] = 3
...
>>> f(lst)
>>> lst
[3]
````

I did not return lst in f; why does lst change? | It is because of the way lists work in Python: the function is not sent a copy of the list. It is sent a reference to the place in memory where the already existing list lives, so mutating it inside the function changes the same object the caller sees.
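A small demonstration of the distinction behind this (the function names here are invented): mutating the shared list is visible to the caller, while rebinding the parameter name is not.

```python
def mutate(lst):
    lst[0] = 3     # modifies the object the caller also holds

def rebind(lst):
    lst = [99]     # only rebinds the local name; the caller is unaffected

a = [1]
mutate(a)
print(a)   # [3]  -- same object was modified in place

b = [1]
rebind(b)
print(b)   # [1]  -- rebinding inside the function did not touch b
```

No `return` is needed for `mutate` to have an effect, because both names point at one list object.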
What was the second largest denomination in North carolina in 2010? | United Methodist Church |
Django - Creating form fields by iterating through data in my database? I would like to set up a form based on the values I have stored in my database. Let us say I have some models like this:

````
class Questions(models.Model):
    question = models.CharField(max_length=350)

class Answers(models.Model):
    question = models.ForeignKey(Questions)
    answer = models.CharField(max_length=350)
````

And I would like to create a form in this way:

````
for x in Questions:
    answer = forms.CharField(label=x.question)
````

And thereby have exactly the same number of answer fields in the form as there are questions stored in the database. Any ideas? | This is the solution I came up with. It seems to work okay, but I feel like there is a cleaner solution I am missing. In my forms.py:

````
class AnswerForm(forms.Form):
    def __init__(self, *args, **kwargs):
        super(AnswerForm, self).__init__(*args, **kwargs)
        for x in Questions.objects.all():
            self.fields[x.question] = ModelChoiceField(queryset=
````
Update a field in a django model only if it needs updating Suppose I have some django model and I am updating an instance:

````
def modify_thing(id, new_blah):
    mything = MyModel.objects.get(pk=id)
    mything.blah = new_blah
    mything.save()
````

My question is: if it happened that it was already the case that `mything.blah == new_blah`, does django somehow know this and not bother to save this [non-]modification again? Or will it always go to the db (MySQL in my case) and update the data? If I want to avoid an unnecessary write, does it make any sense to do something like:

````
if mything.blah != new_blah:
    mything.blah = new_blah
    mything.save()
````

given that the record would have to be read from the db anyway in order to do the comparison in the first place? Is there any efficiency to be gained from this sort of construction, and if so, is there a less ugly way of doing it than with the if statement in Python? | You can use <a href="https://docs.djangoproject.com/en/1.6/topics/signals/" rel="nofollow">Django Signals</a> to ensure that code like what you just posted doesn't <strong>write</strong> to the db. Take a look at `pre_save`; that is the signal you are looking for.
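Outside of Django, the "skip the write when nothing changed" pattern can be sketched with a snapshot taken at load time (the class and field names here are invented, and `writes` stands in for real database traffic):

```python
class Record:
    """Toy model that only 'writes' when a field actually changed."""

    def __init__(self, blah):
        self.blah = blah
        self._loaded = {'blah': blah}  # snapshot of the values read from the db
        self.writes = 0                # counts simulated UPDATE statements

    def save(self):
        # compare the current value against the loaded snapshot
        if self.blah != self._loaded['blah']:
            self.writes += 1           # a real implementation would UPDATE here
            self._loaded['blah'] = self.blah

r = Record('old')
r.save()            # unchanged -> no write
r.blah = 'new'
r.save()            # changed -> one write
r.save()            # unchanged again -> still one write
print(r.writes)     # 1
```

Since the instance was already read from the database, the comparison costs nothing extra; only genuinely changed values trigger a write.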