input_text (string, 1–40.1k chars) | target_text (string, 1–29.4k chars, ⌀ = null) |
---|---|
Courts in commonwealth nations are often influenced by what? | "by each others rulings" |
What are some ways to post python pandas dataframes to slack? How can I export a pandas dataframe to slack? df.to_json() seems like a potential candidate, coupled with the slack incoming webhook, but then parsing the message to display as a nice markdown/html-ized table is not obvious to me. Long time listener, first time caller, please go easy on me. | There is a `.to_html()` method on DataFrames, so that might work. But if you are just looking to cut and paste, <a href="https://pypi.python.org/pypi/tabulate" rel="nofollow">Tabulate</a> is a good choice. From the docs: ````from tabulate import tabulate
df = pd.DataFrame([["Name", "Age"], ["Alice", 24], ["Bob", 19]])
print tabulate(df, tablefmt="grid")
```` Returns ````+---+-------+-----+
| 0 | Name  | Age |
+---+-------+-----+
| 1 | Alice | 24  |
+---+-------+-----+
| 2 | Bob   | 19  |
+---+-------+-----+
```` Paste that in a code block in Slack and it should show up nicely. |
Strange: why Django related many-to-many lookup works this way (but not like in docs)? The Django docs describe how to query reverse m2m fields: <a href="https://docs djangoproject com/en/1 8/topics/db/queries/#many-to-many-relationships" rel="nofollow">https://docs djangoproject com/en/1 8/topics/db/queries/#many-to-many-relationships</a> According to <a href="http://stackoverflow com/questions/9352662/how-to-use-the-reverse-of-a-django-manytomany-relationship">this answer</a> we should use `related_name` as the first argument in the query But I am having trouble making this work (I am using Django 1 8 5) Here are my example models: ````class Nlpneutralfeature(models Model): neutral_feature = models CharField(you'found word' max_length=255 default='') class Userproject(models Model): user = models ForeignKey(settings AUTH_USER_MODEL related_name="project" verbose_name=("owner")) monitoring_words = models ManyToManyField(Nlpneutralfeature null=True blank=True related_name="monitoringwords") name = models CharField(you'Ðазвание пÑоекÑа' unique=True max_length=255) ```` So to get all `Nlpneutralfeature` rows that belong to `Userproject` where `Userproject name == '48'` I need to do: ````Nlpneutralfeature objects filter(userproject__monitoringwords__name='48') ```` But this does not work The error I get is: `Cannot resolve keyword *'userproject'* into field` So Django cannot understand that <them>'userproject'</them> is lowercased model name Userproject Now this is working: ````Nlpneutralfeature objects filter(monitoringwords__name='48') ```` How does Django know that `monitoringwords` is `related_name`? No error is strange to me sicne there is no `monitoringwords` field in the `Nlpneutralfeature` model! | Note the phrasing in the <a href="https://docs djangoproject com/en/1 8/ref/models/fields/#django db models ManyToManyField related_name" rel="nofollow">Django docs</a>: <blockquote> The name to user for the relation from the related object back to this one It is also the default value for <strong>related_query_name</strong> (the name to use for the reverse filter name from the target model) </blockquote> In your example the "target model" is `Nlpneutralfeature` When you set `related_name = 'monitoringwords'` on the ManyToManyField `Userproject monitoringwords` it tells Django to use that `related_name` to refer from `Nlpneutralfeature` model objects <them>back</them> to the corresponding `Userproject` model objects Correspondingly if you had declared `Userproject monitoringwords` with `related_name = 'foo'` the way to query for all `Nlpneautralfeatures` belonging to project 48 would be: `Nlpneautralfeature objects filter(foo__name='48')` |
basic example for generating Xml in Django. This is my view: ````def Getdata(request):
    data = serializers.serialize("xml", MessageHeaderModel2.objects.all())
    pprint.pprint(data)
    f = open('content.xml', 'w')
    myfile = File(f)
    myfile.write(data)
    myfile.close()
    return HttpResponse(data)
```` the output at `127.0.0.1:8000` gives this error ````Exception Value: Expected a character buffer object ```` But the '`data`' output in the terminal gives xml output. Why is the xml not generated in `content.xml`? | I could not understand why you have this line: ````myfile = File(f) ```` It's not a python method as far as I know. If you simply want to write the data to a file, remove this line: ````def Getdata(request):
    with open("file.xml", "w") as out:
        xml_serializer.serialize(MessageHeaderModel2.objects.all(), stream=out)
    data = serializers.serialize("xml", MessageHeaderModel2.objects.all())
    pprint.pprint(data)
    return HttpResponse(data)
```` |
python inward number spiral going backwards. I am trying to make a spiral that looks like this ````21 22 23 24 25
20  7  8  9 10
19  6  1  2 11
18  5  4  3 12
17 16 15 14 13
```` this is my code and it prints out a matrix, but the numbers start on the outside and work in, which is the opposite of what I want. How can I change this? ````def main():
    spiral = open('spiral.txt', 'r')  # open input text file
    dim = int(spiral.readline())  # read first line of text
    num = int(spiral.readline())  # read second line
    spiral.close()
    print(dim)
    if dim % 2 == 0:  # check to see if even
        dim += 1  # make odd
    print(dim)
    print(num)
    dx, dy = [0, 1, 0, -1], [1, 0, -1, 0]
    x, y, c = 0, -1, 1
    m = [[0 for i in range(dim)] for j in range(dim)]
    for i in range(dim + dim - 1):
        for j in range((dim + dim - i) // 2):
            x += dx[i % 4]
            y += dy[i % 4]
            m[x][y] = c
            c += 1
    print(m)
    print('\n'.join([' '.join([str(v) for v in r]) for r in m]))
    print(num)

main()
```` | replace ````m[x][y] = c ```` by ````m[x][y] = dim**2 + 1 - c ```` which basically counts backwards. Also you might want to have proper spacing with: ````print('\n'.join([' '.join(["{:2}".format(v) for v in r[::-1]]) for r in m])) ```` |
is it possible that ordering by priority in SQLAlchemy? There are 3 rows ````| Name | State   |
| 1    | Busy    |
| 2    | Online  |
| 3    | Offline |
```` I want to give priority "online" > "busy" > "offline" ````heroes = HeroModel.query \
    .filter(HeroModel.id.in_(hero_ids)) \
    .order_by(???????) \
    .all()
```` How can I order? not by alphabetical order | Use something like this in your select query ````case state when 'online' then 1 when 'offline' then 2 else 3 end as rnk ```` and use order by on this derived column `rnk` ````order by rnk ```` |
What supported the GL carpet? | six GCI sets controlling radar-equipped night-fighters |
python ternary operator with assignment I am new to python. I am trying to write this ````if x not in d:
    d[x] = {}
q = d[x]
```` in a more compact way using the ternary operator ````q = d[x] if x in d else (d[x] = {}) ```` but this gives a syntax error. What am I missing? | The reason that `else (d[x] = {})` is a syntax error is that in Python assignment is a <them>statement</them>. But the conditional operator expects <them>expressions</them>, and while every expression can be a statement, not every statement is an expression. |
Python subclassing immutable types I have the following class: ````class MySet(set):
    def __init__(self, arg=None):
        if isinstance(arg, basestring):
            arg = arg.split()
        set.__init__(self, arg)
```` This works as expected (initialising the set with the words of the string rather than the letters). However, when I want to do the same with the immutable version of set, the `__init__` method seems to be ignored: ````class MySet(frozenset):
    def __init__(self, arg=None):
        if isinstance(arg, basestring):
            arg = arg.split()
        frozenset.__init__(self, arg)
```` Can I achieve something similar with `__new__`? | Yes, you need to override the `__new__` special method: ````class MySet(frozenset):
    def __new__(cls, *args):
        if args and isinstance(args[0], basestring):
            args = (args[0].split(),) + args[1:]
        return super(MySet, cls).__new__(cls, *args)

print MySet('foo bar baz')
```` And the output is: ````MySet(['baz', 'foo', 'bar']) ```` |
Browser() initialization So I installed this module according to these steps: ````git clone https://github.com/makinacorpus/spynner.git
cd spynner
python setup.py install
```` And I get this kind of error and I do not have any clues what could be wrong <blockquote> <blockquote> <blockquote> import spynner browser1 = spynner.Browser() </blockquote> </blockquote> </blockquote> ````Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/spynner-1.11dev-py2.7.egg/spynner/browser.py", line 136, in __init__
    self.jquery = open(os.path.join(directory, self._jquery)).read()
IOError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/spynner-1.11dev-py2.7.egg/spynner/javascript/jquery-1.5.2.js'
```` | Yeah, in that folder 3 jquery files were missing and I added them manually. Now it works. |
convert a mixed list of strings to a list of integer I am not familiar with python and need help to convert a list containing strings and numbers (all represented as strings!) to a new list that includes only the numbers - input string: `['LOAD', '0x00134', '0', '0', 'R', 'E', '0x1df0', '0x1df0']` - result needed: `[0x00134, 0, 0, 0x1df0, 0x1df0]` All non-numeric entries like 'LOAD' and 'R', 'E' should be removed | ````def list2num(mylist):
    result = []
    for item in mylist:
        try:
            if item.lower().startswith("0x"):
                result.append(int(item, 16))
            else:
                result.append(int(item))
        except ValueError:
            pass
    return result
```` This gives you ````>>> numbers = ['LOAD', '0x00134', '0', '0', 'R', 'E', '0x1df0', '0x1df0']
>>> list2num(numbers)
[308, 0, 0, 7664, 7664]
```` Or better, if you just need an iterator, we do not have to build that result list in memory: ````def list2num(mylist):
    for item in mylist:
        try:
            if item.lower().startswith("0x"):
                yield int(item, 16)
            else:
                yield int(item)
        except ValueError:
            pass
```` |
What do airside areas include? | all areas accessible to aircraft, including runways, taxiways and aprons |
Unpacking a list from a function gives a list is not defined error ````import os.path
import re

def request ():
    print ("What file should I write to?")
    file = input ()
    thing = os.path.exists (file)
    if thing == "true":
        start = 0
    elif re.match ("^.+\.txt$", file):
        stuff = open (file, "w")
        stuff.write ("Requests on what to add to the server.")
        stuff.close ()
        start = 0
    else:
        start = 1
    go = "yes"
    list1 = (start, file, go)
    return list1

start = 1
while start == 1:
    request ()
    (start, file, go) = list1
```` I try to get list1 returned and unpack it in the loop so I can set variables that come after the while loop. Whenever I try and run this and enter "Thing.txt" I get `NameError: name 'list1' is not defined`. Am I missing something here? | Try this: ````# -*- coding: utf-8 -*-
#!/usr/bin/python
import os.path
import re

def request ():
    print ("What file should I write to?")
    file = input ()
    thing = os.path.exists (file)
    # thing is a boolean variable but not a string, no need to use '=='
    if thing:
        start = 0
    elif re.match ("^.+\.txt$", file):
        stuff = open (file, "w")
        stuff.write ("Requests on what to add to the server.")
        stuff.close ()
        start = 0
    else:
        start = 1
    go = "yes"
    list1 = (start, file, go)
    return list1

start = 1
while start == 1:
    # you need to get return value of function request
    list1 = request ()
    (start, file, go) = list1
    # Or you can simply write this way: (start, file, go) = request()
```` |
Which Chronicle contains the Originates Livoniae? | Chronicle of Henry of Livonia |
Python urlparse urljoin ````link = "http://www.test.com.tw/t/time.html"
date = "20150113"
```` I use `mergelink = link + '/' + date` to get the result I want, and I want to ask: is there any way to join `link` and `date` to get `http://www.test.com.tw/t/time.html/20150113`? | `str.format` is an easier way to add to the url: ````date = "20150113"
link = "http://www.test.com.tw/t/time.html/{}".format(date)
```` |
Which much anticipated third-party game with the name of a month of the year in it did Sony show at E4 2007? | null |
In pandas can I deeply copy a DataFrame including its index and column? First I create a DataFrame ````In [61]: import pandas as pd

In [62]: df = pd.DataFrame([[1], [2], [3]])
```` Then I deeply copy it by `copy`: In [63]: df2 = df.copy(deep=True) Now the `DataFrame`s are different ````In [64]: id(df), id(df2)
Out[64]: (4385185040, 4385183312)
```` However the `index` are still the same ````In [65]: id(df.index), id(df2.index)
Out[65]: (4385175264, 4385175264)
```` The same thing happens with columns. Is there any way that I can easily deeply copy it, not only values but also index and columns? | I wonder whether this is a bug in pandas; it is interesting because Index/MultiIndex (index and columns) are in some sense <a href="https://github.com/pydata/pandas/pull/4039">supposed to be immutable</a> (however I think these should be copies). For now it is easy to create your own method and add it to DataFrame: ````In [11]: def very_deep_copy(self):
    return pd.DataFrame(self.values.copy(), self.index.copy(), self.columns.copy())

In [12]: pd.DataFrame.very_deep_copy = very_deep_copy

In [13]: df2 = df.very_deep_copy()
```` As you can see this will create new objects (and preserve names): ````In [14]: id(df.columns)
Out[14]: 4370636624

In [15]: id(df2.columns)
Out[15]: 4372118776
```` |
Python Django - how to store http requests in the middleware? It might be that this question sounds pretty silly, but I can not figure out how to do this, I believe, simplest issue (because I just started learning Django). What I know is I should create a middleware file and connect it to the settings. Then create a view and a *.html page that will show these requests, and write it to the urls - how can one store the last (5/10/20 or any) http requests in the middleware and show them in a *.html page? The problem is I do not even know what exactly I should write into middleware.py and views.py in the way it could be displayed in the *.html file. Ideally this page should also be updated after new requests occur. I read the Django documentation and some other topics with middleware examples, but it seems to be pretty sophisticated for me. I would be really thankful for any insights and elucidation. P.S. One more time, sorry for a dummy question | You can implement your own `RequestMiddleware` (which plugs in before the URL resolution) or `ViewMiddleware` (which plugs in after the view has been resolved for the URL). In that middleware it is standard python: you have access to the filesystem, database, cache server, the same as you have anywhere else in your code. Showing the last <them>N</them> requests in a separate web page means you create a view which pulls the data from the place where your middleware is storing them. |
How to make Python ignore indentation and \n in re expressions I need to match a very large string like this: ````"""
A=
B=
C=
D=
"""
```` I used such an re expression to match the string: ````'''
A=.+?
[^ABCD]+?
C=.+?
'''
```` As the re expression is very long, I split it into several lines according to `PEP8`. But python matches my implicit `\n` as well, i.e. it is trying to match: ````A=(.+?)\\n[^ABCD]+?\\nC=(.+?)\\n ```` which is definitely not what I want. Furthermore, I cannot use a pretty indentation because python matches those white spaces as well. So how can I get out of this? I want pretty indentation and right matching | Specify the `re.VERBOSE` (also known as `re.X`) flag when creating the Regex: ````pattern = re.compile('''
A=.+?
[^ABCD]+?
C=.+?
''', re.VERBOSE)
```` From the <a href="https://docs.python.org/2/library/re.html#re.VERBOSE" rel="nofollow">docs</a>: <blockquote> This flag allows you to write regular expressions that look nicer. Whitespace within the pattern is ignored, except when in a character class or preceded by an unescaped backslash, and when a line contains a `'#'` neither in a character class or preceded by an unescaped backslash, all characters from the leftmost such `'#'` through the end of the line are ignored. </blockquote> |
How can I start an interactive python/ipython session from the middle of my python program? I have a python program which first does some tasks and then in certain conditions goes into an interactive mode where the user has access to an interactive python console with the current program's scope Right now I do this using the code module by calling code InteractiveConsole(globals()) interact('') (see <a href="http://docs python org/2/library/code html" rel="nofollow">http://docs python org/2/library/code html</a>) My problem is that the resulting interactive console lacks some functionalities that I usually get with the standard python console (i e the one you get by typing 'python' in a terminal) such as remembering the previous command etc Is there a way to get that same interactive console in the middle of my python program or even better yet ipython's interactive console? | Would <a href="https://cloud sagemath com/" rel="nofollow">SageMathCloud</a> work for you? SMC hosts IPython NBs online remembers and stores your history if you use a Notebook and is interactive From their page: - Use Sage ARE Octave Python Cython GAP Macaulay2 Singular and much more - Edit LaTeX documents with inverse and forward search and Sage mode - Collaboratively edit IPython notebooks Sage worksheets and all other document types - Write compile and run code in most programming languages - Use command line terminals |
What device uses far-infrared or terahertz radiation? | null |
Regarding simple python date calc I am trying to find out when someone will turn 1 billion seconds old. The user inputs when they were born. These values are then converted to seconds, and then I add 1 billion seconds and convert back into a date. However, when I enter certain dates python seems to mess up. Such an example is 1993/11/05 00:00:00, where python says the user will turn in the 0th month. Note I cannot use if/else or datetime. Here is my code: ````YEARSEC=(12*30*24*3600)
MONTHSEC=(3600*24*30)
DAYSEC=(24*3600)
HOURSEC=3600
MINUTESEC=60
year=int(input("Please enter the year in which you were born: "))
month=int(input("Please enter the month you were born: "))
day=int(input("Please enter the day you were born: "))
hour=int(input("Please enter the hour you were born: "))
minute=int(input("Please enter the minute you were born: "))
second=int(input("Please enter the second you were born: "))
year_calc=(year*YEARSEC)
month_calc=(month*MONTHSEC)
day_calc=(day*DAYSEC)
hour_calc=(hour*HOURSEC)
minute_calc=(minute*MINUTESEC)
s=(1000000000+year_calc+month_calc+day_calc+hour_calc+minute_calc+second)
year_num=int((s/YEARSEC))
s=(s-(year_num*YEARSEC))
month_num=int((s/MONTHSEC))
s=(s-(month_num*MONTHSEC))
day_num=int((s/DAYSEC))
s=(s-(DAYSEC*day_num))
hour_num=int((s/HOURSEC))
s=(s-(HOURSEC*hour_num))
minute_num=int((s/MINUTESEC))
s=(s-(MINUTESEC*minute_num))
print("You will turn 1,000,000,000 seconds old on: %04d/%02d/%02d %02d:%02d:%02d" %(year_num, month_num, day_num, hour_num, minute_num, s))
```` | Time calculations are tricky. Months do not all have 30 days, for example. Hours, minutes and seconds are numbered starting from 0, but days and months are numbered starting from 1, creating off-by-one bugs in your calculations (hint: ask for the month, then subtract one, do all the calculations, then add one when displaying it again). You are not accounting for leap years either. Best to use built-in tools, if only to check your eventual homework answer, although it looks like the teacher said to assume 30-day months ;^) ````>>> import datetime
>>> birthday = datetime.datetime(1993, 11, 05, 0, 0, 0)
>>> billion = birthday + datetime.timedelta(seconds=1000000000)
>>> billion.ctime()
'Mon Jul 14 01:46:40 2025'
```` |
How do I add a 'set' datatype to a Django model? This seems like such an obvious question that I am absolutely certain it is been asked before but unfortunately 'set' is not a very good keyword for a search I am trying to define a model that uses a 'set' datatype for one of it is member's -- think hours of the day and days of the week There is a very limited set of data that can be put in there but you can select none or all of the possible values Something might be allowed on any day of the week just weekends or anything in between (Edit: Technically I could manage this through a many-to-many relationship and then adding the days of the week and hours of the day classes but that seems rather absurd given that they are so heavily fixed Making the client add hours of the day / days of the week to the system is silly and they actually need to be fixed for consumers of relevant APIs to properly parse them) But I cannot figure out how to reproduce MySQLs 'set' datatype in Django | I approached the problem from the wrong direction search wise Instead of searching for a 'set' datatype (which is simply possible because 'set' is also used in the sense of 'set the variable to foo') I needed to look up multiple select options From there there are plenty of references that point me to this code snippet which is apparently the only option: <a href="http://djangosnippets org/snippets/1200/" rel="nofollow">http://djangosnippets org/snippets/1200/</a> |
Can eclipse pydev interpret a file as a python file without a suffix If I have a python file that has no suffix Can pydev read that file as a python file using the first line of the file if it includes a #!/usr/bin/python? I am not really concerned specifically about using that first line just that that line exists and might be useable If there is a manual way to mark a file as a python file without mucking with its suffix that would be fine as well | Just right-click on the file then hit "Open With" -> "Other" then choose "Python editor" and hit OK Eclipse will remember your choice and from then on will open that particular file in the Python editor when you double-click it |
Time-delayed grequests without using grequests map() This is the first time I have tried to use a library with less-than-ideal levels of documentation and example code so bear with me I have a tiny bit of experience with the Requests library but I need to send separate requests to a specific address every second: - Without waiting for the first request to complete handling the individual responses as they come in - The responses' content need to be parsed separately - While limiting the total number of connections I cannot figure out how to satisfy these conditions simultaneously `grequests map()` will give me the responses' content that I want but only in a batch after they have all completed `grequests send()` seems to only return a response object that does not contain the html text of the web page (I may be wrong about `grequests send()` but I have not yet found an example that pulls content from that object) Here is the code that I have so far: ````import grequests from time import sleep def print_res(res **kwargs): print res print kwargs headers = {'User-Agent':'Python'} req = grequests get('http://stackoverflow com' headers=headers hooks=dict(response=print_res) verify=False) for i in range(3): job = grequests send(req grequests Pool(10)) sleep(1) ```` The response I get: ````1 <Response [200]> {'verify': False 'cert': None 'proxies': {'http': 'http://127 0 0 1:8888' 'ht tps': 'https://127 0 0 1:8888'} 'stream': False 'timeout': None} 2 <Response [200]> {'verify': False 'cert': None 'proxies': {'http': 'http://127 0 0 1:8888' 'ht tps': 'https://127 0 0 1:8888'} 'stream': False 'timeout': None} 3 <Response [200]> {'verify': False 'cert': None 'proxies': {'http': 'http://127 0 0 1:8888' 'ht tps': 'https://127 0 0 1:8888'} 'stream': False 'timeout': None} ```` I have tried accessing the html response with `req content` and `job content` but neither work | Of course while writing up this question I realized that I had not tried to access `res content` which turns out to be exactly what I needed Lesson learned: The object that is returned to the response hook in the `grequests get()` statement has a `content` attribute which contains the text of the response sent from the server |
How to time how long a Python program takes to run? Is there a simple way to time a Python program's execution? clarification: Entire programs | For snippets, use the <a href="http://docs.python.org/library/timeit.html" rel="nofollow">`timeit`</a> module. For entire programs, use the <a href="http://docs.python.org/library/profile.html#module-pstats" rel="nofollow">`cProfile`</a> module. |
ImportError: No module named _backend_gdk I am starting to get some insight into interactive plotting with python and matplotlib using pyGTK+ Therefore I took a look at the example given at the matplotlib website: <a href="http://matplotlib org/examples/user_interfaces/gtk_spreadsheet html">http://matplotlib org/examples/user_interfaces/gtk_spreadsheet html</a> This is a short exerpt of the Code: ````#!/usr/bin/env python """ Example of embedding matplotlib in an application and interacting with a treeview to store data Double click on an entry to update plot data """ import pygtk pygtk require('2 0') import gtk from gtk import gdk import matplotlib matplotlib use('GTKAgg') # or 'GTK' from matplotlib backends backend_gtk import FigureCanvasGTK as FigureCanvas from numpy random import random from matplotlib figure import Figure ```` Ones I try to run this Script in the Terminal I get the following error: ````Traceback (most recent call last): File "gtk_spreadsheet py" line 15 in <module> from matplotlib backends backend_gtk import FigureCanvasGTK as FigureCanvas File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/matplotlib/backends/backend_gtk py" line 33 in <module> from matplotlib backends backend_gdk import RendererGDK FigureCanvasGDK File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/matplotlib/backends/backend_gdk py" line 29 in <module> from matplotlib backends _backend_gdk import pixbuf_get_pixels_array ImportError: No module named _backend_gdk ```` I have python 2 7 matplotlib 1 2 0 and pygtk 2 24 installed Can anyone figure out where the error is located? I think it might be connected to some linkin issues? Thanks a lot! | Note that the Debian/Ubuntu package you need is not 'pygtk2-devel' but 'python-gtk2-dev': ````sudo apt-get install python-gtk2-dev ```` should fix this error on these platforms |
Python dict call function as value I am creating a dict in python using returned json one of the values I want to be a shortuuid so I put a function as the value I want that function called and the value replaced with what that function returns Is this correct ? ```` tvMSG = {'fromEMAIL': uEmail 'toCHANNELID': channelID 'timeSENT': uTime} for msgkey subdict instakey in (('profilePIC' 'user' 'profile_picture') ('userNAME' 'user' 'username') ('msgBODY' 'caption' 'text') ('mainCONTENT' 'images' 'standard_resolution' ) ('tvshowID' shortuuid uuid())): this is the key/value in question: ('tvshowID' shortuuid uuid())): ```` This is the error I get: ```` TypeError: 'tuple' object is not callable ```` If not how do I make it work? | Thanks for the hints @ThierryJ ```` tvMSG = {'fromEMAIL': uEmail 'toCHANNELID': channelID 'timeSENT': uTime 'tvshowID: shortuuid uuid()} for msgkey subdict instakey in (('profilePIC' 'user' 'profile_picture') ('userNAME' 'user' 'username') ('msgBODY' 'caption' 'text') ('mainCONTENT' 'images' 'standard_resolution')): ```` There was no need to do any iteration I just took that call out of the loop and put it in the variable above the loop |
Is it possible to create unique instances that have the same input? I am working on code in Python that creates Compound objects (as in chemical compounds) that are be composed of Bond and Element objects These Element objects are created with some inputs about them (Name symbol atomic number atomic mass etc) If I want to populate an array with Element objects and I want the Element objects to be unique so I can do something to one and leave the rest unchanged but they should all have the information related to a 'Hydrogen' element This question <a href="http://stackoverflow com/questions/9467604/python-creating-multiple-instances-for-a-single-object-class">Python creating multiple instances for a single object/class</a> leads me to believe that I should create sub-classes to Element - ie a Hydrogen object and a Carbon object etc Is this doable without creating sub-classes and if so how? | Design your object model based on making the concepts make sense not based on what seems easiest to implement If in your application hydrogen atoms are a different type of thing than oxygen atoms then you want to have a `Hydrogen` class and an `Oxygen` class both probably subclasses of an `Element` class * If on the other hand there is nothing special about hydrogen or oxygen (e g if you do not want to distinguish between say oxygen and sulfur since they both have the same valence) then you do not want subclasses Either way you can create multiple instances It is just a matter of whether you do it like this: ````atoms = [Hydrogen() Hydrogen() Oxygen() Oxygen()] ```` ⦠or this: ````atoms = [Element(1) Element(1) Element(-2) Element(-2)] ```` If your instances take a lot of arguments and you want a lot of instances with the same arguments repeating yourself like this can be a bad thing But you can use a loopâeither an explicit statement or comprehensionâto make it better: ````for _ in range(50): atoms append(Element(group=16 valence=2 number=16 weight=32 066)) ```` ⦠or: ````atoms extend(Element(group=16 valence=2 number=16 weight=32 066) for _ in range(50)) ```` <hr> * Of course you may even want further subclasses e g to distinguish Oxygen-16 Oxygen-17 Oxygen-18 or maybe even different mixtures like the 99 762% Oxygen-16 with small amounts of -18 and tiny bits of the others that is standard in Earth's atmosphere vs the different mixture that was common millions of years ago⦠|
I cannot change the style of text in Word documents with Python-docx I created a word document which contains the text <blockquote> Hello You owe me ${debt} Please pay me back soon </blockquote> in Times New Roman size 12 The file name is debtTemplate docx I would like to replace {debt} by an actual number (1 20) using python-docx I tried that following code: ````from docx import Document document = Document("debtTemplate docx") paragraphs = document paragraphs debt = "1 20" paragraph = paragraphs[0] text = paragraph text newText = text format(debt=debt) paragraph clear() paragraph add_run(newText) document save("debt docx") ```` This results in a new document with the desired text but in Calabri font size 11 I would like the font to be like the original: Times New Roman size 12 I know that you can add a style variable to `paragraph add_run()` so I tried that but nothing work Eg `paragraph add_run(newText style="Strong")` did not even change anything Does anyone know what I can do? EDIT: here is a modified version of my code that I had hoped would work but did not ````from docx import Document document = Document("debtTemplate docx") document save("debt docx") paragraphs = document paragraphs debt = "1 20" paragraph = paragraphs[0] style = paragraph style text = paragraph text newText = text format(debt=debt) paragraph clear() paragraph add_run(newText style) document save("debt docx") ```` | This page in the docs should help you understand why the style is not having an effect It is a pretty easy fix: <a href="http://python-docx readthedocs org/en/latest/user/styles html" rel="nofollow">http://python-docx readthedocs org/en/latest/user/styles html</a> I like a couple other things about what you have found though: - Using the str format() method to do placeholder replacement is a nice easy way to do lightweight text replacement I will have to add that to the documentation as an approach to simple custom document generation - In the XML for a paragraph there is an optional element called `<w:defRPr>` which Word uses to indicates the default formatting for any new text added to the paragraph like if you started typing after placing your insertion point at the end of the paragraph Right now `python-docx` ignores that element That is why you are getting the default Calibri 11 instead of the Times New Roman 12 you started with But a useful feature might be to use that element if present to assign run properties to any new runs added at the end of the paragraph If you want to add that as a feature request to the GitHub tracker we will take a look at getting it implemented |
graphQL multiple mutations transaction Apparently graphQL mutations are executed one by one sequentially Source : - <a href="https://learngraphql com/basics/invoking-mutations/4" rel="nofollow">https://learngraphql com/basics/invoking-mutations/4</a> <blockquote> In GraphQL mutations are executed as a sequence Otherwise it is hard to detect errors like adding the same author again and again It is totally up to the GraphQL server implementation to implement mutations like this Reference NodeJS implementation and other community implementations for Python and Scala follow this </blockquote> If I understand it right this does this prevent : - executing the requests in parallel - the use of transactions over multiple requests What is the rationale behind this design decision ? Are there other projects that do it differently ? | Actually GraphQL highly encourages request concurrency Requests can be handled in parallel Each request performs the individual mutations of that request in serial but multiple requests can be processed concurrently It is important to denote the difference between mutations and requests especially regarding concurrency It is also important to denote that GraphQL tells you nothing about how the edits are applied outside of a single request It is your code abstraction you decide whether to use SQL begin commit to block database writes or just make the update calls directly and roll the dice Serial edit processing on a transaction is a very common practice in database design and a set of commands for this can be seen in most database languages In SQL mutations are often bracketed by BEGIN and COMMIT In Redis a MULTI EXEC block offers this functionality This is primarily what I have seen However unordered parallel edit processing certainly is possible as long as you make sure the results are path independent and you can find a way to guarantee all ACID properties hold There are ways to do this in many languages but I can only think of an example implementation of this in Redis off the top of my head You could map the edits to a set of short Lua scripts that check the value of the transaction key for their serialized selves and if they found a match they would return before applying Otherwise they would apply the edit and append the serialized edit to the transaction body NOTE: If you have dependent edits (Create table push entry to table) you can really shoot yourself in the foot avoiding serial edit execution As far as transactions over multiple requests? I have never really used them and this thread is more suited for that question <a href="http://stackoverflow com/questions/7035259/multi-step-database-transaction-split-across-multiple-http-requests">Multi-step database transaction split across multiple HTTP requests</a> |
Have an SQLAlchemy SQLite "create_function" issue with datetime representations We have sqlite databases and datetimes are actually stored in Excel format (there is a decent reason for this; it is our system's standard representation of choice and the sqlite databases may be accessed by multiple languages/systems) Have been introducing Python into the mix with great success in recent months and SQLAlchemy is a part of that The ability of the sqlite3 dbapi layer to swiftly bind custom Python functions where SQLite lacks a given SQL function is particularly appreciated I wrote an ExcelDateTime type decorator and that works fine when retrieving result sets from the sqlite databases; Python gets proper datetimes back However I am having a real problem binding custom python functions that expect input params to be python datetimes; I would have thought this was what the bindparam was for but I am obviously missing something as I cannot get this scenario to work Unfortunately modifying the functions to convert from excel datetimes to python datetimes is not an option and neither is changing the representation of the datetimes in the database as more than one system/language may access it The code below is a self-contained example that can be run "as-is" and is representative of the issue The custom function "get_month" is created but fails because it receives the raw data not the type-converted data from the "Born" column At the end you can see what I have tried so far and the errors it spits out Is what I am trying to do impossible? Or is there a different way of ensuring the bound function receives the appropriate python type? It is the only problem I have been unable to overcome so far would be great to find a solution! ````import sqlalchemy types as types from sqlalchemy import create_engine Table Column Integer String MetaData from sqlalchemy sql expression import bindparam from sqlalchemy sql import select text from sqlalchemy interfaces import PoolListener import datetime # setup type decorator for excel<>python date conversions class ExcelDateTime( types TypeDecorator ): impl = types FLOAT def process_result_value( self value dialect ): lxdays = int( value ) lxsecs = int( round((value-lxdays) * 86400 0) ) if lxsecs == 86400: lxsecs = 0 lxdays = 1 return ( datetime datetime fromordinal(lxdays+693594) datetime timedelta(seconds=lxsecs) ) def process_bind_param( self value dialect ): if( value < 200000 ): # already excel float? 
return value elif( isinstance(value datetime date) ): return value toordinal() - 693594 0 elif( isinstance(value datetime datetime) ): date_part = value toordinal() - 693594 0 time_part = ((value hour*3600) (value minute*60) value second) / 86400 0 return date_part time_part # time part = day fraction # create sqlite memory db via sqlalchemy def get_month( dt ): return dt month class ConnectionFactory( PoolListener ): def connect( self dbapi_con con_record ): dbapi_con create_function( 'GET_MONTH' 1 get_month ) eng = create_engine('sqlite:///:memory:' listeners=[ConnectionFactory()]) eng dialect dbapi enable_callback_tracebacks( 1 ) # show better errors from user functions meta = MetaData() birthdays = Table('Birthdays' meta Column('Name' String primary_key=True) Column('Born' ExcelDateTime) Column('BirthMonth' Integer)) meta create_all(eng) dbconn = eng connect() dbconn execute( "INSERT INTO Birthdays VALUES('Jimi Hendrix' 15672 NULL)" ) # demonstrate the type decorator works and we get proper datetimes out res = dbconn execute( select([birthdays]) ) tuple(res) # >>> ((you'Jimi Hendrix' datetime datetime(1942 11 27 0 0)) ) # simple attempt (blows up with "AttributeError: 'float' object has no attribute 'month'") dbconn execute( text("UPDATE Birthdays SET BirthMonth = GET_MONTH(Born)") ) # more involved attempt( blows up with "InterfaceError: (InterfaceError) Error binding parameter 0 - probably unsupported type") dbconn execute( text( "UPDATE Birthdays SET BirthMonth = GET_MONTH(:Born)" bindparams=[bindparam('Born' ExcelDateTime)] typemap={'Born':ExcelDateTime} ) Born=birthdays c Born ) ```` Many thanks | Instead of letting Excel/Microsoft dictate how you store date/time it would be less trouble and work for you to rely on standard/"obvious way" of doing things - Process objects according to the standards of their domain - Python's way (datetime objects) inside Python/SQLAlchemy SQL's way inside SQLite (native date/time type instead of float!) - Use APIs to do the necessary translation between domains (Python talks to SQLite via SQLAlchemy Python talks to Excel via <a href="http://www python-excel org/" rel="nofollow">xlrd/xlwt</a> Python talks to other systems Python is your glue ) Using standard date/time types in SQLite allows you to write SQL without Python involve in standard readable way (`WHERE date BETWEEN '2011-11-01' AND '2011-11-02'` makes much more sense than `WHERE date BETWEEN 48560 9999 AND 48561 00001`) It allows you to easily port it to another DBMS (without rewriting all those ad-hoc functions) when your application/databse needs to grow Using native datetime objects in Python allows you to use a lot of freely available well tested and non-EEE (embrace extend extinguish) APIs SQLAlchemy is one of those And I hope you are aware of that slight but dangerous difference between Excel datetime floats in Mac and Windows? Who knows that one of your clients would in the future submit an Excel file from a Mac and crash your application (actually what is worse is they suddenly earned a million dollars from the error)? 
So my suggestion is for you to use <a href="http://www python-excel org/" rel="nofollow">xlrd/xlwt</a> when dealing with Excel from Python (there is another package out there for reading Excel 2007 up) and let SQLALchemy and your database use standard datetime types However if you insist on continuing to store datetime as Excel float it could save you a lot of time to reuse code from <a href="http://www python-excel org/" rel="nofollow">xlrd/xlwt</a> It has functions for converting Python objects to Excel data and vice-versa EDIT: for clarity You have no issues reading from the database to Python because you have that class that converts the float into Python datetime You <them>have issues</them> writing to the database through SQLAlchemy or using other native Python functions/modules/extensions because you are <them>trying to force a non-standard type when they are expecting the standard Python datetime</them> <strong>ExcelDateTime type from the point of view Python is a float not datetime </strong> Although Python uses dynamic/duck typing it still is <them>strongly typed</them> It will not allow you to do "nonsense/silliness" like adding integers to string or <them>forcing float for datetime</them> At least two ways to address that: - <strong>Declare a custom type</strong> - Seems to be the path you wanted to take Unfortunately this is the <them>hard way</them> It is quite difficult to create a type <them>that is a float that can also pretend to be datetime</them> Possible yes but requires a lot of study on type instrumentation Sorry you have to grok the documentation for that on your own - <strong>Create utility functions</strong> - Should be the easier way IMHO You need 2 functions: a) float_to_datetime() for converting data from the database to return a Python datetime and b) datetime_to_float() for converting Python datetime to Excel float About solution #2 as I was saying that you could simplify your life by reusing the <strong>xldate_from_datetime_tuple()</strong> from <a href="http://www python-excel org/" rel="nofollow">xlrd/xlwt</a> That function "Convert a datetime tuple (year month day hour minute second) to an Excel date value " Install xlrd then go to /path_to_python/lib/site-packages/xlrd The function is in xldate py - the source is well documented for understanding |
using dicts for code and variables I have the following code that finally works! But as a Python newb I would like to see if there is a better way to do it In particular as part of a project in monte carlo chemical kinetics the loop at the bottom has to be really really fast Should this be fast? Is there a better (more pythonic) way to do this than bundling everything up in dictionaries? These dictionaries could have as many as hundreds of entries Thanks! ````def split(pops src dst): pops[src] -= 1; pops[dst] = 2 def join(pops src dst) : pops[src] -= 1; pops[dst] = 2 def jump(pops src dst) : pops[src] -= 1; pops[dst] = 2 j1 j2 s1 s2 m1 m2 d1 d2 = 1 1 1 1 1 1 1 1 pops = {'mon1':1000 # initial population of monomers in regime 1 'dim1':1000 # " dimers " 'mon2':1000 # etc 'dim2':1000} trns = {'j1':(j1 join 'mon1' 'dim1') # j1=prob of 2 monomers joining to form a dimer 'j2':(j2 join 'mon2' 'dim2') 's1':(s1 split 'dim1' 'mon1') 's2':(s2 split 'dim2' 'mon2') 'm1':(m1 jump 'mon1' 'mon2') 'm2':(m2 jump 'mon2' 'mon1') 'd1':(d1 jump 'dim1' 'dim2') 'd2':(d2 jump 'dim2' 'dim1')} while True: event = 's1' # this would be derived from rand() called many times action = trns[event][1] # either join split or jump source = trns[event][2] # source species dest = trns[event][3] # destination species action(pops source dest) # make the function call to update populations break print pops ```` | You might find it easier to do Monte Carlo simulations using a library that has already been optimized to do them efficiently allowing you do work more directly in your problem domain rather than in the python code writing domain See for example Monte Carlo in the python library scipy <a href="http://www scipy org/topical-software html" rel="nofollow">http://www scipy org/topical-software html</a> |
What is a major barrier to videoconferencing between some communities? | language |
What complication does the use of combinations of antibiotics for active TB prevent? | antibiotic resistance |
Changing of pixel values after writing the same image using imwrite opencv python function ````import cv2
import numpy as np
im = cv2.imread('test.jpg')
cv2.imwrite('result.jpg', im)
```` Here test.jpg has size 19 KB and result.jpg has 41 KB, even though they are the same image. I observed that there is a change in the pixel values of these two images. How to prevent this? | Re-writing or 'saving' an image in any library or tool will always create a new file, or 'new pixel values'. This happens because the image is being processed for saving at a certain quality. The saved image quality and size depend on the library doing the save, so default values (depth, compression, etc.) need to be provided. If you just want to create a copy of the image in a new file, either copy the file directly via `sys` or binary read the whole file and write it to a new one - without using any image processing libs. |
pexpect echoes sendline output twice causing unwanted characters in buffer I am getting myself familiarized with pexpect I have written the below snippet of code to unflap a port-channel of a cisco router Up until line 89 when I see the stdout outputs there is no problem Below is the code snippet: ```` 54 deviceEnable = data[0] ">" 55 deviceExec = data[0] "#" 56 deviceConfig = data[0] "(config)#" 57 deviceIfConfig = data[0] "(config-if)#" 58 k = device expect([deviceEnable deviceExec deviceConfig]) 59 if k == 0: 60 device sendcontrol('c') 61 device expect(deviceEnable) 62 device sendline('enable') 63 device expect('Password:') 64 device sendline(data[4]) 65 elif k == 1: 66 device sendcontrol('c') 67 elif k == 2: 68 device sendcontrol('c') 69 device expect(deviceConfig) 70 device sendline('end') 71 ################################### 72 # Uplink Unflap SOP 73 ################################### 74 device logfile = sys stdout 75 device expect(deviceExec) 76 device sendline('show int status | in Po') 77 device expect(deviceExec) 78 pcStatus1 = device before 79 temp1 = pcStatus1 split('\n') 80 temp2 = temp1[2] 81 pcStatus = temp2 split() 82 print("\n %s \n" % (pcStatus)) 83 match1 = re match('Err-Disable' pcStatus[1]) 84 # match1 = re match('notconnect' pcStatus[2]) 85 if match1: 86 print("Entered here \n") 87 # device expect(deviceExec) 88 print("Task 0 complete") 89 device sendline('conf t') 90 device expect(deviceConfig) 91 print("Task 1 complete") ```` However for line 89 "conf t" gets sent twice See below: ````<device-name>#show int status | in Po show int status | in Po Port Name Status Vlan Duplex Speed Type Po1 Err-Disable notconnect routed auto auto <device-name># ['Po1' 'Err-Disable' 'notconnect' 'routed' 'auto' 'auto'] Entered here conf t conf t Enter configuration commands one per line End with CNTL/Z <device-name>(config)#Exception in thread Thread-1: Traceback (most recent call last): File "/usr/local/lib/python2 7/threading py" line 530 in __bootstrap_inner self run() File "/home/nseshan/unflapper/ThreadPool py" line 202 in run cmd(args) File "/home/nseshan/unflapper/deviceLogin py" line 91 in devLogin device expect(deviceConfigEntry) File "/usr/local/lib/python2 7/site-packages/pexpect/__init__ py" line 1451 in expect timeout searchwindowsize) File "/usr/local/lib/python2 7/site-packages/pexpect/__init__ py" line 1466 in expect_list timeout searchwindowsize) File "/usr/local/lib/python2 7/site-packages/pexpect/__init__ py" line 1568 in expect_loop raise TIMEOUT(str(err) '\n' str(self)) TIMEOUT: Timeout exceeded <pexpect spawn object at 0x86b162c> version: 3 3 command: /usr/bin/ssh args: ['/usr/bin/ssh' 'sjc17-1-tea005'] searcher: <pexpect searcher_re object at 0x86b19ac> buffer (last 100 chars): 'conf t\r\nEnter configuration commands one per line End with CNTL/Z \r\nsjc17-1-tea005(config)#' before (last 100 chars): 'conf t\r\nEnter configuration commands one per line End with CNTL/Z \r\nsjc17-1-tea005(config)#' after: <class 'pexpect TIMEOUT'> match: None match_index: None exitstatus: None flag_eof: False pid: 8182 child_fd: 4 closed: False timeout: 30 delimiter: <class 'pexpect EOF'> logfile: <open file '<stdout>' mode 'w' at 0x8232078> logfile_read: None logfile_send: None maxread: 2000 ignorecase: False searchwindowsize: None delaybeforesend: 0 05 delayafterclose: 0 1 delayafterterminate: 0 1 ```` Notice that the buffer now holds unwanted characters as well as the prompt that I am expecting This causes a timeout from pexpect's side when I try to issue the next command using 
sendline() in the "conf t" prompt I am unsure what to do next as I have been trying to fix this all day and have not gotten anywhere with it Any suggestions? | Use ` logfile_read=sys stdout` instead of ` logfile=sys stdout` <a href="http://pexpect readthedocs org/en/latest/api/pexpect html#spawn-class" rel="nofollow">to log only what the child sends back</a> |
plugins pattern sub command I will do a command line application with plugin capability; each new plugin will be invoked by a sub command from a `__main__.py` script. I used to use argparse. I wonder if it is possible with argparse to implement a sub command plugin looking like this (I found some tools but they use deprecated packages)? ````myfantasticCLI
├── __main__.py
└── plugins
    ├── create.py
    ├── notify.py
    └── test.py
```` I know that I could use argparse for a sub command but do not know how to use it in a dynamic loading way :/ | If you initialize the `argparse` subparsers with ````sp = parser.add_subparsers(dest='cmd', ...)
```` then after parsing, `args.cmd` will be the name of the chosen subparser or command. Then a simple `if` tree could import and run the desired modules: ````cmd = args.cmd
if cmd in ['module1', ...]:
    import plugins.module1 as mod
    mod.run(...)
elif cmd in ['module2', ...]:
    import plugins.module2 as mod
    ...
```` There are fancier ways of doing this, but I prefer starting with the obvious. Also my focus is on getting the `cmd` name from the parser, not on the details of importing a module given the name. You do not need `argparse` to test the `import given a name` part of the problem. |
Google REST HTTP request delivers more properties than google-api-python-client Python request If I issue a request directly using the HTTP REST interface like this: ````GET https://www googleapis com/drive/v2/files/1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA?key={YOUR_API_KEY} ```` I get a set of metadata for the file that contains among other things the file `properties` elements If instead I call using the Python library to the drive API like this: ````md = service files() get(fileId='1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA') execute() ```` I get a dict in `md` that contains a much more limited set of data `properties` is missing among others I have not expressed fields filtering Is this a limitation of the Google Python lib or do I need to set some option? <strong>Update:</strong> Based on comments I went back and checked authentication I am using Oauth2 straight from the Google cookbook and the full auth scope (`'https://www googleapis com/auth/drive'`) The code below is (I think) the minimum necessary to demonstrate the result ````import httplib2 import os import json from apiclient import discovery import oauth2client from oauth2client import client from oauth2client import tools try: import argparse flags = argparse ArgumentParser(parents=[tools argparser]) parse_args() except ImportError: flags = None SCOPES = 'https://www googleapis com/auth/drive' CLIENT_SECRET_FILE = 'client_secret json' APPLICATION_NAME = 'Other Client 1' def get_credentials(): home_dir = os path expanduser('~') credential_dir = os path join(home_dir ' credentials') if not os path exists(credential_dir): os makedirs(credential_dir) credential_path = os path join(credential_dir 'drive-tagger json') store = oauth2client file Storage(credential_path) credentials = store get() if not credentials or credentials invalid: flow = client flow_from_clientsecrets(CLIENT_SECRET_FILE SCOPES) flow user_agent = APPLICATION_NAME if flags: credentials = tools run_flow(flow store flags) else: # Needed only for compatability with Python 2 6 credentials = tools run(flow store) print('Storing credentials to ' credential_path) return credentials def main(): credentials = get_credentials() http = credentials authorize(httplib2 Http()) service = discovery build('drive' 'v2' http=http) md = service properties() list(fileId='1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA') execute() print "properties() list() returns:" print json dumps(md indent=4) print "***Done***" if __name__ == '__main__': main() ```` If I run this I get an authentication browser pop-up and authenticate to the same domain as I use in API explorer The result is this: ````Authentication successful Storing credentials to C:\Users\scott_jackson\ credentials\drive-tagger json properties() list() returns: { "items": [] "kind": "drive#propertyList" "etag": "\"amKkzAMv_fUBF0Cxt1a1WaLm5Nk/vyGp6PvFo4RvsFtPoIWeCReyIC8\"" "selfLink": "https://www googleapis com/drive/v2/files/1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA/properties?alt=json" } ***Done*** ```` Note that `"items"` is empty If however I use the API explorer and authenticate into the same domain and request the same fileId I get: ````{ "kind": "drive#propertyList" "etag": "\"amKkzAMv_fUBF0Cxt1a1WaLm5Nk/BEYHBcaVZiElhupVVaqT2nEhnc0\"" "selfLink": "https://www googleapis com/drive/v2/files/1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA/properties" "items": [ { "kind": "drive#property" "etag": "\"amKkzAMv_fUBF0Cxt1a1WaLm5Nk/Mg7GWY95vfY7E-2gvlxRbl7MLDk\"" "selfLink": "https://www googleapis 
com/drive/v2/files/1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA/properties/md5sum?visibility=PRIVATE" "key": "md5sum" "visibility": "PRIVATE" "value": "a61b0d91a294364b0c4eebb3ee83c09a" } ] } ```` which has the `'items'` I am looking for Any insights would be appreciated | You should check the properties endpoint: <a href="https://developers google com/drive/v2/reference/properties" rel="nofollow">https://developers google com/drive/v2/reference/properties</a> So the Python sentence would be: ````props = service properties() list(fileId='1el16TSNYvaGQndXXDZjheN_CANnWbA9wSA') execute() ```` |
When did Beyoncé become the highest paid black musician, ever? | April 2014. |
Comparison with boolean numpy arrays VS PEP8 E712 `PEP8 E712` requires that "comparison to `True` should be `if cond is True:` or `if cond:`". But if I follow this `PEP8` I get different/wrong results. Why? ````In [1]: from pylab import *

In [2]: a = array([True, True, False])

In [3]: where(a == True)
Out[3]: (array([0, 1]),)  # correct results with PEP violation

In [4]: where(a is True)
Out[4]: (array([], dtype=int64),)  # wrong results without PEP violation

In [5]: where(a)
Out[5]: (array([0, 1]),)  # correct results without PEP violation but not as clear as the first two imho "Where what?"
```` | That advice only applies to `if` statements testing for the "truthiness" of a value. `numpy` is a different beast: ````>>> a = np.array([True, False])
>>> a == True
array([ True, False], dtype=bool)
>>> a is True
False
```` Note that `a is True` is always `False` because `a` is an array, not a boolean, and `is` does a simple reference equality test (so only `True is True`; `None is not True`, for example). |
how to redirect to another page in python pyramid via ajax when success I want to redirect the user to another route if the code matches, depending upon the values returned in the ajax success. It's working great now. How do I write the script to redirect to the next route, like redirect in php? The AJAX to check the code matches is working great. What next? ````<script>
$("#codeModal form").submit(function(event) {
    event.preventDefault();
    var codetyped = $(this).find(".codeinput").val();
    $.ajax({
        type: "post",
        url: "{{request.route_url('testingajax')}}",
        data: {'codetyped': codetyped},
        success: function(res){
            alert(res);
        }
    });
});
</script>
```` <strong>Here is the view config</strong> ````@view_config(route_name='testingajax', renderer='json')
def hello(request):
    # return request.session['secretcode']
    # return request.params.get('codetyped')
    if int(request.params.get('codetyped')) == request.session['secretcode']:
        return 'Success'
    else:
        return 'Error'
```` | I am not into Python, but it seems you are returning either 'Success' or 'Error' to your Ajax call. Within your Ajax success callback function you could simply check the returned value and redirect from there: ````success: function(res){
    // Test if there is a returned value in the res variable
    if(typeof res !== 'undefined'){
        // Check the res variable from a few options
        switch(res){
            case 'Success':
                window.location.href = '/some-success-page.html';
                break;
            case 'Error':
                window.location.href = '/some-error-page.html';
                break;
        }
    }
}
```` |
WindowsError: [Error 5] Access is denied in Flask I am trying to move an uploaded file to a specific folder in my Windows system and it gives me WindowsError: [Error 5] Access is denied error The solutions I happen to see for such problems are run python as Administrator from cmd line I am not sure if that is possible since it is a web app and i am using the default flask server for development purpose my code is ````@app route('/test' methods=['POST']) def test(): import os if not os path exists("history_plugin"): os makedirs("test") f = open('test/abc txt' 'w+') f close() ```` | I had been running the application directly from Pycharm which does not run it in administrator mode I tried running it using command prompt as administrator and it worked for me |
multidimensional boolean array indexing in numpy I have a two 2D arrays one of numbers and one of boolean values: ````x = array([[ 0 0 0 0 0 0 0 0 0 0 ] [ 1 1 1 1 1 1 1 1 1 1 ] [ 2 2 2 2 2 2 2 2 2 2 ] [ 3 3 3 3 3 3 3 3 3 3 ] [ 4 4 4 4 4 4 4 4 4 4 ] [ 5 5 5 5 5 5 5 5 5 5 ] [ 6 6 6 6 6 6 6 6 6 6 ] [ 7 7 7 7 7 7 7 7 7 7 ] [ 8 8 8 8 8 8 8 8 8 8 ] [ 9 9 9 9 9 9 9 9 9 9 ]]) idx = array([[False False False False False False False False False False] [False True True True True True False False False False] [False True True True True True False False False False] [False True True True True True False False False False] [False False False True True True True False False False] [False False False False True True True False False False] [False False False False False False True False False False] [False False False False False False False True False False] [False False False False False False False False False False] [False False False False False False False False False False]] dtype=bool) ```` When I index the array it returns a 1D array: ````x[idx] array([ 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 5 5 5 6 7 ]) ```` How do I index the array and return a 2D array with the expected output: ````x[idx] array([[ 1 1 1 1 1 ] [ 2 2 2 2 2 ] [ 3 3 3 3 3 ] [ 4 4 4 4 ] [ 5 5 5 ] [ 6 ] [ 7 ]]) ```` | Your command returns a 1D array since it is impossible to fulfill without (a) destroying the column structure which is usually needed e g the `7` in your requested output originally belonged to column 7 and now it is on column 0; and (b) `numpy` does not afaik support high dimensional array with different sizes on the same dimension What I mean is that numpy cannot have an array whose first three rows are of length 5 4th row of length 4 etc - all the rows (same dimension) need to have the same length I think the best result you could hope for is an array of arrays (and not a 2D array) This is how I would construct it though there are probably better ways I do not know of: ````In [9]: from itertools import izip In [11]: array([r[ridx] for are ridx in izip(x idx) if ridx sum() > 0]) Out[11]: array([array([ 1 1 1 1 1 ]) array([ 2 2 2 2 2 ]) array([ 3 3 3 3 3 ]) array([ 4 4 4 4 ]) array([ 5 5 5 ]) array([ 6 ]) array([ 7 ])] dtype=object) ```` |
Dictionary not assigning value to variable I am trying to assign a value from a dictionary to a variable but the variable remains unchanged The value is another dictionary The code I used to generate the dictionaries can be found here: <a href="http://pastebin com/Q2Hc8Ktp" rel="nofollow">http://pastebin com/Q2Hc8Ktp</a> I wrote it myself and tested it without this problem Here is the code snipit of me trying to copy the dictionary from the dictionary ````_classes = {} def populateClasses(): print "Classes Exist" cp = Preferences('' join([resource_path "resources ini"])) print cp _classes = cp getPreferences()['Classes'] populateClasses() print _classes ```` When I print out cp it shows the correct data but when I try to print _classes it only shows {} Note: printing _classes from within the function works as expected but not from outside the function _classes is defined in the global scope -Edit- Here is also some sample data: ````[Classes] Wizard = Arcana Bluff Warrior = Endurance Intimidate Ranger = Nature Perception Bard = Heal History ```` | If you want to change the value of the global variable `_classes` you need to use <a href="http://docs python org/2/reference/simple_stmts html#grammar-token-global_stmt" rel="nofollow">`global`</a>: ````def populateClasses(): global _classes # <<<<<< THIS print "Classes Exist" cp = Preferences('' join([resource_path "resources ini"])) print cp _classes = cp getPreferences()['Classes'] ```` Without this your method creates a separate local variable also called `_classes` This variable goes out of scope as soon as your method returns |
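An alternative that avoids the global statement altogether is to return the parsed value and assign it at module level — a sketch reusing the Preferences helper and resource_path from the question, which are assumed to exist as shown:

````
def populate_classes():
    cp = Preferences(''.join([resource_path, "resources.ini"]))
    return cp.getPreferences()['Classes']

_classes = populate_classes()
print _classes
````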
How many executive orders were issued to help provide discrimination? | null |
What have been beneficial in the reconstruction of Middle Chinese? | Sino-Xenic pronunciations |
How to fire a query with a number of variables from local array Sorry if title is not properly set The problem is I want to filter DataFrame by comparing df's column with a couple of values from an array: ````import pandas as pd import numpy as np df = pd DataFrame(np random randint(0 100 size=(100 4)) columns=list('ABCD')) array = np arange(10) #simple query df query('A == %d' %array[3]) ```` Above query runs perfectly fine the below query also runs without issue: ````df query('A == [3 4 5]') ```` Logically below code should work too because I select values from 3rd to 5-th from `array`: ````df query('A == %d' %array[3:5]) ```` Nevertheless it gives me an error: ````TypeError: %d format: a number is required not numpy ndarray ```` Kindly suggest the path I should follow Thank you! | This returns an integer accepted by the `%d` format ````df query('A == %d' %array[3]) ```` This returns an array not the same object! refused by the latter ````df query('A == [%s]' %array[3:5]) ```` I suggest: ````df query('A == [%s]' % " " join([str(a) for a in array[3:6]])) ```` this will send `'A == [3 4 5]'` to the query |
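If rebuilding the expression string feels awkward, query() can also reference local variables directly with the @ prefix, which sidesteps the formatting question entirely — a sketch with the same setup as the question:

````
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))
arr = np.arange(10)

vals = arr[3:6]                  # the slice of values to match against
print(df.query('A in @vals'))    # @ lets the expression see local variables
print(df[df['A'].isin(vals)])    # equivalent boolean-mask spelling
````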
pip ignore proxy setting I am trying to search/install some packages on the server using a pip through a proxy Server does not have direct access to <a href="https://pypi python org" rel="nofollow">https://pypi python org</a> I run: ````pip search --proxy 1 1 1 1:1111 lxml ```` (instead 1 1 1 1:1111 using real ip and port) and get python trace: ````Exception: Traceback (most recent call last): File "/usr/lib/python2 7/dist-packages/pip/basecommand py" line 139 in main status = self run(options args) File "/usr/lib/python2 7/httplib py" line 772 in connect self timeout self source_address) File "/usr/lib/python2 7/socket py" line 571 in create_connection raise err error: [Errno 111] Connection refused ```` When I try to run this command on the local computer (has direct access to <a href="https://pypi python org" rel="nofollow">https://pypi python org</a>) I get the correct result: ```` pip search --proxy 1 1 1 1:1111 lxml lxml - Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API ```` but when I run in local computer with wrong proxy settings : ```` search --proxy blablabla lxml ```` I got correct result: ```` lxml - Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API ```` This is mean that pip ignore `--proxy` settings? UPD: pip 1 3 1 | It is an old bug of `pip` but the authors seem to choose to just ignore it See <a href="https://github com/pypa/pip/pull/1180" rel="nofollow">https://github com/pypa/pip/pull/1180</a> and <a href="https://github com/pypa/pip/issues/1017" rel="nofollow">https://github com/pypa/pip/issues/1017</a> |
Python: How can I add a counter to the replacement argument of re sub() I would like to add ids to html tags For example I would like to change: ````<p>First paragraph</p> <p>Second paragraph</p> <p>Third paragraph</p> ```` to ````<p id="1">First paragraph</p> <p id="2">Second paragraph</p> <p id="3">Third paragraph</p> ```` IIRC it is possible to use a lambda function to achieve this functionality but I cannot remember the exact syntax | I would use an HTML parser like <a href="http://www crummy com/software/BeautifulSoup/bs4/doc/">`BeautifulSoup`</a> The idea is to iterate over all paragraphs using <a href="https://docs python org/2/library/functions html#enumerate">`enumerate()`</a> for indexing starting with `1`: ````from bs4 import BeautifulSoup data = """ <p>First paragraph</p> <p>Second paragraph</p> <p>Third paragraph</p> """ soup = BeautifulSoup(data 'html parser') for index p in enumerate(soup find_all('p') start=1): p['id'] = index print soup ```` Prints: ````<p id="1">First paragraph</p> <p id="2">Second paragraph</p> <p id="3">Third paragraph</p> ```` |
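Since the question title asks specifically about a counter inside re.sub()'s replacement, here is a minimal sketch of that approach using a callable replacement and itertools.count (the BeautifulSoup answer above remains the more robust option for real HTML):

````
import re
from itertools import count

data = """
<p>First paragraph</p>
<p>Second paragraph</p>
<p>Third paragraph</p>
"""

counter = count(1)
result = re.sub(r'<p>', lambda match: '<p id="%d">' % next(counter), data)
print result
````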
Grep a string in python Friends I have a situation where i need to grep a word from a string ````[MBeanServerInvocationHandler]com bea:Name=itms2md01 Location=hello Type=ServerRuntime ```` What I want to grep is the word that assigned to the variable `Name` in the above string which is `itms2md01` In my case i have to grep which ever string assigned to `Name=` so there is no particular string i have to search Tried: ```` import re import sys file = open(sys argv[2] "r") for line in file: if re search(sys argv[1] line): print line ```` | Deak is right As I am not having enough reputation to comment I am depicting it below I am not going to the file level Just see as an instance:- ````import re str1 = "[MBeanServerInvocationHandler]com bea:Name=itms2md01 Location=hello Type=ServerRuntime" pat = '(?<=Name=)\w+(?= )' print re search(pat str1) group() ```` Accordingly you can apply your logic with the file content with this pattern |
What was a benefit of the use of magnetic pickup cartridge? | high quality cuts |
Deploying Django with Virtualenv and Apache I would like to deploy a site that was created with Django The production environment is a rented virtual server I would like to deploy the application with Django Therefore I changed all settings according to the documentation (especially created a folder from where all collected static files can be served) and tried it out on my local development machine Because the site is now ready I pushed the whole project to the virtual server I use Ubuntu 14 04 LTS both on my development machine and on the virtual host Although I tested the project on my local machine with the apache I experienced some difficulties during the deployment phase The project is called <them>kleyboldt</them> My virtualenv is stored in the <strong>/root</strong> directory and the project lives under <strong>/var/www</strong> Here are the important files: <strong>/etc/apache2/sites-available/mks conf</strong> ````WSGIDaemonProcess mathias-kleyboldt-stiftung de python-path=/var/www/kleyboldt_homepage$ WSGIProcessGroup mathias-kleyboldt-stiftung de <VirtualHost *:80> DocumentRoot /var/html/kleyboldt_homepage WSGIScriptAlias / /var/www/kleyboldt wsgi ServerName mathias-kleyboldt-stiftung de ServerAlias www mathias-kleyboldt-stiftung de <LocationMatch "\ (jpg|css|gif|pdf|ico)$"> SetHandler None </LocationMatch> Alias /media/ /var/www/kleyboldt_homepage/static/media/ Alias /static/ /var/www/kleyboldt_homepage/static/static-only/ <Directory /var/www/kleyboldt_homepage/> Require all granted Order allow deny Allow from all </Directory> <Directory /var/www/kleyboldt_homepage/static/static-only> Require all granted Order allow deny Allow from all </Directory> ErrorLog /var/www/kleyboldt_homepage/apache_error log LogLevel debug </VirtualHost> ```` <strong>/var/www/kleyboldt wsgi</strong> ````import os import sys sys path append('/var/www/kleyboldt_homepage') os environ['DJANGO_SETTINGS_MODULE'] = 'kleyboldt_homepage settings' import django core handlers wsgi application = django core handlers wsgi WSGIHandler() ```` The project structure under <strong>/var/www/kleyboldt_homepage</strong>: ````root@somewhere:/var/www/kleyboldt_homepage# ls apache_error log homepage index html manage py static db sqlite3 homepage log kleyboldt_homepage site txt ```` To manage the dependencies for this project I used the virtualenvwrapper to create a env under <strong>/root/virtualenvs</strong> called kleyboldt-homepage: ````root@somewhere:~/virtualenvs/kleyboldt-homepage/lib/python2 7/site-packages# ls crispy_forms markdown2 pyc django markdown_deux Django-1 6 5 dist-info _markerlib django_crispy_forms-1 4 0-py2 7 egg-info pagedown django_grappelli-2 5 3-py2 7 egg-info pip django_markdown_deux-1 0 4-py2 7 egg-info pip-1 5 4 dist-info django_pagedown-0 1 0-py2 7 egg-info pkg_resources py easy_install py pkg_resources pyc easy_install pyc setuptools grappelli setuptools-2 2 dist-info markdown2-2 2 1-py2 7 egg-info south markdown2 py South-1 0-py2 7 egg-info ```` After reloading the apache2 server and refreshing the page I get a 500 Internal Server error I looked it up in the debug file I specified in the apache conf file <strong>/var/www/kleyboldt_homepage/apache_error log</strong> ````[Mon Aug 18 17:04:50 226000 2014] [authz_core:debug] [pid 966:tid 139697743423232] mod_authz_core c(802): [client 92 224 193 119:56235] AH01626: authorization result of Require all granted: granted [Mon Aug 18 17:04:50 226104 2014] [authz_core:debug] [pid 966:tid 139697743423232] mod_authz_core c(802): [client 92 224 193 
119:56235] AH01626: authorization result of <RequireAny>: granted [Mon Aug 18 17:04:50 226227 2014] [authz_core:debug] [pid 966:tid 139697743423232] mod_authz_core c(802): [client 92 224 193 119:56235] AH01626: authorization result of Require all granted: granted [Mon Aug 18 17:04:50 226239 2014] [authz_core:debug] [pid 966:tid 139697743423232] mod_authz_core c(802): [client 92 224 193 119:56235] AH01626: authorization result of <RequireAny>: granted [Mon Aug 18 17:04:50 241584 2014] [:info] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] mod_wsgi (pid=965 process='mathias-kleyboldt-stiftung de' application='mathias-kleyboldt-stiftung de|'): Loading WSGI script '/var/www/kleyboldt wsgi' [Mon Aug 18 17:04:50 242108 2014] [:error] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] mod_wsgi (pid=965): Target WSGI script '/var/www/kleyboldt wsgi' cannot be loaded as Python module [Mon Aug 18 17:04:50 242118 2014] [:error] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] mod_wsgi (pid=965): Exception occurred processing WSGI script '/var/www/kleyboldt wsgi' [Mon Aug 18 17:04:50 242137 2014] [:error] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] Traceback (most recent call last): [Mon Aug 18 17:04:50 242161 2014] [:error] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] File "/var/www/kleyboldt wsgi" line 7 in <module> [Mon Aug 18 17:04:50 242215 2014] [:error] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] import django core handlers wsgi [Mon Aug 18 17:04:50 242233 2014] [:error] [pid 965:tid 139697924556544] [remote 92 224 193 119:14076] ImportError: No module named django core handlers wsgi ```` The import of <them>django core handlers wsgi</them> seems to fail I checked my python path specified behind the <them>WSGIDaemonProcess</them> but everything seems to be fine But the import is still failing Does anybody know how to fix this? | Two potential mistakes <strong>Django settings file must be a Python module</strong> Based on input you give in your case it is not a Python module and your folder structure is wrong ```` sys path append('/var/www/kleyboldt_homepage') os environ['DJANGO_SETTINGS_MODULE'] = 'kleyboldt_homepage settings' ```` Above means that py files in folder /var/www/kleyboldt_homepage go to top level Python namespace E g settings py file is module "settings" not 'kleyboldt_homepage settings' <strong>Virtualenv path must be in sys path</strong> Here is an example `django wsgi` Please take this as guidelining example not a tested solution specific your deployment: ````# Must be in the project root or production deployment does not work import os import sys from os path import abspath dirname join # This is /srv/django/yoursite PROJECT_PATH=abspath(join(dirname(__file__) " ")) import site import os # Assume virtualenv is in relative subdirectory "venv" to the project root vepath = PROJECT_PATH+'/venv/lib/python2 7/site-packages' prev_sys_path = list(sys path) # add the site-packages of our virtualenv as a site dir site addsitedir(vepath) # reorder sys path so new directories from the addsitedir show up first new_sys_path = [p for p in sys path if p not in prev_sys_path] for item in new_sys_path: sys path remove(item) sys path[:0] = new_sys_path # import from down here to pull in possible virtualenv django install from django core handlers wsgi import WSGIHandler os environ['DJANGO_SETTINGS_MODULE'] = 'myproject settings' application = WSGIHandler() ```` |
Where does Valencia's port rank among Spanish ports in terms of total traffic? | second |
Issue while trying to copy pyside object I am having a rather frustrating problem using <strong><them>pyside</them></strong> and I would welcome any advice <strong>First some context</strong> I have created a simple GUI using <strong><them>Qt Designer</them></strong> and I have used `pyside-uic exe` onto my ` ui` file in order to generate the associated <strong><them>Python</them></strong> file I am using <strong><them>Python 3 3</them></strong> and <strong><them>pyside 1 2 1</them></strong> with <strong><them>Qt Designer 4</them></strong> (<strong><them>Qt 4 8 5</them></strong>) I am using the following code to launch my GUI: ````class my_dialog(QMainWindow my_gui Ui_main_window): def __init__(self parent=None): super(my_dialog self) __init__(parent) self setupUi(self) if ("__main__" == name): app = QApplication(sys argv) main_dialog = my_dialog() # (1) main_dialog show() sys exit(app exec_()) ```` <strong>What I would like to achieve</strong> My GUI features several tabs The number of tabs is not pre-determined and is evaluated at run time As a result I have decided to create one tab in <strong><them>Qt Designer</them></strong> to use as a template The first time I need to add a tab I modify this template and if I need any additionnal tab I was planning on <strong><them>making a copy of that tab</them></strong> and then <strong><them>modify that copy</them></strong> appropriately <strong>The issue I have encountered</strong> My problem is that I cannot seem to find a way to copy the tab widget After some research I thought the <a href="http://docs python org/3 3/library/copy html" rel="nofollow">`copy`</a> module (or the <a href="http://docs python org/3 3/library/pickle html" rel="nofollow">`pickle`</a> module see edit) might do the trick (the following code was inserted at <strong><them>(1)</them></strong>): ````new_tab = copy deepcopy(main_dialog my_tab) main_dialog my_tabs addTab(new_tab "") ```` But that triggered the following error: <blockquote> ```` main_dialog my_tabs addTab(new_tab "") ```` RuntimeError: Internal C++ object (Pyside QtGui QWidget) already deleted </blockquote> <strong>What I could find on my own</strong> I have seen on SO and other sites that there may be issues when using <strong><them>pyside</them></strong> of objects being collected because there is no reference to them in <strong><them>Python</them></strong> The fact remains however that even if I move this code to very `setupUi()` method in the ` py` file generated by <strong><them>pyside</them></strong> I still get the exact same error It is also worth noting that I am able to access the `my_tab` object to modify its content without any trouble I am able to create another tab from scratch in my code and `main_dialog my_tabs addTab(new_tab "")` works perfectly fine in that context After some experimentations I realized the problem probably occurs at the copy of the `my_tab` object Indeed copying a tab object that I just created I could see that trying to add the copy to the GUI tabs failed too and with the same error It looks like the copy fails somehow or the object is being immediately deleted for some reason That is what I am infering anyway <strong>My question</strong> Considering all this I would like to find a way to either succeed in the object copy find another way to use an existing <strong><them>pyside</them></strong> object as template for other similar objects I <strong><them>could</them></strong> of course take the code for the tab out of the generated file and code my own `addTab()` method 
However I am expected to build from an existing ` ui` file and avoid hardcoding GUI elements <strong>EDIT:</strong> When using `pickle`: ````new_tab = pickle loads(pickle dumps(main_dialog my_tab -1)) ```` I get the following error: <blockquote> ```` new_tab = pickle loads(pickle dumps(main_dialog my_tab -1)) ```` _pickle PicklingError: Cannot pickle <class 'Pyside QtCore SignalInstance'>: attribute lookup Pyside QtCore SignalInstance failed </blockquote> | After some more research I believe copying a pyside object using one of those techniques is not possible The first thing to note is that <a href="http://qt-project org/faq/answer/how_can_i_clone_a_widget" rel="nofollow">there is no built-in function to clone a Qt widget</a> so the cloning should be done using modules like `copy` `pickle` or `marshal` Using `pickle` or `marshal` fails because the object is found to be <them>not pickable</them> Whilst the `copy copy` or `copy deeepcopy` do not raise any warning/exception/error the copy does not occur or is deleted right afterwards for some reason When trying to pass in the `deepcopy` as parameter to `addTab` no warning/exception/error is thrown yet the program stops at that line and exits back to the Python command prompt The fact that it takes a few seconds on that line before exiting makes me assume `deepcopy` tries to browse through the object attributes and fails at some point Doing the same with `copy` results in the previous `C++ object deleted` error mentionned in the question so I can only infer the `deepcopy` operation does fail As a result the only advice I could give someone looking for a similar answer is to implement their own `copy-widget` function which is ultimately what I will do now Still I wish to understand how is that `deepcopy` fails like this so silently yet provoking the end of the execution I started a thread to try and find an answer to this <a href="http://stackoverflow com/questions/22707756/python-copy-deepcopy-fails-without-raising-warning-exception-or-error">there</a> <strong>EDIT:</strong> I found a solution for this problem that respects my requirements of not hard-coding GUI elements and of using Qt Designer to create the GUI and templates for repeatable elements I hope it helps anyone having the same issue: The idea is that it is possible using Qt -- and pyside -- to load a given ` ui` file at run time using the `QUiLoader()` method It is thus possible to parse the ` ui` file to extract a given widget (` ui` files are simple XML files) and use the following code to use it: ````loader = QUiLoader() ui_file = QFile("path_to_ui_file ui") ui_file open(QFile ReadOnly) new_tab = loader load(ui_file) ui_file close() main_dialog my_tabs addTab(new_tab "") ```` And it works! A few things about the above example: - the second line assumes you have isolated your widget in the file `path_to_ui_file ui` - in my example the widget is a tab of course it works with any widget you might have done the last line is only provided to show that the error is no longer thrown - finally this approach has the pro of allowing you to use tools like Qt Designer to develop your GUI elements even when some variables are involved such as <them>just how many of that tabs do you want?</them> |
Transfer data from PC application to remotely hosted database Basically I want to be able to transfer data from PC application to remotely hosted database I have developed a wxPython which application which is used for parsing and analysing test data I want to update the application so that it can transfer the analysed data to an online MySQL database As it is for just for in house use by about a dozen individuals it does not need to be ultra robust just a simple solution that works I do not have much experience on server side stuff so I Am not really sure how this would usually be done I know that directly connecting to the database from the app is not possible due to security restrictions I Am thinking the I could use `ftplib` to upload the data as files and then call a `PHP` script (to insert the data in the db)with `urllib2` Will this work? | Do you have access server side? If so you could enable secured remote mysql or add a php page that allows access For example Python could use urllib to open "mypage/search php?name=name&otherdata=otherdata" and read the result This would also work in a web browser It seems uploading the file and then calling a script to run would be a bit inefficient and less secure and would require access to the server Why not have secure remote access for the database? I would look into some Python Mysql examples if there is a possibility of setting up Mysql connections: <a href="http://www kitebird com/articles/pydbapi html" rel="nofollow">http://www kitebird com/articles/pydbapi html</a> First get it working in a small example script then you can paste the code into your project |
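A rough sketch of the PHP-page approach described above, using urllib/urllib2 from Python 2 — the URL and parameter names are placeholders, and the PHP side is assumed to validate the request and perform the INSERT:

````
import urllib
import urllib2

params = urllib.urlencode({'name': 'sample', 'otherdata': '42'})

# read a result back, as in the answer's search.php example
reply = urllib2.urlopen('http://example.com/mypage/search.php?' + params)
print reply.read()

# or POST the analysed data instead of putting it in the query string
reply = urllib2.urlopen('http://example.com/insert.php', params)
print reply.read()
````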
precision in variables extracted from array using numpy I am loading an array using `numpy genfromtxt` I extract a variable from the array and save it into another txt file however the output will have many trailing digits Here is an example of my script: ````import numpy as np import csv data_points = np genfromtxt('input_array txt' dtype = None) # dtype = None since the array contains numbers and strings csvfile = "/home/User/Desktop/output_array txt" with open(csvfile "w") as output: writer = csv writer(output delimiter='\t') for row in range(len(data_points)): parameter = data_points[row][5] writer writerow([parameter]) ```` Let us say the value in the `input_array` was `0 33625` in the `output_array txt` it will be `0 33624999999999999` To fix this problem I am using: ````writer writerow(['%1 5f' % parameter]) ```` However I am not satisfied by the results My original array is made of 1900 rows and 38 columns I want to extract 10 columns out of the 38 But when I use the `'%2 5f' % parameter` my data is not aligned Is there another way to fix this problem? | You can force numpy to keep the original data as strings: ````data_points = np genfromtxt('input_array txt' dtype=str) ```` and then write using `np savetxt` with `fmt='%s'` to avoid the conversion `string -> float` that is causing the rounding error: ````np savetxt('output txt' data_points fmt='%s') ````
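And if only some of the 38 columns are wanted, integer-array indexing on the string array keeps both the original text and the alignment — a sketch with made-up column numbers:

````
import numpy as np

data_points = np.genfromtxt('input_array.txt', dtype=str)

wanted = [0, 2, 5, 7, 10, 12, 15, 20, 25, 30]   # whichever 10 columns are needed
subset = data_points[:, wanted]

np.savetxt('output_array.txt', subset, fmt='%s', delimiter='\t')
````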
ModelForms in Django with Bootstrap I can see that you can use django-bootstrap-toolkit to generate bootstrap forms in Django This looks pretty cool However I have 2 questions - Can django-bootstrap-toolkit be used also for modelforms ? - If I had a large form that is created with django-bootstrap-toolkit what would be the best method for me to pass the form to the template for it to automatically render whilst allowing me to place sections of text (i e headers throughout the form) ? Thanks | Maybe This helps you Here is an example (untested): ````class Foo(Model): name = CharField( ) date = models DateField( ) class FooForm(ModelForm): class Meta: model = Foo widgets = { 'name': BootstrapTextInput(prepend='P') } ```` Hope this helps
pandas read_html returns only one table I try to read ec2 pricing tables with pandas Based on <a href="http://pandas pydata org/pandas-docs/dev/io html#io-read-html" rel="nofollow">documentation</a> I expect list of DataFrames but got one table as a list <strong>Code example</strong> ````import pandas link = 'http://aws amazon com/ec2/pricing/' data = pandas read_html(link) print type(data) print data[0] ```` <strong>Output</strong> ````<type 'list'> 0 1 2 0 Reserved Instance Volume Discounts NaN NaN 1 Total Reserved Instances Upfront Discount Hourly Discount 2 Less than $250 000 0% 0% 3 $250 000 to $2 000 000 5% 5% 4 $2 000 000 to $5 000 000 10% 10% 5 More than $5 000 000 Contact Us Contact Us ```` <strong>Environment:</strong> - Ubuntu 14 10 - python 2 7 8 - pandas 0 14 1 | <a href="http://aws amazon com/ec2/pricing/" rel="nofollow">http://aws amazon com/ec2/pricing/</a> uses JavaScript to fill in the data in the tables Unlike what you see when you point your GUI browser at the link the data is missing if you download the HTML using urllib2: ````import urllib2 response = urllib2 urlopen(link) content = resonse read() ```` (Then search the contents for `<table>` tags ) To process the JavaScript you will need an automated browser engine like Selenium or WebKit or Spidermonkey Here is a solution using Selenium: ````import selenium webdriver as webdriver import contextlib import pandas as pd @contextlib contextmanager def quitting(thing): yield thing thing quit() with quitting(webdriver Firefox()) as driver: link = 'http://aws amazon com/ec2/pricing/' driver get(link) content = driver page_source with open('/tmp/out html' 'wb') as f: f write(content encode('utf-8')) data = pd read_html(content) print len(data) ```` yields ````238 ```` |
How can I use a modified openssl library (written in C) in my python code? I am writing a python script that imports ssl library I need to create ssl socket However I find that I will need to use a modified version of openssl library The author of the modified version told me that the underlying implementation of ssl module is using the openssl library The author provided me with a file named `ssllib c` I searched the folder of the openssl library that I installed: `openssl-0 9 8k_X64` but I could not find any `ssl_lib c` file Also the author referring to openssl by `openssl-1 0 1e` which is another version than mine My question: How can I compile my python script with a modified version of openssl? Please consider that I am using Windows x64 system and Python 2 7 | You will need to install the modified OpenSSL Python merely has bindings which will then call the functions in the compiled OpenSSL libraries If the modified OpenSSL library is installed and in your path completely replacing the original OpenSSL library then Python will "use" it This assumes that the modified library is in fact compatible with the original OpenSSL On a side-note using modified cryptographic libraries is a terrible idea from a security perspective |
When did Dell's internet site lose popularity? | null |
How to Add Two BLObs to an MySQL Row via Python I can now insert a BLOb into a row using the following code snippets:- ````TheData = open("/home/mjh/Documents/DemoData/sjp bin" 'rb') read() sql = "Insert into Results (idResults idClient TestDateTime ResultBinary) Values (10 7 '2014-11-05 14:09:11' %s)" cursor execute(sql (TheData )) ```` However I want to put multiple BLObs into the same row and expanded the code to:- ````sql = "Insert into Results (idResults idClient TestDateTime ResultBinary SecondResult) Values (10 7 '2014-11-05 14:09:11' %s %s)" cursor execute(sql (TheData SecondData )) ```` This generates the error:- ````_mysql_exceptions OperationalError: (1241 'Operand should contain 1 column(s)') ```` This seems a logical change (to me) based on extending the insert to add other field types Does this mean I have to do an insert (for the first BLOb) followed by an UPDATE (for the second BLOb)? | Have you tried using named parameters viz: ````sql = "Insert into Results (idResults idClient TestDateTime ResultBinary SecondResult) Values (10 7 '2014-11-05 14:09:11' %(one)s %(two)s)" cursor execute(sql { 'one': TheData 'two': SecondData}) ```` <a href="http://dev mysql com/doc/connector-python/en/connector-python-api-mysqlcursor-execute html" rel="nofollow">Reference</a> |
digits in a string fit in between max and min So I am trying to get my string to be verified between 1000 to 9999 and when I try to run this it comes back saying it is unverified Curious on how to fix this problem so that it can actually read the 4 digits that a user input as a string I have it as a string for a reason and it works fine for other parts of my code it is just this one specifically is giving me a hard time ````def choiceTwo(digitCode): max = 9999 min = 1000 digitCode = [int(num) for num in digitCode] for x in digitCode: if x > max: return True if x < min: return False def main(): digitCode = str(input("Please enter a 4 digit code: ")) if choiceTwo(digitCode) is True: print("Number is verified!") else: print("ERROR not a valid ") ```` | It is probably easier to do this straight in your if statement: ````if int(digitCode) < int(max) and int(digitCode) > int(min): print("Number is verified!") else: print("ERROR not a valid ") ```` Edit: I made sure that both digitCode max and min were all integers since you mentioned that you used them as strings elsewhere in the code Edit 2: Also double check your statements inside your function; what happens if x is in between max and min (currently it only returns True if x > max) |
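A sketch of a corrected verification function that treats the whole input as one number instead of digit by digit (and guards against non-numeric input), which appears to be what the original code was aiming for:

````
def choiceTwo(digitCode):
    # True only for a string of digits whose value lies in [1000, 9999]
    return digitCode.isdigit() and 1000 <= int(digitCode) <= 9999

print(choiceTwo("1234"))   # True
print(choiceTwo("0999"))   # False -- 999 is below the minimum
print(choiceTwo("12a4"))   # False -- not all characters are digits
````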
What evidence suggested that humans used animal products for clothing? | finds of large numbers of bone and antler pins |
Restrictons of Python compared to Ruby: lambda's I was going over some pages from WikiVS that I quote from: <blockquote> because lambdas in Python are restricted to expressions and cannot contain statements </blockquote> I would like to know what would be a good example (or more) where this restriction would be preferably compared to the Ruby language Thank you for your answers comments and feedback! | The most commonly encountered situation regarding statements is probably Python 2 X's `print` statement For example ````say_hi = lambda name: "Hello " name ```` works as expected But this will not compile: ````say_hi = lambda name: print "Hello " name ```` because `print` is not a proper function in Python 2 ````>>> say_hi = lambda name: "Hello " name >>> say_hi("Mark") 'Hello Mark' >>> >>> say_hi = lambda name: print "Hello " name SyntaxError: invalid syntax ```` The rest of the statements besides `print` can be found <a href="http://docs python org/reference/simple_stmts html" rel="nofollow">in the Python documentation online</a>: <blockquote> ````simple_stmt ::= expression_stmt | assert_stmt | assignment_stmt | augmented_assignment_stmt | pass_stmt | del_stmt | print_stmt | return_stmt | yield_stmt | raise_stmt | break_stmt | continue_stmt | import_stmt | global_stmt | exec_stmt ```` </blockquote> You can try the rest of these out in the REPL if you want to see them fail: ````>> assert(True) >>> assert_lambda = lambda: assert(True) SyntaxError: invalid syntax >>> pass >>> pass_lambda = lambda: pass SyntaxError: invalid syntax ```` I am not sure what parallels there are between Python's `lambda` restrictions and Ruby's `proc` or `lambda` In Ruby everything is a message so you do not have keywords (okay you do have <them>keywords</them> but you do not have keywords that appear to be functions like Python's `print`) Off the top of my head there is no easily-mistaken Ruby constructs that will fail in a `proc` |
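As a side note (not part of the original answer): the print example is specific to Python 2. In Python 3 print is an ordinary function, so it is a valid expression inside a lambda, while genuine statements such as assignment still are not:

````
# Python 3
say_hi = lambda name: print("Hello,", name)   # fine: print() is just a function call
say_hi("Mark")

# still a SyntaxError, because assignment is a statement:
# add_one = lambda x: x = x + 1
````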
What always encapsulates only one transaction? | null |
Where was Emanuel Goldberg from? | null |
Why does this instruction not work? Hi I am trying to count using keys only and get an error message when using this line `self response out write(A all(keys_only=True) count(100000000))` The error message I get is `TypeError: all() got an unexpected keyword argument 'keys_only'` Is it not supposed to work this way? What am I doing wrong? Thanks UPDATE: I found this way worked: ```` query = A all() query _keys_only = True self response out write(query count(100000000)) ```` | There is a problem with SearchableModel and keys_only; you can do something like this ````query = A all() query _keys_only = True ````
Python: ValueError: could not convert string to float: '4623634 0' Here is my code: ````import csv with open ("Filename1 txt") as f: dict1 = {} are = csv reader(f delimiter="\t") for row in r: a b v = row dict1 setdefault((a b) []) append(v) #for key in dict1: #print(key[0]) #print(key[1]) #print(d[key][0]]) with open ("Filename2 txt") as f: dict2 = {} are = csv reader(f delimiter="\t") for row in r: a b v = row dict2 setdefault((a b) []) append(v) #for key in dict2: #print(key[0]) count = 0 for key1 in dict1: for key2 in dict2: if (key1[0] == key2[0]) and abs(float(key1[1])) - (float(key2[1])) < 10000: count = 1 ```` Previously I was getting this error: ````Traceback (most recent call last): File "/Users/macbookpro/Desktop/MainDict py" line 28 in <module> if key1[0] == key2[0] and abs(key1[1] - key2[1]) < 10000: TypeError: unsupported operand type(s) for -: 'str' and 'str' ```` So of course I tried turning those strings into integers However I then got this error: ```` Traceback (most recent call last): File "/Users/macbookpro/Desktop/MainDict py" line 28 in <module> if (key1[0] == key2[0]) and abs((int(key1[1])) - (int(key2[1]))) < 10000: ValueError: invalid literal for int() with base 10: '1002569 1' ```` Then I tried using float and now I get this error which is where I am stuck now: ```` Traceback (most recent call last): File "/Users/macbookpro/Desktop/MainDict py" line 28 in <module> if (key1[0] == key2[0]) and abs(float(key1[1])) - (float(key2[1])) < 10000: ValueError: could not convert string to float: '2486997 2' ```` Here are examples of what my input files consist of: Filename1 ````1 11383002 8 1 16E-05 1 159962368 1 17E-05 2 133623587 1 26E-05 2 1002569 1 3 30E-06 3 168940139 1 40E-05 3 49736942 1 43E-05 ```` Filename2 ````10 11383002 8 1 16E-05 5 159962368 1 17E-05 7 133623587 1 26E-05 9 1002569 1 3 30E-06 8 168940139 1 40E-05 1 49736942 1 43E-05 ```` Now my question is why am I getting this error Is there something in the code specifically? Or is there something wrong with my text files What suggestions do you having on fixing this problem and how can I alter my code (if that is the case) to do so? | I can only guess what number `1002569 1` should be but maybe use `split(" ")[0]` to get just the first part of it and convert that into an `int`? ````if and abs(float(key1[1] split(" ")[0])) - (float(key2[1] split(" ")[0])) < 10000: ```` This will also work if there is no space (like in the other rows) |
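One more detail worth a sketch: in the comparison the abs() wraps only the first operand, so the absolute difference is never actually taken. Assuming the intent is 'same first field, second fields within 10000 of each other', and following the answer's guess about the embedded space, the final loop of the question's script would become:

````
for key1 in dict1:
    for key2 in dict2:
        first1 = float(key1[1].split()[0])   # keep the part before the space, per the answer
        first2 = float(key2[1].split()[0])
        if key1[0] == key2[0] and abs(first1 - first2) < 10000:
            count = 1
````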
Using Python to remove incomplete line from the end of a JSON formatted log file I have some JSON formatted log files that I am copying to S3 so I can run Hive queries on them using Elastic Map Reduce The script I use to copy the log files to S3 is written in Python Every once in a while I encounter a file with an incomplete line typically at the end of the file This causes any Hive queries that need that file to fail I have been manually fixing the files by removing the bad line but I would like to integrate this step into my Python script to prevent these failures Here is an example of the type of file I am working with: ````{"logLine":{"browserName":"FireFox" "userAgent":"Mozilla/5 0 (Windows NT 5 1; rv:5 0) Gecko/20100101 Firefox/5 0"}} {"logLine":{"browserName":"Pre" "userAgent":"Mozilla/5 0 (X11; Linux x86_64) AppleWebKit/534 24 (KHTML like Gecko; Google Web Preview) Chrome/11 0 696 Safari/534 24"}} {"logLine":{"browserName":"Internet Explorer" "userAgent":"Mozilla/4 0 (compatible; MSIE 7 0; Windows NT 6 1 ```` In that case I want to remove the last line since it is incomplete I know it is incomplete because it is missing the end of line character(s) and also because it is not valid JSON due to the missing end quote and curly braces Is there an easy way to identify and remove that file from the file using Python? | You can grab each line and pass them through a filter function This function would be something like ````def isLineComplete(line): return line[-1] == "}" ```` Overview: ````myFile = cleanLines = filter(isLineComplete myFile readlines()) ```` |
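Putting the filter idea together into something runnable — and, since the files are JSON-per-line, using json.loads as the completeness test rather than only checking the last character (that part is my assumption, not the original answer):

````
import json

def is_line_complete(line):
    try:
        json.loads(line)
        return True
    except ValueError:
        return False

with open('logfile.json') as src:
    clean_lines = [line for line in src if is_line_complete(line)]

with open('logfile.clean.json', 'w') as dst:
    dst.writelines(clean_lines)
````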
Convert CSV to YAML with Unicode? I am trying to convert a CSV file containing Unicode strings to a YAML file using Python 3 4 Currently the YAML parser escapes my Unicode text into an ASCII string I want the YAML parser to export the Unicode string <them>as</them> a Unicode string without the escape characters I am misunderstanding something here of course and I would appreciate any assistance <them>Bonus points</them>: how might this be done with Python 2 7? <strong>CSV input</strong> ````id title_english title_russian 1 A Title in English Ðазвание на ÑÑÑÑком 2 Another Title ÐÑÑгой Ðазвание ```` <strong>current YAML output</strong> ````- id: 1 title_english: A Title in English title_russian: "\u041D\u0430\u0437\u0432\u0430\u043D\u0438\u0435 \u043D\u0430\ \ \u0440\u0443\u0441\u0441\u043A\u043E\u043C" - id: 2 title_english: Another Title title_russian: "\u0414\u0440\u0443\u0433\u043E\u0439 \u041D\u0430\u0437\u0432\u0430\ \u043D\u0438\u0435" ```` <strong>desired YAML output</strong> ````- id: 1 title_english: A Title in English title_russian: Ðазвание на ÑÑÑÑком - id: 2 title_english: Another Title title_russian: ÐÑÑгой Ðазвание ```` <strong>Python conversion code</strong> ````import csv import yaml in_file = open('csv_file csv' "r") out_file = open('yaml_file yaml' "w") items = [] def convert_to_yaml(line counter): item = { 'id': counter 'title_english': line[0] 'title_russian': line[1] } items append(item) try: reader = csv reader(in_file) next(reader) # skip headers for counter line in enumerate(reader): convert_to_yaml(line counter) out_file write( yaml dump(items default_flow_style=False) ) finally: in_file close() out_file close() ```` Thanks! | In Python 2 x you should use a Unicode CSV reader as Python's CSV reader does not support that You can use <a href="https://pypi python org/pypi/unicodecsv/0 9 4" rel="nofollow">unicodecsv</a> for this purpose In your current Python 3 x code you should explicitly pass the file encoding when opening it: ````import csv with open('some csv' newline='' encoding='utf-8') as f: reader = csv reader(f) for row in reader: print(row) ```` It may be that your system is already doing the right thing but you are relying on defaults in that case Lastly you need to make sure the YAML file is opened with the correct encoding: `open("yaml_file yaml" "w" encoding="utf-8")` And this encoding should be used later when reading the YAML file I am not sure what the `yaml` library does when given Python objects but you also need to check that `line[0]` and `line[1]` are Unicode strings when you are setting them inside `convert_to_yaml` |
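On the escaping itself: PyYAML writes non-ASCII text as \u escapes unless told otherwise, so in addition to the encoding points above, passing allow_unicode=True to yaml.dump is what produces the desired unescaped output — a small Python 3 sketch:

````
import yaml

items = [{'id': 1,
          'title_english': 'A Title in English',
          'title_russian': 'Название на русском'}]

with open('yaml_file.yaml', 'w', encoding='utf-8') as out_file:
    out_file.write(yaml.dump(items, default_flow_style=False, allow_unicode=True))
````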
python encryption of any size code with a smaller key So I am trying to create a program which can take two inputs such as ````encrypt('12345' '12') ```` and it will return ````'33557' ```` where the code ('12345') has been incremented by the key ('12') working from right to left I have already created one which will work for when the code and key are both 8 long but I cannot work out how to do this should the code be allowed to be any size possibly with nested for statements? Here is the one I did earlier so you can see better what I am trying to do ````def code_block(char charBlock): if len(char) == 8 and len(charBlock) == 8: #Check to make sure both are 8 didgets c = char cB = charBlock line = "" for i in range(0 8): getDidget = code_char2(front(c) front(cB)) c = last(c) cB = str(last(cB)) line = line + getDidget print(line) else: print("Make sure two inputs are 8 didgets long") def front(word): return word[:+1] def last(word): return word[+1:] ```` | Some code tested on Python 3 2: ````from decimal import Decimal import itertools def encrypt(numbers_as_text code): key = itertools cycle(code[::-1]) num = Decimal(numbers_as_text) power = 1 for _ in numbers_as_text: num += power * int(next(key)) power *= Decimal(10) return num if __name__ == "__main__": print(encrypt('12345' '12')) ```` Some explanation: - `code[::-1]` is a cool way to reverse a string Stolen from here - `itertools cycle` endlessly repeats your key So the variable `key` now contains a generator which yields `2` `1` `2` `1` `2` `1` etc - `Decimal` is a datatype which can handle arbitrary precision numbers Actually Python 3's integer numbers would be sufficient because they can handle integer numbers with arbitrary number of digits Calling the type name as a function `Decimal()` calls the constructor of the type and as such creates a new object of that type The `Decimal()` constructor can handle one argument which is then converted into a Decimal object In the example the `numbers_as_text` string and the integer `10` are both converted into the type `Decimal` with its constructor - `power` is a variable that starts with `1` and is multiplied by `10` for every digit that we have worked on (counting from the right) It is basically a pointer to where we need to modify `num` in the current loop iteration - The `for` loop header ensures we are doing one iteration for each digit in the given input text We could also use something like `for index in range(len(numbers_as_text))` but that is unnecessarily complex Of course if you want to encode text this approach does not work But since that was not in your question's spec this is a function focused on dealing with integers
Django: Dynamically add apps as plugin building urls and other settings automatically I have following structure of my folders in Django: ```` /project_root /app /fixtures/ /static/ /templates/ /blog/ /settings py /urls py /views py /manage py /__init__ py /plugin /code_editor /static /templates /urls py /views py /__init__ py /code_viewer /static /templates /urls py /views py /__init__ py ```` So how can I make root urls py automatically build up the list of urls by looking for other urls py based on the INSTALLED_APPS? I change settings py in order to build INSTALLED_APPS TEMPLATES_DIR STATICFILES_DIRS dynamically (It means i do not know how many plugins will be installed in different servers It should dynamically check it on run time and add it )on: ````python manage py runserver ```` Here is code for adding INSTALLED_APPS TEMPATES_DIR STATICFILES_DIR ````PLUGINS_DIR = '/path_to/plugins/' for item in os listdir(PLUGINS_DIR): if os path isdir(os path join(PLUGINS_DIR item)): plugin_name = 'app plugins %s' % item if plugin_name not in INSTALLED_APPS: INSTALLED_APPS = INSTALLED_APPS+(plugin_name ) template_dir = os path join(PLUGINS_DIR '%s/templates/' % item) if os path isdir(template_dir): if template_dir not in TEMPLATE_DIRS: TEMPLATE_DIRS = TEMPLATE_DIRS+(template_dir ) static_files_dir = os path join(PLUGINS_DIR '%s/static/' % item) if os path isdir(static_files_dir): if static_files_dir not in STATICFILES_DIRS: STATICFILES_DIRS = STATICFILES_DIRS (static_files_dir ) ```` Any help will be appreciated Thank you in advance <blockquote> <h2>SOLUTION:</h2> <strong>EDIT:</strong> So what i did are as following: I include two modules like this: </blockquote> ````from django conf import settings from django utils importlib import import_module ```` <blockquote> And then in root urls py I add following code: </blockquote> ````def prettify(app): return app rsplit(' ' 1)[1] for app in INSTALLED_APPS: try: _module = import_module('%s urls' % app) except: pass else: if 'eats plugins' in app: urlpatterns = patterns('' url(r'^plugins/%s/' % prettify(app) include('%s urls' % app)) ) ```` <blockquote> <strong>Thank you a lot @Yuka Thank you Thank you Thank you Thank you </strong> You make my day </blockquote> | Have you tried something like: ````for app in INSTALLED_APPS: # You will want to check that you can import the urls of that app here urlpatterns = patterns('' url(r'^%s/' % app include('%s urls' % app) ) ) ```` |
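A condensed sketch of that accumulating include loop (Django 1.x-era patterns()/include(), matching the question's setup); note the augmented assignment so each plugin's urls are appended to urlpatterns rather than replacing it:

````
from django.conf import settings
from django.conf.urls import patterns, include, url
from django.utils.importlib import import_module

urlpatterns = patterns('')

for app in settings.INSTALLED_APPS:
    if 'plugins' not in app:
        continue
    try:
        import_module('%s.urls' % app)          # skip plugins that ship no urls module
    except ImportError:
        continue
    short_name = app.rsplit('.', 1)[1]
    urlpatterns += patterns('',
        url(r'^plugins/%s/' % short_name, include('%s.urls' % app)),
    )
````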
python matrix multiplication: how to handle very large matrices? ````a = numpy zeros((17770 5)) b = numpy zeros((5 20000)) ma = numpy matrix(a) mb = numpy matrix(b) ```` That is `ma shape = (17770 5)` `mb shape = (5 20000)` both are `numpy matrix` I need `ma*mb` But I get the error message `"ValueError: array is too big"` Are these matrices too large for Python multiplication? BY THE WAY I tested with python2 6 6/32bit/3GB RAM | I can compute `ma*mb` on my machine (`Python 2 7 1 |EPD 7 0-2 (64-bit)` on 64-bit Ubuntu) Make sure you are using 64-bit Python on a 64-bit OS since a 17770x20000 matrix of double-precision floats requires 2 8GB of RAM which exceeds (or is very close to) what most 32-bit platforms can handle Depending on your requirements using single-precision floats (`numpy float32`) might also be a possibility Lastly if your matrices are sparse or have structure you might want to look into exploiting that to reduce memory requirements |
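To make the single-precision suggestion concrete, a quick sketch of the float32 variant and its memory footprint:

````
import numpy as np

a = np.zeros((17770, 5), dtype=np.float32)
b = np.zeros((5, 20000), dtype=np.float32)

result = np.dot(a, b)                  # shape (17770, 20000)
print(result.nbytes / 1024.0 ** 3)     # ~1.3 GiB as float32, half the float64 footprint
````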
IP address conversion using Python I want to convert an ip address read from a file in the decimal format (192 168 65 72) to one in binary format {110000001010100001000001010001000} I am reading the ip address in the decimal format from a file Find the code snippet below ````/*contains 192 168 65 72*/ filter = open("filter txt" "r") for line in filter: bytePattern = "([01]?\d\d?|2[0-4]\d|25[0-5])" regObj = re compile("\ " join([bytePattern]*4)) for match in regObj finditer(line): m1 m2 m3 m4 = match groups() print "%s %s %s %s" %(m1 m2 m3 m4) ```` I want to convert m1 m2 m3 and m4 each into an 8bit binary value I cannot figure out a way to do that I am new to python Any help will be greatly appreciated Cheers | ````>>> bin( 192 ) '0b11000000' ```` String manipulation does the rest Note however that in an IP all but the first parts are allowed to be zero |
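A small sketch that applies that padding to each octet, which is the string-manipulation step alluded to above:

````
def ip_to_binary(ip):
    # '192.168.65.72' -> '11000000101010000100000101001000'
    return ''.join('{0:08b}'.format(int(octet)) for octet in ip.split('.'))

print ip_to_binary('192.168.65.72')
````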
how to write path of an executable in case of windows when application name has versions appened to it I am totally new to parsing XML I am making an app that reads the configuration from an xml file from disk if its not found it creates a config file but based on system wether it is *nix or windows I want to generate a configuration file XML in linux its easy but for windows I would have to browse the exe file so in windows case how should I handle call to `makeConfFile()` I have made it as public function so if someone uses the API call to this method can happen from anwhere apart from that ````import os import subprocess as sp import xml etree ElementTree as ET from xml dom import minidom from xml etree ElementTree import Element SubElement def getConfFilePath(): confFilePath = os path join(os path expanduser('~') 'moveEditor conf') if os path isfile(confFilePath): return confFile else: # make a configuration xml file at this path if sys platform startswith('linux') or sys platform startswith('darwin'): return confFilePath if makeConfFile(confFilePath) else raise IOError("Configuration file not found ") elif sys platform startswith('win'): # How to do this part def makeConfFile(filePath imageMagic=None ffmpeg=None): root = Element('tools') # set defaults on *nix ffmpeg = ffmpeg if ffmpeg else 'ffmpeg' imageMagic = imageMagic if imageMagic else 'convert' ffmpeg_child = SubElement(root 'ffmpeg' value=ffmpeg) imageMagic_child = SubElement(root 'imageMagic' value=imageMagic) rough_string = ET tostring(root 'utf-8') reparsed = minidom parseString(rough_string) data = reparsed toprettyxml(indent=" ") try: with open(filePath 'w') as fw: fw write(data) except IOError err: print err return return True tree = ET parse(getConfFilePath()) elem = tree getroot() FFMPEG_BINARY = elem[0] attrib['value'] IMAGEMAGICK_BINARY = elem[1] attrib['value'] ```` any help will be appreciated!! | You should be able to get the information you need using Python's <strong>os</strong> module This should work: ````home = os getenv('USERPROFILE') or os getenv('HOME') ```` I found this handy trick here: - <a href="http://ubuntuforums org/showthread php?t=820043" rel="nofollow">http://ubuntuforums org/showthread php?t=820043</a> |
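So the cross-platform piece of getConfFilePath() could be reduced to something like the following sketch (makeConfFile is the helper from the question), leaving only the browse-for-the-executable dialog as Windows-specific work:

````
import os

home = os.getenv('USERPROFILE') or os.getenv('HOME')
conf_file_path = os.path.join(home, 'moveEditor.conf')

if not os.path.isfile(conf_file_path):
    if not makeConfFile(conf_file_path):
        raise IOError("Configuration file not found")
````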
What is the name of the writer of the rule of binding precedent? | null |
dict pop does not free the memory Looks like garbage collector does not collect the values `pop` would from the `dict()` in python 2 7 (did not try on python 3) Here is the example: ````a = dict() # fill the memory (dict) for i in xrange(0 9999999): a[i] = i # Memory usage is about 600 MB # try to free the memory for i in xrange(0 9999999): a pop(i) # print the dict and see it is empty print "%r" % a # prints: {} # Memory usage is about 600 MB import copy a = copy copy(a) # Memory usage decreased to about 200 MB import gc gc collect() # Memory usage decreased to about 10 MB ```` Anybody knows why this happens and how to solve this memory leak issue? | There is no memory leak issue since the memory is freed when the dictionary is not used any more Dictionaries use internal tables to store the entries These tables are not freed when using `pop` because every key is mapped to a hash modulo size of the internal table So the last existing key could lie at the end of this table To illustrate this I will use `sys getsizeof`: ````>>> a= {} >>> sys getsizeof(a) 288 >>> for i in range(9999999): a[i]=i >>> sys getsizeof(a) 402653280 >>> for i in range(9999999): del a[i] >>> sys getsizeof(a) 402653280 >>> a = copy copy(a) >>> sys getsizeof(a) 288 >>> ```` Instead of using excessive `pop`s you should create new dictionaries if necessary |
Parse string with date and timezone to UTC datetime I am coding a Python 3 script that receives a full date with offset and I want to be able to compare it to another date without offset The main problem I am facing is that Python does not seem to like different offset datetime objects as it complains when you try to do any operation with those: ````>>> date_string 'Wed 8 May 2013 15:33:29 0200' >>> new_date = datetime strptime(date_string '%a %d %b %Y %H:%M:%S %z') >>> new_date datetime datetime(2013 5 8 15 33 29 tzinfo=datetime timezone(datetime timedelta(0 7200))) >>> new_date - datetime today() Traceback (most recent call last): File "<stdin>" line 1 in <module> TypeError: cannot subtract offset-naive and offset-aware datetimes ```` As a workaround I have stripped `date_string` into two strings once with the date and one with the offset creating two objects: one date and one delta them sum them up: ````>>> date_string 'Wed 8 May 2013 15:33:29 0200' >>> match = re match(r'( *)(\s\+\d{4})' date_string) >>> match group(1) 'Wed 8 May 2013 15:33:29' >>> match group(2) ' 0200' >>> parsed_date = datetime strptime(match group(1) '%a %d %b %Y %H:%M:%S') >>> match_delta = re match(r'\s\+(\d{2})(\d{2})' match group(2)) >>> parsed_date_delta = timedelta(minutes=int(match_delta group(2)) hours=int(match_delta group(1))) >>> parsed_date_complete = parsed_date parsed_date_delta >>> parsed_date_complete datetime datetime(2013 5 8 17 33 29) ```` With this I am able to get a final hour with the correct offset applied so any comparision with another normal datetime object would not raise an error The thing I am wondering is if there is an easier or more efficient way of achieving this The idea is to receive a string like this one: `Wed 8 May 2013 15:33:29 0200` and be able to convert it in a datetime object without offset so I can work with UTC times EDIT: To explain a little more the issue `new_date` has an offset value of `+0200` while `datetime today()` and `datetime utcnow()` does not have that offset so trying to compare it or do any operation gives the following error in Python: `TypeError: cannot subtract offset-naive and offset-aware datetimes` If date is `Wed 8 May 2013 15:33:29 0200` what I want is a way to calculate date like this: `Wed 8 May 2013 17:33:29` without the offset value but with the right time (offset applied to time) That way since I do not have the offset I can freely do stuff with `datetime today()` and `datetime utcnow()` | I am no python guru but according to <a href="http://docs python org/3 3/library/datetime html" rel="nofollow">these docs</a>: <blockquote> <them>classmethod</them> `datetime today()` Return the current local datetime with `tzinfo` None </blockquote> There is no time zone associated with the return value It is not at UTC the offset is just unspecified It therefore makes sense that it would not allow you to compare the two The result would be meaningless In other words what would you expect the result of this to be? 
````10 May 2013 13:00 0200 > 10 May 2013 12:00 ```` - It <them>could</them> be true because `13:00` is a greater <them>value</them> than `12:00` - It <them>could</them> be false because maybe the local time zone offset is `-0100` so you are comparing the moments `11:00Z > 13:00Z` But who knows if we even meant to use the local offset since we did not specify Since we are referring to an exact moment on the left side but an ambiguous one on the right the operation gives an error It is good that Python gives an error when you try to do this Other frameworks such as Net make some assumptions and return results that might not be what you were expecting (<a href="http://noda-time blogspot com/2011/08/what-wrong-with-datetime-anyway html" rel="nofollow">Read here</a> if you are interested ) So going back to your question you said: <blockquote> The idea is to receive a string like this one: Wed 8 May 2013 15:33:29 0200 and be able to convert it in a datetime object without offset so I can work with UTC times </blockquote> The string you have is already reflecting its offset from UTC The way you are parsing it is just fine You just need to compare it to something that is more meaningful such as `datetime now(timezone utc)` |
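In code, the two workable options look roughly like this (Python 3.3+; the date string is written with the commas and plus sign that the %z directive expects):

````
from datetime import datetime, timezone

date_string = 'Wed, 8 May 2013 15:33:29 +0200'
aware = datetime.strptime(date_string, '%a, %d %b %Y %H:%M:%S %z')

# Option 1: compare two aware datetimes
print(aware < datetime.now(timezone.utc))

# Option 2: normalise to naive UTC, then compare with utcnow()
naive_utc = aware.astimezone(timezone.utc).replace(tzinfo=None)
print(naive_utc < datetime.utcnow())
````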
How long did a student have to study law, in early Islamic law graduate schools, in order to graduate? | ten or more years |
Differentiate celery kombu PyAMQP and RabbitMQ/ironMQ I want to upload images to S3 server but before uploading I want to generate thumbnails of 3 different sizes and I want it to be done out of request/response cycle hence I am using celery I have read the docs here is what I have understood Please correct me if I am wrong - Celery helps you manage your task queues outside the request response cycle - Then there is something called carrot/kombu - its a django middleware that packages tasks that get created via celery - Then the third layer PyAMQP that facilitates the communication of carrot to a broker eg RabbitMQ AmazonSQS ironMQ etc - Broker sits on a different server and does stuff for you Now my understanding is - if multiple users upload image at the same time celery will queue the resizing and the resizing will actually happen at the ironMQ server since it offers a cool addon on heroku Now the doubts: - But what after the image is resized will ironMQ push it to the S3 server or will it notify once the process is completed i am not clear about it - What is the difference between celery and kombu/carrot could you explain vividly | IronMQ does not process your tasks for you; it simply serves as the backend for Celery to keep track of what jobs need to be performed So here is what happens Assume you have two servers your web server and your Celery server Your web server is responsible for handling requests your Celery server creates the thumbnails and uploads them to S3 Here is what a typical request looks like: - Your user uploads the image to your web server - You store that image somewhere--I would recommend putting it on S3 right then personally but you could also store it in for example <a href="http://iron io/cache">IronCache</a> base64-encoded The point is to put it somewhere your Celery server can access it - You queue up a job on Celery passing the location of the image to your Celery server - Your Celery server downloads the image generates your thumbnails and uploads them to S3 It then stores the S3 URLs in the job results - Your web server waits until the job finishes then has access to the results Alternatively you could have your Celery server store the results in the database itself The point is that the Celery server does the heavy lifting (generating the thumbnails) and does not hold up the request loop while it does I wrote an example for using IronMQ on Heroku You can see it here: <a href="http://iron-celery-demo herokuapp com">http://iron-celery-demo herokuapp com</a> You can see the source for the example <a href="https://github com/iron-io/heroku-iron-celery-demo">on Github</a> and <a href="https://github com/iron-io/heroku-iron-celery-demo/blob/master/tutorial md">read the tutorial</a> which explains pretty thoroughly and step-by-step how to deploy Celery on Heroku To clear up the AMQP stuff: - IronMQ is a cloud-based message queue service developed by Iron io - AMQP is an open messaging specification - RabbitMQ is the most popular implementation (that I know of) of the AMQP specification - PyAMQP is a Python library that let us Python clients communicate with any implementation of AMQP including RabbitMQ One of the biggest differences between IronMQ and RabbitMQ/AMQP is that IronMQ is hosted and managed so you do not have to host the server yourself and worry about uptime The spec offers a bunch more in terms of differentiation and there are underlying differences but Celery abstracts most of those away Because you are using Celery the only difference you are 
liable to notice is that IronMQ is hosted, so you do not have to stand up and manage your own server. <strong>Full disclosure</strong>: I am employed by Iron.io, the company behind IronMQ.
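To make the queueing step above concrete, here is a minimal Celery task sketch (not taken from the demo repository); the broker URL, the example image URL and the body of the task are placeholders, and the point is only that the web process calls .delay() and returns immediately while a worker does the slow thumbnail work:
````
# tasks.py -- start a worker with: celery -A tasks worker
from celery import Celery

# Placeholder broker URL; with the iron_celery add-on this would be an ironmq:// URL,
# with RabbitMQ an amqp:// URL like the one below.
app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def generate_thumbnails(image_url):
    """Download, resize and upload thumbnails; return the new URLs."""
    # The real resizing/uploading code (e.g. Pillow + boto) would go here;
    # returning a dummy value keeps the sketch self-contained.
    return ['%s?thumb=%d' % (image_url, size) for size in (64, 128, 256)]

# In the web view (the only part the request/response cycle ever sees):
# async_result = generate_thumbnails.delay('https://example-bucket.s3.amazonaws.com/img.jpg')
# async_result.id can be stored; async_result.get() would block until the worker is done.
````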
What kind of climate does Punjab have? | tropical wet and dry |
Python subprocess stderr redirection redirect warnings still raise on control flow stoppers? I did basic due diligence and could not find a good answer to this anywhere I want to call subprocess Popen in a way that they will still raise a Python exception when control flow is interrupted but I want to redirect stderr at the same time The use case is for a command line client that should not output warnings but whose internal logic still wants to know about subprocess problems As an example this silently redirects all errors: ````subprocess Popen(command stderr=subprocess PIPE) ```` Calling a Python module as a subprocess with contents: ````raise(Exception("AVAST!")) ```` Does not raise anything It would be great if it redirected all error text but still raised on anything that halted control flow prematurely Do I need to use return code and hope that all subprocesses I call have implemented this correctly? The best thing I have thought of so far is manually parsing the redirected errors which is a pretty poor implementation in my mind Is there a clean canonical way to do this? | There is no way to pass exceptions across a text pipe like `stderr` because all you can pass across a text pipe is text But there are a few options: - Make sure all of your children exit with non-zero status on exception (which is the default if you do not do anything) and do not do so in any other cases (unless there are cases you want to treat the same as an exception) - Parse for exceptions in `stderr` - Create some other communication channel for the parent and children to share - Do not use `subprocess` For example if the only reason you are running Python scripts via `subprocess` is to get core parallelism (or memory space isolation) it may be a lot easier to use `multiprocessing` or `concurrent futures` which have already built the machinery to propagate exceptions for you <hr> From your comment: <blockquote> My use case is calling a bunch of non-Python third party things If return codes are how the standard library module propagates errors I am happy enough using them </blockquote> No the Python standard library propagates errors using `Exception`s And so should you Return codes are how non-Python third party things propagate errors (Actually how they propagate both errors and unexpected signals butâ¦Â do not worry about that ) That is limited to 7 bits worth of data and the meanings are not really standardized so it is not as good But it is all you have in the POSIX child process model so that is what generic programs use What you probably want to do is the same thing <a href="http://docs python org/2/library/subprocess html#subprocess check_call" rel="nofollow">`subprocess check_call`</a> doesâif the return code is not zero raise an exception (In fact if you are not doing anything asynchronous ask yourself whether you can use `check_call` in the first place instead of using a `Popen` object explicitly ) For example if you were doing this: ````output errors = p communicate() ```` Change it to this: ````output errors = p communicate() if p returncode: raise subprocess CalledProcessError(p returncode 'description') ```` (The description is usually the subprocess path or name ) Then the rest of your code can just handle exceptions |
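On Python 3.5 and later, the same "redirect stderr but still raise on failure" pattern can be written with subprocess.run and check=True; this sketch is an addition, not part of the answer above, and the child command is a placeholder:
````
import subprocess

try:
    # check=True raises CalledProcessError on a non-zero exit status
    completed = subprocess.run(
        ['python', 'child_script.py'],          # placeholder command
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)
except subprocess.CalledProcessError as exc:
    # stderr text is still captured for logging/parsing, not shown to the user
    print('child failed with code %d' % exc.returncode)
    print(exc.stderr.decode(errors='replace'))
else:
    print(completed.stdout.decode())
````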
What people do Jews show the least genetic resemblance to? | null |
Django 1 6 Admin page CSS not working with Bluehost I am trying to upload my Django App to Bluehost and for some reason none of the CSS is working on the admin page I have been touring the web for hours looking for solutions but I have not found any specifically tailored to Bluehost or Django 1 6 This Question was the closest thing to what I need but it still is not working <a href="http://stackoverflow com/questions/14883036/djangos-admin-is-missing-css-images-etc-unable-to-properly-set-up-static-fi">Django's admin is missing css images etc - unable to properly set up static files on shared host</a> All of my static files are now in /public_html/static/ it worked for that person so I am curious to know what I am doing wrong Here is the pertinent code ````# Static files (CSS JavaScript Images) # https://docs djangoproject com/en/1 6/howto/static-files/ MEDIA_ROOT = '/home/mySite/Django/WebApp/quiz' MEDIA_URL = '/media/' STATIC_ROOT = '/home/mySite/public_html/QuizApp/static/' STATIC_URL = '/static/' ADMIN_MEDIA_PREFIX = '/static/admin/' STATICFILES_DIRS = ( os path join(BASE_DIR "static") ) ```` After I edited settings py I did run collectstatic Also I feel like this may be important as well ````TEMPLATE_DIRS = [os path join(BASE_DIR 'tempaltes')] ```` and the htaccess file because Bluehost does not allow you to access the httpd conf file ````AddHandler fcgid-script fcgi RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_URI} !=/favicon ico RewriteCond %{REQUEST_URI} !^/static/ RewriteRule ^( *)$ QuizApp fcgi/$1 [QSA L] ```` If you all need anything else let me know This is my first experience with web development python and CSS in general so please be as detailed as possible I literally know nothing | If you copied and pasted your code directly from your files then the first thing you might try is correcting the typo in your template_dirs assignment You have: ````TEMPLATE_DIRS = [os path join(BASE_DIR 'tempaltes')] ```` It should be: ````TEMPLATE_DIRS = [os path join(BASE_DIR 'templates')] ```` Also you might check whether your STATIC_ROOT is set accurately It should be: ````STATIC_ROOT = '/home/<user>/public_html/subdirectory/static/' ```` Your subdirectory looks fine (QuizApp) but it looks like you have the name of your project site 'mySite' where your username should be With bluehost you want your static_root to be /home/username/ UPDATED ANSWER: Your STATIC_ROOT is the full-path to the directory where your static files will be moved to and stored (when you run collectstatic) Your STATIC_URL is the url your app will refer to when sourcing static files With bluehost anything in a file path after '/public_html/' is accessible by the web This means any folder after '/public_html/' can/must show up in a url (after your domain name) pointing to any file in that folder In this case your STATIC_ROOT is '/home/mySite/public_html/QuizApp/static/' but your STATIC_URL is '/static/' so it looks like your url is pointing to the wrong place So if for example your domain name is "coolsite com" then your static files are being stored in 'www coolsite com/QuizApp/static/' but the url you are pointing to is just 'www coolsite com/static/' So try changing your STATIC_URL to '/QuizApp/static/' and then run 'python manage py collectstatic' again |
How to extract hours and minutes from a datetime.datetime object? I am required to extract the time of day from the datetime.datetime object returned by the created_at attribute, but I do not understand how to do that. This is my code for getting the datetime.datetime object:
````
from datetime import *
import tweepy

consumer_key = ''
consumer_secret = ''
access_token = ''
access_secret = ''

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth)

tweets = tweepy.Cursor(api.home_timeline).items(limit=2)
t1 = datetime.strptime('Wed Jun 01 12:53:42 +0000 2011', '%a %b %d %H:%M:%S +0000 %Y')
for tweet in tweets:
    print (tweet.created_at - t1)
    t1 = tweet.created_at
````
I need to extract only the hour and minutes from t1 | I do not know how you want to format it, but you can do:
````
print("Created at %s:%s" % (t1.hour, t1.minute))
````
for example
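If a formatted string is wanted rather than two integers, strftime (or .time()) does it in one call; a small sketch, with t1 standing in for any datetime value such as tweet.created_at:
````
from datetime import datetime

t1 = datetime(2011, 6, 1, 12, 53, 42)   # stands in for tweet.created_at
print(t1.hour, t1.minute)               # 12 53, as integers
print(t1.strftime('%H:%M'))             # '12:53', zero-padded string
print(t1.time())                        # 12:53:42, a datetime.time object
````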
Shared memory and communication between programs I read this: <a href="http://stackoverflow.com/questions/1109870/python-singleton-into-multiprocessing">http://stackoverflow.com/questions/1109870/python-singleton-into-multiprocessing</a> but I did not find the solution to my problem. I have to run several copies of the same program (not the same process) at the same time. The programs work with the same electronic devices, and I must synchronize them: only one program can use a device at any given moment. Do you have any suggestions for how I can resolve this problem? | You could use lockfiles in the filesystem
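A minimal sketch of the lockfile idea on a POSIX system, using fcntl.flock so the lock is released automatically even if a program crashes; the lock path and the device-handling code are placeholders:
````
import fcntl

LOCK_PATH = '/tmp/device.lock'   # placeholder path shared by all program instances

def use_device():
    with open(LOCK_PATH, 'w') as lock_file:
        # Blocks until no other process holds the lock on this file
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            pass  # talk to the device here; only one process gets this far at a time
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)

use_device()
````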
Best method to locate object with opencv What is the most efficient way, using OpenCV and Python, to locate and return the coordinates of a single object that has a constant color? I have seen a few ways to do so using C++, but info on object location using Python is sparse | I am not sure what you are looking for exactly, but I think you should have a look at <a href="http://opencv.willowgarage.com/wiki/FaceDetection" rel="nofollow">Haar classification</a> See here for an example: <a href="http://cgi.cse.unsw.edu.au/~cs4411/wiki/index.php?title=OpenCV_Guide#Haar_Classifier" rel="nofollow">http://cgi.cse.unsw.edu.au/~cs4411/wiki/index.php?title=OpenCV_Guide#Haar_Classifier</a>
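Because the object has a constant color, plain color thresholding is a simpler alternative to the Haar route suggested above; this sketch (not from the answer) assumes the cv2 bindings and that the HSV bounds are tuned to the object's color, and it returns the centroid of the matching pixels:
````
import cv2
import numpy as np

def find_colored_object(image_bgr, lower_hsv, upper_hsv):
    """Return the (x, y) centroid of pixels inside the given HSV range, or None."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    moments = cv2.moments(mask, binaryImage=True)
    if moments['m00'] == 0:       # no pixels matched the color range
        return None
    return (int(moments['m10'] / moments['m00']),
            int(moments['m01'] / moments['m00']))

# Example: a rough "green" range in HSV
# print(find_colored_object(cv2.imread('frame.png'), (40, 70, 70), (80, 255, 255)))
````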
Pandas: Get label for value in Series Object How is it possible to retrieve the label of a particular value in a pandas Series object? For example:
````
labels = ['a', 'b', 'c', 'd', 'e']
s = Series(arange(5) * 4, labels)
````
Which produces the Series:
````
a     0
b     4
c     8
d    12
e    16
dtype: int64
````
How is it possible to get the label of the value 12? Thanks | You can get the subseries by:
````
In [90]: s[s==12]
Out[90]:
d    12
dtype: int64
````
Moreover you can get those labels by:
````
In [91]: s[s==12].index
Out[91]: Index([d], dtype=object)
````
Isolating Values in Python Dictionary List I am trying to write the code for most flexible eater as the organism that eats the greatest number of other organisms in the food chain which happens to be the bird So far the code I have written is: <hr> ````foodweb = {} with open('AquaticFoodWeb txt') as input: for line in input: animal prey = line strip() split(' eats ') foodweb setdefault(animal []) append(prey) print ("Predators and Prey:") for animal prey in sorted(foodweb items()): if len(prey) > 1: print ("{} eats {} and {}" format(animal " " join(prey[:-1]) prey[-1])) else: print ("{} eats {}" format(animal " " join(prey))) print (" ") #Apex values = [item strip() for sub in foodweb values() for item in sub] for apex in foodweb keys(): if apex strip() not in values: print("Apex Predators: " apex) print (" ") #Producers producers = [] allpreys = [item strip() for sub in foodweb values() for item in sub] for p in allpreys: if p strip() not in foodweb keys() and p not in producers: producers append(p) print("The Producers Are:") print(formatList(producers)) ```` <hr> So I have written the code for isolating Apex Predators and Producers and was wondering if the code needed to write the flexible eat is along the lines of this? I apologize for not having attempted writing the flexible eater code I do not understand what part of the keys and values need to be entered in order to isolate the bird value <strong>For reference this is the list:</strong> ````Bird eats Prawn Bird eats Mussels Bird eats Crab Bird eats Limpets Bird eats Whelk Crab eats Mussels Crab eats Limpets Fish eats Prawn Limpets eats Seaweed Lobster eats Crab Lobster eats Mussels Lobster eats Limpets Lobster eats Whelk Mussels eats Phytoplankton Mussels eats Zooplankton Prawn eats Zooplankton Whelk eats Limpets Whelk eats Mussels Zooplankton eats Phytoplankton ```` <strong>And the output is supposed to say:</strong> Most Flexible Eaters: Bird Any tips would be greatly appreciated | You can calculate this value directly from your existing `foodweb` dictionary as follows: ````print("Most Flexible Eaters: {}" format(sorted(foodweb items() key=lambda x: -len(x[1]))[0][0])) ```` This would display: ````Most Flexible Eaters: Bird ```` This works by sorting the dictionary items by the length of their values and selecting the first item in the list To avoid the use of a `lambda` it could be written as follows: ````def get_length(x): return -len(x[1]) print("Most Flexible Eaters: {}" format(sorted(foodweb items() key=get_length)[0][0])) ```` Note adding a `-` to the returned length is just a trick to reverse the sort order alternatively `reverse=True` could be added as an argument to the sort to have the same effect |
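A slightly shorter equivalent of the sort-based line above is max() with the same key; this is only a rephrasing of the answer's idea (ties still yield a single name), shown here with a hard-coded slice of the food web so the snippet runs on its own:
````
# foodweb maps each animal to the list of things it eats, as built above
foodweb = {'Bird': ['Prawn', 'Mussels', 'Crab', 'Limpets', 'Whelk'],
           'Crab': ['Mussels', 'Limpets'],
           'Fish': ['Prawn']}

most_flexible = max(foodweb.items(), key=lambda item: len(item[1]))[0]
print("Most Flexible Eaters: {}".format(most_flexible))   # Most Flexible Eaters: Bird
````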
Pip does not know where numpy is installed Trying to uninstall numpy, I ran pip uninstall numpy. It tells me that it is not installed. However, numpy is still installed at /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy How can I make sure pip finds numpy? | Maybe run <strong>deactivate</strong> if you are running virtualenv?
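A quick diagnostic (not part of the answer) to see which interpreter actually owns the numpy that keeps showing up, and therefore which pip needs to be used to remove it:
````
import sys
import numpy

print(sys.executable)     # the Python binary you are currently running
print(numpy.__file__)     # where this interpreter imports numpy from
# If these do not belong to the Python that your `pip` command targets, run pip
# through that interpreter instead, e.g.:  /path/to/python -m pip uninstall numpy
````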
How to install OpenCV for python HI! I am trying to install opencv and use it with python but when I compile it I get no errors but I cannot import cv module from python: ````patrick:release patrick$ python Python 2 6 1 (r261:67515 Feb 11 2010 00:51:29) [GCC 4 2 1 (Apple Inc build 5646)] on darwin Type "help" "copyright" "credits" or "license" for more information >>> import cv Traceback (most recent call last): File "<stdin>" line 1 in <module> ImportError: No module named cv ```` The code I used to compile it is this: ````cd opencv mkdir release cd release cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON make sudo make install ```` how can I get it working with python? | You could try <a href="http://code google com/p/ctypes-opencv/" rel="nofollow">ctypes-opencv</a> -- not sure why building and installing with `-D BUILD_PYTHON_SUPPORT=ON` did not work for you (maybe it does not know where to install the Python wrappers in osx ?) but the ctypes wrappers should in theory work anyway |
Infinite loop in python treasure hunt game So I have been creating this simple treasure hunt game where you look for three treasures in a board But after 6 guesses it gets me stuck in a loop! The X's represent areas you have searched and the $ signs are treasures you have found Please help!!!! ````import random def hide_treasure(board): treasures=0 while treasures<=3: random_row=random randrange(0 5) random_col=random randrange(0 5) if(0<=random_row<5) and(0<=random_col<5) and (board[random_row] [random_col]==" "): board[random_row][random_col]="T" treasures+=1 def display_board(board show_treasure=False): for col in range(5): print " %d " %col print for row in range(5): print " %d:" %(row) " | " join(board[row]) replace("T" " ") print " ---+---+---+---+---" if show_treasure==True: " " replace(" " "T") def make_user_move(board): valid_move=False while not valid_move: try: ask_row=input("What row would you like to search (0-4): ") ask_col=input("What col would you like to search (0-4): ") if board[ask_row][ask_col]=="T": board[ask_row][ask_col]="$" print print"YES! You found a treasure " return True elif board[ask_row][ask_col]=="$" or board[ask_row][ask_col]=="X": print print"You already tried there please pick again " else: board[ask_row][ask_col]="X" print print"Nothing there " break except ValueError: print"Integers only for row and column values Please try again!" continue except IndexError: print print"Sorry invalid location Please try again!" def main(): board=[[" " " " " " " " " "] [" " " " " " " " " "] [" " " " " " " " " " ] [" " " " " " " " " "] [" " " " " " " " " "]] print"WELCOME TO TREASURE HUNT!" guess=10 treasures=0 while guess!=0 and treasures!=3: print print"You have" guess "guesses left and have found" treasures "/3 treasures" hide_treasure(board) display_board(board) guess-=1 if make_user_move(board): treasures+=1 if guess==0 and treasures!=3: display_board(show_treasure=True) print"OH NO! You only found %d"%treasures "/3 treasures " print print"*** GAME OVER ***" elif treasures==3: display_board(board) print"CONGRATULATIONS! You found ALL of the hidden treasure " print print"*** GAME OVER ***" main() ```` | The issue you are having is that you run out of room to hide treasures in Your current code does not just hide three treasures it hides three treasures at the start then three more after each guess After six guesses there are no spaces left that have not either been guessed already or have a treasure hidden in them You probably want to move the call to `hide_treasure` out of the `while` loop in `main` Just call it once at the start and you will be good: ````def main(): board=[[" " " " " " " " " "] [" " " " " " " " " "] [" " " " " " " " " " ] [" " " " " " " " " "] [" " " " " " " " " "]] print"WELCOME TO TREASURE HUNT!" guess=10 treasures=0 hide_treasure(board) ### call this here instead of in the loop below while guess!=0 and treasures!=3: print print"You have" guess "guesses left and have found" treasures "/3 treasures" # hide_treasure(board) ### remove this! display_board(board) guess-=1 if make_user_move(board): treasures+=1 # ```` |
How python multiprocessing works? I would like to know how multiprocessing works Does this mean I can run `8 separate process` simultaneously if I have `8 core CPU` ? For example here is my processor information ````giri@ubuntu:~$ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit 64-bit Byte Order: Little Endian CPU(s): 1 On-line CPU(s) list: 0 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 42 Stepping: 7 CPU MHz: 2294 762 BogoMIPS: 4589 52 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 3072K NUMA node0 CPU(s): 0 ```` Here is my cpuinfo ````giri@ubuntu:~$ cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 42 model name : Intel(R) Core(TM) i3-2350M CPU @ 2 30GHz stepping : 7 microcode : 0x23 cpu MHz : 2294 762 cache size : 3072 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx lahf_lm arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid bogomips : 4589 52 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical 48 bits virtual power management: ```` As you can see my number of CPU(s) is `1` Does this mean multiprocessing is useless in my situation? | You can run N thread simultaneously if you have N cores (if threads per core = 1) Though if you try to run more threads than you have processors the operating system will run your threads one after the other by small 'slices' and they will seem to run in parallel So multi threading will not allow your code to be faster in this case but it is often interesting you can for instance have a thread waiting for the disk to reply while another waits for the network and a third one runs i3-2350M should be 2 cores / 2 threads per core so 4 threads running at the same time |
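As a small illustration of the point above: worker processes still all run on a single-core machine, they are just time-sliced rather than truly parallel. A sketch using multiprocessing.Pool, where cpu_count() reports what lscpu showed (1 here) and is the usual default pool size:
````
import multiprocessing

def square(n):
    return n * n

if __name__ == '__main__':
    print('cores reported:', multiprocessing.cpu_count())   # 1 on the machine above
    # More worker processes than cores is allowed; they simply share the core(s)
    with multiprocessing.Pool(processes=4) as pool:
        print(pool.map(square, range(10)))
````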
Comparing Values of two lists and Printing to a file I have a program that is supposed to be comparing the contents of a list to values returned by a tinter treeview and if the values do not match writing the element of the list to a file The idea is to allow the user to remove an element in the tree (which is populated by reading from a the same file I am trying to write to) here is the code: ```` selected_book = info_box focus() del_book = info_box item(selected_book 'values') title_file_clear = open("titles" 'w') author_file_clear = open("authors" 'w') title_file_clear close() author_file_clear close() title_file_3 = open("titles" "a") author_file_3 = open("authors" "a") for i in range(0 len(titles)): if titles[i] == del_book[0] is False: print(titles[i] file=title_file_3) for i in range(0 len(authors)): if authors[i] == del_book[1] is False: print(authors[i] file=author_file_3) title_file_3 close() author_file_3 close() ```` But all it seems to do is blank the files (I do know this is not likely to be the most efficient piece of code but I have been tweaking it for a while to try to get it to work) | Use `if titles[i] != del_book[0]:` instead of `if titles[i] == del_book[0] is False:` To append file instead of writing new lines take a look <a href="http://stackoverflow com/questions/4706499/how-do-you-append-to-file-in-python">at this question</a> |
AttributeError in tkinter gui programming I want to display my calculated output in a Gui window in python I am trying with Tkinter But I am having problems displaying the output on Tkinter level widget I am putting input data as address information in text field of Tkinter window and want latitude longitude of that inputed address to the text label Can anyone please help me out of this? I am just quite new to this Tkinter code is below: ````def initialize(self): self grid() self entry = Tkinter Entry(self) self entry grid(column=0 row=0 sticky='EW') button = Tkinter Button(self text=you"Get Geo information !" command=self OnButtonClick) button grid(column=1 row=0) self labelVariable = Tkinter StringVar() label = Tkinter Label(self textvariable=self labelVariable anchor="w" fg="black" bg="white") label grid(column=0 row=1 columnspan=2 sticky='EW') self grid_columnconfigure(0 weight=1) self resizable(True False) def OnButtonClick(self): outf = open(out_file 'w') outf_failed = open(out_file_failed 'w') #inf = open(addr_file 'r') inf = codecs open(addr_file 'r' 'iso-8859-1') for address in inf: #get latitude and longitude of address data = geocode(address) #output results and log to file if len(data)>1: self labelVariable set( self entryVariable get()+" (Latitude )" data['lat'] ) self labelVariable set( self entryVariable get()+" (Longitude )" data['lng'] ) outf write(address strip()+data['lat']+' '+data['lng']+'\n') outf flush() else: self labelVariable set( self entryVariable get()+" Geocoding of '"+addr_file+"' failed with error code "+data['code'] ) outf_failed write(address) outf_failed flush() time sleep(sleep_time) #clean up inf close() outf close() outf_failed close() if __name__ == "__main__": app = simpleapp_tk(None) app title('Your Location') app mainloop() ```` And I got error: ````File "F:\JavaWorkspace\Test\src\gui_geo_location py" line 94 in OnButtonClick self labelVariable set( self entryVariable get()+" (Latitude )" data['lat'] ) File "C:\Python25\lib\lib-tk\Tkinter py" line 1721 in __getattr__ return getattr(self tk attr) AttributeError: entryVariable ```` Here is my `__init__` method: ````def __init__(self parent): Tkinter Tk __init__(self parent) self parent = parent self initialize() def initialize(self): self grid() self entry = Tkinter Entry(self) self entry grid(column=0 row=0 sticky='EW') self entry bind("<Return>" self OnPressEnter) button = Tkinter Button(self text=you"Get Geo information !" command=self OnButtonClick) button grid(column=1 row=0) self labelVariable = Tkinter StringVar() label = Tkinter Label(self textvariable=self labelVariable anchor="w" fg="black" bg="white") label grid(column=0 row=1 columnspan=2 sticky='EW') self grid_columnconfigure(0 weight=1) self resizable(True False) ```` | On line 94 in `F:\JavaWorkspace\Test\src\gui_geo_location py` you are using `self entryVariable` but that object does not have an `entryVariable` attribute Based on your `__init__` it seems you have not defined `entryVariable` anywhere Try adding: ````self entryVariable = Tkinter StringVar() ```` to your `__init__` method See <a href="http://sebsauvage net/python/gui/" rel="nofollow">this page</a> for more information |
Python : Getting an Indexing error when trying to append data to a list I have data in a list: ls1 which prints fine when giving the print function ````[5 2 7 4 3 9 8 6 10] ```` However I am getting an error when trying this: ````P81=[] P81 append(ls1[5]) ```` Is there anything wrong with the code? Here is a full copy for reference The code is just a key generation function which accepts a 10 element list and performs some permutations and shifts leftShift is a function which just performs a shift operation on the list ````def keyGen(key): import numpy #3 5 2 7 4 10 1 9 8 6 P10=[] P10 append(key[2]) P10 append(key[4]) P10 append(key[1]) P10 append(key[6]) P10 append(key[3]) P10 append(key[9]) P10 append(key[8]) P10 append(key[7]) P10 append(key[5]) #Now P10 contains the keys after initial permutation #Take 2 halves and perform left shift ls1a=leftShift(P10[0:5]) ls1b=leftShift(P10[5:10]) ls1=ls1a+ls1b P81=[] #6 3 7 4 8 5 10 9 print ls1 P81 append(ls1[5]) P81 append(ls1[2]) P81 append(ls1[6]) P81 append(ls1[3]) P81 append(ls1[7]) P81 append(ls1[4]) P81 append(ls1[9]) P81 append(ls1[8]) #For the second set of keys perform the second shift ls2a=leftShift(ls1a) ls2b=leftShift(ls1b) ls2=ls2a+ls2b P82=[] P82 append(ls2[5]) P82 append(ls2[2]) P82 append(ls2[6]) P82 append(ls2[3]) P82 append(ls2[7]) P82 append(ls2[4]) P82 append(ls2[9]) P82 append(ls2[8]) return([P81 P82]) ```` | The index error is for indexing into `ls1` <them>not</them> the ` append()` call Your `ls1` does not have 10 elements yet you try to index that many: ````P81 append(ls2[9]) P81 append(ls2[8]) ```` You only ever appended 9 elements to `P10` (you ignored `key[0]`) so your assumptions already fall apart there As a result provided `leftShift` does not lose any more elements `ls1` is 9 elements long so: ````P81 append(ls1[9]) ```` will fail Even if it does not you ignore `ls1[0]` and `ls1[1]` `ls2` suffers from the same problem; there are 9 elements in that list not 10 provided `leftShift` does not drop any elements |
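One way to avoid both the dropped element and the long runs of append calls is to build each permutation from its table with a list comprehension; this sketch uses the P10 order given in the question's comment (3 5 2 7 4 10 1 9 8 6) and keeps all 10 elements, so later indexes such as ls1[9] exist:
````
P10_ORDER = [3, 5, 2, 7, 4, 10, 1, 9, 8, 6]   # 1-based positions, from the question's comment

def permute(key, order):
    """Return the elements of key rearranged according to a 1-based position table."""
    return [key[i - 1] for i in order]

key = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]          # any example 10-bit key
P10 = permute(key, P10_ORDER)
print(len(P10))   # 10 -- the manual version in the question only appended 9 items
````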
Replace all ascii symbols (other than alphabets) into HTML number in Python I need to replace all the ascii symbols other than alphabets into HTML number (<a href="http://www ascii cl/htmlcodes htm" rel="nofollow">http://www ascii cl/htmlcodes htm</a>) From this post(<a href="http://stackoverflow com/questions/701704/convert-html-entities-to-unicode-and-vice-versa">Convert HTML entities to Unicode and vice versa</a>) I could use this code but I still cannot get `*` (or maybe many other characters) working What could be the solution? Just simple replacements could be the only solution? ````>>> from BeautifulSoup import BeautifulStoneSoup as bs >>> import cgi >>> cgi escape("<*>") encode('ascii' 'xmlcharrefreplace') '&lt;*&gt;' ```` | Your question is a bit vague I will assume that by "alphabets" you mean all characters from a-z and their uppercase variants Then you can achieve the desired result using a regular expression: ````>>> f = lambda s: re sub(r'([^a-zA-Z])' lambda x: '&#{};' format(ord(x group(0))) s) >>> f("<hi>") '&#60;hi&#62;' >>> f("<*>") '&#60;&#42;&#62;' ```` Please note that without knowing about your special application this looks like a weird thing to do There might be a better approach to solve the real underlying problem |
What two centres of Mamluk power was the country located? | Cairo and Damascus |
Nested memcache lookups in Python o(n) good/bad? Is something like this bad with memcache? ````1 GET LIST OF KEYS 2 FOR EACH KEY IN LIST OF KEYS - GET DATA ```` I am expecting the list of keys to be around ~1000 long If this is bad I am wondering if there is a better way to do this? I figured memcache might be fast enough where such an O(n) query might not be so important I would never do this in MySQL for example Thanks | This will be slower than it needs to be because each request will wait for the previous one to complete before being sent If there is any latency at all to the memcache server this will add up quickly: if there is just 100uS of latency (a typical Ethernet round-trip time) these 1000 lookups will take a tenth of a second which is a long time in many applications The correct way of doing this is making batch requests: sending many requests to the server simultaneously then receiving all of the responses back so you do not take a latency penalty repeatedly The python-memcache module has the `get_multi` method to do this for you |
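For completeness, a sketch of the batched lookup with the python-memcache client; the server address and key names are placeholders, and get_multi returns a dict containing only the keys that were actually found:
````
import memcache

client = memcache.Client(['127.0.0.1:11211'])        # placeholder server address
keys = ['item:%d' % i for i in range(1000)]          # placeholder key names

# One batched round trip per server instead of ~1000 sequential get() calls
found = client.get_multi(keys)

misses = [k for k in keys if k not in found]         # get_multi omits keys it did not find
print('hits: %d, misses: %d' % (len(found), len(misses)))
````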