input_text (string, 1–40.1k chars) | target_text (string, 1–29.4k chars, nullable ⌀)
---|---|
Python/ materialized paths: recursively create nested dict from flat list I am trying to create a nested dict structure from a flat list of dicts that contain a path string from mongodb, in order to build up a tree that will be displayed in d3. For instance, here is an example set of data: ````
[
    { "_id": 1, "name": "var", "path": "/" },
    { "_id": 2, "name": "var", "path": "/var/" },
    { "_id": 3, "name": "log", "path": "/var/var/" },
    { "_id": 4, "name": "log2", "path": "/var/var/" },
    { "_id": 5, "name": "uwsgi", "path": "/var/var/log/" },
    { "_id": 6, "name": "nginx", "path": "/var/var/log2/" },
    { "_id": 7, "name": "error", "path": "/var/var/log2/nginx/" },
    { "_id": 8, "name": "access", "path": "/var/var/log2/nginx/" }
]
```` I need to get the data into this sort of format of nodes with a name attribute and a list of children to get the chart to display: ````
{
    'name': 'var', '_id': 1,
    'children': [
        {
            'name': 'var', '_id': 2,
            'children': [
                {
                    '_id': 3, 'name': 'log',
                    'children': [
                        { '_id': 5, 'name': 'uwsgi', 'children': [] }
                    ]
                },
                {
                    '_id': 4, 'name': 'log2',
                    'children': [
                        {
                            '_id': 6, 'name': 'nginx',
                            'children': [
                                { '_id': 7, 'name': 'error', 'children': [] },
                                { '_id': 8, 'name': 'access', 'children': [] }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
```` I tried something like this, which did not work: ````
def insert_node(d, res):
    if not res.get("children"):
        res["children"] = []
    if d["path"] == res["path"]:
        res["children"].append(d)
    else:
        for c in res["children"]:
            insert_node(d, c)

root = nodes[0]
for node in nodes[1:]:
    insert_node(node, root)
```` Is there a classy recursive way to populate the nested dict structure? | Possibly this will fix it? A node's children have a path equal to the parent's path plus the parent's name plus `/`, so compare against that instead of the parent's path alone: ````
def insert_node(d, res):
    if not res.get("children"):
        res["children"] = []
    if d["path"] == res["path"] + res["name"] + "/":
        res["children"].append(d)
    else:
        for c in res["children"]:
            insert_node(d, c)

root = nodes[0]
for node in nodes[1:]:
    insert_node(node, root)
```` |
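To sanity-check the corrected comparison `res["path"] + res["name"] + "/"`, here is a minimal self-contained run on a trimmed version of the sample data, assuming (as the question's data implies) that parents appear in the flat list before their children:

```python
def insert_node(d, res):
    # Attach d under the node whose path + name + "/" matches d's path
    if not res.get("children"):
        res["children"] = []
    if d["path"] == res["path"] + res["name"] + "/":
        res["children"].append(d)
    else:
        for c in res["children"]:
            insert_node(d, c)

nodes = [
    {"_id": 1, "name": "var", "path": "/"},
    {"_id": 2, "name": "var", "path": "/var/"},
    {"_id": 3, "name": "log", "path": "/var/var/"},
]
root = nodes[0]
for node in nodes[1:]:
    insert_node(node, root)

print(root["children"][0]["_id"])                 # 2
print(root["children"][0]["children"][0]["_id"])  # 3
```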
Class instance as static attribute Python 3 does not allow you to reference a class inside its body (except in methods): ````
class A:
    static_attribute = A()

    def __init__(self):
        ...
```` This raises a `NameError` in the second line because `'A' is not defined`. <h1>Alternatives</h1> I have quickly found one workaround: ````
class A:
    @property
    @classmethod
    def static_property(cls):
        return A()

    def __init__(self):
        ...
```` Although this is not exactly the same, since it returns a different instance every time (you could prevent this by saving the instance to a static variable the first time). Are there simpler and/or more elegant alternatives? <em><strong>EDIT:</strong></em> I have moved the question about the reasons for this restriction to a <a href="http://stackoverflow.com/questions/34558324/why-doesnt-python-allow-referencing-a-class-inside-its-definition">separate question</a> | The expression `A()` cannot be run until the class `A` has been defined. In your first block of code, the definition of `A` is not complete at the point you are trying to execute `A()`. Here is a simpler alternative: assign the attribute after the class body has finished executing: ````
class A:
    def __init__(self):
        ...

A.static_attribute = A()
```` |
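A quick sketch of why the answer's approach works: by the time the assignment line runs, the `class` statement has finished, so the name `A` is bound and the instance is shared by all later instances (the `value` attribute below is just an illustrative stand-in, not part of the question's code):

```python
class A:
    def __init__(self):
        self.value = 42

# The name A exists once the class statement has finished executing,
# so a single shared instance can be attached here:
A.static_attribute = A()

print(A.static_attribute.value)                    # 42
print(A().static_attribute is A.static_attribute)  # True: looked up via the class
```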
Where are Whitehead's works primarily studied in English-speaking countries? | Claremont and a select number of liberal graduate-level theology and philosophy programs |
Having trouble quitting sched scheduler module? I am using Selenium Webdriver in my program in order to try and automate something. I am then parsing the resulting page and checking for a specific element in the page. If the page does not have the specific element, then I use sched.scheduler to re-automate the task by having the user click a button (in the Tkinter GUI). The button runs a function which schedules a task for sched.scheduler and has the task be sent to a function in which I created a new process from the multiprocessing module. This is basically what it is: ````
import time
import sched
from multiprocessing import Process

# the function needs to run for the first time, then waits for user input if an error shows up
# if it is the second time around, the worker function runs the scheduler
global first_time_happening
first_time_happening = True
terminate = False
scheduler = sched.scheduler(time.time, time.sleep)

def worker():
    # insert some working process here using selenium webdriver
    print("Worker happened!")
    global first_time_happening
    if first_time_happening:
        first_time_happening = False
    elif not first_time_happening:
        global relay_to_timer
        relay_to_timer = scheduler.enter(5, 2, timer)
        scheduler.run()

def process():
    p = Process(target=worker)
    # p.daemon = True
    p.start()

def timer():
    if not terminate:
        global relay_to_process
        relay_to_process = scheduler.enter(5, 2, process)
        scheduler.run()
    if terminate:
        scheduler.cancel(relay_to_process)
        scheduler.cancel(relay_to_timer)

def quit_button():
    global terminate
    terminate = True
    if scheduler.empty:
        print("The line is empty")
    elif not scheduler.empty:
        print("Something in the queue!")
    while not scheduler.empty:
        scheduler.cancel(relay_to_process)
        scheduler.cancel(relay_to_timer)

worker()
# simulating where the GUI asks a question, person presses a button, and the button redirects them
# to function worker()
worker()
# simulating a user pressing the quit button
quit_button()
```` It keeps running even after I "hit" quit (or call the quit function in this case). I keep getting "the queue is empty", but I am not sure why it is not working? Any help is appreciated, thanks!! | The scheduler keeps running even with an empty queue, just in case somebody (presumably another thread) enters something again. I believe the way to make it end is to raise an exception (whether from the action or delay function) -- `run` will propagate it and you can catch it. To wit: ````
class AllDoneException(Exception):
    pass

def worker():
    # insert some working process here using selenium webdriver
    print("Worker happened!")
    global first_time_happening
    if first_time_happening:
        first_time_happening = False
    elif not first_time_happening:
        global relay_to_timer
        relay_to_timer = scheduler.enter(5, 2, timer)
        try:
            scheduler.run()
        except AllDoneException:
            pass
```` and in function `timer`: ````
    if terminate:
        raise AllDoneException
```` |
parsing nested xml in python I have this XML file: <pre class="lang-xml prettyprint-override">`<?xml version="1.0" ?>
<XMLSchemaPalletLoadTechData xmlns="http://tempuri.org/XMLSchemaPalletLoadTechData.xsd">
  <TechDataParams>
    <RunNumber>sample</RunNumber>
    <Holder>sample</Holder>
    <ProcessToolName>sample</ProcessToolName>
    <RecipeName>sample</RecipeName>
    <PalletName>sample</PalletName>
    <PalletPosition>sample</PalletPosition>
    <IsControl>sample</IsControl>
    <LoadPosition>sample</LoadPosition>
    <HolderJob>sample</HolderJob>
    <IsSPC>sample</IsSPC>
    <MeasurementType>sample</MeasurementType>
  </TechDataParams>
  <TechDataParams>
    <RunNumber>sample</RunNumber>
    <Holder>sample</Holder>
    <ProcessToolName>sample</ProcessToolName>
    <RecipeName>sample</RecipeName>
    <PalletName>sample</PalletName>
    <PalletPosition>sample</PalletPosition>
    <IsControl>sample</IsControl>
    <LoadPosition>sample</LoadPosition>
    <HolderJob>sample</HolderJob>
    <IsSPC>sample</IsSPC>
    <MeasurementType>XRF</MeasurementType>
  </TechDataParams>
</XMLSchemaPalletLoadTechData>
```` And this is my code for parsing the xml: <pre class="lang-python prettyprint-override">`for data in xml.getElementsByTagName('TechDataParams'):
    # parse xml
    runnum = data.getElementsByTagName('RunNumber')[0].firstChild.nodeValue
    hold = data.getElementsByTagName('Holder')[0].firstChild.nodeValue
    processtn = data.getElementsByTagName('ProcessToolName'[0].firstChild.nodeValue)
    recipedata = data.getElementsByTagName('RecipeName'[0].firstChild.nodeValue)
    palletna = data.getElementsByTagName('PalletName')[0].firstChild.nodeValue
    palletposi = data.getElementsByTagName('PalletPosition')[0].firstChild.nodeValue
    control = data.getElementsByTagName('IsControl')[0].firstChild.nodeValue
    loadpos = data.getElementsByTagName('LoadPosition')[0].firstChild.nodeValue
    holderjob = data.getElementsByTagName('HolderJob')[0].firstChild.nodeValue
    spc = data.getElementsByTagName('IsSPC')[0].firstChild.nodeValue
    mestype = data.getElementsByTagName('MeasurementType')[0].firstChild.nodeValue
```` but when I print each node I am only getting one set of 'TechDataParams', but I want to be able to get all 'TechDataParams' from the XML. Let me know if my question is a bit unclear. | Here is an example for you. Replace `file_path` with your own. I replaced the value of `RunNumber` with `001` and `002`: ````
# -*- coding: utf-8 -*-
#!/usr/bin/python
from xml.dom import minidom

file_path = 'C:\\temp\\test.xml'
doc = minidom.parse(file_path)
TechDataParams = doc.getElementsByTagName('TechDataParams')
for t in TechDataParams:
    num = t.getElementsByTagName('RunNumber')[0]
    print 'num is ' + num.firstChild.data
```` OUTPUT: ````
num is 001
num is 002
```` |
Python pandas check if dataframe is not empty I have an if statement where it checks if the dataframe is not empty. The way I do it is: ````
if dataframe.empty:
    pass
else:
    # do something
```` But really I need: ````
if dataframe is not empty:
    # do something
```` My question: is there a method `not empty` to achieve this? I also wanted to ask if the second version is better in terms of performance? Otherwise maybe it makes sense for me to leave it as it is, i.e. the first version? | Just do ````
if not dataframe.empty:
    # insert code here
```` The reason this works is because `dataframe.empty` returns `True` if the dataframe is empty. To invert this, we can use the negation operator `not`, which flips `True` to `False` and vice-versa. This is exactly identical to how `!` works in other languages: you can do `!false` and have that evaluate to `true` in Java and C++. Exactly the same thing happens here, but with `not` rather than `!`. |
how to call function from DLL in C#/Python I have the following C++ code to create a DLL file: ````
// MathFuncsDll.h
#ifdef MATHFUNCSDLL_EXPORTS
#define MATHFUNCSDLL_API __declspec(dllexport)
#else
#define MATHFUNCSDLL_API __declspec(dllimport)
#endif

namespace MathFuncs
{
    // This class is exported from the MathFuncsDll.dll
    class MyMathFuncs
    {
    public:
        // Returns a + b
        static MATHFUNCSDLL_API double Add(double a, double b);
        // Returns a - b
        static MATHFUNCSDLL_API double Subtract(double a, double b);
        // Returns a * b
        static MATHFUNCSDLL_API double Multiply(double a, double b);
        // Returns a / b
        // Throws const std::invalid_argument& if b is 0
        static MATHFUNCSDLL_API double Divide(double a, double b);
    };
}

// MathFuncsDll.cpp : Defines the exported functions for the DLL application
#include "stdafx.h"
#include "MathFuncsDll.h"
#include <stdexcept>

using namespace std;

namespace MathFuncs
{
    double MyMathFuncs::Add(double a, double b) { return a + b; }
    double MyMathFuncs::Subtract(double a, double b) { return a - b; }
    double MyMathFuncs::Multiply(double a, double b) { return a * b; }
    double MyMathFuncs::Divide(double a, double b) { return a / b; }
}
```` After compiling I have the DLL file, and I want to call, for example, the ADD function: ````
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;

namespace call_func
{
    class Program
    {
        [DllImport("MathFuncsDll.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern double MyMathFuncs::Add(double a, double b);

        static void Main(string[] args)
        {
            Console.Write(Add(1, 2));
        }
    }
}
```` but got this message: <a href="http://i.stack.imgur.com/9PJcM.png" rel="nofollow">error img</a>. Or, in python code: ````
Traceback (most recent call last):
  File "C:/Users/PycharmProjects/RFC/testDLL.py", line 6, in <module>
    result1 = mydll.Add(10, 1)
  File "C:\Python27\lib\ctypes\__init__.py", line 378, in __getattr__
    func = self.__getitem__(name)
  File "C:\Python27\lib\ctypes\__init__.py", line 383, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'Add' not found
```` Please help: how can I fix this code and call, for example, the ADD function? Thank you. | Since it is C++ you are compiling, the exported symbol name will be <a href="https://en.m.wikipedia.org/wiki/Name_mangling" rel="nofollow"><em>mangled</em></a>. You can confirm this by looking at your DLL's exports list using a tool like <a href="http://www.nirsoft.net/utils/dll_export_viewer.html" rel="nofollow">DLL export viewer</a>. It is best to provide a plain C export from DLLs when you intend to call them via an <a href="https://en.m.wikipedia.org/wiki/Foreign_function_interface" rel="nofollow">FFI</a>. You can do this using <a href="http://stackoverflow.com/questions/1041866/in-c-source-what-is-the-effect-of-extern-c">`extern "C"`</a> to write a wrapper around your C++ methods. See also: - <a href="http://stackoverflow.com/questions/2045774/developing-c-wrapper-api-for-object-oriented-c-code">Developing C wrapper API for Object-Oriented C++ code</a> |
Python: Function with matrix in argument So in my program I have a "main" function which changes two elements of a given matrix. The matrix is an element of a list (in the example the list is the variable `solved`), and then I want to append three new elements. ````
def main(matrix, direction):
    index16 = indexOf(16, matrix)
    matrix[index16[0]][index16[1]], matrix[index16[0]-1][index16[1]] = matrix[index16[0]-1][index16[1]], matrix[index16[0]][index16[1]]
    return matrix

solved = [[[2, 1, 3, 4],
           [5, 6, 7, 8],
           [9, 10, 11, 12],
           [13, 14, 15, 16],
           ]]
not_solved = [[0, "up"],
              [0, "left"],
              ]
while not_solved:
    solved.append(main(solved[not_solved[0][0]], not_solved[0][1]))
    break
```` When I execute the program I can see the "solved" array. However, the initial matrix stays the same as in the beginning: ````
[[[2, 1, 3, 4], [5, 6, 7, 8], [9, 10, 11, 16], [13, 14, 15, 12]],
 [[2, 1, 3, 4], [5, 6, 7, 8], [9, 10, 11, 16], [13, 14, 15, 12]]]
```` How can I repair that? Sorry for my English, I am still learning. | The problem is your main function: ````
def main(matrix, direction):
    index16 = indexOf(16, matrix)
    matrix[index16[0]][index16[1]], matrix[index16[0]-1][index16[1]] = matrix[index16[0]-1][index16[1]], matrix[index16[0]][index16[1]]
    return matrix
```` In this function you are returning `matrix`, but you are also changing `matrix`, which is your original matrix. Consider this simple example: ````
>>> a = [1, 2, 3]
>>> def test(b):
...     b[1] = 4
...     return b
>>> c = test(a)
>>> c
[1, 4, 3]
>>> a
[1, 4, 3]
```` A possible solution is to use the copy module: ````
>>> import copy
>>> a = [1, 2, 3]
>>> def test(b):
...     c = copy.deepcopy(b)
...     c[1] = 4
...     return c
>>> c = test(a)
>>> c
[1, 4, 3]
>>> a
[1, 2, 3]
```` |
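A sketch of how the `deepcopy` advice maps back onto the question's `main` function; the `index_of` helper below is a hypothetical reimplementation of the question's undefined `indexOf`, and the 2x2 matrix is just a toy stand-in:

```python
import copy

def index_of(value, matrix):
    # Hypothetical helper standing in for the question's indexOf():
    # returns (row, col) of the first occurrence of value.
    for r, row in enumerate(matrix):
        if value in row:
            return (r, row.index(value))
    raise ValueError(value)

def main(matrix, direction):
    new = copy.deepcopy(matrix)  # work on a copy; leave the caller's matrix intact
    r, c = index_of(16, new)
    new[r][c], new[r - 1][c] = new[r - 1][c], new[r][c]
    return new

m = [[1, 2], [3, 16]]
m2 = main(m, "up")
print(m)   # original unchanged: [[1, 2], [3, 16]]
print(m2)  # swapped copy: [[1, 16], [3, 2]]
```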
What heats clouds of gas and makes them readily detected? | null |
Python gspread login error 10060 I am attempting to log in to my Google account with gspread. However, it just times out with a `Socket Errno 10060`. I have already activated POP and IMAP access on my email. ````
import gspread
print 1
gc = gspread.Client(auth=('***@gmail.com', '*****'))
print 2
gc.login()
print 2
sht = gc.open_by_url('https://docs.google.com/spreadsheets/d/1XEThXRqWc_Vs4j_6oIuSPXoybj7aUp4h3g1bqPnBzRM/edit#gid=0')
print 3
val = sht.acell('B1').value
```` My error: <a href="http://tinypic.com/r/ws0ndh/8" rel="nofollow">http://tinypic.com/r/ws0ndh/8</a> Thanks guys! | `Socket Errno 10060` means the host is not responding to you at all. Are you able to access the spreadsheet with your browser? This may be a network settings issue, you may be blocked (I have seen some reports of this online; they may block you if you have been making lots of requests), etc. It is hard to tell, as it is not a fault with your code. |
swig wrapping Issues with C++ templates I have a swig file which contains the following code: ````
%module vgSofa
#define VG_SOFA_API

%import vgd/vgd.i
%import vgm/vgm.i

%template(BasicHandlerShp1) boost::shared_ptr<vgSofa::handler::BasicHandler>;

%{
#include "vgSofa/handler/BasicHandler.hpp"
%}

%template(BasicHandlerShp) vgd::Shp<vgSofa::handler::BasicHandler>;
%include "vgSofa/handler/BasicHandler.hpp"
```` Where `vgd::Shp` inherits from `boost::shared_ptr`. This code generates the wrapper and python file, but in the generated python file I do not have access to the `vgSofa::handler::BasicHandler` functions in my template `BasicHandlerShp`. What can I do to access these functions in `BasicHandlerShp`? Thank you !! | Does the `BasicHandlerShp1` class work for you? I am not sure why you would want `vgd::Shp` to inherit from `shared_ptr`; is that really what you want to do? |
What other discipline was combined with phonology by the generativists? | morphophonology |
Compute a "mirror" where the first half of the image is copied, flipped (l-r), and then copied into the second half, using tensorflow I want to solve this problem using tensorflow, but I searched the web and found GitHub issue #206, which states that indexing and slicing are still not supported on a tensor variable initialized from a numpy array. Otherwise I would have done: ````
image = mpimg.imread(filename)
height, width, depth = image.shape
x = tf.Variable(image, name='x')
model = tf.initialize_all_variables()
with tf.Session() as session:
    session.run(model)
    result = session.run(x[::1, :width*0.5, ::1])  # this step is not possible
```` What shall I use?? | You have to use `tf.slice` and `tf.reverse` and then concatenate the results: <pre class="lang-py prettyprint-override">`image = tf.placeholder(tf.float32, [height, width, depth])
half_left = tf.slice(image, [0, 0, 0], [height, width/2, depth])
half_right = tf.reverse(half_left, [False, True, False])
res = tf.concat(1, [half_left, half_right])
```` The code will also work with a Variable. |
Python: '%2s' % does not work after breaking the row I need to produce output with a specified number of spaces. It is a table with some columns. For saving output into the file I use this line: ````
save_line = ('%8s' % label[each_atom + str(int(k) + 1)] + '%10s' % str(int(i) + 1) + '\n' + '%2s' % x[i] + '%20s' % y[i] + '%20s' % z[i] + '\n')
```` but the `'%2s' % x[i]` does not produce two spaces in the output. I cannot use `+" "+` there. Any ideas what I can do? Here is the output of my code: ````
      C1         1
2.482705            1.332897           13.175184
```` And finally, here is how my output should look (it is an example from another input; my task is to produce mine based on this): ````
 C1     1
  2.42416980     4.14117720     4.71196000
```` It is no problem to change any number of spaces between any columns. The only one that does not work is the first one in every second row. It does not matter that the numbers do not match; the problem is in the spaces. | Please combine those templates: ````
save_line = "%8s%10s\n%2s%20s%20s\n" % (
    label[each_atom + str(int(k) + 1)],
    str(int(i) + 1),
    x[i], y[i], z[i])
```` The right side of the `%` operator should be a tuple. None of your values are tuples (from what I can see), and that is a great way to get output you do not expect. If you only want to format one item: ````
print "Hello %s!" % ("world",)
```` Note the trailing comma. This is because ````
("World")
```` is a string (with parentheses around it), while ````
("World",)
```` is a tuple containing one item. |
Ossetic is a version of which Scythian language? | Saka |
how to extract all quotes in a document/text using regex? I am trying to extract all the quotations that appear inside of a document using python regex. I have code as follows, but it is not working: ````
import re
hand = open('citi.txt')
for line in hand:
    line = line.rstrip()
    if re.search('(?:"(.*?)")', line):
        print line
```` | You can use `re.findall('(?:"(.*?)")', line)` to extract only the quoted text from the line, rather than printing the whole line, even if there is more than one occurrence per line. Your code can be modified as follows: ````
import re

# This will make sure citi.txt is properly closed after opening it
# infl.read() will read the whole file as a single string, so no need to loop
with open('citi.txt', 'r') as infl:
    hand = infl.read()

# And look for occurrences of your string
match = re.findall('(?:"(.*?)")', hand)
if match:
    print match
```` e.g. if `line == 'This is "a sample" line with "two quoted" substrings'`, this code will print `['a sample', 'two quoted']`. <strong>Edit: Adapted to unicode</strong> It seems that your quotes are unicode characters. Note the subtle differences between " “ ” (which I had not spotted initially either). My original answer and your code example are based on ASCII strings, but you will need a regex string like this: ````
match = re.findall(u'(?:\u201c(.*?)\u201d)', hand)
```` Explanation: `\u201c` is for <em>left double quote</em> and `\u201d` for <em>right double quote</em>; the `u` marks the string as Unicode. This now works with the excerpt you have provided. |
Python app to run JasperReport libraries - i.e. no JasperServer I am looking to try and run Jasper reports (that have been written in iReports and exported to xml) from within a python application, without having to communicate with a JasperServer instance. Is this possible? I have done some googling and only come across a 2-year-old SO question (where the suggested answer actually requires JasperServer): <a href="http://stackoverflow.com/questions/7903557/run-jasper-report-created-with-ireport-from-within-python-without-jasperserver">Run jasper report (created with iReport) from within python without jasperserver?</a> And something that looks kind of promising, except for the "It is obsolete" in the title: <a href="http://code.activestate.com/recipes/576969-python-jasperreport-integration-it-is-obsolete/" rel="nofollow">http://code.activestate.com/recipes/576969-python-jasperreport-integration-it-is-obsolete/</a> I am hoping it is obsolete because this is now an officially supported thing (dream on, Dave), but I cannot find anything about it if it is. | Actually, Jasper Reports is not implemented in Python, so the only way to have it serve your Python code is to have Jasper Server running and awaiting Python requests over REST or another remote means of communication. Simply put - no way to have Jasper without Jasper (server) in Python. |
how to mask specific array data based on a shapefile <h3>Here is my question:</h3> - the 2-d numpy array data represent some property of each grid space - the shapefile is the administrative division of the study area (like a city) <h3>For example:</h3> <img src="http://i4.tietuku.com/84ea2afa5841517a.png" alt=""> The whole area has a 40x40 grid network, and I want to extract the data inside the purple area. In other words, I want to mask the data outside the administrative boundary to np.nan. <h3>My early attempt</h3> I label the grid numbers and set the specific array data to np.nan: <img src="http://i4.tietuku.com/523df4783bea00e2.png" alt=""> ````
value[0, :] = np.nan
value[1, :] = np.nan
```` Can someone show me an easier method to achieve the target? <h3>Add</h3> Found an answer <a href="http://basemaptutorial.readthedocs.org/en/latest/clip.html" rel="nofollow">here</a> which can clip the plotted raster data to the shapefile, but the data itself does not change. <h3>Update - 2016-01-16</h3> I have already solved this problem, inspired by some answers. Anyone interested in this should check these two posts which I have asked: 1. <a href="http://stackoverflow.com/questions/34825074/testing-point-with-in-out-of-a-vector-shapefile">Testing point with in/out of a vector shapefile</a> 2. <a href="http://stackoverflow.com/questions/25701321/how-to-use-set-clipped-path-for-basemap-polygon/34224925#34224925">How to use set clipped path for Basemap polygon</a> The key step was to test whether the point is within/outside of the shapefile, which I had already transformed into a shapely polygon. | Step 1: Rasterize the shapefile. Create a function that can determine whether a point at coordinates `(x, y)` is or is not in the area. See <a href="http://geospatialpython.com/2011/02/clip-raster-using-shapefile.html" rel="nofollow">here</a> for more details on how to rasterize your shapefile into an array of the same dimensions as your target mask. ````
def point_is_in_mask(mask, point):
    # this is just pseudocode
    return mask.contains(point)
```` Step 2: Create your mask. ````
mask = np.zeros((height, width))
value = np.zeros((height, width))
for y in range(height):
    for x in range(width):
        if not point_is_in_mask(mask, (x, y)):
            value[y][x] = np.nan
```` |
Which book does Feynman detail ways to pick up girls in? | "Surely You're Joking, Mr. Feynman!" |
How to print gantt-charts generated on the web using python? I want to print or save a gantt-chart (in pdf format). These charts are generated on the web after a particular input. Our chart is a plug-in for Trac. I have used the Genshi library to generate charts. | There is an open source python library for generating PDF files by <a href="http://www.reportlab.com/software/opensource/" rel="nofollow">Report Labs</a>. I have not used it myself, but other questions & answers on SO have revolved around this library: <a href="http://www.reportlab.com/software/opensource/rl-toolkit/" rel="nofollow">Report Lab Toolkit</a> |
Datetime Module Python I am experimenting with the Datetime module in Python and decided to make a program to count days forward and backward. Relevant code: ````
if bORf == 'f':
    howfarforward = input("How far forward would you like to count?: ")
    def countforward(howfarfor):
        day = datetime.date.today()
        day -= howfarfor
        print(day)
    countback(howfarfor)
```` I am getting the error ````
Traceback (most recent call last):
  File "datecount.py", line 11, in <module>
    countback(howfarback)
  File "datecount.py", line 9, in countback
    day -= howfarback
TypeError: unsupported operand type(s) for -=: 'datetime.date' and 'str'
```` And I know why, I just do not know how to fix it. How do I do this? Rest of code: ````
import datetime

print("Today is", datetime.date.today())
bORf = input("Would you like to count backwards or forwards? (b/f)")
if bORf == 'b':
    howfarback = input("How far back would you like to count?: ")
    def countback(howfarback):
        day = datetime.date.today()
        day -= howfarback
        print(day)
    countback(howfarback)
```` | You cannot subtract a string from a date. Try converting it into a `timedelta` first. |
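A minimal sketch of the suggested fix: convert the string that `input()` returns into an `int`, then into a `datetime.timedelta`, before subtracting (the variable name mirrors the question's; the hard-coded `"7"` stands in for the user's typed answer):

```python
import datetime

howfarback = "7"  # what input() would return: a string
day = datetime.date.today()
# str -> int -> timedelta; date arithmetic only accepts timedelta operands
day -= datetime.timedelta(days=int(howfarback))
print(day)  # the date one week ago
```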
Python - using grid layout to format a window I am teaching myself Python and attempting to build a local app that uses a window GUI. I am having a lot of trouble trying to lay out the screen with grid(). I have searched and tried lots of different snippets of code, but I have the same problem: the frames and widgets do not seem to be formatting. The code below is really simplistic, but my end goal is to master how to use grid() so I can build any gui I like in the future. Also, any ideas on general improvements would be appreciated. I want to be able to do the following: ````
--Window-----------------
| Section 1 | Section 2 |
|           |           |
-------------------------
| Section 3             |
|                       |
-------------------------
```` ````
from Tkinter import Button, Frame, Entry, Tk, Label, Menubutton, Menu, IntVar

class MainScreen(Frame):
    def __init__(self, master):
        Frame.__init__(self, master)
        self.grid()
        self.searchSection()
        self.quitButton()

    def searchSection(self):
        # Create Search Section
        self.searchFrame = Frame(self.master, bg='grey', relief='sunken', width=200, height=200)
        self.searchFrame.grid(row=0, column=0, rowspan=5, columnspan=30, sticky="wens")
        Label(self.searchFrame, text="Search :", bg='grey').grid(row=1, column=1, columnspan=20, sticky='w')
        self.searchField = Entry(self.searchFrame)
        self.searchField.grid(row=2, column=1, columnspan=7, sticky='w')
        # Create Menu Options
        self.search = Menubutton(self.searchFrame, text="Search", bg='grey')
        self.search.grid(row=2, column=8, columnspan=3, sticky='w')
        self.search.menu = Menu(self.search, tearoff=0)
        self.search['menu'] = self.search.menu
        self.SearchType1Var = IntVar()
        self.search.menu.add_checkbutton(label="SearchType1", variable=self.SearchType1Var)

    def quitButton(self):
        ## Provide a quit button to exit the program
        self.quitFrame = Frame(self.master, bg='grey', width=50, height=50)
        self.quitFrame.grid(row=0, column=20, rowspan=5, columnspan=5, sticky='ewns')
        self.quitButton = Button(self.quitFrame, text="Quit", command=exit)
        self.quitButton.grid()

if __name__ == '__main__':
    root = Tk()
    root.title("Learn Grid GUI")
    root.geometry("800x600+200+150")
    main = MainScreen(root)
    root.mainloop()
```` | If I understood correctly, your problem is that you cannot order the GUI. The answer is pretty simple - if you want to order the GUI, just put the code in the same order that you want the window to show. For example: ````
--Window--
| text box |
| button   |
------------

-Code-
[text box code here]
[button code here]
root.mainloop()
```` I have been working with tkinter for a long time, and it worked for me. Good luck and happy holidays! LStyle |
need only link as an output I have multiple html tags. I want to extract only the content of the 1st `href=""`. For example, this single line of data: ````
<a class="product-link" data-styleid="1424359" href="/tops/biba/biba-beige--pink-women-floral-print-top/1424359/buy?src=search"><img _src="http://assets.myntassets.com/h_240,q_95,w_180/v1/assets/images/1424359/2016/9/28/11475053941748-BIBA-Beige--Pink-Floral-Print-Kurti-7191475053941511-1_mini.jpg" _src2="http://assets.myntassets.com/h_307,q_95,w_230/v1/assets/images/1424359/2016/9/28/11475053941748-BIBA-Beige--Pink-Floral-Print-Kurti-7191475053941511-1_mini.jpg" alt="BIBA Beige &amp; Pink Women Floral Print Top" class="lazy loading thumb" onerror="this.className='thumb error'" onload="this.className='thumb'"/><div class="brand">Biba</div><div class="product">Beige &amp; Pink Women Floral Print Top</div><div class="price">Rs. 899</div><div class="sizes">Sizes: S, L, XL, XXL</div></a>
```` I want only `/tops/biba/biba-beige--pink-women-floral-print-top/1424359/buy?src=search` as output. The code is as follows: ````
from bs4 import BeautifulSoup
import urllib

x = urllib.urlopen("http://www.myntra.com/tops-tees-menu/")
soup2 = BeautifulSoup(x, 'html.parser')
for i in soup2.find_all('a', attrs={'class': 'product-link'}):
    print i
    print i.find('a')['href']
```` | If you need a single "product link", just use `find()`: ````
soup2.find('a', attrs={'class': 'product-link'})["href"]
```` Note that you can use a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a> location technique as well: ````
soup2.select_one('a.product-link')["href"]
```` |
Is there an iterative way to create tk Frames and configure them? Right now I have: ````
# create window frames
self.f1 = tk.Frame(self.root)
self.f2 = tk.Frame(self.root)
self.f3 = tk.Frame(self.root)
self.f4 = tk.Frame(self.root)
self.f5 = tk.Frame(self.root)
self.f6 = tk.Frame(self.root)

# place frames on window
for f in (self.f1, self.f2, self.f3, self.f4, self.f5, self.f6):
    f.configure(bg="white")
    f.configure(width=self.width, height=self.height, bg="white")
    f.place(x=0, y=0)
```` I am going to be adding a lot more frames. I was wondering if there was an iterative way to create all the frames, as well as place them in the window and configure them, without having to type "self.f7, self.f8, self.f9" etc. | Add every new `Frame` to a list, then iterate over the list: ````
frames = []
self.f1 = tk.Frame(self.root)
frames.append(self.f1)
# Do that for all frames
for f in frames:
    f.configure(bg="white")
    f.configure(width=self.width, height=self.height, bg="white")
    f.place(x=0, y=0)
```` Edit, to answer the comment: Create a method for that: ````
def add_frames(self, how_many_frames):
    for i in range(how_many_frames):
        f = tk.Frame(self.root)
        self.frames[i] = f
        f.configure(bg="white")
        f.configure(width=self.width, height=self.height, bg="white")
        f.place(x=0, y=0)
```` You also need to have `self.frames = dict()` initialised in the `__init__` method. Now call `add_frames(30)` to create 30 frames, store them in the dictionary under `self.frames`, and configure them at the same time. |
Efficiency of Python sorted() built-in function vs list insert() method I am not new to it, but I do not use Python much, and my knowledge is rather broad but not very deep in the language; perhaps someone here more knowledgeable can answer my question. I find myself in the situation where I need to add items to a list and keep it sorted as items are added. A quick way of doing this would be: ````
list.append(item)  # O(1)
list.sort()        # ??
```` I would imagine if this is the only way items are added to the list, I would hope the sort would be rather efficient, because the list is sorted with each addition. However, there is also this that works: ````
inserted = False
for i in range(len(list)):      # O(N)
    if item < list[i]:
        list.insert(i, item)    # ??
        inserted = True
        break
if not inserted:
    list.append(item)
```` Can anyone tell me if one of these is obviously more efficient? I am leaning toward the second set of statements; however, I really have no idea. | What you are looking for is the bisect module, and most probably <a href="http://docs.python.org/2/library/bisect.html#bisect.insort_left">insort_left</a>. So your expression could be equivalently rewritten from ````
some_list.append(item)  # O(1)
some_list.sort()        # ??
```` to ````
bisect.insort_left(some_list, item)
```` |
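A quick demonstration of `bisect.insort_left` keeping a list sorted as items arrive; note that each insertion still costs O(n) for the element shifts, though the position search itself is O(log n):

```python
import bisect

some_list = []
for item in [5, 1, 4, 2, 3]:
    bisect.insort_left(some_list, item)  # insert while keeping sort order

print(some_list)  # [1, 2, 3, 4, 5]
```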
What was the third largest group in the House of Commons? | null |
Allophones are similar in what two languages? | null |
What is the fourth and final stomach compartment in ruminants? | The abomasum |
Cannot get Pandas to install! Help! (pip install pandas) I am trying to install Pandas but I cannot get pandas to install on my Linux CentOS 6.4. Running `pip install pandas` leads to this error: ````gcc: error trying to exec 'cc1plus': execvp: No such file or directory error: command 'gcc' failed with exit status 1 ```` What should I do to fix this? | I am pretty sure that is a compiler error, so try installing the C++ compiler on the system. On Debian/Ubuntu you would run ````sudo apt-get install g++ ```` but since you are on CentOS, use ````sudo yum install gcc-c++ ````
How do I override a template on readthedocs? I recently added <a href="http://blowdrycss.readthedocs.org/en/latest/index.html" rel="nofollow">sphinx documentation for blowdrycss</a> to readthedocs. I want to override the layout.html template on <a href="https://readthedocs.org/" rel="nofollow">readthedocs</a>. My current template override works fine on `localhost` but not on readthedocs. The project uses the Alabaster theme, which extends the basic theme. Project directory structure can be seen <a href="https://github.com/nueverest/blowdrycss/tree/master/docs" rel="nofollow">here</a>. <strong>The relevant parts are:</strong> ````blowdrycss/ docs/ _templates/ layout.html conf.py ```` <strong>Template setting in `conf.py`:</strong> ````templates_path = ['_templates'] ```` <strong>Contents of layout.html:</strong> ````{% extends '!layout.html' %} {% block sidebarsearch %} {{ super() }} <a href="https://flattr.com/submit/auto?user_id=nueverest&url=https%3A%2F%2Fgithub.com%2Fnueverest%2Fblowdrycss" target="_blank"><img src="http://button.flattr.com/flattr-badge-large.png" alt="Flattr this" title="Flattr this" border="0"></a> {% endblock %} {% block footer %} <div class="footer" style="text-align: center;"> <a href="https://flattr.com/submit/auto?user_id=nueverest&url=https%3A%2F%2Fgithub.com%2Fnueverest%2Fblowdrycss" target="_blank"><img src="http://button.flattr.com/flattr-badge-large.png" alt="Flattr this" title="Flattr this" border="0"></a> </div> {{ super() }} {% endblock %} ```` How do I override the `layout.html` template on readthedocs? <strong>Update</strong> I have also tried: <strong>The relevant parts are:</strong> ````blowdrycss/ docs/ custom_templates/ layout.html conf.py ```` <strong>Template setting in `conf.py`:</strong> ````templates_path = ['custom_templates'] ```` | While readthedocs does not support the `templates_path` straight away, you can use a custom theme with templates inside of it: <a href="http://sphinx-doc.org/theming.html" rel="nofollow">http://sphinx-doc.org/theming.html</a> Simply create a new theme directory and add this to your `conf.py`: ````html_theme = 'your_theme' html_theme_path = ['.'] ```` You can see an example in one of my projects: - <a href="http://django-utils2.readthedocs.org/en/latest/" rel="nofollow">http://django-utils2.readthedocs.org/en/latest/</a> - <a href="https://github.com/WoLpH/django-utils/blob/master/docs/conf.py#L109-L117" rel="nofollow">https://github.com/WoLpH/django-utils/blob/master/docs/conf.py#L109-L117</a>
Cannot unshorten bit.ly urls? I am using the code in <a href="http://stackoverflow.com/questions/7153096/how-can-i-un-shorten-a-url-using-python/7153185#7153185">this</a> stackoverflow post to unshorten urls: ````import httplib import urlparse def unshorten_url(url): parsed = urlparse.urlparse(url) h = httplib.HTTPConnection(parsed.netloc) resource = parsed.path if parsed.query != "": resource += "?" + parsed.query h.request('HEAD', resource) response = h.getresponse() if response.status/100 == 3 and response.getheader('Location'): return unshorten_url(response.getheader('Location')) # changed to process chains of short urls else: return url ```` All shortened links get unshortened except for newly created bit.ly urls. I get this error: ````>>> unshorten_url("bit.ly/1atTViN") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 7, in unshorten_url File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 955, in request self._send_request(method, url, body, headers) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 989, in _send_request self.endheaders(body) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 951, in endheaders self._send_output(message_body) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 811, in _send_output self.send(message) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 773, in send self.connect() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 754, in connect self.timeout, self.source_address) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection raise err socket.error: [Errno 61] Connection refused ```` What gives? | Probably bit.ly is refusing connections from tools like httplib. You can try to change the user agent like this: ````h.putheader('User-Agent', 'Mozilla/5.0 (X11; U; Linux i686; pl-PL; rv:1.7.10) Gecko/20050717 Firefox/1.0.6') ````
How many days did the President of South Vietnam visit the US for in 1957? | ten |
DjangoREST using DELETE and UPDATE with serializers So I followed the <a href="http://www.django-rest-framework.org/tutorial/quickstart/" rel="nofollow">Quickstart Guide</a> on the DjangoREST framework site and ended up with the following code: <strong>serializers.py:</strong> ````class UserSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = User fields = ('url', 'username', 'email', 'groups') class GroupSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = Group fields = ('url', 'name') ```` <strong>views.py:</strong> ````class UserViewSet(viewsets.ModelViewSet): """ API endpoint that allows users to be viewed or edited """ queryset = User.objects.all().order_by('-date_joined') serializer_class = UserSerializer class GroupViewSet(viewsets.ModelViewSet): """ API endpoint that allows groups to be viewed or edited """ queryset = Group.objects.all() serializer_class = GroupSerializer ```` <strong>urls.py:</strong> ````router = routers.DefaultRouter() router.register(r'users', views.UserViewSet) router.register(r'groups', views.GroupViewSet) router.register(r'rooms', views.RoomViewSet) router.register(r'devices', views.DeviceViewSet) router.register(r'deviceTypes', views.DeviceTypeViewSet) # Wire up our API using automatic URL routing # Additionally we include login URLs for the browsable API urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^', include(router.urls)), url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')), ] ```` Now this all works fine, but I cannot find out how to DELETE or UPDATE a user or a group; it seems I can only add users and groups and view them. So my question is: How can I modify this code to make it possible to delete/update users and groups? | The code is fine, you just need to use the `PUT` and `DELETE` HTTP methods for update and delete respectively (instead of `GET/POST`). You can see this from the code example for a `ModelViewSet` in the <a href="http://www.django-rest-framework.org/tutorial/6-viewsets-and-routers/#refactoring-to-use-viewsets" rel="nofollow">docs</a>: ````class SnippetViewSet(viewsets.ModelViewSet): """ This viewset automatically provides `list`, `create`, `retrieve`, `update` and `destroy` actions """ ```` and the docs for <a href="http://www.django-rest-framework.org/api-guide/viewsets/#modelviewset" rel="nofollow">`ModelViewSet`</a>: <blockquote> The actions provided by the ModelViewSet class are `.list()`, `.retrieve()`, `.create()`, `.update()`, `.partial_update()`, and `.destroy()` </blockquote>
How to ignore empty lines while using next_sibling in BeautifulSoup4 in python As I want to remove duplicated placeholders in a html website, I use the next_sibling operator of BeautifulSoup. As long as the duplicates are in the same line this works fine (see data). But sometimes there is an empty line between them - so I want next_sibling to ignore them (have a look at data2). That is the code: ````from bs4 import BeautifulSoup, Tag data = "<p>method-removed-here</p><p>method-removed-here</p><p>method-removed-here</p>" data2 = """<p>method-removed-here</p> <p>method-removed-here</p> <p>method-removed-here</p> <p>method-removed-here</p> <p>method-removed-here</p> """ soup = BeautifulSoup(data) string = 'method-removed-here' for p in soup.find_all("p"): while isinstance(p.next_sibling, Tag) and p.next_sibling.name == 'p' and p.text == string: p.next_sibling.decompose() print(soup) ```` Output for data is as expected: ````<html><head></head><body><p>method-removed-here</p></body></html> ```` Output for data2 (this needs to be fixed): ````<html><head></head><body><p>method-removed-here</p> <p>method-removed-here</p> <p>method-removed-here</p> <p>method-removed-here</p> <p>method-removed-here</p> </body></html> ```` I could not find useful information for that in the BeautifulSoup4 documentation, and next_element is also not what I am looking for | I could solve this issue with a workaround. The problem is described in the <a href="https://groups.google.com/forum/#!topic/beautifulsoup/F3sdgObXbO4" rel="nofollow">google-group for BeautifulSoup</a> and they suggest to use a preprocessor for html-files: ````import re def bs_preprocess(html): """remove distracting whitespaces and newline characters""" pat = re.compile('(^[\s]+)|([\s]+$)', re.MULTILINE) html = re.sub(pat, '', html) # remove leading and trailing whitespaces html = re.sub('\n', ' ', html) # convert newlines to spaces # this preserves newline delimiters html = re.sub('[\s]+<', '<', html) # remove whitespaces before opening tags html = re.sub('>[\s]+', '>', html) # remove whitespaces after closing tags return html ```` That is not the very best solution but one
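A small self-contained check of that preprocessor (the sample HTML strings here are my own, shortened from `data2` above):

```python
import re

def bs_preprocess(html):
    """Collapse whitespace and newlines so sibling tags become adjacent."""
    pat = re.compile(r'(^[\s]+)|([\s]+$)', re.MULTILINE)
    html = re.sub(pat, '', html)          # strip leading/trailing whitespace per line
    html = re.sub(r'\n', ' ', html)       # convert remaining newlines to spaces
    html = re.sub(r'[\s]+<', '<', html)   # remove whitespace before opening tags
    html = re.sub(r'>[\s]+', '>', html)   # remove whitespace after closing tags
    return html

# A blank line between the two <p> tags disappears, so next_sibling
# then steps directly from one tag to the next.
print(bs_preprocess("<p>a</p>\n\n<p>b</p>\n"))  # <p>a</p><p>b</p>
```

After this preprocessing, the whitespace-only text nodes that broke the `next_sibling` walk are gone.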
How to split/crop a pdf along the middle using pyPdf I have a pdf that looks like <a href="http://i.stack.imgur.com/jrfI2.jpg" rel="nofollow">this</a> and I would like to crop all the text out, almost right down the middle of the page. I found this script that does something similar: ````def splitHorizontal(): from pyPdf import PdfFileWriter, PdfFileReader input1 = PdfFileReader(file("in.pdf", "rb")) output = PdfFileWriter() numPages = input1.getNumPages() print "document has %s pages." % numPages for i in range(numPages): page = input1.getPage(i) print page.mediaBox.getUpperRight_x(), page.mediaBox.getUpperRight_y() page.trimBox.lowerLeft = (25, 25) page.trimBox.upperRight = (225, 225) page.cropBox.lowerLeft = (50, 50) page.cropBox.upperRight = (200, 200) output.addPage(page) outputStream = file("out.pdf", "wb") output.write(outputStream) outputStream.close() ```` However these crop dimensions are tuned to that specific example. Can anyone show me how to find the correct crop dimensions? | I originally got the script from here --> <a href="http://stackoverflow.com/questions/457207/cropping-pages-of-a-pdf-file">Cropping pages of a pdf file</a>. I read more into what the author had said, finally realizing that he had said: <blockquote> The resulting document has a trim box that is 200x200 points and starts at 25,25 points inside the media box. The crop box is 25 points inside the trim box. </blockquote> meaning ````page.cropBox.upperRight = (200, 200) ```` must control the ultimate margins. I therefore adjusted the statement to ````page.cropBox.upperLeft = (290, 792) ```` to mirror the cropping onto the other side and make sure the cropping holds the full vertical value
Storing a groupby group in a Series in Pandas I am trying to join two datasets in Pandas. What I want to do is put the results of `df2.groupby('BuildingID')` into a new Series in `df1`. The reason being that the building ID is the level I will be working with, while the ItemID is a collection of items within the building. Example: ````df1 BuildingID Blah 3 'a' 4 'b' 5 'c' 7 'd' df2 ItemID BuildingID EnergyID 7 3 2 11 3 11 12 3 12 13 4 2 14 5 12 15 4 10 16 7 2 17 7 3 ```` So that I end up with the following: ````df1 DataID Blah Grouped 3 'a' <groupby object> 4 'b' <groupby object> 5 'c' <groupby object> 7 'd' <groupby object> ```` So my questions are 1. how do I achieve this and 2. is it a good idea, or is there a better way of representing this data - perhaps with suffixed headings for all the headings in each group? | It might depend a bit on what you want to do next, but I would go for something like: ````from StringIO import StringIO import pandas as pd indf1 = StringIO("""BuildingID Blah 3 'a' 4 'b' 7 'c' 7 'd' 7 'x'""") indf2 = StringIO("""ItemID BuildingID EnergyID 7 3 2 11 3 11 12 3 12 13 4 2 14 5 12 17 4 10 17 7 2 17 7 3 17 7 4""") df1 = pd.read_csv(indf1, delim_whitespace=True, index_col='BuildingID') df2 = pd.read_csv(indf2, delim_whitespace=True, index_col='ItemID') dfboth = df1.merge(df2, right_on='BuildingID', left_index=True, how='left') dfboth.set_index('BuildingID', append=True, inplace=True) dfboth.reorder_levels(['BuildingID', 'ItemID']) Blah EnergyID BuildingID ItemID 3 7 'a' 2 11 'a' 11 12 'a' 12 4 13 'b' 2 17 'b' 10 7 17 'c' 2 17 'c' 3 17 'c' 4 17 'd' 2 17 'd' 3 17 'd' 4 17 'x' 2 17 'x' 3 17 'x' 4 ````
Writing a set plus a formatted string to a CSV? I am trying to write the contents of a set and a formatted string to a CSV file using Python 2. I have a set full of domain names and I need to write the domain name to the first column, skip one column, then write the word 'Related', and repeat on the next row until the set is empty. I have tried a few different things but this is where I am at now (broken code): ````#!/usr/bin/python #CSV WRITE TESTING import csv testSet = set(['thissucks.in', 'whateverlife.in', 'crapface.in', 'lasttry.in']) with open('mag.csv', 'ab+') as f: writer = csv.writer(f, ' ', 'Related', dialect='excel', delimiter=' ') for each in testSet: writer.writerow(list(testSet)) ```` | ````import csv testSet = set(['thissucks.in', 'whateverlife.in', 'crapface.in', 'lasttry.in']) with open('mag.csv', 'ab+') as f: writer = csv.writer(f, dialect='excel', delimiter=' ') for domain in testSet: writer.writerow([domain, '', 'Related']) ````
Error during removing an element from a 2D array In a 285 x 507 array I am trying to find the lowest seam and remove it using pop. I have the below problem while using pop. Code: ````for i in range(0, len(img)): for j in range(0, len(img[0])): a[i].pop(j) ```` Can anyone please help me with why this error comes: "'numpy.ndarray' object has no attribute 'pop'" and how to rectify it? Note: This works perfectly fine in a 5x5 array; when it comes to huge dimensions I seem to be facing issues | From the looks of things numpy does not use `pop()`, but instead you can use `delete()`: <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html</a>
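A hedged sketch of what `numpy.delete` does in place of `pop` — the small array below is a made-up stand-in for the 285 x 507 image, and note that removing a row returns a new array rather than mutating the original:

```python
import numpy as np

# A small 2-D array standing in for the image; row 1 plays the "seam" to drop.
img = np.arange(12).reshape(3, 4)

# np.delete returns a copy with the given row removed (axis=0);
# unlike list.pop it does not modify img in place.
without_row = np.delete(img, 1, axis=0)

print(without_row.shape)  # (2, 4)
```

To drop a column instead, pass `axis=1`; to remove a per-row seam you would delete one index per row and reassemble.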
gae bigquery works from dev_appserver but credential errors during tests I have a gae app and it has a view that does this: ````from apiclient.discovery import build from oauth2client.client import GoogleCredentials # Grab the application's default credentials from the environment oCredentials = GoogleCredentials.get_application_default() # Construct the service object for interacting with the BigQuery API oBigQuerySerivice = build('bigquery', 'v2', credentials=oCredentials) oTables = oBigQuerySerivice.tables() d = oTables.list(projectId=PROJECT, datasetId=DATASET).execute() ```` And it works just great. Now I want to run the same code in my unit tests; the same virtualenv is activated. It refers to the same dataset, the same project and the same json service key file. I would expect the result to be the same. But alas: ````HttpError: <HttpError 401 when requesting https://www.googleapis.com/bigquery/v2/projects/projectwaxed/datasets/bi_audit_dev/tables?alt=json returned "Invalid Credentials"> ```` My question is: How am I supposed to test that my code is doing valid big query stuff? The docs are all over the place and I am just not finding answers | I am going to poke at this, not sure how helpful it will be. Apologies. Is your app not authenticating with GAE in this instance, or just big query? I struggled against a lot of the auth issues dealing with BT and App Engine. I am trying to dig up my notes from when I dealt with it. The Google BigQuery analytics book is pretty solid if you are able to purchase it, but some of the auth stuff is already out of date. If you type printenv (unix) do you see the GOOGLE_APPLICATION_CREDENTIALS path declared? I had issues where my env vars were not surviving new vm instances. BY THE WAY I used this youtube video for the oauth2 walkthrough - <a href="https://www.youtube.com/watch?v=HoUdWBzUZ-M" rel="nofollow">https://www.youtube.com/watch?v=HoUdWBzUZ-M</a>
Where is Trasianka used? | Belarus |
On what plain is New Delhi located? | the Indo-Gangetic Plain |
When was Toynbee's report put together? | 24th May 1916 |
What book by Adam Smith was published in 1776? | Wealth of Nations |
What hemisphere of stars did Edmond Halley want to study with the telescope? | Southern |
How to handle ajax response behind changing state of an element? I am crawling an asp.net page with one form on it which contains multiple select tags with different options. Each select tag has a JavaScript function attached which is triggered every time a different value is selected. That JS function performs an AJAX call which returns a text response similar to JSON, but it is text. Here it is: <blockquote> 51.772425|0.00|21.33|0.00|5000|51.772425|0 </blockquote> I want to intercept it with Scrapy, but instead of getting just this little piece of string I got the whole page. 'NJGroup123390' is the ID of the select tag. Here is my code: ````def after_login(self, response): return Request(url='https://****.com/NexJobPage.asp?Id=445', callback=self.parse_form) def parse_form(self, response): return [FormRequest.from_response(response, formdata={'NJGroup123390': '5000'}, dont_click=True, callback=self.parse_form2)] # here I should have the response returned by AJAX: 51.772425|0.00|21.33|0.00|5000|51.772425|0 def parse_form2(self, response): f = open('logo2', 'wb') f.write(response.body) f.close() ```` Thanks | You might be missing an additional argument or header added through javascript. Inspect the request sent in your browser, check for missing parameters, headers or cookies, and add them to your request object. You can use the shell to see what data is filled in by `FormRequest`: ````$ scrapy shell https://stackoverflow.com/users/signup 2014-02-12 19:38:12-0400 [scrapy] INFO: Scrapy 0.22.1 started (bot: scrapybot) In [1]: from scrapy.http import FormRequest In [2]: req = FormRequest.from_response(response, formnumber=1) In [3]: import urlparse In [4]: urlparse.parse_qs(req.body, True) Out[4]: {'display-name': [''], 'email': [''], 'fkey': ['324799e03d5f73e1af72134e6d943f58'], 'password': [''], 'password2': [''], 'submit-button': ['Sign Up']} ````
What is Nigeria's branded electronics manufacturer? | Zinox |
Count rows that match string and numeric with pandas I have the numbers 1-12 in the `SAMPLE` column, and for each number I try to count mutation numbers (A:T, C:G, etc.). This code works, but how can I modify it so that it gives me all 12 conditions for each mutation, instead of writing the same code 12 times, and also for each mutation? In this example AT gives me the count while `SAMPLE=1`. I am trying to get the number of AT for each sample number (1, 2, ..., 12). So how can I modify this code for that? I will appreciate any help. Thank you ````SAMPLE MUT 0 11 chr1:100154376:G:A 1 2 chr1:100177723:C:T 2 9 chr1:100177723:C:T 3 1 chr1:100194200:-:AA 4 8 chr1:10032249:A:G 5 2 chr1:100340787:G:A 6 1 chr1:100349757:A:G 7 3 chr1:10041186:C:A 8 10 chr1:100476986:G:C 9 4 chr1:100572459:C:T 10 5 chr1:100572459:C:T d = df[["SAMPLE", "MUT"]] chars1 = "TGC-" number = {} for item in chars1: dm = d[(d["MUT"].str.contains("A:" + item)) & (d["SAMPLE"].isin([1]))] num1 = dm.count() number[item] = num1 AT = number["T"] AG = number["G"] AC = number["C"] A_ = number["-"] ```` | You can create a column with the mutation type (A->T, G->C) with a regular expression substitution, then apply pandas groupby to count: ````import pandas as pd import re df = pd.read_table('df.tsv') df['mutation_type'] = df['MUT'].apply(lambda x: re.sub(r'^.*?:([^:]+:[^:]+)$', r'\1', x)) df.groupby(['SAMPLE', 'mutation_type']).agg('count')['MUT'] ```` The output is like this for your data: ````SAMPLE mutation_type 1 -:AA 1 A:G 1 2 C:T 1 G:A 1 3 C:A 1 4 C:T 1 5 C:T 1 8 A:G 1 9 C:T 1 10 G:C 1 11 G:A 1 Name: MUT, dtype: int64 ````
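If pandas is not to hand, the same tally can be sketched with a plain `Counter` over `(SAMPLE, mutation_type)` pairs — the rows below are a hand-picked subset of the example data above:

```python
import re
from collections import Counter

rows = [
    (1, "chr1:100194200:-:AA"),
    (1, "chr1:100349757:A:G"),
    (2, "chr1:100177723:C:T"),
    (2, "chr1:100340787:G:A"),
]

# Keep only the last two colon-separated fields (ref:alt) of each MUT string,
# then count every (sample, mutation_type) pair in one pass.
counts = Counter(
    (sample, re.sub(r'^.*?:([^:]+:[^:]+)$', r'\1', mut))
    for sample, mut in rows
)

print(counts[(1, 'A:G')])  # 1
```

This gives one count per sample per mutation type, which is the full 12-sample table the question asks for.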
Embed a Python interpreter in a (Windows) C++ application I am building a Windows application written in C++. I would like to utilize several Python libraries. I do not need any fancy Python interop here. My method is like this: - Open a thread to run the Python interpreter - Send commands from C++ to the Python interpreter The C++ may need to write some intermediate files for the interop. This method is dirty but it will work for a lot of interpreter-like environments, e.g. gnuplot, Lua. My question is: what kind of API is there for me to use for this task? Maybe I need some Win32 API? <strong>EDIT:</strong> I do not need anything Python specific. I really want the general method, so that my application could also work with gnuplot etc. | If you have the source distribution of Python you can look in the `Demo/embed` directory for samples. The pertinent documentation is <a href="http://docs.python.org/extending/embedding.html" rel="nofollow">here</a>
How to subscribe to real-time XMPP RSS feeds with Superfeedr I am trying to subscribe to feeds with Superfeedr, and I have got a python wrapper for XMPP up and running and I am receiving the `dummy.xml` successfully. I do not quite understand how to add more sources, however. I have tried adding a few `superfeedr.com/track/` urls but I get no new feeds from it (though I do seem to get a confirmation of subscription). I would like to add as many real-time (non-POLL) feeds as possible, perhaps by using PubSubHub servers. I would really appreciate some help towards this - where do I find such feeds? Can I subscribe to the whole superfeedr.com real-time feed just by adding `/track/`? Or will that only filter the feeds I am subscribing to? Also, as I am subscribing from my `XMPP.py` client on my Amazon server, what exactly is my Subscriber URL (callback)? Where do I go from here? I will add more info if needed, just let me know | Superfeedr is an API which will help you gather data from feeds that you are supposed to curate yourself. So the whole process starts with you collecting a list of feeds to which you want to subscribe. The <a href="http://documentation.superfeedr.com/misc.html#track" rel="nofollow">Track API</a> does not help you find feeds, but rather helps you build <em>virtual feeds</em> that match a given criteria. For example, if you want any mention of 'stackoverflow' in any feed, you could use track for that. Think of it as RSS feeds for search results, but in realtime (forward looking). Finally, if you use <a href="http://documentation.superfeedr.com/subscribers.html#xmpppubsub" rel="nofollow">XMPP</a> you do not need a callback url, as these are part of the <a href="http://documentation.superfeedr.com/subscribers.html#webhooks" rel="nofollow">PubSubHubbub API</a>
What era did Ireland enter when the Roman Empire ended? | golden age |
How do I read the last few lines within a file using Python? I am reading a folder with a specific file name. I am reading the content within a file, but how do I read specific lines or the last 6 lines within a file? ````************************************ Test Scenario No. 1 TestcaseID = FB_71125_1 dpSettingScript = FB_71125_1_DP.txt ************************************ Setting Pre-Conditions (DP values, Sqlite DB): cp /fs/images/nfs/FileRecogTest/MNT/test/Databases/FB_71125_1_device.sqlite $NUANCE_DB_DIR/device.sqlite "sync" twice Starting the test: 0#00041511#0000000000# FILERECOGNITIONTEST: = testScenarioNo (int)1 = 0#00041514#0000000000# FILERECOGNITIONTEST: = TestcaseID (char*)FB_71125_1 = 0#00041518#0000000000# FILERECOGNITIONTEST: = dpSettingScript (char*)FB_71125_1_DP.txt = 0#00041520#0000000000# FILERECOGNITIONTEST: = UtteranceNo (char*)1 = 0#00041524#0000000000# FILERECOGNITIONTEST: = expectedEventData (char*)0||none|0||none = 0#00041528#0000000000# FILERECOGNITIONTEST: = expectedFollowUpDialog (char*) = 0#00041536#0000000000# FILERECOGNITIONTEST: /fs/images/nfs/FileRecogTest/MNT/test/main_menu.wav#MEDIA_COND:PAS_MEDIA&MEDIA_NOT_BT#>main_menu global<#<FS0000_Pos_Rec_Tone><FS1000_MainMenu_ini1> 0#00041789#0000000000# FILERECOGNITIONTEST: Preparing test data done 0#00043768#0000000000# FILERECOGNITIONTEST: /fs/images/nfs/FileRecogTest/MNT/test/Framework.wav##>{any_device_name}<#<FS0000_Pos_Rec_Tone><FS1400_DeviceDisambig_<slot>_ini1> 0#00044008#0000000000# FILERECOGNITIONTEST: Preparing test data done 0#00045426#0000000000# FILERECOGNITIONTESTWARNING: expected >{any_device_name}< got >lowconfidence1@FS1000_MainMenu< 1900#00046452#0000000000# FILERECOGNITIONTESTERROR: expected <FS0000_Pos_Rec_Tone><FS1400_DeviceDisambig_<slot>_ini1> got <FS0000_Misrec_Tone><FS1000_MainMenu_nm1_004><pause300><FS1000_MainMenu_nm_001> 0#00046480#0000000000# FILERECOGNITIONTEST: Preparing test data done 0#00047026#0000000000# FILERECOGNITIONTEST: Stopping dialog immediately [VCALogParser] Scenario 1 FAILED ```` Can someone suggest how to read specific lines, or the last 6 lines, within a file? | I can think of two methods. If your files are not too big, you can just read all lines and keep only the last six: ````f = open(some_path) last_lines = f.readlines()[-6:] ```` But that is really brute-force. Something cleverer is to make a guess, using the `seek()` method of your file object: ````file_size = os.stat(some_path).st_size # in _bytes_, so take care depending on encoding f = open(some_path) f.seek(file_size - 1000) # here is the guess. Adjust with expected line length last_lines = f.readlines()[-6:] ````
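A third option worth sketching: `collections.deque` with `maxlen` keeps exactly the last six lines without guessing a byte offset (an in-memory file stands in for the real log here):

```python
from collections import deque
import io

# Streaming the file through a bounded deque discards older lines as it goes,
# so only the most recent six remain when the file is exhausted.
fake_file = io.StringIO("".join("line %d\n" % i for i in range(20)))
last_six = list(deque(fake_file, maxlen=6))

print(last_six[0])  # "line 14\n"
```

This still reads the whole file once, but it never holds more than six lines in memory, which sits between the two approaches above.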
Display field for User ForeignKey in Django admin ````class Lab(Model): responsible = ForeignKey(User) ```` This is a very simplified version of my Django model. Basically the problem is in the Django admin: when I want to edit or add a new `Lab` object, the drop-down list containing the `User` objects only displays the `User.username` value, which are only numbers in my case. I want the drop-down list to display the `User.last_name` value. How can I do this? | You have to define your own user choice field for the lab admin: ````from django import forms from django.contrib import admin class UserChoiceField(forms.ModelChoiceField): def label_from_instance(self, obj): return obj.last_name # Now you have to hook this field up to lab admin class LabAdmin(admin.ModelAdmin): def formfield_for_foreignkey(self, db_field, request=None, **kwargs): if db_field.name == 'responsible': kwargs['form_class'] = UserChoiceField return super(LabAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs) ```` Something like that. I have not tested it, so there may be some typos. Hope this helps!
Python and Javascript I have a problem. Have this code: ````response = bytearray(unichr(int(code)), 'UTF-8') ```` It returns a byte array from code, where code = int. Need similar code for javascript. I have tried: ````var arr = new Uint8Array(1); arr[0] = (+code).toString(16); ```` Tried ````var arr = new Uint8Array(1); arr[0] = String.fromCharCode(code); ```` Could you help me? Thanks a lot | Encoding and decoding strings is much easier if you are willing and able to use the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Encoding_API" rel="nofollow">Encoding API</a>, as shown in this code example <div class="snippet" data-lang="js" data-hide="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override">`//Encoding var encoder = new TextEncoder(); var byteArray = new Uint8Array(encoder.encode("ABCダチヂ")); //Output: [65, 66, 67, 227, 131, 128, 227, 131, 129, 227, 131, 130] // ^ ^ ^ ^ ^ ^ // | | | | | | // // A B C ダ チ ヂ console.log(byteArray); //Decoding var decoder = new TextDecoder(); var decodedString = decoder.decode(byteArray); //Output: "ABCダチヂ" console.log(decodedString);```` </div> </div>
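On the Python side, a Python 3 sketch of the same conversion (the code point below is an arbitrary example, not from the question):

```python
# In Python 3 there is no unichr: chr() yields the character for a code
# point, and encoding it as UTF-8 produces the same bytes that the
# JavaScript TextEncoder would emit for that character.
code = 0x30C0  # an assumed example code point (KATAKANA LETTER DA)
utf8_bytes = bytearray(chr(code), 'UTF-8')

print(list(utf8_bytes))  # [227, 131, 128]
```

Those three bytes match the first non-ASCII character in the JavaScript output above, showing both sides agree on the UTF-8 encoding.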
unknown RT error message I am trying to debug a script that is trying to talk to RT (Request Tracker) and I am getting the following output: ````RT/3.6.6 409 Syntax Error # Syntax Error >>ARRAY(0x2b3495f37750) ```` I have no idea what this error means in the context of RT, given the astounding lack of detail, making it difficult to debug. Here is the associated code for a little context; it is a script trying to create a ticket: ````import requests def combDicts(dicts): out = {} for d in dicts: out.update(d) return out operPath = 'ticket/new' credentials = {'user': 'myuser', 'pass': 'mypassword'} content = { 'content': { 'id': 'ticket/new', 'Subject': 'Python Script Test', 'Queue': 'General - unassigned' } } r = requests.post('https://rt.hdms.com/REST/1.0/' + operPath, params=combDicts((credentials, content)), verify=False) print r.text ```` If I comment out all but the Queue line of the content dict, the error changes to: ````RT/3.6.6 409 Syntax Error # Syntax Error >> Queue ```` The crux of my question is this: Does anyone know what this error means, or know where I can find documentation on what all the RT errors are and what could cause them? | You will find much more information in the logs on the RT server itself, especially if you up the log level to debug. You might have better luck using one of the <a href="http://requesttracker.wikia.com/wiki/REST#Convenience_libraries" rel="nofollow">python libraries</a> available for calling RT. However, the version of RT you are running is fairly old, released in January 2008. You may have trouble using current libraries with an old version of RT
Who's temples are influenced by Khmer and Mon traditions? | null |
os system does not work in Python I am working on Windows Vista and I am running Python from a DOS command prompt. I have this simple python program (it is actually one .py file named test.py): ````import os os.system('cd ..') ```` When I execute "python test.py" from the DOS command prompt, it does not work. For example, if the DOS prompt before execution was this: ````C:\Directory> ```` After execution it must be this: ````C:\> ```` Help plz | First, you generally do not want to use `os.system` - take a look at the <a href="http://docs.python.org/library/subprocess.html">subprocess module</a> instead. But that will not solve your immediate problem (just some you might have down the track) - the actual reason `cd` will not work is because it changes the working directory of the <em>subprocess</em> and does not affect the process Python is running in - to do that, use <a href="http://docs.python.org/library/os.html">`os.chdir`</a>
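A minimal sketch of that difference (the target directory here is just the system temp dir, chosen only for illustration):

```python
import os
import tempfile

# os.system('cd ...') would change directory only in a child shell that
# exits immediately; os.chdir changes the directory of *this* process.
start = os.getcwd()
target = tempfile.gettempdir()

os.chdir(target)
changed = os.getcwd()  # now inside the temp directory

os.chdir(start)  # restore so the rest of the program is unaffected

print(changed)
```

Note that the change made by `os.chdir` still only lasts for the Python process itself: the DOS prompt that launched the script keeps its own working directory either way.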
os.pipe() function in google app engine I am currently trying to zip a large file (> 1GB) using python on google app engine, and I have used the following solution due to the limitations google app engine places on the memory cache for a process: <a href="http://stackoverflow.com/q/297345/2561122">Create a zip file from a generator in Python?</a> When I run the code on the app engine I get the following error: ````Traceback (most recent call last): File "/base/data/home/apps/s~whohasfiles/frontend.379535120592235032/gluon/restricted.py", line 212, in restricted exec ccode in environment File "/base/data/home/apps/s~whohasfiles/frontend.379535120592235032/applications/onefile/controllers/page.py", line 742, in <module> File "/base/data/home/apps/s~whohasfiles/frontend.379535120592235032/gluon/globals.py", line 194, in <lambda> self._caller = lambda f: f() File "/base/data/home/apps/s~whohasfiles/frontend.379535120592235032/applications/onefile/controllers/page.py", line 673, in download zip_response = page_store.gcs_zip_page(page, visitor) File "applications/onefile/modules/page_store.py", line 339, in gcs_zip_page w = z.start_entry(ZipInfo('%s-%s' % (file.created_on, file.name))) File "applications/onefile/modules/page_store.py", line 481, in start_entry r, w = os.pipe() OSError: [Errno 38] Function not implemented ```` Does the google app engine not support the os.pipe() function? How can I get a workaround, please? | The 'os' module is available, but with unsupported features disabled, such as pipe(), as it operates on file objects [1]. You would need to use a Google Cloud Storage bucket as a temporary object, as there is no concept of file objects you can use for storage local to the App Engine runtime. The GCS Client Library will give you file-like access to a bucket which you can use for this purpose [2]. Every app has access to a default storage bucket, which you may need to first activate [3]. [1] <a href="https://cloud.google.com/appengine/docs/python/#Python_Pure_Python" rel="nofollow">https://cloud.google.com/appengine/docs/python/#Python_Pure_Python</a> [2] <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/" rel="nofollow">https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/</a> [3] <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/activate" rel="nofollow">https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/activate</a>
Forecasting using Pandas OLS I have been using the <a href="http://statsmodels sourceforge net/generated/scikits statsmodels regression linear_model OLS predict html#scikits statsmodels regression linear_model OLS predict" rel="nofollow">scikits statsmodels OLS predict</a> function to forecast fitted data but would now like to shift to using Pandas The documentation <a href="http://pandas pydata org/pandas-docs/stable/computation html#standard-ols-regression" rel="nofollow">refers to OLS</a> as well as to a function called <a href="http://pandas sourceforge net/stats plm html#pandas stats plm MovingPanelOLS y_predict" rel="nofollow">y_predict</a> but I cannot find any documentation on how to use it correctly By way of example: ````exogenous = {"1998": "4760" "1999": "5904" "2000": "4504" "2001": "9808" "2002": "4241" "2003": "4086" "2004": "4687" "2005": "7686" "2006": "3740" "2007": "3075" "2008": "3753" "2009": "4679" "2010": "5468" "2011": "7154" "2012": "4292" "2013": "4283" "2014": "4595" "2015": "9194" "2016": "4221" "2017": "4520"} endogenous = {"1998": "691" "1999": "1580" "2000": "80" "2001": "1450" "2002": "555" "2003": "956" "2004": "877" "2005": "614" "2006": "468" "2007": "191"} import numpy as np from pandas import * ols_test = ols(y=Series(endogenous) x=Series(exogenous)) ```` However while I can produce a fit: ````>>> ols_test y_fitted 1998 675 268299 1999 841 176837 2000 638 141913 2001 1407 354228 2002 600 000352 2003 577 521485 2004 664 681478 2005 1099 611292 2006 527 342854 2007 430 901264 ```` Prediction produces nothing different: ````>>> ols_test y_predict 1998 675 268299 1999 841 176837 2000 638 141913 2001 1407 354228 2002 600 000352 2003 577 521485 2004 664 681478 2005 1099 611292 2006 527 342854 2007 430 901264 ```` In scikits statsmodels one would do the following: ````import scikits statsmodels api as sm ols_model = sm OLS(endogenous np column_stack(exogenous)) ols_results = ols_mod fit() ols_pred = ols_mod predict(np 
column_stack(exog_prediction_values)) ```` How do I do this in pandas to forecast the endogenous data out to the limits of the exogenous? UPDATE: Thanks to Chang, the new version of pandas (0.7.3) now has this functionality as standard | Is your issue how to get the predicted y values of your regression? Or is it how to use the regression coefficients to get predicted y values for a different set of samples of the exogenous variables? pandas' y_predict and y_fitted should give you the same values, and both should give you the same values as the predict method in scikits.statsmodels. If you are looking for the regression coefficients, do `ols_test.beta`
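For the second case, applying fitted coefficients to new exogenous samples, the mechanics are just `y_hat = a + b * x_new`. A dependency-free sketch with a single regressor (all names here are made up for illustration):

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x with one regressor."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b

def ols_predict(a, b, x_new):
    """Apply the fitted coefficients to out-of-sample exogenous values."""
    return [a + b * xi for xi in x_new]

a, b = ols_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # perfect fit: a=0, b=2
forecast = ols_predict(a, b, [4.0, 5.0])
print(forecast)  # -> [8.0, 10.0]
```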
Can Python's Bunch be used recursively? Using <a href="https://pypi.python.org/pypi/bunch/1.0.1" rel="nofollow"><strong>`bunch`</strong></a>, can `Bunch` be used recursively? For example:

````
from bunch import Bunch
b = Bunch({'hello': {'world': 'foo'}})
b.hello
>>> {'world': 'foo'}
````

So obviously:

````
b.hello.world
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-5-effaad77643b> in <module>()
---> 1 b.hello.world
AttributeError: 'dict' object has no attribute 'world'
````

I know I could do

````
b = Bunch({'hello': Bunch({'world': 'foo'})})
````

but that is awful | Dug into the source code; this can be done with the `fromDict` method:

````
b = Bunch.fromDict({'hello': {'world': 'foo'}})
b.hello.world
>>> 'foo'
````
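For reference, a minimal sketch of what `fromDict` does under the hood: walk the dict and wrap every nested mapping. This stand-in class and its `from_dict` name are illustrative, not the library's actual code:

```python
class Bunch(dict):
    """Dict with attribute access, converted recursively."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    @classmethod
    def from_dict(cls, d):
        # Recurse into nested dicts so b.hello.world works at any depth.
        return cls((k, cls.from_dict(v) if isinstance(v, dict) else v)
                   for k, v in d.items())

b = Bunch.from_dict({'hello': {'world': 'foo'}})
print(b.hello.world)  # -> foo
```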
Error when deploying scrapy project on the scrapy cloud I am using scrapy 0 20 on Python 2 7 I want to deploy my scrapy project on <a href="http://scrapinghub com/scrapy-cloud" rel="nofollow">scrapy cloud</a> - I developed my scrapy project with simple spider - navigate to my scrapy project folder - typed `scrapy deploy scrapyd -d koooraspider` on cmd Where `koooraspider` is my project's name and `scrapyd` is my target I got the following error: ````D:\Walid-Project\Tasks\koooraspider>scrapy deploy scrapyd -p koooraspider Packing version 1395847344 Deploying to project "koooraspider" in http://dash scrapinghub com/api/scrapyd/a ddversion json Traceback (most recent call last): File "C:\Python27\lib\runpy py" line 162 in _run_module_as_main "__main__" fname loader pkg_name) File "C:\Python27\lib\runpy py" line 72 in _run_code exec code in run_globals File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\cmdline py" line 168 in <module> execute() File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\cmdline py" line 143 in execute _run_print_help(parser _run_command cmd args opts) File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\cmdline py" line 89 in _run_print_help func(*a **kw) File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\cmdline py" line 150 in _run_command cmd run(args opts) File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\commands\de ploy py" line 105 in run if not _upload_egg(target egg project version): File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\commands\de ploy py" line 194 in _upload_egg return _http_post(req) File "C:\Python27\lib\site-packages\scrapy-0 20 2-py2 7 egg\scrapy\commands\de ploy py" line 210 in _http_post f = urllib2 urlopen(request) File "C:\Python27\lib\urllib2 py" line 126 in urlopen return _opener open(url data timeout) File "C:\Python27\lib\urllib2 py" line 391 in open response = self _open(req data) File "C:\Python27\lib\urllib2 
py" line 409 in _open '_open' req) File "C:\Python27\lib\urllib2 py" line 369 in _call_chain result = func(*args) File "C:\Python27\lib\urllib2 py" line 1173 in http_open return self do_open(httplib HTTPConnection req) File "C:\Python27\lib\urllib2 py" line 1142 in do_open h request(req get_method() req get_selector() req data headers) File "C:\Python27\lib\httplib py" line 946 in request self _send_request(method url body headers) File "C:\Python27\lib\httplib py" line 987 in _send_request self endheaders(body) File "C:\Python27\lib\httplib py" line 940 in endheaders self _send_output(message_body) File "C:\Python27\lib\httplib py" line 801 in _send_output message = message_body UnicodeDecodeError: 'ascii' codec cannot decode byte 0xcc in position 373: ordina l not in range(128) ```` <h3>scrapy cfg</h3> ````# Project: KoooraSpider [deploy:scrapyd] url = http://dash scrapinghub com/api/scrapyd/ username = MyUserName password = project = 2431 ```` <h3>Note</h3> When I did the previous steps `project egg-info` folder `build` folder and `setup py` script have been generated Any help will be appreciated | remove `project egg-info` & `build` folders and then try command `scrapy deploy scrapyd -p 2431` |
Google Drive OAuth2 I am trying to sync between Python and Google Drive with the following details: Authorized JavaScript origins: <a href="http://localhost:8080" rel="nofollow">http://localhost:8080</a> Authorized redirect URIs: <a href="http://localhost:8080/" rel="nofollow">http://localhost:8080/</a> I copied the JSON file to the directory and ran this code:

````
from pydrive.auth import GoogleAuth
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
````

and I got this error:

````
from oauth2client.locked_file import LockedFile
ImportError: No module named locked_file
````

Can you please help me? | Had the same issue. It looks like there was a change in the newest version of `oauth2client` (v2.0.0) which broke compatibility with the `google-api-python-client` module, which now got fixed: <a href="https://github.com/adrian-the-git/google-api-python-client/commit/2122d3c9b1aece94b64f6b85c6707a42cca8b093" rel="nofollow">https://github.com/adrian-the-git/google-api-python-client/commit/2122d3c9b1aece94b64f6b85c6707a42cca8b093</a> An upgrade of the `google-api-python-client` restores compatibility and makes everything work again:

````
$ pip install --upgrade git+https://github.com/google/google-api-python-client
````
python memory exception when not at full memory usage I am using Ubuntu 64bit 12.04. My machine has 64 gigs of RAM. I am running a script where I have to store ~9 gigs of data into a dictionary. It is a simple dictionary where keys are 30 characters and the value is just an integer. However the script is throwing a memory exception at around 58% memory usage. What is going on here? Is there a max limit to dictionary size? | I do not think there is a max value that limits the size of a dictionary in Python. Assuming your script is running under Unix, you can increase the memory limit your process can consume via the standard library module <a href="https://docs.python.org/2/library/resource.html" rel="nofollow">resource</a>:

````
>>> import resource
>>> resource.setrlimit(resource.RLIMIT_AS, (10**9, 10**9))
````

You may also want to periodically check the memory usage with the `resource.getrusage()` function. The resulting object has the attribute `ru_maxrss`, which gives total memory usage for the calling process:

````
>>> import resource
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
20631552
````

By this way at least you can make sure that it is your script which eats the memory
Segmentation fault using python layer in caffe I am trying to use python layer as described <a href="http://chrischoy github io/research/caffe-python-layer/" rel="nofollow">here</a> But I am getting this exception: ````I1007 17:48:31 366592 30357 layer_factory hpp:77] Creating layer loss *** Aborted at 1475851711 (unix time) try "date -d @1475851711" if you are using GNU date *** PC: @ 0x7f32895f1156 (unknown) *** SIGSEGV (@0x0) received by PID 30357 (TID 0x7f328b07fa40) from PID 0; stack trace: *** @ 0x7f328883ecb0 (unknown) @ 0x7f32895f1156 (unknown) @ 0x7f3289b43dfe (unknown) @ 0x7f32429d0d9c google::protobuf::MessageLite::ParseFromArray() @ 0x7f3242a1f652 google::protobuf::EncodedDescriptorDatabase::Add() @ 0x7f32429da012 google::protobuf::DescriptorPool::InternalAddGeneratedFile() @ 0x7f3242a2b33e google::protobuf::protobuf_AddDesc_google_2fprotobuf_2fdescriptor_2eproto() @ 0x7f3242a5aa75 google::protobuf::StaticDescriptorInitializer_google_2fprotobuf_2fdescriptor_2eproto::StaticDescriptorInitializer_google_2fprotobuf_2fdescriptor_2eproto() @ 0x7f3242a56beb __static_initialization_and_destruction_0() @ 0x7f3242a56c00 _GLOBAL__sub_I_descriptor pb cc @ 0x7f328aeca10a (unknown) @ 0x7f328aeca1f3 (unknown) @ 0x7f328aecec30 (unknown) @ 0x7f328aec9fc4 (unknown) @ 0x7f328aece37b (unknown) @ 0x7f327d91b02b (unknown) @ 0x7f328aec9fc4 (unknown) @ 0x7f327d91b62d (unknown) @ 0x7f327d91b0c1 (unknown) @ 0x7f3288f412ae (unknown) @ 0x7f3288f09dae (unknown) @ 0x7f3288f88729 (unknown) @ 0x7f3288ebccbf (unknown) @ 0x7f3288f81d66 (unknown) @ 0x7f3288e47a3f (unknown) @ 0x7f3288f12d43 (unknown) @ 0x7f3288f8b577 (unknown) @ 0x7f3288f6dc13 (unknown) @ 0x7f3288f7154d (unknown) @ 0x7f3288f71682 (unknown) @ 0x7f3288f71a2c (unknown) @ 0x7f3288f88016 (unknown) Segmentation fault (core dumped) ```` I am using Ubuntu 14 04 GPU caffe installation Python layer and prototxt are <a href="https://gist github com//shelhamer/8d9a94cf75e6fb2df221" rel="nofollow">here</a> Could anybody suggest 
anything? I do not know what I am doing wrong | From my experience this is one of the most prevalent weaknesses in Caffe: this SegFault without an error message I generally get this when I have not connected the data layer properly For instance I have not started the data server or there is a severe mismatch in format |
Who wrote 'The Grand Concourse'? | Jacob M. Appel |
Sorted/unique list of object instances from a larger list? I have a list of object instances that I want to sort/uniqueify into a new list Each object implements a variety of properties but the three properties of importance are `a` `b` and `c` All three properties return an integer value with `a` and `b` sorted low-to-high and `c` sorted high-to-low Example list: ````>>> x >>> [<Foo object at 0x2b371b90> <Foo object at 0x2b371f38> <Foo object at 0x2b3719e0> <Foo object at 0x2b376320> <Foo object at 0x2b3765f0>] ```` <br/>If I loop and printed the value of `a` `b` and `c` in a tuple for each object they would look like this: ````>>> for o in x: print (o a o b o c) (2 78342112 9) (2 78342117 3) (2 78342112 10) (2 78342112 8) (2 78342117 4) ```` <br/>I have figured out how to sort the list by `a`/`b` being low-to-high and `c` being high-to-low by defining a `key()` function in the object's class: ````def key(self): return (self a self b -self c) ```` <br/>And passing that to `sorted()`: ````x2 = sorted(x key=lambda x:x key()) >>> for o in x2: print (o a o b o c) (2 78342112 10) (2 78342112 9) (2 78342112 8) (2 78342117 4) (2 78342117 3) ```` <br/>For these specific objects uniqueness between instances depends on if `a` and `b` are the same values between two instances If they are different then `c` is never considered else we favor the largest value of `c` What I would like to do is generate a new list from either `x` or `x2` in my examples above that only contains one instance for each case when `a` and `b` are the same and retain the one whose `c` value is largest The new list `x3` would look like this: ````>>> x3 = <magic sorting/unique function called here> >>> for o in x3: print (o a o b o c) (2 78342112 10) (2 78342117 4) ```` <br/>I think I can do this using `reduce()` and a custom function but the algorithm/logic to do so simply stumps me at the moment Ideas? 
| One way to do this is using <a href="http://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow">`itertools.groupby()`</a>:

````
import itertools
x3 = [next(g) for k, g in itertools.groupby(x2, lambda x: (x.a, x.b))]
````

This picks the first element of each group with the same key
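To see why this works, here is the same idea on plain `(a, b, c)` tuples standing in for the `Foo` instances, already sorted with `c` descending as in the question:

```python
from itertools import groupby

rows = [(2, 78342112, 10), (2, 78342112, 9), (2, 78342112, 8),
        (2, 78342117, 4), (2, 78342117, 3)]

# groupby batches consecutive rows sharing (a, b); next(g) keeps the
# first row of each batch, which has the largest c thanks to the sort.
unique = [next(g) for _, g in groupby(rows, key=lambda r: (r[0], r[1]))]
print(unique)  # -> [(2, 78342112, 10), (2, 78342117, 4)]
```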
Magic-line nav causing some troubles #works only on hrefs included in menu I Am using multiple (2) jquery versions on a site i thought that it was a problem but it does not seem so I found out that i actually got a problem with magic-line navigation <a href="http://css-tricks com/jquery-magicline-navigation/" rel="nofollow">http://css-tricks com/jquery-magicline-navigation/</a> What happens is that it works only on sites that are included in the menu itself Let Us say i am on projects site(that is included in the menu itself) everything works ok but when i open a view of a certain project the "underline" effect stops working this is how it looks: <img src="http://dl dropbox com/you/26827941/ScreenShot141 png" alt="a not workin magic line"> i did not sleep for 4 days i am late for deadline my brain hurts plx help <strong>EDIT:</strong> i do not think its my code but if it was here it is some additional info about the code: base html ```` <ul class="group" id="example-one"> {% for i in mains %} <li class="{% block activetab %}{% endblock %}"><a href="{{ i menulink }}">{{ i name }}</a></li> {% endfor %} </ul> ```` what is included in certain views: ```` {% block activetab %} {% ifequal request get_full_path|cut:"/" i menulink|cut:"/" %}current_page_item{% endifequal %} {% endblock %} ```` | It seems that menu buggs if there is no single li with current_page_item class i hacked it by forcing: ````{% ifequal ourprojects i menulink|cut:"/" %}current_page_item{% endifequal %} ```` on single project view ALthough it works now it will not if someone changes the path to ourprojects in the admin panel I would still like to find a proper solution that would work even if someone changes the path #or i could just disallow changing the path but that is another sucky solution and so it stayed |
Editing property of foreignkey object I am trying to access a ForeignKey object in Django but only manage to show its values, not to edit them:

````
class ShippingAddress(models.Model):
    name = models.CharField(max_length=63, primary_key=True)
    street = models.CharField(max_length=63)
    houseNumber = models.CharField(max_length=10)
    zipCode = models.CharField(max_length=10)
    city = models.CharField(max_length=63)
    country = models.CharField(max_length=63)

class MainClass(models.Model):
    name = models.CharField(max_length=63)
    creationDate = models.DateTimeField(blank=True)
    aShippingAddress = models.ForeignKey(ShippingAddress)
````

this would be an example. I would now like to make the values of the ShippingAddress model directly accessible and editable within the main class. Right now though I manage to only access the object itself, not every value of it directly (admin.py):

````
def editZipCode(self, obj):
    return obj.aShippingAddress.zipCode
````

this way I manage to show each value at least but that is it. Any ideas are welcome | You need to use an <a href="https://docs.djangoproject.com/en/1.8/ref/contrib/admin/#django.contrib.admin.InlineModelAdmin" rel="nofollow">inline</a>:

````
class MainClassInline(admin.TabularInline):
    model = MainClass

class ShippingAddressAdmin(admin.ModelAdmin):
    inlines = (MainClassInline,)
````
How many inches of precipitation does NYC get in a year? | 49.9 |
How do I make new columns in dataframe from a row of a different column? Here is my current dataframe: ````>>>df = {'most_exhibitions' : pd Series(['USA (1) Netherlands (5)' 'United Kingdom (2)' 'China (3) India (5) Pakistan (8)' 'USA (11) India (4)'] index=['a' 'b' 'c' would']) 'name' : pd Series(['Bob' 'Joe' 'Alex' 'Bill'] index=['a' 'b' 'c' would'])} >>> df name most_exhibitions a Bob USA (1) India (5) b Joe United Kingdom (2) c Alex China (3) India (5) USA (8) d Bill USA (11) India (4) ```` I am trying to figure out how to split each cell and then potentially create a new column from the country and place the respective count in the right row If the country is already an existing column I want to just put the count in the right row So the final dataframe would look like this: ````# name most_exhibitions USA United Kingdom China India #a Bob USA (1) India (5) 1 5 #b Joe United Kingdom (2) 2 #c Alex China (3) India (5) USA (8) 8 3 5 #d Bill USA (11) India (4) 11 4 ```` I wanted to write a loop or a function that would split the data and then add the new column but I could not figure out how to do it I ended up splitting and cleaning the data through a series of dictionaries and now am stuck with how to make the final dictionary into its own dataframe I think if I can make this new dataframe I will be able to append it to the old one I also think I am making this harder than it should be and am interested in any solutions that are more elegant Here is what I have done so far: ````>>>country_rank_df['country_split'] = indexed_rankdata['most_exhibitions'] str split(" ") astype(str) from collections import defaultdict total_dict = defaultdict(list) dict2 = defaultdict(list) dict3 = defaultdict(list) dict4 = defaultdict(list) dict5 = defaultdict(list) dict6 = defaultdict(list) for name country_count in zip(head_df['name'] head_df['most_exhibitions']): total_dict[name] append(country_count) for key value in total_dict iteritems(): for line in value: new_line = line 
split('(') dict2[key] append(new_line) for key list_outside in dict2 iteritems(): for list_inside in list_outside: for value in list_inside: new_line = value split(' ') dict3[key] append(new_line) for key list_outside in dict3 iteritems(): for list_inside in list_outside: for value in list_inside: new_line = value split(')') dict4[key] append(new_line) for key list_outside in dict4 iteritems(): for list_inside in list_outside: for value in list_inside: new_line = value strip() new_line = value lstrip() dict5[key] append(new_line) for key list_outside in dict5 iteritems(): new_line = filter(None list_outside) dict6[key] append(new_line) >>>dict6['Bob'] [['USA' '1' 'India' '5']] ```` | You can try this approach which use mainly <a href="http://pandas pydata org/pandas-docs/stable/text html" rel="nofollow">string methods</a> Then I <a href="http://pandas pydata org/pandas-docs/stable/generated/pandas DataFrame pivot html" rel="nofollow">`pivot`</a> and <a href="http://pandas pydata org/pandas-docs/stable/generated/pandas DataFrame fillna html" rel="nofollow">`fillna`</a> dataframe I lost original column `most_exhibitions` but I hope it is unnecessary ````import pandas as pd df = {'most_exhibitions' : pd Series(['USA (1) Netherlands (5)' 'United Kingdom (2)' 'China (3) India (5) Pakistan (8)' 'USA (11) India (4)'] index=['a' 'b' 'c' would']) 'name' : pd Series(['Bob' 'Joe' 'Alex' 'Bill'] index=['a' 'b' 'c' would'])} df = pd DataFrame(df) #cange ordering of columns df = df[['name' 'most_exhibitions']] print df # name most_exhibitions #a Bob USA (1) Netherlands (5) #b Joe United Kingdom (2) #c Alex China (3) India (5) Pakistan (8) #d Bill USA (11) India (4) #remove '(' and last ')' df['most_exhibitions'] = df['most_exhibitions'] str replace('(' '') df['most_exhibitions'] = df['most_exhibitions'] str strip(')') #http://stackoverflow com/a/34065937/2901002 s = df['most_exhibitions'] str split(')') apply(pd Series 1) stack() s index = s index droplevel(-1) s name = 
'most_exhibitions' print s #a USA 1 #a Netherlands 5 #b United Kingdom 2 #c China 3 #c India 5 #c Pakistan 8 #d USA 11 #d India 4 #Name: most_exhibitions dtype: object df = df drop( ['most_exhibitions'] axis=1) df = df join(s) print df # name most_exhibitions #a Bob USA 1 #a Bob Netherlands 5 #b Joe United Kingdom 2 #c Alex China 3 #c Alex India 5 #c Alex Pakistan 8 #d Bill USA 11 #d Bill India 4 #exctract numbers and convert them to integer df['numbers'] = df['most_exhibitions'] str extract("(\d+)") astype('int') #exctract text of most_exhibitions df['most_exhibitions'] = df['most_exhibitions'] str rsplit(' ' n=1) str[0] print df # name most_exhibitions numbers #a Bob USA 1 #a Bob Netherlands 5 #b Joe United Kingdom 2 #c Alex China 3 #c Alex India 5 #c Alex Pakistan 8 #d Bill USA 11 #d Bill India 4 #pivot dataframe df = df pivot(index='name' columns='most_exhibitions' values='numbers') #NaN to empty string df = df fillna('') ```` ````print df #most_exhibitions India Netherlands Pakistan China USA United Kingdom #name #Alex 5 8 3 #Bill 4 11 #Bob 5 1 #Joe 2 ```` EDIT: I try add all columns as recommended output by function <a href="http://pandas pydata org/pandas-docs/stable/generated/pandas DataFrame merge html" rel="nofollow">`merge`</a>: ````import pandas as pd df = {'most_exhibitions' : pd Series(['USA (1) Netherlands (5)' 'United Kingdom (2)' 'China (3) India (5) Pakistan (8)' 'USA (11) India (4)'] index=['a' 'b' 'c' would']) 'name' : pd Series(['Bob' 'Joe' 'Alex' 'Bill'] index=['a' 'b' 'c' would'])} df = pd DataFrame(df) #cange ordering of columns df = df[['name' 'most_exhibitions']] print df # name most_exhibitions #a Bob USA (1) Netherlands (5) #b Joe United Kingdom (2) #c Alex China (3) India (5) Pakistan (8) #d Bill USA (11) India (4) #copy original to new dataframe for joining original df df1 = df reset_index() copy() #remove '(' and last ')' df['most_exhibitions'] = df['most_exhibitions'] str replace('(' '') df['most_exhibitions'] = 
df['most_exhibitions'] str strip(')') #http://stackoverflow com/a/34065937/2901002 s = df['most_exhibitions'] str split(')') apply(pd Series 1) stack() s index = s index droplevel(-1) s name = 'most_exhibitions' print s #a USA 1 #a Netherlands 5 #b United Kingdom 2 #c China 3 #c India 5 #c Pakistan 8 #d USA 11 #d India 4 #Name: most_exhibitions dtype: object df = df drop( ['most_exhibitions'] axis=1) df = df join(s) print df # name most_exhibitions #a Bob USA 1 #a Bob Netherlands 5 #b Joe United Kingdom 2 #c Alex China 3 #c Alex India 5 #c Alex Pakistan 8 #d Bill USA 11 #d Bill India 4 #exctract numbers and convert them to integer df['numbers'] = df['most_exhibitions'] str extract("(\d+)") astype('int') #exctract text of most_exhibitions df['most_exhibitions'] = df['most_exhibitions'] str rsplit(' ' n=1) str[0] print df # name most_exhibitions numbers #a Bob USA 1 #a Bob Netherlands 5 #b Joe United Kingdom 2 #c Alex China 3 #c Alex India 5 #c Alex Pakistan 8 #d Bill USA 11 #d Bill India 4 #pivot dataframe df = df pivot(index='name' columns='most_exhibitions' values='numbers') #NaN to empty string df = df fillna('') df = df reset_index() ```` ````print df #most_exhibitions name India Netherlands Pakistan China USA United Kingdom #0 Alex 5 8 3 #1 Bill 4 11 #2 Bob 5 1 #3 Joe 2 print df1 # index name most_exhibitions #0 a Bob USA (1) Netherlands (5) #1 b Joe United Kingdom (2) #2 c Alex China (3) India (5) Pakistan (8) #3 d Bill USA (11) India (4) df = pd merge(df1 df on=['name']) df = df set_index('index') ```` ````print df # name most_exhibitions India Netherlands Pakistan \ #index #a Bob USA (1) Netherlands (5) 5 #b Joe United Kingdom (2) #c Alex China (3) India (5) Pakistan (8) 5 8 #d Bill USA (11) India (4) 4 # # China USA United Kingdom #index #a 1 #b 2 #c 3 #d 11 ```` |
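If pandas is unavailable, or as a sanity check on the split logic, the `"Country (n)"` strings can also be parsed directly with a regular expression into a dict of dicts, which `pd.DataFrame.from_dict(wide, orient='index')` would then turn into the wide table. A sketch, with illustrative names:

```python
import re

rows = {
    'Bob':  'USA (1) Netherlands (5)',
    'Joe':  'United Kingdom (2)',
    'Alex': 'China (3) India (5) Pakistan (8)',
    'Bill': 'USA (11) India (4)',
}

def parse_exhibitions(text):
    # Each entry looks like "Country Name (count)"; country names may
    # contain spaces, so match lazily up to the parenthesised count.
    return {country.strip(): int(count)
            for country, count in re.findall(r'([^()]+?)\s*\((\d+)\)', text)}

wide = {name: parse_exhibitions(s) for name, s in rows.items()}
print(wide['Alex'])  # -> {'China': 3, 'India': 5, 'Pakistan': 8}
```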
django-admin.py startproject mysite not working python 2.7 I have read a ton of articles about adding things to the Windows PATH variable but none of them have worked so far. I also read about editing regedit and finding some things, however I was unable to find what people were referring to. So I am trying to run:

````
django-admin.py startproject mysite
````

Previously it would open a window asking me which program to open the file with. I stupidly set it to open with python.exe; however I think this is now causing more problems, as when I run the above command it opens a little black window that then vanishes as python.exe now tries to open the file. But I think this could be preventing the command working? It certainly is not creating anything. Would this indeed cause problems, or are things just as broken as previously? Really need some help here; it is very frustrating that nothing seems to get it working. If there is additional information you need please let me know | I think your system is not finding the file django-admin.py. Use cmd and navigate to the directory that contains this file and try to execute the command from there:

````
python django-admin.py startproject mysite
````

If it works I recommend you to create an environment variable (e.g. DJANGO_ADMIN) to point to django-admin.py so you can execute the command below from anywhere:

````
python %DJANGO_ADMIN% startproject mysite
````

<strong>EDIT:</strong> Creating an environment variable on <strong>Windows XP</strong>: - Open System in Control Panel - On the Advanced tab, click Environment Variables, then click the name of the user variable or system variable you want to change, as follows. Click 'New' to add a new variable name and value. Creating an environment variable on <strong>Windows 7</strong>: - Open the Start Menu and right click on Computer. Select Properties - Select Advanced system settings - In the Advanced tab, select Environment Variables - Select 'New' Creating an environment variable on <strong>Windows 8</strong>: Start -> All Apps -> Control panel -> System -> Advanced System Settings -> Advanced -> Environment variables |
Python -Uploading multiple files on GAE using Blobstore I am using the following html form to upload two files on gae ````<form id="insert-budget-form" method="POST" action="" enctype="multipart/form-data" onsubmit="return validate()"> Budget Book Name:<br> <input type = "text" id = "bookName" name = "bookName" placeholder = "E g Budget Book 2016"/> <br><br> File:<br> <input type = "file" id = "bookFile" name = "bookFile"/> <br><br> Highlight:<br> <input type="file" id = "highlightFile" name = "highlightFile"/> <br><br> <input type="date" id="bookDate" name="bookDate"/> <input type="submit" id="insert-budget-sub" value="Insert"/> </form> ```` I am generating upload url using ajax when the user selects the first file and using jQuery I am assigning the url as an action to form Now at my server side I am getting the first file how to obtain the second file Here is the server side code: ````class BudgetBookUploadHandler(blobstore_handlers BlobstoreUploadHandler): def post(self): bookName = self request POST get('bookName') bookDatetime = self request POST get('bookDate') dateParts = bookDatetime split("-") date = datetime date(int(dateParts[0]) int(dateParts[1]) int(dateParts[2])) if bookName != "" and date: q = BudgetBook query(BudgetBook bookName == bookName) if q get(): self redirect("/manage_budgetbook?success=dup") else: bookUpload = self get_uploads()[0] highlightUpload = self get_uploads()[1] budgetBook = BudgetBook( bookBlobKey = bookUpload key() highlightBlobKey = None bookName = bookName bookDate = date) budgetBook put() self redirect("/manage_budgetbook?success=true") ```` Is syntactically getuploads()[1] correct to use for getting the second file and storing it? | Instead of `getuploads()[0]` and `getuploads()[1]` use `getuploads('name of the form field')[0]` |
Python-twitter - unexpected keyword argument 'access_token_key'

````
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
import twitter

class twitt():
    def __init__(self):
        consumer_key = ' '
        consumer_secret = ' '
        access_key = ' '
        access_secret = ' '
        encoding = 'iso-8859-15'
        self.api = twitter.Api(consumer_key=consumer_key, consumer_secret=consumer_secret,
                               access_token_key=access_key, access_token_secret=access_secret,
                               input_encoding=encoding)

    def run(self):
        statuses = self.api.GetPublicTimeline()
        print statuses

h = twitt()
h.run()
````

<hr> This code does not work; it is taken directly from the readme and every example I find. I found something about "get_access_token.py" and there is no reference to it!?

````
user@host:~# updatedb
user@host:~# locate get_access_token
user@host:~#
````

<strong>Error:</strong> `TypeError: __init__() got an unexpected keyword argument 'access_token_key'` References: - <a href="http://code.google.com/p/python-twitter/issues/detail?id=215" rel="nofollow">http://code.google.com/p/python-twitter/issues/detail?id=215</a> - <a href="https://github.com/bear/python-twitter/tree/master/examples" rel="nofollow">https://github.com/bear/python-twitter/tree/master/examples</a> | I can only think that maybe `twitter` is not the Twitter library you think it is. Try to see if you get something similar to this:

````
>>> import twitter
>>> twitter.__file__
'/usr/local/lib/python2.7/dist-packages/python_twitter-0.8.2-py2.7.egg/twitter.pyc'
>>> import inspect
>>> inspect.getargspec(twitter.Api.__init__)
ArgSpec(args=['self', 'consumer_key', 'consumer_secret', 'access_token_key', 'access_token_secret', 'input_encoding', 'request_headers', 'cache', 'shortner', 'base_url', 'use_gzip_compression', 'debugHTTP'], varargs=None, keywords=None, defaults=(None, None, None, None, None, None, <object object at 0x7f023505a220>, None, None, False, False))
````
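The same diagnostic works on modern Pythons with `inspect.signature` (`getargspec` was removed in 3.11). The `Api` class below is a local stand-in mimicking the constructor, not the real library:

```python
import inspect

class Api:
    # Stand-in for python-twitter's twitter.Api, for illustration only.
    def __init__(self, consumer_key=None, consumer_secret=None,
                 access_token_key=None, access_token_secret=None):
        pass

# List the keyword arguments the constructor actually accepts.
params = inspect.signature(Api.__init__).parameters
print('access_token_key' in params)  # -> True
```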
What was built to help Bern grow bigger than the peninula? | bridges |
Encode unicode special character (symbol) in Python? I am using Python (the Tornado framework specifically) to develop a corp website. When I output the data to HTML there were some special characters such as (r) or (tm) in unicode format, for example: <strong>Google%u2122</strong>. How can I encode (or convert) it to the correct one, such as Google (TM)? I tried using the encode() method but it did not work. Thank you very much | Thanks Ikke for the link; as I am working in a web environment (using Tornado as I mentioned) the primary answer there did not meet my requirement, but this one did: <a href="http://stackoverflow.com/a/300556/183846">http://stackoverflow.com/a/300556/183846</a>:

````
from urllib import unquote

def unquote_u(source):
    result = unquote(source)
    if '%u' in result:
        result = result.replace('%u', '\\u').decode('unicode_escape')
    return result

print unquote_u('Tan%u0131m')
````

To apply to Tornado template I create the f
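The snippet above is Python 2 (`str.decode` is gone in Python 3); a sketch of the same `%uXXXX` handling on Python 3, assuming the input uses that non-standard escape form:

```python
from urllib.parse import unquote

def unquote_u(source):
    # unquote handles ordinary %XX escapes; the non-standard %uXXXX
    # form is rewritten to a \uXXXX escape and decoded separately.
    result = unquote(source)
    if '%u' in result:
        result = (result.replace('%u', '\\u')
                        .encode('ascii', 'backslashreplace')
                        .decode('unicode_escape'))
    return result

print(unquote_u('Google%u2122'))  # -> Google™
```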
I cannot locate the source of my 'NoneType' error in my version of Pong I posted a version of Pong to another popular site yesterday and got a lot of advice on how I can clean it up and got a lot of tips in general One is that all coordinates should be tuples I have been rewriting it all morning to try to improve it (specifically with the tuple thing) but I am stuck with a NoneType somewhere which I cannot find for the life of me <blockquote> Traceback (most recent call last): File "C:\Users\Jassie\Desktop\Pygame_Pong_v2.py", line 172, in <module> my_ball.update() File "C:\Users\Jassie\Desktop\Pygame_Pong_v2.py", line 64, in update self.pos = (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1]) TypeError: 'NoneType' object has no attribute '__getitem__' </blockquote> ````def update(self): #Update ball's position self.pos = (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1]) self.pos_x = int(self.pos[0]) self.pos_y = int(self.pos[1]) ```` There is the error and the bit of code referenced in the error <a href="https://github.com/Jassie411/pong" rel="nofollow">https://github.com/Jassie411/pong</a> There is a link to the full code if anyone could take a look and help me out I have looked at the code all morning (since about 4AM actually) looking at where the initial velocity tuple comes from, where the initial position tuple comes from and where they are going I just cannot see where the NoneType is coming from I have also tried completely rewriting that part of it but I cannot solve this myself If someone could please help me with this I would be so grateful I know after I get past this I can finish the rest of it simply But this error is killing me Edit: Sorry, I should probably describe the behavior and such I know it is an error with either the position or velocity attribute becoming None but I do not know which it is and I do not know where What I do know is that it only throws the error when the ball hits the gutter or paddle It does not matter which it hits, it throws the
same error either way Naturally I thought it would be in the code that checks that collision but I cannot find it there | The problem is the `spawn_ball` function: ````def spawn_ball(direction = random.choice('01')): #direction should always be left if AI is enabled for paddle2 #0 is left 1 is right if direction == 0: return (random.uniform(1.0, 3.0) * -1, random.uniform(1.0, 3.0) * -1) elif direction == 1: return (random.uniform(1.0, 3.0), random.uniform(1.0, 3.0) * -1) ```` First of all, you should not initialize default arguments like this: `random.choice('01')` is evaluated once, when the function object is created So the default direction will always be the same But what is causing your error is that `0 != '0'` So when you call `spawn_ball` without arguments, no `if` branch is matched and the function returns `None` Here is the correct variant: ````def spawn_ball(direction=None): if direction is None: direction = random.choice((0, 1)) #direction should always be left if AI is enabled for paddle2 #0 is left 1 is right if direction == 0: return (random.uniform(1.0, 3.0) * -1, random.uniform(1.0, 3.0) * -1) elif direction == 1: return (random.uniform(1.0, 3.0), random.uniform(1.0, 3.0) * -1) ````
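The "evaluated once at definition time" pitfall called out in the answer above can be shown in isolation; the counter here is a deterministic stand-in for `random.choice`:

```python
import itertools

counter = itertools.count()

def bad(direction=next(counter)):
    # the default was computed once, when `def` ran, and is frozen at 0
    return direction

def good(direction=None):
    # compute a fresh value on every call instead
    if direction is None:
        direction = next(counter)
    return direction

print(bad(), bad())    # 0 0
print(good(), good())  # 1 2
```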
With the Python httplib "HTTPConnection" object how do you send a header without a value? I have the following code but httplib still treats the header 'emptyheader' as a key:value pair ````h = httplib.HTTPConnection("somewhere:5000") headers = {} headers['emptyheader'] = None h.request('POST', '/somewhere', '', headers) ```` How do I send the same request but with a valueless header 'emptyheader'? | Set the header value to `''`: ````headers['emptyheader'] = '' ```` would output: ````emptyheader: \r\n ```` which conforms to the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2" rel="nofollow">HTTP specification for message headers</a>
GObject add_emission_hook usage I was kindly directed to use <a href="http://www.pygtk.org/docs/pygobject/gobject-functions.html#function-gobject--add-emission-hook" rel="nofollow">GObject's "add_emission_hook"</a> following a <a href="http://stackoverflow.com/questions/2074372/connecting-signals-in-gtk-anonymous-way">recent question on SO</a> but I cannot seem to find a usage example Does anyone have one to share please? | After a discussion with helpful folks on IRC #pygtk here is what I came up with: ````import gobject class Signals(gobject.GObject): __gsignals__ = { "lastfm_username_changed": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (gobject.TYPE_STRING, )) #@UndefinedVariable } def __init__(self): gobject.GObject.__init__(self) class Bus(object): """ Message Bus - Borg pattern """ _signals = Signals() @classmethod def emit(cls, name, *pa, **kwa): cls._signals.emit(name, *pa, **kwa) @classmethod def add_emission_hook(cls, name, callback): gobject.add_emission_hook(cls._signals, name, callback) if __name__=="__main__": def callback(self, data): print "callback: data=%s" % data Bus.add_emission_hook("lastfm_username_changed", callback) Bus.emit("lastfm_username_changed", "jldupont") ````
Python - Raising an exception or not? Here is a description of my problem: I have the task to take a bunch of tablenames and put a prefix in front of them like so: ````PREFIX = 'foo_'; prefixed_tablename = "".join([PREFIX, tablename[:27]]) ```` (The tablename must never exceed 30 characters (some Oracle DB restriction), this is why I only take the first 27 chars) Now sometimes this can lead to duplicate tablenames (if only the last 3 chars of a tablename differ) I could implement some fancy algorithm for creating unique tablenames but at the moment detecting duplicate names would be sufficient So I thought about storing them in a set and, if creating a prefixed tablename, check the set to see if such a tablename already exists Now for the <em><strong>real</strong></em> problem: If a duplicate gets detected I need to stop my script executing, showing some kind of error In Java I would just raise an exception but I do not know if this would be the preferred way in Python Should I raise an exception or just print out a message and exit? | Exceptions in Python are actually used somewhat more liberally than in Java (for example, iterators use them internally to signal when iteration is to stop) If this is really an "Exceptional" situation -- that is, if this will not be happening often as a matter of course -- then this is a perfect situation for which to raise an exception
How do I close an ssh Tunnel opened through Popen in python I need to create an ssh tunnel, then do something, then tear the tunnel down I have been trying to do it like this: ````def runCmd(self, cmd): args = shlex.split(cmd) return subprocess.Popen(args) def openTunnel(self): cmd = 'ssh -f -N -L 1313:localhost:1313 userid@server.com' self.TunnelObj = self.runCmd(cmd) ```` That creates my Tunnel I can then do the stuff I need to do Now I want to tear the tunnel down ````def closeSocket(self): print '\nClosing Tunnel\n' if self.TunnelObj.returncode == None: print '\nabout to kill\n' self.TunnelObj.kill() ```` But the tunnel is still open An ssh session still exists and the port is still assigned How can I shut this tunnel down? | Part of the problem is that the tunnel process is a subprocess of self.TunnelObj You can try to omit the `-f` flag so you hold the tunnel process directly Another option would be to look at the <a href="http://www.lag.net/paramiko/" rel="nofollow">paramiko</a> library and <a href="http://stackoverflow.com/questions/2777884/shutting-down-ssh-tunnel-in-paramiko-programatically">this</a> question
Golang - Packing and hashing binary data I am attempting to learn Golang and have a background in Python I am currently trying to get my head around how to pack variables into a binary format (with a checksum) In Python I would use something like: ````import struct import hashlib a = 100 b = "foo\x00\x00" # Padded to fixed length packet = struct.pack('<B5s', a, b) digest = hashlib.sha256(packet).digest() packet += digest ```` To do the same thing in Go I am trying code like this: ````package main import ( "crypto/sha256" "fmt" "encoding/binary" "bytes" ) type packet struct { a uint8 b string } func main() { var p = packet{} p.a = 1 p.b = "foo\x00\x00" buf := new(bytes.Buffer) binary.Write(buf, binary.LittleEndian, &p) h := sha256.New() h.Write(buf.String()) fmt.Printf("% x\n", p) } ```` Unfortunately, however I attack it, I seem to get into a nightmare of clashing variable types (buffers, byte arrays and strings) I would appreciate some guidance as to whether I am taking even remotely the right approach | Updated to something that works ````package main import ( "bytes" "crypto/sha256" "encoding/binary" "fmt" ) type packet struct { a uint8 b []byte } func main() { var p = packet{} p.a = 1 p.b = []byte("foo\x00\x00") buf := bytes.Buffer{} err := binary.Write(&buf, binary.BigEndian, p.a) if err != nil { fmt.Println(err) } _, err = buf.Write(p.b) if err != nil { fmt.Println(err) } h := sha256.New() h.Write(buf.Bytes()) hash := h.Sum([]byte{}) fmt.Printf("% x\n", hash) } ```` <a href="http://play.golang.org/p/t8ltu_WCpe" rel="nofollow">http://play.golang.org/p/t8ltu_WCpe</a> You are right that it is a bit painful to write structs with possibly dynamic-length items in them (slices and strings) using encoding/binary You might be interested in checking out the "encoding/gob" package that encodes strings automatically (although it is not compatible with the padded string you have got here)
Django ModelForm not calling clean I am performing a basic Django ModelForm create/validate/save operation My custom clean methods are not being called when is_valid() is called when running the code under the Eclipse debugger and I set a breakpoint after the form creation and the call to is_valid() I have traced through the Django base code numerous times and it appears that the error dictionary on the ModelForm class is never set to None, which triggers the validation I suspect that this is due to an interaction with the debugger accessing the _errors attribute of the ModelForm to display in the variables pane When I remove all breakpoints and let the code flow naturally I can prove that the custom clean code is running by issuing print statements Is this a flaw in the Django ModelForm design, an Eclipse problem, or am I barking up the wrong tree? models.py ````from django.db import models class TestModel1(models.Model): field1 = models.CharField(max_length=45) field2 = models.IntegerField(default=2) field3 = models.CharField(max_length=45, null=True, blank=True) ```` forms.py ````from order.models import TestModel1 from django.forms import ModelForm class OrderTestForm(ModelForm): def clean_field1(self): return self.cleaned_data['field1'] def clean_field2(self): return self.cleaned_data['field2'] class Meta: model = TestModel1 ```` My test harness: ````from forms import OrderTestForm row = {'field1': 'test value', 'field2': '4', } ff = OrderTestForm(row) #ff.full_clean() if ff.is_valid(): ff.save() else: print ff.errors ```` | What if you try: ````from order.models import TestModel1 from django.forms import ModelForm class OrderTestForm(ModelForm): class Meta: model = TestModel1 def clean_field1(self): value = self.cleaned_data['field1'] print value return value def clean_field2(self): value = self.cleaned_data['field2'] print value return value ````
how to find the excel value from row col names in python? I am working on xlsx using python & I am using openpyxl for the same I have the column name and row number Can I find the value of that box from xlsx? for example: ````column - P row - 369 ```` Can I find the value from the Pth column & 369th row of the xlsx? | How about: ````d = ws.cell(row = 4, column = 2) print d.value ```` See the <a href="http://packages.python.org/openpyxl/tutorial.html#accessing-one-cell" rel="nofollow">documentation</a>
Find in the matrix the first row in which all elements are arranged in descending order Find in the matrix the first row in which all elements are arranged in descending order Change the order of the elements of this row to ascending order For example: In this matrix the second array contains the elements in descending order ````matrix = [[-5, -6, 2], [3, 1, -7], [8, -4, 9]] ```` the output should be: ````[-7, 1, 3] ```` | If all differences between the neighboring numbers in one row are greater than zero, print the row in reversed order: ````matrix = [[-5, -6, 2], [3, 1, -7], [8, -4, 9]] for row in matrix: if all(x - y > 0 for x, y in zip(row[:-1], row[1:])): print(list(reversed(row))) break ```` Output: ````[-7, 1, 3] ````
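The same logic, packaged as a runnable function with an explicit fallback when no row is strictly descending (the naming is mine):

```python
matrix = [[-5, -6, 2], [3, 1, -7], [8, -4, 9]]

def first_descending_ascending(rows):
    """Return the first strictly descending row, re-sorted ascending."""
    for row in rows:
        if all(x > y for x, y in zip(row, row[1:])):
            return row[::-1]  # reversing a descending row sorts it ascending
    return None

print(first_descending_ascending(matrix))  # [-7, 1, 3]
```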
How to remove frequency from signal I want to remove one frequency (one peak) from a signal and plot my function without it After fft I found the frequency and amplitude and I am not sure what I need to do now For example I want to remove my highest peak (marked with a red dot on the plot) ````import numpy as np import matplotlib.pyplot as plt # create data N = 4097 T = 100.0 t = np.linspace(-T/2, T/2, N) f = np.sin(50.0 * 2.0*np.pi*t) + 0.5*np.sin(80.0 * 2.0*np.pi*t) #plot function plt.plot(t, f, 'r') plt.show() # perform FT and multiply by dt dt = t[1]-t[0] ft = np.fft.fft(f) * dt freq = np.fft.fftfreq(N, dt) freq = freq[:N/2+1] amplitude = np.abs(ft[:N/2+1]) # plot results plt.plot(freq, amplitude, 'o-') plt.legend(('numpy fft * dt',), loc='upper right') plt.xlabel('f') plt.ylabel('amplitude') #plt.xlim([0, 1.4]) plt.plot(freq[np.argmax(amplitude)], max(amplitude), 'ro') print "Amplitude: " + str(max(amplitude)) + " Frequency: " + str(freq[np.argmax(amplitude)]) plt.show() ```` | You can design a bandstop filter: ````wc = freq[np.argmax(amplitude)] / (0.5 / dt) wp = [wc * 0.9, wc / 0.9] ws = [wc * 0.95, wc / 0.95] b, a = signal.iirdesign(wp, ws, 1, 40) f = signal.filtfilt(b, a, f) ````
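If scipy is unavailable, a cruder alternative (a sketch of mine, not from the answer) is to zero the offending bin of the real FFT and invert the transform; this only removes the component cleanly when it sits exactly on a bin:

```python
import numpy as np

fs, n = 1024.0, 4096
t = np.arange(n) / fs
f = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)

spec = np.fft.rfft(f)
peak = np.argmax(np.abs(spec))  # bin of the strongest line (50 Hz -> bin 200)
spec[peak] = 0                  # notch out that single frequency
clean = np.fft.irfft(spec, n=n)

# normalized amplitudes after the notch: 50 Hz is gone, 80 Hz survives
amp = np.abs(np.fft.rfft(clean)) / (n / 2)
print(round(amp[peak], 3), round(amp[int(80 * n / fs)], 3))  # 0.0 0.5
```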
Python iterating RegEx that extracts text from between delimiters I have a regex function that extracts string elements from in between two predefined separators (start & end): ````def Findy(start, end, anystring): result = (anystring.split(start)[1].split(end)[0]) return result string = "I want A and I want B and I want C and " Findy('want', 'and', string) >>>'A' ```` But this only grabs the first match it finds; what I need is to get all the matches so that; ````Findy('want', 'and', string) >>>['A', 'B', 'C'] ```` I have tried incorporating re.findall but I do not always know what the pattern will be that exists between my start and end delimiters, so no luck there How can I refactor Findy to iterate over a string and pick up all matches? | Do not know whether this code satisfies you: ````def findy(start, end, anystr): res = [] tmp = anystr.split(start)[1:] for e in tmp: res.append(e.split(end)[0].strip()) return res ````
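For comparison, the same multi-match extraction as a single `re.findall`, with the delimiters escaped so they are taken literally (my sketch, not the poster's code):

```python
import re

def findy(start, end, anystring):
    # non-greedy capture between literal delimiters, whitespace trimmed
    pattern = re.escape(start) + r'\s*(.*?)\s*' + re.escape(end)
    return re.findall(pattern, anystring)

print(findy('want', 'and', "I want A and I want B and I want C and "))
# ['A', 'B', 'C']
```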
What happened in 1913? | null |
Matlab to python conversion matrix operations Hi, I am trying to convert this distance formula for rectilinear distance from matlab to python X1 and X2 are two matrices of two dimensional points and could be differing lengths ````nd = size(X1); n = nd(1); d = nd(2); m = size(X2, 1); D = abs(X1(:, ones(1, m)) - X2(:, ones(1, n))') + abs(X1(:, 2*ones(1, m)) - X2(:, 2*ones(1, n))'); ```` I think the problem I am having most in python is appending the ones matrices with X1 and X2 since they are np arrays | First, your code: ````octave:1> X1=[0 1 2 3;2 3 1 1]' octave:2> X2=[2 3 2;4 2 4]' <your code> octave:21> D D = 4 3 4 2 3 2 3 2 3 4 1 4 ```` Matching `numpy` code: ````X1=np.array([[0, 1, 2, 3], [2, 3, 1, 1]]).T X2=np.array([[2, 3, 2], [4, 2, 4]]).T D=np.abs(X1[:, None, :]-X2[None, :, :]).sum(axis=-1) ```` produces `D`: ````array([[4, 3, 4], [2, 3, 2], [3, 2, 3], [4, 1, 4]]) ```` `numpy` broadcasts automatically, so it does not need the `ones()` to expand the dimensions Instead I use `None` (same as `np.newaxis`) to create new dimensions The difference is then `3D`, which is then summed on the last axis I forgot how spoilt we are with `numpy` `broadcasting` Though newer Octave has something similar: ````D = sum(abs(reshape(X1, [], 1, 2)-reshape(X2, 1, [], 2)), 3) ````
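A self-contained check of the broadcasting translation against the Octave result quoted above:

```python
import numpy as np

X1 = np.array([[0, 1, 2, 3], [2, 3, 1, 1]]).T  # four 2-D points
X2 = np.array([[2, 3, 2], [4, 2, 4]]).T        # three 2-D points

# (4,1,2) - (1,3,2) broadcasts to (4,3,2); summing |dx|+|dy| gives L1 distances
D = np.abs(X1[:, None, :] - X2[None, :, :]).sum(axis=-1)
print(D)  # rows: [4 3 4], [2 3 2], [3 2 3], [4 1 4]
```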
Make numpy sum() return a sum of matrices instead of a single number I am doing a fairly complicated summation using a matrix with numpy The shape of the matrix is `matrix.shape = (500, 500)` and the shape of the array is `arr.shape = (25,)` The operation is as follows: ````totalsum = np.sum([i * matrix for i in arr]) ```` Here is what I do not understand: `np.sum()` is very slow and returns a single float, `float64` Doing the same operation with Python's `sum()`, i.e. ````totalsum2 = sum([i*matrix for i in arr]) ```` preserves the shape of the matrix That is, the resulting shape is `totalsum2.shape() = (500, 500)` Huh? I also think it is strange that `np.sum()` takes longer than `sum()`, particularly when we are working with numpy ndarrays What exactly is going on here? How is `np.sum()` summing the above values in comparison to `sum()`? I would like `np.sum()` to preserve the matrix shape How can I set the dimensions such that `np.sum()` preserves the matrix size and does not return a single float?
| ````[i*matrix for i in arr] # list of matrices ```` The list above is a list of matrices, so when you use sum it will add the arrays ````In [6]: matrix = np.array([[1, 2], [3, 4]]) In [7]: matrix Out[7]: array([[1, 2], [3, 4]]) In [9]: [i * matrix for i in (2, 4, 8)] Out[9]: [array([[2, 4], [6, 8]]), array([[ 4, 8], [12, 16]]), array([[ 8, 16], [24, 32]])] ```` Please check the help for np.sum ```` File: /home/ale/.virtualenvs/ml/local/lib/python2.7/site-packages/numpy/core/fromnumeric.py Definition: np.sum(a, axis=None, dtype=None, out=None, keepdims=False) Docstring: Sum of array elements over a given axis Parameters ---------- a : array_like Elements to sum axis : None or int or tuple of ints, optional Axis or axes along which a sum is performed The default (`axis` = `None`) is to perform a sum over all the dimensions of the input array `axis` may be negative, in which case it counts from the last to the first axis .. versionadded:: 1.7.0 ```` It says that if you do not define an axis it will sum over all the dimensions Example: ````In [4]: np.sum(np.array([[1, 2], [3, 4]])) # 1 + 2 + 3 + 4 Out[4]: 10 ```` Why is np.sum taking longer? Well, intuition says that in the expression `[i*matrix for i in arr]` you are creating a new array for each `i`, which then np.sum will sum over all arrays There might be other reasons but I am guessing it is that
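Concretely, the shape-preserving fix the answer is pointing toward is `axis=0`; broadcasting also avoids building the Python list at all. A small sketch:

```python
import numpy as np

matrix = np.arange(4).reshape(2, 2)  # stand-in for the (500, 500) matrix
arr = np.array([2.0, 4.0, 8.0])      # stand-in for the (25,) array

# original expression, fixed: sum over the list axis only
total = np.sum([i * matrix for i in arr], axis=0)

# equivalent without the Python loop: (3,1,1) * (2,2) -> (3,2,2), then axis=0
total2 = (arr[:, None, None] * matrix).sum(axis=0)

print(total.shape, np.array_equal(total, total2))  # (2, 2) True
```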
Use python to generate graph using node constraints I am trying to solve a problem I have with my system (Python/Storm) but not sure what is the best tool <strong>The Goal: create the edges of a graph using constraints on the Node input and output</strong> I have around 400+ python functions (apache storm shell bolts, each bolt wraps one function - Storm does not really matter in this case, I will treat them as nodes) Each bolt/function/Node has a defined input and output name-attributes list I have a source (which has output but NO input), Nodes (have input and output list), Sink (only input, no output) To make it more clear, let us say I have: ````S = Source Input = [] Output = ["a", "b", "c", "d"] ("a", "b", "c", "d" are attributes the source produces) A = Node Input = ["a", "b"] output = ["e"] B = Node Input = ["a", "e"] output = ["f"] Si = Sink Input = ["a", "b", "c", "d", "e", "f"] Output = [] ```` I would like NetworkX (or another graph library) to create the edges alone using those constraints on the Node Each node output is ONLY the output list, not output+input The output I want is the list of edges: ````S A S B A B B Si A Si S Si ```` <a href="http://i.stack.imgur.com/HPBlV.png" rel="nofollow"><img src="http://i.stack.imgur.com/HPBlV.png" alt="enter image description here"></a> *in the graph C=Si Does NetworkX support such a build? and if so, how can I implement it? | You could build a bipartite graph (I think directed?) from your data and then "project" it onto one set of nodes to make the graph you want E.g. if you have the directed edges S->a and a->T, the two node sets are {S, T} and {a} Projecting onto the node set {S, T} gives S->T because there is a path from S->T in the original bipartite graph ````import networkx as nx data = [("S", [], ["a", "b", "c", "d"]), ("A", ["a", "b"], ["e"]), ("B", ["a", "c"], ["f"]), ("Si", ["a", "b", "c", "d", "e", "f"], [])] G = nx.DiGraph() #G = nx.Graph() # maybe you want an undirected graph?
nodes = [] for n, inedges, outedges in data: nodes.append(n) for s in inedges: G.add_edge(s, n) for t in outedges: G.add_edge(n, t) P = nx.projected_graph(G, nodes) print list(P.nodes()) print list(P.edges()) # OUTPUT # ['A', 'S', 'B', 'Si'] # [('A', 'Si'), ('S', 'A'), ('S', 'Si'), ('S', 'B'), ('B', 'Si')] ````
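The projection can also be sanity-checked without networkx: an edge goes from one node to another whenever the first node's outputs intersect the second's inputs. A pure-Python sketch on the question's data (using B's inputs `["a", "e"]` as stated in the question):

```python
data = [("S", [], ["a", "b", "c", "d"]),
        ("A", ["a", "b"], ["e"]),
        ("B", ["a", "e"], ["f"]),
        ("Si", ["a", "b", "c", "d", "e", "f"], [])]

# producer -> consumer whenever an output attribute matches an input attribute
edges = sorted((src, dst)
               for src, _, outs in data
               for dst, ins, _ in data
               if src != dst and set(outs) & set(ins))
print(edges)
# [('A', 'B'), ('A', 'Si'), ('B', 'Si'), ('S', 'A'), ('S', 'B'), ('S', 'Si')]
```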
Finding repeating operands using regex - Python I am trying to find through a file expressions such as <strong>A*B</strong> A and B could be anything from `[A-Z]`, `[a-z]`, `[0-9]` and may include `<`, `>`, `(`, `)`, `[`, `]`, `_`, `.` etc, but not <em>commas</em>, <em>semicolon</em>, <em>whitespace</em>, <em>newline</em> or any other arithmetic operator `(+ - \ *)` These are the 8 delimiters Also there can be spaces between A and * and B Also the number of opening brackets needs to be the same as the number of closing brackets in A and B I unsuccessfully tried something like this (not taking into account operators inside A and B): ````import re fp = open("test", "r") for line in fp: p = re.compile("(,| |;)(.*)[*](.*)(,| |;|\n)") m = p.match(line) if m: print 'Match found,', m.group() else: print 'No match' ```` <strong>Example 1:</strong> `(A1 * B1.list(), C * D * E)` should give 3 matches: - A1 * B1.list() - C * D - D * E An extension to the problem statement could be that <em>commas</em>, <em>semicolons</em>, <em>whitespace</em>, <em>newlines</em> or any other arithmetic operator (+ - \ *) are allowed in A and B if inside brackets: <strong>Example 2:</strong> `(A * B.max(C * D, E))` should give 2 matches: - A * B.max(C * D, E) - C * D I am new to regular expressions and curious to find a solution to this | Regular expressions have limits The border between regular expressions and text parsing can be tight In my opinion, using a parser is a more robust solution in your case The examples in the question suggest recursive patterns A parser is again superior to a regex flavor in this area Have a look at this proposed solution: <a href="http://stackoverflow.com/questions/594266/equation-parsing-in-python">Equation parsing in Python</a>
traversing an object tree I am trying to find information on different ways to traverse an object tree in python I do not know much about the language in general yet so any suggestions/techniques would be welcome Thanks so much jml | See the <a href="http://docs python org/library/inspect html" rel="nofollow">`inspect`</a> module It has functions for accessing/listing all kinds of object information |
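A minimal sketch of what the answer suggests: walking an object's data attributes recursively with `inspect.getmembers` (the classes and names here are invented for illustration):

```python
import inspect

class Leaf:
    def __init__(self, value):
        self.value = value

class Tree:
    def __init__(self):
        self.left = Leaf(1)
        self.right = Leaf(2)

def walk(obj, depth=0):
    """Yield (depth, attribute name) pairs, descending into nested objects."""
    for name, value in inspect.getmembers(obj):
        if name.startswith('_') or inspect.isroutine(value):
            continue                    # skip dunders and methods
        yield depth, name
        if hasattr(value, '__dict__'):  # user-defined object: recurse
            yield from walk(value, depth + 1)

print(list(walk(Tree())))
# [(0, 'left'), (1, 'value'), (0, 'right'), (1, 'value')]
```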
Python tkinter grid manager not working columns Python version 2.7 (I know it is dated) I have searched through several answers and have not found a solution I am trying to get this label: ````w = Label(root, text="This label", fg="red", font=("Helvetica", 16)) w.grid(row=5, column=20) ```` to basically any other column than the one it is in (the center) Simply put, the rows are working and the columns are not This is the script: ````from Tkinter import * root = Tk() root.wm_title("Title:D") root.geometry('{}x{}'.format(500, 250)) photo = PhotoImage(file="spaz.gif") label = Label(root, image=photo) label.grid(row=1, column=1) w = Label(root, text="This label", fg="red", font=("Helvetica", 16)) w.grid(row=5, column=20) root.mainloop() ```` | Rows and columns that are empty have a size of zero The code is working exactly like it is designed to work The label <em>is</em> in column 20, it is just that columns 0 and 2-19 are invisible
What feature of the Birmingham Quran fragments' text makes some doubt that it is older than other known versions of the Quran? | dots and chapter separators
What is one way a match can end? | in a draw |
How to extract hyperlinked hrefs from html using urllib2 I am using urllib2 to pull the html contents of a web page My plan is to iterate through the page numbers provided at the bottom of the page (a take on pagination) However, the link for each of the page listings on the bottom of the page is provided by hyperlinks in the href tag For example, the link to the corresponding web page for each page number is a link associated with the '#' symbol (i.e. right clicking on the '#' and opening the link in a new tab leads to the page): ```` <li class="currentPage">3</li> <li><a class = "_pageNo" href='#'>4</a></li> <li><a class = "_pageNo" href='#'>5</a></li> <li><a class = "_pageNo" href='#'>6</a></li> ```` When I pull the contents the '#' are retrieved as characters rather than their underlying links Any thoughts? | Inspecting the page you mentioned in the comments, I found out that when you click a link a `POST` is sent back to the server informing which page to see next, so to fetch a specific page you will need to do this: ````from urllib import urlencode import urllib2 url = 'http://online.wsj.com/search/term.html?KEYWORDS=alibaba' data = urlencode({'page_no': 3}) contents = urllib2.urlopen(url, data=data).read() ```` I would also suggest using the lib `requests` for this, which would simplify the code
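The POST payload the answer builds can be checked without any network access; note that in Python 3 `urlencode` lives in `urllib.parse` and `urlopen` expects bytes for POST data:

```python
from urllib.parse import urlencode

data = urlencode({'page_no': 3})
print(data)           # page_no=3
body = data.encode()  # what you would pass as the POST body in Python 3
print(body)           # b'page_no=3'
```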
Multiple MySQL JOINs and duplicated cells I have two MySQL queries that give the results that I am looking for I would ultimately like to combine these two queries to produce a single table and I am stuck QUERY 1: ````SELECT scoc.isr, outcome_concept_id, concept_name as outcome_name FROM standard_case_outcome AS sco INNER JOIN concept AS c ON sco.outcome_concept_id = c.concept_id INNER JOIN standard_case_outcome_category AS scoc ON scoc.isr = sco.isr WHERE scoc.outc_code = 'CA' ```` RESULT 1: <a href="https://i.stack.imgur.com/ZEoRc.png" rel="nofollow"><img src="https://i.stack.imgur.com/ZEoRc.png" alt="RESULT OF QUERY 1 (TRUNCATED)"></a> QUERY 2: ````SELECT scoc.isr, drug_seq, concept_id, concept_name as drug_name FROM standard_case_drug AS scd INNER JOIN concept AS c ON scd.standard_concept_id = c.concept_id INNER JOIN standard_case_outcome_category AS scoc ON scoc.isr = scd.isr WHERE scoc.outc_code = 'CA' ```` RESULT 2: <a href="https://i.stack.imgur.com/zjJDM.png" rel="nofollow"><img src="https://i.stack.imgur.com/zjJDM.png" alt="RESULT OF QUERY 2 (TRUNCATED)"></a> DESIRED RESULT: <a href="https://i.stack.imgur.com/MKlMJ.png" rel="nofollow"><img src="https://i.stack.imgur.com/MKlMJ.png" alt="DESIRED RESULT"></a> I am pretty sure I can figure out how to do it using Python/pandas but I was wondering if there is (a) a way to do this in MySQL, (b) any benefit to doing it with MySQL ** If you are curious, <a href="http://datadryad.org/resource/doi:10.5061/dryad.8q0s4" rel="nofollow">this is the entire dataset</a> Here is the db structure for the pertinent tables: ````# Dump of table concept # ------------------------------------------------------------ CREATE TABLE `concept` ( `concept_id` int(11) NOT NULL, `concept_name` varchar(255) NOT NULL, `domain_id` varchar(20) NOT NULL, `vocabulary_id` varchar(20) NOT NULL, `concept_class_id` varchar(20) NOT NULL, `standard_concept` varchar(1) DEFAULT NULL, `concept_code` varchar(50) NOT NULL, `valid_start_date` date NOT NULL,
`valid_end_date` date NOT NULL, `invalid_reason` varchar(1) DEFAULT NULL, PRIMARY KEY (`concept_id`), UNIQUE KEY `idx_concept_concept_id` (`concept_id`), KEY `idx_concept_code` (`concept_code`), KEY `idx_concept_vocabluary_id` (`vocabulary_id`), KEY `idx_concept_domain_id` (`domain_id`), KEY `idx_concept_class_id` (`concept_class_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; # Dump of table standard_case_drug # ------------------------------------------------------------ CREATE TABLE `standard_case_drug` ( `primaryid` varchar(512) DEFAULT NULL, `isr` varchar(512) DEFAULT NULL, `drug_seq` varchar(512) DEFAULT NULL, `role_cod` varchar(512) DEFAULT NULL, `standard_concept_id` int(11) DEFAULT NULL, KEY `idx_standard_case_drug_primary_id` (`primaryid`(255), `drug_seq`(255)), KEY `idx_standard_case_drug_isr` (`isr`(255), `drug_seq`(255)), KEY `idx_standard_case_drug_standard_concept_id` (`standard_concept_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; # Dump of table standard_case_outcome # ------------------------------------------------------------ CREATE TABLE `standard_case_outcome` ( `primaryid` varchar(512) DEFAULT NULL, `isr` varchar(512) DEFAULT NULL, `pt` varchar(512) DEFAULT NULL, `outcome_concept_id` int(11) DEFAULT NULL, `snomed_outcome_concept_id` int(11) DEFAULT NULL, KEY `idx_standard_case_outcome_primary_id` (`primaryid`(255)), KEY `idx_standard_case_outcome_isr` (`isr`(255)), KEY `idx_standard_case_outcome_outcome_concept_id` (`outcome_concept_id`), KEY `idx_standard_case_outcome_snomed_outcome_concept_id` (`snomed_outcome_concept_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; # Dump of table standard_case_outcome_category # ------------------------------------------------------------ CREATE TABLE `standard_case_outcome_category` ( `primaryid` varchar(512) DEFAULT NULL, `isr` varchar(512) DEFAULT NULL, `outc_code` varchar(512) DEFAULT NULL COMMENT 'Code for a patient outcome (See table below) CODE MEANING_TEXT ---------------- DE Death LT Life-Threatening HO Hospitalization - Initial or
Prolonged DS Disability CA Congenital Anomaly RI Required Intervention to Prevent Permanent Impairment/Damage OT Other Serious (Important Medical Event) NOTE: The outcome from the latest version of a case is provided If there is more than one outcome the codes will be line listed', `snomed_concept_id` int(11) DEFAULT NULL, KEY `idx_standard_case_outcome_category_primary_id` (`primaryid`(255)), KEY `idx_standard_case_outcome_category_isr` (`isr`(255)), KEY `idx_standard_case_outcome_category_snomed_concept_id` (`snomed_concept_id`, `outc_code`(255)) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ```` | This should get you the desired results ````SELECT `scoc`.`isr` AS `isr`, `sco`.`outcome_concept_id` AS `outcome_concept_id`, `c1`.`concept_name` AS `outcome_name`, `scd`.`drug_seq` AS `drug_seq`, `c2`.`concept_id` AS `concept_id`, `c2`.`concept_name` AS `drug_name` FROM `standard_case_outcome` AS `sco` INNER JOIN `concept` AS `c1` ON `sco`.`outcome_concept_id` = `c1`.`concept_id` LEFT JOIN `standard_case_drug` AS `scd` ON `sco`.`isr` = `scd`.`isr` INNER JOIN `concept` AS `c2` ON `scd`.`standard_concept_id` = `c2`.`concept_id` INNER JOIN `standard_case_outcome_category` AS `scoc` ON `scoc`.`isr` = `sco`.`isr` WHERE `scoc`.`outc_code` = 'CA' ```` <strong>EDIT</strong> Note that I left out the `concept` table as you are not selecting anything from it or filtering the results with it <strong>SECOND EDIT</strong> Updated to include the `concept` table Updated question showed that it is in fact needed in the `SELECT` <strong>THIRD EDIT</strong> Needs to select `concept`.`name` for `sco` and `scd` respectively
What does using the sun for direction involve? | the need for making compensation based on the time |
How can I execute Python code in a virtualenv from Matlab I am creating a Matlab toolbox for research and I need to execute Matlab code but also Python code I want to allow the user to execute Python code from Matlab The problem is that if I do it right away I would have to install everything in the Python environment, and I want to avoid this using virtualenv The problem is that I do not know how to tell Matlab to use the virtual environment created | You can either modify the `PATH` environment variable in MATLAB prior to calling python from MATLAB <pre class="lang-matlab prettyprint-override">`% Modify the system PATH so it finds the python executable in your venv first setenv('PATH', ['/path/to/my/venv/bin' pathsep getenv('PATH')]) % Call your python script system('python myscript.py') ```` Or the better way would be to specify the full path to the python binary ````system('/path/to/my/venv/bin/python myscript.py') ````
get UTC offset from time zone name in python How can I get the UTC offset from a time zone name in python? For example: I have "Asia/Jerusalem" and I want to get "+0200" | Because of DST (Daylight Saving Time) the result depends on the time of the year: ````import datetime, pytz datetime.datetime.now(pytz.timezone('Asia/Jerusalem')).strftime('%z') # returns '+0300' (because 'now' they have DST) pytz.timezone('Asia/Jerusalem').localize(datetime.datetime(2011, 1, 1)).strftime('%z') # returns '+0200' (because in January they did not have DST) ````
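On Python 3.9+ the stdlib `zoneinfo` module gives the same answers without pytz (this assumes the IANA tz database is available on the system):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo('Asia/Jerusalem')
print(datetime(2011, 1, 1, tzinfo=tz).strftime('%z'))  # +0200 (standard time)
print(datetime(2011, 7, 1, tzinfo=tz).strftime('%z'))  # +0300 (DST)
```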
When did the Royal Institute oppose setting lower limits on the size of new homes in Britain? | null |