When use "cv2.drawMatches", error occurs: "outImg is not a numpy array, neither a scalar" I have the following code fo the keyframe matching by ORB: ``` import numpy as np import cv2 from matplotlib import pyplot as plt img1 = cv2.imread("C:\\Users\\user\\Desktop\\picture\\Pikachu_Libre.png",0) img2 = cv2.imread("C:\\Users\\user\\Desktop\\picture\\Pikachu_Libre.png",0) # Initiate STAR detector orb = cv2.ORB_create() # find the keypoints with ORB kp1, des1 = orb.detectAndCompute(img1,None) kp2, des2 = orb.detectAndCompute(img2,None) bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) # Match descriptors. matches = bf.match(des1,des2) # Sort them in the order of their distance. matches = sorted(matches, key = lambda x:x.distance) # Draw first 10 matches. img3 = cv2.drawMatches(img1,kp1,img2,kp2,None,matches[:10], flags=2) plt.imshow(img3),plt.show() ``` After i run I get the following error: ``` img3 = cv2.drawMatches(img1,kp1,img2,kp2,None,matches[:10], flags=2) TypeError: outImg is not a numpy array, neither a scalar ``` Anyone can help me on this?
Notice the prototype of [`cv2.drawMatches()`](https://docs.opencv.org/3.4.1/d4/d5d/group__features2d__draw.html#ga7421b3941617d7267e3f2311582f49e1):

```
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]) -> outImg
```

So your parameters' order is wrong.

---

**From**:

```
img3 = cv2.drawMatches(img1,kp1,img2,kp2,None,matches[:10], flags=2)
```

**To**:

```
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], None,flags=2)
```
Spring-Security-Oauth2: Default login success URL Is it possible to set a default login success URL for a Spring OAuth2 SSO service? Consider the following scenario:

1. browser requests `index.html`
2. sso service: not protected ==> returns the `index.html`
3. index.html contains a `manifest` attribute ==> browser requests the manifest
4. sso service: manifest is protected ==> returns 401
5. client redirects to `${sso.host}/login`
6. sso service redirects to the auth server
7. authentication ==> redirects back to `${sso.host}/login` with the code in the query string
8. sso service: requests the token and redirects to the manifest file

Is there a way to NOT redirect to the last requested resource which was protected, but to redirect to 'index.html' by default? Please let me know even if there isn't a way to achieve this.
I have (I think) a similar issue: in my case, once the SSO request succeeds the user is redirected to /, which is not what I want. There is a built-in solution that took a bit of digging to find. The `AbstractAuthenticationProcessingFilter` has a method `setAuthenticationSuccessHandler` that allows you to control this, so if you have access to the `OAuth2ClientAuthenticationProcessingFilter` you can set it to what you want. If you have a setup similar to the tutorial: <https://spring.io/guides/tutorials/spring-boot-oauth2/#_social_login_manual> then you can simply add the following to the `OAuth2ClientAuthenticationProcessingFilter` that is created in the tutorial:

```
OAuth2ClientAuthenticationProcessingFilter oauth2Filter =
        new OAuth2ClientAuthenticationProcessingFilter("/XXXProvider/login");

oauth2Filter.setAuthenticationSuccessHandler(new SimpleUrlAuthenticationSuccessHandler() {
    public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response,
            Authentication authentication) throws IOException, ServletException {
        this.setDefaultTargetUrl("/my_preferred_location");
        super.onAuthenticationSuccess(request, response, authentication);
    }
});
```
Mercurial: Convert clones to branches I am trying to clean up some cloned repositories I have, basically converting clones into named branches. What's the easiest way to do this?
The only way you can make them named branches would be to rewrite their history, because the name of the branch is part of the history. However, if you can survive without having the branches be named in the past (the future is fine) then you can just pull all the repos into the main one (without merging!) and then label your branches once they're in. For example, if you have MainRepo, Feature1Repo, and Feature2Repo, you would do the following:

```
$ cd MainRepo
$ hg pull ../Feature1Repo
$ hg pull ../Feature2Repo
```

Now, if you look at the DAG in something like TortoiseHg, you'll see a couple different heads. From here, you just `hg up` and `hg branch`:

```
$ hg up feature1headrev
$ hg branch Feature1
$ hg up feature2headrev
$ hg branch Feature2
```

From here forward, you'll have your named branches. Note: If the `hg pull` complains about the repositories not being related, you can do an `hg pull -f`, which will create multiple tails in the repository. This is usually not how true branches are, but it is sometimes useful.
Raise Chrome JS heap limit? I have a JavaScript application that uses way too much memory. It doesn't crash out the tab, but it can take minutes to load, most of which is spent in GC. I'm using the heap profiler to see what functions are allocating the most memory, which works great. Is there some way to get Chrome to allow a larger JS heap per process, so that I can turn around test runs on reducing memory pressure without waiting minutes for GC? A command-line argument I couldn't find, perhaps?
Yes the `jsHeapSizeLimit` reported in the console:

```
> console.memory
MemoryInfo {totalJSHeapSize: 42100000, usedJSHeapSize: 29400000, jsHeapSizeLimit: 1620000000}
```

is calculated from two limits:

```
size_t MaxReserved() {
  return 2 * max_semi_space_size_ + max_old_generation_size_;
}
```

[chromium source](https://chromium.googlesource.com/v8/v8.git/+/c59f78f611280240d65ab74a8874ce51161bfcd2/src/heap/heap.h#1368 "jsHeapSizeLimit")

You may set these maximums (in MB) with v8 flags:

```
chromium --js-flags="--max_old_space_size=512 --max_semi_space_size=512"
```

(or you can find the similarly named initial allocation flags with `--js-flags=--help` if you prefer.)

A short write up of how the gc manages the heap spaces is [here](https://v8project.blogspot.ch/2015/08/getting-garbage-collection-for-free.html).
Bitwise multiplication to achieve result Is there a general set of rules to achieve a certain number by multiplying a known and an unknown number? e.g. Suppose x = 13 and z = 9. Is there a way to find y such that

```
x * y = z
=> 13 * y = 9
```

I don't want to utilize them as mathematical integers, but rather in terms of bits. So, I want to keep z an int. Obviously, there is an overflow in the bit representation, but I'm unsure how to find y without brute forcing all values on a 32-bit machine. I think y would be a very small, negative number. Note: this isn't a homework assignment, so you could change x and y to whatever, these are just examples.
First, find the [modular multiplicative inverse](http://en.wikipedia.org/wiki/Modular_multiplicative_inverse) of `x` (13) mod 2^32. There are several ways to do it, many people will recommend the extended Euclidean algorithm, but it is not the simplest for this case. See Hacker's Delight chapter 10 (integer division by constants) for a simpler algorithm. Anyway, once you know that the modular multiplicative inverse of 13 mod 2^32 is `0xc4ec4ec5`, multiply that number by `z` to get `y`. `0xc4ec4ec5 * 9 = 0xec4ec4ed` So `y = 0xec4ec4ed`, and `0xec4ec4ed * 13` is indeed 9. Note that if `x` is even, it has no inverse. If it's "at most as even as `z`" (that is, it has as many or fewer trailing zeroes than `z`) you can solve it this way after dividing out their highest common power of 2.
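For a quick sanity check, here is a minimal Python sketch of the same computation (assuming Python 3.8+, where `pow` accepts a negative exponent to compute a modular inverse):

```
M = 2 ** 32                # working modulo 2^32, i.e. 32-bit wraparound
x, z = 13, 9

x_inv = pow(x, -1, M)      # modular multiplicative inverse of 13 mod 2^32
assert x_inv == 0xC4EC4EC5

y = (x_inv * z) % M        # y = inverse(x) * z
assert y == 0xEC4EC4ED

assert (x * y) % M == z    # 13 * y wraps around to exactly 9 in 32 bits
```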
Open an Event object created by my service from my application I have created a Windows service, in which I am creating an event "Test". I want the same event object to be set/reset by my application, but I do not seem to get the handle of the event object in my app, even though I can see the event listed in the BaseNamedObjects. I think I need to do something with the security attributes of the created event. I am creating this event in my service with CreateEvent(NULL, TRUE, FALSE, TEXT("Test")) and opening it in my application with OpenEvent(EVENT_ALL_ACCESS, TRUE, TEXT("Test")). Please suggest what changes I would need to make for my app to get the handle.

**update** After replacing `TEXT("Test")` with `TEXT("Global\\Test")`, I still didn't get the event object handle. At least it now recognizes the existence of the event object, returning an "Access Denied" error; it was returning "the system cannot find the file specified" earlier. As I said, I think there is some security aspect here. This is what I found out: as the service creates the event in Session 0, it cannot be inherited by my application, which is created in Session 1. For that, while creating the event object, I need to specify a security attributes structure with a proper security descriptor. Could somebody tell me how to do so?
Try this:

```
// Build a security descriptor with a NULL DACL: this grants access to everyone,
// so a process running in another session can open the event.
PSECURITY_DESCRIPTOR psd = (PSECURITY_DESCRIPTOR) LocalAlloc(LPTR, SECURITY_DESCRIPTOR_MIN_LENGTH);
InitializeSecurityDescriptor(psd, SECURITY_DESCRIPTOR_REVISION);
SetSecurityDescriptorDacl(psd, TRUE, NULL, FALSE);

SECURITY_ATTRIBUTES sa = {0};
sa.nLength = sizeof(sa);
sa.lpSecurityDescriptor = psd;
sa.bInheritHandle = FALSE;

// In the service: create the event in the Global namespace so it is visible
// from other sessions.
HANDLE hEvent = CreateEvent(&sa, TRUE, FALSE, TEXT("Global\\Test"));

LocalFree(psd);
```

```
// In the application: open the existing event, requesting only the access you need.
HANDLE hEvent = OpenEvent(SYNCHRONIZE, FALSE, TEXT("Global\\Test"));
```
Is it possible to declare a type at function scope in F#? Let's say I have a function which does something pretty complicated and it is implemented with the help of subfunctions. To make things easier, instead of tuples I would like to use some intermediate structures which are private to the implementation of this function. I don't want the declaration of these structures to leak outside. So I want something like this:

```
let someComplexFun p =
    type SomeRecord = {i:int; x:int; y:int;}
    type SomeOtherRecord = {...}
    let innerFunctionA (x:SomeRecord) = ...
    let innerFunctionB (x:SomeOtherRecord) = ...
    ...
```

I tried it but of course the compiler doesn't let me do this. I looked at the documentation and I can't quickly see anywhere that the types must be declared at the module level. In LISP for example, it seems that it's all entirely legal, e.g.:

```
(defun foo (when)
  (declare (type (member :now :later) when)) ; Type declaration is illustrative and in this case optional.
  (ecase when
    (:now (something))
    (:later (something-else))))
```

So, am I missing something? Is this possible in F# at all?
To verify that this is not allowed according to the specification, take a look at the grammar of F# expressions in the specification: [Section 6: Expressions](http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/spec.html#_Toc270597435). It lists various constructs that can be used in place of `expr` and none of them is a type declaration `type-defn` (described in [Section 8: Type Declarations](http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/spec.html#_Toc270597534)). The (simplified) syntax for function declarations is `let ident args = expr`, so the body has to be an expression (and you cannot declare types inside expressions).
pdo::beginTransaction does it lock table? Can I use this function instead of 'LOCK TABLES' query? Example:

```
pdo::beginTransaction();
SELECT id AS last_id FROM t WHERE...
INSERT INTO t (id,...) VALUES (last_id+1,....)
pdo::commit();
```
The answer to this is: it depends. The most important resource to help you understand exactly what it does, what it doesn't do and how it works is [here](http://www.php.net/manual/en/pdo.transactions.php). For a kick off, it depends whether the underlying driver (MySQL, MSSQL, etc etc) supports transaction functionality at all. If the driver does not support transactions, `pdo::beginTransaction();` will fail and return `FALSE`, and all your queries will be executed immediately. This is not to say that a [`LOCK TABLES`](http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html) query would fail - it depends whether the underlying database engine supports it. In fact, in MySQL at least, `pdo::beginTransaction()` follows the same rules as a [`START TRANSACTION`](http://dev.mysql.com/doc/refman/5.5/en/commit.html) statement. It does not lock the table immediately, it simply ensures that that the queries that are part of the transaction follow the rules of [ACID](http://en.wikipedia.org/wiki/ACID). I won't go into full details of exactly which combinations will work and what they will do because they are covered at length in the documents I linked to, and there is no sense in me writing it all out again.
How is processing done when streaming data? If streams in programming are about reading data in chunks, how then is it possible to process the data in the scenarios where the data cannot be processed bit by bit, but the processing logic needs the full data? I can imagine that in some cases it might be possible to process data bit by bit, but it is not hard to imagine that in other scenarios, the shape of the data or the processing algorithm needs to have the full data before the processing can effectively be applied. How is this scenario taken care of?
> > how then is it possible to process the data in the scenarios where the data cannot be processed bit by bit, but the processing logic needs the full data? > > > It's simple: Use pre-processing and meta data. For example, without every chunk, you don't know how many chunks there are, unless something upstream counted them and told you. You don't have to count them if the meta data tells you how many there are. That may be a trivial example, but formats and even transmission protocols have been devised specifically to solve this exact problem. You can only work with what you have. When people use the word “streaming” they're usually talking about transmission over the network, but it's the same problem when parsing a file from the hard drive. You can load the entire file into memory (sometimes called [slurping](https://blog.appsignal.com/2018/07/10/ruby-magic-slurping-and-streaming-files.html) or [batch processing](https://www.precisely.com/blog/big-data/big-data-101-batch-process-streams)) or you can load a line/chunk at a time (effectively streaming). Which one is appropriate depends on the size of the file, size of the memory, and your parsing/processing needs. If you don't mind looking at a few lines of Ruby code, I recommend referring to the comparison of performance metrics in [this SO answer](https://stackoverflow.com/a/25189286/1493294). It shows that there are some non-linear impacts to consider. Every coder should keep this stuff in mind before assuming there's only one obvious answer here.
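To illustrate the batch-versus-streaming distinction above, here is a minimal Python sketch; the file path and the word-counting task are just placeholders:

```
# Batch / "slurping": the whole file must fit in memory before processing starts.
def word_count_slurp(path):
    with open(path, "r", encoding="utf-8") as f:
        data = f.read()            # entire file loaded at once
    return len(data.split())

# Streaming: process one line at a time; memory use stays roughly constant
# no matter how large the file is.
def word_count_stream(path):
    total = 0
    with open(path, "r", encoding="utf-8") as f:
        for line in f:             # the file object yields lines lazily
            total += len(line.split())
    return total
```

Word counting happens to work line by line; a task that genuinely needs the full data (say, sorting every line) forces you back to the batch approach or to pre-computed metadata, which is exactly the trade-off described above.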
React Native Android crashes on enabling debug mode Shaking the Android device and hitting Debug crashes the app every time, right away. From the Android Studio logcat, it shows **No source URL loaded, have you initialised the instance?**:

```
java.lang.AssertionError: No source URL loaded, have you initialised the instance?
    at com.facebook.infer.annotation.Assertions.assertNotNull(Assertions.java:35)
    at com.facebook.react.modules.debug.SourceCodeModule.getTypedExportedConstants(SourceCodeModule.java:39)
    at com.facebook.fbreact.specs.NativeSourceCodeSpec.getConstants(NativeSourceCodeSpec.java:35)
    at com.facebook.react.bridge.JavaModuleWrapper.getConstants(JavaModuleWrapper.java:129)
    at com.facebook.react.bridge.queue.NativeRunnable.run(Native Method)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:27)
    at android.os.Looper.loop(Looper.java:223)
    at com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run(MessageQueueThreadImpl.java:226)
    at java.lang.Thread.run(Thread.java:923)
```

With the same React Native codebase, turning on debug mode on iOS works fine, but on Android it always crashes when debug mode is turned on. I can't see what's causing it from the error log. Here are the dependencies I have for the React Native app; I'm using redux and redux-devtools-extension for debugging. Am I missing any libraries?

```
"dependencies": {
    "@react-native-async-storage/async-storage": "^1.15.14",
    "@reduxjs/toolkit": "^1.7.0",
    "expo": "~42.0.1",
    "expo-permissions": "12.1.0",
    "expo-splash-screen": "~0.11.2",
    "expo-status-bar": "~1.0.4",
    "expo-updates": "~0.8.1",
    "react": "17.0.2",
    "react-dom": "17.0.2",
    "react-native": "0.64.1",
    "react-native-fast-image": "^8.5.11",
    "react-native-gesture-handler": "~1.10.2",
    "react-native-navigation": "^7.14.0",
    "react-native-reanimated": "~2.1.0",
    "react-native-screens": "3.2.0",
    "react-native-unimodules": "~0.13.3",
    "react-native-web": "0.16.3",
    "react-redux": "^7.2.6",
    "redux-persist": "^6.0.0",
    "tslint": "^6.1.3",
    "tslint-react": "^5.0.0"
},
"devDependencies": {
    "@babel/core": "^7.9.0",
    "@types/react": "17.0.5",
    "@types/react-native": "0.64.5",
    "babel-preset-expo": "~8.3.0",
    "jest-expo": "~41.0.0",
    "redux-devtools-extension": "^2.13.9",
    "typescript": "4.2.4"
},
```
After some more searching around, I found this is a known issue in react-native-reanimated. As their [website](https://docs.swmansion.com/react-native-reanimated/docs/2.2.0/installation/#installing-the-package) points out:

>
> Please note that Reanimated 2 doesn't support remote debugging, only
> Flipper can be used for debugging.
>
>

Another [github issue](https://github.com/software-mansion/react-native-reanimated/issues/1990#issuecomment-831888705) also pointed out this issue:

>
> This is expected, you can't use remote debugging with turbomodules
> (which Reanimated v2 is using). Check out Flipper to debug your app.
>
> <https://docs.swmansion.com/react-native-reanimated/docs/#known-problems-and-limitations>
>
> <https://github.com/software-mansion/react-native-reanimated/issues/1990>
>
>

Removing this library fixed the issue.

1. Remove the react-native-reanimated dependency in package.json
2. Remove related code in android's MainApplication.java
3. yarn install or npm install
4. Go to the ios folder and run `pod install`
5. Go to the android folder and run `./gradlew clean`
6. Rebuild the app. `yarn android` and `yarn ios`

Another alternative is to use Flipper for debugging instead.
SignalR not working with Windows-integrated authentication I have an ASP.NET MVC 4 app (.NET 4.5) and SignalR works fine with forms-based authentication (hosted via IIS/IIS Express). As soon as I change the app to Windows-integrated authentication (`<authentication mode="Windows"/>` in "web.config") it stops working.

>
> *jquery.signalR-2.2.2.min.js:9* WebSocket connection to `ws://localhost:51030/signalr/connect?transport=webSockets&blhablahblah` failed: **Error during WebSocket handshake: Unexpected response code: 403**
>
>

After adding the `[Authorize]` attribute to my hub, the error changes to

>
> WebSocket connection to `ws://localhost:51030/signalr/connect?transport=webSocketsblahblah` failed: HTTP Authentication failed; no valid credentials available
>
>

Other parts of the app are working just fine, windows-auth is enabled on the server and works, etc. etc. **How do I solve this?** And if it is unsolvable for some reason (it could be Chrome not supporting windows auth on websocket connections or something else) - **why doesn't it fall back to a non-websocket protocol?** and how do I force the fallback?

UPDATE: I created a github issue <https://github.com/SignalR/SignalR/issues/3953>. The problem is not that I can't connect. The problem is that I cannot handle the error to fall back to another transport. Neither `.fail()` nor `.error()` is being invoked. Try-catch doesn't help either.
Update from 2020: looks like Chrome now supports NTLM on WS-connections, not an issue any more --- *...10 hours later after asking the question...* **Partially solved** (answering my own question) After playing with it I can confirm, that adding the `[Authorize]` attribute to my hub (or alternatively, adding `GlobalHost.HubPipeline.RequireAuthentication();` to your "Startup.cs") actually does help. **It does fall back now** to an alternative transport, even though the error is still thrown into the browser's console. You can also specify **which** transport it falls back to, by calling: ``` $.connection.hub.start( { transport: ['webSockets', 'longPolling'] }); ``` in case you don't like the default priority (I guess, "hidden iframe" is the default second option). **The reason** The error is caused by Chrome, it does not support NTLM on websocket connections. Funny enough, IE, MS Edge and Firefox do support it ("Chrome is the new IE" huh). There's an open issue in Chromium bugtracker for this here <https://bugs.chromium.org/p/chromium/issues/detail?id=423609> if anyone wants to add any input to Chromium devs.
R ggplot2 set color line by variable without grouping I'm trying to plot three different lines with ggplot2 that show the temporal series of three different variables (max, min and mean temperature). The problem comes when I want to set the color for each line/variable. ggplot command:

```
plot.temp=ggplot(data,aes_string(x="date", y="Tmed",colour="Tmed")) +
  geom_line() +
  geom_line(data=data,aes_string(x="date",y="Tmin",colour="Tmin")) +
  geom_line(data=data,aes_string(x="date",y="Tmax",colour="Tmax"))
```

I don't use group as each row stands for one single time; there are no categories available here. I have tried different options found in other posts, like `scale_color_manual`, but then a `Continuous value supplied to discrete scale` error message appears. You can find the data file at <http://ubuntuone.com/45LqzkMHWYp7I0d47oOJ02>; it can be easily read with `data = read.csv(filename,header=T, sep=",",na.strings="-99.9")`. I'd just like to set the line colors manually but can't find the way. Thanks in advance.
First, you need to convert `date` to a Date object because now it is treated as a factor. If date is treated as a factor then each `date` value is assumed to be a separate group.

```
data$date<-as.Date(data$date)
```

Second, as you are using `aes_string()`, `colour="Tmed"` is also interpreted as colors that depend on the actual `Tmed` values. Use `aes()` and quotes only for the `colour=` variable. Also there is no need to repeat the `data=` argument in each `geom_line()` because you use the same data frame.

```
ggplot(data,aes(x=date, y=Tmed,colour="Tmed")) +
  geom_line() +
  geom_line(aes(y=Tmin,colour="Tmin")) +
  geom_line(aes(y=Tmax,colour="Tmax"))
```

Of course you can also melt your data and then you will need only one `geom_line()` call (but you will still need to change the date column).

```
library(reshape2)
data2<-melt(data[,1:4],id.vars="date")
ggplot(data2,aes(date,value,color=variable))+geom_line()
```

![enter image description here](https://i.stack.imgur.com/GGKnq.png)
What does it mean to hydrate an object? When someone talks about hydrating an object, what does that mean? I see a Java project called Hydrate on the web that transforms data between different representations (RDMS to OOPS to XML). Is this the general meaning of object hydration; to transform data between representations? Could it mean reconstructing an object hierarchy from a stored representation?
### With respect to the more generic term *hydrate* Hydrating an object is taking an object that exists in memory, that doesn't yet contain any domain data ("real" data), and then populating it with domain data (such as from a database, from the network, or from a file system). From Erick Robertson's comments on this answer: > > deserialization == instantiation + hydration > > > If you don't need to worry about blistering performance, and you aren't debugging performance optimizations that are in the internals of a data access API, then you probably don't need to deal with hydration explicitly. You would typically use [deserialization](https://en.wikipedia.org/wiki/Serialization) instead so you can write less code. Some data access APIs don't give you this option, and in those cases you'd also have to explicitly call the hydration step yourself. For a bit more detail on the concept of Hydration, see [Erick Robertson's answer](https://stackoverflow.com/a/20787106/232593) on this same question. ### With respect to [the Java project called hydrate](http://hydrate.sourceforge.net/) You asked about this framework specifically, so I looked into it. As best as I can tell, I don't think this project used the word "hydrate" in a very generic sense. I see its use in the title as an approximate synonym for "serialization". As explained above, this usage isn't entirely accurate: See: <http://en.wikipedia.org/wiki/Serialization> > > translating data structures or object state into a format that can be stored [...] and reconstructed later in the same or another computer environment. > > > I can't find the reason behind their name directly on [the Hydrate FAQ](http://hydrate.sourceforge.net/featurefaq.html), but I got clues to their intention. I think they picked the name "Hydrate" because the purpose of the library is similar to the popular sound-alike [Hibernate framework](http://hibernate.org/), but it was designed with the exact opposite workflow in mind. Most ORMs, Hibernate included, take an in-memory object-model oriented approach, with the database taking second consideration. The Hydrate library instead takes a database-schema oriented approach, preserving your relational data structures and letting your program work on top of them more cleanly. Metaphorically speaking, still with respect to this library's name: *Hydrate* is like "making something ready to use" (like re-hydrating [Dried Foods](https://en.wikipedia.org/wiki/List_of_dried_foods)). It is a metaphorical opposite of *Hibernate*, which is more like "putting something away for the winter" (like [Animal Hibernation](https://en.wikipedia.org/wiki/Hibernation)). The decision to name the library Hydrate, as far as I can tell, was not concerned with the generic computer programming term "hydrate". When using the generic computer programming term "hydrate", performance optimizations are usually the motivation (or debugging existing optimizations). Even if the library supports granular control over when and how objects are populated with data, the timing and performance don't seem to be the primary motivation for the name or the library's functionality. The library seems more concerned with enabling end-to-end mapping and schema-preservation.
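To make the instantiation-then-hydration distinction above concrete, here is a small illustrative Python sketch; the `User` class and its fields are invented for the example:

```
import json

class User:
    def __init__(self):
        # Instantiation: the object exists in memory but holds no domain data yet.
        self.name = None
        self.email = None

    def hydrate(self, record):
        # Hydration: populate the existing instance with "real" data,
        # e.g. a row fetched from a database or a parsed JSON document.
        self.name = record["name"]
        self.email = record["email"]
        return self

# Deserialization bundles both steps: parse the raw bytes,
# then instantiate and hydrate in one go.
raw = '{"name": "Ada", "email": "ada@example.com"}'
user = User().hydrate(json.loads(raw))
print(user.name, user.email)
```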
Sqlx Get with prepared statements I am trying to fetch some data from a Postgres table using prepared statements. If I try with database.Get() everything is returned. Table:

```
create table accounts
(
    id bigserial not null
        constraint accounts_pkey
            primary key,
    identificator text not null,
    password text not null,
    salt text not null,
    type smallint not null,
    level smallint not null,
    created_at timestamp not null,
    updated timestamp not null,
    expiry_date timestamp,
    qr_key text
);
```

Account struct:

```
type Account struct {
    ID            string         `db:"id"`
    Identificator string         `db:"identificator"`
    Password      string         `db:"password"`
    Salt          string         `db:"salt"`
    Type          int            `db:"type"`
    Level         int            `db:"level"`
    ExpiryDate    time.Time      `db:"expiry_date"`
    CreatedAt     time.Time      `db:"created_at"`
    UpdateAt      time.Time      `db:"updated_at"`
    QrKey         sql.NullString `db:"qr_key"`
}
```

BTW I tried using ? instead of $1 & $2

```
stmt, err := database.Preparex(`SELECT * FROM accounts where identificator = $1 and type = $2`)
if err != nil {
    panic(err)
}

accounts := []account.Account{}
err = stmt.Get(&accounts, "asd", 123)
if err != nil {
    panic(err)
}
```

The error I get is

>
> "errorMessage": "scannable dest type slice with \u003e1 columns (10) in result",
>
>

In the table there are no records. I tried to remove all fields except the ID from the Account struct, however it does not work.
The documentation for sqlx describes [Get and Select](http://jmoiron.github.io/sqlx/) as:

>
> `Get` and `Select` use rows.Scan on scannable types and rows.StructScan on
> non-scannable types. They are roughly analagous to QueryRow and Query,
> where Get is useful for fetching a single result and scanning it, and
> Select is useful for fetching a slice of results:
>
>

For fetching a single record use `Get`.

```
stmt, err := database.Preparex(`SELECT * FROM accounts where identificator = $1 and type = $2`)

var account Account
err = stmt.Get(&account, "asd", 123)
```

If your query returns more than a single record, use `Select` with a statement such as:

```
stmt, err := database.Preparex(`SELECT * FROM accounts where identificator = $1 and type = $2`)

var accounts []Account
err = stmt.Select(&accounts, "asd", 123)
```

In your case, if you use `stmt.Select` instead of `stmt.Get`, it will work.
Delay program using int 21h with ah = 2Ch I'm working on a project for school in assembly 8086 (using DOSBox), and I was trying to delay my program for 0.5 second. I tried to create a loop which compares the current time to the initial time, using int 21h, function 2Ch with the hundredths value in `DL`, but it seems to be too slow...

```
    mov ah, 2Ch
    int 21h
    mov al, dl      ;hundredths
    mov bx, 0

wait_loop:

one_hun:
    int 21h
    cmp al, dl
    je one_hun

    mov al, dl
    inc bx
    cmp bx, 50
    jne wait_loop
```
I would rather use BIOS [`int 1Ah`](http://stanislavs.org/helppc/int_1a.html), which is used as the source for that `int 21h` service anyway, and it may be a bit simpler to use. But beware: by default (unless you plan to reprogram the timer chip) this is ticking 18.2 times per second, so to wait for half a second you can wait either 9 (cca. 440 to 494.51 ms) or 10 (cca. 495 to 549.45 ms) ticks; the precision will be limited to the default +-~50 ms. If you do reprogram the timer chip, you may get a bit higher precision, but don't expect something like [ten-]thousandths of a second to work perfectly reliably under DOS (probably emulated under a modern OS).

---

About your current code: the hundredths in `dl` are not incrementing by one, so you are counting in `bx` the number of those 18.2 Hz ticks, not the hundredths (i.e. your code waits for ~2.7 s, right?). Also don't do `je` in similar code, always make the condition `<=` or `>=`, because if for whatever reason (the OS didn't run your code for a while) you miss that exact `50` hundredths difference, you will create a loop which will run almost infinitely (until it hits that exact 50 by accident after one of the overflows). To do it your way, you have to calculate the delta:

```
    mov ah, 2Ch
    int 21h
    mov al, dl          ; hundredths

wait_loop:
    nop                 ; burn the CPU a bit less
    int 21h
    sub dl,al           ; calculate delta of hundredths (-99..+99)
    jnc delta_positive
    add dl,100          ; adjust the delta to be positive 1-99
delta_positive:
    cmp dl,50
    jb wait_loop
```
How to convert text to Image on iOS? I am trying to write a function to convert a text field's text to an image ([example](https://stackoverflow.com/questions/4488549/convert-text-to-image-in-php)). I have tried to search for an example, but most of the samples overlay text on an existing image. Is it possible to give me some examples or hints on how to do that?
Several approaches are possible.

1. If you have an existing `UITextField`, `UITextView` or `UILabel` that you just want to render as an image, you can employ the traditional snapshot approaches, such as:

```
- (UIImage *)imageForView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);

    if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
        [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES]; // if we have efficient iOS 7 method, use it ...
    else
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];        // ... otherwise, fall back to tried and true methods

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
```

2. If you wanted a generic "create image from text" routine, in iOS 7, it would look like:

```
- (UIImage *)imageFromString:(NSString *)string attributes:(NSDictionary *)attributes size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [string drawInRect:CGRectMake(0, 0, size.width, size.height) withAttributes:attributes];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
```

The above will create an image whose size will vary based upon the text. Clearly, if you just want a fixed size image, then use constants `frame`, rather than dynamically building it. Anyway, you could then use the above like so:

```
NSString *string = @"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.";
NSDictionary *attributes = @{NSFontAttributeName : [UIFont systemFontOfSize:20],
                             NSForegroundColorAttributeName : [UIColor blueColor],
                             NSBackgroundColorAttributeName : [UIColor clearColor]};
UIImage *image = [self imageFromString:string attributes:attributes size:self.imageView.bounds.size];
```

3. If you need to support earlier iOS versions, you could use this technique:

```
- (UIImage *)imageFromString:(NSString *)string font:(UIFont *)font size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [string drawInRect:CGRectMake(0, 0, size.width, size.height) withFont:font lineBreakMode: NSLineBreakByWordWrapping];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
```

There are many, many permutations of each of these. It just depends upon what you are trying to achieve.

---

Another approach is to simply have both `UIImageView` and `UILabel`/`UITextView` objects in the view, and if you have an image from the server, set the image of the `UIImageView`, and text, set the `text` of the `UILabel`/`UITextView`.
How to Multi-Head learning I have about 5 models that work pretty well trained individually, but I want to fuse them together in order to have one big model. I'm looking into it because one big model is easier to update (in production) than many small models. This is an image of what I want to achieve.

[![enter image description here](https://i.stack.imgur.com/hyQdX.png)](https://i.stack.imgur.com/hyQdX.png)

My questions are: is it OK to do it like this? And, having one dataset per head model, how am I supposed to train the whole model?
>
> my questions are: is it OK to do it like this?
>
>

Sure you can do that. This approach is called [multi-task learning](https://ruder.io/multi-task/). Depending on your datasets and what you are trying to do, it may even increase the performance. Microsoft used a [multi-task model](https://github.com/namisan/mt-dnn) to achieve some good results for the NLP Glue benchmark, but they also noted that you can increase the performance further by finetuning the joint model for each individual task.

>
> having one dataset per head model, how am I supposed to train the whole model?
>
>

All you need is pytorch [ModuleList](https://pytorch.org/docs/stable/nn.html#modulelist):

```
#please note this is just pseudocode and I'm not well versed with computer vision
#therefore you need to check if resnet50 import is correct and look
#for the imports of the task specific stuff
from torch import nn
from torchvision.models import resnet50

class MultiTaskModel(nn.Module):
    def __init__(self):
        super().__init__()   #initialize nn.Module before registering submodules

        #shared part
        self.resnet50 = resnet50()

        #task specific stuff
        self.tasks = nn.ModuleList()
        self.tasks.add_module('depth', Depth())
        self.tasks.add_module('denseflow', Denseflow())
        #...

    def forward(self, tasktag, ...):
        #shared part
        resnet_output = self.resnet50(...)

        #task specific parts
        if tasktag == 'depth':
            return self.tasks.depth(resnet_output)
        elif tasktag == 'denseflow':
            return self.tasks.denseflow(resnet_output)
        #...
```
Keep div displayed when input inside of it has focus ``` <div> <input type="text"/> </div> ``` I have a `div` element with `input` inside of it. Now, this `div` has `display: none` property on it. Now, when I display the `div` (by hovering at another element on my site, not important here), I can now put some text in the `input` element. But when I have focus on that `input`, and I accidentally move my mouse out of the `div` element, this `div` element disappears. Is there any chance that I can keep my `div` element displayed when the `input` has focus no matter where the mouse is? Edit: After editing Sergio's example, here is the problem I am facing: <http://jsfiddle.net/2thMQ/1/> (try typing something in that input field and then move your mouse away)
Use `input:focus { display:block; }` **[DEMO](http://jsfiddle.net/2thMQ/)**

Only CSS

```
div:hover input { display:block; }
input:focus { display:block; }
input { display:none; }
```

If you want to style the parent `div` it will be possible in CSS4. So far here is a javascript solution for this:

```
var inp = document.getElementsByTagName('input');
for (var i = 0; i < inp.length; i++) {
  inp[i].onfocus = function(){ this.parentNode.style.display='block';}
  inp[i].onblur = function(){ this.parentNode.style.display='none';}
};
```

**[demo](http://jsfiddle.net/j4j5K/)**
Shopify HMAC parameter verification failing in Python I'm having some trouble verifying the HMAC parameter coming from Shopify. The code I'm using per the [Shopify documentation](https://help.shopify.com/api/getting-started/authentication/oauth#verification) is returning an incorrect result. Here's my annotated code:

```
import urllib
import hmac
import hashlib

qs = "hmac=96d0a58213b6aa5ca5ef6295023a90694cf21655cf301975978a9aa30e2d3e48&locale=en&protocol=https%3A%2F%2F&shop=myshopname.myshopify.com&timestamp=1520883022"
```

**Parse the querystring**

```
params = urllib.parse.parse_qs(qs)
```

**Extract the hmac value**

```
value = params['hmac'][0]
```

**Remove parameters from the querystring per documentation**

```
del params['hmac']
del params['signature']
```

**Recombine the parameters**

```
new_qs = urllib.parse.urlencode(params)
```

**Calculate the digest**

```
h = hmac.new(SECRET.encode("utf8"), msg=new_qs.encode("utf8"), digestmod=hashlib.sha256)
```

**Returns `False`!**

```
hmac.compare_digest(h.hexdigest(), value)
```

That last step should, ostensibly, return true. Every step followed here is outlined as commented in the Shopify docs.
At some point, recently, Shopify started including the `protocol` parameter in the querystring payload. This itself wouldn't be a problem, *except* for the fact that Shopify doesn't document that `:` and `/` are not to be URL-encoded when checking the signature. This is unexpected, given that they themselves *do* URL-encode these characters in the query string that is provided. To fix the issue, provide the `safe` parameter to `urllib.parse.urlencode` with the value `:/` (fitting, right?). The full working code looks like this:

```
params = urllib.parse.parse_qsl(qs)
cleaned_params = []
hmac_value = dict(params)['hmac']

# Sort parameters
for (k, v) in sorted(params):
    if k in ['hmac', 'signature']:
        continue
    cleaned_params.append((k, v))

new_qs = urllib.parse.urlencode(cleaned_params, safe=":/")
secret = SECRET.encode("utf8")
h = hmac.new(secret, msg=new_qs.encode("utf8"), digestmod=hashlib.sha256)

# Compare digests
hmac.compare_digest(h.hexdigest(), hmac_value)
```

Hope this is helpful for others running into this issue!
make directory if it doesn't exist with `mogrify -path` command I convert images in a directory named `foo` to `bar` like this:

```
$ mkdir bar
$ mogrify -path bar -negate foo/*.png
```

Is there an option in ImageMagick that creates the folder given to `-path`, if it does not exist?
To take arguments from the `mogrify` command, and do other actions based on their content, you can "override" the program with a function of your own, then pass the original arguments on to the function:

```
mymog(){
  [[ $1 == "-path" ]] && [[ ! -d $2 ]] && mkdir "$2"
  mogrify "$@"
}
```

To use it, just put `mymog` wherever you would use `mogrify`:

```
mymog -path bar -negate foo/*.png
```

The function tests to see if the first argument is `-path`. If so, it goes on to test if the second argument is *not* an existing directory. If it is not, then it creates that directory. (The `[[ ]] &&` is just another way to write if-then statements.) *In either case* it goes on to pass all the arguments to the `mogrify` command. The only warning is that you have to put the `-path` argument first -- you can't stick it elsewhere in the line. You should be able to use this wherever you would normally use `mogrify` and it will just ignore the `mkdir` part if there is no `-path` definition. To have this available in your daily usage, add those 4 lines to your `.bash_profile` or `.bashrc` file, depending on your operating system.

**EDIT #2:** New answer with no dependence on position of `-path`

```
function mogmod(){
  args=("$@")
  for ((i=0; i < $#; i++)) {
    if [[ ${args[$i]} = "-path" ]]
    then
      mypath=${args[((i+1))]}
      [[ ! -d "$mypath" ]] && mkdir "$mypath"
    fi
  }
  mogrify "$@"
}
```
In Go, do non-capturing closures harm performance? For instance, [github.com/yhat/scrape](https://godoc.org/github.com/yhat/scrape#Find) suggests using a closure like this:

```
func someFunc() {
    ...
    matcher := func(n *html.Node) bool { return n.DataAtom == atom.Body }
    body, ok := scrape.Find(root, matcher)
    ...
}
```

Since `matcher` doesn’t actually capture any local variables, this could equivalently be written as:

```
func someFunc() {
    ...
    body, ok := scrape.Find(root, matcher)
    ...
}

func matcher(n *html.Node) bool { return n.DataAtom == atom.Body }
```

The first form looks better, because the matcher function is quite specific to that place in the code. But does it perform worse at runtime (assuming `someFunc` may be called often)? I guess there must be some overhead to creating a closure, but this kind of closure could be optimized into a regular function by the compiler? (Obviously the language spec doesn’t require this; I’m interested in what gc actually does.)
It seems like there is no difference. We can check in the generated machine code. Here is a toy program: ``` package main import "fmt" func topLevelFunction(x int) int { return x + 4 } func useFunction(fn func(int) int) { fmt.Println(fn(10)) } func invoke() { innerFunction := func(x int) int { return x + 8 } useFunction(topLevelFunction) useFunction(innerFunction) } func main() { invoke() } ``` And here is its disassembly: ``` $ go version go version go1.8.5 linux/amd64 $ go tool objdump -s 'main\.(invoke|topLevel)' bin/toy TEXT main.topLevelFunction(SB) /home/vasiliy/cur/work/learn-go/src/my/toy/toy.go toy.go:6 0x47b7a0 488b442408 MOVQ 0x8(SP), AX toy.go:6 0x47b7a5 4883c004 ADDQ $0x4, AX toy.go:6 0x47b7a9 4889442410 MOVQ AX, 0x10(SP) toy.go:6 0x47b7ae c3 RET TEXT main.invoke(SB) /home/vasiliy/cur/work/learn-go/src/my/toy/toy.go toy.go:13 0x47b870 64488b0c25f8ffffff FS MOVQ FS:0xfffffff8, CX toy.go:13 0x47b879 483b6110 CMPQ 0x10(CX), SP toy.go:13 0x47b87d 7638 JBE 0x47b8b7 toy.go:13 0x47b87f 4883ec10 SUBQ $0x10, SP toy.go:13 0x47b883 48896c2408 MOVQ BP, 0x8(SP) toy.go:13 0x47b888 488d6c2408 LEAQ 0x8(SP), BP toy.go:17 0x47b88d 488d052cfb0200 LEAQ 0x2fb2c(IP), AX toy.go:17 0x47b894 48890424 MOVQ AX, 0(SP) toy.go:17 0x47b898 e813ffffff CALL main.useFunction(SB) toy.go:14 0x47b89d 488d0514fb0200 LEAQ 0x2fb14(IP), AX toy.go:18 0x47b8a4 48890424 MOVQ AX, 0(SP) toy.go:18 0x47b8a8 e803ffffff CALL main.useFunction(SB) toy.go:19 0x47b8ad 488b6c2408 MOVQ 0x8(SP), BP toy.go:19 0x47b8b2 4883c410 ADDQ $0x10, SP toy.go:19 0x47b8b6 c3 RET toy.go:13 0x47b8b7 e874f7fcff CALL runtime.morestack_noctxt(SB) toy.go:13 0x47b8bc ebb2 JMP main.invoke(SB) TEXT main.invoke.func1(SB) /home/vasiliy/cur/work/learn-go/src/my/toy/toy.go toy.go:15 0x47b8f0 488b442408 MOVQ 0x8(SP), AX toy.go:15 0x47b8f5 4883c008 ADDQ $0x8, AX toy.go:15 0x47b8f9 4889442410 MOVQ AX, 0x10(SP) toy.go:15 0x47b8fe c3 RET ``` As we can see, at least in this simple case, there is no structural difference in how `topLevelFunction` and `innerFunction` (`invoke.func1`), and their passing to `useFunction`, are translated to machine code. (It is instructive to compare this to the case where `innerFunction` does capture a local variable; and to the case where, moreover, `innerFunction` is passed via a global variable rather than a function argument — but these are left as an exercise to the reader.)
Avoid newline when using Crypto++ Base64 encoder I'm trying to generate the SHA512 of a base64 encoded string with Crypto++. My sample input is: `test123` Base64: `dGVzdDEyMw==` SHA512 of B64 (expected):

```
f78fa0aa79abd53b8181c5d21bdeb882bf45cd462a6e6e1b5043417de1800626
ed2a51b1a56626e9b9558da66a2f609d31db76bd88e80afbb7b03cda518b207d
```

SHA512 of B64 (not expected): `9f012fff26c89f2650f7446a37e80ba6466d69ffc77bb9ffc8c09ab779b24a23bb6a2f3c28512668ebca8628303ab5a31067d930cd1af60c745a2c34e5b4b1d2`

SHA512 calculation:

```
byte *digest = new byte[CryptoPP::SHA512::DIGESTSIZE];
std::string encoded;
std::string test("test123");

CryptoPP::StringSource ss((byte*)test.data(), test.size(), true,
    new CryptoPP::Base64Encoder(
        new CryptoPP::StringSink(encoded))); // StringSource

// Calculate hash
CryptoPP::SHA512().CalculateDigest(digest, (byte*)encoded.data(), encoded.size());
```

If I leave out Base64 and calculate the SHA512 directly, I get the correct hash. Therefore the calculation can't be completely wrong. But why doesn't it work with Base64?
> SHA512 of B64 (correct):
>
> ```
> f78fa0aa79abd53b8181c5d21bdeb882bf45cd462a6e6e1b5043417de1800626
> ed2a51b1a56626e9b9558da66a2f609d31db76bd88e80afbb7b03cda518b207d
> ```
>
>

It sounds like whatever is calculating this hash is producing unexpected results. It's probably due to a newline, missing padding, etc. I can reproduce it with the following. It appears to be a newline issue.

```
$ echo 'dGVzdDEyMw' | sha512sum
9c00af94b3dc300fab0fd1fdad7e9eeb20bb0bdff6e6c75d9072e241976b0cc8
56f2a1d355c35f29c3d354895565f971721f58cbb20f0608f57a882b0afb412c

$ echo -n 'dGVzdDEyMw' | sha512sum
c20144e3136f57d5aae2374aa48759911364bb44167fe642cc8b4da140396584
04ce9e2f3dfc9bd69d69cfbb449384e6ea5377c39a07fdb2c2920d78a2a56a80

$ echo 'dGVzdDEyMw==' | sha512sum
9f012fff26c89f2650f7446a37e80ba6466d69ffc77bb9ffc8c09ab779b24a23
bb6a2f3c28512668ebca8628303ab5a31067d930cd1af60c745a2c34e5b4b1d2

$ echo -n 'dGVzdDEyMw==' | sha512sum
f78fa0aa79abd53b8181c5d21bdeb882bf45cd462a6e6e1b5043417de1800626
ed2a51b1a56626e9b9558da66a2f609d31db76bd88e80afbb7b03cda518b207d
```

Here is the constructor for [`Base64Encoder`](https://www.cryptopp.com/wiki/Base64Encoder). You can find the docs at either [the manual](https://www.cryptopp.com/docs/ref/) or [the wiki](https://www.cryptopp.com/wiki/Base64Encoder).

```
Base64Encoder(BufferedTransformation *attachment = NULL, bool insertLineBreaks = true, int maxLineLength = 72)
```

You should use `insertLineBreaks = false`. Maybe something like:

```
StringSource ss((byte*)test.data(), test.size(), true,
    new Base64Encoder(new StringSink(encoded), false /* Newline */));
```

Since you are using a [Pipeline](https://www.cryptopp.com/wiki/Pipelining), you can do it in one shot with the following. I unrolled all the `new`'s to help with visualization as data flows from the source to the sink.

```
SHA512 hash;
StringSource ss(test /* std::string */, true,
    new Base64Encoder(
        new HashFilter(hash,
            new StringSink(encoded)
        ),
    false /* Newline */)
);
```
Angular 2: How to apply a callback when I leave a route Here is the example, I have some routes defined in `AppComponent`:

```
@RouteConfig([
    {path:'/', name: 'Index', component: IndexComponent, useAsDefault: true},
    {path:'/:id/...', name: 'User', component: UserComponent},
    {path:'/plan', name: 'Plan', component: PlanComponent},
    {path:'/foo', name: 'Foo', component: FooComponent}
]}
```

and in `UserComponent` I have another route defined like this:

```
@RouteConfig([
    {path:'/info', name: 'UserInfo', component: UserInfoComponent, useAsDefault: true},
    {path:'/order', name: 'UserOrder', component: UserOrderComponent},
    {path:'/detail', name: 'UserDetail', component: UserDetailComponent}
]}
```

There are 2 separate navigations defined in both `AppComponent` and `UserComponent`:

```
//AppComponent:
<a [routerLink]="['User']">User</a>
<a [routerLink]="['Plan']">Plan</a>
<a [routerLink]="['Foo']">Foo</a>

--------------------------------------------

//UserComponent:
<a [routerLink]="['UserInfo']">User Info</a>
<a [routerLink]="['UserOrder']">User Order</a>
<a [routerLink]="['UserDetail']">User Detail</a>
```

The behavior I expect is: when the user clicks on these navigation links, a modal will come up and ask the user to confirm saving before leaving this route. If yes, the user's changes are saved automatically. Since it seems that there is no way to get these changes in `UserComponent`, I want to put the logic into the child components (`UserInfoComponent`, `UserOrderComponent` and `UserDetailComponent`). So is there any way I can trigger a callback, inside the child component, when the user leaves the current route? If not, is there any alternative way to achieve that? Thanks
Router already comes with [lifecycle hooks](https://angular.io/docs/ts/latest/guide/router.html#!#lifecycle-hooks), [CanDeactivate Interface](https://angular.io/docs/ts/latest/api/router/CanDeactivate-interface.html) The hook to allow/deny navigating away is `routerCanDeactivate(nextInstruction, prevInstruction) { ... }`. It will stop the navigation if you return false from this function. Example [With Plunker](http://plnkr.co/edit/IY8lLF3i5ujtRpY14YZk?p=preview): (in the plunker, route 2 will never allow navigating away. You cannot go back to route 1)

```
@Component({
  selector:'route-2',
  template:'route 2 template'
})
class SecondRoute{
  routerCanDeactivate(){
    return false; // false stops navigation, true continue navigation
  }
}
```
scala: Cannot check match for unreachability I'm migrating an app from Play 2.0.4 to Play 2.1. But the following code raises this warning:

```
def toConditionOperator(value: String): ConditionOperator.Value = {
  if (value==null) {
    ConditionOperator.Unknown
  } else {
    value.toLowerCase match {
      case "equal" | "=" | ":" => ConditionOperator.Equal
      case "notequal" | "!=" | "!:" | "<>" => ConditionOperator.NotEqual
      case "greaterorequal" | ">=" => ConditionOperator.GreaterOrEqual
      case "greater" | ">" => ConditionOperator.Greater
      case "lessorequal" | "<=" => ConditionOperator.LessOrEqual
      case "less" | "<" => ConditionOperator.Less
      case "between" => ConditionOperator.Between
      case "in" => ConditionOperator.In
      case "startswith" => ConditionOperator.StartsWith
      case "endswith" => ConditionOperator.EndsWith
      case "contains" | "$" => ConditionOperator.Contains
      case "missing" | "" => ConditionOperator.Missing
      case "unknown" | _ => ConditionOperator.Unknown
    }
  }
}

[info] Compiling 98 Scala sources and 2 Java sources to /home/sas/tmp/ideas-ba/webservice/target/scala-2.10/classes...
[warn] /home/sas/tmp/ideas-ba/webservice/app/utils/query/ConditionParser.scala:203: Cannot check match for unreachability.
[warn] (The analysis required more space than allowed. Please try with scalac -Dscalac.patmat.analysisBudget=512 or -Dscalac.patmat.analysisBudget=off.)
[warn] value.toLowerCase match {
[warn] ^
```

In Play 2.0.4 (with Scala 2.9.1) it worked OK; with this version (Scala 2.10) it yields this warning. Any idea what could be wrong?
Maybe [this?](https://github.com/scala/scala/pull/1134) ~~What happens if you add~~

```
scalacOptions ++= Seq("-Dscalac.patmat.analysisBudget=1024")
```

to your `project/Build.scala`?

**[UPDATE / CORRECTION]**

I was wrong about `scalacOptions` - `-D` options need to be passed as JVM arguments, not arguments to `scalac`. Since `sbt`/`play` respect the `JAVA_OPTS` environment variable, maybe you could try running `play` or `sbt` like this?

```
JAVA_OPTS="-Dscalac.patmat.analysisBudget=off" sbt

# Or

JAVA_OPTS="-Dscalac.patmat.analysisBudget=off" play
```

That's assuming you are on a Unix-y OS.
Don't know why I am getting a stack overflow I don't understand why I am getting a stack overflow immediately when I enter the main function. I am supposed to read from a text file and do some processing. Can someone explain to me the reason and suggest how to solve it? ``` #include <iostream> #include <ctime> #include <cstdlib> #include <fstream> #include <iomanip> using namespace std; const int MAX=100; enum countrytype{S,F}; struct dob { int day; int month; int year; }; struct Local { char country[MAX]; char gender[MAX]; char name[MAX]; dob birthday; int noofmod; char mod[MAX][MAX]; int mark[MAX]; }; struct Foreign { char country[MAX]; char gender[MAX]; char name[MAX]; dob birthday; int noofmod; char mod[MAX][MAX]; int mark[MAX]; }; union Student { Local localstudent; Foreign foreignstudent; }; struct UOWstudent { countrytype ct; Student st; }; void readfile(ifstream &read,UOWstudent noofstudent[MAX]); int main() { UOWstudent noofstudent[MAX]; ifstream read; readfile(read,noofstudent); cout<<endl <<noofstudent[0].st.foreignstudent.country <<endl <<noofstudent[0].st.foreignstudent.gender <<endl <<noofstudent[0].st.foreignstudent.name; system("PAUSE"); } void readfile(ifstream &read, UOWstudent noofstudent[MAX]) { int i=0; char country; char filename[MAX]; cin>>filename; read.open(filename); read>>country; /*if (country =='F') { read.getline(noofstudent[i].st.foreignstudent.country,MAX); read>>noofstudent[i].st.foreignstudent.gender; read.getline(noofstudent[i].st.foreignstudent.name,MAX); } else read.getline(noofstudent[i].st.foreignstudent.country,MAX);*/ } ``` This is my text file ``` F South Korea Male Psy Park Jae Sang 31 - 12 -1977 3 CSCI114 55 CSCI103 44 GangNam ```
Simply, your code is allocating all of its storage on the stack, and you are allocating more than the allowed limit. Looking at why you are surpassing the limit is probably more useful. The first line of `main()` is allocating an array of 100 (MAX = 100) students on the stack:

```
UOWstudent noofstudent[MAX];
```

How big is a `UOWstudent`? You can figure that out by looking at each field:

```
struct UOWstudent {
    countrytype ct;  // enum. let's assume 4 bytes. (32-bit executable)
    Student st;      // ???
};
```

How big is a student?

```
union Student {
    Local localstudent;
    Foreign foreignstudent;
};
```

It's the size of a Local or a Foreign, so let's just look at one. We need to make another assumption about the size of char. Let's assume `1 byte` (8-bit characters):

```
struct Local {
    char country[MAX];   // 100 bytes
    char gender[MAX];    // 100 bytes
    char name[MAX];      // 100 bytes
    dob birthday;        // 3 ints or 12 bytes (32-bit assumption again)
    int noofmod;         // 4 bytes
    char mod[MAX][MAX];  // 10,000 bytes
    int mark[MAX];       // 400 bytes
};                       // total: 10,716 bytes
```

So that very first line of main() tries to allocate (10,716 + 4) x 100 = 1,072,000 bytes on the stack. And I made the most conservative assumptions about the size of char and int for your compiler settings, they may quite possibly be higher. If the stack limit is indeed one megabyte (1,048,576 bytes), then this initial allocation goes over the limit. [You can use C's sizeof operator to get information about the actual size of your types.](http://en.wikipedia.org/wiki/Sizeof) [Refer to this stackoverflow answer, discussing allocating arrays on the heap, instead of the stack, which is a good step towards solving your problem.](https://stackoverflow.com/questions/1598397/array-of-objects-on-stack-and-heap) (UOWstudent == University of Waterloo student?)
Angularjs | how to get an attribute value from element in which controller is defined I'm still fighting with simple things in Angular. I have a jQuery and Backbone.js background, so please do not yell at me; I am trying hard to understand the differences. I have HTML in which Rails provides the project ID as data-project-id:

```
<div data-ng-controller="ProjectCtrl as ctrl" data-project-id="1" id="project_configuration">
```

Is there any chance to get access to this attribute? I need it for my API calls...
To access an element's attributes from a controller, inject `$attrs` into your controller function:

HTML

```
<div test="hello world" ng-controller="ctrl">
</div>
```

Script

```
app.controller('ctrl', function($attrs) {
    alert($attrs.test); // alerts 'hello world'
});
```

In your example, if you want to get data-project-id:

```
$attrs.projectId
```

Or if you want to get id:

```
$attrs.id
```

Demo:

```
var app = angular.module('app',[]);
app.controller('ctrl', function($attrs) {
    alert($attrs.test);
});
```

```
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<div ng-app="app" ng-controller="ctrl" test="hello world">
</div>
```
Perl reading huge excel file I have a huge xlsx file (about 127 MB) and want to read it using the `Spreadsheet::XLSX` module, but I am getting "**Out of Memory" errors on a 2GB RAM machine**. (Note the script works fine with smaller Excel 2007 files.) Is there any way to read the Excel file line by line without hitting the memory limit? Searching Google I came across <http://discuss.joelonsoftware.com/default.asp?joel.3.160328.14> but I am not familiar with how to store the spreadsheet in a scalar. Can someone give me an example of reading Excel 2007 files as a scalar and printing cell values? Below is the current script I am running on smaller spreadsheets.

```
#!/usr/bin/perl
use Excel::Writer::XLSX;
use Spreadsheet::XLSX;

my $workbook = Excel::Writer::XLSX->new('Book1.xlsx');
my $worksheet = $workbook->add_worksheet();
# use strict;
my $excel = Spreadsheet::XLSX -> new ('Book2.xlsx');
my $date_format = $workbook->add_format();
$date_format->set_num_format('dd/mm/yy hh:mm');

# Columns of interest
@columns=(0,1,2,5,9,10,12,13,31);
@reportlist=("string1","String2","String3");
@actuallist=("ModifiedString1","ModifiedString2","ModifiedString3");
$max_list=$#reportlist;

foreach my $sheet (@{$excel -> {Worksheet}}) {
    printf("Sheet: %s\n", $sheet->{Name});
    $sheet -> {MaxRow} ||= $sheet -> {MinRow};
    foreach my $row ($sheet -> {MinRow} .. $sheet -> {MaxRow}) {
        $sheet -> {MaxCol} ||= $sheet -> {MinCol};
        for ($c=0;$c<=$#columns;$c++){
            $col=$columns[$c];
            my $cell = $sheet -> {Cells} [$row] [$col];
            if($col==0){
                $cell->{Val}=~ s/\ GMT\+11\:00//g;
                $worksheet->write($row,$c,$cell->{Val},$date_format);
            }
            if ($cell) {
                $worksheet->write($row,$c,$cell -> {Val});
                for($z=0;$z<=$#reportlist;$z++){    # was $#reportisplist, which is undefined
                    if(($cell->{Val})=~ m/$reportlist[$z]/i){
                        $worksheet->write($row,$c,$actuallist[$z]);
                    }
                }
            }
        }
    }
}
$workbook->close();
```
I'm working on a new module for fast and memory efficient reading of Excel xlsx files with Perl. It isn't on CPAN yet (it needs a good bit more work) but you can get it on [GitHub](https://github.com/jmcnamara/excel-reader-xlsx). Here is a example of how to use it: ``` use strict; use warnings; use Excel::Reader::XLSX; my $reader = Excel::Reader::XLSX->new(); my $workbook = $reader->read_file( 'Book1.xlsx' ); if ( !defined $workbook ) { die $reader->error(), "\n"; } for my $worksheet ( $workbook->worksheets() ) { my $sheetname = $worksheet->name(); print "Sheet = $sheetname\n"; while ( my $row = $worksheet->next_row() ) { while ( my $cell = $row->next_cell() ) { my $row = $cell->row(); my $col = $cell->col(); my $value = $cell->value(); print " Cell ($row, $col) = $value\n"; } } } __END__ ``` **Update**: This module never made it to CPAN quality. Try [Spreadsheet::ParseXLSX](https://metacpan.org/pod/Spreadsheet::ParseXLSX) instead.
Eclipse The deployment descriptor of the module 'xxx.war' cannot be loaded or found There's been a number of similar questions raised on this error on Eclipse: > > The deployment descriptor of the module 'xxx.war' cannot be loaded or > found. > > > This is a very generic error, and after scouring the web I have multiple different causes and solutions. I'm going to attempt to list them all here, so others won't have to go through the same thing I did. If anyone experienced other causes and solutions, please list them here too. =) (Moderators, please wait for me to put the answer up before judging this question)
**Cause: Moved folders around**

Solution: "Right click your dynamic web project -> Properties -> Deployment Assembly. In Web Deployment Assembly, change the package structure to reflect your change. That should work." ([Eclipse deployment descriptor not found](https://stackoverflow.com/questions/4981167/eclipse-deployment-descriptor-not-found))

**Cause: Upgraded RAD/Eclipse**

Solution A: Add and remove the module in question in application.xml (<http://www-01.ibm.com/support/docview.wss?uid=swg21297984>)

Solution B: Go into the file explorer and edit the .settings/org.eclipse.wst.common.component file and make sure the dependent module is listed (<https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014042762>)

**Cause: The dependent project has errors**

Solution: If the dependent project has errors, it will not be built into a WAR, hence cannot be referenced.

**Cause: Missing web.xml**

Solution: This is a tricky one. A project without web.xml won't show any errors (at least for me). But without a WEB-INF/web.xml file, it can't be regarded as a proper war file (<http://en.wikipedia.org/wiki/WAR_file_format_(Sun)>). Adding web.xml solved the issue (a minimal example follows below).

**Cause: Wrong structure**

Solution: make sure the war file is included in the ear file. (<https://community.jboss.org/thread/162761?start=0&tstart=0&_sscc=t>)

**Cause: Incorrect facets**

Solution: "You must have these facets:

- Dynamic Web Module
- Java
- Websphere Web (Co-existence)
- Websphere Web (Extended)

In addition, in your Build Path -> Libraries you should have these entries:

- EAR Libraries
- JRE System Library [IBM JDK]
- Web App Libraries
- Websphere Application Server [your-ver]"

(<http://stubbisms.wordpress.com/2007/11/26/deployment-descriptor-of-the-module-yourappplicationwar-cannot-be-loaded/>)

**Cause: Generic Eclipse Problem**

Solution: From time to time Eclipse just stuffs up for various irrelevant reasons. To fix this, close the project, reopen, clean and recompile. Or even try it in a new workspace if all else fails. (<http://stubbisms.wordpress.com/2007/11/26/deployment-descriptor-of-the-module-yourappplicationwar-cannot-be-loaded/>)
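For the "Missing web.xml" cause above, a minimal `WEB-INF/web.xml` sketch that is enough for the module to be treated as a proper WAR (this assumes a Servlet 2.5 level project; adjust the version and schema to match your project's facet, and the display-name is just a placeholder):

```
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
    <display-name>mymodule</display-name>
</web-app>
```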
Python! (Snake) This is now an Iterative Review. [Next Iteration](https://codereview.stackexchange.com/questions/143784/snake-is-dead-long-live-snake) --- Nowhere near a full game yet. Just a basic overview and initialisation of a `Snake()` class. Thoughts? --- ## `Snake.py` ``` import numpy ''' Game Board: X by Y array of numbers Derived from Snake Object and base parameters (height, width, obstacles?) Board Values: 0 - Empty 1 - Snake Head 2 - Snake Body Snake: Ordered list of X,Y coordinates for each body segment Actions: Snake.Move: Append New Coordinate to list Move into Food: Do Nothing. List is now 1 segment longer. Move Into Non-Food: Remove last Coordinate From List Check for Dead Snake: Dead == New Coordinate is not unique in list. ''' def unit_vector_from_cardinal(cardinal: str) -> numpy.array: if cardinal in ('n', 's', 'e', 'w'): if cardinal == 'n': unit_vector = [0, 1] elif cardinal == 's': unit_vector = [0, -1] elif cardinal == 'e': unit_vector = [1, 0] elif cardinal == 'w': unit_vector = [-1, 0] return numpy.array(unit_vector) else: raise ValueError class Snake(): # An ordered list of X,Y coordinates representing the position of body segments # Method to update said list depending on which direction it is now moving in # Method to delete the last coordinate in the list body_coordinates_list = list() def new_body(self, x_pos: int, y_pos:int, facing: int): self.body_coordinates_list = self.get_initial_coordinates_list(x_pos, y_pos, facing) def get_initial_coordinates_list(self, x_pos: int, y_pos: int, facing: str) -> list: first_coordinate = numpy.array([x_pos, y_pos]) unit_vector = unit_vector_from_cardinal(facing) return [first_coordinate - unit_vector * i for i in range(0, 2 + 1)] if __name__ == '__main__': new_snake = Snake() new_snake.new_body(5, 5, 'n') print(new_snake.body_coordinates_list[0]) print(new_snake.body_coordinates_list[1]) print(new_snake.body_coordinates_list[2]) ``` ## Current Output ``` [5 5] [5 4] [5 3] ```
I'm not really the best at reviewing code (I have my own failures in coding too), but I have a few things that I'd like to point out and share. --- ***Consider setting up the new `Snake` object with an `__init__` instead*** (unless you intend to reuse the same `Snake` object just to create a 'new' snake) You may wish to just move the creation of the Snake body object to an `__init__` for the class like so, rather than having to provide a `new_body` method: ``` class Snake: # An ordered list of X,Y coordinates representing the position of body segments # Method to update said list depending on which direction it is now moving in # Method to delete the last coordinate in the list body_coordinates_list = list() def __init__(self, x_pos: int, y_pos: int, facing: str): self.body_coordinates_list = self.get_initial_coordinates_list(x_pos, y_pos, str(facing)) ... ``` You can then create the new snake with: ``` new_snake = Snake(5, 5, 'n') ``` --- ***Use a dictionary to handle the unit vectors and cardinal directions*** Your current implementation of `unit_vector_from_cardinal` relies on a statically provided list for analysis, and then multiple `if` statements to determine what vector to provide based on that list. I'd consider using a dictionary for the cardinal directions vector, like so: ``` DIRECTION_VECTORS = { 'n': [0, 1], 's': [0, -1], 'e': [1, 0], 'w': [-1, 0] } def unit_vector_from_cardinal(cardinal: str) -> numpy.array: if cardinal in DIRECTION_VECTORS: return numpy.array(DIRECTION_VECTORS[cardinal]) else: raise ValueError ``` This reduces the amount of logical checks, because you won't have to have a large set of `if`, `elif`, etc. checks, and prevents you from having to create a variable that is set based on the specific direction (one less variable to have to init in memory). You could also use a dict that contains tuples instead of lists, and get the same result, as suggested in comments by [Graipher](https://codereview.stackexchange.com/users/98493/graipher), like so: ``` DIRECTION_VECTORS = { 'n': (0, 1), 's': (0, -1), 'e': (1, 0), 'w': (-1, 0) } ``` --- ***Unexpected data types passed to `new_body`, which in turn passes unexpected types to other functions and/or methods later on*** > > (NOTE: If you choose to add an `__init__` to the class, then this applies to that `__init__`) > > > Your `new_body` method should be looked at to make sure you're providing correct data types. Currently, it expects three integers when passed to it, but it seems you're using a string instead when called: ``` new_snake.new_body(5, 5, 'n') ``` Since you are expecting it to be a string in the third item (called `facing` in the function), consider changing the expected variable type for the third item to a `str` instead, which is what you're providing to it when calling it elsewhere in the code. We also need to do this to make sure we can work with strings rather than integers like you are attempting to do in `unit_vector_from_cardinal` when passing `facing` to it: ``` def new_body(self, x_pos: int, y_pos:int, facing: str): ... ``` --- ***`get_initial_coordinates_list` may be a static method*** A static method only relies on the arguments passed to it. 
Your `get_initial_coordinates_list` function does not directly access any of the properties or variables assigned directly to the class itself (such as `body_coordinates_list`), and only uses the arguments and non-class-specific methods/functions, so we can actually make it a static method, and not have to provide a `self` argument (which means it doesn't need to have access to everything else in the class). At least, in its current form, it can be made into a static method: ``` @staticmethod def get_initial_coordinates_list(x_pos: int, y_pos: int, facing: str) -> list: ... ``` --- ***Optional: Consider a better ValueError message*** Looking at the code for `unit_vector_from_cardinal`, you could have a case where you trigger a ValueError. While this is okay, the error message is empty and potentially not super useful for debugging. Whether you decide to put a custom error message or not is up to you, though I would rather have a more informative reason for *why* the ValueError was raised, than have to be guessing: ``` raise ValueError("An invalid cardinal direction was provided.") ``` This way, when the program dies off, it can be a little more useful of an error message (for someone to debug). --- ***Style: Empty parentheses on `Snake()` are redundant*** This isn't really against PEP8 style, this just irks me a little. At some point you may wish to provide arguments to a new `Snake()` instance, but if not, you don't need the parentheses. (And even if you did wish to pass arguments when creating a new `Snake`, that should be handled in the `__init__` function which doesn't exist here): ``` class Snake: .... ```
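Putting the suggestions above together, a minimal sketch of what the revised module could look like (behaviour unchanged from the original code under review):

```
import numpy

DIRECTION_VECTORS = {
    'n': (0, 1),
    's': (0, -1),
    'e': (1, 0),
    'w': (-1, 0),
}

def unit_vector_from_cardinal(cardinal: str) -> numpy.array:
    if cardinal in DIRECTION_VECTORS:
        return numpy.array(DIRECTION_VECTORS[cardinal])
    raise ValueError("An invalid cardinal direction was provided.")

class Snake:
    def __init__(self, x_pos: int, y_pos: int, facing: str):
        self.body_coordinates_list = self.get_initial_coordinates_list(x_pos, y_pos, facing)

    @staticmethod
    def get_initial_coordinates_list(x_pos: int, y_pos: int, facing: str) -> list:
        first_coordinate = numpy.array([x_pos, y_pos])
        unit_vector = unit_vector_from_cardinal(facing)
        return [first_coordinate - unit_vector * i for i in range(3)]

if __name__ == '__main__':
    new_snake = Snake(5, 5, 'n')
    for segment in new_snake.body_coordinates_list:
        print(segment)
```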
Pandas Plotting Display all date values on x-axis (matplotlib only displays few values) formatted as MMM-YYYY

```
import os
import pandas as pd
import matplotlib.pyplot as plt
import datetime

df = pd.read_excel(DATA_DIR+"/"+file_list[0], index_col="Date")
df.head(5)
```

[![enter image description here](https://i.stack.imgur.com/kLkcS.png)](https://i.stack.imgur.com/kLkcS.png)

```
smooth = df['Pur. Rate'].rolling(window=20).mean()
smooth.plot()
```

[![enter image description here](https://i.stack.imgur.com/OZi5E.png)](https://i.stack.imgur.com/OZi5E.png)

I get the following graph and need to plot all the date values for every MONTH-YEAR on the x-axis. I want to display all the months and years formatted diagonally on the x-axis in the format (Feb-19). I can make the size of the plot larger to fit them all, as I will save it as a jpg. I want the x-axis to have the following values: Jan 16, Feb 16, Mar 16, Apr 16, May 16, Jun 16, Jul 16, Aug 16, Sep 16, Oct 16, Nov 16, Dec 16, Jan 17, Feb 17 … (I want to display all these values; matplotlib automatically truncates them, which I want to avoid.)
As mentioned in the comments, you have to set both the Locator and the Formatter. This is explained well in the matplotlib documentation for [graphs in general](https://matplotlib.org/stable/api/ticker_api.html) and [separately for datetime axes](https://matplotlib.org/stable/api/dates_api.html). See also an explanation of the [TickLocators](https://matplotlib.org/3.1.1/gallery/ticks_and_spines/tick-locators.html). The formatting codes are derived from Python's [strftime() and strptime() format codes](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes).

```
from matplotlib import pyplot as plt
import pandas as pd
from matplotlib.dates import MonthLocator, DateFormatter

#fake data
import numpy as np
np.random.seed(123)
n = 100
df = pd.DataFrame({"Dates": pd.date_range("20180101", periods=n, freq="10d"),
                   "A": np.random.randint(0, 100, size=n),
                   "B": np.random.randint(0, 100, size=n),})
df.set_index("Dates", inplace=True)
print(df)

ax = df.plot()

#defines the tick location
ax.xaxis.set_major_locator(MonthLocator())
#defines the label format
ax.xaxis.set_major_formatter(DateFormatter("%b-%y"))
ax.tick_params(axis="x", labelrotation= 90)

plt.tight_layout()
plt.show()
```

Sample output:

[![enter image description here](https://i.stack.imgur.com/Fa1Bw.png)](https://i.stack.imgur.com/Fa1Bw.png)
Problems assigning color to bars in Pandas v0.20 and matplotlib I have been struggling for a while with the definition of colors in a bar plot using Pandas and Matplotlib. Let us imagine that we have the following dataframe:

```
import pandas as pd

pers1 = ["Jesús","lord",2]
pers2 = ["Mateo","apostel",1]
pers3 = ["Lucas","apostel",1]

dfnames = pd.DataFrame(
    [pers1,pers2, pers3],
    columns=["name","type","importance"]
)
```

Now, I want to create a bar plot with the importance as the numerical value, the names of the people as ticks and use the type column to assign colors. I have read other questions (for example: [Define bar chart colors for Pandas/Matplotlib with defined column](https://stackoverflow.com/questions/38044866/define-bar-cahart-colors-for-pandas-matplotlib-with-defined-column)) but it doesn't work...

So, first I have to define colors and assign them to different values:

```
colors = {'apostel':'blue','lord':'green'}
```

And finally use the .plot() function:

```
dfnames.plot(
    x="name",
    y="importance",
    kind="bar",
    color = dfnames['type'].map(colors)
)
```

Good. The only problem is that all bars are green:

[![enter image description here](https://i.stack.imgur.com/eQpLC.png)](https://i.stack.imgur.com/eQpLC.png)

Why?? I don't know... I am testing it in Spyder and Jupyter... Any help? Thanks!
As per this [GH16822](https://github.com/pandas-dev/pandas/issues/16822), this is a ***regression bug*** introduced in version `0.20.3`, wherein only the *first* colour was picked from the list of colours passed. This was not an issue with prior versions.

The reason, according to one of the contributors, was this -

> The problem seems to be in `_get_colors`. I think that `BarPlot` should
> define a `_get_colors` that does something like
>
> ```
> def _get_colors(self, num_colors=None, color_kwds='color'):
>     color = self.kwds.get('color')
>     if color is None:
>         return super()._get_colors(self, num_colors=num_colors, color_kwds=color_kwds)
>     else:
>         num_colors = len(self.data)  # maybe? may not work for some cases
>         return _get_standard_colors(color=kwds.get('color'), num_colors=num_colors)
>
> ```

---

There are a couple of options for you -

1. The most obvious choice would be to update to the latest version of pandas (currently `v0.22`)
2. If you need a workaround, there's one (also mentioned in the issue tracker) whereby you wrap the arguments within an extra tuple -

```
dfnames.plot(x="name", y="importance", kind="bar",
             color=[tuple(dfnames['type'].map(colors))])
```

Though, in the interest of progress, I'd recommend updating your pandas.
Ios Swift : Adding or Moving NavigationBar to bottom of the view controller I want to move the navigation controller bar to the bottom of the view controller. How can I get this done? I tried:

```
self.navigationController!.navigationBar.frame = CGRectMake( 0, UIScreen.mainScreen().bounds.height - 50, UIScreen.mainScreen().bounds.width, 50)
```

This moves the bar to the bottom, but it hides all the other controller objects and the back button is not working.
Sujay U N,

You should not try to move the UINavigationBar provided by the embedded UINavigationController to the bottom of the screen. Trying that will obviously move all the views below it, causing all the controller objects to hide.

**Workaround**

*Approach 1:*

Consider using a ToolBar :)

A toolbar is designed to be placed at the bottom of the screen. If you are using a xib or storyboard you can pick a toolbar from the components library, place it at the bottom of your ViewController, and then apply autoresizing masks or constraints properly :)

[![enter image description here](https://i.stack.imgur.com/LVDG3.png)](https://i.stack.imgur.com/LVDG3.png)

Now in order to show the back button make use of UIBarButtonItems. Change the style to custom and provide it an arrow image, or use the default style as done here.

[![enter image description here](https://i.stack.imgur.com/WuVX3.png)](https://i.stack.imgur.com/WuVX3.png)

Now you are almost all set to go :) You will still notice a UINavigationBar at the top of your view controller. In order to get rid of it, select your ViewController, select its Top Bar property and set it to None :)

[![enter image description here](https://i.stack.imgur.com/qmJeh.png)](https://i.stack.imgur.com/qmJeh.png)

*Approach 2*

Use UINavigationBar. If you are specific about using a navigation bar and don't want to use a toolbar, you can do the same thing with a UINavigationBar as well. Drag the UINavigationBar from the components library and place it at the bottom of the screen. Drag a UIBarButtonItem, drop it as the leftBarButtonItem, and change the bar button item's image to your back image. (Same process as with the UIToolBar, just use a UINavigationBar instead.)

[![enter image description here](https://i.stack.imgur.com/ZCrpo.png)](https://i.stack.imgur.com/ZCrpo.png)

Understand that this is not the same as the navigation bar provided by the embedded NavigationController. So get rid of the NavigationBar at the top of your ViewController the same way as explained above.

Finally,

In both cases, draw an IBAction from the bar button item and handle popping the view controller programmatically (a sketch follows below).

Happy coding :)
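For that last step, a minimal sketch in the same Swift version as the question, assuming the view controller is still embedded in a UINavigationController whose own bar has been set to None; the action name is hypothetical and would be wired to the bar button item:

```
@IBAction func backTapped(sender: UIBarButtonItem) {
    // Pop back to the previous view controller on the navigation stack
    self.navigationController?.popViewControllerAnimated(true)
}
```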
Is it possible to set ETags using JAX-RS without resorting to Response objects? In [one of the few questions (with answers)](https://stackoverflow.com/questions/2085411/how-to-use-cxf-jax-rs-and-http-caching) I have found on SO regarding JAX-RS and caching, the answer to generating ETags (for caching) is by setting some values on the Response object. As in the following: ``` @GET @Path("/person/{id}") public Response getPerson(@PathParam("id") String name, @Context Request request){ Person person = _dao.getPerson(name); if (person == null) { return Response.noContent().build(); } EntityTag eTag = new EntityTag(person.getUUID() + "-" + person.getVersion()); CacheControl cc = new CacheControl(); cc.setMaxAge(600); ResponseBuilder builder = request.evaluatePreconditions(person.getUpdated(), eTag); if (builder == null) { builder = Response.ok(person); } return builder.cacheControl(cc).lastModified(person.getUpdated()).build(); } ``` The problem is that will not work for us, since we use the same methods for both SOAP and REST services, by annotating the methods with @WebMethod (SOAP), @GET (and whatever else we might need to expose the service). The previous service would look like this to us (excluding the creation of headers): ``` @WebMethod @GET @Path("/person/{id}") public Person getPerson(@WebParam(name="id") @PathParam("id") String name){ return _dao.getPerson(name); } ``` Is there any way - through some extra configuration - of setting those headers? This is the first time I have found that using Response objects actually has some benefit over just auto-conversion ... We are using Apache CXF.
Yes, you might be able to use interceptors to achieve this if you could generate the ETag AFTER you create your response object.

```
import javax.ws.rs.core.MultivaluedMap;

import org.apache.cxf.jaxrs.impl.MetadataMap;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class MyInterceptor extends AbstractPhaseInterceptor<Message> {

    public MyInterceptor () {
        super(Phase.MARSHAL);
    }

    public final void handleMessage(Message message) {

        MultivaluedMap<String, Object> headers = (MetadataMap<String, Object>) message.get(Message.PROTOCOL_HEADERS);
        if (headers == null) {
            headers = new MetadataMap<String, Object>();
        }

        // generate the ETag here; getEtag() is a placeholder for your own logic
        String etag = getEtag();
        String cc = "max-age=600";

        headers.add("ETag", etag);
        headers.add("Cache-Control", cc);
        message.put(Message.PROTOCOL_HEADERS, headers);
    }
}
```

If that way isn't viable, I would use the original solution that you posted, and just add your Person entity to the builder:

```
Person p = _dao.getPerson(name);

return builder.entity(p).cacheControl(cc).lastModified(person.getUpdated()).build();
```
Pygame Surface.fill() not working I recently started learning pygame. I made a very basic program (all it does is change it's own background colour): ``` import pygame from pygame.locals import * pygame.init() DISPLAY = pygame.display.set_mode((800,600)) while True: for event in pygame.event.get(): if event == QUIT: pygame.quit() sys.exit() DISPLAY.fill((3,4,5)) pygame.display.update() ``` yet when the program ran all it did was produce a blank, black pygame window. I am using Windows 10 64-bit home with python 3.5.2 and pygame 1.9.1. Please help me figure out why the program didn't work?
It works fine, you're just using a very dark colour (R=3/255, G=4/255, B=5/255). Try this to get a blue screen: ``` DISPLAY.fill((0,0,255)) ``` From the [documentation](https://www.pygame.org/docs/ref/surface.html#pygame.Surface.fill) on `Surface.fill`: > > > ``` > fill(color, rect=None, special_flags=0) -> Rect > > ``` > > ... > > > The color argument can be either a RGB sequence, a RGBA sequence or a mapped color index. > > > Each component of an RGB value ranges from 0 to 255, with 255 representing the maximum intensity of that component. Pure black would be represented as `(0, 0, 0)` and white as `(255, 255, 255)`. --- For better demonstration (or fun), you can use this program to get an idea of what background colours you can get from different RGB values: ``` from itertools import product import pygame import sys pygame.init() DISPLAY = pygame.display.set_mode((200, 200)) for r, g, b in product(range(0, 255, 16), repeat=3): print('r={}, g={}, b={}'.format(r, g, b)) DISPLAY.fill((r, g, b)) pygame.display.update() pygame.time.delay(10) pygame.quit() sys.exit() ```
How to hide Sitecore Client Is there a way to hide the Sitecore client, so it cannot be accessed via <http://hostname/sitecore>?
There are several options to achieve that.

1. IP Level Restriction
2. Disable Access
3. Delete the entire folder.

They are explained in detail here:

For 3 - [Removing the Sitecore Client](https://sdn.sitecore.net/Articles/Security/Removing%20the%20Sitecore%20Client.aspx)

For 1 and 2 - [Restrict Access To The Client](https://doc.sitecore.net/sitecore_experience_platform/xdb_configuration/restrict_access_to_the_client?sc_lang=en)

You can also check the version-specific Security Hardening Guide for the instance (like this one [here](https://sdn.sitecore.net/upload/sitecore6/62/sitecore_security_hardening_guide-usletter.pdf))
WEEK\_OF\_YEAR inconsistent on different machines **Update:** ok, I seem to have found half the answer. If I create my Calendar with a no-argument getInstance, I get WEEK\_OF\_YEAR = 52. However, if I create it by supplying Locale.getDefault() to getInstance, I get WEEK\_OF\_YEAR = 1. Totally didn't expect this... need to re-read the Calendar docs, I guess.

I am building a Calendar from a timestamp which corresponds to **Sat, 01 Jan 2011 00:00:00 GMT**. The same code, using java.util.Date, Calendar and TimeZone, is behaving differently on different machines (with the same locale); all the fields in the Calendar are the same, except WEEK\_OF\_YEAR. On my machine it is 52 (on two of my machines, actually). On my coworker's machines it's 1 (which seems to be correct).

```
import java.util.Date;
import java.util.TimeZone;
import java.util.Calendar;
import java.util.Locale;

public class CalendarTest {
    public static void main(String[] args) {

        Locale l = Locale.getDefault();
        System.out.println(l);

        Long d = new Long(1293840000000l);
        Calendar c = Calendar.getInstance();
        c.setTimeZone(TimeZone.getTimeZone("UTC"));
        c.setTime(new Date(d));
        System.out.println(c.toString());
    }
}
```

.. locale is en\_US, but Calendar is:

```
>java.util.GregorianCalendar[time=1293840000000,
 areFieldsSet=true, areAllFieldsSet=true, lenient=true,
 zone=sun.util.calendar.ZoneInfo[
   id="UTC", offset=0, dstSavings=0, useDaylight=false,
   transitions=0, lastRule=null
 ],
 firstDayOfWeek=2, minimalDaysInFirstWeek=4,
 ERA=1, YEAR=2011, MONTH=0, WEEK_OF_YEAR=52, WEEK_OF_MONTH=0,
 DAY_OF_MONTH=1, DAY_OF_YEAR=1, DAY_OF_WEEK=7, DAY_OF_WEEK_IN_MONTH=1,
 AM_PM=0, HOUR=0, HOUR_OF_DAY=0, MINUTE=0, SECOND=0, MILLISECOND=0,
 ZONE_OFFSET=0, DST_OFFSET=0]
```

What might be causing this WEEK\_OF\_YEAR inconsistency?
# `firstDayOfWeek` & `minimalDaysInFirstWeek` Turns out to be a feature, not a bug. The cause of the different behaviors you see is two settings reported in your output shown in the Question: - `firstDayOfWeek` - `minimalDaysInFirstWeek` It’s important to read the doc for both the class and subclass: - [java.util.Calendar](http://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html) - [java.util.GregorianCalendar](http://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html) The second doc explains in detail how those two settings listed above are crucial to determining a localized week. ## Calendar Note the calendar. The first day of 2011 is a Saturday. The second of the month is a Sunday, and Sunday is the default start-of-week for United States. ![Calendar of month of January 2011 showing the first of the month is a Saturday](https://i.stack.imgur.com/aLZvb.png) On a Mac OS X computer set to United States locale, these settings are both `1`. If the minimum days needed is 1, then the First lands on a localized Week 1. Java reports this. But on your reported problem machine, these settings are 2 and 4, respectively. I don't know how you got these settings altered from the usual defaults, but you did. - `firstDayOfWeek` | `1` versus `2` (Sunday versus Monday) - `minimalDaysInFirstWeek` | `1` versus `4` The minimum of 4 days means that the First does not qualify as a week in the new year. So it is week 52 of the previous year (2010). The first week of 2011 is January 2, 2011 through January 8. So the behavior you are seeing matches expectations given the documentation for the java.util.Calendar class in Java 7. The mystery is how did those settings get changed away from the default on your problem machine? ## ISO 8601 By the way, the doc mentions that settings of 2 & 4 gives you the behavior defined by the [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) standard, as mentioned in [my other answer](https://stackoverflow.com/a/24829427/642706). That may be the clue as to why these settings are non-default on your problem machine. Someone, a sysadmin or programmer, may be trying to get standard behavior rather than localized behavior. # Example Code Let's demonstrate this with some code. We’ll use a modified version of the code from the Question. Our code here explicitly sets the variables at issue. So you can run this example on any of your machines, normal or problem. First we force the use of the settings found by default on a US Locale machine, `1` & `1`. Then we use the settings reported in the Question, `2` & `4`. ``` Locale l = Locale.getDefault(); System.out.println( l + "\n" ); Long d = new Long( 1293840000000l ); Calendar c = Calendar.getInstance(); c.setTimeZone( TimeZone.getTimeZone( "UTC" ) ); c.setTime( new Date( d ) ); // Running Java 8 Update 11, Mac OS 10.8.5, virtual machine in Parallels 9, hosted on Mac with Mavericks. // Force the use of default settings found on a machine set for United States locale (using Apple defaults). c.setFirstDayOfWeek( 1 ); c.setMinimalDaysInFirstWeek( 1 ); // Reports: WEEK_OF_YEAR=1 System.out.println( "Default US settings:\n" + c.toString() + "\n" ); // Using reported settings (Coincides with ISO 8601 Week definition). 
c.setFirstDayOfWeek( 2 ); c.setMinimalDaysInFirstWeek( 4 ); // Reports: WEEK_OF_YEAR=52 System.out.println( "Reported settings (ISO 8601):\n" + c.toString() + "\n" ); ``` When run… ``` en_US Default US settings: java.util.GregorianCalendar[time=1293840000000,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2011,MONTH=0,WEEK_OF_YEAR=1,WEEK_OF_MONTH=1,DAY_OF_MONTH=1,DAY_OF_YEAR=1,DAY_OF_WEEK=7,DAY_OF_WEEK_IN_MONTH=1,AM_PM=0,HOUR=0,HOUR_OF_DAY=0,MINUTE=0,SECOND=0,MILLISECOND=0,ZONE_OFFSET=0,DST_OFFSET=0] Reported settings (ISO 8601): java.util.GregorianCalendar[time=1293840000000,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=2,minimalDaysInFirstWeek=4,ERA=1,YEAR=2011,MONTH=0,WEEK_OF_YEAR=52,WEEK_OF_MONTH=0,DAY_OF_MONTH=1,DAY_OF_YEAR=1,DAY_OF_WEEK=7,DAY_OF_WEEK_IN_MONTH=1,AM_PM=0,HOUR=0,HOUR_OF_DAY=0,MINUTE=0,SECOND=0,MILLISECOND=0,ZONE_OFFSET=0,DST_OFFSET=0] ``` # Moral Of The Story Use [ISO 8601 standard weeks](http://en.wikipedia.org/wiki/ISO_8601#Week_dates)! --- Thanks to Marco13, whose comments on the Question sparked this answer.
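If you are on Java 8 or later, the same locale dependence can be made explicit with java.time's WeekFields, which asks for the week-numbering rules up front instead of inheriting them from hidden Calendar settings. This is only a sketch, not part of the original answer:

```
import java.time.LocalDate;
import java.time.temporal.WeekFields;
import java.util.Locale;

public class WeekOfYearDemo {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2011, 1, 1); // Sat, 01 Jan 2011

        // US rules: first day of week Sunday, minimal days in first week = 1
        int usWeek = date.get(WeekFields.of(Locale.US).weekOfYear());    // 1

        // ISO 8601 rules: first day Monday, minimal days in first week = 4
        int isoWeek = date.get(WeekFields.ISO.weekOfWeekBasedYear());    // 52 (still week 52 of 2010)

        System.out.println("US week: " + usWeek + ", ISO week: " + isoWeek);
    }
}
```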
Cordova 3.0 - Open link in external browser in iOS How do you open links in the devices native browser when using Cordova 3.0 on iOS? People have suggested using `window.open( url, "_system" )` but this does not work in Cordova 3.0. **My Attempt** ``` if( navigator.app ) // Android navigator.app.loadUrl( url, {openExternal:true} ) else // iOS and others window.open( url, "_system" ) // opens in the app, not in safari ``` Does anyone know of a solution that works with Cordova 3.0? Thanks
**NOTE**: to make `window.open('somelink', '_system')` work you now need a device-level plugin, the InAppBrowser. Here are the installation instructions as of Cordova 3.0.

From the Docs for 3.0:

As of version 3.0, Cordova implements device-level APIs as plugins. Use the CLI's plugin command, described in The Command-line Interface, to add or remove this feature for a project:

```
$ cordova plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-inappbrowser.git
$ cordova plugin rm org.apache.cordova.core.inappbrowser
```

These commands apply to all targeted platforms, but modify the platform-specific configuration settings described below:

iOS (in config.xml)

```
<feature name="InAppBrowser">
    <param name="ios-package" value="CDVInAppBrowser" />
</feature>
```

I just tested this and it works.
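Once the plugin is installed, a minimal usage sketch (the URL is just an example) that behaves the same on iOS and Android, waiting for `deviceready` before calling `window.open`:

```
document.addEventListener('deviceready', function () {
    // '_system' hands the URL to the device's native browser (Safari on iOS)
    window.open('http://example.com', '_system');
}, false);
```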
How to use 3 external monitors on Ubuntu 17.10 Dell XPS 13 9360? Just bought a new USB Type C to 3 HDMI adapter: [amazon link](https://rads.stackoverflow.com/amzn/click/B07667D37V) and it seems like my laptop sees all 3 monitors, but can't handle them all together. I've tried different cables that work, and different inputs/monitors and everything works, except when all 3 together, Ubuntu sees them (I can change the primary display to any of those 3 external monitors) but only the built-in monitor works. Is there anything specific I need to do in order to support 3 monitors at once? --- **Update** 2 external + built-in: ``` $ xrandr --listmonitors Monitors: 3 0: +*eDP-1 1920/294x1080/165+0+1080 eDP-1 1: +DP-1-1-2 1920/509x1080/286+0+0 unknown output 0x48 2: +DP-1-1-1 1920/509x1080/286+1920+0 unknown output 0x47 ``` 3 external + built-in(Only the built-in monitor works): ``` $ xrandr --listmonitors Monitors: 1 0: +*eDP-1 3200/294x1800/165+0+0 eDP-1 ``` When 3 external monitors are connected, the display settings shows all 3 monitors kinda visible, but not usable: [![enter image description here](https://i.stack.imgur.com/eUZU9.png)](https://i.stack.imgur.com/eUZU9.png) Regarding video graphics, Dell XPS 13 9360 has `Intel® Iris Plus Graphics 640 (Kaby Lake GT3)` which theoretically, is able to handle 3 monitors, but would that mean 3 additional monitors or 3 monitors in total? [source](https://www.notebookcheck.net/Intel-Iris-Plus-Graphics-640.190371.0.html) While I use ubuntu 17.10, I use Xorg (wayland seems to be buggy).
What you are trying--and which desktop you are using--is vague here. There are specific things to try, but I cannot tell what you are trying. My desktop environment is XFCE4 on Ubuntu 17.10 running the X11-based display. Not Wayland. I have Dell Precision 5510 and a brand new USB-C dock. On the Dock itself, there are HDMI and DisplayPort jacks. I can get 3 monitors going if you count the laptop display and the 2 monitors. I have used 3 external successfully when they are plugged into separate jacks, not in a USB-C dock. I'm pretty sure you can get 2 external working via the usb-c, I suggest you try with that. Then worry about 3rd. Here are things to try. Let us know what you see. In terminal, run ``` xrandr --listmonitors ``` to find out if system really does see monitors. You can run xrandr to get a much more verbose listing. Right now, I'm not connected, and end of xrandr output is: ``` DP-1 disconnected (normal left inverted right x axis y axis) HDMI-1 disconnected (normal left inverted right x axis y axis) DP-2 disconnected (normal left inverted right x axis y axis) HDMI-2 disconnected (normal left inverted right x axis y axis) ``` If you see that, then 4 monitors would be possible. Right now, I have removed the nvidia proprietary drivers and I still do have success with 2 external monitors via the dock. I expect you can too. However, configuring will be a problem. Even if system notices your monitors, it will not use them until you configure them. You could try CLI with xrandr, but I don't do that too often anymore. It is easier if you use the GUI for this. My favorite is "arandr", which has worked great for 3 years, until last week it failed to recognize the resolutions. I have no idea what broke it. If arandr fails, there is a much improved program called Display in the XFCE4 settings. I believe it is adapted from the Gnome project, possible your desktop has it, or similar. It lists detected monitors. You click a button "active" separately for each one, it shows them in a tiny rectangle, and you can move them about in the screen to place them left and right. If your dock is like mine, those monitors will not work until you activate. If (IF) you are using the NVIDIA proprietary drivers for X11, run the ``` nvidia-settings ``` program. You'll see what monitors it can detect. You'll see that you cannot get all 4 monitors in one X11 session, but you can get a pairs connected with each other. You will probably not be able to drag a window across all 3 monitors. nvidia-settings will offer to re-write /etc/X11/xorg.conf for you. Make sure you have a copy of the old one before saying yes. At one time (say 2010), I was knee deep in settings for Xinerama and Nvidia Twinview. If you start bumping up against very fine grained video configurations, take a step back. It is still possible to do that stuff, but the whole push in X11 setup is to let users ignore it. I am a little distrustful of the USB-C device you have. Unless you know for sure it works with your computer, it may be trouble. I got this computer May, 2016, and tried to order a dock. Dell said "our dock does not work, you can't buy one." I waited until Nov 2017 to get one because that was the first time the firmware was compatible with Ubuntu Linux. It mostly works now, but not always. The USB poops out about 10% of the time and I have to unplug everything. If a monitor powers down to save power, the laptop often cannot wake it up through the usb dock. 
If you try and need more help, write back with more information, including what kind of dock this is, the xrandr output, and which video driver you are using. It really is a different game if you run the Nvidia proprietary driver.

Oh, if you are in Ubuntu 17.10 with Wayland as the graphics environment, then I have nothing to say. I'm avoiding it.
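If you want to try the xrandr CLI route mentioned above, here is a sketch that reproduces the two-external layout from the question's `xrandr --listmonitors` output. The output names are taken from that output and may well differ on your machine, so check `xrandr --listmonitors` first:

```
xrandr --output DP-1-1-2 --mode 1920x1080 --pos 0x0 \
       --output DP-1-1-1 --mode 1920x1080 --pos 1920x0 \
       --output eDP-1    --auto           --pos 0x1080
```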
c++ template not reporting error for local variable Why does this report an error:

```
class a {
public:
    void b() {this->c++;}
};

int main()
{
    a var;
}
```

But this does not?

```
template <typename d>
class a {
public:
    void b() {this->c++;}
};

int main()
{
    a<int> var;
}
```

Despite the fact that "a" is a templated class, the function "b", or at least the access to the variable "c", does not depend on the type "d", so it should report something. However, if I call "var.b();" in the main function it gives an error. I know it is a simple question, but I really can't figure it out.
That's because the function `a<int>::b()` is not [instantiated](http://en.cppreference.com/w/cpp/language/class_template#Class_template_instantiation), due to the fact that it is a template. When you try to instantiate it, i.e. call it like `var.b();`, the compiler will spit an error. You have to understand that templates are instantiated "on demand", i.e. when the compiler needs the instantiation. Otherwise only minimal syntactic verifications take place. The details regarding instantiations/name lookups in templates is a rather complicated subject, I highly recommend this book: [C++ Templates: The Complete Guide](https://www.amazon.ca/Templates-Complete-Guide-David-Vandevoorde/dp/0201734842) by David Vandevoode and Nicolai Josuttis. That's not the case with the first code snippet: the function has to be valid from the very beginning.
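To see the difference in action, a small sketch based on the second snippet from the question: the class itself can be instantiated without error, but anything that forces the member function to be instantiated surfaces the problem.

```
template <typename d>
class a {
public:
    void b() { this->c++; }   // no member 'c' -- only checked when b() is instantiated
};

int main()
{
    a<int> var;   // fine: only the class is instantiated, b() is not
    var.b();      // error: 'c' is not a member of a<int>
}

// An explicit instantiation definition has the same effect,
// because it instantiates every member function:
// template class a<int>;
```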
How to sign ClickOnce application in Post Build Event? I have a WinForms application that is deployed through ClickOnce. I have a Post Build Event that signs my assemblies with SignTool. I publish the application as a ClickOnce installer and I see in the VS output that the files are signed, but after installation the assemblies are not signed. How can I sign assemblies that will be included in the ClickOnce package?
Thanks to [Mate](https://stackoverflow.com/users/895279/mate), I finally did it!

I added a target to the \*.csproj file as mentioned in the article [here](https://robindotnet.wordpress.com/2013/02/24/windows-8-and-clickonce-the-definitive-answer-2/).

It looks like:

```
<Target Name="AfterCompile">
  <Exec Command="&quot;C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Bin\SignTool.exe&quot; sign /f cert.pfx /p your_password /v &quot;$(ProjectDir)obj/$(ConfigurationName)/$(TargetFileName)&quot;" />
</Target>
```

The signed **\*.exe** file is in the **obj** directory. Also, I use the **AfterCompile** target because I had problems with the application manifest when I tried the **BeforePublish** target.
What does this list permutations implementation in Haskell exactly do? I am studying the code in the Data.List module and can't exactly wrap my head around this implementation of permutations: ``` permutations :: [a] -> [[a]] permutations xs0 = xs0 : perms xs0 [] where perms [] _ = [] perms (t:ts) is = foldr interleave (perms ts (t:is)) (permutations is) where interleave xs r = let (_,zs) = interleave' id xs r in zs interleave' _ [] r = (ts, r) interleave' f (y:ys) r = let (us,zs) = interleave' (f . (y:)) ys r in (y:us, f (t:y:us) : zs) ``` Can somebody explain in detail how these nested functions connect/work with each other?
Sorry about the late answer, it took a bit longer to write down than expected. --- So, first of all to maximize lazyness in a list function like this there are two goals: - Produce as many answers as possible before inspecting the next element of the input list - The answers themselves must be lazy, and so there the same must hold. Now consider the `permutation` function. Here maximal lazyness means: - We should determine that there are at least `n!` permutations after inspecting just `n` elements of input - For each of these `n!` permutations, the first `n` elements should depend only on the first `n` elements of the input. The first condition could be formalized as ``` length (take (factorial n) $ permutations ([1..n] ++ undefined))) `seq` () == () ``` David Benbennick formalized the second condition as ``` map (take n) (take (factorial n) $ permutations [1..]) == permutations [1..n] ``` Combined, we have ``` map (take n) (take (factorial n) $ permutations ([1..n] ++ undefined)) == permutations [1..n] ``` Let's start with some simple cases. First `permutation [1..]`. We must have ``` permutations [1..] = [1,???] : ??? ``` And with two elements we must have ``` permutations [1..] = [1,2,???] : [2,1,???] : ??? ``` Note that there is no choice about the order of the first two elements, we can't put `[2,1,...]` first, since we already decided that the first permutation must start with `1`. It should be clear by now that the first element of `permutations xs` must be equal to `xs` itself. --- Now on to the implementation. First of all, there are two different ways to make all permutations of a list: 1. Selection style: keep picking elements from the list until there are none left ``` permutations [] = [[]] permutations xxs = [(y:ys) | (y,xs) <- picks xxs, ys <- permutations xs] where picks (x:xs) = (x,xs) : [(y,x:ys) | (y,ys) <- picks xs] ``` 2. Insertion style: insert or interleave each element in all possible places ``` permutations [] = [[]] permutations (x:xs) = [y | p <- permutations xs, y <- interleave p] where interleave [] = [[x]] interleave (y:ys) = (x:y:ys) : map (y:) (interleave ys) ``` Note that neither of these is maximally lazy. The first case, the first thing this function does is pick the first element from the entire list, which is not lazy at all. In the second case we need the permutations of the tail before we can make any permutation. To start, note that `interleave` can be made more lazy. The first element of `interleave yss` list is `[x]` if `yss=[]` or `(x:y:ys)` if `yss=y:ys`. But both of these are the same as `x:yss`, so we can write ``` interleave yss = (x:yss) : interleave' yss interleave' [] = [] interleave' (y:ys) = map (y:) (interleave ys) ``` The implementation in Data.List continues on this idea, but uses a few more tricks. It is perhaps easiest to go through the [mailing list discussion](http://haskell.1045720.n5.nabble.com/Add-subsequences-and-permutations-to-Data-List-ticket-1990-td3173688.html). We start with David Benbennick's version, which is the same as the one I wrote above (without the lazy interleave). We already know that the first elment of `permutations xs` should be `xs` itself. So, let's put that in ``` permutations xxs = xxs : permutations' xxs permutations' [] = [] permutations' (x:xs) = tail $ concatMap interleave $ permutations xs where interleave = .. ``` The call to `tail` is of course not very nice. 
But if we inline the definitions of `permutations` and `interleave` we get ``` permutations' (x:xs) = tail $ concatMap interleave $ permutations xs = tail $ interleave xs ++ concatMap interleave (permutations' xs) = tail $ (x:xs) : interleave' xs ++ concatMap interleave (permutations' xs) = interleave' xs ++ concatMap interleave (permutations' xs) ``` Now we have ``` permutations xxs = xxs : permutations' xxs permutations' [] = [] permutations' (x:xs) = interleave' xs ++ concatMap interleave (permutations' xs) where interleave yss = (x:yss) : interleave' yss interleave' [] = [] interleave' (y:ys) = map (y:) (interleave ys) ``` The next step is optimization. An important target would be to eliminate the (++) calls in interleave. This is not so easy, because of the last line, `map (y:) (interleave ys)`. We can't immediately use the foldr/ShowS trick of passing the tail as a parameter. The way out is to get rid of the map. If we pass a parameter `f` as the function that has to be mapped over the result at the end, we get ``` permutations' (x:xs) = interleave' id xs ++ concatMap (interleave id) (permutations' xs) where interleave f yss = f (x:yss) : interleave' f yss interleave' f [] = [] interleave' f (y:ys) = interleave (f . (y:)) ys ``` Now we can pass in the tail, ``` permutations' (x:xs) = interleave' id xs $ foldr (interleave id) [] (permutations' xs) where interleave f yss r = f (x:yss) : interleave' f yss r interleave' f [] r = r interleave' f (y:ys) r = interleave (f . (y:)) ys r ``` This is starting to look like the one in Data.List, but it is not the same yet. In particular, it is not as lazy as it could be. Let's try it out: ``` *Main> let n = 4 *Main> map (take n) (take (factorial n) $ permutations ([1..n] ++ undefined)) [[1,2,3,4],[2,1,3,4],[2,3,1,4],[2,3,4,1]*** Exception: Prelude.undefined ``` Uh oh, only the first `n` elements are correct, not the first `factorial n`. The reason is that we still try to place the first element (the `1` in the above example) in all possible locations before trying anything else. --- Yitzchak Gale came up with a solution. Considered all ways to split the input into an initial part, a middle element, and a tail: ``` [1..n] == [] ++ 1 : [2..n] == [1] ++ 2 : [3..n] == [1,2] ++ 3 : [4..n] ``` If you haven't seen the trick to generate these before before, you can do this with `zip (inits xs) (tails xs)`. Now the permutations of `[1..n]` will be - `[] ++ 1 : [2..n]` aka. `[1..n]`, or - `2` inserted (interleaved) somewhere into a permutation of `[1]`, followed by `[3..n]`. But not `2` inserted at the end of `[1]`, since we already go that result in the previous bullet point. - `3` interleaved into a permutation of `[1,2]` (not at the end), followed by `[4..n]`. - etc. You can see that this is maximally lazy, since before we even consider doing something with `3`, we have given all permutations that start with some permutation of `[1,2]`. The code that Yitzchak gave was ``` permutations xs = xs : concat (zipWith newPerms (init $ tail $ tails xs) (init $ tail $ inits xs)) where newPerms (t:ts) = map (++ts) . concatMap (interleave t) . permutations3 interleave t [y] = [[t, y]] interleave t ys@(y:ys') = (t:ys) : map (y:) (interleave t ys') ``` Note the recursive call to `permutations3`, which can be a variant that doesn't have to be maximally lazy. As you can see this is a bit less optimized than what we had before. But we can apply some of the same tricks. The first step is to get rid of `init` and `tail`. 
Let's look at what `zip (init $ tail $ tails xs) (init $ tail $ inits xs)` actually is ``` *Main> let xs = [1..5] in zip (init $ tail $ tails xs) (init $ tail $ inits xs) [([2,3,4,5],[1]),([3,4,5],[1,2]),([4,5],[1,2,3]),([5],[1,2,3,4])] ``` The `init` gets rid of the combination `([],[1..n])`, while the `tail` gets rid of the combination `([1..n],[])`. We don't want the former, because that would fail the pattern match in `newPerms`. The latter would fail `interleave`. Both are easy to fix: just add a case for `newPerms []` and for `interleave t []`. ``` permutations xs = xs : concat (zipWith newPerms (tails xs) (inits xs)) where newPerms [] is = [] newPerms (t:ts) is = map (++ts) (concatMap (interleave t) (permutations is)) interleave t [] = [] interleave t ys@(y:ys') = (t:ys) : map (y:) (interleave t ys') ``` Now we can try to inline `tails` and `inits`. Their definition is ``` tails xxs = xxs : case xxs of [] -> [] (_:xs) -> tails xs inits xxs = [] : case xxs of [] -> [] (x:xs) -> map (x:) (inits xs) ``` The problem is that `inits` is not tail recursive. But since we are going to take a permutation of the inits anyway, we don't care about the order of the elements. So we can use an accumulating parameter, ``` inits' = inits'' [] where inits'' is xxs = is : case xxs of [] -> [] (x:xs) -> inits'' (x:is) xs ``` Now we make `newPerms` a function of `xxs` and this accumulating parameter, instead of `tails xxs` and `inits xxs`. ``` permutations xs = xs : concat (newPerms' xs []) where newPerms' xxs is = newPerms xxs is : case xxs of [] -> [] (x:xs) -> newPerms' xs (x:is) newPerms [] is = [] newPerms (t:ts) is = map (++ts) (concatMap (interleave t) (permutations3 is)) ``` inlining `newPerms` into `newPerms'` then gives ``` permutations xs = xs : concat (newPerms' xs []) where newPerms' [] is = [] : [] newPerms' (t:ts) is = map (++ts) (concatMap (interleave t) (permutations is)) : newPerms' ts (t:is) ``` inlining and unfolding `concat`, and moving the final `map (++ts)` into `interleave`, ``` permutations xs = xs : newPerms' xs [] where newPerms' [] is = [] newPerms' (t:ts) is = concatMap interleave (permutations is) ++ newPerms' ts (t:is) where interleave [] = [] interleave (y:ys) = (t:y:ys++ts) : map (y:) (interleave ys) ``` Then finally, we can reapply the `foldr` trick to get rid of the `(++)`: ``` permutations xs = xs : newPerms' xs [] where newPerms' [] is = [] newPerms' (t:ts) is = foldr (interleave id) (newPerms' ts (t:is)) (permutations is) where interleave f [] r = r interleave f (y:ys) r = f (t:y:ys++ts) : interleave (f . (y:)) ys r ``` Wait, I said get rid of the `(++)`. We got rid of one of them, but not the one in `interleave`. For that, we can see that we are always concatenating some tail of `yys` to `ts`. So, we can unfold the calculating `(ys++ts)` along with the recursion of `interleave`, and have the function `interleave' f ys r` return the tuple `(ys++ts, interleave f ys r)`. This gives ``` permutations xs = xs : newPerms' xs [] where newPerms' [] is = [] newPerms' (t:ts) is = foldr interleave (newPerms' ts (t:is)) (permutations is) where interleave ys r = let (_,zs) = interleave' id ys r in zs interleave' f [] r = (ts,r) interleave' f (y:ys) r = let (us,zs) = interleave' (f . (y:)) ys r in (y:us, f (t:y:us) : zs) ``` And there you have it, `Data.List.permutations` in all its maximally lazy optimized glory. --- Great write-up by Twan! 
I (@Yitz) will just add a few references: - The original email thread where Twan developed this algorithm, linked above by Twan, is fascinating reading. - Knuth classifies all possible algorithms that satisfy these criteria in Vol. 4 Fasc. 2 Sec. 7.2.1.2. - Twan's `permutations3` is essentially the same as Knuth's "Algorithm P". As far as Knuth knows, that algorithm was first published by English church bell ringers in the 1600's.
How can I create a table if not exist on Flask with SQLAlchemy? I am using SQLAlchemy and I have the following code: Model: ``` class User(db.Model): __tablename__ = 'user' __table_args__ = {'schema': 'task', 'useexisting': True} id = Column(Integer, primary_key=True, autoincrement=True) firstname = Column(String) ``` .env ``` SQLALCHEMY_DATABASE_URI = os.getenv('SQLALCHEMY_DATABASE_URI') ``` app.py ``` def create_app(config_file): """Create a Flask application using the app factory pattern.""" app = Flask(__name__) """Load configuration.""" app.config.from_pyfile(config_file) """Init app extensions.""" from .extensions import db db.init_app(app) ``` This creates the SQLite file if it does not exist, but not the tables of each model. The question is what can I do in order to create the tables for each model?
Just add:

```
db.create_all()
```

in `app.py` at the end of `create_app()`.

`create_all()` will create the tables only when they don't exist and would not change the tables created before.

If you want to create the database and the tables from the command line you can just type:

```
python
from app import db
db.create_all()
exit()
```

The working example:

```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.secret_key = "Secret key"
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///my_database.sqlite3"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)

class Data(db.Model):
    id = db.Column(db.Integer, primary_key = True)
    name = db.Column(db.String(50))
    email = db.Column(db.String(50))
    phone = db.Column(db.String(50))

db.create_all()

# add a row
# comment out after the 1st run
table_row = Data(name="My Name", email="myemail@mail.com", phone="123456")
db.session.add(table_row)
db.session.commit()
print "A row was added to the table"

# read the data
row = Data.query.filter_by(name="My Name").first()
print "Found:", row.email, row.phone

if __name__ == "__main__":
    app.run(debug=True)
```

This is for Python 2.7; to run it with Python 3.x just change the `print` statements to call the `print()` function.

**NOTE:** When using the **automatic** model class constructor, the arguments passed to it **must** be `keyword` arguments or there will be an error. Otherwise you can override the `__init__()` inside the `Data()` class like this:

```
def __init__(self, name, email, phone, **kwargs):
    super(Data, self).__init__(**kwargs)
    self.name = name
    self.email = email
    self.phone = phone
```

In that case you don't have to use `keyword` arguments.
How to set different tooltip and confirmation titles on the same element? I am using bootstrap-confirmation and tooltip together on the same element. I need different contents for each of them. For example, one could use the **confirm-title** attribute and the other one could use **tooltip-title**. Right now they both use the **title** attribute in the html tag. Is there a way to change the default?

Here is what I have: [jsfiddle](https://jsfiddle.net/wqccrujg/1/)

In this example both the confirmation and the tooltip use the title attribute. But as I said, I want them to use two different attributes. I know that I could use two different nested tags for each, like:

```
<a href='#' title="text1" ...><span title="text2" ...> button </span></a>
```

But what I am looking for is to use a single tag by changing the source of the text.
You can do **exactly** what you want using **tooltip-title** and **confirm-title**. Do this: HTML: ``` <a href="#Delete" class="btn btn-primary btn-large" tooltip-title="Tooltip title" confirm-title="Confirmation title" confirmation tooltip data-singleton="true" data-placement="top" data-popout="true">Click Me</a> ``` JS: ``` // Uses 'tooltip-title' as title $('a[tooltip]').tooltip({title: function() { return $(this).attr("tooltip-title"); }}); // Uses 'confirm-title' attribute for title $('a[confirmation]').confirmation({title: function() { return $(this).attr("confirm-title"); }}); ``` Demo: <https://jsfiddle.net/Drakes/wqccrujg/7/>
Matlab find intervals of repeated elements in vector I would like to know if there is an easy way to do the following in Matlab. Let's say that I have a vector:

```
>> DataVector = [ 1 2 3 3 4 4 4 4 5 6 6 ]
```

I would like to find a function that, for each repeated element, returns the first and last index of the sequence. So for `DataVector` I would like something like:

```
>> Intervals = ComputeIntervals(DataVector)

Intervals =
     3     4
     5     8
    10    11
```

For a simpler case where each element is repeated no more than two times I found this solution

```
>> DataVector = [ 1 2 3 3 4 4 5 6 6 ]

Intervals(:,1) = find(diff(DataVector) == 0)
Intervals(:,2) = find(diff(DataVector) == 0) + 1
```

But when an element is repeated three or more times, as in the general case, I have not found an easy way to generalize.

Thanks in advance.
Adapting from [this answer](https://stackoverflow.com/a/34043536/2586922) to a [similar question](https://stackoverflow.com/q/34041857/2586922): ``` DataVector = [ 1 2 3 3 4 4 4 4 5 6 6 ]; DataVector = DataVector(:); %// make column vector ind = find([1; diff(DataVector); 1]); %// index of each element that differs from the next result = [ind(1:end-1) ind(2:end)-1]; %// starts and ends of runs of equal values result = result(diff(result,[],2)~=0,:) %// keep only runs of length greater than 1 ``` --- If, as in your example, *the values can only repeat in a single run* (so `[1 1 2 2 2 3 3 4]` is allowed but `[1 1 2 2 2 1 1 4]` is not), the following approach using [`unique`](http://es.mathworks.com/help/matlab/ref/unique.html) is also possible: ``` [~, starts] = unique(DataVector(:),'first'); %// first occurrence of each value [~, ends] = unique(DataVector(:),'last'); %// last occurrence of each value result = [starts ends]; result = result(diff(result,[],2)~=0,:); %// keep only runs of length greater than 1 ```
Network Manager or WICD? A couple of years ago when I first began using Ubuntu I had issues with Network Manager and so I switched to wicd which works perfectly. (I forget the exact issues, but wicd solved the problems) I am about to do a fresh install and curious as to whether I should continue with wicd? Or is Network Manager up to the job now? Thanks. ***Addendum*** I ask because a friend recently switched his laptop over to Ubuntu and had wireless troubles until switching over to wicd. My situation is with a desktop using wireless.
If you're having a problem with [Network Manager](http://projects.gnome.org/NetworkManager/) it's likely a problem with the driver. Since it's been a few years, you're probably better off sticking with the default unless you have a problem.

There are two ways to fix wireless problems in Linux: fixing the driver, or working around it. As Dan Williams (who is one of the main developers for network-manager) has [chronicled in his blog](http://blogs.gnome.org/dcbw/category/drivers/), sometimes it takes a long time; there are many moving parts (the kernel, the applet, talking to the manufacturer of the hardware if they even care) and then getting all that shipped to users. This takes longer than working around the issue, but it is a more sustainable model and gets everyone better drivers in the end.

Network Manager and WICD don't exactly have the same feature set yet. WICD tends to deal well with wireless and basic ethernet connectivity, but doesn't yet support VPNs, DSL, 3G/CDMA and many other features which are becoming more and more popular. This isn't meant as a slight towards the WICD folks, it does help people get online and that's great, but fixing it all the way down the stack is better overall for Linux.

Bottom line is, you can choose whichever application you happen to prefer, but if NetworkManager works for you, sticking to the default is probably what will get you the best results in the long run, since you will be able to benefit from the other features if you end up needing them. Network Manager, as the default network tool in Ubuntu, is also supported by the Ubuntu team.

The linux wireless project [maintains a page](http://linuxwireless.org/en/users/Drivers) of wireless cards and chipsets and what features they support, and is a good guide to supporting manufacturers that maintain good drivers.
What is the Scala equivalent of Clojure's Atom? Clojure [has an Atom](http://clojure.org/atoms) for [changing state between threads in a synchronous and independent manner](http://clojure.org/concurrent_programming), [that is not part of the STM](https://stackoverflow.com/a/18976545/15441). You [use it like this](http://clojuredocs.org/clojure_core/clojure.core/atom): ``` user=> (def my-atom (atom 0)) #'user/my-atom user=> @my-atom 0 user=> (swap! my-atom inc) 1 user=> @my-atom 1 user=> (swap! my-atom (fn [n] (* (+ n n) 2))) 4 ``` My question is: **What is the Scala equivalent of Clojure's Atom?**
As @Shepmaster and @om-nom-nom said, it's a wrapper around `java.util.concurrent.atomic.Atomic...`. An equivalent wrapper could look like this: ``` import java.util.concurrent.atomic._ import scala.annotation.tailrec object Atom { def apply[A](init: A): Atom[A] = new Impl(new AtomicReference(init)) private class Impl[A](state: AtomicReference[A]) extends Atom[A] { def apply(): A = state.get() def update(value: A): Unit = state.set(value) def transformAndGet(f: A => A): A = transformImpl(f) @tailrec private final def transformImpl(fun: A => A): A = { val v = state.get() val newv = fun(v) if (state.compareAndSet(v, newv)) newv else transformImpl(fun) } } } trait Atom[A] { def apply(): A def update(value: A): Unit def transformAndGet(f: A => A): A } ``` Ex: ``` val myAtom = Atom(0) myAtom() // --> 0 myAtom.transformAndGet(_ + 1) // --> 1 myAtom() // --> 1 myAtom.transformAndGet(_ * 4) // --> 4 ``` --- If you use [Scala-STM](http://nbronson.github.io/scala-stm/), that functionality is built into STM references, by using the `.single` view: ``` scala> import scala.concurrent.stm._ import scala.concurrent.stm._ scala> val myAtom = Ref(0).single myAtom: scala.concurrent.stm.Ref.View[Int] = scala.concurrent.stm.ccstm.CCSTMRefs$IntRef@52f463b0 scala> myAtom() res0: Int = 0 scala> myAtom.transformAndGet(_ + 1) res1: Int = 1 scala> myAtom() res2: Int = 1 scala> myAtom.transformAndGet(_ * 4) res3: Int = 4 ``` The advantage is that `Ref.apply` will already give you specialised cells for the primitive types, e.g. `Int` instead of `AnyRef` (boxed).
Android widget: Changing item layout in ListView on notifyAppWidgetViewDataChanged

I want to update the layout of some items in a `ListView` in an android app widget if a trigger is given. So I implemented the following in the `getViewAt()` method of my `RemoteViewsService.RemoteViewsFactory`.

```
public RemoteViews getViewAt(int position) {
    ...
    int remoteViewId;
    if (some condition) {
        remoteViewId = R.layout.highlighted_item;
    } else {
        remoteViewId = R.layout.item;
    }
    RemoteViews rv = new RemoteViews(mContext.getPackageName(), remoteViewId);
```

This code works when the widget is loaded for the first time, but when updated using `notifyAppWidgetViewDataChanged` the layout persists and is not changed. How can I update the xml layout used for a ListView item?
## Change background

If my assumption is right and you are trying to highlight a list item by changing the background color or something similar, I'd suggest using a selector drawable instead of changing the layout programmatically:

**drawable/list\_item\_selector.xml**

```
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:state_activated="true" android:drawable="@drawable/list_item_background_activated" />
    <item android:state_pressed="true" android:drawable="@drawable/list_item_background_pressed" />
    <item android:drawable="@drawable/list_item_background" />
</selector>
```

**drawable/list\_item\_background.xml
drawable/list\_item\_background\_pressed.xml
drawable/list\_item\_background\_activated.xml**

Define such a drawable for each state in the selector and replace the 'color' with an appropriate color resource.

```
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle" >
    <solid android:color="color" />
    <stroke android:width="1dp" android:color="color" />
</shape>
```

**Apply the list selector to the ListView in the widget's layout:**

```
<ListView
    android:id="@android:id/list"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:listSelector="@drawable/list_item_selector" />
```

I use this for my own widget and it works perfectly. For further information on state lists see this [link](http://developer.android.com/guide/topics/resources/drawable-resource.html#StateList).

## Change entire layout

In order to change the entire layout, try the following:

```
public class WidgetListProvider implements RemoteViewsFactory {
    private int alternateLayout = 1;

    @Override
    public void onDataSetChanged() {
        alternateLayout = alternateLayout == 1 ? 2 : 1;
    }

    @Override
    public RemoteViews getViewAt(int position) {
        final RemoteViews rv = new RemoteViews(_context.getPackageName(),
                alternateLayout == 1 ? R.layout.list_row : R.layout.list_row2);
        [...]
        return rv;
    }

    @Override
    public int getViewTypeCount() {
        return 2;
    }

    [...]
}
```

**Important**: Note getViewTypeCount() - Because you use two different layouts, you have to return 2 here. If you return 1, a loading view will be displayed for the second layout.
Finding Consecutive Items in List using Linq Say I have the following array of integers: ``` int[] numbers = { 1, 6, 4, 10, 9, 12, 15, 17, 8, 3, 20, 21, 2, 23, 25, 27, 5, 67,33, 13, 8, 12, 41, 5 }; ``` How could I write a Linq query that finds 3 **consecutive** elements that are, say, greater than 10? Also, it would be nice if I could specify I want say the first, second, third etc. group of such elements. For example, the Linq query should be able to identify: 12,15,17 as the first group of consecutive elements 23,25,27 as the second group 67,33,13 as the third group The query should return to me the 2nd group if I specify I want the 2nd group of 3 consecutive elements. Thanks.
**UPDATE:** While not technically a "linq query" as Patrick points out in the comments, this solution is reusable, flexible, and generic. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication32 { class Program { static void Main(string[] args) { int[] numbers = { 1, 6, 4, 10, 9, 12, 15, 17, 8, 3, 20, 21, 2, 23, 25, 27, 5, 67,33, 13, 8, 12, 41, 5 }; var consecutiveGroups = numbers.FindConsecutiveGroups((x) => x > 10, 3); foreach (var group in consecutiveGroups) { Console.WriteLine(String.Join(",", group)); } } } public static class Extensions { public static IEnumerable<IEnumerable<T>> FindConsecutiveGroups<T>(this IEnumerable<T> sequence, Predicate<T> predicate, int count) { IEnumerable<T> current = sequence; while (current.Count() > count) { IEnumerable<T> window = current.Take(count); if (window.Where(x => predicate(x)).Count() >= count) yield return window; current = current.Skip(1); } } } } ``` Output: ``` 12,15,17 23,25,27 67,33,13 ``` To get the 2nd group, change: ``` var consecutiveGroups = numbers.FindConsecutiveGroups((x) => x > 10, 3); ``` To: ``` var consecutiveGroups = numbers.FindConsecutiveGroups((x) => x > 10, 3).Skip(1).Take(1); ``` **UPDATE 2** After tweaking this in our production use, the following implementation is far faster as the count of items in the numbers array grows larger. ``` public static IEnumerable<IEnumerable<T>> FindConsecutiveGroups<T>(this IEnumerable<T> sequence, Predicate<T> predicate, int sequenceSize) { IEnumerable<T> window = Enumerable.Empty<T>(); int count = 0; foreach (var item in sequence) { if (predicate(item)) { window = window.Concat(Enumerable.Repeat(item, 1)); count++; if (count == sequenceSize) { yield return window; window = window.Skip(1); count--; } } else { count = 0; window = Enumerable.Empty<T>(); } } } ```
Goroutine: time.Sleep or time.After I wonder what is the better way to do a wait in a goroutine, `time.Sleep()` or `<-time.After()`? What's the difference between the two and how to make choices? Thanks.
I don't think it matters much for the majority of programs. There has been a [question on golang-nuts about this](https://groups.google.com/forum/#!topic/golang-nuts/9BL6v7Nqj_I) but I don't think one can draw any conclusion. In practice `After` is useful in contexts where one already needs to `select` on a number of channels but would also like a timeout: ``` select { case c := <-someChan: .. case c := <-otherChan: .. case <-time.After(time.Second * 42): } ``` By superficially looking at the code `Sleep` seems simpler while `After` builds a new timer, with a period, a closure to send the time when it finishes etc. Again, I don't think it matters in practice but `time.Sleep` seems pretty readable so I would go with that. --- On my implementation both of them perform the exact same system calls and end up waiting: ``` futex(??, FUTEX_WAIT, 0, {41, 999892351} ^^ 41 seconds and change ```
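To make the comparison concrete, here is a small runnable sketch (the durations and channel names are just illustrative) showing `time.Sleep` for a plain wait inside a goroutine, and `time.After` where it earns its keep, bounding a wait inside a `select`:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	done := make(chan struct{})

	go func() {
		// A plain fixed delay: time.Sleep reads clearly and needs no extra plumbing.
		time.Sleep(2 * time.Second)
		fmt.Println("worker finished")
		close(done)
	}()

	// time.After is the natural fit when the wait has to compete with other channels.
	select {
	case <-done:
		fmt.Println("worker completed before the deadline")
	case <-time.After(5 * time.Second):
		fmt.Println("timed out waiting for the worker")
	}
}
```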
Flex: Display month name in DateField control I'm using the MX DateField control in Flex and want to display the date as 01 Jul 2011 or 01 July 2011. Does anyone know how to do this? I tried setting the formatString to "DD MMM YYYY" but it didn't work.
This works: ``` <fx:Declarations> <mx:DateFormatter id="myDf" formatString="DD MMM YYYY"/> </fx:Declarations> <fx:Script> <![CDATA[ private function formatDate(date:Date):String{ return myDf.format(date); } ]]> </fx:Script> <mx:DateField id="dateField" labelFunction="formatDate" /> ``` Found it in the LiveDocs at <http://livedocs.adobe.com/flex/3/html/help.html?content=controls_12.html> However this does not explain why the formatString property on the component does not work properly. I can confirm that it does not work as expected. Cheers
is there a way to check the next object inside NSArray in a for..in loop? I have this NSArray: ``` NSArray* temp=[[NSArray alloc] initWithObjects:@"one",@"five",@"two",nil]; for(NSString* obj in temp){ NSLog(@"current item:%@ Next item is:%@",obj, ***blank***); } ``` What needs to replace `blank`? Do I need to know the upcoming object?
This only works if your objects are unique (i.e. there are no identical objects in the array):

```
id nxt = nil;
int nxtIdx = [temp indexOfObject:obj] + 1;
if (nxtIdx < temp.count) {
    nxt = [temp objectAtIndex:nxtIdx];
}
NSLog(@"current item:%@ Next item is:%@", obj, nxt);
```

But in my opinion, this is a hack. Why not use a normal for loop with the index of the object:

```
for (int i = 0; i < temp.count; i++) {
    id obj = [temp objectAtIndex:i];
    id next = (i + 1 < temp.count) ? [temp objectAtIndex:i + 1] : nil;
}
```

Or (**recommended**) enumerate it using a block:

```
[temp enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    id next = nil;
    if (idx + 1 < temp.count) {
        next = [temp objectAtIndex:idx + 1];
    }
}];
```
Type variable introduction for existential types Is there any binder in haskell to introduce a type variable (and constraints) quantified in a type ? I can add an extra argument, but it defeats the purpose. ``` {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE KindSignatures #-} {-# LANGUAGE GADTs #-} data Exists x = forall m. Monad m => Exists (m x) convBad :: x -> Exists x convBad x = Exists @m (return @m x, undefined) --Not in scope: type variable ‘m’typecheck data Proxy (m:: * -> *) where Proxy :: Proxy m convOk :: Monad m => x -> Proxy m -> Exists x convOk x (_ :: Proxy m) = Exists (return @m x) ```
To bring type variables into scope, use `forall` (enabled by [`ExplicitForall`](https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/explicit_forall.html), which is implied by `ScopedTypeVariables`): ``` convWorksNow :: forall m x. Monad m => x -> Exists x convWorksNow x = Exists (return @m x) -- Usage: ex :: Exists Int ex = convWorksNow @Maybe 42 ``` But whether you do it like this or via `Proxy`, keep in mind that `m` must be chosen *at the point of creating `Exists`*. So whoever calls the `Exists` constructor must know what `m` is. If you wanted it to be the other way around - i.e. whoever *unwraps* an `Exists` value chooses `m`, - then your `forall` should be on the inside: ``` newtype Exists x = Exists (forall m. Monad m => m x) convInside :: x -> Exists x convInside x = Exists (return x) -- Usage: ex :: Exists Int ex = convInside 42 main = do case ex of Exists mx -> mx >>= print -- Here I choose m ~ IO case ex of Exists mx -> print (fromMaybe 0 mx) -- Here I choose m ~ Maybe ``` --- Also, as @dfeuer points out in the comments, note that your original type definition (the one with `forall` on the outside) is pretty much useless beyond just signifying the type of `x` (same as `Proxy` does). This is because whoever consumes such value must be able to work with *any* monad `m`, and you can do anything with a monad unless you know what it is. You can't bind it inside `IO`, because it's not necessarily `IO`, you can't pattern match it with `Just` or `Nothing` because it's not necessarily `Maybe`, and so on. The only thing you can do with it is bind it with `>>=`, but then you'll just get another instance of it, and you're back to square one.
How to add proper error handling to cats-effect's Resource I am trying to get some basic file IO (write/read) in a purely functional way using [cats-effect](https://typelevel.org/cats-effect/). After following [this](https://typelevel.org/cats-effect/tutorial/tutorial.html) tutorial, here is what I ended up with for reading a file: ``` private def readFile(): IO[String] = for { lines <- bufferedReader(new File(filePath)).use(readAllLines) } yield lines.mkString def bufferedReader(f: File): Resource[IO, BufferedReader] = Resource.make { IO(new BufferedReader(new FileReader(f))) } { fileReader => IO(fileReader.close()).handleErrorWith(_ => IO.unit) } ``` Now in the `handleErrorWith` function I could log any error occuring, but how can I add proper error handling to this (e.g. return a `Resource[IO, Either[CouldNotReadFileError, BufferedReader]]`)?
Proper error handling can be added via the use of `.attempt` on the returned IO value: ``` import scala.collection.JavaConverters._ val resourceOrError: IO[Either[Throwable, String]] = bufferedReader(new File("")) .use(resource => IO(resource.lines().iterator().asScala.mkString)) .attempt ``` If you want to lift that into your own ADT, you can use `leftMap`: ``` import cats.syntax.either._ final case class CouldNotReadError(e: Throwable) val resourceOrError: IO[Either[CouldNotReadError, String]] = bufferedReader(new File("")) .use(resource => IO(resource.lines().iterator().asScala.mkString)) .attempt .map(_.leftMap(CouldNotReadError)) ``` Additionally, you might be interested in the [ZIO](https://scalaz.github.io/scalaz-zio/datatypes/io.html) datatype, which has [supported cats-effect instances](https://scalaz.github.io/scalaz-zio/interop/catseffect.html), and has a slightly different shape of the form `IO[E, A]` where `E` captures the error effect type.
How to send response before actions in spring mvc Say that my spring controller function receives a large amount of data. I want to return 200 OK, given that the data is structured right, and **after that** I want to perform the processing, which might take a while. To my understanding the only way to send response is by **return** command. But I don't want to end the function on response send. Are there other ways to send response to client at the middle of the function? Creating a new thread run is obvious but other languages (JS) let you handle it more elegantly. ``` @RequestMapping(value = Connectors.CONNECTOR_HEARTBEAT, method = RequestMethod.POST) public ResponseEntity<String> doSomething(@RequestBody List<Message> messages) { HttpStatus code = (messages!=null && !messages.isEmpty()) ? HttpStatus.OK : HttpStatus.NOT_FOUND; return new ResponseEntity<String>(res, code); // how do I add code here?? } ```
You can of course do processing after sending the response. The more general way would be to use the `afterCompletion` method of a `HandlerInterceptor`. By construction, it will be executed after the response have been sent to client, but it forces you to split you logic in 2 components the *before* part in controller, and the *after* part in the interceptor. The alternative way is to forget Spring MVC machinery and manually commit the response in the controller: ``` @RequestMapping(value = Connectors.CONNECTOR_HEARTBEAT, method = RequestMethod.POST) public void doSomething(@RequestBody List<Message> messages, HttpServletResponse response) { int code = (messages!=null && !messages.isEmpty()) ? HttpServletResponse.SC_OK : HttpServletResponse.SC_NOT_FOUND; if (code != HttpServletResponse.SC_OK) { response.sendError(code, res); return; } java.io.PrintWriter wr = response.getWriter(); response.setStatus(code); wr.print(res); wr.flush(); wr.close(); // Now it it time to do the long processing ... } ``` Note the void return code to notify Spring that the response have been committed in the controller. As a side advantage, the processing still occurs in the same thread, so you have full access to session scoped attributes or any other thread local variables used by Spring MVC or Spring Security...
Factorial if statement expect '()' found integer I'm new to rust (coming from programming in c/c++ and python) so to learn I'm writing some basic functions. Below I have a factorial function that takes in a signed integer and has two if checks for it. ``` fn factorial(x: i32) -> i32 { let result = if x > 1 { x * factorial(x-1) } else if x <= 1 { 1 }; result } ``` To my knowledge, the if and else-if blocks should handle every case for it. However, when compiling it throws the following error: ``` error[E0317]: `if` may be missing an `else` clause --> src/main.rs:22:12 | 22 | } else if x <= 1 { | ____________^ 23 | | 1 | | - found here 24 | | }; | |_____^ expected `()`, found integer | = note: `if` expressions without `else` evaluate to `()` = help: consider adding an `else` block that evaluates to the expected type error: aborting due to previous error For more information about this error, try `rustc --explain E0317`. error: could not compile `functions` ``` If I replace the else-if with just an else, it compiles just fine. Why do I need to replace it with an else? Shouldn't the previous else-if be good enough?
As the error message says, if an `if` expression doesn't have an `else` then the type of the expression is `()`. This is because the expression can only have one type, and there is no sensible default value in the general case for if the condition evaluated to `false` - that's what `else` is for! In your case, the compiler ***could*** have figured out that the two predicates are in fact exhaustive. It doesn't, and that really is just how it is. If the compiler could detect exhaustiveness in this case it would be weird if it couldn't also detect it in other "obvious" cases. But predicates can be arbitrary expressions and it would be impossible to implement the check in the general case. In this example, the compiler would have to analyse the body of the `random` function to know if the predicates are exhaustive or not: ``` // random(x) might return a different value each time it's called if random(x) > 1 { 1 } else if random(x) <= 1 { 2 } // Not exhaustive! ``` Being consistent seems like the best choice for the language, since you can always add an `else` at the end.
After network restart, device not ready Xubuntu 16.04, Intel 3945ABG Wireless card When my WiFi drops, I disable and re-enable WiFi. Sometimes, this doesn't fix the problem. It will show no networks. I reset the network by `sudo service network-manager restart` and then I get `device not ready` I end up having to reboot and it fixes things temporarily, until the next time. How do I fix this so that I don't have to restart the computer every now and again? As suggested by @VBF below, I ran these commands and got these results when things stopped working. Here is the output of `ifconfig`: ``` enp9s0 Link encap:Ethernet HWaddr 00:18:8b:dd:24:32 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:18 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:10682 errors:0 dropped:0 overruns:0 frame:0 TX packets:10682 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:724653 (724.6 KB) TX bytes:724653 (724.6 KB) ``` The output of `nmcli co`: ``` NAME UUID TYPE DEVICE NETGEAR63 7796f00a-6ffa-4154-aff1-62af90c3d8e8 802-11-wireless -- NETGEAR63 1 979c058f-cb41-4910-bbab-37b86d76bb5e 802-11-wireless -- Samsung Galaxy Avant 4539 b8601380-8e76-4c9c-be74-cfa7f33865be 802-11-wireless -- Wired connection 1 93b93cbb-b063-30aa-8ea4-7cc61db203f6 802-3-ethernet -- ``` The output of the command `nmcli networking connectivity check` was ``` none ``` And and lastly I tried both `nmcli con up id "NETGEAR63 1"` and `nmcli con up id "NETGEAR63"` and got this output: ``` Error: Connection activation failed: No suitable device found for this connection. ``` Update: My laptop uses an Intel 3945ABG wireless card.
From looking around on the internet, the problem may be related to power management for the WiFi card, as discussed on these pages: <http://www.intel.com/content/www/us/en/support/network-and-i-o/wireless-networking/000005875.html> and [Wireless connection keeps dropping with an Intel 3945ABG card](https://askubuntu.com/questions/73607/wireless-connection-keeps-dropping-with-an-intel-3945abg-card)

To solve the problem, power management for the card may have to be disabled. The solutions on the pages above don't work for the version of Ubuntu I have (16.04). To turn off the power management of the wifi, do the following:

Run `iwconfig`. You will see your chip-set as well as whether power management is on or off.

Open `/etc/NetworkManager/conf.d/default-wifi-powersave-on.conf`; you should see

```
[connection]
wifi.powersave = 3
```

Change the 3 to a 2, save and reboot. Run `iwconfig` and you should hopefully see that `Power Management:off`

Source: <https://sites.google.com/site/easylinuxtipsproject/internet>
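If you prefer to make the change from a terminal, a minimal sketch (assuming the file exists with the stock contents shown above) would be:

```
# check the current power management state of the wireless card
iwconfig

# flip the powersave value from 3 to 2
sudo sed -i 's/wifi.powersave = 3/wifi.powersave = 2/' /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf

# reboot, then run iwconfig again and look for "Power Management:off"
```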
How to play multiple sound files on one Web Page? (on click best practice)

I have a page with a list of vocabularies, and a TTS file for each vocabulary. The current approach I am using is to include an mp3 flash player for each vocabulary. This creates a delay while loading all the flash players, because there can be more than 10 vocabularies on one page. Another problem is that the mp3 for the TTS has to be created on page load, which also adds to the loading time.

Some alternative approaches I have in mind are to:

- include only one flash player.
- load and play the file on click, to reduce the page-load time spent on TTS file creation.

So my question is: is there any javascript or jquery plugin that can do either of these two approaches?

Thank you
You can use the `<audio>` tag (HTML5) and you can control when it loads the files. It is supported in most browsers, like Google Chrome, Firefox, Opera...

It has two ways to set the link:

## Way 1

```
<audio src="YOUR FILE LINK HERE">
    <embed>
        <!--FALLBACK HERE (FLASH PLAYER FOR IE)-->
    </embed>
</audio>
```

## Way 2

```
<audio>
    <source src="YOUR FILE LINK HERE (MP3)" type="audio/mpeg" />
    <source src="YOUR OTHER FILE LINK HERE (OGG)" type="audio/ogg" />
    <embed>
        <!--FALLBACK HERE (FLASH PLAYER FOR IE)-->
    </embed>
</audio>
```

Note that the `type` attribute should match each source: `audio/mpeg` (or `audio/mp3`) for the MP3 file and `audio/ogg` for the OGG file.

---

## Attributes

- Put `controls="controls"` if you want it to display the audio player.
- Put `loop="loop"` if you want it to loop the audio.
- Put `autoplay="autoplay"` if you want it to play the audio by itself.
- Put `preload="preload"` if you want it to preload it.

---

## JavaScript Controlling

You can also control it using JavaScript.

To play it: `document.getElementById("YOUR AUDIO TAG")`***`.play()`***

To pause it: `document.getElementById("YOUR AUDIO TAG")`***`.pause()`***

---

## Read more

- <http://w3schools.com/html5/tag_audio.asp>
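To tie this back to the question (a single player that loads and plays on click), a minimal sketch could reuse one `<audio>` element and swap its source when a word is clicked; the element id and file URLs below are made up for illustration:

```
<audio id="player"></audio>

<a href="#" onclick="playWord('tts/vocab1.mp3'); return false;">vocabulary 1</a>
<a href="#" onclick="playWord('tts/vocab2.mp3'); return false;">vocabulary 2</a>

<script>
function playWord(url) {
    var player = document.getElementById("player");
    player.src = url; // the TTS file is only fetched when it is needed
    player.play();    // start playback right away
}
</script>
```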
How to allow Class Property be of multiple/flexible types in c#? In C# I have three classes: Person, Cat, and Dog. Both the Cat and Dog classes have the method Eat(). I want the Person class to have a property ‘Pet’. I want to be able to call the Eat method of both the Cat and Dog via the Person via something like Person.Pet.Eat() but I can’t because the Pet property needs to be either of type Cat or Dog. Currently I’m getting round this with two properties in the Person class: PetDog and PetCat. This is okay for now, but if I wanted a 100 different types of animal as pets then I don’t really want to have 100 different Pet properties in the Person class. Is there a way round this using Interfaces or Inheritance? Is there a way I can set Pet to be of type Object but still access the properties of whichever animal class is assigned to it?
You could have the pets derive from a common base class: ``` public abstract class Animal { protected Animal() { } public abstract void Eat(); } ``` And then have Cat and Dog derive from this base class: ``` public class Cat: Animal { public override void Eat() { // TODO: Provide an implementation for an eating cat } } public class Dog: Animal { public override void Eat() { // TODO: Provide an implementation for an eating dog } } ``` And your Person class will have a property of type `Animal`: ``` public class Person { public Animal Pet { get; set; } } ``` And when you have an instance of Person: ``` var person = new Person { Pet = new Cat() }; // This will call the Eat method from the Cat class as the pet is a cat person.Pet.Eat(); ``` --- You could also provide some common implementation for the `Eat` method in the base class to avoid having to override it in the derived classes: ``` public abstract class Animal { protected Animal() { } public virtual void Eat() { // TODO : Provide some common implementation for an eating animal } } ``` Notice that `Animal` is still an abstract class to prevent it from being instantiated directly. ``` public class Cat: Animal { } public class Dog: Animal { public override void Eat() { // TODO: Some specific behavior for an eating dog like // doing a mess all around his bowl :-) base.Eat(); } } ```
What to do when local usernames conflict with network usernames We use Puppet to manage our Linux desktop machines and SSSD to authenticate our users against a central authentication system. Recently when setting up a few new machines we found that puppet was halting in the middle of installing software packages. The culprit was the kdm package, which tries to add a local 'kdm' user when recently a 'kdm' username was added to the central authority. Normally I see this problem handled with a namespace-dividing mechanism (such as Windows domains), but my short time in Linux administration doesn't really help me figure out a good way to do this. I can figure out maybe a few general ideas of how to fix this (in most elegant to least elegant): 1. Figure out an good way to divide up system usernames from central usernames so such future conflicts won't be a problem. 2. Use some flag for dpkg to force the kdm package to add a different username (or to use nobody). 3. Force dpkg to add the user. This won't allow the user to login to our systems but there's a good possibility this won't be an issue anyway. Of course, (2) and (3) don't fix the underlying issue, but if a solution in the vein of (1) is particularly damaging to our current setup, something like (2) or (3) may be more preferable.
**Come up with a better user naming scheme...** (or force "kdm" to use different login credentials)

I've had to learn this lesson over the years as I inherited commercial Unix systems with three-letter usernames. Moving those servers to Linux exposed conflicts with system service accounts. The worst case was ***Randy P. McDonald***, or userID "*rpm*". The [RPM package manager](http://en.wikipedia.org/wiki/RPM_Package_Manager) in Redhat-based systems uses the "rpm" account.

Other conflicts occurred over time. Usernames "adm", "lp" and "ftp" have been problems at times. My permanent fix was to revise the user naming scheme to be more robust. Three initials are not that scalable.

This is part of knowing your environment. You use desktop Linux (presumably with KDM as a display manager instead of Gnome), and the "kdm" user is key to that from a permissions and systems operation standpoint. Any changes you make to the individual package or `dpkg` would require you to remember that step as you upgrade systems, move to new OS versions, etc. Adding the user will probably result in funky permissions.
Why doesn't $(this).css work in ajax success?

I don't know whether the $(this) selector works inside an ajax success callback or not. Here is the code:

```
$(".up_button").click(function(){
    var idup_post=$(this).attr("data-idupactive");
    var userup_post=$(this).attr("data-userupactive");
    $.ajax({
        url:"ajax/up_actv.php",
        data:{"idup":idup_post,"userup":userup_post},
        type:"POST",
        success:function(data){
            if(data=="upactv"){
                alert(data); //just to check if ajax response is correct
                $(this).css({
                    "background-image":"url('icons/u.png')"
                });
            }
            if(data=="updisv"){
                alert(data); //just to check if ajax response is correct
                $(this).css({
                    "background-image":"url('icons/uf.png')"
                });
            }
        }
    });
});
```

I need to change the background of the selected button, so any help will be welcomed.
Inside the `success` callback, `this` refers to the ajax request's context (by default the settings object of the `$.ajax` call), not the element you clicked. You can change the reference either by using call/apply, or by copying the reference into a variable and using that. Check this snippet:

```
$(".up_button").click(function() {
  var self = this;
  var idup_post = $(this).attr("data-idupactive");
  var userup_post = $(this).attr("data-userupactive");
  $.ajax({
    url: "ajax/up_actv.php",
    data: {
      "idup": idup_post,
      "userup": userup_post
    },
    type: "POST",
    success: function(data) {
      if (data == "upactv") {
        alert(data); //just to check if ajax response is correct
        $(self).css({
          "background-image": "url('icons/u.png')"
        });
      }
      if (data == "updisv") {
        alert(data); //just to check if ajax response is correct
        $(self).css({
          "background-image": "url('icons/uf.png')"
        });
      }
    }
  });
});
```

Hope it helps
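As an alternative to copying `this` into a variable, jQuery's `$.ajax` also accepts a `context` option that sets what `this` refers to inside the callbacks. A sketch of that variant (only the relevant parts shown):

```
$.ajax({
  url: "ajax/up_actv.php",
  data: { "idup": idup_post, "userup": userup_post },
  type: "POST",
  context: this, // make `this` inside success refer to the clicked button
  success: function(data) {
    if (data == "upactv") {
      $(this).css({ "background-image": "url('icons/u.png')" });
    }
  }
});
```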
How to extract elapsedTime attribute values from a file

I wish to extract the elapsedTime attribute values from a file. Records look like this:

```
{"realm":"/test","transactionId":"9e26c614","elapsedTime":17,"elapsedTimeUnits":"MILLISECONDS","_id":"9e26c6asdasd"}
```

The file is several GB in size and I want to get the values greater than 10000. I tried grep, but because of the colons it does not give me the values.

```
grep -wo --color 'elapsedTime' fileName -> this just prints attribute names
grep -w --color "elapsedTime" fileName -> this just highlights the attribute.
```
The data is JSON format so it's best to use a parser that understands this format. This will pick out the `elapsedTime` value from the JSON in the file `/tmp/data` ``` jq .elapsedTime /tmp/data 17 ``` This will pick out only those values larger than 10000 ``` jq '.elapsedTime | select(. > 10000)' /tmp/data ``` If you really cannot use `jq` then a `sed|awk` construct can be considered. However, this requires that there must be only one `elapsedTime` label and associated value per line. There may be other caveats and I really do not recommend it, but if you're desperate here it is, ``` sed -En 's/^.*"elapsedTime":\s*"?([0-9]+).*$/\1/p' /tmp/data | awk '$1 > 10000' ``` In response to a follow-up question ([comment](https://unix.stackexchange.com/questions/672771/how-to-extract-out-elapsedtime-attribute-values-from-file/672773?noredirect=1#comment1269420_672773)), to pick out two elements you need to filter on a single element from the object, and then display the required elements: ``` jq -r 'select (.elapsedTime > 10000) | [ .elapsedTime, .transactionId ] | @tsv ' /tmp/data ```
Validate HWND using Win32 API From the native Win32 API using C++ is there a way to determine whether the window associated with an HWND is still valid?
You could use the Win32 API [IsWindow](http://msdn.microsoft.com/en-us/library/ms633528(VS.85).aspx). **It is not recommended** to use it though for 2 reasons: 1. Windows handles can be re-used once the window is destroyed, so you don't know if you have a handle to an entirely different window or not. 2. The state could change directly after this call and you will think it is valid, but it may really not be valid. From MSDN (same link as above): > > A thread should not use IsWindow for a > window that it did not create because > the window could be destroyed after > this function was called. Further, > because window handles are recycled > the handle could even point to a > different window. > > > **What can be done?** Perhaps your problem can be re-architected so that you do not have the need to check for a valid handle. Maybe for example you can establish a pipe from the client to the server. You could also create a windows hook to detect when certain messages occur, but this is probably overkill for most needs.
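For completeness, a minimal usage sketch (with the caveats above still applying: the result is only a snapshot in time and can become stale immediately):

```
#include <windows.h>

bool IsProbablyValidWindow(HWND hwnd)
{
    // IsWindow returns nonzero if hwnd identifies an existing window right now.
    // The window may still be destroyed, or the handle recycled, a moment later.
    return hwnd != NULL && ::IsWindow(hwnd) != FALSE;
}
```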
iOS multitasking for an Audio Recording application I am writing an application that records audio. I am looking into the feasibility of supporting multitasking while doing audio recordings (in the background). The answer seems to be a *no* from what I've read so far, especially since the program is meant to release any system resources being used when switched out. So I am wondering, is it possible to let the user switch to another application in iOS while my application continues to capture audio in the background?
You can. Skype does this. You presumably need to set `<key>UIBackgroundModes</key><array><string>audio</string></array>` in Info.plist, and you need to make sure that the audio session is active/running/whatever before you switch apps (the assumption is that you won't suddenly start recording/playing music/whatever when your app is in the background). [The docs](http://developer.apple.com/library/ios/#documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BackgroundExecution/BackgroundExecution.html) say that "audio" lets you play audio in the background, but presumably this also applies to recording audio. If it doesn't work, there are a few things you could try: - Set both "voip" and "audio". - Play silence (this might be easiest to do with the Audio Queue API).
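In Info.plist source form, the key mentioned above looks like this:

```
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```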
Disable all payment gateways if there are specific products in the cart

I would like to disable all payment gateways in one special situation: I've 2 special products that I don't want to be combined at checkout with any other product. Let's say that my **"special"** product IDs are **`496`** and **`484`**. All others are **"normal"** products.

1. If one of these **"special"** products is in the cart, I want to disable "paypal", for example.
2. If a customer has in his cart, at once, a **"special"** product and a **"normal"** product, I want to disable **all the payment gateways**, so he can't check out.

This is my code:

```
//disable add to cart if
add_filter( 'woocommerce_available_payment_gateways', 'filter_gateways', 1);
function filter_gateways( $gateways ) {
    global $woocommerce;

    foreach ($woocommerce->cart->cart_contents as $key => $values ) {
        // store product IDs in array
        $nonPPproducts = array(496,484);

        if (in_array( $values['product_id'], $nonPPproducts ) ) {
            unset($gateways['cod'], $gateways['bacs'], $gateways['cheque'], $gateways['stripe']);
        } elseif ( in_array( $values['product_id'], $nonPPproducts ) && in_array( $values['product_id'] ) ) {
            unset($gateways['under-review'], $gateways['cod'], $gateways['bacs'], $gateways['cheque'], $gateways['stripe']);
        }
    }
    return $gateways;
}
```

But I can't figure out why only the first if statement works… In other words, whatever the situation, all payment gateways are disabled except the **`under-review`** payment gateway.

What am I doing wrong? How can I achieve this? Thanks
> > Updated for WooCommerce 3+ > > > First I think that **`in_array( $values['product_id'] )`** in your code is not working as a correct condition and so your else statement is never "true". Then as a customer can have many items in his cart, depending on customer successive choices, with your code there will be many redundant **gateway unsets**… Here it is your code revisited *(you will need to put the desire unset gateways in each statement)*: ``` add_filter( 'woocommerce_available_payment_gateways', 'filter_gateways', 1); function filter_gateways( $gateways ){ // Not in backend (admin) if( is_admin() ) return $gateways; // Storing special product IDs in an array $non_pp_products = array( 496, 484 ); // Needed variables $is_non_prod = false; $is_prod = false; $count = 0; foreach ( WC()->cart->get_cart() as $cart_item ) { // count number of items if needed (optional) $count++; $product = $cart_item['data']; if( ! empty($product) ){ $product_id = method_exists( $product, 'get_id' ) ? $product->get_id() : $product->id; if ( in_array( $product_id, $non_pp_products ) && ! $is_non_prod ) $is_non_prod = true; if ( !in_array( $product_id, $non_pp_products ) && !$is_prod ) $is_prod = true; } } if ( $is_non_prod && ! $is_prod ) // only special products { // unset only paypal; unset( $gateways['paypal'] ); } elseif ( $is_non_prod && $is_prod ) // special and normal products mixed { // unset ALL GATEWAYS unset( $gateways['cod'], $gateways['bacs'], $gateways['cheque'], $gateways['paypal'], $gateways['stripe'], $gateways['under-review'] ); } elseif ( ! $is_non_prod && $is_prod ) // only normal products (optional) { // (unset something if needed) } return $gateways; } ``` *Naturally this code goes on functions.php file of your active child theme or theme.*
End loop with counter and condition In Python I can implement a loop with step counter and a stop condition as a classical case of **for loop** : ``` for i in range(50): result = fun(i) print(i, result) if result == 0: break ``` where `fun(x)` is some arbitrary function from integers to integers. I always in doubts if that is the best way to code it (**Pythonically**, and in terms of **readability** and **efficiency**) or is it better to run it as a **while loop**: ``` i = 0 result = 1 while result != 0 and i < 50: result = fun(i) print(i, result) i += 1 ``` which approach is better? In particular - I'm concerned about the usage of `break` statement which doesn't feel right.
The `for` loop is slightly more performant than the `while` because [`range()` is implemented in C](https://stackoverflow.com/a/869295/6260170), meanwhile the `+=` operation is interpreted and requires [more operations](https://stackoverflow.com/a/869347/6260170) and object creation/ destruction. You can illustrate the performance difference using the [`timeit` module](https://docs.python.org/3/library/timeit.html), for example: ``` from timeit import timeit def for_func(): for i in range(10000): result = int(i) if result == -1: break def while_func(): i = 0 result = 1 while result != -1 and i < 10000: result = int(i) i += 1 print(timeit(lambda: for_func(), number = 1000)) # 1.03937101364 print(timeit(lambda: while_func(), number = 1000)) # 1.21670079231 ``` The `for` loop is arguably [more Pythonic in the vast majority of cases when you wish to iterate over an iterable object](https://stackoverflow.com/a/920692/6260170). Furthermore, to quote the [Python wiki](https://wiki.python.org/moin/WhileLoop): "As the for loop in Python is so powerful, while is rarely used except in cases where a user's input is required". There is nothing un-Pythonic about using a `break` statement *per se*. Readability is mostly subjective, I would say the `for` loop is more readable too, but it probably depends on your previous programming background and experience.
When to use handler.post() & when to new Thread() I'm wondering when should I use `handler.post(runnable);` and when should I use `new Thread(runnable).start();` It is mentioned in developers documentation for Handler: > > Causes the Runnable r to be added to the message queue. The runnable > will be run on the thread to which this handler is attached. > > > Does this mean if I write in the `onCreate()` of `Activity` class: ``` Handler handler = new Handler(); handler.post(runnable); ``` then runnable will be called in a separate thread or in the Activity's thread?
You should use `Handler.post()` whenever you want to do operations on the UI thread. So let's say you want to change a `TextView`'s text in the callback. Because the callback is not running on the UI thread, you should use `Handler.post()`. In Android, as in many other UI frameworks, UI elements (widgets) can be only modified from UI thread. Also note that the terms "UI thread" and "main thread" are often used interchangeably. --- Edit: an example of the long-running task: ``` mHandler = new Handler(); new Thread(new Runnable() { @Override public void run () { // Perform long-running task here // (like audio buffering). // You may want to update a progress // bar every second, so use a handler: mHandler.post(new Runnable() { @Override public void run () { // make operation on the UI - for example // on a progress bar. } }); } }).start(); ``` Of course, if the task you want to perform is really long and there is a risk that user might switch to some another app in the meantime, you should consider using a [Service](http://developer.android.com/guide/components/services.html).
Python: datetime tzinfo time zone names documentation I have a date that I build: ``` from datetime import datetime from datetime import tzinfo test = '2013-03-27 23:05' test2 = datetime.strptime(test,'%Y-%m-%d %H:%M') >>> test2 datetime.datetime(2013, 3, 27, 23, 5) >>> test2.replace(tzinfo=EST) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'EST' is not defined >> test2.replace(tzinfo=UTC) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'UTC' is not defined ``` I can't find documentation on the list of time zone `names` that I can assign to tzinfo in the `replace.tzinfo=` call. I have read through the following and there is nothing: <http://docs.python.org/2/library/datetime.html#tzinfo-objects> I have also searched in google. **Edit**: I followed the solution provided by unutbu but I get the following: ``` >>> test = '2013-03-27 00:05' >>> test '2013-03-27 00:05' >>> test2 = dt.datetime.strp(test, '%Y-%m-%d %H:%M') >>> test2 datetime.datetime(2013, 3, 27, 0, 5) >>> est = pytz.timezone('US/Eastern') >>> utc = pytz.utc >>> print(est.localize(test2)) 2013-03-27 00:05:00-04:00 >>> print(utc.localize(test2)) 2013-03-27 00:05:00+00:00 >>> print(est.localize(test2,is_dst=False)) 2013-03-27 00:05:00-04:00 >>> print(est.localize(test2,is_dst=True)) 2013-03-27 00:05:00-04:00 >>> ``` As you can see even when I provide the `is_dst=` flag the offset is still '-04:00', which is EDT and not EST. I appreciate the help. Thank you. The documentation shows the following: *If you insist on working with local times, this library provides a facility for constructing them unambiguously: <http://pytz.sourceforge.net/#problems-with-localtime>* ``` >>> loc_dt = datetime(2002, 10, 27, 1, 30, 00) >>> est_dt = eastern.localize(loc_dt, is_dst=True) >>> edt_dt = eastern.localize(loc_dt, is_dst=False) >>> print(est_dt.strftime(fmt) + ' / ' + edt_dt.strftime(fmt)) 2002-10-27 01:30:00 EDT-0400 / 2002-10-27 01:30:00 EST-0500 ``` eastern was defined earlier in the documentation as `eastern = timezone('US/Eastern')` This seems to indicate that the `is_dst=` flag should further specify whether day light savings is specified or not. I would appreciate help on why this isn't working in my case.
The standard library does not define any timezones -- at least not well (the toy example given in [the documentation](http://docs.python.org/2/library/datetime.html#tzinfo-objects) does not handle subtle problems like the ones [mentioned here](http://pytz.sourceforge.net/#problems-with-localtime)). For predefined timezones, use the third-party [pytz module](http://pytz.sourceforge.net/). ``` import pytz import datetime as DT eastern = pytz.timezone('US/Eastern') utc = pytz.utc test = '2013-03-27 23:05' ``` This is a "naive" datetime: ``` test2 = DT.datetime.strptime(test, '%Y-%m-%d %H:%M') print(test2) # 2013-03-27 23:05:00 ``` This makes a timezone-aware datetime by interpreting `test2` as if it were in the EST timezone: ``` print(eastern.localize(test2)) # 2013-03-27 23:05:00-04:00 ``` This makes a timezone-aware datetime by interpreting `test2` as if it were in the UTC timezone: ``` print(utc.localize(test2)) # 2013-03-27 23:05:00+00:00 ``` Alternatively, you can convert one timezone-aware datetime to another timezone using the `astimezone` method: ``` test2_eastern = eastern.localize(test2) print(test2_eastern.astimezone(utc)) # 2013-03-28 03:05:00+00:00 ```
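Regarding the EST/EDT confusion in the edit: the `-04:00` offset is expected, because late March is inside US daylight saving time, so US/Eastern local time is EDT. The `is_dst` flag only matters for the ambiguous or non-existent wall-clock times around the transitions. Localizing a winter date shows the `-05:00` EST offset:

```
import pytz
import datetime as DT

eastern = pytz.timezone('US/Eastern')
winter = DT.datetime(2013, 1, 15, 23, 5)
print(eastern.localize(winter))
# 2013-01-15 23:05:00-05:00
```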
Java Inner Class extends Outer Class There are some cases in Java where an inner class extends an outer class. For example, java.awt.geom.Arc2D.Float is an inner class of java.awt.geom.Arc2D, and also extends Arc2D. (c.f. <http://download.oracle.com/javase/6/docs/api/java/awt/geom/Arc2D.Float.html>) Also, sun.org.mozilla.javascript.internal.FunctionNode.Jump extends sun.org.mozilla.javascript.internal.Node, which is a superclass of FunctionNode. (sorry... cannot find a link to the javadoc) To me, this seems odd. Could you then create these? ``` new Arc2D.Float.Float() //n.b. I couldn't get this to compile in Intellij IDEA; new FunctionNode.Jump.Jump.Jump(1); // I could get this to compile ``` What purpose does it serve to have a subclass nested as an inner class of the superclass? I wondered whether it was to access something in the superclass, but if you wanted to access any variables/methods in the parent, you could use ``` super.variable; ``` or ``` super.method(); ``` Edit 1: jjnguy has suggested it's to keep the logic in the same place. In which case, why wouldn't you write a file com.mypackage.AbstractTest: ``` abstract class AbstractTest { abstract String getString(); } class ExtensionTest extends AbstractTest { @Override String getString() { return "hello world"; } } ``` ... rather than: ``` abstract class AbstractTest { abstract String getString(); class ExtensionTest extends AbstractTest { @Override String getString() { return "hello world"; } } } ``` Edit 2: It has rightly been pointed out that the suggestion in my previous edit was flawed, as couldn't construct ExtensionTest outside of the package. However, I've had a further think about this over the weekend, so what about the following: ``` abstract class Test { public class ExtensionTest extends AbstractTest { @Override String getString() { return "hello world"; } } private abstract class AbstractTest { abstract String getString(); } } ``` In essence, the best answer I've seen so far is that having an inner class extend its outer class allows the logic to be grouped together. However, I think that this can be done without the extension. In my mind, it seems like bad design to have a class that can have an infinite number of the same subclasses nested within it. (Context: this came up whilst trying to produce a dictionary for a code completion utility, and threw a StackOverflowException. I found a workaround, but I just cannot understand why it had been designed that way.)
Have a look at Java's [`Point2D`](http://download.oracle.com/javase/6/docs/api/java/awt/geom/Point2D.html). It has two inner classes that are sub-classes of it. The important thing to note is that they are `static` inner classes. This has an entirely different meaning than a regular inner class. Just like a static method, a static class is defined at the class level instead of the object level. In the `Point2D` case, it is done to logically couple the classes and their logic. It helps a user of the `abstract` type `Point2D` find an implementation that they can use.

In response to your edit I'd like to point out 1 important fact. A single Java file may only contain one public class, **except** for public inner classes. While both of your examples may compile, they do not make those classes accessible to the public. If you want to present multiple public classes to someone in a single file, you must use public static inner classes.
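The `Point2D` pattern, reduced to a minimal hypothetical sketch (the class names here are invented for illustration):

```
public abstract class Shape {
    public abstract double area();

    // A static nested subclass: grouped with the abstract type it implements,
    // and usable without an instance of the outer class.
    public static class Circle extends Shape {
        private final double radius;

        public Circle(double radius) {
            this.radius = radius;
        }

        @Override
        public double area() {
            return Math.PI * radius * radius;
        }
    }
}
```

The nested name then documents where the implementation lives at the call site, e.g. `Shape s = new Shape.Circle(2.0);`.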
R ggplot add new roc curve

I want to add an ROC curve to a ggplot chart, but it returns an error code.

```
library(ggplot2)
library(plotROC)

set.seed(2529)
D.ex <- rbinom(200, size = 1, prob = .5)
M1 <- rnorm(200, mean = D.ex, sd = .65)
M2 <- rnorm(200, mean = D.ex, sd = 1.5)

test <- data.frame(D = D.ex, D.str = c("Healthy", "Ill")[D.ex + 1],
                   M1 = M1, M2 = M2, stringsAsFactors = FALSE)

plot<-ggplot(longtest, aes(d = D, m = M1 )) + geom_roc() + style_roc()
plot
```

It's OK, but if I add a new ROC line it returns an error:

```
plot<-ggplot(longtest, aes(d = D, m = M1 )) + geom_roc() + style_roc()
plot+ggplot(test, aes(d = D, m = M2)) + geom_roc()
```

> Error in p + o : non-numeric argument to binary operator In addition:
> Warning message: Incompatible methods ("+.gg", "Ops.data.frame") for
> "+"

How can I add a new line, give each line a different color, and add a legend?
Melt the data frame from wide to long format, then map the variable name to line color within the aesthetics mappings: ``` ggplot(melt_roc(test, "D", c("M1", "M2")), aes(d = D, m = M, color = name)) + geom_roc() + style_roc() ``` [![enter image description here](https://i.stack.imgur.com/BsjyZ.gif)](https://i.stack.imgur.com/BsjyZ.gif) --- You could also do it like this, if you want: ``` ggplot() + geom_roc(aes(d = D, m = M1, color="roc1"), test) + geom_roc(aes(d = D, m = M2, color="roc2"), test) + scale_color_manual(values=c("roc1"="red", "roc2"="blue"), name="color legend", guide="legend") + style_roc() ```
How to make multiple mysql queries in Node with promises G'day all, I'm trying to convert some old php code over to Node, and part of the journey has been trying to figure out the best way to perform sql queries against my database (I'm using SQL so I can port the existing database over). I've got them working, but have encountered the "Pyramid of Doom" problem, and it's subsequent scope issues (i.e. the returned values not baing available to subsequent "then"s). An example of the sort of code I have here is: (dbPool.queryOPromise returns a query wrapped in a promise) ``` dbPool.queryOPromise(query) .then(function(result){ console.log(result); var query = { sql:"INSERT INTO newusers (newuserid, ipaddress, email) VALUES (?,?,?)", values: [newuserid, ipAddress, email] }; dbPool.queryOPromise(query) .then(function(value){ console.log(value); if(value.code==200) { res.status(200).json({code:200, status:"New User Created"}); } else { res.status(400).json({code:value.code, status:"Error creating new user: ".value.status}); } }) }) ``` Does anyone have a view on the best way to attack this situation? Thanks!
You're supposed to ***return*** the subsequent promises instead of calling `.then` on them ``` dbPool.queryOPromise(query) .then(function(result) { console.log(result); var query = { sql: "INSERT INTO newusers (newuserid, ipaddress, email) VALUES (?,?,?)", values: [newuserid, ipAddress, email] }; // RETURN the second promise, return dbPool.queryOPromise(query); }) .then(function(value) { console.log(value); if (value.code == 200) { res.status(200).json({code: 200, status: "New User Created"}); } else { res.status(400).json({code: value.code, status: "Error creating new user: ".value.status }); } }) .catch(console.error); // and always catch the errors at the end. ``` It's a #1 rookie mistake in using promises. [Checkout this wonderfully written article addressing issues exactly like this](http://pouchdb.com/2015/05/18/we-have-a-problem-with-promises.html)
Variable that can contain most memory space? What type of variable that can contain 1,000,000,000(a decimal number) takes the most memory space? 1. int in C 2. string in C 3. string in Java(which uses unicode)
A Java String. Under the hood A Java String consists of an object with 3 fields, one of which points to a separate array object containing the characters. Plus of course, Java Strings are composed of 16 bit characters1. If you are worried about memory usage over all other criteria, don't use Java. But for most applications, memory usage is the least of your concerns. It is worth noting that 1,000,000,000 can be represented using a Java `int` which will be the same size as a C signed or unsigned (32 bit) integer. Furthermore, a C `int` is *not necessarily* big enough to represent 1,000,000,000. On some platforms, `int` is 16 bits, and this is allowed by the C standard. --- 1 - Actually, this is Java platform dependent. For example, in Java 9 they modified the `String` implementation to use one `byte` per character for strings that are composed entirely of characters in the range 0 to 255. See [this article](https://www.vojtechruzicka.com/java-9-compact-strings/). But despite this, a Java string still takes more space than a C string.
How to fetch data only once when using FutureBuilder? I have a fetch request and I am passing it as a future to FutureBuilder - ``` FutureBuilder( future: gettask(), builder: (context, snapshot){ if(snapshot.hasData){ // Stopwatch stopwatchbefore = new Stopwatch()..start(); // print('futurebuilder executed in ${stopwatchbefore.elapsed}'); return SingleChildScrollView(child: listViewWidget(snapshot.data)); }else{ return Center(child: CircularProgressIndicator()); } }, ) ``` The method gettask() however, seems to be fetching events repeatedly. I tried to use <https://pub.dev/packages/memoize> and <https://api.flutter.dev/flutter/package-async_async/AsyncMemoizer-class.html> but i think they are deprecated since it says that AsyncMemoizer is undefined (no import option available). I have also tried <https://github.com/flutter/flutter/issues/11426#issuecomment-414047398> But i wanted to explore an option similar to memoizer. are there any alternatives to fetching data only once (something similar to memoizer)?
Use `StatefulWidget`, then create an `Future` variable (like `_getTaskAsync`) inside `State`. Assign `gettask()` to that future variable in `initState`. Then use that variable as argument to `FutureBuilder`(like `future: _getTaskAsync`) **Code:** ``` class _MyStatefulWidgetState extends State<MyStatefulWidget> { Future _getTaskAsync; ... @override void initState() { _getTaskAsync = gettask(); super.initState(); } ... FutureBuilder( future: _getTaskAsync, builder: (context, snapshot) { if (snapshot.hasData) { //Stopwatch stopwatchbefore = new Stopwatch()..start(); //print('futurebuilder executed in ${stopwatchbefore.elapsed}'); return SingleChildScrollView(child: listViewWidget(snapshot.data)); } else { return Center(child: CircularProgressIndicator()); } }, ); ``` Refer the [document](https://api.flutter.dev/flutter/widgets/FutureBuilder-class.html)
how to make sysfs changes persistent in centos 7 (systemd) Trying to fix up the fn keys on my apple keyboard on CentOS 7, I've set ``` $ cat /etc/modprobe.d/hid_apple.conf options hid_apple fnmode=2 ``` and yet after a reboot ``` $ cat /sys/module/hid_apple/parameters/fnmode 1 ``` Suggestions on the internet include running update-initramfs, which doesn't seem to exist on Centos 7, and doing the "echo 2 >> /sys/module/hid\_apple/parameters/fnmode" in /etc/rc.local, which of course doesn't exist at all any more under systemd. What's the right way to persist that setting?
There are 3 ways in which you can achieve this: 1. rc.local (Still works, remember to chmod +x after adding your lines) 2. systemd 3. udev rules (My own preferred) With systemd: ``` # /etc/systemd/system/hid_apple_fnmode_set.service [Unit] Description=Set Apple keyboard fn mode After=multi-user.target [Service] ExecStart=/usr/bin/bash -c '/usr/bin/echo 2 > /sys/module/hid_apple/parameters/fnmode' [Install] WantedBy=graphical.target ``` Followed by this to make the service run at boot. ``` sudo systemctl enable hid_apple_fnmode_set.service ``` With udev rules: ``` # /etc/udev/rules.d/99-hid_apple.rules SUBSYSTEM=="module", DRIVER=="hid_apple", ATTR{parameters/fnmode}="2" ``` The systemd script and udev rules are put together with some wild guesses, might take some tweaking to work. The following commands can help adjust and debug the udev rule: ``` udevadm info --attribute-walk --path=/module/hid_apple udevadm test /sys/module/hid_apple/ ```
servicestack self-hosted service uses chunked encoding - is unbuffered? I am trying to learn ServiceStack with the hello world examples and self-hosted example. I am making requests for JSON content. I have noticed the following in the response headers: **Basic service hosted in an ASP.Net project:** ``` HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Wed, 10 Apr 2013 12:49:46 GMT X-AspNet-Version: 4.0.30319 X-Powered-By: ServiceStack/3.943 Win32NT/.NET Cache-Control: private Content-Type: application/json; charset=utf-8 Content-Length: 16 <------------------------------------- Connection: Close ``` **Same basic service, self-hosting (command line):** ``` HTTP/1.1 200 OK Transfer-Encoding: chunked <------------------------------- Content-Type: application/json; charset=utf-8 Server: Microsoft-HTTPAPI/2.0 X-Powered-By: ServiceStack/3.943 Win32NT/.NET Date: Wed, 10 Apr 2013 12:48:50 GMT ``` It seems the self-hosted variety does not buffer it's responses? Is this a performance or compatibility concern? How can I turn on buffering when using the self-hosting method? Many thanks.
**How can I turn on buffering when using the self-hosting method?** You could create a ResponseFilter like below. I would say this is kind of aggressive and it would prevent other ResponseFilters from running. You could turn it into a [Filter Attribute](https://github.com/ServiceStack/ServiceStack/wiki/Filter-attributes) and only use it when there is a clear performance benefit for the Response. Otherwise, I would just let the AppHost handle the Response. ``` ResponseFilters.Add((httpReq, httpRes, dto) => { using (var ms = new MemoryStream()) { EndpointHost.ContentTypeFilter.SerializeToStream( new SerializationContext(httpReq.ResponseContentType), dto, ms); var bytes = ms.ToArray(); var listenerResponse = (HttpListenerResponse)httpRes.OriginalResponse; listenerResponse.SendChunked = false; listenerResponse.ContentLength64 = bytes.Length; listenerResponse.OutputStream.Write(bytes, 0, bytes.Length); httpRes.EndServiceStackRequest(); } }); ```
Java string encrypt I am using the encryption class in Objective C for my iPhone app but I am struggling to get the same functionality working in JAVA from my android app. My encryption code is below: ``` NSString * _secret = @"password"; NSString * _key = @"1428324560542678"; StringEncryption *crypto = [[StringEncryption alloc] init]; NSData *_secretData = [_secret dataUsingEncoding:NSUTF8StringEncoding]; CCOptions padding = kCCOptionPKCS7Padding; NSData *encryptedData = [crypto encrypt:_secretData key:[_key dataUsingEncoding:NSUTF8StringEncoding] padding:&padding]; ``` I have tried to replicate it in JAVA but I get a different string when I encode the same data. So I am doing something wrong but I can't figure it out. Here is my JAVA code: ``` byte[] key = "1428324560542678".getBytes(); Cipher c = null; try { c = Cipher.getInstance("AES/ECB/PKCS7Padding"); } catch (NoSuchAlgorithmException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (NoSuchPaddingException e) { // TODO Auto-generated catch block e.printStackTrace(); } SecretKeySpec k = new SecretKeySpec(key, "AES"); try { c.init(Cipher.ENCRYPT_MODE, k); } catch (InvalidKeyException e) { // TODO Auto-generated catch block e.printStackTrace(); } try { EditText tv1passwordText = (EditText) findViewById(R.id.password); String password = URLEncoder.encode(tv1passwordText.getText().toString(), "UTF-8"); byte[] encryptedData = c.doFinal( password.getBytes()); ``` Can anyone see where I am going wrong? Based on the comments below I added getBytes but the strings produced are still different: ``` byte[] key = null; try { key = "1428324560542678".getBytes("UTF-8"); } catch (UnsupportedEncodingException e2) { // TODO Auto-generated catch block e2.printStackTrace(); } Cipher c = null; try { c = Cipher.getInstance("AES/ECB/PKCS7Padding"); } catch (NoSuchAlgorithmException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (NoSuchPaddingException e) { // TODO Auto-generated catch block e.printStackTrace(); } SecretKeySpec k = new SecretKeySpec(key, "AES"); try { c.init(Cipher.ENCRYPT_MODE, k); } catch (InvalidKeyException e) { // TODO Auto-generated catch block e.printStackTrace(); } try { EditText tv1passwordText = (EditText) findViewById(R.id.password); byte[] password = tv1passwordText.getText().toString().getBytes("UTF-8"); byte[] encryptedData = c.doFinal(password); ```
Here is a sample of encryption and decryption:

```
public static SecretKey generateKey(String password) throws UnsupportedEncodingException {
    // Note: an AES key must be exactly 16, 24 or 32 bytes long,
    // so the password passed in here has to have a matching byte length.
    return new SecretKeySpec(password.getBytes("UTF-8"), "AES");
}

public static byte[] encryptMsg(String message, SecretKey secret) throws NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeyException, InvalidParameterSpecException, IllegalBlockSizeException, BadPaddingException, UnsupportedEncodingException {
    /* Encrypt the message. */
    Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
    cipher.init(Cipher.ENCRYPT_MODE, secret);
    byte[] cipherText = cipher.doFinal(message.getBytes("UTF-8"));
    return cipherText;
}

public static String decryptMsg(byte[] cipherText, SecretKey secret) throws NoSuchPaddingException, NoSuchAlgorithmException, InvalidParameterSpecException, InvalidAlgorithmParameterException, InvalidKeyException, BadPaddingException, IllegalBlockSizeException, UnsupportedEncodingException {
    /* Decrypt the message with the same key. */
    Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
    cipher.init(Cipher.DECRYPT_MODE, secret);
    String decryptString = new String(cipher.doFinal(cipherText), "UTF-8");
    return decryptString;
}
```

To encrypt:

```
SecretKey secret = EncUtil.generateKey("1428324560542678"); // a 16-byte key
byte[] cipherText = EncUtil.encryptMsg(<String to Encrypt>, secret);
```

to decrypt

```
EncUtil.decryptMsg(<byte[]>, secret);
```
Content Manager configuration snap-in "Could not read configuration item" I am on a Tridion 2011 SP1 CM server and I’m trying to start the `SDL Tridion Content Manager configuration` MMC snap-in. I get the following error: > > Could not read configuration item. Modification of this item is not > available on this machine. Account has no permission to access the > protected configuration section 'tridion.security'. Contact your > system administrator. > > > My user is of course part of the local admins. What is going on? how to fix it?
The Content Manager uses a .NET encryption key to ensure the encryption of sensitive configuration data such as passwords. By default nothing is encrypted. The following user accounts automatically have access to this encryption key: - Any Content Manager system account (including the Content Manager user account and impersonation user accounts created during installation) - The user account of the user who originally ran the installer The use of the configuration encryption functionality is completely transparent, so long as the following is true: - The user account that runs the SDL Tridion MMC Snap-in configuration tool is the same user account that originally ran the installer. - The user executing the various SDL Tridion Windows services is not changed from its default value. If you want to run the Snap-in and/or Windows services as another user than specified, you must grant that new user access to the encryption key. To grant this access, log on as the user account of the user who originally ran the installer, or as another, similarly authorized user with access to the encryption key, and do the following: 1. Open a Windows command prompt. 2. Go to a directory on your machine on which a version of the .NET Framework is installed (a subdirectory of `C:\Windows\Microsoft.NET\Framework\` or `C:\Windows\Microsoft.NET\Framework64\`). 3. Enter the following command: `aspnet_regiis -pa "TridionRsaKeyContainer" "<domain>\<account>"` where `<domain>` is the domain of this user and `<account>` is the username of the user.
How to run robot framework test cases parallel and not Test Suite parallel? I'm trying to run my test case from different suites in parallel using the command ``` pabot --verbose --processes 3 --variable --variable url:http://xxxxxxxxx:8080 --include Sanity --output original.xml --randomize all TestCases ``` There are two findings while execution: 1. The suites are executed parallel and not the test case. i.e if there are two suite A and B , if A take 30 mins to complete and B takes 5 mins to complete, the total execution time is 30 mins, simply because each processes pick each suite and not test cases How can i run the test cases parallel and not the Suite parallel ? 2. It creates outputdir separately for each Test Suite `pabot_results\TestCases` that makes my rerunning of failed test cases difficult. How to get a single output.xml file all the suite execution ? I use the below library > > robotframework-pabot==0.53 > > robotframework-seleniumlibrary==3.3.1 > > >
First point: If you read [the GitHub readme page](https://github.com/mkorpela/pabot/blob/master/README.md), in the "Things you should know", it states: > > Pabot will split test execution from suite files and not from individual test level. > > > So there is nothing to do on test level, except if you help develop the tool so it becomes possible to launch testcases in parallel. Second point: Use [rebot](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#rebot). You can merge multiple test execution reports by using the command: ``` rebot --merge output1.xml output2.xml ``` This will generate only html report. To obtain a merged output.xml file, just add an `-o output.xml` in the arguments, like this: ``` rebot -o output.xml --merge output1.xml output2.xml ```
Does gforth contain network socket capability? Often, when learning a language I'll write a server of some sort. Does [gforth](https://github.com/forthy42/gforth) have the capability to use network sockets? I don't see anything about sockets in [the manual](https://www.complang.tuwien.ac.at/forth/gforth/Docs-html/).
Though I don't see any documentation about it, there is a [`socket.fs`](https://github.com/forthy42/gforth/blob/master/unix/socket.fs) which binds to libc. - [You can find some examples of FORTH that use sockets.fs on Rosetta Code](https://rosettacode.org/wiki/Category:Forth), specifically the [`ECHO` server example](https://rosettacode.org/wiki/Echo_server#Forth) Provided under the [GNU FDL, from Rossetta code by IanOsgood](https://rosettacode.org/wiki/Echo_server#Forth) ([commit](https://rosettacode.org/mw/index.php?title=Echo_server&type=revision&diff=59626&oldid=50642)) ``` include unix/socket.fs 128 constant size : (echo) ( sock buf -- sock buf ) begin cr ." waiting..." 2dup 2dup size read-socket nip dup 0> while ." got: " 2dup type rot write-socket repeat drop drop drop ; create buf size allot : echo-server ( port -- ) cr ." Listening on " dup . create-server dup 4 listen begin dup accept-socket cr ." Connection!" buf ['] (echo) catch cr ." Disconnected (" . ." )" drop close-socket again ; 12321 echo-server ``` However, ymmv ``` nc localhost 12321 PING PING PONG PONG ``` There are no Keepalives so you'll logically get disconnects from that.
How to write a linux daemon with .Net Core I could just write a long-running CLI app and run it, but I'm assuming it wouldn't comply to all the expectations one would have of a standards-compliant linux daemon (responding to SIGTERM, Started by System V init process, Ignore terminal I/O signals, [etc.](https://www.python.org/dev/peps/pep-3143/#id1)) Most ecosystems have some best-practice way of doing this, for example, in python, you can use <https://pypi.python.org/pypi/python-daemon/> Is there some documentation about how to do this with .Net Core?
I toyed with an idea similar to how .net core web host waits for shutdown in console applications. I was reviewing it on GitHub and was able to extract the gist of how they performed the `Run` <https://github.com/aspnet/Hosting/blob/15008b0b7fcb54235a9de3ab844c066aaf42ea44/src/Microsoft.AspNetCore.Hosting/WebHostExtensions.cs#L86> ``` public static class ConsoleHost { /// <summary> /// Block the calling thread until shutdown is triggered via Ctrl+C or SIGTERM. /// </summary> public static void WaitForShutdown() { WaitForShutdownAsync().GetAwaiter().GetResult(); } /// <summary> /// Runs an application and block the calling thread until host shutdown. /// </summary> /// <param name="host">The <see cref="IWebHost"/> to run.</param> public static void Wait() { WaitAsync().GetAwaiter().GetResult(); } /// <summary> /// Runs an application and returns a Task that only completes when the token is triggered or shutdown is triggered. /// </summary> /// <param name="host">The <see cref="IConsoleHost"/> to run.</param> /// <param name="token">The token to trigger shutdown.</param> public static async Task WaitAsync(CancellationToken token = default(CancellationToken)) { //Wait for the token shutdown if it can be cancelled if (token.CanBeCanceled) { await WaitAsync(token, shutdownMessage: null); return; } //If token cannot be cancelled, attach Ctrl+C and SIGTERN shutdown var done = new ManualResetEventSlim(false); using (var cts = new CancellationTokenSource()) { AttachCtrlcSigtermShutdown(cts, done, shutdownMessage: "Application is shutting down..."); await WaitAsync(cts.Token, "Application running. Press Ctrl+C to shut down."); done.Set(); } } /// <summary> /// Returns a Task that completes when shutdown is triggered via the given token, Ctrl+C or SIGTERM. /// </summary> /// <param name="token">The token to trigger shutdown.</param> public static async Task WaitForShutdownAsync(CancellationToken token = default (CancellationToken)) { var done = new ManualResetEventSlim(false); using (var cts = CancellationTokenSource.CreateLinkedTokenSource(token)) { AttachCtrlcSigtermShutdown(cts, done, shutdownMessage: string.Empty); await WaitForTokenShutdownAsync(cts.Token); done.Set(); } } private static async Task WaitAsync(CancellationToken token, string shutdownMessage) { if (!string.IsNullOrEmpty(shutdownMessage)) { Console.WriteLine(shutdownMessage); } await WaitForTokenShutdownAsync(token); } private static void AttachCtrlcSigtermShutdown(CancellationTokenSource cts, ManualResetEventSlim resetEvent, string shutdownMessage) { Action ShutDown = () => { if (!cts.IsCancellationRequested) { if (!string.IsNullOrWhiteSpace(shutdownMessage)) { Console.WriteLine(shutdownMessage); } try { cts.Cancel(); } catch (ObjectDisposedException) { } } //Wait on the given reset event resetEvent.Wait(); }; AppDomain.CurrentDomain.ProcessExit += delegate { ShutDown(); }; Console.CancelKeyPress += (sender, eventArgs) => { ShutDown(); //Don't terminate the process immediately, wait for the Main thread to exit gracefully. eventArgs.Cancel = true; }; } private static async Task WaitForTokenShutdownAsync(CancellationToken token) { var waitForStop = new TaskCompletionSource<object>(); token.Register(obj => { var tcs = (TaskCompletionSource<object>)obj; tcs.TrySetResult(null); }, waitForStop); await waitForStop.Task; } } ``` I tried adapting something like a `IConsoleHost` but quickly realized I was over-engineering it. 
Extracted the main parts into something like `await ConsoleUtil.WaitForShutdownAsync();` that operated like `Console.ReadLine` This then allowed the utility to be used like this ``` public class Program { public static async Task Main(string[] args) { //relevant code goes here //... //wait for application shutdown await ConsoleUtil.WaitForShutdownAsync(); } } ``` from there creating a *systemd* as in the following link should get you the rest of the way [Writing a Linux daemon in C#](https://developers.redhat.com/blog/2017/06/07/writing-a-linux-daemon-in-c/)
std::max() and std::min() not constexpr I just noticed that the new standard defines `min(a,b)` and `max(a,b)` **without** `constexpr`. Examples from 25.4.7, [alg.min.max]: ``` template<class T> const T& min(const T& a, const T& b); template<class T> T min(initializer_list<T> t); ``` Isn't this a pity? I would have liked to write ``` char data[ max(sizeof(A),sizeof(B)) ]; ``` instead of ``` char data[ sizeof(A) > sizeof(B) ? sizeof(A) : sizeof(B) ]; char data[ MAX(sizeof(A),sizeof(B)) ]; // using a macro ``` Any reason why those **can not be `constexpr`**?
## Critical Update *The below analysis is wrong, because it confuses one important thing*. The following statement I did missed one important detail, which requires an entirely different answer. > > The unnamed reference `max` returns will refer to that operand. > > > The problem here is that *function invocation substitution* is **done** at that point. If the invocation susbstitution would include the lvalue to rvalue conversion on that glvalue that `max` yields, everything would be fine, because reading from a glvalue that refers to a temporary not of static storage duration is fine *during computation of the constant expression*. But since the read happens outside of function invocation substitution, the result of function invocation substitution is an *lvalue*. The respective text of the spec says > > A reference constant expression is an lvalue core constant expression that designates an object with static storage duration or a function. > > > But the reference that `max` returns yields an lvalue that designates an object of unspecified storage duration. Function invocation substitution is required to yield a *constant expression*, not merely a *core* constant expression. So `max(sizeof(A), sizeof(B))` is not guaranteed to work. **The following (older) text needs to be read taking the above into account**. --- I can't see any reason at the moment why you wouldn't want to stick a `constexpr` there. Anyway, the following code definitely is useful ``` template<typename T> constexpr T const& max(T const& a, T const& b) { return a > b ? a : b; } ``` Contrary to what other answers write, I think this is legal. Not all instantiations of `max` are required to be constexpr functions. The current n3242 says > > If the instantiated template specialization of a constexpr function template or member function of a class template would fail to satisfy the requirements for a constexpr function or constexpr constructor, that specialization is not a constexpr function or constexpr constructor. > > > If you call the template, argument deduction will yield a function template specialization. Calling it will trigger *function invocation substitution*. Consider the following call ``` int a[max(sizeof(A), sizeof(B))]; ``` It will first do an implicit conversion of the two `size_t` prvalues to the two reference parameters, binding both references to temporary objects storing their value. The result of this conversion is a *glvalue* for each case that refers to a temporary object (see 4p3). Now function invocation substitution takes those two glvalues and substitutes all occurences of `a` and `b` in the function body by those glvalues ``` return (<glval.a>) > (<glval.b>) ? (<glval.a>) : (<glval.b>); ``` The condition will require lvalue to rvalue conversions on these glvalues, which are allowed by 5.19p2 > > - a glvalue of literal type that refers to a non-volatile temporary object initialized with a constant expression > > > The conditional expression will yield a glvalue to either the first or second operand. The unnamed reference `max` returns will refer to that operand. And the final lvalue to rvalue conversion happening in the array dimension size specification will be valid by the same rule quoted above. --- Note that `initializer_list` currently doesn't have `constexpr` member functions. This is a known limitation and will be handled post-C++0x, most likely making those members `constexpr`.
Amazon CDK and boto3 difference I am new to AWS with Python. I came across boto3 initially; later someone suggested the CDK. What is the difference between AWS CDK and boto3?
In simple terms, CDK helps you to programmatically create AWS resources(Infrastructure as Code) while boto3 helps you to programmatically access AWS services. Here is a snippet on CDK and Boto3 from AWS reference links : **CDK:** The AWS Cloud Development Kit (AWS CDK) is an open source software development framework to define your cloud application resources using familiar programming languages. AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. It also enables you to compose and share your own custom constructs that incorporate your organization's requirements, helping you start new projects faster. (Reference: <https://aws.amazon.com/cdk/>) With CDK and Cloudformation, you will get the benefits of repeatable deployment, easy rollback, and drift detection. (Reference: <https://aws.amazon.com/cdk/features/>) **Boto3:** Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2. (Reference: <https://pypi.org/project/boto3/>)
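To make the contrast concrete, here is a rough sketch, assuming `boto3` and the CDK v2 Python package (`aws-cdk-lib`) are installed, credentials are configured, and the bucket names are made up; in practice the two halves would live in separate programs:

```
# boto3: talk to AWS services at runtime, e.g. list buckets that already exist.
import boto3

s3_client = boto3.client("s3")
print([b["Name"] for b in s3_client.list_buckets()["Buckets"]])

# CDK (v2): *define* infrastructure; "cdk deploy" provisions it via CloudFormation.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3

app = App()
stack = Stack(app, "DemoStack")
s3.Bucket(stack, "DemoBucket", versioned=True)
app.synth()
```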
Performance: toFixed() vs. Math.floor(x \* 10000) I need to compare two float values to a certain precision (that is 4 decimal places): ``` var float1 = 0.0025132741228718345; var float2 = 0.0025132812393818293; ``` The two options I see: ``` Math.floor(float1 * 10000) === Math.floor(float2 * 10000); // 25 === 25 ``` ...or: ``` float1.toFixed(4) === float2.toFixed(4) // "0.0025" === "0.0025" ``` Since operation will happen 60 times a second and I was asking myself: 1. which of the options has the higher performance? 2. is one of the options more widely accepted? 3. is there a third option?
``` function floor(f1, f2) { return Math.floor(f1 * 10000) === Math.floor(f2 * 10000); } function toFixed(f1, f2) { return f1.toFixed(4) === f2.toFixed(4); } function subtract(f1, f2) { return Math.abs(f1 - f2) < 0.00001; } function test(fn) { console.time(fn.name); for (let i = 0; i < 1000000; ++i) { fn(Math.random(), Math.random()); } console.timeEnd(fn.name); } for (const fn of [floor, toFixed, subtract]) { test(fn); } ``` # Engine benchmarks ## v8 (Chromium-based browsers) - floor: 204.911 ms - toFixed: 4145.529 ms - subtract: 292.390 ms ## SpiderMonkey (Firefox-based browsers) - floor: 566.81ms - toFixed: 683.56ms - subtract: 423.76ms Between the two options that you gave, The `Math.floor` approach is the faster one. Might be a wise choice to go for `subtract`, though. (Run this benchmark yourself if you don't believe me.)
How to validate multidimensional arrays with Codeigniter and Jquery Hi I need to validate a multidimensional form like this ``` <input type="text" class="input-xlarge span5 req" id="contact_first_name" name="hotel[<?=$id?>][contact_first_name]" value="<?= set_value('hotel[contact_first_name]') ?>"> <input type="text" class="input-xlarge span5 req" id="contact_last_name" name="hotel[<?=$id?>][contact_last_name]" value="<?= set_value('hotel[contact_last_name]') ?>"> ``` I don't know the dimensions of the final array because the inputs are added dynamically via jquery. I'm using Codeigniter Form\_Validation for the server-side and via JQuery with the JQuery Validator for the client-side. This is my form\_validation rules ``` $config['add_hotel'] = array( array( 'field' => 'hotel[][hotel_name]', 'label' => 'Hotel Name', 'rules' => 'required' ), array( 'field' => 'hotel[][contact_first_name]', 'label' => 'First Name', 'rules' => 'trim|required' ), array( 'field' => 'hotel[][contact_last_name]', 'label' => 'Last Name', 'rules' => 'trim|required' ), ``` and this is how i'm doing it via jquery validator ``` $("#add_hotel").validate({ rules: { "hotel[][hotel_name]": "required" /* errorElement: "div", wrapper: "div"*/ }, messages: { "hotel[][hotel_name]": "Please enter the Association Name" }, submitHandler: function(form) { form.submit(); } ``` Don't know how to validate each `Hotel[]` input with its own id, or maybe there is another way to defining the inputs that can be simpler.
Posted array ``` $hotel = $this->input->post('hotel'); if(!empty($hotel)) { // Loop through hotels and add the validation foreach($hotel as $id => $data) { $this->form_validation->set_rules('hotel[' . $id . '][contact_first_name]', 'First name', 'required|trim'); $this->form_validation->set_rules('hotel[' . $id . '][contact_last_name]', 'Last name', 'required|trim'); } } ``` Default rules that apply all the time ``` $this->form_validation->set_rules('username', 'Username', 'required'); if ($this->form_validation->run() == FALSE) { // Errors } else { // Success } ```
Get IP address from BSSID I am doing some penetration testing, and I'm trying to find out if I can get the IP address of a router when I have its BSSID, or anything else I can get with the Air tools. I use Kali Linux with the Air tools at the moment. I would say this is pretty bad if it's possible: basically, most people's routers can be reached through their outside IP. Even companies'. So far I tried:

- Passive tcpdump
- Active scanning

So basically, is there a way? If so, please give me a hint or the answer :-) I am 100% referring to some sort of scanning. Any kind of cracking, brute force, password guessing or access stealing is not what I'm asking about :-)
You can't do that if the target access point is protected with WPA/WPA2. This is why. Getting WiFi to work involves the following steps:

- Associate with the target access point. If the access point is using WPA/WPA2 and you don't know the password, then you cannot proceed to further steps, and certainly cannot know anything about the IP address of the target access point.
- After association, your client (which is typically configured to use DHCP) has no IP address assigned (its IP address is 0.0.0.0). Technically, you can use a sniffer at this stage to scan the network and find out the IP addresses used, but most sniffers don't like to work with the 0.0.0.0 address. To proceed further, your client sends a DHCP request, which is served by the access point. After getting a successful DHCP ack with a new IP address, the client can proceed to the next step.
- After getting an IP address, the client can talk to the access point and finally knows its IP address (it was served as the default router in the DHCP ack) - and that would be the answer to your question (yes, that late in the game!). However, even at that point, full network connectivity cannot be assumed. If the access point implements a captive portal, then your network access may be restricted until you open up a web browser and (depending on the wireless provider) either accept usage terms, provide some credentials or pay with a credit card.
- After passing the captive portal, it is possible (but not common) that the access point automatically re-associates and gives you a completely different IP address (and the access point also has a different IP address now, from a completely different subnet). This would mean that the IP address you learned in the previous steps was completely useless to you in terms of knowing the actual network infrastructure.
Azure File Share - Recursive Directory Search like os.walk I am writing a Python script to download files from Azure File Share. The structure of the File Share is as below: ``` /analytics/Part1/file1.txt /analytics/Part1/file2.txt /analytics/mainfile.txt /analytics/Part1/Part1_1/file11.txt ``` I tried to use the following lines in my script but it looks for files and directories only at the root directory level. ``` fileshareclient = ShareClient( account_url=args.get('AccountURL'), credential=args.get('SASKey'), share_name=args.get('FileShare') ) fileLst = list( fileshareclient.list_directories_and_files('analytics') ) ``` The output is: ``` /analytics/mainfile.txt --> File /analytics/Part1 --> Dir ``` But, I am looking for something like `os.walk()` function in Python here to achieve this recursive directory walk. Any idea if such function is available in Azure File Service Python API?
The [built-in `list_directories_and_files()` method](https://learn.microsoft.com/en-gb/python/api/azure-storage-file-share/azure.storage.fileshare.sharedirectoryclient?view=azure-python#list-directories-and-files-name-starts-with-none----kwargs-) of the [Azure Storage File Share client library for Python `azure-storage-file-share`](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-share) only lists the root-level directories and files. If you want something like `os.walk()`, you need to write the method yourself.

Here is a function which recursively lists all the files/directories, and it works fine (please feel free to modify it if it does not meet your needs):

```
from azure.storage.fileshare import ShareServiceClient

def list_recursive(directory_client, directory_name):
    # Descend into the given sub-directory and print everything inside it.
    sub_client = directory_client.get_subdirectory_client(directory_name)
    myfiles = sub_client.list_directories_and_files()
    for file in myfiles:
        print(file.get('name'))
        if file.get('is_directory'):
            list_recursive(sub_client, file.get('name'))

if __name__ == '__main__':
    conn_str = "xxxx"
    file_service = ShareServiceClient.from_connection_string(conn_str)
    share_client = file_service.get_share_client("your_share_name")
    d_client = share_client.get_directory_client("your_directory_name")

    myfiles = d_client.list_directories_and_files()
    for file in myfiles:
        print(file.get('name'))
        if file.get('is_directory'):
            list_recursive(d_client, file.get('name'))
```
How can I check that assignment of const\_reverse\_iterator to reverse\_iterator is invalid? Consider the following: ``` using vector_type = std::vector<int>; using const_iterator = typename vector_type::const_iterator; using const_reverse_iterator = typename vector_type::const_reverse_iterator; using iterator = typename vector_type::iterator; using reverse_iterator = typename vector_type::reverse_iterator; int main() { static_assert(!std::is_assignable_v<iterator, const_iterator>); // passes static_assert(!std::is_assignable_v<reverse_iterator, const_reverse_iterator>); // fails static_assert(std::is_assignable_v<reverse_iterator, const_reverse_iterator>); // passes } ``` I can check that assignment of `iterator{} = const_iterator{}` is not valid, but not an assignment of `reverse_iterator{} = const_reverse_iterator{}` with this type trait. This behavior is consistent across [gcc 9.0.0](https://wandbox.org/permlink/3zYvqTuhxUF9OiNZ), [clang 8.0.0](https://wandbox.org/permlink/5myhfmpl05HsEdqY), and [MSVC 19.00.23506](http://rextester.com/KUIL48585) This is unfortunate, because the reality is that `reverse_iterator{} = const_reverse_iterator{}` doesn't actually compile with any of the above-mentioned compilers. ### How can I reliably check such an assignment is invalid? This behavior of the type trait implies that the expression ``` std::declval<reverse_iterator>() = std::declval<const_reverse_iterator>() ``` is well formed according to [meta.unary.prop], and this appears consistent with my own attempts at an `is_assignable` type trait.
This trait passes because the method exists that can be found via overload resolution and it is not deleted. Actually calling it fails, because the implementation of the method contains code that isn't legal with those two types. In C++, you cannot test if instantiating a method will result in a compile error, you can only test for the equivalent of overload resolution finding a solution. The C++ language and standard library originally relied heavily on "well, the method body is only compiled in a template if called, so if the body is invalid the programmer will be told". More modern C++ (both inside and outside the standard library) uses SFINAE and other techniques to make a method "not participate in overload resolution" when its body would not compile. The constructor of reverse iterator from other reverse iterators is the old style, and hasn't been updated to "not participate in overload resolution" quality. From [n4713](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4713.pdf), 27.5.1.3.1 [reverse.iter.cons]/3: > > > ``` > template<class U> constexpr reverse_iterator(const reverse_iterator<U>& u); > > ``` > > Effects: Initializes current with u.current. > > > Notice no mention of "does not participate in overload resolution" or similar words. --- The only way to get a trait you want like this would be to 1. Change (well, fix) the C++ standard 2. Special case it I'll leave 1. as an exercise. For 2., you know that reverse iterators are templates over forward iterators. ``` template<class...Ts> struct my_trait:std::is_assignable<Ts...> {}; template<class T0, class T1> struct my_trait<std::reverse_iterator<T0>, std::reverse_iterator<T1>>: my_trait<T0, T1> {}; ``` and now `my_trait` is `is_assignable` except on reverse iterators, where it instead tests assignability of the contained iterators. (As extra fun, a reverse reverse iterator will work with this trait). I once had to do something very similar with `std::vector<T>::operator<`, which also blindly called `T<T` and didn't SFINAE disable it if that wasn't legal. --- It may also be the case that a C++ standard library implementation can make a constructor which would not compile not participate in overload resolution. This change could break otherwise well formed programs, but only by things as ridiculous as your static assert (which would flip) or things logically equivalent.
How to pass parameters using ui-sref in ui-router to the controller I need to pass and receive two parameters to the state I want to transit to using `ui-sref` of ui-router. Something like using the link below for transitioning the state to `home` with `foo` and `bar` parameters: ``` <a ui-sref="home({foo: 'fooVal', bar: 'barVal'})">Go to home state with foo and bar parameters </a> ``` Receiving `foo` and `bar` values in a controller: ``` app.controller('SomeController', function($scope, $stateParam) { //.. var foo = $stateParam.foo; //getting fooVal var bar = $stateParam.bar; //getting barVal //.. }); ``` I get `undefined` for `$stateParam` in the controller. Could somebody help me understand how to get it done? **Edit:** ``` .state('home', { url: '/', views: { '': { templateUrl: 'home.html', controller: 'MainRootCtrl' }, 'A@home': { templateUrl: 'a.html', controller: 'MainCtrl' }, 'B@home': { templateUrl: 'b.html', controller: 'SomeController' } } }); ```
I've created an [example](http://plnkr.co/edit/r2JhV4PcYpKJdBCwHIWS?p=preview) to show how to. Updated `state` definition would be: ``` $stateProvider .state('home', { url: '/:foo?bar', views: { '': { templateUrl: 'tpl.home.html', controller: 'MainRootCtrl' }, ... } ``` And this would be the controller: ``` .controller('MainRootCtrl', function($scope, $state, $stateParams) { //.. var foo = $stateParams.foo; //getting fooVal var bar = $stateParams.bar; //getting barVal //.. $scope.state = $state.current $scope.params = $stateParams; }) ``` What we can see is that the state home now has url defined as: ``` url: '/:foo?bar', ``` which means, that the params in url are expected as ``` /fooVal?bar=barValue ``` These two links will correctly pass arguments into the controller: ``` <a ui-sref="home({foo: 'fooVal1', bar: 'barVal1'})"> <a ui-sref="home({foo: 'fooVal2', bar: 'barVal2'})"> ``` Also, the controller does consume `$stateParams` instead of `$stateParam`. Link to doc: - [URL Parameters](https://github.com/angular-ui/ui-router/wiki/URL-Routing#url-parameters) You can check it [here](http://plnkr.co/edit/r2JhV4PcYpKJdBCwHIWS?p=preview) ### `params : {}` There is also *new*, more granular setting *`params : {}`*. As we've already seen, we can declare parameters as part of **`url`**. But with `params : {}` configuration - we can extend this definition or even introduce paramters which are not part of the url: ``` .state('other', { url: '/other/:foo?bar', params: { // here we define default value for foo // we also set squash to false, to force injecting // even the default value into url foo: { value: 'defaultValue', squash: false, }, // this parameter is now array // we can pass more items, and expect them as [] bar : { array : true, }, // this param is not part of url // it could be passed with $state.go or ui-sref hiddenParam: 'YES', }, ... ``` Settings available for params are described in the documentation of the [$stateProvider](http://angular-ui.github.io/ui-router/site/#/api/ui.router.state.$stateProvider) Below is just an extract - **value - {object|function=}**: specifies the default value for this parameter. This implicitly sets this parameter as optional... - **array - {boolean=}:** (default: false) If true, the param value will be treated as an array of values. - **squash - {bool|string=}:** squash configures how a default parameter value is represented in the URL when the current parameter value is the same as the default value. We can call these params this way: ``` // hidden param cannot be passed via url <a href="#/other/fooVal?bar=1&amp;bar=2"> // default foo is skipped <a ui-sref="other({bar: [4,5]})"> ``` Check it in action [here](http://plnkr.co/edit/r2JhV4PcYpKJdBCwHIWS?p=preview)
How does a recursive function work in MIPS? I'm a newbie in MIPS (I started learning MIPS assembly in college) and I have a problem understanding how a recursive function works in MIPS. For example, I've got this program (in C) to write in MIPS:

```
int fact (int n)
{
    if (n < 1)
        return 0;
    else
        return n * fact(n - 1);
}
```

Can someone help me with this, or another example of a recursive function, and explain to me how it works?
The first thing I'd like to share is that the *complexity in translating this into MIPS comes from the presence of mere function calling, rather than because recursion is involved — that `fact` is recursive is IMHO a red herring*.  To this end, I'll illustrate a non-recursive function that has every bit the complexity of the recursive function you've stated: ``` int fact (int n) { if (n < 1) return 0; else return n * other(n - 1); // I've changed the call to "fact" to function "other" } ``` My alteration is no longer recursive!  However the MIPS code for this version will look identical to the MIPS code for your `fact` (with the exception, of course, that the `jal fact` which changes `jal other`).  This is meant to illustrate that the complexity in translating this is due to the call within the function, and has nothing to do with who is being called.  (Though YMMV with optimization techniques.) --- To understand function calling, you need to understand: - the program counter: how the program interacts with the program counter, especially, of course in the context of function calling.. - parameter passing - register conventions, generally In C, we have explicit parameters.  These explicit parameter, of course, also appear in assembly/machine language — but there are also parameters passed in machine code that are not visible in C code.  Examples of these are the return address value, and the stack pointer. --- What is needed here is an analysis of the function (independent of recursion): The parameter `n` will be in `$a0` on function entry.  The value of `n` is required after the function call (to `other`), because we cannot multiply until that function call returns the right hand operand of `*`. Therefore, `n` (the left hand operand to `*`) must survive the function call to `other`, and in `$a0` it will not — since our own code will repurpose `$a0` in order to call `other(n-1)`, as `n-1` must go into `$a0` for that. Also, the (in C, implicit) parameter`$ra` holds the return address value needed to return to our caller.  The call to `other` will, similarly, repurpose the `$ra` register, wiping out its previous value. Therefore, this function (yours or mine) needs two values to survive the function call that is within its body (e.g. the call to `other`). The solution is simple: values we need (that are living in registers that are repurposed or wiped out by something we're doing, or the callee potentially does) need to be moved or copied elsewhere: somewhere that will survive the function call. Memory can be used for this, and, we can obtain some memory for these purposes using the stack. Based on this, we need to make a stack frame that has space for the two things we need (and would otherwise get wiped out) after calling `other`.  The entry `$ra` must be saved (and later reloaded) in order for us to use it to return; also, the initial `n` value needs to be saved so we can use it for the multiply.  (Stack frames are typically created in function prologue, and removed in function epilogue.) --- As is often the case in machine code (or even programming in general) there are also other ways of handling things, though the gist is the same.  (This is a good thing, and an optimizing compiler will generally seek the best way given the particular circumstances.) --- Presence or absence of recursion does not change the fundamental analysis we need to translate this into assembly/machine language.  
Recursion dramatically increases the potential for stack overflow, but otherwise does not change this analysis. --- # Addendum To be clear, recursion imposes the requirement to use a dynamically expandable call stack — though all modern computer systems provide such a stack for calling, so this requirement is easy to forget or gloss over on today's systems. For programs without recursion, a call stack is not a requirement — local variables can be allocated to function-private global variables (including the return address), and this was done on certain older systems like the PDP-8, which did not offer specific hardware support for a call stack. --- Systems that use stack memory for passing parameters and/or are register poor may not require the analysis described in this answer, since variables are already being stored in memory that survives nested function calls. It is the partitioning of registers on modern register-rich machines that creates the requirement for the above analysis.  These register-rich machines pass parameters and return values (mostly) in CPU registers, which is efficient but imposes the need to sometimes make copies as registers are repurposed from one function to another.
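Putting the analysis into code, a sketch of `fact` in MIPS assembly might look like this (MARS/SPIM-style syntax; the register choices and label names are one option among several, not the only correct translation):

```
fact:                          # int fact(int n) -- n arrives in $a0
    addiu $sp, $sp, -8         # prologue: make room for $ra and the incoming n
    sw    $ra, 4($sp)          # $ra will be wiped by the jal below
    sw    $a0, 0($sp)          # n is needed after the call, for the multiply

    slti  $t0, $a0, 1          # t0 = (n < 1)
    beq   $t0, $zero, recurse
    move  $v0, $zero           # base case: return 0
    j     done

recurse:
    addiu $a0, $a0, -1         # argument for the nested call: n - 1
    jal   fact                 # $v0 = fact(n - 1)   (or "other", same shape)
    lw    $t1, 0($sp)          # reload the saved n
    mul   $v0, $t1, $v0        # n * fact(n - 1)

done:
    lw    $ra, 4($sp)          # epilogue: restore $ra, pop the frame, return
    addiu $sp, $sp, 8
    jr    $ra
```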
How are local IP addresses separated from public ones? How do tools like ping, or any other tool that uses the TCP/IP protocol, know that, for example, 192.168.1.1 or 10.0.0.1 is a local IP address while 8.8.8.8 or 74.142.23.95 are public? Are 192.168.x.x and 10.0.x.x hardcoded to be reserved for local use?
Well, they are *reserved* by [RFC 1918](https://www.rfc-editor.org/rfc/rfc1918) for use in private networks. But that doesn't actually matter much. You can obtain a block of "public" IP addresses from RIPE or whatever, and use it for your private network, and everything will still work. The reservation is needed only for political reasons, to allow admins to set up their own private networks without any trouble. Tools like `ping` **do not care** whether an address is "private" or "local" or "public". They simply send a packet to the given address, and your OS looks at the **routing table** to decide where to send it next. For example, when you configure an Ethernet card on Windows with IP address `10.2.3.4/16` (in netmask format: `255.255.0.0`) and gateway `10.2.0.1`, it adds the following entries to the routing table: - `10.2.3.4/32` (netmask `255.255.255.255`) to interface `Loopback` (Your own addresses are always routed through the loopback interface, they never go to the network.) - `10.2.0.0/16` (netmask `255.255.0.0`) to interface `Local Area Connection` (Addresses in your own subnet are, by definition, local.) - `0.0.0.0/0` (netmask `0.0.0.0`) to gateway `10.2.0.1` (Everything else is not local.) In other words, **you told the OS** that all addresses within the `10.2.0.0/16` range are local, and the OS takes care of everything. --- To view the routing table: - on Linux, `ip route` (IPv4) and `ip -6 route` (IPv6) - on Windows, `route print` (IPv4 on ≤XP, both v4/v6 on ≥Vista) - on Windows XP, `netsh interface ipv6 show route` (IPv6) - on Windows, Linux, BSD, and other Unix-likes, `netstat -r -n` (IPv4) - on Linux and some Unix-likes, `netstat -r -n -6` (IPv6) Editing the routing table can be done with the same tools. For example, to mark all of `172.16.0.0/16` as local, you can use `ip route add 172.16.0.0/16 dev eth0` on Linux.
Count current streak in Swift I would like to count how many consecutive days the user has used the app. A label updates depending on the streak, and if the user has not used the app for a day the number goes back to zero. How can I achieve this? I have searched but could not find any source.

[![enter image description here](https://i.stack.imgur.com/2jeae.png)](https://i.stack.imgur.com/2jeae.png)
## For this you have a few things to take into consideration: --- ## When to report last usage? - **Your app idea may include the need to perform some actions before considering a complete usage.** For example, after loading or presenting something on the screen, after retrieving data and performing some actions, etc. - **Just by intention of opening the app.** The only intention is for the user to hit your app´s icon to launch the app, nevermind if he set it to a close state before even passing your loading screen. > > This can be a bit unpredictable > > > - **When sending the app to background**. > > Important to notice that iOS can kill your process anytime after your > app is sent to background, so better to do it right after user´s > action. > > > Also, the user could not open your app again in a while. You can subscribe to background capabilities for letting your app be active for a while longer while transitioning to suspended/close state if you are going to save data out of the iPhone. The function you are looking for is [`applicationDidEnterBackground(_:)`](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1622997-applicationdidenterbackground) > > **Strong Points of this approach** > > > You get last time that your app was **actually** used. > > > For more on the application life cycle and how to handle it correctly, please visit [apple documentation about this topic](https://developer.apple.com/library/content/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/StrategiesforHandlingAppStateTransitions/StrategiesforHandlingAppStateTransitions.html) ## Do I need this information to be available between installs & Where to save ? --- - If you care about this counter to be stable and remains intact between installs you can not save it in any local database or `NSUserDefaults`. In this case you should implement some kind of online storage, via user creation & handling in your servers or the use of **iCloud** alternatives. - If your information is sensitive (let's say that you are going to give some **money like reward** to your user for opening your app **1000** times) then you can not store it in `NSUserDefaults`, as it is not encripted and can be modified. ## What to save in order to count days in a row? --- Simplicity is king when dealing with stored data and there are many ways to achieve this specific task. I would go with: - Storing the first date (*ignoring time if you are dealing with calendar days, but including it if you are handling 24hours lapses as your day instead*) - Storing last visit date (*same considerations apply*). > > You could save complete timestamp in order to be able of change your mind later ;-) > > > In my app I would do the maths then with current date data (`now = NSDate()`) before making any changes. 1. If timelapse between `now` and `last visit date` is **bigger** than a "*Day*", then update `first visit date` with `now`. 2. Save `now` data into `last visit date` storage. > > Your counter will always be the difference in "*Days*" between `now` and `first visit date`. > > > ## Summing Up --- If your data is not sensitive store it in `NSUserDefaults`, otherwise *and if this can affect your income* store it somewhere else. If it's sensitive but you don't care if your user lose the counter, save it in a local DB (**CoreData**, **Realm**, etc) Best time (as of my consideration) for storing new data will be when an intention of closure (*included **suspended state** and **incoming calls***) is notified to your app. 
You can save this data in many ways; one that gives you some room for maneuvering is saving just the **last visit** date and the date of the **first visit of the row**, and then doing the maths. Of course, update them as needed, as explained before.
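A minimal sketch of that bookkeeping in Swift, assuming `UserDefaults` is acceptable (i.e. the counter is not sensitive data) and using made-up key names — adapt the storage and the definition of a "day" to your own rules:

```
import Foundation

func updateStreak(now: Date = Date()) -> Int {
    let defaults = UserDefaults.standard
    let calendar = Calendar.current
    let firstKey = "streakFirstVisit"   // first visit of the current row
    let lastKey  = "streakLastVisit"    // most recent visit

    var firstVisit = defaults.object(forKey: firstKey) as? Date ?? now
    let lastVisit  = defaults.object(forKey: lastKey)  as? Date ?? now

    // Calendar days elapsed since the last recorded visit.
    let gap = calendar.dateComponents([.day],
                                      from: calendar.startOfDay(for: lastVisit),
                                      to: calendar.startOfDay(for: now)).day ?? 0

    // Missing a whole day breaks the streak, so the row restarts today.
    if gap > 1 {
        firstVisit = now
    }

    defaults.set(firstVisit, forKey: firstKey)
    defaults.set(now, forKey: lastKey)

    // The counter is the difference in days between now and the first visit of
    // the row (0 on the first day; add 1 if the label should show "1" that day).
    return calendar.dateComponents([.day],
                                   from: calendar.startOfDay(for: firstVisit),
                                   to: calendar.startOfDay(for: now)).day ?? 0
}
```

Call it from whichever reporting point you settled on above, for example `applicationDidEnterBackground(_:)` or right after launch.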
Different stack depth for lambdas and regular functions in C++? Consider a normal recursive function: ``` #include <iostream> #include <functional> void f(unsigned long long int x) { std::cout << x << "\n"; if(x < 1e9) f(x+1); } int main() { f(1); return 0; } ``` This terminates at **43033**. Now consider a recursive lambda: ``` #include <iostream> #include <functional> int main() { std::function<void(int)> g = [&g](unsigned long long int x) { std::cout << x << "\n"; if(x < 1e9) g(x+1); }; g(1); return 0; } ``` This terminates at a much lower stack depth of **11736**. Why do lambdas have a lower max stack depth? (Compiling with `g++ (GCC) 5.4.0`, with `-std=c++14 -Wall`) Also note that compiling with `-O3` optimization allows for **practically infinite** recursion depth, but the lambda still terminates at **25k**. --- **EDIT**: Following @Yakk, here are results with the [Y-combinator](https://yongweiwu.wordpress.com/2014/12/14/y-combinator-and-cplusplus/): ``` #include <iostream> #include <functional> using namespace std; template <typename T, typename R> function<R(T)> Y(function<function<R(T)>(function<R(T)>)> f) { // Y f = f (λx.(Y f) x) return f([=](T x) { return Y(f)(x); }); } int main() { using fg = function<void(int)>; function<fg(fg)> sg = [](fg g) { return [g](unsigned long long int x) { std::cout << x << "\n"; if(x < 1e9) g(x+1); }; }; Y(sg)(1); return 0; } ``` This terminates at **4781** and **9221** with and without `-O3` respectively.
std function does not mean the same thing as lambda. A std function is an object capable of storing some lambdas, or a function pointer, or a pointer to member function, or a pointer to member data, or almost any object that overrides operator() compatibly. When you store a lambda within a std function, there is some overhead. Not much, but some. Some of this overhead may show up as using the stack more (and the overhead will be larger in unoptimized builds). You can more directly recurse using a lambda by using the [y combinator](https://stackoverflow.com/a/35609226/1774667), but even there you'll be passing a reference-to-self as a parameter, and unless the recursion is eliminated by the optimizer it will probably use more stack. (A highly tweaked optimizer could notice that a stateless lambda reference argument could be eliminated, but that seems tricky to work out).
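For comparison, a quick sketch of recursing without `std::function` at all — a C++14 generic lambda that receives itself as an argument — removes the type-erasure layer entirely (though the extra self parameter remains):

```
#include <iostream>

int main() {
    // No std::function involved: the lambda is passed to itself,
    // so each recursive call is a direct call on the closure type.
    auto g = [](auto&& self, unsigned long long x) -> void {
        std::cout << x << "\n";
        if (x < 1e9) self(self, x + 1);
    };
    g(g, 1);
    return 0;
}
```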
how to use MS Office with proprietary java back-end document system Currently I have a document system that launches documents in Star Office or LibreOffice in an iframe. Moving to the future I ideally want to retain the document system I have but integrate this into SharePoint so as to enable us to open and edit documents using MS Office. As there is no Java Api to integrate with MS Office this is why I have chosen to go with SharePoint. I can manage to get my documents to load from a link on a sharepoint page but then comes the hard part of manipulating the save features in MS Office and ensuring that my document doesn't get saved in sharepoint. Has anyone done anything similar. Basically I just want to use MS Office to interact with my documents without storing things in sharepoint. So I need to get access to the save functions etc. As far as I see Apache POI is not a viable solution as it doesn't physically open the document and allow user to click file -> save. My understanding is that it can manipulate documents by manipulating them in code but can't use any of the controls in office. I've read here <http://msdn.microsoft.com/en-us/library/office/bb462633(v=office.12).aspx?cs-save-lang=1&cs-lang=vb#code-snippet-2> that you can repurpose the commands in office and modify the ribbon? Thanks for any advice It appears it is possible with WOPI and Office Web Apps. Basically needing to create a WOPI application
Well, I had the same problem, so I actually wrote a quick PPT editor with Apache POI and SVG-edit, but then I switched to the Office Web Apps. Here is a quick implementation of the WOPI server. I am a Java guy, so my .NET skills are quite horrible; writing a servlet instead should be trivial.

The logic is simple: pass the WOPISrc to the Office Web Apps - this is basically the URL of your WOPI server with the file location in it. You can ignore `access_token`/`access_token_ttl` for now. The Office Web Apps will hit the WOPISrc URL with at least 2 requests:

1. A metadata request, which is basically a GET call to the WOPISrc. You create a WOPI (CheckFileInfo) object and send it back JSON-encoded to the Office Web Apps.
2. The Office Web Apps will then request the file itself; it appends `/contents` to the end of the WOPISrc. So just send the file back in binary format.
3. (Optional) The Office Web Apps will do a POST to `WOPISrc + "/contents"` on save. You can grab the data from the POST and save it to disk.

Note: Word doesn't work :/ You can view only; for editing you need to implement the Cobalt protocol (FSSHTTP). I am still researching this topic, but it will be easier to write in C#, since you can grab a Cobalt assembly. Otherwise, this protocol implements SOAP (FSSHTTP) with BASE64-encoded binary messages (FSSHTTPB).

Open the Office Web Apps with something like this:

`http://OFFICEWEBAPPS.HOST/p/PowerPointFrame.aspx?PowerPointView=EditView&access_token=12345&WOPISrc=URLENCODED_URL_OF_THE_WOPI_SERVER`

where the WOPISrc is something like:

`http://WOPISERVER.HOST:2000/wopi/files/1.pptx`

This will open 1.pptx in your c:\temp:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;
using System.Runtime.Serialization;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web;
using System.IO;
using System.Runtime.Serialization.Json;

namespace WopiServerTutorial
{
    public class WopiServer
    {
        private HttpListener Listener;

        static void Main(string[] args)
        {
            WopiServer s = new WopiServer();
            s.Start();
            Console.WriteLine("A simple wopi webserver. Press a key to quit.");
            Console.ReadKey();
            s.Stop();
        }

        public void Start()
        {
            Listener = new HttpListener();
            Listener.Prefixes.Add(@"http://+:8080/");
            Listener.Start();
            Listener.BeginGetContext(ProcessRequest, Listener);
            Console.WriteLine(@"WopiServer Started");
        }

        public void Stop()
        {
            Listener.Stop();
        }

        private void ProcessRequest(IAsyncResult result)
        {
            HttpListener listener = (HttpListener)result.AsyncState;
            HttpListenerContext context = listener.EndGetContext(result);

            Console.WriteLine(@"Got a " + context.Request.HttpMethod + " request for URL: " + context.Request.Url.PathAndQuery);

            var stringarr = context.Request.Url.AbsolutePath.Split('/');
            var rootDir = @"C:\\temp\\";

            if (stringarr.Length == 5 && context.Request.HttpMethod.Equals(@"GET"))
            {
                Console.WriteLine(@"Getting content for the file: " + rootDir + stringarr[3]);

                // get file's content
                var file = rootDir + stringarr[3];
                var stream = new FileStream(file, FileMode.Open);
                var fi = new FileInfo(file);

                context.Response.ContentType = @"application/octet-stream";
                context.Response.ContentLength64 = fi.Length;
                stream.CopyTo(context.Response.OutputStream);
                context.Response.Close();
            }
            //else if (stringarr.Length == 5 && context.Request.HttpMethod.Equals(@"POST"))
            //{
            //    // write
            //}
            else if (stringarr.Length == 4 && context.Request.HttpMethod.Equals(@"GET"))
            {
                Console.WriteLine(@"Getting metadata for the file: " + rootDir + stringarr[3]);

                var fi = new FileInfo(rootDir + stringarr[3]);

                CheckFileInfo cfi = new CheckFileInfo();
                cfi.AllowExternalMarketplace = false;
                cfi.BaseFileName = fi.Name;
                cfi.BreadcrumbBrandName = "";
                cfi.BreadcrumbBrandUrl = "";
                cfi.BreadcrumbDocName = "";
                cfi.BreadcrumbDocUrl = "";
                cfi.BreadcrumbFolderName = "";
                cfi.BreadcrumbFolderUrl = "";
                cfi.ClientUrl = "";
                cfi.CloseButtonClosesWindow = false;
                cfi.CloseUrl = "";
                cfi.DisableBrowserCachingOfUserContent = true;
                cfi.DisablePrint = true;
                cfi.DisableTranslation = true;
                cfi.DownloadUrl = "";
                cfi.FileUrl = "";
                cfi.FileSharingUrl = "";
                cfi.HostAuthenticationId = "s-1-5-21-3430578067-4192788304-1690859819-21774";
                cfi.HostEditUrl = "";
                cfi.HostEmbeddedEditUrl = "";
                cfi.HostEmbeddedViewUrl = "";
                cfi.HostName = @"SharePoint";
                cfi.HostNotes = @"HostBIEnabled";
                cfi.HostRestUrl = "";
                cfi.HostViewUrl = "";
                cfi.IrmPolicyDescription = "";
                cfi.IrmPolicyTitle = "";
                cfi.OwnerId = @"4257508bfe174aa28b461536d8b6b648";
                cfi.PresenceProvider = "AD";
                cfi.PresenceUserId = @"S-1-5-21-3430578067-4192788304-1690859819-21774";
                cfi.PrivacyUrl = "";
                cfi.ProtectInClient = false;
                cfi.ReadOnly = false;
                cfi.RestrictedWebViewOnly = false;
                cfi.SHA256 = "";
                cfi.SignoutUrl = "";
                cfi.Size = fi.Length;
                cfi.SupportsCoauth = false;
                cfi.SupportsCobalt = false;
                cfi.SupportsFolders = false;
                cfi.SupportsLocks = true;
                cfi.SupportsScenarioLinks = false;
                cfi.SupportsSecureStore = false;
                cfi.SupportsUpdate = true;
                cfi.TenantId = @"33b62539-8c5e-423c-aa3e-cc2a9fd796f2";
                cfi.TermsOfUseUrl = "";
                cfi.TimeZone = @"+0300#0000-11-00-01T02:00:00:0000#+0000#0000-03-00-02T02:00:00:0000#-0060";
                cfi.UserCanAttend = false;
                cfi.UserCanNotWriteRelative = false;
                cfi.UserCanPresent = false;
                cfi.UserCanWrite = true;
                cfi.UserFriendlyName = "";
                cfi.UserId = "";
                cfi.Version = @"%22%7B59CCD75F%2D0687%2D4F86%2DBBCF%2D059126640640%7D%2C1%22";
                cfi.WebEditingDisabled = false;

                // encode json
                var memoryStream = new MemoryStream();
                var json = new DataContractJsonSerializer(typeof(CheckFileInfo));
                json.WriteObject(memoryStream, cfi);
                memoryStream.Flush();
                memoryStream.Position = 0;
                StreamReader streamReader = new StreamReader(memoryStream);
                var jsonResponse = Encoding.UTF8.GetBytes(streamReader.ReadToEnd());

                context.Response.ContentType = @"application/json";
                context.Response.ContentLength64 = jsonResponse.Length;
                context.Response.OutputStream.Write(jsonResponse, 0, jsonResponse.Length);
                context.Response.Close();
            }
            else
            {
                byte[] buffer = Encoding.UTF8.GetBytes("");
                context.Response.ContentLength64 = buffer.Length;
                context.Response.ContentType = @"application/json";
                context.Response.OutputStream.Write(buffer, 0, buffer.Length);
                context.Response.OutputStream.Close();
            }

            Listener.BeginGetContext(ProcessRequest, Listener);
        }
    }

    [DataContract]
    public class CheckFileInfo
    {
        [DataMember] public bool AllowExternalMarketplace { get; set; }
        [DataMember] public string BaseFileName { get; set; }
        [DataMember] public string BreadcrumbBrandName { get; set; }
        [DataMember] public string BreadcrumbBrandUrl { get; set; }
        [DataMember] public string BreadcrumbDocName { get; set; }
        [DataMember] public string BreadcrumbDocUrl { get; set; }
        [DataMember] public string BreadcrumbFolderName { get; set; }
        [DataMember] public string BreadcrumbFolderUrl { get; set; }
        [DataMember] public string ClientUrl { get; set; }
        [DataMember] public bool CloseButtonClosesWindow { get; set; }
        [DataMember] public string CloseUrl { get; set; }
        [DataMember] public bool DisableBrowserCachingOfUserContent { get; set; }
        [DataMember] public bool DisablePrint { get; set; }
        [DataMember] public bool DisableTranslation { get; set; }
        [DataMember] public string DownloadUrl { get; set; }
        [DataMember] public string FileSharingUrl { get; set; }
        [DataMember] public string FileUrl { get; set; }
        [DataMember] public string HostAuthenticationId { get; set; }
        [DataMember] public string HostEditUrl { get; set; }
        [DataMember] public string HostEmbeddedEditUrl { get; set; }
        [DataMember] public string HostEmbeddedViewUrl { get; set; }
        [DataMember] public string HostName { get; set; }
        [DataMember] public string HostNotes { get; set; }
        [DataMember] public string HostRestUrl { get; set; }
        [DataMember] public string HostViewUrl { get; set; }
        [DataMember] public string IrmPolicyDescription { get; set; }
        [DataMember] public string IrmPolicyTitle { get; set; }
        [DataMember] public string OwnerId { get; set; }
        [DataMember] public string PresenceProvider { get; set; }
        [DataMember] public string PresenceUserId { get; set; }
        [DataMember] public string PrivacyUrl { get; set; }
        [DataMember] public bool ProtectInClient { get; set; }
        [DataMember] public bool ReadOnly { get; set; }
        [DataMember] public bool RestrictedWebViewOnly { get; set; }
        [DataMember] public string SHA256 { get; set; }
        [DataMember] public string SignoutUrl { get; set; }
        [DataMember] public long Size { get; set; }
        [DataMember] public bool SupportsCoauth { get; set; }
        [DataMember] public bool SupportsCobalt { get; set; }
        [DataMember] public bool SupportsFolders { get; set; }
        [DataMember] public bool SupportsLocks { get; set; }
        [DataMember] public bool SupportsScenarioLinks { get; set; }
        [DataMember] public bool SupportsSecureStore { get; set; }
        [DataMember] public bool SupportsUpdate { get; set; }
        [DataMember] public string TenantId { get; set; }
        [DataMember] public string TermsOfUseUrl { get; set; }
        [DataMember] public string TimeZone { get; set; }
        [DataMember] public bool UserCanAttend { get; set; }
        [DataMember] public bool UserCanNotWriteRelative { get; set; }
        [DataMember] public bool UserCanPresent { get; set; }
        [DataMember] public bool UserCanWrite { get; set; }
        [DataMember] public string UserFriendlyName { get; set; }
        [DataMember] public string UserId { get; set; }
        [DataMember] public string Version { get; set; }
        [DataMember] public bool WebEditingDisabled { get; set; }
    }
}
```
Can I use viewDidLoad method in UITableViewCell?

Can I use the `viewDidLoad` method in a `UITableViewCell`?
No, you don't write `viewDidLoad` in a custom cell class subclassing `UITableViewCell` (it's part of `UIViewController`). Instead, you have a method called

```
-(void)layoutSubviews
{
    [super layoutSubviews];
}
```

in which you can define frames and so on for the custom cell's controls. Refer to [Apple's UITableViewCell reference](http://developer.apple.com/library/ios/#documentation/uikit/reference/UITableViewCell_Class/Reference/Reference.html).

**Note however** that `viewDidLoad` is called ***only once*** in the lifetime of the object; it is rather like an initializer in general OO programming. However, `layoutSubviews` ***will be called many times*** on each cell (depending on issues like scrolling and so on). It's important to realize that, for this reason, many of the things you "usually do" in `viewDidLoad` you ***can not do*** in `layoutSubviews`.

# Note that viewDidLoad is called *once* only; layoutSubviews is called *often*

If you do write a method called `viewDidLoad` in a cell subclass, it will just be an ordinary method that nothing in UIKit ever calls.

[Tutorial for custom cell](http://jainmarket.blogspot.in/2009/05/creating-custom-table-view-cell.html)
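If you need a place for one-time setup in a cell (the role `viewDidLoad` plays in a view controller), the usual spot is the cell's initializer or `awakeFromNib`. Here is a minimal Objective-C sketch, not part of the answer above - the class name and label are invented for illustration:

```
#import <UIKit/UIKit.h>

// Hypothetical custom cell: one-time setup in the initializer,
// per-layout geometry in layoutSubviews.
@interface CustomCell : UITableViewCell
@property (nonatomic, strong) UILabel *titleLabel;
@end

@implementation CustomCell

- (instancetype)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // runs exactly once per cell instance (use awakeFromNib for nib/storyboard cells)
        _titleLabel = [[UILabel alloc] initWithFrame:CGRectZero];
        [self.contentView addSubview:_titleLabel];
    }
    return self;
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    // runs many times; only adjust frames here
    self.titleLabel.frame = CGRectInset(self.contentView.bounds, 10.0f, 5.0f);
}

@end
```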
Drawing dashed line in java

My problem is that I want to draw a dashed line in a panel. I'm able to do it, but it drew my border in a dashed line as well. Can someone please explain why? I'm using paintComponent to draw and draw straight to the panel.

This is the code to draw a dashed line:

```
public void drawDashedLine(Graphics g, int x1, int y1, int x2, int y2){
    Graphics2D g2d = (Graphics2D) g;
    //float dash[] = {10.0f};
    Stroke dashed = new BasicStroke(3, BasicStroke.CAP_BUTT, BasicStroke.JOIN_BEVEL, 0, new float[]{9}, 0);
    g2d.setStroke(dashed);
    g2d.drawLine(x1, y1, x2, y2);
}
```
You're modifying the `Graphics` instance passed into `paintComponent()`, which is also used to paint the borders. Instead, make a copy of the `Graphics` instance and use that to do your drawing:

```
public void drawDashedLine(Graphics g, int x1, int y1, int x2, int y2){
    // Create a copy of the Graphics instance
    Graphics2D g2d = (Graphics2D) g.create();

    // Set the stroke of the copy, not the original
    Stroke dashed = new BasicStroke(3, BasicStroke.CAP_BUTT, BasicStroke.JOIN_BEVEL, 0, new float[]{9}, 0);
    g2d.setStroke(dashed);

    // Draw to the copy
    g2d.drawLine(x1, y1, x2, y2);

    // Get rid of the copy
    g2d.dispose();
}
```
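For context, here is a minimal sketch of how this could be called from a panel's `paintComponent` (my own illustration - the class name and coordinates are invented, and `drawDashedLine` is assumed to be the method above, defined in the same class):

```
import java.awt.Graphics;
import javax.swing.JPanel;

public class DashedLinePanel extends JPanel {

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g); // let Swing paint the background first
        // drawDashedLine works on its own copy of g and disposes it,
        // so the border that gets painted afterwards uses the unmodified Graphics.
        drawDashedLine(g, 10, 20, 200, 20);
    }

    // drawDashedLine(Graphics, int, int, int, int) as defined above
}
```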
Git tracked, untracked, staged, indexed meaning?

Can someone clarify the meaning of these terms?

- Are tracked files any files that have, at some point, been added to the stage?
- Is the "index" the same as the "stage"?
- Are all staged files tracked, but the reverse is not necessarily true (namely, files that were once staged and committed, but aren't part of the current stage to be committed)?
- How do I know which files are tracked?
- How do I know which files are staged?
There are three things to consider here: the current commit (known variously as `HEAD` or `@`), the *index*, and the *work-tree*.

The index is also called the *staging area* and the *cache*. These represent its various functions, because the index does more than just hold the contents of the proposed next commit. Its use as a cache is mostly invisible, though: you just use Git, and the cache tricks that make Git go fast are all done under the hood with no manual intervention necessary. So you only need "cached" to remember that some commands use `--cached`, e.g., `git diff --cached` and `git rm --cached`. Some of these have additional names (`git diff --staged`), and some don't. Git is not very consistent about where it uses each of these terms, so you must simply memorize them.

One issue seems to be that for many users, "the index" is mysterious. This is probably because you can't *see* it directly, except using `git ls-files` (which is not a user-friendly command: it's meant for programming, not for daily use). Note that the work-tree (also called the *working tree* and sometimes the *work directory* or *working directory*) is quite separate from the index. You can see, and modify, files in the work-tree quite easily.

I once thought "tracked" was more complicated, but it turns out that *tracked* quite literally means *is in the index*. A file is tracked if and only if `git ls-files` shows that it will be in the next commit. You cannot see files in the index so easily, but you can copy from the work-tree, into the index, easily, using `git add`:

```
git add path/to/file.txt
```

copies the file from the work-tree into the index. If it was not already in the index (was not tracked), it is now in the index (is tracked).

---

Hence:

> Are tracked files any files that have, at some point, been added to the stage?

No! Tracked files are files that are in the index *right now*. It does not matter what has happened in the past, in any commit, or at any point in the past. If some path `path/to/file.txt` is present in the index *right now*, that file is tracked. If not, it is not tracked (and is potentially also *ignored*). If `path/to/file.txt` is in the index now, and you take it out, the file is no longer tracked. It may or may not be in any existing commits, and it may or may not still be in the work-tree.

> Is the "index" the same as the "stage"?

Yes, more or less. Various documentation and people are not very consistent about this.

> Are all staged files tracked, but the reverse is not necessarily true (namely, files that were once staged and committed, but aren't part of the current stage to be committed)?

This question doesn't quite make sense, since "the staging area" *is* the index. I think *staged* doesn't have a perfectly-defined meaning, but I would define it this way. A file is *staged* if:

- it is not in `@` / HEAD, but is in the index, or
- it is in both `@` / HEAD *and* the index, and is different in the two.

Equivalently, you could say "when some path is being called *staged*, that means that if I make a new commit right now, the new commit's version of that file will be different from the current commit's version."

Note that if you have not touched a file in any way, so that it's in the current commit *and* in the index *and* in the work-tree, but *all three versions match*, the file is still going to get committed. It's just neither "staged" nor "modified".

> How do I know which files are tracked?

While `git ls-files` can tell you, the usual way to find out is *indirect*: you run `git status`.

> How do I know which files are staged?

Assuming the definition above, you must ask Git to `diff` the current commit (HEAD / `@`) and the index. Whatever is different between them is "staged". Running `git status` will do this diff for you, and report the names of such files (without showing detailed diffs). To get the detailed diffs, you can run `git diff --cached`, which compares `HEAD` vs index. This also has the name `git diff --staged` (which is a better name, but, perhaps just to be annoying, `--staged` is not available as an option to `git rm`!).

Because there are *three* copies of every file, you need *two* diffs to see what is going on:

- compare HEAD vs index: `git diff --cached`
- compare index vs work-tree: `git diff`

Running `git status` runs *both* of these `git diff`-s for you, and summarizes them. You can get an even shorter summary with `git status --short`, where you will see things like:

```
 M a.txt
M  b.txt
MM c.txt
```

The first column is the result of comparing `HEAD` vs index: a blank means the two match, an `M` means `HEAD` and `index` differ. The second column is the result of comparing index vs work-tree: a blank means the two match, an `M` means they differ. The two `M`s in a row mean *all three versions of `c.txt` are different*. You can't see the one in the index directly, but you can `git diff` it!
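To see the states side by side, here is a small hypothetical terminal session (the file name is invented) that exercises both diffs described above:

```
$ echo one > demo.txt
$ git add demo.txt             # demo.txt is now in the index: tracked (and staged, as a new file)
$ git commit -m "add demo"     # HEAD, index and work-tree all match now
$ echo two >> demo.txt         # work-tree differs from index: modified but not staged
$ git diff                     # index vs work-tree: shows the change
$ git add demo.txt             # copy the work-tree version into the index: now staged
$ git diff --cached            # HEAD vs index: shows the change
$ git status --short           # prints "M  demo.txt"
```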
Amazon Elastic MapReduce - SIGTERM

I have an EMR streaming job (Python) which normally works fine (e.g. 10 machines processing 200 inputs). However, when I run it against large data sets (12 machines processing a total of 6000 inputs, at about 20 seconds per input), after 2.5 hours of crunching I get the following error:

```
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 143
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:372)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:586)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:441)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:377)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
```

If I am reading this correctly, the subprocess failed with code 143 because someone sent a SIGTERM signal to the streaming job.

Is my understanding correct? If so: When would the EMR infrastructure send a SIGTERM?
I figured out what was happening, so here's some information if anyone else experiences similar problems.

The key for me was to look at the "jobtracker" logs. These live in your task's logs/ folder on S3, under:

```
<logs folder>/daemons/<id of node running jobtracker>/hadoop-hadoop-jobtracker-XXX.log
```

There were multiple lines of the following kind:

```
2012-08-21 08:07:13,830 INFO org.apache.hadoop.mapred.TaskInProgress (IPC Server handler 29 on 9001): Error from attempt_201208210612_0001_m_000015_0: Task attempt_201208210612_0001_m_000015_0 failed to report status for 601 seconds. Killing!
```

So my code was timing out, and it was being killed (it was going beyond the 10-minute task timeout). For 10 minutes I wasn't doing any I/O, which was certainly not expected (I would typically do an I/O every 20 seconds).

I then discovered this article: <http://devblog.factual.com/practical-hadoop-streaming-dealing-with-brittle-code>

"In one of our science projects, we have a few Hadoop Streaming jobs that run over ruby and rely on libxml to parse documents. This creates a perfect storm of badness – the web is full of really bad html and libxml occasionally goes into infinite loops or outright segfaults. On some documents, it always segfaults."

It nailed it. I must be experiencing one of these "libxml going into infinite loop" situations (I am using libxml heavily -- only with Python, not Ruby).

The final step for me was to trigger skip mode (instructions here: [Setting hadoop parameters with boto?](https://stackoverflow.com/questions/12071436/setting-hadoop-parameters-with-boto)).
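If skip mode is not enough, another general Hadoop Streaming technique (not part of the original post) is to have the mapper report progress to Hadoop while a long computation runs, so the task is not killed for staying silent. Hadoop Streaming treats `reporter:status:` lines written to stderr as status updates. A rough Python sketch, with a placeholder `process` function standing in for the real per-record work:

```
import sys
import time

def report_progress(message):
    # Lines of the form "reporter:status:<message>" on stderr update the task
    # status in Hadoop Streaming, which resets the timeout counter.
    sys.stderr.write("reporter:status:%s\n" % message)
    sys.stderr.flush()

def process(line):
    # placeholder for the real (potentially slow) per-input work
    time.sleep(1)
    return line.strip()

if __name__ == "__main__":
    for i, line in enumerate(sys.stdin):
        if i % 10 == 0:
            report_progress("processed %d records" % i)
        print(process(line))
```

This only helps if your code still returns control between records; it will not save you from a genuine infinite loop inside a single call, which is what skip mode addresses.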
How to add Back and Fore Color on ContextMenu

I am showing a `ContextMenu` whenever the user right-clicks on a specific location in a `DataGridView`. I want the items of that `ContextMenu` to have a back color and fore color depending on their content. How can I do this, since `ContextMenu` has no back color or fore color property?

I tried looking up `ContextMenuStrip`, but this has to be connected to a `ToolStripButton`, which I do not have and do not want.
In order to change the back color of a `MenuItem` you need to specify a draw item handler and set owner-draw to true for each item. Also, for the color to actually take some space, you need to implement a MeasureMenuItem handler. So for example:

```
color.MenuItems.Add(new MenuItem("#123456", menuHandler));
color.MenuItems.Add(new MenuItem("Green", menuHandler));
color.MenuItems.Add(new MenuItem("Red", menuHandler));

foreach (MenuItem item in color.MenuItems)
{
    item.OwnerDraw = true;
    item.DrawItem += item_DrawItem;
    item.MeasureItem += MeasureMenuItem;
}
```

The above code hooks up the items and their handlers.

```
void item_DrawItem(object sender, DrawItemEventArgs e)
{
    MenuItem cmb = sender as MenuItem;
    string color = SystemColors.Window.ToString();
    if (e.Index > -1)
    {
        color = cmb.Text;
    }
    if (checkHtmlColor(color))
    {
        e.DrawBackground();
        e.Graphics.FillRectangle(new SolidBrush(ColorTranslator.FromHtml(color)), e.Bounds);
        e.Graphics.DrawString(color, new Font("Lucida Sans", 10),
            new SolidBrush(ColorTranslator.FromHtml(color)), e.Bounds);
    }
}
```

The above code takes the MenuItem contents, converts it to a color, creates a rectangle for that color and draws it.

```
void MeasureMenuItem(object sender, MeasureItemEventArgs e)
{
    MenuItem m = (MenuItem)sender;
    Font font = new Font(Font.FontFamily, Font.Size, Font.Style);
    SizeF sze = e.Graphics.MeasureString(m.Text, font);
    e.ItemHeight = (int)sze.Height;
    e.ItemWidth = (int)sze.Width;
}
```

And lastly, the above few lines simply measure the area the MenuItem should take before drawing (basically measuring the space of its string content), so the draw-item handler knows how much space to take up.
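Note that the draw handler above calls a `checkHtmlColor` helper that isn't shown. A possible implementation - this is my own guess at what it does, not part of the original code - simply verifies that the text can be parsed as an HTML/named color:

```
// Hypothetical helper: returns true if the text parses as an HTML or named color.
bool checkHtmlColor(string text)
{
    try
    {
        System.Drawing.ColorTranslator.FromHtml(text);
        return true;
    }
    catch (Exception)
    {
        // FromHtml throws on strings it cannot interpret as a color
        return false;
    }
}
```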
What is the purpose of multiple users?

In setting up an apache web server on my linux box, I've been told to make a separate user who has permission to use (and whose home directory is) the files in the /www directory. Additionally, I've made a mySQL user. Both of these users have the nologin attribute.

I have very little idea what I'm doing here, and I'm mostly doing testing via <http://localhost>, so I've been working mostly from my user account. What I would like to know is as follows:

1) Generally, why do these users exist?
2) Does apache, mySQL or php access the privileges of these users?
3) Under what circumstances would I access the privileges of these users?
4) Why do they need the nologin attribute?
5) Any other relevant information that I didn't know to ask about.

Thanks.
In brief...

1. Separate users do exactly that - they keep things separate, so that one user can't access or modify another user's files, depending on the permissions that user has set on their files. Many daemons (including Apache) run under a particular user account - for instance, Red Hat tends to use the 'apache' user, whereas Debian uses 'www-data'. This means that the daemon is treated like any other user - it can't access files it's not allowed to, unlike the root user.
2. Apache does, because it's reading the files off the filesystem, again like any other user. Because PHP is usually run as a module within Apache, it's also subject to the same restrictions. MySQL, however, has a separate authentication scheme, so there's no implied correlation between local users and database users.
3. If you're creating a separate user just for your web content, you'd be accessing that user to modify the files owned by that user - presumably your web content.
4. By 'nologin attribute' I'm assuming you mean that the shell is set to `/sbin/nologin` - this prevents the user from logging in interactively, i.e. they can't SSH to the server or log in on the console.
5. To be honest, it's not something that can be covered in a few lines. There's quite a few guides dotted around the internet aimed at beginners - have a read through a few and get a feel for it!
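As a concrete illustration (not from the answer above - the user name, paths and flags are just one possible setup), creating a dedicated no-login account that owns the web content might look like this on a Red Hat-style system:

```
# create a system account with no interactive shell
sudo useradd --system --shell /sbin/nologin --home-dir /www webcontent

# give that account ownership of the web root
sudo chown -R webcontent:webcontent /www

# check which user the Apache worker processes actually run as
ps -o user,comm -C httpd
```

You would then edit files in /www as that user (or as yourself, if group permissions allow it), while the daemon itself still runs under its own restricted account.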
Construction of lambda object in case of specified captures in C++

Starting from C++20 closure types without captures have default constructor, see <https://en.cppreference.com/w/cpp/language/lambda>:

> If no captures are specified, the closure type has a defaulted default constructor.

But what about closure types that capture, how can their objects be constructed? One way is by using `std::bit_cast` (provided that the closure type can be trivially copyable). And Visual Studio compiler provides a constructor for closure type as the example shows:

```
#include <bit>

int main() {
    int x = 0;
    using A = decltype([x](){ return x; });

    // ok everywhere
    constexpr A a = std::bit_cast<A>(1);
    static_assert( a() == 1 );

    // ok in MSVC
    constexpr A b(1);
    static_assert( b() == 1 );
}
```

Demo: <https://gcc.godbolt.org/z/dnPjWdYx1>

Considering that both Clang and GCC reject `A b(1)`, the standard does not require the presence of this constructor. But can a compiler provide such constructor as an extension?
Since this is tagged `language-lawyer`, here's what the C++ standard has to say about all this.

> But what about closure types that capture, how can their objects be constructed?

The actual part of the standard that the cppreference link is referencing is **[[expr.prim.lambda.general]](http://eel.is/c++draft/expr.prim.lambda#closure-14)** - 7.5.5.1.14:

> The closure type associated with a lambda-expression has no default constructor if the lambda-expression has a lambda-capture and a defaulted default constructor otherwise. It has a defaulted copy constructor and a defaulted move constructor ([class.copy.ctor]). It has a deleted copy assignment operator if the lambda-expression has a lambda-capture and defaulted copy and move assignment operators otherwise ([class.copy.assign]).

However, clauses [1](http://eel.is/c++draft/expr.prim.lambda#closure-1) and [2](http://eel.is/c++draft/expr.prim.lambda#closure-2) say:

> The type of a lambda-expression (which is also the type of the closure object) is a unique, unnamed non-union class type, called the closure type, whose properties **are** described below.

> The closure type is not an aggregate type. An implementation may define the closure type differently from what is described below **provided this does not alter the observable behavior of the program** other than by changing:
> [unrelated stuff]

Which means that (apart from the unrelated exceptions), the described interface of the lambda as stated is *exhaustive*. Since no constructor other than the default one is listed, that's the only one that is supposed to be there.

**N.B.**: A lambda may be *equivalent* to a class-based functor, but it is not *purely* syntactical sugar. The compiler/implementation does not need a constructor in order to construct and parametrize the lambda's type. It's just **programmers** who are prevented from creating instances by the lack of constructors.

As far as extensions go:

> But can a compiler provide such constructor as an extension?

Yes. A compiler is allowed to provide this feature as an extension as long as all it does is make programs that would otherwise be ill-formed functional. From **[[intro.compliance.general]](http://eel.is/c++draft/intro.compliance.general#8)** - 4.1.1.8:

> A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program. Implementations are required to diagnose programs that use such extensions that are ill-formed according to this document. Having done so, however, they can compile and execute such programs.

However, for the feature at hand, MSVC would be having issues in its implementation as an extension:

1. It should be emitting a diagnostic.
2. By its own [documentation](https://learn.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance?view=msvc-160), it should refuse the code when using `/permissive-`. Yet it does [not](https://gcc.godbolt.org/z/zhW55TbTa).

So it looks like MSVC is, either intentionally or not, behaving as if this was part of the language, which is not the case as far as I can tell.
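For completeness, the portable way to get a "constructible closure" today is simply to spell the functor out by hand. A small sketch (my own addition, not part of the answer above) equivalent to `[x](){ return x; }` but with an explicit constructor:

```
// Hand-written functor that mirrors the capturing lambda and exposes a constructor.
struct Closure {
    int x;
    constexpr explicit Closure(int x_) : x(x_) {}
    constexpr int operator()() const { return x; }
};

int main() {
    constexpr Closure b(1);   // works on every conforming compiler
    static_assert(b() == 1);
}
```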
Argparse - do not catch positional arguments with `nargs`

I am trying to write a function to which you can pass a variable amount of arguments via argparse - I know I can do this via `nargs="+"`. Sadly, the way argparse help works (and the way people generally write arguments in the CLI) puts the positional arguments last. This leads to my positional argument being caught as part of the optional arguments.

```
#!/usr/bin/python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("positional", help="my positional arg", type=int)
parser.add_argument("-o", "--optional", help="my optional arg", nargs='+', type=float)
args = parser.parse_args()

print args.positional, args.optional
```

Running this as `./test.py -h` shows the following usage instruction:

```
usage: test.py [-h] [-o OPTIONAL [OPTIONAL ...]] positional
```

but running `./test.py -o 0.21 0.11 0.33 0.13 100` gives me

```
test.py: error: too few arguments
```

To get a correct parsing of args, I have to run `./test.py 100 -o 0.21 0.11 0.33 0.13`.

So how do I:

- make argparse reformat the usage output so that it is less misleading, OR, even better:
- tell argparse to not catch the last element for the optional argument `-o` if it is the last in the list?
There is a bug report on this: <http://bugs.python.org/issue9338>

> argparse optionals with nargs='?', '\*' or '+' can't be followed by positionals

A simple (user) fix is to use `--` to separate positionals from optionals:

```
./test.py -o 0.21 0.11 0.33 0.13 -- 100
```

I wrote a patch that reserves some of the arguments for use by the positional, but it isn't a trivial one.

As for changing the usage line - the simplest thing is to write your own, e.g.:

```
usage: test.py [-h] positional [-o OPTIONAL [OPTIONAL ...]]
usage: test.py [-h] [-o OPTIONAL [OPTIONAL ...]] -- positional
```

I wouldn't recommend adding logic to the usage formatter to make this sort of change. I think it would get too complex.

Another quick fix is to turn this positional into a (required) optional. It gives the user complete freedom regarding their order, and might reduce confusion. If you don't want the confusion of a 'required optional', just give it a logical default.

```
usage: test.py [-h] [-o OPTIONAL [OPTIONAL ...]] -p POSITIONAL
usage: test.py [-h] [-o OPTIONAL [OPTIONAL ...]] [-p POS_WITH_DEFAULT]
```

---

One easy change to the HelpFormatter is to simply list the arguments in the order that they are defined. The normal way of modifying formatter behavior is to subclass it, and change one or two methods. Most of these methods are 'private' (`_` prefix), so you do so with the realization that future code might change (slowly).

In this method, `actions` is the list of arguments, in the order in which they were defined. The default behavior is to split 'optionals' from 'positionals', and reassemble the list with positionals at the end. There's additional code that handles long lines that need wrapping; normally it puts positionals on a separate line. I've omitted that.

```
class Formatter(argparse.HelpFormatter):
    # use defined argument order to display usage
    def _format_usage(self, usage, actions, groups, prefix):
        if prefix is None:
            prefix = 'usage: '

        # if usage is specified, use that
        if usage is not None:
            usage = usage % dict(prog=self._prog)

        # if no optionals or positionals are available, usage is just prog
        elif usage is None and not actions:
            usage = '%(prog)s' % dict(prog=self._prog)

        elif usage is None:
            prog = '%(prog)s' % dict(prog=self._prog)
            # build full usage string
            action_usage = self._format_actions_usage(actions, groups) # NEW
            usage = ' '.join([s for s in [prog, action_usage] if s])
            # omit the long line wrapping code

        # prefix with 'usage:'
        return '%s%s\n\n' % (prefix, usage)

parser = argparse.ArgumentParser(formatter_class=Formatter)
```

Which produces a usage line like:

```
usage: stack26985650.py [-h] positional [-o OPTIONAL [OPTIONAL ...]]
```
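To make the "turn the positional into a required optional" suggestion concrete, here is a small sketch (my own illustration, reusing the argument names from the question):

```
import argparse

parser = argparse.ArgumentParser()
# formerly a positional; as a required option its position relative to -o no longer matters
parser.add_argument("-p", "--positional", type=int, required=True,
                    help="my (former) positional arg")
parser.add_argument("-o", "--optional", nargs='+', type=float,
                    help="my optional arg")

args = parser.parse_args()
print(args.positional, args.optional)

# both of these now parse the same way:
#   ./test.py -o 0.21 0.11 0.33 0.13 -p 100
#   ./test.py -p 100 -o 0.21 0.11 0.33 0.13
```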