VSCode Filter Problems tab for currently opened file only I'm looking for a linter feature like Atom's that shows the problems by line, file, or project. Is it possible to filter the Problems tab to only show the errors and warnings in the file currently being viewed, or in the files opened in other tabs, instead of from the entire project?
VSCode v1.23 added the ability to filter the Problems panel by files, see [problems view filtering in the release notes](https://code.visualstudio.com/updates/v1_23#_problems-view-filtering). So you can include (or exclude, via the usual glob negation !) only a certain file by entering its name (you may need only a part of it). The filtering is done only within opened tabs, however, so you cannot get the entire workspace's problems listed when only some of its files are opened. --- The ability to filter the Problems panel by the current file was added to the Insiders build recently (mid-November 2019), so it should be in the November 2019 update. See <https://github.com/microsoft/vscode/issues/30038> and <https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_41.md#problems-panel> > > More predefined filters are added to the Problems panel. You can now > filter problems by type and also see problems scoped to the current > active file. > > > --- [![demo of filter problems by current file.](https://i.stack.imgur.com/EBhxb.png)](https://i.stack.imgur.com/EBhxb.png)
Calling Bool on a Regex does not work as documented According to [the documentation](https://docs.perl6.org/type/Regex#method_Bool), `Bool` method of `Regex` class... > > Matches against the caller's $\_ variable, and returns True for a match or False for no match. > > > However, in this example ``` $_ = "3"; my regex decimal { \d }; say &decimal.Bool; ``` returns `False`. Also, [looking at the source](https://github.com/rakudo/rakudo/blob/cbb9034a08b0423f40d8a070f1c24a9ed7e74527/src/core/Regex.pm6#L85-L92), it kinda makes sense what it says, since it will be matching a `$!topic` instance variable. Not clear, however, that this variable will effectively correspond to $\_, and the example above seems to say so. Any idea of what actually happens?
Short answer: the documentation was sort of accurate for 6.c, however the exact semantics were not at all so straightforward as "the caller" (and in fact, contained a risk of really odd bugs). The refined behavior is: - Anonymous regexes constructed with forms like `/.../` and `rx:i/.../` will capture the `$_` and `$/` at the point they are reached in the code (populating the `$!topic` variable mentioned in the question). - `Bool` and `sink` will cause a match against that captured `$_`, and will store the resulting `Match` object into that `$/`, provided it is writable. Since this behavior only applies to anonymous regexes, you'd need to write: ``` $_ = "3"; my regex decimal { \d }; say /<&decimal>/.Bool; ``` Here's the long answer. The goal of the `Bool`-causes-matching behavior in the first place was for things like this to work: ``` for $file-handle.lines { .say if /^ \d+ ':'/; } ``` Here, the `for` loop populates the topic variable `$_`, and the `if` provides a boolean context. The original design was that `.Bool` would look at the `$_` of the caller. However, there were a number of problems with that. Consider: ``` for $file-handle.lines { .say if not /^ \d+ ':'/; } ``` In this case, `not` is the caller of `.Bool` on the `Regex`. However, `not` would also have its own `$_`, which - as in any subroutine - would be initialized to `Any`. Thus in theory, the matching would not work. Except that it did, because what was actually implemented was to walk through the callers until one was found with a `$_` that contained a defined value! This is as bad as it sounds. Consider a case like: ``` sub foo() { $_ = some-call-that-might-return-an-undefined-value(); if /(\d+)/ { # do stuff } } $_ = 'abc123'; foo(); ``` In the case that the call inside of `foo` were to return an undefined value - perhaps unexpectedly - the matching would have continued walking the caller chain and instead found the value of `$_` in the caller of `foo`. We could in fact have walked many levels deep in the call stack! (Aside: yes, this also meant there were complications around which `$/` to update with results too!) The previous behavior also demanded that `$_` have dynamic scope - that is, to be available for callers to look up. However, a variable having dynamic scope prevents numerous analyses (both the compiler's and the programmer's) and thus optimizations. With many idioms using `$_`, this seemed undesirable (nobody wants to see Perl 6 performance guides suggesting "don't use `with foo() { .bar }` in hot code, use `with foo() -> $x { $x.bar }` instead"). Thus, `6.d` changed `$_` to be a regular lexical variable. That 6.d `$_` scoping change had fairly little real-world fallout, but it did cause the semantics of `.Bool` and `.sink` on `Regex` to be examined, since they were the one frequently used thing that relied on `$_` being dynamic. That in turn shed light on the "first defined `$_`" behavior I just described - at which point, the use of dynamic scoping started to look like more of a danger than a benefit! The new semantics mean the programmer writing an anonymous regex can rely on it matching the `$_` and updating the `$/` that are visible in the scope where they wrote the regex - which seems rather simpler to explain, and - in the case they end up with a `$_` that isn't defined - a lot less surprising!
How can I integrate bitbucket.org's Issues with issue tracking in TortoiseHg? I can not find any documentation for this - is it possible?
The help for the fields you've found in the TortoiseHg config dialog (`thg userconfig`) is: - Issue Regex field: > > Defines the regex to match when picking up issue numbers. > > > - Issue Link field: > > Defines the command to run when an issue number is recognized. You may include groups in issue.regex, and corresponding {n} tokens in issue.link (where n is a non-negative integer). {0} refers to the entire string matched by issue.regex, while {1} refers to the first group and so on. If no {n} tokens are found in issue.link, the entire matched string is appended instead. > > > In other words, if you configure them like ``` [tortoisehg] issue.regex = [Ii]ssue(\d+) issue.link = https://www.mercurial-scm.org/bts/issue{1} ``` then you will have a setting suitable for the Mercurial project itself: if a commit message contains the text "issueNNN" or "IssueNNN", then TortoiseHg will now make that a link to the Mercurial bug tracker for Issue NNN. For Bitbucket's issue tracker you will want a link like ``` https://bitbucket.org/<user>/<repo>/issue/{1}/ ``` and then capture the issue number in the regular expression. This works because Bitbucket is smart enough to ignore the rest of the URL after the issue number -- you can write whatever you want there, or write nothing as above. Very simple functionality, but also quite useful when you often look up bugs based on the commit messages.
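To make that concrete, a Bitbucket-oriented configuration could look like the sketch below. The user/repository names are placeholders, and the regex assumes your team references issues as "#123" in commit messages -- adjust both to your own conventions.

```
[tortoisehg]
issue.regex = #(\d+)
issue.link = https://bitbucket.org/youruser/yourrepo/issue/{1}/
```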
Unable to find method 'com.android.build.gradle.api.BaseVariant.getOutputs()Ljava/util/List;' build.gradle ``` buildscript { ext.kotlin_version = '1.1.51' repositories { jcenter() mavenCentral() maven { url "https://jitpack.io" } } dependencies { classpath 'com.android.tools.build:gradle:3.0.0' classpath 'me.tatarka:gradle-retrolambda:3.6.1' classpath 'com.google.gms:google-services:3.1.0' classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" } } ``` app/build.gradle ``` buildscript { repositories { jcenter() } dependencies { classpath 'com.jakewharton:butterknife-gradle-plugin:8.7.0' } } android{ compileSdkVersion = 26 buildToolsVersion = "26.0.2" defaultConfig { minSdkVersion = 16 targetSdkVersion = 26 } ... applicationVariants.all { variant -> variant.outputs.all { output -> outputFileName = "newApkName.apk" } } } ``` How to solve issue: > > > ``` > Unable to find method 'com.android.build.gradle.api.BaseVariant.getOutputs()Ljava/util/List;'. > > ``` > > Possible causes for this unexpected error include: > Gradle's dependency cache may be corrupt (this sometimes occurs after a network connection timeout.) Re-download dependencies and sync > project (requires network) > The state of a Gradle build process (daemon) may be corrupt. Stopping all Gradle daemons may solve this problem. Stop Gradle build > processes (requires restart) > Your project may be using a third-party plugin which is not compatible with the other plugins in the project or the version of > Gradle requested by the project. > In the case of corrupt Gradle processes, you can also try closing the IDE and then killing all Java processes. > > >
I had the same error with gradle syncing, specifically tied to butterknife; my issue was solved via <https://github.com/JakeWharton/butterknife/issues/963#issuecomment-339545297> tldr in your project build file... ``` buildscript { repositories { // other repos will likely exist here, like jcenter() and mavenCentral() // add this closure maven { url "https://oss.sonatype.org/content/repositories/snapshots" } } dependencies { classpath 'com.android.tools.build:gradle:3.0.0' // change your butterknife version from 8.7.0 to 9.0.0-SNAPSHOT classpath 'com.jakewharton:butterknife-gradle-plugin:9.0.0-SNAPSHOT' // .. other project level dependencies } } ``` Also make sure your allprojects block includes ``` allprojects { repositories { maven { // this url "https://oss.sonatype.org/content/repositories/snapshots" } } } ``` And the non-butterknife-related issue was fixed by upping my `buildToolsVersion` to `"26.0.2"` in the app build file. UPDATE 4/19/2018 I have since parted ways with butterknife as it has caused more issues than any other 3rd-party library I've used. Besides, with full Kotlin support, butterknife isn't necessary.
kafka streams - how to set a new key for KTable I am new to Kafka Streams, I am using version 1.0.0. I would like to set a new key for a KTable from one of the values. When using KStream, it can be done by using the method selectKey() like this. ``` kstream.selectKey ((k,v) -> v.newKey) ``` However such a method is missing in KTable. The only way is to convert the given KTable to a KStream. Any thoughts on this issue? Is changing a key against the design of KTable?
If you want to set a new key, you need to re-group the KTable: ``` KTable newTable = table.groupBy(/*put select key function here*/) .aggregate(...); ``` Because a key must be unique for a KTable (in contrast to a KStream) it's required to specify an aggregation function that aggregates all records with the same (new) key into a single value. Since Kafka 2.5, Kafka Streams also supports the `KStream#toTable()` operator. Thus, it is also possible to do `table.toStream().selectKey(...).toTable()`. There are advantages and disadvantages for both approaches. The main disadvantage of using `toTable()` is that it will repartition the input data based on the new key, which leads to interleaved writes into the repartition topic and thus to out-of-order data. While the first approach via `groupBy()` uses the same implementation, using the aggregation function helps you to resolve "conflicts" explicitly. If you use the `toTable()` operator, a "blind" upsert based on the offset order of the repartition topic is done (this is actually similar to the code example in the other answers). Example: ``` Key | Value A | (a,1) B | (a,2) ``` If you re-key on `a` your output table would be either one of both (but it's not defined which one): ``` Key | Value Key | Value a | 1 a | 2 ``` The operation to "rekey" a table is semantically *always* ill-defined.
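As a rough sketch of the `groupBy()` approach (the String types, the `extractNewKey()` helper, and the "last write wins" conflict policy below are assumptions for illustration, not something the API prescribes):

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KTable;

// Given a KTable<String, String> named table, re-key it by a value-derived key.
// extractNewKey(value) is a hypothetical helper returning the new key for a record.
KTable<String, String> newTable = table
    .groupBy((key, value) -> KeyValue.pair(extractNewKey(value), value))
    .reduce(
        (aggValue, newValue) -> newValue, // adder: naive "last write wins" conflict policy
        (aggValue, oldValue) -> aggValue  // subtractor: keep current value when an old record is retracted
    );
```

The two reducer lambdas are exactly the place where you make the conflict resolution explicit; pick a policy that makes sense for your data instead of this naive one.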
EF5 Code First - Changing A Column Type With Migrations I am new to EF5 Code First and I'm tinkering with a proof-of-concept before embarking on a project at work. I have initially created a model that looked something like ``` public class Person { public int Id { get; set; } public string FirstName { get; set;} public string Surname {get;set;} public string Location {get;set;} } ``` And I added a few records using a little MVC application I stuck on the top. Now I want to change the Location column to an enum, something like: ``` public class Person { public int Id { get; set; } public string FirstName { get; set;} public string Surname {get;set;} public Locations Location {get;set;} } public enum Locations { London = 1, Edinburgh = 2, Cardiff = 3 } ``` When I add the new migration I get: ``` AlterColumn("dbo.People", "Location", c => c.Int(nullable: false)); ``` but when I run update-database I get an error ``` Conversion failed when converting the nvarchar value 'London' to data type int. ``` Is there a way in the migration to truncate the table before it runs the alter statement? I know I can open the database and manually do it, but is there a smarter way?
The smartest way is probably to not alter types. If you need to do this, I'd suggest you do the following steps: 1. Add a new column with your new type 2. Use `Sql()` to take over the data from the original column using an update statement 3. Remove the old column 4. Rename the new column This can all be done in the same migration, the correct SQL script will be created. You can skip step 2 if you want your data to be discarded. If you want to take it over, add the appropriate statement (can also contain a switch statement). Unfortunately Code First Migrations do not provide easier ways to accomplish this. Here is the example code: ``` AddColumn("dbo.People", "LocationTmp", c => c.Int(nullable: false)); Sql(@" UPDATE dbo.People SET LocationTmp = CASE Location WHEN 'London' THEN 1 WHEN 'Edinburgh' THEN 2 WHEN 'Cardiff' THEN 3 ELSE 0 END "); DropColumn("dbo.People", "Location"); RenameColumn("dbo.People", "LocationTmp", "Location"); ```
Google Cloud Function - Memory Limit Exceeded (2GB) - Heavy data processing I have a heavy data processing script written in Python. Each time the script processes one job, about 500MB of RAM is used. (The reason is because the script looks up historical records from a very large database.) The processing script also takes about 3 minutes for each row to run. We have deployed our python script to a Google Cloud Function. When we invoke the function to process three jobs simultaneously, the function works fine, memory usage is about 1500-1600MB; all is dandy. However, when we try to invoke the function to process 10 jobs or 100 jobs simultaneously, the function is killed as memory is exceeded. We noticed in the documentation that the memory limit for a function at any one time is 2GB. Would it be safe to say that we can't increase that to 10GB or 100GB or 1000GB so we can run more instances of the script in parallel? To be honest, why is it 2GB per function, not 2GB per invocation? I would love to have access to serverless capabilities for heavy data processing work on Google; this does not seem to be available. If so, would you say that the best way to achieve our goal is just use a stock-standard Google VM with 1000GB of RAM? Thanks.
The 2GB is per instance. When a function is triggered, an instance is spawned. If the function is not used, after a while (10 minutes, more or less, without commitment), the instance ends. However, if there is a new request and an instance is up, the existing instance is reused. And, if there are a lot of requests, new instances are spawned. An instance of a function can handle only 1 request at a time (no concurrency). So, when your instance is reused, all the elements in your execution environment are reused. If you don't clean up the memory and/or the local storage (/tmp, which is an in-memory filesystem), you have a memory leak and your function crashes. Take care of your memory and object handles, and clean up your context well. If your function can handle 1 job, it must be able to handle 10 or 100 successive jobs without crashing. **UPDATE** I'm not a Python expert but for cleaning the memory I use this ``` import gc gc.collect() ```
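Building on that, here is a minimal sketch of the kind of per-invocation cleanup being described (the `process_job()` call is a placeholder for your own heavy processing code):

```python
import gc
import glob
import os


def handle_job(request):
    try:
        # Placeholder for the heavy, memory-hungry processing step.
        return process_job(request)
    finally:
        # /tmp is an in-memory filesystem on Cloud Functions, so leftover files
        # count against the 2GB instance limit -- remove them after every job.
        for path in glob.glob('/tmp/*'):
            if os.path.isfile(path):
                os.remove(path)
        # Release any objects that are no longer referenced before the
        # instance is reused for the next invocation.
        gc.collect()
```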
Running Cucumber tests on different environments I'm using Cucumber and Capybara for my automated front end tests. I have two environments that I would like to run my tests on. One is a staging environment, and the other is the production environment. Currently, I have my tests written to access staging directly. ``` visit('https://staging.somewhere.com') ``` I would like to re-use the tests in production (<https://production.somewhere.com>). Would it be possible to store the URL in a variable in my step definitions ``` visit(domain) ``` and define domain using an environment variable passed from the command line? Like ``` $> bundle exec cucumber features DOMAIN=staging ``` if I want to point the tests to my staging environment, or ``` $> bundle exec cucumber features DOMAIN=production ``` if I want it to run in production? How do I go about setting this up? I'm fairly new to Ruby and I've been searching the forums for straightforward information but could not find any. Let me know if I can provide more information. Thanks for your help!
In the project's config folder, create a config.yml file ``` --- staging: :url: https://staging.somewhere.com production: :url: https://production.somewhere.com ``` The extra colon in the yml file allows the hash key to be called as a symbol. In your support/env.rb file, add the following ``` require 'yaml' ENV['TEST_ENV'] ||= 'staging' project_root = File.expand_path('../..', __FILE__) $BASE_URL = YAML.load_file(project_root + "/config/config.yml")[ENV['TEST_ENV']][:url] ``` This will default to the staging environment unless you override the TEST\_ENV. Then, from your step or hook, you can call: ``` visit($BASE_URL) ``` or you might need :/ ``` visit "#{$BASE_URL}" ``` This will allow you to use ``` bundle exec cucumber features TEST_ENV=production ```
How to use getters and setters in v-model for class based component? I'm a newbie to vuejs and i'm from angular background. I'm trying to bind my setter/getter in v-model for an input. But that is not working as I'm intending it to. But when I tried to bind it to variable directly, then it's working fine. **Following is my code:** ## My Component TS file: ``` import { Component, Vue } from 'vue-property-decorator'; @Component({ components: {} }) export default class MyComponent extends Vue { private _username: string = ''; private _password: string = ''; get username(): string { return this._username; } set username(value: string) { this._username = value; } get password(): string { return this._password; } set password(value: string) { this._password = value; } public login() { console.log(this.username, this.password); } } ``` ## MyComponent Vue file: ``` <template> <form @submit.prevent="login"> <v-text-field v-model="username" label="Username" required> </v-text-field> <v-text-field v-model="password" :type="'password'" label="Password" required> </v-text-field> <v-btn large type="submit">Login</v-btn> <v-btn large>Reset</v-btn> </form> </template> ``` I was expecting username and password value typed into the respective fields to be displayed in the console. But instead all I get is `undefined undefined` (Even though I have initialized the variable to empty string). I did go through the documentation, but it was not of much help since i'm using typescript with vue-class-component. Can anyone please point me in the right direction.
[Reference](https://v2.vuejs.org/v2/api/#data) > > Properties that start with \_ or $ will not be proxied on the Vue > instance because they may conflict with Vue’s internal properties and > API methods. You will have to access them as vm.$data.\_property. > > > Simply don't use the \_ prefix to solve the problem. ``` import { Component, Vue } from 'vue-property-decorator'; @Component({ components: {} }) export default class MyComponent extends Vue { private xusername: string = ''; private xpassword: string = ''; get username(): string { return this.xusername; } set username(value: string) { this.xusername = value; } get password(): string { return this.xpassword; } set password(value: string) { this.xpassword = value; } public login() { console.log(this.username, this.password); } } ```
Linspace applied on array Given an array like `a = [ -1; 0; 1];`. For each `a(i)`, I need to compute a linearly spaced vector with `linspace(min(a(i),0),max(a(i),0),3);`, where each linspace-vector should be stored into a matrix: ``` A = [-1 -0.5 0; 0 0 0; 0 0.5 1]; ``` With a for loop, I can do this like so: ``` for i=1:3 A(i,:) = linspace(min(a(i),0),max(a(i),0),3); end ``` How can I achieve this without using loops?
The fastest way I can think of is calculating the step size and constructing the vectors from that using implicit expansion. ``` a = [ -1; 0; 1]; n = 3; stepsizes = (max(a,0)-min(a,0))/(n-1); A = min(a,0) + (0:(n-1)).*stepsizes; ``` **Timeit:** A couple of `timeit` results (use `timeit(@SO)` and uncomment the block to be timed): ``` function SO() n = 1e3; m = 1e5; a = randi(9,m,1)-4; % %Wolfie % aminmax = [min(a, 0), max(a,0)]'; % A = interp1( [0,1], aminmax, linspace(0,1,n) )'; % %Nicky % stepsizes = (max(a,0)-min(a,0))/(n-1); % A = min(a,0) + (0:(n-1)).*stepsizes; % %Loop % A = zeros(m,n); % for i=1:m % A(i,:) = linspace(min(a(i),0),max(a(i),0),n); % end %Arrayfun: A = cell2mat(arrayfun(@(x) linspace(min(x,0),max(x,0),n),a,'UniformOutput',false)); ``` Then the times are: - Wolfie: 2.2243 s - Mine: 0.3643 s - Standard loop: 1.0953 s - `arrayfun`: 2.6298 s
Parsing dynamic JSON in Android I have a JSON object like this: ``` { Yg7R_: { fld_invoice: "Yg7R_" fld_order_id: "5" fld_orders: { 4: { fld_oiid: "4" fld_date: "2014-03-27 00:00:00" fld_name: "20140327_H5epz2y4OB_IMG_20140326_020341.jpg" fld_loc: "../orders/oid_5/" } } } LldP_: { fld_invoice: "LldP_" fld_order_id: "7" fld_orders: { 6: { fld_oiid: "6" fld_date: "2014-03-27 00:00:00" fld_name: "20140327_SovH7Xf3n2_IMG_20140326_020418.jpg" fld_loc: "../orders/oids_7/" } } } NYEO: { fld_invoice: "NYEO" fld_order_id: "24" fld_orders: { 27: { fld_oiid: "27" fld_date: "2014-03-27 00:00:00" fld_name: "20140327_duLLsssVWA_IMG_20140326_020341.jpg" fld_loc: "orders/oid_24/" } 28: { fld_oiid: "28" fld_date: "2014-03-27 00:00:00" fld_name: "20140327_F8ayA4vWrP_IMG_20140326_020405.jpg" fld_loc: "orders/oid_24/" } 29: { fld_oiid: "29" fld_date: "2014-03-27 00:00:00" fld_name: "20140327_1FlYnuNbtr_IMG_20140326_020418.jpg" fld_loc: "orders/oid_24/" } } } } ``` I tried to parse it like this: ``` // result is the above json object. for(int i = 0; i < result.length(); i++){ try { invoice = result.getJSONArray(<Some name here>); // this name is dynamic Log.d("invoice "+i, invoice); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } ``` but I don't get how I could access the dynamic name. How could we parse such JSON? **--Note--** Yes, all the values have their quotes, e.g. "Yg7R\_"
Try this dynamic json parser ``` private void parseJson(JSONObject data) { if (data != null) { Iterator<String> it = data.keys(); while (it.hasNext()) { String key = it.next(); try { if (data.get(key) instanceof JSONArray) { JSONArray arry = data.getJSONArray(key); int size = arry.length(); for (int i = 0; i < size; i++) { parseJson(arry.getJSONObject(i)); } } else if (data.get(key) instanceof JSONObject) { parseJson(data.getJSONObject(key)); } else { System.out.println(key + ":" + data.getString(key)); } } catch (Throwable e) { try { System.out.println(key + ":" + data.getString(key)); } catch (Exception ee) { } e.printStackTrace(); } } } } ```
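If you only need the top-level dynamic keys ("Yg7R\_", "LldP\_", ...) rather than a full recursive walk, a shorter sketch using the same `keys()` iterator could look like this (field names are taken from the JSON in the question):

```java
import java.util.Iterator;
import android.util.Log;
import org.json.JSONException;
import org.json.JSONObject;

try {
    Iterator<String> keys = result.keys();
    while (keys.hasNext()) {
        String invoiceKey = keys.next();                 // e.g. "Yg7R_"
        JSONObject invoice = result.getJSONObject(invoiceKey);
        String orderId = invoice.getString("fld_order_id");
        JSONObject orders = invoice.getJSONObject("fld_orders");
        Log.d("invoice " + invoiceKey, "order id: " + orderId + ", items: " + orders.length());
    }
} catch (JSONException e) {
    e.printStackTrace();
}
```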
Can Cocos2D-Swift code written in Swift be ported to Android? I'm new to game development and I'm very interested in Cocos2D-Swift, specially because of the Android compatibility. I know Objective-C code can be ported to Android, but can I say the same of Swift? Thanks in advance.
Swift support for Android via the SpriteBuilder Android plugin is currently in development. However there's no release date yet. Since you're new to game development I think it's fair to say that you can use Swift without having to worry about cross-platform development at this point. It's more important to get your first app running and out there, or just getting it to a stage where you're so hopelessly lost that you'd rather start it anew or a different project altogether, taking with you what you've learned. Personally I find that every new game I start developing with a new engine ends up being completely refactored at least once throughout development just due to how much better you can work with the engine once you've learned how to (best) use it. If your app is successful and you want to port it to Android but Swift support weren't available yet and you really need to port, there's always the option to transcribe Swift code to Objective-C. It's fairly straightforward (albeit tedious) and if you're lucky someone even made a two-way Swift-ObjC converter by that time.
SwiftUI and the three-finger undo gesture I'm trying to implement undo in a SwiftUI app for iOS, but I haven't been able to get the undo gestures to work. Here's a sample that demonstrates the problem: ``` class Model: ObservableObject { @Published var active = false func registerUndo(_ newValue: Bool, in undoManager: UndoManager?) { let oldValue = active undoManager?.registerUndo(withTarget: self) { target in target.active = oldValue } active = newValue } } struct UndoTest: View { @ObservedObject private var model = Model() @Environment(\.undoManager) var undoManager var body: some View { VStack { Toggle(isOn: Binding<Bool>( get: { self.model.active }, set: { self.model.registerUndo($0, in: self.undoManager) } )) { Text("Toggle") } .frame(width: 120) Button(action: { self.undoManager?.undo() }, label: { Text("Undo") .foregroundColor(.white) .padding() .background(self.undoManager?.canUndo == true ? Color.blue : Color.gray) }) } } } ``` Switching the toggle around then tapping the undo button works fine. Using the three-finger undo gesture or shaking to undo does nothing. How do you tie in to the system gesture?
It appears that the editing gestures require the window to have first responder, and that SwiftUI doesn't set up anything that the `UIWindow` wants to pick as first responder by default. If you subclass `UIHostingController`, and in your subclass, you override `canBecomeFirstResponder` to return `true`, then the `UIWindow` will set your controller as first responder by default, which appears sufficient to enable the editing gestures. I tested the following code on my iPad Pro running iPadOS 13.1 beta 2 (17A5831c). It mostly works. I believe there is an iOS bug, perhaps fixed in a newer beta: when the undo stack is empty, the gestures sometimes don't work (even when a redo action is possible). Switching to the home screen and then back to the test app (without killing the test app) seems to make the editing gestures work again. ``` import UIKit import SwiftUI class MyHostingController<Content: View>: UIHostingController<Content> { override var canBecomeFirstResponder: Bool { true } } class Model: ObservableObject { init(undoManager: UndoManager) { self.undoManager = undoManager } let undoManager: UndoManager @Published var active = false { willSet { let oldValue = active undoManager.registerUndo(withTarget: self) { me in me.active = oldValue } } } } struct ContentView: View { @ObservedObject var model: Model @Environment(\.undoManager) var undoManager var body: some View { VStack { Toggle("Active", isOn: $model.active) .frame(width: 120) HStack { Button("Undo") { withAnimation { self.undoManager?.undo() } }.disabled(!(undoManager?.canUndo ?? false)) Button("Redo") { withAnimation { self.undoManager?.redo() } }.disabled(!(undoManager?.canRedo ?? false)) } } } } class SceneDelegate: UIResponder, UIWindowSceneDelegate { var window: UIWindow? func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) { guard let scene = scene as? UIWindowScene else { return } let window = UIWindow(windowScene: scene) let model = Model(undoManager: window.undoManager!) let contentView = ContentView(model: model) window.rootViewController = MyHostingController(rootView: contentView) self.window = window window.makeKeyAndVisible() } } ```
Mac Dock like magnification for iPad I am trying to bring out a Dock-like magnification effect for my iPad app through the iCarousel library. With that I am able to zoom in the center item of the carousel with the following piece of code, but I am also trying to zoom the adjacent items of the centre item with a zoom level a little less than the centre item's. ``` - (CATransform3D)carousel:(iCarousel *)_carousel itemTransformForOffset:(CGFloat)offset baseTransform:(CATransform3D)transform { CGFloat MAX_SCALE = 1.95f; //max scale of center item CGFloat MAX_SHIFT = 40.0f; //amount to shift items to keep spacing the same CGFloat shift = fminf(1.0f, fmaxf(-1.0f, offset)); CGFloat scale = 1.0f + (1.0f - fabs(shift)) * (MAX_SCALE - 1.0f); transform = CATransform3DTranslate(transform, offset * _carousel.itemWidth * 1.08f + shift * MAX_SHIFT, 0.0f, 0.0f); return CATransform3DScale(transform, scale, scale, scale); } ``` Looking forward for any kind of help. thanks.
This function could be your answer: ![enter image description here](https://i.stack.imgur.com/8y5ej.gif) its graph (for scaleMax = 3, xFactor = 1): ![enter image description here](https://i.stack.imgur.com/iz7xg.png) This function is used directly for calculating the scale factor from the carousel offset. In addition you need to shift the elements to left and right, so that don't overlap (as you already did). This can be done either by shifting the items by the function's integral, which works, but the gap in the center is huge this way. Or it can be calculated manually by taking a sum of all scaled items. The gap can stay constant, or it can be scaled separately. Notice that the scale is equal to 1 in the center and descends to 1/scale\_max by the edges. This is because scaling down doesn't create undesirable pixelated effects. Make your item view as you want it to appear in the center and the views on the edges will get scaled down. This could be the usage: ``` -(CGFloat) scaleForX:(CGFloat)x xFactor:(CGFloat)xFactor centerScale:(CGFloat)centerScale { return (1+1/(sqrtf(x*x*x*x*xFactor*xFactor*xFactor*xFactor+1))*(centerScale-1.0))/centerScale; } - (CATransform3D)carousel:(iCarousel *)carousel itemTransformForOffset:(CGFloat)offset baseTransform:(CATransform3D)transform { //items in the center are scaled by this factor const CGFloat centerScale = 4.0f; //the larger the xFactor, the smaller the magnified area const CGFloat xFactor = 1.5f; //should the gap also be scaled? or keep it constant. const BOOL scaleGap = NO; const CGFloat spacing = [self carousel:carousel valueForOption:iCarouselOptionSpacing withDefault:1.025]; const CGFloat gap = scaleGap?0.0:spacing-1.0; //counting x offset to keep a constant gap CGFloat scaleOffset = 0.0; float x = fabs(offset); for(;x >= 0.0; x-=1.0) { scaleOffset+=[self scaleForX:x xFactor:xFactor centerScale:centerScale]; scaleOffset+= ((x>=1.0)?gap:x*gap); } scaleOffset -= [self scaleForX:offset xFactor:xFactor centerScale:centerScale]/2.0; scaleOffset += (x+0.5)*[self scaleForX:(x+(x>-0.5?0.0:1.0)) xFactor:xFactor centerScale:centerScale]; scaleOffset *= offset<0.0?-1.0:1.0; scaleOffset *= scaleGap?spacing:1.0; CGFloat scale = [self scaleForX:offset xFactor:xFactor centerScale:centerScale]; transform = CATransform3DTranslate(transform, scaleOffset*carousel.itemWidth, 0.0, 0.0); transform = CATransform3DScale(transform, scale, scale, 1.0); return transform; } ``` with result: ![enter image description here](https://i.stack.imgur.com/EGKXq.png) You can try to alter constants for different behaviors. Also changing the exponent to another even number can further widen the peak and sharpen the descent to the minimum scale.
sklearn kfold returning wrong indexes in python I am using the kfold function from the sklearn package in python on a df (data frame) with non-continuous row indexes. this is the code: ``` kFold = KFold(n_splits=10, shuffle=True, random_state=None) for train_index, test_index in kFold.split(dfNARemove):... ``` I get some train\_index or test\_index that doesn't exist in my df. what can I do?
The kFold iterator yields positional indices of the train and validation objects of the DataFrame, not their non-continuous indices. You can access your train and validation objects by using the `.iloc` pandas method: ``` kFold = KFold(n_splits=10, shuffle=True, random_state=None) for train_index, test_index in kFold.split(dfNARemove): train_data = dfNARemove.iloc[train_index] test_data = dfNARemove.iloc[test_index] ``` If you want to know which non-continuous indices are used for train\_index and test\_index on each fold, you can do the following: ``` non_continuous_train_index = dfNARemove.index[train_index] non_continuous_test_index = dfNARemove.index[test_index] ```
Why do some spatial functions not exist on my MySQL server? I have installed a new local MySQL server (version 8) for dev usage. I want to use spatial functions, but some of them do not exist. This script returns me a good value: ``` create database test; use test; select st_x(point(15, 20)); ``` > > 15 > > > So, I thought the spatial extension was natively installed, but as soon as I use other functions like `geomfromtext`, my script throws an error: ``` create database test; use test; SELECT geomfromtext('Point(15 20)'); ``` > > Error Code: 1305. FUNCTION test.geomfromtext does not exist 0.000 sec > > > I don't understand; the autocompletion of the MySQL Workbench console completes geometry. [![Autocompletion](https://i.stack.imgur.com/IeqYM.png)](https://i.stack.imgur.com/IeqYM.png) Which step did I forget during installation?
I just flew over the [official documentation](https://dev.mysql.com/doc/refman/8.0/en/spatial-analysis-functions.html) and it seems that the function is called [ST\_GeomFromText()](https://dev.mysql.com/doc/refman/8.0/en/gis-wkt-functions.html#function_st-geomfromtext) in MySQL 8.0. > > In MySQL 5.7, several spatial functions available under multiple names > were deprecated to move in the direction of making the spatial > function namespace more consistent, the goal being that each spatial > function name begin with ST\_ if it performs an exact operation, or > with MBR if it performs an operation based on minimum bounding > rectangles. In MySQL 8.0, the deprecated functions are removed to > leave only the corresponding ST\_ and MBR functions: > > > - These functions are removed in favor of the MBR names: Contains(), Disjoint(), Equals(), Intersects(), Overlaps(), Within(). > - These functions are removed in favor of the ST\_ names: Area(), AsBinary(), AsText(), AsWKB(), AsWKT(), Buffer(), Centroid(), > ConvexHull(), Crosses(), Dimension(), Distance(), EndPoint(), > Envelope(), ExteriorRing(), GeomCollFromText(), GeomCollFromWKB(), > GeomFromText(), GeomFromWKB(), GeometryCollectionFromText(), > GeometryCollectionFromWKB(), GeometryFromText(), GeometryFromWKB(), > GeometryN(), GeometryType(), InteriorRingN(), IsClosed(), IsEmpty(), > IsSimple(), LineFromText(), LineFromWKB(), LineStringFromText(), > LineStringFromWKB(), MLineFromText(), MLineFromWKB(), > MPointFromText(), MPointFromWKB(), MPolyFromText(), MPolyFromWKB(), > MultiLineStringFromText(), MultiLineStringFromWKB(), > MultiPointFromText(), MultiPointFromWKB(), MultiPolygonFromText(), > MultiPolygonFromWKB(), NumGeometries(), NumInteriorRings(), > NumPoints(), PointFromText(), PointFromWKB(), PointN(), > PolyFromText(), PolyFromWKB(), PolygonFromText(), PolygonFromWKB(), > SRID(), StartPoint(), Touches(), X(), Y(). > - GLength() is removed in favor of ST\_Length(). > > >
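So the failing query from the question should work once the `ST_` prefix is used, for example:

```sql
-- ST_GeomFromText() is the MySQL 8.0 name of the removed GeomFromText()
SELECT ST_AsText(ST_GeomFromText('POINT(15 20)'));  -- POINT(15 20)
SELECT ST_X(ST_GeomFromText('POINT(15 20)'));       -- 15
```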
Using Windows Authentication inside my own login form I have WPF application that has a login form. I would like to make all existing windows users that belong to some specific group able to log into my application. So what I need is a way after the user have given his username and password to see if this is a user, belonging to the wanted group, and that the password is correct. The feedback I can use to decide if the user gets logged in or not.
If you need to find out if the user has membership to some AD group, you will need to use the group's SID if the user is not a "direct" member of the group (i.e. the user is a member of a nested group which itself is a member of the 'desired' AD group). (I've used this for years, but long ago lost the link to where I found it. I *believe* there's actually a simpler way to check for nested groups in DirectoryServices 4.0, but I have not used it). If you're using .NET 3.5 (as indicated in the link from Travis), you can check the user's credentials like this: ``` using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) { if (pc.ValidateCredentials(username, password)) { /* Check group membership */ } } ``` If you are not using .NET 3.5, you can still check the credentials like this: ``` var user = new DirectoryEntry("", username, password); try { user.RefreshCache(); /* Check group membership */ } catch (DirectoryServicesCOMException ex) { /* Invalid username/password */ } finally { user.Close(); } ``` Then, to check the AD group membership, use the following: ``` var user = new DirectoryEntry("", username, password); var searcher = new DirectorySearcher(); searcher.Filter = "(&(objectCategory=group)(samAccountName=" + YourGroupName + "))"; var group = searcher.FindOne(); if (group != null && IsMember(group.GetDirectoryEntry(), user)) /* User is a direct OR nested member of the AD group */ ``` The IsMember helper method: ``` static bool IsMember(DirectoryEntry group, DirectoryEntry user) { group.RefreshCache(new string[] { "objectSid" }); SecurityIdentifier groupSID = new SecurityIdentifier((byte[])group.Properties["objectSid"].Value, 0); IdentityReferenceCollection refCol; user.RefreshCache(new string[] { "tokenGroups" }); IdentityReferenceCollection irc = new IdentityReferenceCollection(); foreach (byte[] sidBytes in user.Properties["tokenGroups"]) { irc.Add(new SecurityIdentifier(sidBytes, 0)); } refCol = irc.Translate(typeof(NTAccount)); PropertyValueCollection props = user.Properties["tokenGroups"]; foreach (byte[] sidBytes in props) { SecurityIdentifier currentUserSID = new SecurityIdentifier(sidBytes, 0); if (currentUserSID.Equals(groupSID)) { return true; } } return false; } ```
Javascript rich text editor, contenteditable area loses focus after button is clicked I have a simple JavaScript rich text editor consisting only of a bold button that has the following onclick: ``` document.execCommand('bold', false) ``` And simple HTML... ``` <div contenteditable="true"> ``` My problem is that when I click the bold button, the text area loses its focus. Is there some solution for that?
Well the focus moves to the button so you need to cancel the click action so the focus is not lost in the content editable element. ``` document.querySelector(".actions").addEventListener("mousedown", function (e) { var action = e.target.dataset.action; if (action) { document.execCommand(action, false) //prevent button from actually getting focused e.preventDefault(); } }) ``` ``` [contenteditable] { width: 300px; height: 300px; border: 1px solid black; } ``` ``` <div class="actions"> <button data-action="bold">bold</button> <button data-action="italic">italic</button> </div> <div contenteditable="true"></div> ```
How to return all items in an ObservableCollection which satisfy a condition C# I'm trying to find a neat way to find all of the values in an observable collection which meet a certain criteria. For this example, to keep things simple, let's say the collection contains ints and I'm trying to find all of the items that are greater than 5. The best way I currently know of doing it is like this ``` ObservableCollection<int> findAllGreaterThanFive (ObservableCollection<int> numbers) { ObservableCollection<int> numbersGreaterThanFive = new ObservableCollection<int>(); foreach(int number in numbers) { if (number > 5) { numbersGreaterThanFive.Add(number); } } return numbersGreaterThanFive; } ``` Obviously ignore any simple solutions that take advantage of the fact I'm looking for ints; I need a solution that works with an ObservableCollection of any type with any condition. I was just wondering if checking every item with the foreach loop and the conditional is the best way of doing it?
You can use the System.Linq namespace: add the using statement `using System.Linq;` and after that you can use the following `Where` method. ``` ObservableCollection<int> list = new ObservableCollection<int>(); list.Where(i => i > 5).ToList(); ``` You can use any kind of object, like: ``` ObservableCollection<DataItem> list = new ObservableCollection<DataItem>(); list.Where(i => i.ID > 10); ``` The code above returns DataItems with ID greater than 10. If you are sure there's only one record satisfying the condition, you can use the `First()` method like: ``` ObservableCollection<DataItem> list = new ObservableCollection<DataItem>(); list.First(i => i.ID == 10); ``` The above code returns the DataItem with ID 10. But if there's no record with ID = 10 then it will throw an exception. Avoid this if you're not sure there's only one record satisfying the condition. Also you can use the `FirstOrDefault()` method. ``` ObservableCollection<DataItem> list = new ObservableCollection<DataItem>(); DataItem item = list.FirstOrDefault(i => i.ID == 10); if(item != null) { //DoWork } ``` If there's no record with ID = 10, then item will be null.
OpenGL texture vs FBO I am pretty new to OpenGL and just want some quick advice. I want to draw a tiled background for a game. I guess this means drawing a whole bunch of sprite-like objects to the screen. I have about 48 columns by 30 rows, therefore 1440 tiles (tiles change depending on the game, so I can't pre-render the entire grid). Currently on start up I create 6 different FBOs (using the [ofFbo](http://ofxfenster.undef.ch/doc/classofFbo.html) class from OpenFrameworks) that act as 6 different tiles. I then draw these buffers, up to a maximum of 1440 times, selecting one for each tile. So there are only ever 6 fbos, just being drawn a lot of times. (The buffers are drawn to on start up, and are never changed once created). ``` for (int x=0; x<columns; x++) { for (int y=0; y<rows; y++) { // Get tile type and rotation from tile struct. tileNum = tile.form rotNum = tile.rot // Draw image/texture/fbo that's stored in a std vector. tileSet->draw(x*TILESIZE, y*TILESIZE, TILESIZE, TILESIZE); } } ``` I think I am going about this the wrong way, and was wondering if anyone knew the best / optimal way to do this. Think something like an old school 8 bit video game background. Here is an image of my work in progress. ![work in progress](https://i.stack.imgur.com/KIQ8n.png) The structures in the background are the sprites I'm talking about; the different pieces are the inny corner, outty (concave) corner, square fill, and straight edge. Sorry for messing around with the question.
I don't really understand your question. A texture is a (usually 2-dimensinal) image that can be applied to polygons, whereas an [FBO (framebuffer object)](http://www.songho.ca/opengl/gl_fbo.html) is a kind of offscreen buffer that can be rendered into instead of the screen, usually used to render directly into textures. So I don't see where your question could be "textures vs FBOs" in any way, as they're quite orthogonal concepts and using FBOs doesn't make any sense in your example. Maybe you're messing up FBOs with [VBOs (vertex buffer objects)](http://www.songho.ca/opengl/gl_vbo.html) or [PBOs (pixel buffer objects)](http://www.songho.ca/opengl/gl_fbo.html)? But then your question is still quite ill-posed. **EDIT:** If you're really using FBOs, and that only because you think they magically make the texture they reference to be stored in video memory, then rest assured that textures are always stored in video memory and using an FBO in your case is completely useless.
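In openFrameworks terms, that just means loading each of the six tiles once as an `ofImage` (which uploads a texture to video memory) and drawing it many times -- no offscreen buffer involved. A rough sketch (the file names, the tile-struct fields and the exact load call are assumptions and depend on your project and openFrameworks version):

```cpp
// setup(): load the six tile images once; each one owns a GL texture in video memory.
std::vector<ofImage> tileSet(6);
for (int t = 0; t < 6; t++) {
    tileSet[t].load("tile_" + ofToString(t) + ".png"); // older OF versions use loadImage()
}

// draw(): pick the right texture per grid cell and draw it scaled to the tile size.
for (int x = 0; x < columns; x++) {
    for (int y = 0; y < rows; y++) {
        int tileNum = tiles[x][y].form; // assumed tile struct from the question
        tileSet[tileNum].draw(x * TILESIZE, y * TILESIZE, TILESIZE, TILESIZE);
    }
}
```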
Installing linux onto late 2008 Macbook Pro and getting corrupted screen This is a somewhat general issue that I've been running into while trying to install Linux onto my late 2008 Macbook Pro. I've tried the following distros: - Linux Mint 10 - Linux Mint 14 - Ubuntu 12.10 - Fedora 17 What's happening is that at some point during the boot process, something the Macbook Pro doesn't like is crashing it. I think it is related to the gpu drivers, but I can't tell for sure. What happens is the system totally freezes and the top one-third of the screen is all corrupted. I tried changing the runlevel to 3 so that X does not start while I was attempting to boot Fedora 17's live install, but a few seconds after getting the initial login prompt, it went all corrupted. Up to that point however, everything was fine. It also does not seem to make a difference if I run the "windows" boot loader (which is a low-res shell) or the EFI boot loader (which is a high-res shell). Both exhibit the same behavior. I did somehow manage to get Linux Mint to boot to a desktop on ONE occasion, however it froze shortly afterwards. FWIW, Mac OS X 10.6 works perfectly fine on this machine. I also tried installing rEFIt, but that did not help at all.
I believe I figured out the cause of the issue. It was indeed an incompatibility with the way the Apple hardware communicated with the VESA drivers, I believe when switching modes on the built-in screen. When `nomodeset` was added to the kernel parameters, the system could proceed to boot without crashing. The Macbook Pro is Late 2008, 5,1 with nVIDIA 9600m GT. `nomodeset` is only necessary until you can install the proprietary nVIDIA drivers for your distro. To recap: - Installed rEFIt while in OS X (run `/efi/refit/enable.sh` if rEFIt does not work automatically) - I ran the Mint 14 live DVD by adding `nomodeset` to the kernel parameters in the grub bootloader. - Ran Mint installer - Did partitioning - mounted / to /dev/sda4, also installed grub to this partition - swap on /dev/sda3 (because I placed some space in-between the Mac partition) - Finished mint installer, rebooted. - Booted from linux partition using rEFIt. - Again, added `nomodeset` to kernel parameters so I could boot. - Ran **Software Sources** application - Went to **Additional Drivers** tab - Selected first NVIDIA driver (proprietary, tested), applied changes - Waited for it to finish, then rebooted. - Booted into linux again and all was well (`nomodeset` automatically removed as it is a temporary change) Phew.
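For reference, adding `nomodeset` from the GRUB menu means pressing `e` on the boot entry and appending it to the `linux` line, roughly like this (the kernel version, UUID and other flags will differ on your system):

```
linux /boot/vmlinuz-3.5.0-17-generic root=UUID=... ro quiet splash nomodeset
```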
Rails Model Versioning with Approval I have a model that members will be able to update but their changes won't take effect until an admin approves their changes. Has anyone solved this same problem, and what gems would you recommend for versioning? PaperTrail? Vestal versions?
Perhaps you could use [vestal\_versions](https://github.com/bousquet/vestal_versions) with a slight twist. Add an after\_update action in your controller which rolls back to the previous version if the user who made the change is not an admin. Then you can set the instance's status to pending, which would alert an admin for review. The admin would then just review the latest version and move it up if approved. ``` # model_controller.rb after_update :rollback_if_not_admin def rollback_if_not_admin unless current_user.admin? #roll back changes version = @model_instance.versions.count if version > 1 @model_instance.reset_to!(version - 1) @model_instance.status = "pending" end flash[:notice] = "Your changes will be reflected once an admin has reviewed them" redirect_to @model_instance end ```
How to setup angular 4 inside a maven based java war project I'm going a bit crazy because I can't find a guide to set up an Angular 4 app inside a Java WAR project that will be built with Maven. This is because I want to run it on a WildFly server. Any help? Thanks
I had similar requirement to have one source project which has java web-services project as well as angular project(an angular-cli based project) and maven build should create a war with all angular files in it. I used [maven-frontend-plugin](https://github.com/eirslett/frontend-maven-plugin) with few configuration changes for base path. The goal was to create a war file with all the java code in it plus all the aot compiled angular code in root folder of war, all this with single command `mvn clean package`. One more thing for all this to work is to avoid conflict between angular-app router urls and your java application urls, You need to use HashLocationStrategy. one way set it up in app.module.ts like below app.module.ts - ``` providers: [ { provide: LocationStrategy, useClass: HashLocationStrategy }, ] ``` Folder structure for Angular App is below- ### angular-project - dist - e2e - node\_modules - public - src - app - assets - environments - favicon.ico - index.html - main.ts - polyfills.ts - style.css - tsconfig.json - typings.d.ts - etc-etc - tmp - .angular-cli.json - .gitignore - karma.conf.js - package.json - README.md - tslint.json - etc - etc ### Maven Project - - src - main - java - resources - webapp - WEB-INF - web.xml - angular-project (**place your angular project here**) - node\_installation - pom.xml Add maven-frontend-plugin configuration to pom.xml ``` <properties> <angular.project.location>angular-project</angular.project.location> <angular.project.nodeinstallation>node_installation</angular.project.nodeinstallation> </properties> <plugin> <groupId>com.github.eirslett</groupId> <artifactId>frontend-maven-plugin</artifactId> <version>1.0</version> <configuration> <workingDirectory>${angular.project.location}</workingDirectory> <installDirectory>${angular.project.nodeinstallation}</installDirectory> </configuration> <executions> <!-- It will install nodejs and npm --> <execution> <id>install node and npm</id> <goals> <goal>install-node-and-npm</goal> </goals> <configuration> <nodeVersion>v6.10.0</nodeVersion> <npmVersion>3.10.10</npmVersion> </configuration> </execution> <!-- It will execute command "npm install" inside "/e2e-angular2" directory --> <execution> <id>npm install</id> <goals> <goal>npm</goal> </goals> <configuration> <arguments>install</arguments> </configuration> </execution> <!-- It will execute command "npm build" inside "/e2e-angular2" directory to clean and create "/dist" directory --> <execution> <id>npm build</id> <goals> <goal>npm</goal> </goals> <configuration> <arguments>run build</arguments> </configuration> </execution> </executions> </plugin> <!-- Plugin to copy the content of /angular/dist/ directory to output directory (ie/ /target/transactionManager-1.0/) --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-resources-plugin</artifactId> <version>2.4.2</version> <executions> <execution> <id>default-copy-resources</id> <phase>process-resources</phase> <goals> <goal>copy-resources</goal> </goals> <configuration> <overwrite>true</overwrite> <!-- This folder is the folder where your angular files will be copied to. It must match the resulting war-file name. So if you have customized the name of war-file for ex. 
as "app.war" then below value should be ${project.build.directory}/app/ Value given below is as per default war-file name --> <outputDirectory>${project.build.directory}/${project.artifactId}-${project.version}/</outputDirectory> <resources> <resource> <directory>${project.basedir}/${angular.project.location}/dist</directory> </resource> </resources> </configuration> </execution> </executions> </plugin> ``` As above plugin call 'npm run build' internally, make sure package.json should have build command in script like below - package.json ``` "scripts": { -----//-----, "build": "ng build --prod", -----//------ } ``` index.html should always be loaded when someone hit application from browser that's why make it a welcome file . For web services lets say we have path /rest-services/\* will explain this later. web.xml - ``` <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <servlet-mapping> <servlet-name>restservices</servlet-name> <url-pattern>/restservices/*</url-pattern> </servlet-mapping> ``` The above configuration is enough if your application does not have any context path and is deployed on root path on server. But if your application has any context path like <http://localhost:8080/myapplication/> then make changes to index.html file as well - angular-project/src/index.html - Here document.location will be myapplication/ (the context path of your app otherwise / if application has no context path ) The purpose of making context path a base path for angular-app is that whenever you make ajax http call from angular, it will prepend base path to url. for example if i try to call 'restservices/persons' then it will actually make calls to '<http://localhost:8080/myapplication/restservices/persons>' index.html ``` <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>E2E</title> <script>document.write('<base href="' + document.location + '" />'); </script> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="icon" type="image/x-icon" href="favicon.ico"> </head> <body> <app-root></app-root> </body> ``` After all above changes once you run `mvn clean package` it will create required war. Check if all the content of angular 'dist' folder is in root of war file.
Map strings to (const) ints I'm imagining go generate would be a good tool for this. I want to convert strings to `int`s to save space. ``` type Source int func NewSource(s string) Source { switch s { case "Twitter": return Twitter case "Facebook": return Facebook case "Gplus": return Gplus case "Spotify": return Spotify case "Linkedin": return Linkedin case "Github": return Github case "Lastfm": return Lastfm default: panic(ErrUnknownSourceType) } } const ( Twitter Source = iota Facebook Gplus Spotify Linkedin Github Lastfm ) ```
The shortest (shortest for you) would indeed be to use `go generate`. If you don't want to do that: You have to enumerate your source names and source values to associate them, you can't avoid that. But this enumeration and pairing can be shorter by using a `map[string]Source`: ``` var srcMap = map[string]Source{ "Twitter": Twitter, "Facebook": Facebook, "Gplus": Gplus, "Spotify": Spotify, "Linkedin": Linkedin, "Github": Github, "Lastfm": Lastfm, } func NewSource(s string) Source { if src, ok := srcMap[s]; ok { return src } panic(ErrUnknownSourceType) } ``` Also note that panicking is a little "strong" reaction for an invalid source name. I would rather return an `error` along with the source, or return a special `UnknownSrc` source instead of panicking. And while we're at it: you should exploit the zero value of `Source` for representing `UnknownSrc`, and that way you don't even have to use the comma-ok idiom when checking in the map: indexing a map returns the zero value of the value type if the key is not found. So: ``` const ( UnknownSrc Source = iota // It will be 0, zero value for the underlying type (int) Twitter // ... and your other sources ) ``` And this way converting a source name to the `Source` type is a one-liner: ``` func NewSource(s string) Source { return srcMap[s] } ``` It's just indexing a map, you don't even need a function for that. If you would want to return an `error`, it could look like this: ``` func NewSource(s string) (Source, error) { if src, ok := srcMap[s]; ok { return src, nil } return UnknownSrc, errors.New("Invalid Source name!") } ```
How can I get a friend to stop thinking global and start thinking relative? I have a friend who is struggling to learn OO programming (in a Computer Science class), mainly, I believe, due to the fact that he does not understand the relativity/difference between "type" and the actual "instance." It appears to be an issue in understanding how data is stored. He knows variables, and control structures, and quite a few algorithms, but I think understanding scope and the way objects are defined in Java is confusing to him. The most confusing part is the "template" part, where the methods and variables are defined, but are "owned" by each object. As this post kind of shows, I'm horrible at explaining this concept, even when I know I'm explaining it to people. So what I want to know is how I can **clearly explain to him instances and classes (as a type, versus the actual data).** He knows the basic concept, I think, but does not fully understand the difference between type and data, and how the data is passed around. --- More info: Here's the sort of confusion he is experiencing. We're working with overriding a Critter object in GridWorld (The horribly designed AP test object thing). In any event, we are supposed to redefine some methods. He'll end up trying to use methods as properties (quite possibly a typo, but also because stuff like getLocation() may be confusing), or he'll try to use methods on the wrong variable, or not use the right type of variable. The difference between "Integer nameHere" and "nameHere Integer" is confusing, I think, to him. He may try to do Integer.equals() when he should do nameHere.equals(). I show him the API, but I think it's just plain confusing. How do you explain the difference between type and variable, but not only that, but show how a variable can have other variables in it, how when you write ``` public getSomething() { return something; } ``` You are getting specific data from the class you instantiated, oh, and by the way, it's different for every single object? How can I get him to stop thinking "global" and start thinking "relative"? How do you explain "relative"?
I like to use TVs to explain OOP concepts. There are "TVs" as a class, but then there's your TV, in your house. It can do some things that ALL TVs can do, like show a picture, turn on and off, etc. But it also has a locally defined scope because it is one instance of the class of TVs in the world. If you enter channel 32 on a TV in NYC you get one thing. If you do it in Bangalore, you get something else. Entering a channel is like calling the TV's `setChannel(int channel)` method. It's not surprising to users of such real world objects that they work this way. My advice for this particular problem is to stop using abstract and technical language to describe the basic concepts. There are plenty of things in computer science that can't be explained using material objects in the real world, but this isn't one of them.
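If analogies alone don't land, a tiny Java sketch of the same TV idea may help, since his class work is in Java; the names here are made up for the example. The class is written once (the template), but each instance carries its own copy of the data:

```java
public class TV {
    // every TV object gets its own copy of this field
    private int channel;

    public void setChannel(int channel) {
        this.channel = channel;   // changes *this* TV only
    }

    public int getChannel() {
        return channel;           // returns *this* TV's data
    }

    public static void main(String[] args) {
        TV livingRoomTv = new TV();   // one instance
        TV bedroomTv = new TV();      // a completely separate instance

        livingRoomTv.setChannel(32);
        bedroomTv.setChannel(7);

        System.out.println(livingRoomTv.getChannel()); // 32
        System.out.println(bedroomTv.getChannel());    // 7
    }
}
```

That is exactly the `getSomething()` situation from the question: the method body is shared by the class, but the value it returns belongs to whichever object it was called on.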
SSDs + RAID5 + Database = Fail? I have a database running on a RAID5 array of SSDs. Could this be killing its performance? When I do many inserts in a row, the entire computer starts blocking up. Even things like firefox start glitching. On my much less powerful computer with a traditional hard drive, the inserts w/ the same database version, schema, and data run smoothly. Could this be the reason, or might it be some other factor?
Hmm.

1) RAID5 might not be the best bet for a database. You might try mirroring + striping (RAID10) for redundancy and better small-write performance; RAID5 pays a read-modify-write penalty on small random writes, which is exactly what a burst of inserts produces.

2) With the SSDs you could be bottlenecking something other than the drives, whereas with rotational media the drives themselves would be the bottleneck. That shift could be responsible for the performance difference. Network bottlenecks, for example, can affect the strangest things, especially when you add NFS or the like. If the array sits behind a RAID card, the card itself may be the component that is flaking out or limiting throughput.

3) You might want to check the I/O scheduler, to make sure it matches between the two machines, as it can affect performance.

4) Is the SSD database system performing database queries on par with the other, rotational-media system, or better, or worse?

5) To get a good idea of where the bottleneck is happening, you might try `dstat` or `iostat`.
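To compare the two boxes concretely, something along these lines shows the active scheduler and live per-device load while the inserts run. This assumes Linux and that the data lives on `sda`; adjust the device name for your system:

```
# Which I/O scheduler is active for the device (the bracketed entry)
cat /sys/block/sda/queue/scheduler

# Extended per-device stats (throughput, await, %util) every 2 seconds
iostat -x 2

# Combined CPU / disk / network view
dstat -cdn 2
```

If `%util` sits near 100% on the SSD array during the inserts, the array really is the bottleneck; if it stays low while the machine still stalls, look at the controller, the scheduler, or something else entirely.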
caffe with multi-label images I have a dataset of images that have multiple labels; There are 100 classes in the dataset, and each image has 1 to 5 labels associated with them. I'm following the instruction in the following URL: <https://github.com/BVLC/caffe/issues/550> It says that I need to generate a text file listing the images and its labels as in > > > ``` > /home/my_test_dir/picture-foo.jpg 0 > /home/my_test_dir/picture-foo1.jpg 1 > > ``` > > In my case, since I have multi-label images, does it work to simply add labels as in following? > > > ``` > /home/my_test_dir/picture-foo.jpg 0 2 5 > /home/my_test_dir/picture-foo1.jpg 1 4 > > ``` > > I have a feeling that it's probably not going to be that simple, and if I'm right, in what step and how should I integrate the multi-label-ness of the dataset in the process of setting up Caffe?
I believe Shai's answer is no longer up-to-date. [Caffe supports multi-label/matrix ground truth](https://github.com/BVLC/caffe/issues/1698#issue-53768814) for HDF5 and LMDB formats. The python snippet in [this](https://github.com/BVLC/caffe/issues/1698#issuecomment-70211045) github comment demonstrates how to construct multi-label LMDB ground truth (see [Shai's answer](https://stackoverflow.com/a/32687772/2466336) for HDF5 format). Different from the construction of single-label image datasets, an lmdb is constructed for the images while a second separate lmdb is constructed for the multi-label ground truth data. The snippet deals with spatial multi-label ground truth useful for pixel-wise labeling of images. The order in which data is written to the lmdb is crucial. The order of the ground truth must match the order of the images. Loss layers such as SOFTMAX\_LOSS, EUCLIDEAN\_LOSS, SIGMOID\_CROSS\_ENTROPY\_LOSS also support multi-label data. However, the Accuracy layer is still limited to single-label data. You might want to follow [this github issue](https://github.com/BVLC/caffe/issues/2188) to keep track of when this feature is added to Caffe.
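For reference, a minimal sketch of building such a label LMDB with pycaffe's protobuf bindings and the `lmdb` module. The shapes, file names and the `labels_per_image` list are placeholders; the essential points are only that each label entry is a 1x1xN blob (multi-hot over your 100 classes) and that the keys keep exactly the same order as the image LMDB:

```python
import lmdb
import numpy as np
from caffe.proto import caffe_pb2

labels_per_image = [[0, 2, 5], [1, 4]]   # placeholder: one list of class ids per image
num_classes = 100

env = lmdb.open('train_label_lmdb', map_size=int(1e9))
with env.begin(write=True) as txn:
    for i, labels in enumerate(labels_per_image):
        vec = np.zeros(num_classes, dtype=np.float32)
        vec[labels] = 1.0                          # multi-hot encoding

        datum = caffe_pb2.Datum()
        datum.channels, datum.height, datum.width = num_classes, 1, 1
        datum.float_data.extend(vec.tolist())
        # key order must match the image LMDB exactly
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())
env.close()
```

A 100-way sigmoid output layer trained with SIGMOID_CROSS_ENTROPY_LOSS can then be compared directly against this multi-hot vector.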
What is the usage of GL\_BLEND?
I'm studying OpenGL and I came across GL_BLEND. It's kind of confusing to understand its practical usage, so if somebody has experience using it, could you explain it to me?
You know how "Layers" work in Photoshop (or similar image editing programs) and what the "merge layers" function does? It's the same principle: There's a "bottom" layer (the destination) and a "top" layer (the source) that has the pixels of a single *primitive* (triangle, line, point). For every triangle, line or point drawn the "bottom" layer of what's currently in the framebuffer is merged with the newly incoming layer of that single triangle, line or point. The exact mode of composition is controlled through the blending function, set with `glBlendFunc`. Each single primitive (triangle, line, point) drawn "adds" a new layer and immediately merges that with the bottom layer. The practical application is everything you'd do with layers in Photoshop. For example you may have a stock photo of a window, where the glass is translucent. The same works in OpenGL where you can draw geometry where parts of it are rendered translucent and blend with what's been drawn before.
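In code, the usual way to get that "merge layers" behaviour for ordinary translucency (for example the window glass mentioned above) looks like this; these are standard OpenGL calls, and the convention is to draw opaque geometry first, then the translucent geometry:

```c
/* Enable blending and choose the composition function:
   result = src.rgb * src.a + dst.rgb * (1 - src.a) */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

/* ... draw the translucent primitives here; each one is merged
   with whatever is already in the framebuffer ... */

glDisable(GL_BLEND);
```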
When did Unix stop storing passwords in clear text? When did Unix move away from storing clear text passwords in passwd? Also, when was the shadow file introduced?
For the early history of Unix password storage, read Robert Morris and Ken Thompson's [*Password Security: A Case History*](https://www.bell-labs.com/usr/dmr/www/passwd.ps). They explain why and how early Unix systems acquired most the features that are still seen today as the important features of password storage (but done better). - The first Unix systems stored passwords in plaintext. Unix Third Edition introduced the [`crypt`](https://minnie.tuhs.org/cgi-bin/utree.pl?file=V3/man/man3/crypt.3) function which hashes the password. It's described as “encryption” rather than “hashing” because modern cryptographic terminology wasn't established yet and it used an encryption algorithm, albeit in an unconventional way. Rather than encrypt the password with a key, which would be trivial to undo when you have the key (which would have to be stored on the system), they use the password as the key. - When Unix switched from an earlier cipher to the then-modern [DES](https://en.wikipedia.org/wiki/Data_Encryption_Standard), it was also made slower by iterating DES multiple times. I don't know exactly when that happened: V6? V7? - Merely hashing the password is vulnerable to multi-target attacks: hash all the most common passwords once and for all, and look in the password table for a match. Including a salt in the hashing mechanism, where each account has a unique salt, defeats this precomputation. Unix acquired a salt in [Seventh Edition in 1979](https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/man/man3/crypt.3). - Unix also acquired password complexity rules such as a minimum length in the 1970s. Originally the password hash was in the publicly-readable file `/etc/passwd`. Putting the hash in a separate file [`/etc/shadow`](https://en.wikipedia.org/wiki/Passwd#Shadow_file) that only the system (and the system administrator) could access was one of the many innovations to come from Sun, dating from around SunOS 4 in the mid-1980s. It spread out gradually to other Unix variants (partly via the third party [shadow suite](https://en.wikipedia.org/wiki/Passwd#History) whose descendent is still used on Linux today) and wasn't available everywhere until the mid-1990s or so. Over the years, there have been improvements to the hashing algorithm. The biggest jump was [Poul-Henning Kamp's MD5-based algorithm](http://phk.freebsd.dk/pubs/ieee.software.pdf) in 1994, which replaced the DES-based algorithm by one with a better design. It removed the limitation to 8 password characters and 2 salt characters and had increased slowness. See [IEEE's *Developing with open source software*, Jan–Feb. 2004, p. 7–8](http://phk.freebsd.dk/pubs/ieee.software.pdf). The SHA-2-based algorithms that are the de facto standard today are based on the same principle, but with slightly better internal design and, most importantly, a configurable slowness factor.
Advantage of metropolis hastings or MonteCarlo methods over a simple grid search? I have a relatively simple function with three unknown input parameters for which I only know the upper and lower bounds. I also know what the output Y should be for all of my data. So far I have done a simple grid search in python, looping through all of the possible parameter combinations and returning those results where the error between Y predicted and Y observed is within a set limit. I then look at the results to see which set of parameters performs best for each group of samples, look at the trade-off between parameters, see how outliers effect the data etc.. So really my questions is - whilst the grid search method I'm using is a bit cumbersome, what advantages would there be in using Monte Carlo methods such as metropolis hastings instead? I am currently researching into MCMC methods, but don’t have any practical experience in using them and, in this instance, can’t quite see what might be gained. I’d greatly appreciate any comments or suggestions Many Thanks
MCMC methods tend to be useful when the underlying function is complex (sometimes too complicated to directly compute) and/or in high-dimensional spaces. They are often used when nothing else is feasible or works well. Since you have a simple, low-dimensional problem, I wouldn't expect MCMC approaches to be especially helpful for you. If you can perform the grid search at a sufficiently-fine scale in a small enough amount of time for your problem domain, it's likely a good approach. If your function is convex, there are many [well-known approaches](https://en.wikipedia.org/wiki/Convex_optimization) such as gradient descent. If your function has a simple functional form that can easily be solved but you have large amounts of data with gross outliers, [RANSAC](https://en.wikipedia.org/wiki/RANSAC) can be helpful. If your function has many local minima at unknown locations, [simulated annealing](https://en.wikipedia.org/wiki/Simulated_annealing) can work well.
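For completeness, here is roughly what a random-walk Metropolis-Hastings sampler would look like for a three-parameter, box-bounded problem like yours. It is only a sketch: it assumes a Gaussian error model between predicted and observed Y, that `model` is your forward function, that `lower` and `upper` are NumPy arrays holding the parameter bounds, and that `sigma` and `step` would need tuning for your data:

```python
import numpy as np

def log_likelihood(theta, model, y_obs, sigma=1.0):
    # Gaussian error model: negative half the scaled sum of squared residuals
    resid = y_obs - model(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis_hastings(model, y_obs, lower, upper, n_steps=10000, step=0.05):
    rng = np.random.default_rng(0)
    theta = (lower + upper) / 2.0              # start in the middle of the box
    logp = log_likelihood(theta, model, y_obs)
    samples = []
    for _ in range(n_steps):
        proposal = theta + rng.normal(scale=step * (upper - lower))
        if np.all(proposal >= lower) and np.all(proposal <= upper):
            logp_new = log_likelihood(proposal, model, y_obs)
            # accept with probability min(1, likelihood ratio)
            if np.log(rng.uniform()) < logp_new - logp:
                theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)
```

The payoff over an exhaustive grid only really appears as the number of parameters grows or the function becomes expensive to evaluate; for three cheap parameters, a sufficiently fine grid remains a perfectly reasonable choice.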
Convert yyyy-mm-dd to yyyy-ww in Python I'm trying to to convert `yyyy-mm-dd` into `yyyy-ww`. How is this achieved for the following dataframe: ``` dates = {'date': ['2015-02-04','2016-03-05']} df = pd.DataFrame(dates, columns=['date']) print(df) 0 2015-02-04 1 2016-03-05 dtype: datetime64[ns] ``` I've tried using ``` YW = pd.to_datetime(df, format='%Y%W') ``` However without luck.
Use [`to_datetime`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html), which can also work with columns called `year`, `month` and `day`, and add [`Series.dt.strftime`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html) for the custom format:

```
YW = pd.to_datetime(df).dt.strftime('%Y%W')
print (YW)
0    201505
1    201609
dtype: object
```

If the DataFrame has other columns as well, filter only the necessary ones by passing a list:

```
YW = pd.to_datetime(df[['year','month','day']]).dt.strftime('%Y%W')
```

EDIT: since your DataFrame has a single `date` column, select that column first:

```
YW = pd.to_datetime(df['date']).dt.strftime('%Y%W')
print (YW)
0    201505
1    201609
Name: date, dtype: object
```
Calculate height of appwidget I cannot figure out or find a solution by googling for this problem. I have an android app with an appwidget, looks like <http://www.livescorewidget.eu/img/screendumps/widget.png>, and I am adding the data rows on the fly. Due to different devices the height of the widget is different and therefore a different amount of space is available for my rows. I want to know how many dips or pixels my widget uses so I can calculate how many rows there is room for. Is it possible? Even better could be if you could calculate how much height available in the layout for the rows. Thanks
From Jelly Bean onwards you can get a `Bundle` object containing widget dimensions using the `getAppWidgetOptions()` method in `AppWidgetManager`: ``` Bundle options = appWidgetManager.getAppWidgetOptions(widgetId); int minWidth = options.getInt(AppWidgetManager.OPTION_APPWIDGET_MIN_WIDTH); int maxWidth = options.getInt(AppWidgetManager.OPTION_APPWIDGET_MAX_WIDTH); int minHeight = options.getInt(AppWidgetManager.OPTION_APPWIDGET_MIN_HEIGHT); int maxHeight = options.getInt(AppWidgetManager.OPTION_APPWIDGET_MAX_HEIGHT); ``` `minWidth` and `maxHeight` are the dimensions of your widget when the device is in portrait orientation, `maxWidth` and `minHeight` are the dimensions when the device is in landscape orientation. All in dp.
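If you also want to react when the user resizes the widget (and recompute how many rows fit), you can override `onAppWidgetOptionsChanged()` in your `AppWidgetProvider`; it receives the same options bundle. A rough sketch, where the 40dp row height and `buildRemoteViews()` are placeholders for your own layout and update code:

```java
@Override
public void onAppWidgetOptionsChanged(Context context, AppWidgetManager appWidgetManager,
                                      int appWidgetId, Bundle newOptions) {
    // Height in dp available to the widget in portrait orientation
    int maxHeightDp = newOptions.getInt(AppWidgetManager.OPTION_APPWIDGET_MAX_HEIGHT);

    // Rough estimate: ~40dp per data row, minus some space for the header
    int rowsThatFit = Math.max(0, (maxHeightDp - 40) / 40);

    appWidgetManager.updateAppWidget(appWidgetId,
            buildRemoteViews(context, rowsThatFit)); // your own RemoteViews builder
    super.onAppWidgetOptionsChanged(context, appWidgetManager, appWidgetId, newOptions);
}
```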
What is this expression in Haskell, and how do I interpret it? I'm learning basic Haskell so I can configure Xmonad, and I ran into this code snippet: ``` newKeys x = myKeys x `M.union` keys def x ``` Now I understand what the `M.union` in backticks is and means. Here's how I'm interpreting it: ``` newKeys(x) = M.union(myKeys(x),???) ``` I don't know what to make of the `keys def x`. Is it like `keys(def(x))`? Or `keys(def,x)`? Or is `def` some sort of other keyword?
It's `keys(def,x)`. This is basic Haskell syntax for function application: first the function itself, then its arguments separated by spaces. For example: ``` f x y = x + y z = f 5 6 -- z = 11 ``` However, it is not clear what `def` is without larger context. In response to your comment: no, `def` couldn't be a function that takes `x` as argument, and then the result of that is passed to `keys`. This is because function application is left-associative, which basically means that in any bunch of things separated by spaces, only the first one is the function being applied, and the rest are its arguments. In order to express `keys(def(x))`, one would have to write `keys (def x)`. If you want to be super technical, then the right way to think about it is that *all functions have exactly one parameter*. When we declare a function of two parameters, e.g. `f x y = x + y`, what we really mean is that it's a function of one parameter, which returns another function, to which we can then pass the remaining parameter. In other words, `f 5 6` means `(f 5) 6`. This idea is kind of one of the core things in Haskell (and any ML offshoot) syntax. It's so important that it has its own name - "currying" (after Haskell Curry, the mathematician).
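A tiny example of that currying behaviour, with made-up names, which you can paste into GHCi:

```haskell
keys :: Int -> Int -> [Int]
keys def x = [def, x, def + x]

-- `keys def x` parses as `(keys def) x`:
partial :: Int -> [Int]
partial = keys 10      -- supply only the first argument

result :: [Int]
result = partial 5     -- [10, 5, 15], the same as `keys 10 5`
```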
Is it possible to launch an android application activity when the phone starts?
I'm attempting to build an Android application, and one of the key features of this application is for it to be able to launch an activity automatically when the phone starts. I see some apps on my phone that already do this. Any help would be great so that I can at least research this a little better through the SDK. Thanks!
You need to implement a `BroadcastReceiver` like this:

```
public class PhoneStateReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(final Context context, Intent intent) {
        if (intent.getAction().equals(Intent.ACTION_BOOT_COMPLETED)) {
            Intent launch = new Intent(context, ActivityToLaunch.class);
            launch.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
            context.startActivity(launch);
        }
    }
}
```

In your manifest add this:

```
<receiver android:name=".receiver.PhoneStateReceiver">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED" />
    </intent-filter>
</receiver>
```

Add permission:

```
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
```
Making a simple tooltip with only HTML and CSS I want my tooltip element (the `<span>`) to appear **above everything on the page** but **still relative to its parent element** (the `<td>`). I'm trying with JS but would like a no-script solution. JS to show/hide the `<span>`: ``` window.AETid = "AE"; function tooltip(tid) { document.getElementById(tid).style.display="block"; } function hideTooltip(tid) { document.getElementById(tid).style.display="none"; } ``` HTML: ``` <td class="ht" onmouseover="tooltip(window.AETid);" onmouseout="hideTooltip(window.AETid);"> Send reminders? <span class="tooltip" id="AE">If this option is selected, e-mail reminders will be sent to the client before each appointment.</span> </td> ``` CSS for `.tooltip`: ``` .ht { position: relative; } .tooltip { color: #ff0000; display: none; z-index: 100; position: absolute; right:0px; bottom: 0px; } ``` Currently, the tooltip appears as expected when I hover over the `<td>`, but it appears within the element, thus changing the size of the `<td>` and thus the `<tr>` and thus the whole dang `<table>`. I want the tooltip to appear, well, like tooltips do: above and not effecting the rest of the page. z-index doesn't seem to do it alone in my case... Using `position: fixed` instead of `absolute` on the tooltip `<span>` kept the element from interrupting the DOM, but literally positioned it after everything else on the page (at the bottom) All help is greatly appreciated
I found a method to make a very lightweight tooltip with no JS! ``` .ht:hover .tooltip { display:block; } .tooltip { display: none; color: red; margin-left: 28px; /* moves the tooltip to the right */ margin-top: 15px; /* moves it down */ position: absolute; z-index: 1000; } ``` ``` <table> <td class="ht">Send reminders? <span class="tooltip">this is the tooltip alshdgwh gahfguo wfhg fghwoug wugw hgrwuog hwaur guoarwhg rwu</span> </td> </table> ``` Totally awesome and props to [this guy](https://stackoverflow.com/questions/18359193/plain-javascript-tooltip/18359711#18359711)!
How to make PDO query work inside a function
I am trying to make a PDO SQL query work inside a function but it doesn't work; I get no response from it. It works when not using a function. My purpose is to make my code smaller. Can anyone shed some light on this? Thanks.

```
function Test()
{
    $get_name = $smt->prepare("SELECT * FROM customer WHERE id = '1'");
    $get_name->execute();
    foreach ($get_name as $temp) {
        $name = $temp['name'];
        $address = $temp['address'];
        $phone = $temp['phone'];
        $page = $temp['page'];
    }
    eval("\$page = \"$page\";");
    echo $page;
    eval("\$page = \"$page\";");
    echo $page;
}

Test();
```
I'd probably refactor your code to something like:

```
function getCustomerInfo(PDO $pdo, $customerId)
{
    // use a prepared statement that can get you info on any customer
    $statement = $pdo->prepare(
        "SELECT * FROM customer WHERE id = :customerId LIMIT 1");

    // run the query with the bound parameter
    $statement->execute(array(
        ':customerId' => $customerId
    ));

    // fetch the first row in the result as an associative array
    // and return it to the caller.
    return $statement->fetch(PDO::FETCH_ASSOC);
}

// use your connection in place of $pdo
$customerData = getCustomerInfo($pdo, 1);

// now you can do stuff with your data
var_dump($customerData);
```

This is better because it does not rely on global state (functions should never-ever-ever do that), and it uses prepared, parameterized SQL, which makes it safer and the function more useful for customers other than the one where id=1.
pyspark generate row hash of specific columns and add it as a new column I am working with spark 2.2.0 and pyspark2. I have created a DataFrame `df` and now trying to add a new column `"rowhash"` that is the sha2 hash of specific columns in the DataFrame. For example, say that `df` has the columns: `(column1, column2, ..., column10)` I require `sha2((column2||column3||column4||...... column8), 256)` in a new column `"rowhash"`. For now, I tried using below methods: 1) Used `hash()` function but since it gives an integer output it is of not much use 2) Tried using `sha2()` function but it is failing. Say `columnarray` has array of columns I need. ``` def concat(columnarray): concat_str = '' for val in columnarray: concat_str = concat_str + '||' + str(val) concat_str = concat_str[2:] return concat_str ``` and then ``` df1 = df1.withColumn("row_sha2", sha2(concat(columnarray),256)) ``` This is failing with "cannot resolve" error. Thanks gaw for your answer. Since I have to hash only specific columns, I created a list of those column names (in hash\_col) and changed your function as : ``` def sha_concat(row, columnarray): row_dict = row.asDict() #transform row to a dict concat_str = '' for v in columnarray: concat_str = concat_str + '||' + str(row_dict.get(v)) concat_str = concat_str[2:] #preserve concatenated value for testing (this can be removed later) row_dict["sha_values"] = concat_str row_dict["sha_hash"] = hashlib.sha256(concat_str).hexdigest() return Row(**row_dict) ``` Then passed as : ``` df1.rdd.map(lambda row: sha_concat(row,hash_col)).toDF().show(truncate=False) ``` It is now however failing with error: ``` UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 8: ordinal not in range(128) ``` I can see value of \ufffd in one of the column so I am unsure if there is a way to handle this ?
You can use [`pyspark.sql.functions.concat_ws()`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.concat_ws) to concatenate your columns and [`pyspark.sql.functions.sha2()`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.sha2) to get the SHA256 hash. Using the data from @gaw: ``` from pyspark.sql.functions import sha2, concat_ws df = spark.createDataFrame( [(1,"2",5,1),(3,"4",7,8)], ("col1","col2","col3","col4") ) df.withColumn("row_sha2", sha2(concat_ws("||", *df.columns), 256)).show(truncate=False) #+----+----+----+----+----------------------------------------------------------------+ #|col1|col2|col3|col4|row_sha2 | #+----+----+----+----+----------------------------------------------------------------+ #|1 |2 |5 |1 |1b0ae4beb8ce031cf585e9bb79df7d32c3b93c8c73c27d8f2c2ddc2de9c8edcd| #|3 |4 |7 |8 |57f057bdc4178b69b1b6ab9d78eabee47133790cba8cf503ac1658fa7a496db1| #+----+----+----+----+----------------------------------------------------------------+ ``` You can pass in either `0` or `256` as the second argument to `sha2()`, as per the docs: > > Returns the hex string result of SHA-2 family of hash functions (SHA-224, SHA-256, SHA-384, and SHA-512). The numBits indicates the desired bit length of the result, which must have a value of 224, 256, 384, 512, or 0 (which is equivalent to 256). > > > The function `concat_ws` takes in a separator, and a list of columns to join. I am passing in `||` as the separator and `df.columns` as the list of columns. I am using all of the columns here, but you can specify whatever subset of columns you'd like- in your case that would be `columnarray`. (You need to use the `*` to unpack the list.)
Dynamic row range when calculating moving sum/average using window functions (SQL Server) I'm currently working on a sample script which allows me to calculate the sum of the previous two rows and the current row. However, I would like to make the number '2' as a variable. I've tried declaring a variable, or directly casting in the query, yet a syntax error always pops up. Is there a possible solution? ``` DECLARE @myTable TABLE (myValue INT) INSERT INTO @myTable ( myValue ) VALUES ( 5) INSERT INTO @myTable ( myValue ) VALUES ( 6) INSERT INTO @myTable ( myValue ) VALUES ( 7) INSERT INTO @myTable ( myValue ) VALUES ( 8) INSERT INTO @myTable ( myValue ) VALUES ( 9) INSERT INTO @myTable ( myValue ) VALUES ( 10) SELECT SUM(myValue) OVER (ORDER BY myValue ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) FROM @myTable ```
The window frame size can't be a variable directly, but you can build the statement dynamically. Note that dynamic SQL runs in its own scope, so the data has to live in a temporary table (`#temp`) rather than in a table variable:

```
DECLARE @test VARCHAR(10) = '2'
DECLARE @sqlCommand VARCHAR(1000)

CREATE TABLE #temp (myValue INT)

INSERT INTO #temp ( myValue ) VALUES ( 5)
INSERT INTO #temp ( myValue ) VALUES ( 6)
INSERT INTO #temp ( myValue ) VALUES ( 7)
INSERT INTO #temp ( myValue ) VALUES ( 8)
INSERT INTO #temp ( myValue ) VALUES ( 9)
INSERT INTO #temp ( myValue ) VALUES ( 10)

SET @sqlCommand = 'SELECT SUM(myValue) OVER (ORDER BY myValue ROWS BETWEEN '
    + @test + ' PRECEDING AND CURRENT ROW) FROM #temp'

EXEC (@sqlCommand)

DROP TABLE #temp
```
How to sort the output of find? I'm using the find command to list files with their name containing a string: `find ~/ -type f -name "*inductive*"` I would like to use a pipe to sort the resulting list of files. I would like to be able to sort by file size, date created, date accessed ... How can I do this? Thanks.
Sorting the output by creation time is impossible in Linux (`ctime` is not file creation date). `stat` has the ability to show a file's *birth time* using the `%w` and `%W` format tags, but they always show `-` and `0`, respectively, even on filesystems that store creation time/birth time. Hence, it is practically useless for this purpose on Linux.

The other two sorting orders are possible, though:

```
# Sort by size:
find ~/ -type f -name "*inductive*" -exec ls -ltu {} \; | sort -k 5 -n

# Sort by access time:
find ~/ -type f -name "*inductive*" -exec ls -ltu {} \; | sort -k 6 -M
```

You can add the `-r` flag to `sort` to reverse the sorting order. See `man sort` for more information. Depending on the size of `find`'s output, it may take some time for `sort` to produce sorted output.
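If your `find` is GNU find, a somewhat more robust variant is to print the sort key yourself with `-printf`, so the column positions of `ls` never get in the way; `%s` is the size in bytes, `%A@` the access time as a Unix timestamp and `%p` the path:

```
# Sort by size (bytes), smallest first:
find ~/ -type f -name "*inductive*" -printf '%s %p\n' | sort -n

# Sort by access time, oldest first:
find ~/ -type f -name "*inductive*" -printf '%A@ %p\n' | sort -n
```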
E: unable to locate package pip I have been trying to set up Python-android environment, and kept getting this error message: ``` ~$ sudo apt-get install build-essential patch git-core ccache ant pip python-devsudo: /var/lib/sudo/plaix writable by non-owner (040777), should be mode 0700 [sudo] password for plaix: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package pip ```
In Ubuntu, [`pip`](http://manpages.ubuntu.com/manpages/precise/en/man1/pip.1.html) is provided by the [`python-pip`](https://packages.ubuntu.com/search?keywords=python-pip&searchon=names&suite=all&section=all) package. You can install it in the Software Center, or, if you prefer to use the command line: ``` sudo apt-get update && sudo apt-get install python-pip ``` If you have not already installed [`python-dev`](https://packages.ubuntu.com/search?suite=all&section=all&arch=any&keywords=python-dev&searchon=names) and [`build-essential`](https://packages.ubuntu.com/search?suite=all&section=all&arch=any&keywords=build-essential&searchon=names), you should install them too. (But it seems your `apt-get` command *might* have successfully installed them. If you're not sure, you can check by trying to install them again. Or with `apt-cache policy python-dev build-essential`.) Once the necessary software is installed, if you wish to update it further, you can do so with `pip` itself, by running: ``` sudo pip install --upgrade pip sudo pip install --upgrade virtualenv ``` ***Source:*** [How to install pip on Ubuntu](http://www.saltycrane.com/blog/2010/02/how-install-pip-ubuntu/) by [Eliot](http://www.saltycrane.com/about/) (dated, but should still apply).
What's the idea behind naming classes with "Info" suffix, for example: "SomeClass" and "SomeClassInfo"? I'm working in a project which deals with physical devices, and I've been confused as how to properly name some classes in this project. Considering the actual devices (sensors and receivers) are one thing, and their *representation* in software is another, I am thinking about naming some classes with the "Info" suffix name pattern. For example, while a `Sensor` would be a class to represent the actual sensor (when it is actually connected to some working device), `SensorInfo` would be used to represent only the characteristics of such sensor. For example, upon file save, I would serialize a `SensorInfo` to the file header, instead of serializing a `Sensor`, which sort of wouldn't even make sense. But now I am confused, because there is a middleground on objects' lifecycle where I cannot decide if I should use one or another, or how to get one from another, or even whether both variants should actually be collapsed to only one class. Also, the all too common example `Employee` class obviously is just a representation of the real person, but nobody would suggest to name the class `EmployeeInfo` instead, as far as I know. The language I am working with is .NET, and this naming pattern seems to be common throughout the framework, for exemple with these classes: - `Directory` and `DirectoryInfo` classes; - `File` and `FileInfo` classes; - `ConnectionInfo`class (with no correspondent `Connection` class); - `DeviceInfo` class (with no correspondent `Device` class); So my question is: is there a common rationale about using this naming pattern? Are there cases where it makes sense to have pairs of names (`Thing` and `ThingInfo`) and other cases where there should only exist the `ThingInfo` class, or the `Thing` class, without its counterpart?
I think "info" is a misnomer. Objects have state and actions: "info" is just another name for "state" which is already baked into OOP. What are you *really* trying to model here? You need an object that represents the hardware in software so other code can use it. That is easy to say but as you found out, there is more to it than that. "Representing hardware" is surprisingly broad. An object that does that has several concerns: - Low-level device communication, whether it be talking to the USB interface, a serial port, TCP/IP, or proprietary connection. - Managing state. Is the device turned on? Ready to talk to software? Busy? - Handling events. The device produced data: now we need to generate events to pass to other classes that are interested. Certain devices such as sensors will have fewer concerns than say a printer/scanner/fax multifunction device. A sensor likely just produces a bit stream, while a complex device may have complex protocols and interactions. Anyway, back to your specific question, there are several ways to do this depending on your specific requirements as well as the complexity of the hardware interaction. Here is an example of how I would design the class hierarchy for a temperature sensor: - ITemperatureSource: interface that represents anything that can produce temperature data: a sensor, could even be a file wrapper or hard-coded data (think: mock testing). - Acme4680Sensor: ACME model 4680 sensor (great for detecting when the Roadrunner is nearby). This may implement multiple interfaces: perhaps this sensor detects both temperature and humidity. This object contains program-level state such as "is the sensor connected?" and "what was the last reading?" - Acme4680SensorComm: used *solely* for communicating with the physical device. It does not maintain much state. It is used for sending and receiving messages. It has a C# method for each of the messages the hardware understands. - HardwareManager: used for getting devices. This is essentially a factory that caches instances: there should only be one instance of a device object for each hardware device. It has to be smart enough to know that if thread A requests the ACME temperature sensor and thread B requests the ACME humidity sensor, these are actually the same object and should be returned to both threads. --- At the top level you will have interfaces for each hardware type. They describe actions your C# code would take on the devices, using C# data types (not e.g. byte arrays which the raw device driver might use). At the same level you have an enumeration class with one instance for each hardware type. Temperature sensor might be one type, humidity sensor another. One level below this are the actual classes that implement those interfaces: they represent one device similar the Acme4680Sensor I described above. Any particular class may implement multiple interfaces if the device can perform multiple functions. Each device class has its own private Comm (communication) class that handles the low-level task of talking to the hardware. Outside of the hardware module, the only layer that is visible is the interfaces/enum plus the HardwareManager. The HardwareManager class is the factory abstraction that handles the instantiation of device classes, caching instances (you *really* do not want two device classes talking to the same hardware device), etc. 
A class that needs a particular type of sensor asks the HardwareManager for the device matching a particular enum value; the manager then works out whether that device is already instantiated and, if not, how to create and initialize it.

![UML class diagram example showing the design described in this answer](https://i.stack.imgur.com/HeNOR.png)

Note: there are associations between the HardwareManager and each device class that I did not draw because the diagram would have turned into arrow soup.
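A minimal C# sketch of the shape described above; all names and members are illustrative only, not a complete implementation:

```csharp
using System.Collections.Generic;

public enum DeviceType { TemperatureSensor, HumiditySensor }

// What consumers of temperature data depend on
public interface ITemperatureSource
{
    bool IsConnected { get; }
    double ReadTemperatureCelsius();
}

// Low-level communication only; no business state lives here
public class Acme4680SensorComm
{
    public double RequestTemperatureReading() => 21.5; // placeholder for real device I/O
}

// One concrete device; talks to the hardware through its private comm object
public class Acme4680Sensor : ITemperatureSource
{
    private readonly Acme4680SensorComm _comm = new Acme4680SensorComm();

    public bool IsConnected { get; private set; } = true;

    public double ReadTemperatureCelsius() => _comm.RequestTemperatureReading();
}

// Hands out exactly one shared instance per physical device
public class HardwareManager
{
    private readonly Dictionary<DeviceType, object> _devices = new Dictionary<DeviceType, object>();

    public T GetDevice<T>(DeviceType type) where T : class
    {
        if (!_devices.TryGetValue(type, out object device))
        {
            device = new Acme4680Sensor();   // real code would pick the class based on the enum
            _devices[type] = device;
        }
        return (T)device;
    }
}
```

Business code then asks the HardwareManager for an `ITemperatureSource` and never mentions `Acme4680Sensor` by name, which is exactly the decoupling described above.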
Is context.getSystemService() an expensive call? Is `context.getSystemService()` an expensive call? I.e. I have build a little http networking library (I know there are other http networking libraries available) that uses `ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);` to check (before executing a http request) if the user is connected with the internet (kind of fail fast strategy). My question is should I save the `ConnectivityManager` as an instance variable (class field) of my http library or should I call `ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);` every time before I start an http request to retrieve a "new" ConnectivityManager? Is the same ConnectivityManager instance returned every time I call `getSystemService(Context.CONNECTIVITY_SERVICE)` (in other words, can storing a ConnectivityManger into a class field lead to problems since my http library is a long living one --> lives as long as application run)
> > My question is should I save the ConnectivityManager as an instance variable (class field) of my http library or should I call ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY\_SERVICE); every time before I start an http request to retrieve a "new" ConnectivityManager? > > > I would hold onto the instance. While `getSystemService()` does not in practice seem to be that expensive to call, why call it more often than needed? > > in other words, can storing a ConnectivityManger into a class field lead to problems since my http library is a long living one --> lives as long as application run > > > To be on the safe side, call `getSystemService()` on the `Application` singleton (`getApplicationContext()`). *Usually*, the object returned by `getSystemService()` knows nothing about the `Context` that created it. Occasionally, it does — `CameraManager` in Android 5.0 suffered from this flaw, though that was fixed in Android 5.1. If the system service object is going to outlive the context that I am in, I tend to use `getApplicationContext()` to retrieve the system service, out of paranoia. (the memory leaks, they're out to get me!) > > Is the same ConnectivityManager instance returned every time I call getSystemService(Context.CONNECTIVITY\_SERVICE) > > > To be honest, I have never looked.
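Concretely, the cautious version inside a long-lived library looks something like this (a sketch; the class and field names are up to you):

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

public class HttpClientLib {
    private final ConnectivityManager connectivityManager;

    public HttpClientLib(Context context) {
        // Use the application context so the long-lived manager can never
        // pin an Activity or other short-lived Context in memory.
        this.connectivityManager = (ConnectivityManager) context
                .getApplicationContext()
                .getSystemService(Context.CONNECTIVITY_SERVICE);
    }

    private boolean isConnected() {
        NetworkInfo info = connectivityManager.getActiveNetworkInfo();
        return info != null && info.isConnected();
    }
}
```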
"There is already an open DataReader..." Reuse or Dispose DB Connections? Please Help.... When I select data from Mysql table its showing "There is already an open DataReader associated with this Connection which must be closed first. vb.net" ![Error showing..](https://i.stack.imgur.com/g056H.jpg) ``` Private Sub cmbJobCategoryVisa_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmbJobCategoryVisa.SelectedIndexChanged ''" Dim MyCommand As New MySqlCommand("SELECT jobcategorycode FROM jobcategory WHERE jobcategory='" & Me.cmbJobCategoryVisa.SelectedItem & "'", MyConnection) Dim MyReader As MySqlDataReader = MyCommand.ExecuteReader While MyReader.Read If MyReader.HasRows = True Then Me.txtJobCategoryCodeVisa.Text = MyReader("jobcategorycode") End If End While MyReader.Close() MyCommand.Dispose() End Sub ``` '''at the time of the below code execution,,, the imaged error is showing ``` Private Sub txtEmpNo_Validating(ByVal sender As Object, ByVal e As System.ComponentModel.CancelEventArgs) Handles txtEmpNo.Validating Dim MyCommand5 As New MySqlCommand("SELECT * FROM employeesmaster WHERE empno='" & Me.txtEmpNo.Text & "'", MyConnection) Dim MyDataReader5 As MySqlDataReader = MyCommand5.ExecuteReader If MyDataReader5.HasRows = True Then While MyDataReader5.Read Me.txtEmpName.Text = MyDataReader5("name") Me.cmbNationality.Text = MyDataReader5("nationality") Me.cmbJobCategoryVisa.Text = MyDataReader5("jobcategoryvisa") If Not IsDBNull(MyDataReader5("image")) Then Dim ImageData As Byte() = DirectCast(MyDataReader5("image"), Byte()) Dim MemoryStream As New IO.MemoryStream(ImageData) Me.pbxEmpImage.Image = Image.FromStream(MemoryStream) Else Me.pbxEmpImage.Image = Nothing End If End While Else End If MyDataReader5.Close() MyCommand5.Dispose() End Sub ```
It is apparent that you are using a single, global connection, and apparently leaving it open. As has been mentioned, you should not reuse or store your connection. Connections are cheap to create and .NET is optimized for creating them as needed. There are a number of things in your code which are not being closed and disposed. which should be. Disposing not only prevents your app from leaking resources, but **this kind of error cant happen using newly created DB objects for each task.** **Connections** Since there *are* gyrations involved in creating them, you can write a function to create (and maybe open) a new Connection and avoid having to paste the connection string everywhere. Here is a general example using OleDB: ``` Public Function GetConnection(Optional usr As String = "admin", Optional pw As String = "") As OleDbConnection Dim conStr As String conStr = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};User Id={1};Password={2};", dbFile, usr, pw) Return New OleDbConnection(constr) End Function ``` **`Using` blocks** Use it in a Using block so it is disposed of: ``` Using con As OleDb.OleDbConnection = GetConnection() Using cmd As New OleDbCommand(sql.Value, con) con.Open() Using rdr As OleDbDataReader = cmd.ExecuteReader() ' do stuff End Using ' close and dispose of reader End Using ' close and dispose of command End Using ' close, dispose of the Connection objects ``` Each `Using` statement creates a new target object, and disposes it at the end of the block. In general, anything which has a `Dispose` method can and should be used in a `Using` block to assure it is disposed of. This would **include** the `MemoryStream` and `Image` used in your code. `Using` blocks can be "stacked" to specify more than one object and reduce indentation (note the comma after the end of the first line): ``` Using con As OleDb.OleDbConnection = GetConnection(), cmd As New OleDbCommand(sql.Value, con) con.Open() ... End Using ' close and dispose of Connection and Command ``` For more information see: - [Using Statement](https://msdn.microsoft.com/en-us/library/htd05whh.aspx) - [Connection Pooling](https://msdn.microsoft.com/en-us/library/bb399543(v=vs.110).aspx) - [How to: Dispose of a System Resource](https://msdn.microsoft.com/en-us/library/wydd5hkd.aspx) --- `can u pls convert this code to Mysql connection... my connection string is...` For **basic** MySQL connection: ``` ' module level declaration Private MySQLDBase as String = "officeone" Function GetConnection(Optional usr As String = "root", Optional pw As String = "123456") As MySqlConnection Dim conStr As String conStr = String.Format("Server=localhost;Port=3306;Database={0};Uid={1}; Pwd={2};", MySQLDBase, usr, pw) Return New MySqlConnection(constr) End Function ``` *Personally* for MySql, I use a class and a `ConnectionStringBuilder` in the method. There are many, many cool options I use but which differs from project to project like the DB and default app login. The above uses all the defaults.
Fastest way to unpack 32 bits to a 32 byte SIMD vector Having 32 bits stored in a `uint32_t` in memory, what's the fastest way to unpack each bit to a separate byte element of an AVX register? The bits can be in any position within their respective byte. Edit: to clarify, I mean bit 0 goes to byte 0, bit 1 to byte 1. Obviously all other bits within the byte on zero. Best I could at the moment is 2 `PSHUFB` and having a mask register for each position. If the `uint32_t` is a bitmap, then the corresponding vector elements should be 0 or non-0. (i.e. so we could get a vector mask with a `vpcmpeqb` against a vector of all-zero). <https://software.intel.com/en-us/forums/topic/283382>
To "broadcast" the 32 bits of a 32-bit integer `x` to 32 bytes of a 256-bit YMM register `z` or 16 bytes of a two 128-bit XMM registers `z_low` and `z_high` you can do the following. With AVX2: ``` __m256i y = _mm256_set1_epi32(x); __m256i z = _mm256_shuffle_epi8(y,mask1); z = _mm256_and_si256(z,mask2); ``` Without AVX2 it's best to do this with SSE: ``` __m128i y = _mm_set1_epi32(x); __m128i z_low = _mm_shuffle_epi8(y,mask_low); __m128i z_high = _mm_shuffle_epi8(y,mask_high); z_low = _mm_and_si128(z_low ,mask2); z_high = _mm_and_si128(z_high,mask2); ``` The masks and a working example are shown below. If you plan to do this several times you should probably define the masks outside of the main loop. ``` #include <immintrin.h> #include <stdio.h> int main() { int x = 0x87654321; static const char mask1a[32] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x03 }; static const char mask2a[32] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, }; char out[32]; #if defined ( __AVX2__ ) __m256i mask2 = _mm256_loadu_si256((__m256i*)mask2a); __m256i mask1 = _mm256_loadu_si256((__m256i*)mask1a); __m256i y = _mm256_set1_epi32(x); __m256i z = _mm256_shuffle_epi8(y,mask1); z = _mm256_and_si256(z,mask2); _mm256_storeu_si256((__m256i*)out,z); #else __m128i mask2 = _mm_loadu_si128((__m128i*)mask2a); __m128i mask_low = _mm_loadu_si128((__m128i*)&mask1a[ 0]); __m128i mask_high = _mm_loadu_si128((__m128i*)&mask1a[16]); __m128i y = _mm_set1_epi32(x); __m128i z_low = _mm_shuffle_epi8(y,mask_low); __m128i z_high = _mm_shuffle_epi8(y,mask_high); z_low = _mm_and_si128(z_low,mask2); z_high = _mm_and_si128(z_high,mask2); _mm_storeu_si128((__m128i*)&out[ 0],z_low); _mm_storeu_si128((__m128i*)&out[16],z_high); #endif for(int i=0; i<8; i++) { for(int j=0; j<4; j++) { printf("%x ", out[4*i+j]); }printf("\n"); } printf("\n"); } ``` --- ## To get 0 or -1 in each vector element: It takes one extra step `_mm256_cmpeq_epi8` against all-zeros. Any non-zero turns into 0, and zero turns into -1. If we don't want this inversion, use `andnot` instead of `and`. It inverts its first operand. ``` __m256i expand_bits_to_bytes(uint32_t x) { __m256i xbcast = _mm256_set1_epi32(x); // we only use the low 32bits of each lane, but this is fine with AVX2 // Each byte gets the source byte containing the corresponding bit __m256i shufmask = _mm256_set_epi64x( 0x0303030303030303, 0x0202020202020202, 0x0101010101010101, 0x0000000000000000); __m256i shuf = _mm256_shuffle_epi8(xbcast, shufmask); __m256i andmask = _mm256_set1_epi64x(0x8040201008040201); // every 8 bits -> 8 bytes, pattern repeats. __m256i isolated_inverted = _mm256_andnot_si256(shuf, andmask); // this is the extra step: compare each byte == 0 to produce 0 or -1 return _mm256_cmpeq_epi8(isolated_inverted, _mm256_setzero_si256()); // alternative: compare against the AND mask to get 0 or -1, // avoiding the need for a vector zero constant. 
} ``` See it on the [Godbolt Compiler Explorer](http://gcc.godbolt.org/#compilers:!((compiler:g6,options:'-xc+-std%3Dgnu11+-O3+-Wall+-fverbose-asm+-march%3Dhaswell',sourcez:MQSwdgxgNgrgJgUwAQB4QFt3gC4CdwB0AFgHwBQoksiqAztnCAPbHmXTzIr2NjatkKIAGaJhSAPoSAggDUAGgCYpZAPSqkABiRNcSALQBGJOCQIAhhCJIARgE9sCMlPSKArADYQZgB4AHczA4CRsQbFoJbCYQhwRaAAoYHABmZWwkHwBKMgBvMiQCyQlXT28fGwhzeiQAXklMdw8JWgRsQwkEPxBU%2BKyAbkKkdSQAd2QmMCg7JBgWpGwiZCgmEaRU0PCdcQsrJChAhAAaWxh0hZBaE0vhcGQRsOs5JUFB4YBRS2t7RyQAc1bLgtkLQmDBcBBkN9kBAJthzOBwL95oskDDcLg4n4JrwkRt8oUXI1vLQiDBhOgqgBrWr1EpNFrYDpdDwAFh88XxgwKmh8mmSfIF/KFguOPM0inFkol0qlh05XLFhk0SpVyrVKtFvM02p1ur1mT68sJpSQJLJBTqxTpzVJwmEUAQTJAAA5ehUqthjmbyVSDS9CsNaY1mq12p0QKz2TznZoAOQsuMS2NK2OaZ2plmpxSpwyZArDADqDyQAGEANRlwws45jVGBJAUynIMIN3TIDHmODmGwOghG4pEpCBOAU2jUmlW4MMsPMtnxaOaBMSpVpxeS3MDIYaBAANwQuGmztsYUu%2BhISCPUNoxwC2EcuDASAxfgs4T7g2NXiuTH2jmC4D3XA/wnBpPAkYcwCYRlaBARp4m9Y5h1HSk/XlYZzkuC5kWQBAfDwcxTUcPwAC5USYdAAgxMxPlsWJajqbQoiQPxcCYOAYAhLQdD0Ix5QxbAwUfScwIgCiEAARydV0Lh/cw/wkAD9z/Y5hPpVoAC992iGC4MyP0AF8KAQIIREEYZtF0JBILAfRNNYkxHx2L46PgxEHWPdIGWyT9vBgMAAggSkQhPSJoivRIUjSDI8zyV5VBwDIaWjAB2Dw3BZVJDENNCNF%2BCAICQOAmDiMBY3SX4YHMXBAkcZAgTme4oCgWxkHMKAQF%2BMAEDgGsHlBTzyNaIhET2EAmyHdrOqqeJUmyOLCLkkACphMBqiKuIh3SB0PUIpg/D%2BfKkGEVj0F2/AwCRFtYR0IEFsCodhHvPYmE7EbjvIocyNWuE%2BHlehFuWiZqisKqGypQxzAAbVSABdGlYq5bktU1bUUc0NG5URpHUa0ZHcZxsV0flQZFRRwwyYp4nClJ/HydpymsfxxQUeZpmWcxrGxVZrn2bZqnseSFHBfx4WxUF/mRaFqXJflfTssGf7sCWr7gaIUHkMUKHYfhiWaZ5/Hq3x50OcRnklU1CVNQTTUYxNhVeTp/WxUNsVjd1nxzdxy3cet3Hbfd5VeedlG3cZs30a9iOeV9hc7ZJh2g95F3eVDzmPajnxvejjP/cGOX/QKEG9H66HFBh7LQHEMRbjgJB4iKJ5lAkJA5oJAcTQ1kDrWWTsYGaWDPHieIfIAKkyDXzD9D926/ZDjC74Me/Y/u4OHmeQDHufJ/ltu6W8aY6gJUC1LaJ0en6fs96QdSaSP61vXtR1w1dOxDjnqfChvy1j/AoIV8H9Sb8qSKFQh%2BH%2B9A2x9x0oPNee8x79UOOpVCwAEBQBaJfQwihnTeE7t/dAEgl5QJAJg10sCSEb3HsA7eGCsE4KpAQlYFpaQMN7v3EhQ8XDkLHgAMi3pDLQMMP4FE4bQsGY4JDDV%2BNYPBLDl4wXYWQ2hPC%2BGGA8IIzcBQC5FHQOQpAB9mHTjPooXoBokA0OwdfBhqwu42jJI/KS8RX7ISsUI7Ruj1ISI6tIgxtp7HP0cUA8RkiiCuI8csaxMjhxsKwfEMJKxAmUhAYaQYHjgk2KifImJqSvEJKSVoq0zQogYiIQokR2CeElwEYglxyS76FMgdE0hZSKHcJLqomG1Tgl%2BhQSZYQ%2BJ5TCF0PEBKIAaiaAGCAFAzoJkVhihLQZuBhl8CQAAKzGQMFZKAWQbNmUgHIjMJaDBYjgYQ8QABEABSHwSAznHBLiyEeIAywrPURLfSxy%2BCnLOQAHTAGc1x%2BlmLnWwF835/zal8VaIJa%2Bhp9JAA)),filterAsm:(commentOnly:!t,directives:!t,intel:!t,labels:!t),version:3). Also see [is there an inverse instruction to the movemask instruction in intel avx2?](https://stackoverflow.com/questions/36488675/is-there-an-inverse-instruction-to-the-movemask-instruction-in-intel-avx2) for other element sizes.
If allocators are stateless in C++, why are functions not used to allocate memory instead? The default `std::allocator` class is stateless in C++. This means any instance of an `std::allocator` can deallocate memory allocated by another `std::allocator` instance. What is then the point of having instances of allocators to allocate memory? For instance, why is memory allocated like this: ``` allocator<T> alloc, alloc2; T* buffer = alloc.allocate(42); alloc2.deallocate(buffer); ``` When functions could easily do that same job: ``` T* buffer = allocate(42); deallocate(buffer); ```
The default allocator is stateless, but other allocators may not be. However, all allocators should share the same interface.

You are not supposed to use `std::allocator` directly as in your example. You can just use `new` and `delete` for direct allocation/deallocation.

You use `std::allocator` indirectly for generic allocator-aware types, such as containers, that should be agnostic to how the memory they use is allocated. They usually have a template parameter for the allocator type satisfying the [*Allocator*](https://en.cppreference.com/w/cpp/named_req/Allocator) requirements/interface, and `std::allocator` is typically the default argument for this template parameter.

And even in these cases you should use the allocator through [`std::allocator_traits`](https://en.cppreference.com/w/cpp/memory/allocator_traits), not by directly calling member functions of the allocator type, since many of them are defaulted through `std::allocator_traits`.
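To illustrate that last point, here is roughly how generic, allocator-aware code allocates and constructs through `std::allocator_traits` instead of calling the allocator's members directly, so it keeps working even for minimal allocators that don't provide `construct`. This sketch assumes the allocator uses plain `T*` pointers:

```cpp
#include <cstddef>
#include <memory>

template <typename T, typename Alloc = std::allocator<T>>
T* make_filled_buffer(Alloc& alloc, std::size_t n, const T& value) {
    using traits = std::allocator_traits<Alloc>;

    T* p = traits::allocate(alloc, n);           // works for stateful or stateless allocators
    for (std::size_t i = 0; i < n; ++i)
        traits::construct(alloc, p + i, value);  // defaulted by the traits if Alloc lacks construct()
    return p;
}

// Cleanup mirrors this with traits::destroy and traits::deallocate, using an
// allocator that compares equal to the one the memory came from.
```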
How do I pass variables as variable names to a function? I want to be able to do the following pseudocode: 1. Pass in symbol a. 2. Pass in symbol b. 3. Pass in an expression using a and b 4. As I change the value of a and b, print the output of c at each moment. Ideally, I would like the signature to look like: ``` (runner a b (+ a b)) ``` but I'm not sure that I'm approaching this correctly... I've tried changing the function to ``` (runner 'a 'b (+ 'a 'b)) ``` and this more complicated example: ``` (runner 'a 'b (+ (* 'a 'b) 'a)) ``` but this does a + on 'a and 'b before stepping into runner. Here's my first stab at some clojure: ``` (defn runner [a b c] ( (for [i (range 10) j (range 10)] (println i j (c i j)) ``` What concept of clojure am I missing?
Function arguments are always evaluated before the function is called. If you want to defer evaluation or represent some computation or code as an object, you have a few options: 1. Use a function (see my code below) 2. Use a macro and splice some code into some other code that the macro generates 3. Pass code as a quoted list, and `eval` it. Using a function is what you want to do 99% of the time. 1% of the time, you'll want macros. You should never need `eval` unless you're generating code at runtime or doing very screwy things. ``` user> (defn runner [f] (doseq [a (range 3) b (range 3)] (println a b (f a b)))) #'user/runner user> (runner (fn [x y] (+ x y))) 0 0 0 0 1 1 0 2 2 1 0 1 1 1 2 1 2 3 2 0 2 2 1 3 2 2 4 ``` This could also be written as `(runner #(+ %1 %2)` or even simply `(runner +)`. There is no need to pass "`a`" and "`b`" into the function as arguments. `doseq` and `for` introduce their own local, lexically scoped names for things. There's no reason they should use `a` and `b`; any name will do. It's the same for `fn`. I used `x` and `y` here because it doesn't matter. I could've used `a` and `b` in the `fn` body as well, but they would have been a *different* `a` and `b` than the ones the `doseq` sees. You may want to read up on [scope](http://en.wikipedia.org/wiki/Scope_%28programming%29) if this doesn't make sense.
Playwright - Test against different environments and different variables
I'm looking to use Playwright to test against a web page. The system I'm working on has 4 different environments that we need to deploy against, for example the test urls may be

`www.test1.com`

`www.test2.com`

`www.test3.com`

`www.test4.com`

The first question is how do I target a different environment? In my playwright config I had a baseUrl, but I need to override that. In addition, each environment has different login credentials; how can I create and override these as parameters per environment?
Since Playwright `v1.13.0`, there is a `baseURL` option available. You can utilise it roughly like this. In your `playwright.config.ts` file:

```
import { PlaywrightTestConfig } from '@playwright/test';
const config: PlaywrightTestConfig = {
  use: {
    baseURL: process.env.URL,
  },
};
export default config;
```

Now in the `package.json` file, you can set the environment variable in a separate test command for each environment in `scripts`, like this:

```
...
"scripts": {
    "start": "node app.js",
    "test1": "URL=https://www.test1.com npx playwright test",
    "test2": "URL=https://www.test2.com npx playwright test",
    .
    .
},
...
```

Similarly, you can set environment variables for the login credentials and pass them in each script in the same way the `URL` is passed.
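For example, the credentials could be read from the environment inside a test or a login helper. This is only a sketch: the variable names, selectors and the dashboard assertion are placeholders for your own application.

```typescript
// package.json script, per environment (placeholder values):
//   "test1": "URL=https://www.test1.com TEST_USER=user1 TEST_PASS=secret1 npx playwright test"

import { test, expect } from '@playwright/test';

test('logs in against the configured environment', async ({ page }) => {
  await page.goto('/login');                        // resolved against baseURL
  await page.fill('#username', process.env.TEST_USER!);
  await page.fill('#password', process.env.TEST_PASS!);
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/dashboard/);        // placeholder assertion
});
```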
Filling User Id Field in Application Insights from ASP.NET Core I would like to be able to populate the User Id field in Application Insights with my real username data. This is an internal application, so privacy concerns with a simple username field are moot. As far as I can tell, all solutions available online for this strictly work in .NET Framework, not .NET Core. You can find [this solution](https://hajekj.net/2017/03/13/tracking-currently-signed-in-user-in-application-insights/) in a few places, including some old AI documentation on GitHub. However, when I run it, I get an error on startup indicating that dependency on the scoped object IHttpContextAccessor is not acceptable from a singleton, which of course is logical. I don't see how this could have ever worked unless a previous version of .NET Core's DI allowed it (I'm on 2.2). [This issue on GitHub](https://github.com/Microsoft/ApplicationInsights-aspnetcore/issues/136) sort of spells out the problem but it was closed after the AI team pointed out that you must use a singleton. I tried variations of what's in the OP and the first response, and while the code ran, the User Id field on AI continued to be filled with gibberish data. Is there any way to make the User Id field in AI fill with something useful for server requests coming from an ASP.NET Core app? ## EDIT After seeing the answer below that my code should have worked just fine, I went back and realized the exception message hadn't specifically mentioned IHttpContextAccessor: > > **System.InvalidOperationException:** 'Cannot consume scoped service > 'Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer' > from singleton > 'Microsoft.Extensions.Options.IConfigureOptions`1[Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration]'.' > > > Now, my code looks just about identical to @PeterBons' answer below, so on the face of it this exception made no sense. `TelemetryConfiguration` doesn't event appear in my code. But then I remembered I am using Scrutor to lazily do DI registration in Startup: ``` services.Scan(scan => scan .FromAssembliesOf(typeof(Startup), typeof(MyDbContext)) .AddClasses() .AsSelfWithInterfaces().WithScopedLifetime()); ``` I assume the problem was that I need to register my ITelemetryInitializer as Singleton, and this code was inadvertently de- or re-registering it as scoped. So I changed the last two lines to: ``` .AddClasses(f => f.Where(t => t != typeof(RealUserAIProvider))) .AsSelfWithInterfaces().WithScopedLifetime()); ``` And it worked. Rather than editing out my mistake above, I'll leave it. @PeterBons' answer below is still going to be helpful to other people, and maybe my confusion with Scrutor will help someone too.
You don't have to register the `HttpContextAccessor` as a scoped dependency. Just use a singleton. We have this working in production using this: ``` services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>(); ``` combined with this initializer: ``` public class SessionDetailsTelemetryEnrichment : TelemetryInitializerBase { public SessionDetailsTelemetryEnrichment(IHttpContextAccessor httpContextAccessor) : base(httpContextAccessor) { } protected override void OnInitializeTelemetry(HttpContext platformContext, RequestTelemetry requestTelemetry, ITelemetry telemetry) { telemetry.Context.User.AuthenticatedUserId = platformContext.User?.Claims.FirstOrDefault(c => c.Type == "Username")?.Value ?? string.Empty; } } ```
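For completeness, the initializer itself also has to be registered; the standard pattern is to add it as a singleton `ITelemetryInitializer` alongside the accessor registration shown above (the `AddApplicationInsightsTelemetry()` call is only needed if it isn't already done elsewhere in your startup):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    services.AddSingleton<ITelemetryInitializer, SessionDetailsTelemetryEnrichment>();
}
```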
Make Python dataclass iterable?
I have a dataclass and I want to iterate over it in a loop to spit out each of the values. I'm able to write a very short `__iter__()` within it easily enough, but is that what I should be doing? I don't see anything in the documentation about an 'iterable' parameter or anything, but I just *feel like* there ought to be...

Here is what I have which, again, works fine.

```
from dataclasses import dataclass

@dataclass
class MyDataClass:
    a: float
    b: float
    c: float

    def __iter__(self):
        for value in self.__dict__.values():
            yield value

thing = MyDataClass(1,2,3)
for i in thing:
    print(i)

# outputs 1,2,3 on separate lines, as expected
```

Is this the best / most direct way to do this?
The simplest approach is probably to iteratively extract the fields, following the guidance in [the `dataclasses.astuple` documentation](https://docs.python.org/3/library/dataclasses.html#dataclasses.astuple) for creating a shallow copy, just omitting the call to `tuple` (to leave it a generator expression, which is a legal iterator for `__iter__` to return):

```
def __iter__(self):
    return (getattr(self, field.name) for field in dataclasses.fields(self))

# Or writing it directly as a generator itself instead of returning a genexpr:
def __iter__(self):
    for field in dataclasses.fields(self):
        yield getattr(self, field.name)
```

Unfortunately, `astuple` itself is not suitable (as it recurses, unpacking nested dataclasses and structures), while `asdict` (followed by a `.values()` call on the result), while suitable, involves eagerly constructing a temporary `dict` and recursively copying the contents, which is relatively heavyweight (memory-wise and CPU-wise); better to avoid unnecessary `O(n)` eager work. `asdict` would be suitable if you want/need to avoid using live views (if later attributes of the instance are replaced/modified midway through iterating, `asdict` wouldn't change, since it actually guarantees they're deep copied up-front, while the genexpr would reflect the newer values when you reached them). The implementation using `asdict` is even simpler (if slower, due to the eager pre-deep copy):

```
def __iter__(self):
    yield from dataclasses.asdict(self).values()

# or avoiding a generator function:
def __iter__(self):
    return iter(dataclasses.asdict(self).values())
```

There is a third option, which is to ditch `dataclasses` entirely. If you're okay with making your class behave like an immutable sequence, then you get iterability for free by making it a `typing.NamedTuple` (or the older, less flexible `collections.namedtuple`) instead, e.g.:

```
from typing import NamedTuple

class MyNotADataClass(NamedTuple):
    a: float
    b: float
    c: float

thing = MyNotADataClass(1,2,3)

for i in thing:
    print(i)

# outputs 1,2,3 on separate lines, as expected
```

and that is iterable automatically (you can also call `len` on it, index it, or slice it, because it's an actual subclass of `tuple` with all the `tuple` behaviors, it just also exposes its contents via named properties as well).
vue-test-utils: mocking vue-router and vuex in the same test I'm trying to mount a component that uses Vuex and requires $route.query to be mocked

```
import { mount, shallow, createLocalVue } from 'vue-test-utils'
import Vue from 'vue'
import expect from 'expect'
import Vuex from 'vuex'
import VueRouter from 'vue-router'
import Post from '../../components/Post.vue'

const localVue = createLocalVue()

localVue.use(Vuex);
localVue.use(VueRouter);

describe('Lead list', () => {
  let wrapper;
  let getters;
  let store;

  beforeEach(() => {
    getters = {
      post: () => {
        return {}
      }
    }

    store = new Vuex.Store({
      getters
    });
  });

  it('should just be true', () => {
    const $route = {
      path: '/some/path',
      query: {}
    }

    wrapper = shallow(Post, {
      localVue,
      mocks: {
        $route
      },
      store
    });

    expect(true).toBe(true);
  });
});
```

And I'm getting back this error

```
TypeError: Cannot set property $route of #<VueComponent> which has only a getter
```

I've found the closed issue <https://github.com/vuejs/vue-test-utils/issues/142> that has a similar error. But my case is a little different. If I remove store or mocks from the options it works fine, but it doesn't work when you have both. Is this an issue, or am I doing something wrong? Thanks
You're getting this error because you have installed VueRouter on the Vue constructor, by calling `localVue.use(VueRouter)`. This adds $route as a read-only property on the localVue constructor.

You're then trying to overwrite `$route` using `mocks`. `mocks` is unable to overwrite `$route` because it's been added as a read-only property by Vue Router.

To fix your problem, you could create another `localVue`, install Vuex on it, and then use `mocks` to pass in `$route`:

```
it('should just be true', () => {
    const freshLocalVue = createLocalVue()
    freshLocalVue.use(Vuex)
    const $route = {
        path: '/some/path',
        query: {}
    }
    wrapper = shallow(Post, {
        localVue: freshLocalVue,
        mocks: {
          $route
        },
        store
    })
    expect(true).toBe(true)
})
```
Android - Firebase jobdispatcher I would like to know if it's possible to use Firebase jobdispatcher to schedule an url hit and get the response in order to update the db. I would like it to run once per day at night. Does anyone know if this is possible? I can't find any good example of doing this. I already read android documentation and <https://github.com/firebase/firebase-jobdispatcher-android#user-content-firebase-jobdispatcher-> . I need to use Firebase jobdispatcher because I'm targeting API 16. Thanks in advance. **UPDATE** This is what I did to schedule it once per day. ``` final int periodicity = (int) TimeUnit.HOURS.toSeconds(24); final int toleranceInterval = (int) TimeUnit.HOURS.toSeconds(1); FirebaseJobDispatcher dispatcher = new FirebaseJobDispatcher(new GooglePlayDriver(this)); Job job = dispatcher.newJobBuilder() .setService(UpdateTVJobService.class) .setTag(JOB_TAG) .setTrigger(Trigger.executionWindow(periodicity, periodicity + toleranceInterval)) .setLifetime(Lifetime.FOREVER) .setRecurring(true) .setReplaceCurrent(true) .build(); int result = dispatcher.schedule(job); if (result != FirebaseJobDispatcher.SCHEDULE_RESULT_SUCCESS) { Log.d("JOB_TAG", "ERROR ON SCHEDULE"); } ```
You can schedule recurring jobs using Firebase JobDispatcher. For your requirement, you need to create a service extending `JobService` that gets the response from the URL and updates the database, and then schedule that service with Firebase JobDispatcher. In `executionWindow` you specify the earliest and latest time the job should run under ideal circumstances. If you want to schedule the job every 24 hours, you can use an execution window of `(60*60*24, 60*60*24+60)`.

If you want it to run every night, you also have to make sure it is initially scheduled at night. For that you can either use AlarmManager, fired once at night when the app is installed, to schedule the recurring job with JobDispatcher, or, based on the difference between now and the desired execution time, schedule a non-recurring job with JobDispatcher that runs at night and, inside that job service, schedule the recurring job.

`ExecutionWindow` specifies an approximate time; it is not guaranteed the job will run within the given window. If it misses the window, the job will run at the earliest time later under ideal circumstances. For recurring jobs, once a job has finished, the next execution window is calculated from the time the job last ran.

```
Job myJob = dispatcher.newJobBuilder()
    .setTag("my-unique-tag")
    .setRecurring(true)
    .setLifetime(Lifetime.FOREVER)
    .setService(MyJobService.class)
    .setTrigger(Trigger.executionWindow(60*60*24,60*60*24+60))
    .setReplaceCurrent(false)
    .setRetryStrategy(RetryStrategy.DEFAULT_EXPONENTIAL)
    .setConstraints(Constraint.ON_ANY_NETWORK)
    .setExtras(myExtrasBundle)
    .build();

dispatcher.mustSchedule(myJob);
```
Is there a way to pass a variable through when calling RenderComponentPresentation? > > **Possible Duplicate:** > > [Variable setting in Dreamweaver template in SDL Tridion](https://stackoverflow.com/questions/10208580/variable-setting-in-dreamweaver-template-in-sdl-tridion) > > > We use `RenderComponentPresentation` (on Tridion 2009) to render internal and external links so that the code base is in only one Dreamweaver template. It would be helpful if we could pass through an optional CSS Class to use when rendering the link. Any ideas how this could be done?
It is possible to set a value in the RenderContext and then retrieve it in the second Dreamweaver template. Before calling RenderComponentPresentation set a render context value as follows: ``` @@SetRenderContextVariable("CSSClass","red")@@ ``` You will need to have a C# Fragment or TBB to get the variables out of the render context and add them to the package in the second Dreamweaver template. An example would be: ``` var renderContext = engine.PublishingContext.RenderContext; foreach (string key in renderContext.ContextVariables.Keys) { var value = renderContext.ContextVariables[key] as string; var item = package.CreateStringItem(ContentType.Text, value); package.PushItem("RenderContextVariable."+key, item); } ``` You should then be able to access the variables within the package using the standard Dreamweaver notation ``` @@RenderContextVariable.CSSClass@@ ``` Hope this helps!
Assembling very large files I have users uploading files sometimes as large as 100+ GB to a local web server. The upload process works well and chunks come in at 50MB. The problem seems to be after the file is uploaded, when the web server assembles the files and the server (24GB RAM), despite not showing any graphical signs of memory pressure, gets very sluggish. I want to make sure my code isn't causing any unnecessary slow-downs. Please suggest any more efficient way to do it. If it is already ok, then I'll know to look at other aspects of the process. ``` # open the temp file to write into with open(temp_filename, 'wb') as temp_file: # loop over the chunks for i in range(total_chunks): with open(os.path.join(get_chunk_filename(chunk_identifier, i + 1)), 'rb') as chunk_file: # write the chunk to the temp file temp_file.write(chunk_file.read()) ```
I suggest you use existing library functions, e.g. [`shutil.copyfileobj`](https://docs.python.org/2/library/shutil.html#shutil.copyfileobj) to do the copying.

Edit to clarify, as [Gareth said](https://codereview.stackexchange.com/questions/108326/assembling-very-large-files/108329?noredirect=1#comment199059_108329): Use `shutil.copyfileobj(chunk_file, temp_file)` instead of `temp_file.write(chunk_file.read())`.

Other than that (allocating and reading into Python objects via `chunk_file.read()`) there are no obvious flaws with the code, but I/O in Python is to be avoided on that scale anyway. I'd even say you could try using a shell script with `cat $FILES > $OUTPUT` and it could perform better.
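To make that concrete, here is the loop from the question rewritten with `shutil.copyfileobj` (a sketch reusing the question's `temp_filename`, `total_chunks`, `get_chunk_filename` and `chunk_identifier` names; the explicit buffer size is optional):

```
import shutil

# Stream each chunk into the destination in fixed-size buffers instead of
# reading a whole 50 MB chunk into memory at once.
with open(temp_filename, 'wb') as temp_file:
    for i in range(total_chunks):
        with open(get_chunk_filename(chunk_identifier, i + 1), 'rb') as chunk_file:
            shutil.copyfileobj(chunk_file, temp_file, 1024 * 1024)  # 1 MiB buffer
```

Memory use stays flat regardless of chunk size, which matters when the chunks are 50 MB each.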
iOS 9.0 - Remove top bar from UIAlertController action sheet When creating an action sheet with `UIAlertController`, I'm seeing a top bar always show. This SO post suggests setting the title to `nil` - this might work on iOS 8.0, but I'm seeing a top bar on iOS 9.0. [![enter image description here](https://i.stack.imgur.com/ySbYH.png)](https://i.stack.imgur.com/ySbYH.png)
Set `message` to `nil` also: ``` UIAlertController *actionSheet= [UIAlertController alertControllerWithTitle:nil message:nil preferredStyle:UIAlertControllerStyleActionSheet]; UIAlertAction *actionSheetButton1 = [UIAlertAction actionWithTitle:@"Button 1" style:UIAlertActionStyleDefault handler:^(UIAlertAction * action) { NSLog(@"Button 1 pressed"); }]; UIAlertAction *actionSheetButton2 = [UIAlertAction actionWithTitle:@"Button 2" style:UIAlertActionStyleDefault handler:^(UIAlertAction * action) { NSLog(@"Button 2 pressed"); }]; UIAlertAction *actionSheetButton3 = [UIAlertAction actionWithTitle:@"Close Button" style:UIAlertActionStyleCancel handler:^(UIAlertAction * action) { NSLog(@"Close Button pressed"); }]; [actionSheet addAction:actionSheetButton1]; [actionSheet addAction:actionSheetButton2]; [actionSheet addAction:actionSheetButton3]; [self presentViewController:actionSheet animated:YES completion:nil]; ``` [![enter image description here](https://i.stack.imgur.com/uFz5Um.png)](https://i.stack.imgur.com/uFz5Um.png)
babel/6to5 - duplicated code when exporting modules I'm using Babel to transpile ES6 to ES5 and it works great. The only problem is that once transpiled each module duplicates these lines: ``` var _get = function get(object, property, receiver) { var desc = Object.getOwnPropertyDescriptor(object, property); if (desc === undefined) { var parent = Object.getPrototypeOf(object); if (parent === null) { return undefined; } else { return get(parent, property, receiver); } } else if ("value" in desc && desc.writable) { return desc.value; } else { var getter = desc.get; if (getter === undefined) { return undefined; } return getter.call(receiver); } }; var _inherits = function (subClass, superClass) { if (typeof superClass !== "function" && superClass !== null) { throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) subClass.__proto__ = superClass; }; var _classCallCheck = function (instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }; ``` As my application is going to have more and more modules defining classes, I cannot imagine waisting so much space just because of these repeated code. Is there a way to tell Babel to not include them in each module so we can add them only once later ? I'm using gulp and my config is: ``` var gulp = require('gulp'); var concat = require('gulp-concat'); var babel = require('gulp-babel'); gulp.task('default', function () { return gulp.src('src/**/*.es6') .pipe(babel({ playground: true, experimental: true, modules: 'amd', moduleIds: true })) .pipe(concat('tmp/src.js')) .pipe(gulp.dest('dist')); }); ``` Thanks a lot for your help!
Yes, as it is described in the [**documentation**](https://babeljs.io/docs/usage/runtime/): > > Babel uses very small helpers for common functions such as `_extend`. By default this will be added to every file that requires it. This duplication is sometimes unnecessary, especially when your application is spread out over multiple files. > > > This is where the runtime optional transformer comes in. All of the helpers will reference the module `babel-runtime` to avoid duplication across your compiled output. > > > **Usage** > > > > ``` > require("babel").transform("code", { optional: ["runtime"] }); > > ``` > > > > --- > > > > ``` > $ babel --optional runtime script.js > > ``` > >
Directed acyclic graph: find all paths from a specific node How do I find all the paths from a specific node (node 36 in the example)? Let's say we have two tables: ``` CATEGORIES CATEG_PARENTS id idc | idparent -- ---------------- 1 2 1 2 2 20 5 5 2 8 8 5 20 20 1 22 22 8 30 22 20 31 30 20 36 31 22 31 30 36 31 ``` These are two possible representations: [![alt text](https://i.stack.imgur.com/oXd3R.png)](https://i.stack.imgur.com/oXd3R.png) (source: [glopes at nebm.ist.utl.pt](http://nebm.ist.utl.pt/~glopes/img/graphso.png)) [![alt text](https://i.stack.imgur.com/jBgYG.png)](https://i.stack.imgur.com/jBgYG.png) (source: [glopes at nebm.ist.utl.pt](http://nebm.ist.utl.pt/~glopes/img/graphso2.png)) This is the output I desire: ``` ids ------------------- "{1,20,22,31}" "{1,20,2,5,8,22,31}" "{1,20,30,31}" "{1,2,5,8,22,31}" ``` One path per line, written as an integer array. (I'm going to write the answer I came up with, but I'll accept any that's simpler, if any)
``` WITH RECURSIVE parent_line(path, id) AS ( SELECT ARRAY[(row_number() OVER (PARTITION BY CP.idc))::integer], C.id FROM categorias C JOIN categ_parents CP ON C.id = CP.idparent WHERE CP.idc = 36 UNION ALL SELECT PL.path || (row_number() OVER (PARTITION BY PL.id))::integer, C.id FROM categorias C JOIN categ_parents CP ON C.id = CP.idparent JOIN parent_line PL on CP.idc = PL.id ), real_parent_line(path, chainid, id) AS ( SELECT PL.path, (row_number() OVER (PARTITION BY PL.id)), PL.id FROM parent_line PL WHERE PL.id IN ( SELECT id FROM categorias C LEFT JOIN categ_parents CP ON (C.id = CP.idc) WHERE CP.idc IS NULL ) UNION ALL SELECT PL.path, chainid, PL.id FROM parent_line PL, real_parent_line RPL WHERE array_upper(PL.path,1) + 1 = array_upper(RPL.path,1) AND PL.path = RPL.path[1:(array_upper(RPL.path,1)-1)] ) SELECT array_accum(id) AS ids FROM real_parent_line RPL GROUP BY chainid; ``` The first `WITH` clause gives this: ``` path | id ------------------------ "{1}" 31 "{1,1}" 22 "{1,2}" 30 "{1,1,1}" 20 "{1,1,2}" 8 "{1,2,1}" 20 "{1,1,2,1}" 5 "{1,1,1,1}" 1 "{1,2,1,2}" 1 "{1,1,2,1,1}" 2 "{1,1,2,1,1,1}" 1 "{1,1,2,1,1,2}" 20 "{1,1,2,1,1,2,1}" 1 ``` Thanks to #postgresql@freenode for some help.
How to check if external microphone is being used via ADB I'm trying to check through ADB whether or not the external wired headset mic is used or not. This mic is automatically detected when I plug in the wired headset, but for external scripting purposes, it would be very helpful to detect this action. I couldn't find an intent for the microphone, but looked up the headset intent here: <http://developer.android.com/reference/android/content/Intent.html> I tried this broadcast intent for detecting the headset alone: ``` adb shell am broadcast -a android.intent.action.HEADSET_PLUG ``` which gets this response whether or not a wired headset is actually plugged in: ``` Broadcasting: Intent { act=android.intent.action.HEADSET_PLUG } Broadcast completed: result=0 ``` So I'm not sure where to go from here. I can't even detect if the headset is plugged in, much less if the external microphone is being used. Any help would be greatly appreciated. Thanks!
I found this method works on my device: Run the command `adb shell dumpsys activity broadcasts | grep microphone` which should produce something like: ``` extras: Bundle[{name=h2w, state=1, microphone=1}] extras: Bundle[{name=h2w, state=0, microphone=1}] extras: Bundle[{name=h2w, state=1, microphone=1}] extras: Bundle[{name=h2w, state=0, microphone=1}] extras: Bundle[{name=h2w, state=1, microphone=1}] extras: Bundle[{name=h2w, state=0, microphone=1}] extras: Bundle[{name=h2w, state=1, microphone=1}] extras: Bundle[{name=h2w, state=0, microphone=1}] extras: Bundle[{name=h2w, state=1, microphone=1}] extras: Bundle[{name=h2w, state=0, microphone=1}] extras: Bundle[{name=h2w, state=1, microphone=1}] extras: Bundle[{name=h2w, state=0, microphone=1}] extras: Bundle[{name=h2w, state=1, microphone=1}] Bundle[{name=h2w, state=1, microphone=1}] ``` The last line is inside the sticky broadcasts section of the dump, the broadcasts that remain the same until changed. So if we take the last line using `tail` and dissect it, it contains the current state of the headset: `adb shell dumpsys activity broadcasts | grep microphone | tail -n 1` output: `Bundle[{name=h2w, state=1, microphone=1}]` The `state` integer refers to whether something is plugged into the headphone jack, regardless of it contains a microphone. 0 for unplugged and 1 for plugged in. The `microphone` integer refers to if the headset that was last plugged in also included a microphone. 0 for no, 1 for yes. # Scenarios If a normal pair of headphones is currently plugged in, the output will be: `Bundle[{name=h2w, state=1, microphone=0}]` --- If a headset with a microphone is currently plugged in, the output will be: `Bundle[{name=h2w, state=1, microphone=1}]` --- If nothing is plugged in, the output is either: `Bundle[{name=h2w, state=0, microphone=0}]` or `Bundle[{name=h2w, state=0, microphone=1}]`
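Since the goal is external scripting, here is a small wrapper around that command (a Python sketch, assuming `adb` is on the PATH and the device prints the same `Bundle[{name=h2w, state=..., microphone=...}]` format shown above):

```
import re
import subprocess

def headset_state():
    """Parse the sticky HEADSET_PLUG broadcast out of the activity dump."""
    dump = subprocess.run(
        ["adb", "shell", "dumpsys", "activity", "broadcasts"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep only lines carrying both fields; the last one is the sticky state.
    matches = re.findall(r"state=(\d+).*microphone=(\d+)", dump)
    if not matches:
        return None  # no matching broadcast found in the dump
    state, microphone = matches[-1]
    return {"plugged_in": state == "1", "has_microphone": microphone == "1"}

if __name__ == "__main__":
    print(headset_state())
```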
Issue with CSS 'everything except last' selector I have a list of sub-lists. Each sublist has its own children. I need to apply style to all except the last sub-list. I use: ``` .list:not(:last-child) > .sublist { color: red; } ``` But it is applying for all the sublists. Here's a demonstrating code (fiddle: <http://jsfiddle.net/8m72m53r/3/>): ``` :not(:last-child) > .row { color: red; } ``` ``` <ul class="box"> <li class="row">One</li> <li class="row">Two</li> <li class="row">Three</li> </ul> <ul class="box"> <li class="row">FOur</li> <li class="row">Five</li> <li class="row">Six</li> </ul> <ul class="box"> <li class="row">Seven</li> <li class="row">Eight</li> <li class="row">Nine</li> </ul> ```
[Here you go](http://jsfiddle.net/yfs8L2a5/) - you pretty much had it, just make sure to use the [:last-of-type pseudoclass](https://developer.mozilla.org/en-US/docs/Web/CSS/:last-of-type) ``` .box:not(:last-of-type) > .row { color: red; } ``` ``` <ul class="box"> <li class="row">One</li> <li class="row">Two</li> <li class="row">Three</li> </ul> <ul class="box"> <li class="row">FOur</li> <li class="row">Five</li> <li class="row">Six</li> </ul> <ul class="box"> <li class="row">Seven</li> <li class="row">Eight</li> <li class="row">Nine</li> </ul> ```
How to check whether an attribute of an object has a value? I get an error such as "can't call method 'xxxx' on an undefined value" when attempting to check if an object has been created (by the perl module Bio::Perl). Is there a general way of checking if an attribute has a value or not? I would have liked to do something like: ``` if ($the_object->the_attribute) { ``` But as long as the attribute is "undef", calling the method will only give me the error message. I have not been able to find a solution to this problem - which is real, because the object is created by the Bio::Perl module, and some attributes may or may not be set. Maybe I should add that I am not particularly perl-objects-savvy. edit: Below is a relevant part of my code. The get\_sequence() function is in the Bio::Perl module. On line 13, how can I make sure that there is a value (sequence in this case) before checking the length of it? ``` my @msgs; my @sequence_objects; my $maxlength = 0; for ( @contigs ) { my $seq_obj; try { $seq_obj = get_sequence( 'genbank', $_ ); } catch Bio::Root::Exception with { push @msgs, "Nothing found for $_ "; }; if ( $seq_obj ) { my $seq_length = length( $seq_obj->seq ); if ( $seq_length > $maxlength ) { $maxlength = $seq_length; } push @sequence_objects, $seq_obj; } } ... ```
> > > ``` > if ($the_object->the_attribute) { > > ``` > > This checks if the return value of the method `the_attribute` is true. *True* means that it's not `0`, the empty string `q{}` or `undef`. But you said you want to know whether the object exists. Let's go over **some basics** first. ``` # | this variable contains an object # | this arrow -> tells Perl to call the method on the obj # | | this is a method that is called on $the_object # | | | if ($the_object->the_attribute) { # ( ) # the if checks the return value of the expression between those parenthesis ``` It looks like you're confusing a few things. First, your `$the_object` is supposed to be an object. It probably came from a call like this: ``` my $the_object = Some::Class->new; ``` Or maybe it was returned from some other function call. Maybe some other object returned it. ``` my $the_object = $something_else->some_property_that_be_another_obj ``` Now `the_attribute` is a method (that's like a function) that returns a specific piece of data in your object. Depending on the implementation of the class (the building plan of the object), if that attribute is not set (*initialized*), it might either just return `undef`, or some other value. But the error message you are seeing is not related to `the_attribute`. If it was, you'd just not call the code in the block. The `if` check would catch it, and decide to go to `else`, or do nothing if there is no `else`. Your error message says you are trying to call a method on something that is `undef`. We know you are calling the `the_attribute` accessor method on `$the_object`. So `$the_object` is `undef`. --- The easiest way to check if something has a true value is to just put it in an `if`. But you already seem to know that. ``` if ($obj) { # there is some non-false value in $obj } ``` You've now checked that `$obj` is something that is true. So it could be an object. So you could now call your method. ``` if ($obj && $obj->the_attribute) { ... } ``` This will check the true-ness of `$obj` and only continue if there is something in `$obj`. If not, it will never call the right hand side of the `&&` and you will not get an error. But if you want to know whether `$obj` is an object that has a method, you can use `can`. Remember that *attributes* are just accessor methods to values stored inside the object. ``` if ($obj->can('the_attribute')) { # $obj has a method the_attribute } ``` But that can blow up if `$obj` is not there. If you're not sure that `$obj` is really an object, you can use the [Safe::Isa](https://metacpan.org/pod/Safe::Isa) module. It provides a method `$_call_if_object`1 that you can use to safely call your method on your maybe-object. ``` $maybe_an_object->$_call_if_object(method_name => @args); ``` Your call would translate to. ``` my $the_attribute = $obj->$_call_if_object('the_attribute'); if ($the_attribute) { # there is a value in the_attribute } ``` The same way you can use `$_isa` and `$_can` from Safe::Isa. --- 1) Yes, the method starts with a `$`, it's really a variable. If you want to learn more about how and why this works, watch the talk [*You did what?*](https://www.youtube.com/watch?v=9aCsUxfRksE) by mst.
How to access Route Data / Value Provider data in a service in ASP.NET Core? I am attempting to write a [Policy-based Authorization Handler](https://learn.microsoft.com/en-us/aspnet/core/security/authorization/policies). The business logic of the handler needs to use the record `id` of the current request that is passed in through the default route. ``` [Authorize(Roles = "TaskAdmin", Policy = "RecordOwner")] public IActionResult Index(int id) // <-- Need this id { // <snip> return View(); } ``` ## Policy Here is the class where I need to access the `id` route value. ``` public class RecordOwnerHandler : AuthorizationHandler<RecordOwnerRequirement> { private readonly ApplicationDbContext dbContext; public RecordOwnerHandler(ApplicationDbContext dbContext) { this.dbContext = dbContext ?? throw new ArgumentNullException(nameof(dbContext)); } protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, RecordOwnerRequirement requirement) { if (IsUserAuthorized(context)) { context.Succeed(requirement); } //TODO: Use the following if targeting a version of //.NET Framework older than 4.6: // return Task.FromResult(0); return Task.CompletedTask; } private bool IsUserAuthorized(AuthorizationHandlerContext context) { //**************************************** // Need the id here... //**************************************** // Return the result return true; } } ``` ## Startup ``` public void ConfigureServices(IServiceCollection services) { services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); services.AddIdentity<ApplicationUser, IdentityRole>() .AddEntityFrameworkStores<ApplicationDbContext>() .AddDefaultTokenProviders(); // *** Add policy for record owner *** services.AddAuthorization(options => { options.AddPolicy("RecordOwner", policy => policy.Requirements.Add(new RecordOwnerRequirement())); }); // Add application services. services.AddTransient<IEmailSender, EmailSender>(); // *** Register record owner handler with the DI container *** services.AddTransient<IAuthorizationHandler, RecordOwnerHandler>(); services.AddMvc(); } ``` ## What I Tried 1. I tried using the `IHttpContextAccessor` as a constructor parameter of `RecordOwnerHandler`, but `IHttpContextAccessor.HttpContext` doesn't seem to contain the `RouteData` of the request. 2. I did several Google searches to see if there was any info about how to do this and came up blank. 3. Then I dug through the source code for both **Routing** and **Model Binding**, but can't seem to find an abstraction that is meant for injecting route values into services. > > I realize I could try to parse this info out of the URL, but I am hoping for a cleaner way to get the value. > > > So, how can I access **route values** and/or **value provider** data inside of a service in ASP.NET Core 2.0?
Route values can be accessed by using the [`ActionContextAccessor` class](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.infrastructure.actioncontextaccessor?view=aspnetcore-2.0). ## DI Registration ``` services.AddSingleton<IActionContextAccessor, ActionContextAccessor>(); ``` ## Usage ``` public class RecordOwnerHandler : AuthorizationHandler<RecordOwnerRequirement> { private readonly ApplicationDbContext dbContext; private readonly IActionContextAccessor actionContextAccessor; public RecordOwnerHandler(ApplicationDbContext dbContext, IActionContextAccessor actionContextAccessor) { this.dbContext = dbContext ?? throw new ArgumentNullException(nameof(dbContext)); this.actionContextAccessor = actionContextAccessor ?? throw new ArgumentNullException(nameof(actionContextAccessor)); } protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, RecordOwnerRequirement requirement) { if (IsUserAuthorized(context)) { context.Succeed(requirement); } //TODO: Use the following if targeting a version of //.NET Framework older than 4.6: // return Task.FromResult(0); return Task.CompletedTask; } private bool IsUserAuthorized(AuthorizationHandlerContext context) { // Now the id route value can be accessed directly... var id = this.actionContextAccessor.ActionContext.RouteData.Values["id"]; // Use the dbContext to compare the id against the database... // Return the result return true; } } ``` > > **NOTE:** I would still like to find out a way to access the value providers to do this, so it wouldn't matter if the parameter is passed through route values, query string, form values, etc. > > >
Delphi Exception Handling - How to clean up properly? I'm looking at some code in an application of ours and came across something a little odd from what I normally do. With exception handling and cleanup, we (as well as many other programmers out there, I'm sure) use a Try/Finally block embedded with a Try/Except block. Now I'm used to the Try/Except inside the Try/Finally like so: ``` Try Try CouldCauseError(X); Except HandleError; end; Finally FreeAndNil(x); end; ``` but this other block of code is reversed as so: ``` Try Try CouldCauseError(X); Finally FreeAndNil(x); end; Except HandleError; end; ``` Looking around the web, I'm seeing folks doing this both ways, with no explanation as to why. My question is, does it matter which gets the outside block and which gets the inside block? Or will the except and finally sections get handled no matter which way it is structured? Thanks.
One difference is that try..finally..except is potentially vulnerable to an exception masking situation.

Imagine that an exception occurs in **CouldCauseError()**. Then imagine that the attempt to **FreeAndNIL**(X) in the **finally** causes a further exception. The original exception (quite possibly the one which led to the instability behind the **FreeAndNIL**() exception) is lost. The **except** handler is now handling the "downstream" exception that occurred after the original one.

**try..except..finally** avoids this of course and should be preferred for this reason (deal with exceptions as close as possible to their source).

The other way to handle a simple case such as this (a single object being cleaned) is to include the cleanup both in the normal flow ***and*** in the exception handler:

```
try
  CouldCauseError(X);
  FreeAndNil(x);
except
  HandleError;
  FreeAndNil(x);
end;
```

This looks a little scary at first ("I need to be SURE that **FreeAndNIL**(X) is called, so I HAVE TO HAVE A **FINALLY**!!") but the only way that the first FreeAndNIL() might not be called is if there is an exception, and if there *is* an exception you are FreeAndNIL()ing as well anyway, and it makes the order of cleanup in the event of an exception a little clearer (in the sense of removing noise that to some extent has to be "filtered" out in order to understand what is going on).

But, I personally do not like it - if you change code in either the exception handler or the normal flow you risk breaking the cleanup behaviour, but depending on the code around such a block, and the size of the block itself, the reduction in "noise" can be argued to be justified in some cases, for the sake of simplification.

However, this relies on the fact that **FreeAndNIL**() is actually "**NILThenFree**()"... **X** is NIL'd before it is Free'd, so if an exception occurs in the **FreeAndNIL**(X) in the normal flow, then X will be NIL when the exception handler catches the exception raised by **X.Free**, so it will not attempt to "double-free" X.

Whatever you decide, I hope that helps.
memory management and segmentation faults in modern day systems (Linux) In modern-day operating systems, memory is available as an abstracted resource. A process is exposed to a virtual address space (which is independent from address space of all other processes) and a whole mechanism exists for mapping any virtual address to some actual physical address. My doubt is: - If each process has its own address space, then it should be free to access any address in the same. So apart from permission restricted sections like that of .data, .bss, .text etc, one should be free to change value at any address. But this usually gives segmentation fault, why? - For acquiring the dynamic memory, we need to do a malloc. If the whole virtual space is made available to a process, then why can't it directly access it? - Different runs of a program results in different addresses for variables (both on stack and heap). Why is it so, when the environments for each run is same? Does it not affect the amount of addressable memory available for usage? (Does it have something to do with address space randomization?) - Some links on memory allocation (e.g. in heap). The data available at different places is very confusing, as they talk about old and modern times, often not distinguishing between them. It would be helpful if someone could clarify the doubts while keeping modern systems in mind, say Linux. Thanks.
Technically, the operating system is able to allocate any memory page on access, but there are important reasons why it shouldn't or can't: different memory regions serve different purposes.

- code. It can be read and executed, but shouldn't be written to.
- literals (strings, const arrays). This memory is read-only and should be.
- the heap. It can be read and written, but not executed.
- the thread stack. There is no reason for two threads to access each other's stack, so the OS might as well forbid that. Moreover, the thread stack can be de-allocated when the thread ends.
- memory-mapped files. Any changes to this region should affect a specific file. If the file is open for reading, the same memory page may be shared between processes because it's read-only.
- the kernel space. Normally the application should not (or can not) access that region - only kernel code can. It's basically a scratch space for the kernel and it's shared between processes. The network buffer may reside there, so that it's always available for writes, no matter when the packet arrives.
- ...

The OS might assume that all unrecognised memory access is an attempt to allocate more heap space, but:

- if an application touches the kernel memory from user code, it must be killed. On 32-bit Windows, all memory above `1<<31` (top bit set) or above `3<<30` (top two bits set) is kernel memory. You should not assume any unallocated memory region is in the user space.
- if an application thinks about using a memory region but doesn't tell the OS, the OS may allocate something else to that memory (OS: sure, your file is at `0x12341234`; App: but I *wanted* to store *my* data *there*). You could tell the OS by touching the end of your array (which is unreliable anyways), but it's easier to just call an OS function. It's just a good idea that the function call is "give me 10MB of heap", not "give me 10MB of heap starting at `0x12345678`"
- If the application allocates memory by using it then it typically does not de-allocate at all. This can be problematic as the OS still has to hold the unused pages (but the Java Virtual Machine does *not* de-allocate either, so hey).

> Different runs of a program results in different addresses for variables

This is called memory layout randomisation and is used, alongside proper permissions (stack space is not executable), to make buffer overflow attacks much more difficult. You can still kill the app, but not execute arbitrary code.

> Some links on memory allocation (e.g. in heap).

Do you mean, what algorithm the allocator uses? The easiest algorithm is to always allocate at the soonest available position, link from each memory block to the next, and store a flag saying whether it's a free block or a used block. More advanced algorithms always allocate blocks at the size of a power of two or a multiple of some fixed size to prevent memory fragmentation (lots of small free blocks), or link the blocks in different structures to find a free block of sufficient size faster.

An even simpler approach is to never de-allocate and just point to the first (and only) free block and hold its size. If the remaining space is too small, throw it away and ask the OS for a new one.

There's nothing magical about memory allocators. All they do is to:

- ask the OS for a large region and
- partition it into smaller chunks
- without
  - wasting too much space or
  - taking too long.

Anyways, the Wikipedia article about memory allocation is <http://en.wikipedia.org/wiki/Memory_management>.
One interesting algorithm is called ["(binary) buddy blocks"](http://en.wikipedia.org/wiki/Buddy_memory_allocation). It holds several pools of a power-of-two size and splits them recursively into smaller regions. Each region is then either fully allocated, fully free or split in two regions (buddies) that are not both fully free. If it's split, then one byte suffices to hold the size of the largest free block within this block.
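As a toy illustration of the "link blocks together and flag them free/used" scheme described above, here is a first-fit allocator sketch (Python, purely illustrative: it manages offsets into one imaginary region rather than real memory; allocation splits a free block, freeing merges adjacent free blocks):

```
class FirstFitAllocator:
    """Toy first-fit allocator: a list of [offset, size, free] blocks."""

    def __init__(self, size):
        self.blocks = [[0, size, True]]      # one big free block to start with

    def alloc(self, size):
        for i, (off, blk_size, free) in enumerate(self.blocks):
            if free and blk_size >= size:
                if blk_size > size:          # split: keep the remainder free
                    self.blocks.insert(i + 1, [off + size, blk_size - size, True])
                self.blocks[i] = [off, size, False]
                return off                   # the "address" of the allocation
        raise MemoryError("no free block large enough")

    def free(self, offset):
        for i, blk in enumerate(self.blocks):
            if blk[0] == offset and not blk[2]:
                blk[2] = True
                # merge with a free right neighbour, then a free left neighbour
                if i + 1 < len(self.blocks) and self.blocks[i + 1][2]:
                    blk[1] += self.blocks.pop(i + 1)[1]
                if i > 0 and self.blocks[i - 1][2]:
                    self.blocks[i - 1][1] += self.blocks.pop(i)[1]
                return
        raise ValueError("unknown or already-free offset")

heap = FirstFitAllocator(1024)
a = heap.alloc(100)    # -> 0
b = heap.alloc(200)    # -> 100
heap.free(a)           # offset 0 becomes a free block again
```

A real allocator keeps this bookkeeping in headers inside the managed region itself rather than in a separate list, and schemes like buddy allocation replace the linear search with per-size free lists.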
Is either the iPad or iPhone capable of OpenCL? With the push towards multimedia-enabled mobile devices this seems like a logical way to boost performance on these platforms, while keeping general purpose software power efficient. I've been interested in the iPad hardware as a development platform for UI and data display / entry usage. But I am curious how much processing capability the device itself is capable of. OpenCL would make it a JUICY hardware platform to develop on, even though the licensing seems like it kinda stinks.
OpenCL is not yet part of iOS. However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0. 2.0 lets you create your own programmable shaders to run on the GPU, which would let you do high-performance parallel calculations. While not as elegant as OpenCL, you might be able to solve many of the same problems. Additionally, iOS 4.0 brought with it the [Accelerate framework](http://developer.apple.com/iphone/library/documentation/Accelerate/Reference/AccelerateFWRef/index.html) which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202 - The Accelerate framework for iPhone OS in the [WWDC 2010 videos](http://developer.apple.com/videos/wwdc/2010/) for more on this.
Can I do type introspection with trait objects and then downcast it? I have a collection of `Trait`, a function that iterates over it and does something, and then I would like to check the implementor type and if it is of type `Foo` then downcast it and call some Foo method. Basically, something similar to Go's [type-switch](https://golang.org/doc/effective_go.html#type_switch) and [interface conversion](https://golang.org/doc/effective_go.html#interface_conversions). Searching around I found about the [Any trait](http://doc.rust-lang.org/core/any/index.html) but it can only be implemented on `'static` types. To help demonstrate what I want: ``` let vec: Vec<Box<Trait>> = // for e in vec.iter() { e.trait_method(); // if typeof e == Foo { // let f = e as Foo; // f.foo_method(); //} } ```
As you have noticed, downcasting only works with `Any` trait, and yes, it only supports `'static` data. You can find a recent discussion on why it is so [here](http://www.reddit.com/r/rust/comments/2r25rf/why_does_any_only_work_for_t_static/). Basically, implementing reflection for references of arbitrary lifetimes is difficult. It is also impossible (as of now, at least) to combine `Any` with your custom trait easily. However, a [macro library](https://github.com/chris-morgan/mopa) for automatic implementation of `Any` for your trait has recently been created. You can also find some discussion on it [here](http://www.reddit.com/r/rust/comments/2ricdw/mopa_my_own_personal_any_a_macro_to_implement_all/).
"Cannot move out of borrowed content" while summing command line arguments It's my first Rust program and it seems I've already encountered the dreaded borrow checker. :) The program should read the arguments passed in the command line, sum them and return the result. I have troubles parsing the arguments into integers. ``` use std::env; fn main() { let args: Vec<String> = env::args().collect(); let sum_args: i32 = args .iter() .skip(1) .fold(0, |a, &b| a + b.parse::<i32>().ok().expect("Not an i32!")); println!("{:?}", sum_args.to_string()); } ``` Which fails with: ``` error[E0507]: cannot move out of borrowed content --> src/main.rs:9:22 | 9 | .fold(0, |a, &b| a + b.parse::<i32>().ok().expect("Not an i32!")); | ^- | || | |hint: to prevent move, use `ref b` or `ref mut b` | cannot move out of borrowed content ``` How should I proceed?
`args` is a `Vec<String>`, and the `iter` iterator returns references to strings (`&String`). One trick to see the types is to attempt to assign a value to the unit type `()`: ``` let () = args.iter().next(); ``` Which has an error that shows the type: ``` error[E0308]: mismatched types --> src/main.rs:5:13 | 5 | let () = args.iter().next(); | ^^ expected enum `std::option::Option`, found () | = note: expected type `std::option::Option<&std::string::String>` = note: found type `()` ``` In your closure, you are attempting to automatically dereference (`|a, &b|`) the second value. If you *were* able to dereference it, then the `String` would be *moved* out of the vector, which would leave the memory in the vector in an indeterminate state! If we tried to use the vector after this, we could cause a segfault, one of the things Rust is designed to help prevent. The easiest thing is to not dereference it at all (leaving `b` as a `&String`): ``` use std::env; fn main() { let args: Vec<String> = env::args().collect(); let sum_args: i32 = args .iter() .skip(1) .fold(0, |a, b| a + b.parse::<i32>().expect("Not an i32!")); println!("{:?}", sum_args.to_string()); } ``` Some additional minor points... You don't have to specify the vector elements type when you `collect`: ``` let args: Vec<_> = env::args().collect(); ``` You don't need to create a string to print out a number: ``` println!("{}", sum_args); ``` And I'd probably have written it as ``` use std::env; fn main() { let args: Vec<String> = env::args().collect(); let sum_args: i32 = args .iter() .skip(1) .map(|n| n.parse::<i32>().expect("Not an i32!")) .sum(); println!("{}", sum_args); } ``` --- **Overly clever solution warning** If you had to sum up a bunch of iterators of potentially-failed numbers, you could create a type that implements `FromIterator` and doesn't allocate any memory: ``` use std::env; use std::iter::{FromIterator, Sum}; struct SumCollector<T>(T); impl<T> FromIterator<T> for SumCollector<T> where T: Sum { fn from_iter<I>(iter: I) -> Self where I: IntoIterator<Item = T> { SumCollector(iter.into_iter().sum()) } } fn main() { let sum: Result<SumCollector<i32>, _> = env::args().skip(1).map(|v| v.parse()).collect(); let sum = sum.expect("Something was not an i32!"); println!("{}", sum.0); } ``` --- Rust 1.16 should even support this out-of-the-box: ``` use std::env; fn main() { let sum: Result<_, _> = env::args().skip(1).map(|v| v.parse::<i32>()).sum(); let sum: i32 = sum.expect("Something was not an i32!"); println!("{}", sum); } ```
How to implement digital signature with my existing web project I'm working on a project where the user needs to apply a digital signature to a document. I checked on Google and know about Sinadura, which is a desktop application, but I need to invoke it from my web application.

I installed Alfresco Community Edition on a Linux server (<https://www.alfresco.com/thank-you/thank-you-downloading-alfresco-community-edition>) and followed the instructions in the GitHub link below.

<https://github.com/zylklab/alfresco-sinadura>

I've implemented it successfully with the above instructions. But Alfresco is a big project and comes with several other features too. I don't need those; I just need to implement the digital signature part in my own web application, similar to Alfresco.

How can I implement the digital signature part in my existing project? Can anyone please give a suggestion?
The security restrictions of browsers do not allow JavaScript access to the system certificate keystore or smart cards. Formerly Java applets could be used, but with the latest browser updates it is no longer possible.

Current solutions for digital signature in browsers require the installation of desktop software on the user's computer. The operating process is as follows:

**Installation**: The user installs the desktop software on his computer. The software installs a self-signed certificate and includes an embedded web server that listens on a local port and runs as a service.

1. The web application requests a signature from the local software using a secure web connection. For example `https://localhost:1234/sign?doc=xxxx`. The browser waits for the result
2. The local application receives the document, asks the user to select the certificate or insert the card, and makes the signature. As it is a local application, there are no security restrictions
3. The web application receives the result. It can query the local application through a REST service or open a websocket.

The concept is simple (**a web application that requests the signature from a local application**), but the construction of an application of this type is quite complex because you have to take into account many other factors:

- Software installation and distribution
- Security and Encryption
- Digital signature formats: XAdES, CAdES, PAdES etc. They can be implemented in the application, or you can use a signature service in 3 phases, where the documents are on the server and a single hash is signed locally

So I recommend using an existing solution:

- [@firma](https://administracionelectronica.gob.es/ctt/clienteafirma/descargas#.Wu_1TIhuaM8) + [Autofirma](http://firmaelectronica.gob.es/Home/Descargas.html): Open-source solution promoted and used by the public administration in Spain
- [SD-DSS](https://github.com/esig/dss) + [NexU](http://nowina.lu/news/nexu/) (Nowina): Open-source solution promoted by the European Commission. Check the demo [here](https://ec.europa.eu/cefdigital/DSS/webapp-demo/sign-a-document)
- [chrome token signing](https://github.com/open-eid/chrome-token-signing/wiki): Chrome and Firefox extension for signing with your eID on the web, developed for the Estonian government

Sinadura is also an open-source initiative, and from what I've seen it works in a similar way, but I do not know if it has important references and I have not found the API.
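To make the flow above more tangible, here is a minimal sketch (Python, purely illustrative and without the TLS the real tools use) of the "local signing service" idea: a process on the user's machine answering a hypothetical `/sign?doc=...` endpoint. The port, the endpoint and the `sign_with_local_certificate` placeholder are assumptions for illustration only; real products such as Autofirma, NexU or chrome-token-signing also handle certificate stores, smart cards, signature formats and browser trust.

```
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import base64

def sign_with_local_certificate(data: bytes) -> bytes:
    # Placeholder: a real tool would use the OS keystore or a smart card here.
    return b"signature-of:" + data

class SignHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/sign":
            self.send_error(404)
            return
        doc = parse_qs(url.query).get("doc", [""])[0].encode()
        signature = base64.b64encode(sign_with_local_certificate(doc))
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", "*")  # let the web page read the reply
        self.end_headers()
        self.wfile.write(signature)

if __name__ == "__main__":
    # Step 1 of the flow above: the web application calls this local endpoint,
    # e.g. http://localhost:1234/sign?doc=..., and waits for the signature.
    HTTPServer(("localhost", 1234), SignHandler).serve_forever()
```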
How to set Panel.ZIndex or other attached properties via DataTrigger? I am trying to have a circular overlay come to the top when a certain view model enters an offline state. So it becomes partially transparent and on top of other elements in the Grid. DataTriggers in the style have worked for everything so far, but I cannot set Panel.ZIndex. There is no error in build or run, but the property is not set (I assume because it's an attached property?) ``` <Ellipse Fill="DarkGray" Panel.ZIndex="-10" Width="50" Height="50"> <Ellipse.Style TargetType="Ellipse"> <Style> <Style.Triggers> <DataTrigger Binding="{Binding Status}" Value="Offline"> <Setter Property="Opacity" Value=".6" /> <Setter Property="Panel.ZIndex" Value="10" /> </DataTrigger> </Style.Triggers> </Style> </Ellipse.Style> </Ellipse> ```
You have the syntax correct, however the problem is that you are defining `Panel.ZIndex` in the `<Ellipse>` tag, and properties set in the tag itself will take precedence over any triggered values. To fix it, simply set `Panel.ZIndex` in your style instead of the `Ellipse` tag ``` <Ellipse Fill="DarkGray" Width="50" Height="50"> <Ellipse.Style TargetType="Ellipse"> <Style> <Setter Property="Panel.ZIndex" Value="-10" /> <Style.Triggers> <DataTrigger Binding="{Binding Status}" Value="Offline"> <Setter Property="Opacity" Value=".6" /> <Setter Property="Panel.ZIndex" Value="10" /> </DataTrigger> </Style.Triggers> </Style> </Ellipse.Style> </Ellipse> ``` See MSDN's article on [Dependency Property Precedence](http://msdn.microsoft.com/en-us/library/ms743230.aspx) for more info
Is there an API for running Visual Studio Unit Tests programmatically? Is there an API for running Visual Studio Unit Tests programmatically? Running MSTests.exe with Process.Start() does not work in the current scenario. What I'm looking for is something like the NUnit SimpleTestRunner. Any ideas? /Erik
You're correct in that there's no public API for the mstest framework. I wrote a manual replacement for mstest one day to see how hard it was, and it's not as simple as it looks (particularly if you want to take advantage of more than one CPU core), so beware of going down this path. Personally I've always just run `mstest.exe` programmatically and then parsed the resulting `.trx` XML file. Are there any particular reasons why you can't use `Process.Start` to run it?

P.S. Some of the strange behaviours of mstest.exe are solved if you pass the `/noisolation` command line parameter - give that a go if you feel so inclined :-)

---

Update: Erik mentions he wants to run the test API in the current thread so he can set the thread culture for globalization issues. If you run a unit test under the debugger, you'll notice that mstest creates a bunch of threads, and runs all your tests in different threads, so this isn't likely to work even if you could access the API. What I'd suggest doing is this:

1. From your test "runner" application, set an environment variable
2. Run mstest pointing it at the specific tests
3. Add a `[ClassInitialize]` (or `[TestInitialize]`) method which reads this environment variable and sets the culture
4. Profit!
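To illustrate the "run `mstest.exe` and parse the `.trx`" approach, here is a rough sketch (Python, just to keep the XML handling short). The `/testcontainer` and `/resultsfile` switches are standard mstest options, but the result-file element names and namespace below are written from memory and should be verified against a `.trx` produced by your mstest version:

```
import subprocess
import xml.etree.ElementTree as ET

# Run the tests; mstest returns a non-zero exit code when tests fail.
subprocess.run(
    ["mstest.exe", "/testcontainer:MyTests.dll", "/resultsfile:results.trx"],
    check=False,
)

# Parse the results file (namespace/element names are assumptions - verify them).
ns = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}
root = ET.parse("results.trx").getroot()
for result in root.findall(".//t:UnitTestResult", ns):
    print(result.get("testName"), result.get("outcome"))
```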
How to launch activity only once when app is opened for first time after an update? What would be the most logical way to go about launching an activity when an app is opened for the first time after an update. I understand that a sharedprefs would be the easiest way for this, but sharedprefs are persistent across application updates so it wouldn't seem that that option would work. Any ideas?
Make the shared pref store the version number of the app: if the version is different, update it and then launch your Activity.

EDIT: this is what I do in my what's new check. It loads up the app info, fetches the version number, and if it has changed it pops open the Activity.

```
public static boolean show(Context c) {
    ApplicationInfo ai = c.getApplicationInfo();
    String basis = ai.loadLabel(c.getPackageManager()).toString();
    try {
        PackageInfo pi = c.getPackageManager().getPackageInfo(c.getPackageName(), PackageManager.GET_META_DATA);
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(c);
        String lastVersion = prefs.getString("lastversion", null);
        SharedPreferences.Editor edit = prefs.edit();
        if(lastVersion == null){
            // save the first installed version so we can check and see when it got installed
            edit.putString("firstversion", String.valueOf(pi.versionCode));
            edit.commit();
        }
        if(lastVersion == null || pi.versionCode > Integer.parseInt(lastVersion)){
            edit.putString("lastversion", String.valueOf(pi.versionCode));
            edit.commit();
            // show it
            Intent i = new Intent(c, WhatsNew.class);
            c.startActivity(i);
            return true;
        }
    } catch (Exception e) {
        android.util.Log.v("WhatsNew", "Exception checking for release notes for [" + basis + "]:" + e.getMessage(), e);
    }
    return false;
}
```
AS3 Dynamic class that only accepts a given type Is it possible to make a `dynamic` class in AS3 only accept dynamically created properties if they're a given type? For example, I may only want Sprites to be allowed. So take this quick example class:

```
public dynamic class Test extends Object{}
```

---

And a few quick lines to get an idea of what I mean:

```
var test:Test = new Test();

test.something = 32; // error
test.something = "party hats"; // error
test.something = new Sprite(); // works
```

Maybe using the proxy class/namespace there's a way to manipulate whatever is run when creating variables dynamically?
The Test class: ``` package classes { import flash.display.Sprite; import flash.utils.Proxy; import flash.utils.flash_proxy; public dynamic class Test extends Proxy { private var _properties : Object; public function Test() { _properties = new Object(); } override flash_proxy function getProperty(name : *) : * { return _properties[name]; } override flash_proxy function setProperty(name:*, value:*):void { if (!(value is Sprite)) throw new Error("No Sprite given: " + value); _properties[name] = value; } } } ``` The App: ``` package classes { import flash.display.Sprite; public class TestTest extends Sprite { public function TestTest() { var test:Test = new Test(); try { test.something = 32; // error } catch (e : Error) { trace (e); } try { test.something = new Sprite(); // works } catch (e : Error) { trace (e); } trace (test.something); } } } ``` The output: ``` Error: No Sprite given: 32 [object Sprite] ```
Boost Binary Serialization Problem I have a problem using boost serialization using binary archives. It works when using a file stream but I want to store it in my local variable and ultimately save/load it to/from berkeley db. When executing the program I get a *boost::archive::archive\_exception*: 'stream error' when instantiating the *binary\_iarchive*. ``` #include <sys/time.h> #include <string> #include <boost/serialization/serialization.hpp> #include <boost/archive/binary_oarchive.hpp> #include <boost/archive/binary_iarchive.hpp> #include <boost/archive/text_oarchive.hpp> #include <boost/archive/text_iarchive.hpp> #include <fstream> #include <sstream> namespace boost { namespace serialization { template<class Archive> void serialize(Archive & ar, timeval & t, const unsigned int version) { ar & t.tv_sec; ar & t.tv_usec; } }//namespace serialization }//namespace boost int main(int, char**) { timeval t1; gettimeofday(&t1, NULL); char buf[256]; std::stringstream os(std::ios_base::binary| std::ios_base::out| std::ios_base::in); { boost::archive::binary_oarchive oa(os, boost::archive::no_header); oa << t1; } memcpy(buf, os.str().data(), os.str().length()); if(memcmp(buf, os.str().data(), os.str().length()) != 0) printf("memcpy error\n"); timeval t2; { std::stringstream is(buf, std::ios_base::binary| std::ios_base::out| std::ios_base::in); boost::archive::binary_iarchive ia(is, boost::archive::no_header); ia >> t2; } printf("Old(%d.%d) vs New(%d.%d)\n", t1.tv_sec, t1.tv_usec, t2.tv_sec, t2.tv_usec); return 0; } ``` It works when initializing *is* with *os.str()*, so I guess my way of copying the data to my buffer or to *is* is wrong.
Well, for one thing .data() doesn't have a terminal \0. It's not a c-string. I didn't even realize stringstream had a char\* constructor (who in their right mind uses them anymore?) but apparently it does and I'd bet it expects \0. Why are you trying to do it that way anyway? You're much better off working in C++ strings. Initialize is with os.str(). Edit: binary data contains lots of \0 characters and the std::string(char\*) constructor stops at the first one. Your deserialization routine will then inevitably try to read past the end of the stream (because it isn't complete). Use the iterator constructor for std::string when you pass buf into the stringstream. ``` std::stringstream is(std::string(buf, buf+os.str().length()), flags); ```
How to completely wipe rubygems along with rails etc Ok, so I decided I'd be cool and try to use Rails3 that's in beta. Then, things were getting hard to manage so I got rvm. I installed ruby 1.9.2-head in rvm and things were working, and then a computer restart later rails wouldn't start up. So I figured I'd just try running the system ruby and start rails in it. same error. Then, I uninstalled rails3 and got rails: no such file or directory type errors.. So now I'm royally screwed because rails2 is still installed but will not uninstall because of invisible dependencies, along with a lot of other random gems. How do I completely clear out all ruby gems and such so I can start anew?
I've recently had to do just this. I had built up a lot of cruft with my system-installed ruby and gems and wanted to clean all that out and move everything over to run under rvm for various projects.

## 1. Clean up old and busted

First thing I did, before messing with rvm (or run `rvm system` to get back to the system ruby), was to [remove all my gems](http://geekystuff.net/2009/1/14/remove-all-ruby-gems):

```
gem list | cut -d" " -f1 | xargs gem uninstall -aIx
```

WARNING: this will uninstall all ruby gems. If you installed as root you may want to switch to root and run this.

## 2. Install new hotness

Now you can run `gem list` to see what is left. Time to install rvm; I recommend blowing away your current install and reinstalling fresh:

```
rm -rf $HOME/.rvm
bash < <( curl http://rvm.beginrescueend.com/releases/rvm-install-head )
```

Now the real trick is to use gemsets to install rails 3, and this is easy if you follow [Waynee Seguin's gist](http://gist.github.com/296055):

```
rvm update --head
rvm install 1.8.7
rvm --create use 1.8.7@rails3
curl -L http://rvm.beginrescueend.com/gemsets/rails3b3.gems -o rails3b3.gems
rvm gemset import rails3b3.gems
```

One difference is I use 1.8.7 since I have had issues with 1.9.2-head and RSpec, but 1.8.7 has been smooth.
Make Pandas figure out how many rows to skip in pd.read\_excel I'm trying to automate reading hundreds of Excel files into a single dataframe. Thankfully the layout of the Excel files is fairly constant. They all have the same header (the casing of the header may vary) and then of course the same number of columns, and the data I want to read is always stored in the first spreadsheet. However, in some files a number of rows have been skipped before the actual data begins. There may or may not be comments and such in the rows before the actual data. For instance, in some files the header is in row 3 and then the data starts in row 4 and down. I would like `pandas` to figure out on its own how many rows to skip. Currently I use a somewhat complicated solution... I first read the file into a dataframe, check if the header is correct and, if not, search to find the row containing the header, and then re-read the file now knowing how many rows to skip.

```
def find_header_row(df, my_header):
    """Find the row containing the header."""
    for idx, row in df.iterrows():
        row_header = [str(t).lower() for t in row]
        if len(set(my_header) - set(row_header)) == 0:
            return idx + 1
    raise Exception("Cant find header row!")

my_header = ['col_1', 'col_2',..., 'col_n']

df = pd.read_excel('my_file.xlsx')

# Make columns lower case (case may vary)
df.columns = [t.lower() for t in df.columns]

# Check if the header of the dataframe matches my_header
if len(set(my_header) - set(df.columns)) != 0:
    # If not... use my function to find the row containing the header
    n_rows_to_skip = find_header_row(df, my_header)
    # Re-read the dataframe, skipping the right number of rows
    df = pd.read_excel('my_file.xlsx', skiprows=n_rows_to_skip)
```

Since I know what the header row looks like, is there a way to let `pandas` figure out on its own where the data begins? Or can anyone think of a better solution?
Let us know if this works for you:

```
import pandas as pd

df = pd.read_excel("unamed1.xlsx")
df

  Unnamed: 0 Unnamed: 1     Unnamed: 2
0        NaN   bad row1  badddd row111 NaN
1      baaaa        NaN            NaN
2        NaN        NaN            NaN
3         id       name            age
4          1      Roger             17
5          2       Rosa             23
6          3        Rob             31
7          4       Ives             15

first_row = (df.count(axis = 1) >= df.shape[1]).idxmax()
df.columns = df.loc[first_row]
df = df.loc[first_row+1:]
df

3  id   name  age
4   1  Roger   17
5   2   Rosa   23
6   3    Rob   31
7   4   Ives   15
```

The boolean series `df.count(axis = 1) >= df.shape[1]` is `True` for the first row in which every column is filled in, so `idxmax()` gives the position of the real header row; everything above it is dropped and that row becomes the column index.
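If you need to apply this to hundreds of files (as in the question), the same trick can be wrapped in a small helper; this is only a sketch, and the `data/*.xlsx` glob pattern is a placeholder:

```
import glob
import pandas as pd

def read_with_header_detection(path):
    # Read without assuming a header, then locate the first fully populated row
    raw = pd.read_excel(path, header=None)
    first_row = (raw.count(axis=1) >= raw.shape[1]).idxmax()
    df = raw.loc[first_row + 1:].copy()
    df.columns = [str(c).lower() for c in raw.loc[first_row]]
    return df

# Concatenate every workbook in a folder into one dataframe
frames = [read_with_header_detection(f) for f in glob.glob("data/*.xlsx")]
combined = pd.concat(frames, ignore_index=True)
```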
How to share group\_vars between different inventories in Ansible? The Ansible best practices documentation [recommends](http://docs.ansible.com/ansible/playbooks_best_practices.html#alternative-directory-layout) to separate inventories: ``` inventories/ production/ hosts.ini # inventory file for production servers group_vars/ group1 # here we assign variables to particular groups group2 # "" host_vars/ hostname1 # if systems need specific variables, put them here hostname2 # "" staging/ hosts.ini # inventory file for staging environment group_vars/ group1 # here we assign variables to particular groups group2 # "" host_vars/ stagehost1 # if systems need specific variables, put them here stagehost2 # "" ``` My staging and production environments are structured in the same way. I have in both environments the same groups. And it turns out that I have also the same group\_vars for the same groups. This means redundancy I would like to wipe out. Is there a way to share some group\_vars between different inventories? As a work-around I started to put shared group\_vars into the roles. ``` my_var: my_group: - { var1: 1, var2: 2 } ``` This makes it possible to iterate over some vars by intersecting the groups of a host with the defined var: ``` with_items: "{{group_names | intersect(my_var.keys())}}" ``` But this is a bit complicate to understand and I think roles should not know anything about groups. I would like to separate most of the inventories but share some of the group\_vars in an easy to understand way. Is it possible to merge global group\_vars with inventory specific group\_vars?
I scrapped the idea of following Ansible's recommendation. Now one year later, I am convinced that Ansible's recommendation is not useful for my requirements. Instead I think it is important to share as much as possible among different stages. Now I put all inventories in the same directory: ``` production.ini reference.ini ``` And I take care that each inventory defines a group including all hosts with the name of the stage. The file `production.ini` has the group `production`: ``` [production:children] all_production_hosts ``` And the file `reference.ini` has the group `reference`: ``` [reference:children] all_reference_hosts ``` I have just one `group_vars` directory in which I define a file for every staging group: ``` group_vars/production.yml group_vars/reference.yml ``` And each file defines a `stage` variable. The file `production.yml` defines this: ``` --- stage: production ``` And the file `reference.yml` defines that: ``` --- stage: reference ``` This makes it possible to share everything else between production and reference. But the hosts are completely different. By using the right inventory the playbook runs either on production or on reference hosts: ``` ansible-playbook -i production.ini site.yml ansible-playbook -i reference.ini site.yml ``` If it is necessary for the `site.yml` or the roles to behave slightly different in the production and reference environment, they can use conditions using the `stage` variable. But I try to avoid even that. Because it is better to move all differences into equivalent definitions in the staging files `production.yml` and `reference.yml`. For example, if the `group_vars/all.yml` defines some users: ``` users: - alice - bob - mallory ``` And I want to create the users in both environments, but I want to exclude `mallory` from the production environment, I can define a new group called `effective_users`. In the `reference.yml` it is identical to the `users` list: ``` effective_users: >- {{ users }} ``` But in the `production.yml` I can exclude `mallory`: ``` effective_users: >- {{ users | difference(['mallory']) }} ``` The playbook or the roles do not need to distinguish between the two stages, they can simply use the group `effective_users`. The group contains automatically the right list of users simply by selecting the inventory.
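For reference, the flat layout described above ends up looking roughly like this (`site.yml` being the playbook from the examples):

```
production.ini
reference.ini
site.yml
group_vars/
    all.yml
    production.yml
    reference.yml
```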
Remove Angularjs-Chart border and reduce chart arc thickness
How do I remove the border line of an AngularJS doughnut chart and reduce the thickness of the arc?

HTML

```
<canvas id="doughnut" class="chart chart-doughnut" chart-colors="preColors"
    chart-dataset-override="datasetOverridePres" chart-dataset-options="preLegend"
    chart-data="preData" chart-labels="preName">
</canvas>
```

JS

```
$scope.preColors = ['#febe05','#f3f3f3'];
$scope.preData = [343,78];
```

Thank you
I changed your code a bit (I think `chart-dataset-options` is incorrect):

```
<canvas id="doughnut" class="chart chart-doughnut" chart-colors="preColors"
  chart-options="preLegend" chart-data="preData" chart-labels="preName"
  chart-dataset-override="preOverride">
</canvas>
```

And the JS:

```
$scope.preColors = ['#febe05', '#f3f3f3'];
$scope.preData = [343, 78];
$scope.preName = ['a', 'b'];
$scope.preLegend = {
  cutoutPercentage: 80, //arc thickness
  elements: {
    arc: {
      borderWidth: 0 // border line
    }
  }
}
$scope.preOverride = [{}];
```

Result:

[![enter image description here](https://i.stack.imgur.com/GKTFo.jpg)](https://i.stack.imgur.com/GKTFo.jpg)

Live demo: [SkinnyDoughnout](http://codepen.io/k3no/pen/MbBZyQ?editors=0010)
Why use a recursive function rather than `while true do` in F#?
Whilst watching a [Pluralsight course](https://app.pluralsight.com/player?course=accessing-data-fsharp-type-providers&author=tomas-petricek&name=accessing-data-fsharp-type-providers-m3&clip=4&mode=live) by Tomas Petricek (who I assume knows what he is talking about), I saw code like the following...

```
let echo = MailboxProcessor<string>.Start(fun inbox -> async {
    while true do
        let! msg = inbox.Receive()
        printfn "Hello %s" msg
})
```

Ignore the fact that this was to demo agents, I'm interested in the inner function, which uses `while true do` to keep it running indefinitely. Whilst looking around for other examples of agents, I saw that many other people use code like this...

```
let counter = MailboxProcessor.Start(fun inbox ->
    let rec loop n =
        async { do printfn "n = %d, waiting..." n
                let! msg = inbox.Receive()
                return! loop(n+msg) }
    loop 0)
```

Code copied from [Wikibooks](https://en.wikibooks.org/wiki/F_Sharp_Programming/MailboxProcessor). The inner function here is recursive, and is started off by calling it with a base value before the main function declaration ends.

Now I realise that in the second case recursion is a handy way of passing a private value to the inner function without having to use a mutable local value, but is there any other reason to use recursion here rather than `while true do`? Would there be any benefit in writing the first code snippet using recursion? I find the non-recursive version much easier to read (subjective opinion of course), which seems like a good reason to use that whenever possible.
Talking about `MailboxProcessor` specifically, I think the choice depends on what exactly you are doing. In general, you can always use a `while` loop or recursion. Recursion makes it easier to use immutable state, and I find a `while` loop nicer if you have no state or if you use mutable state. Using mutable state is often quite useful, because `MailboxProcessor` protects you from concurrent access and you can keep the state local, so things like `Dictionary` (efficient hash table) are often useful.

In general:

- If you don't need any state, I would prefer `while`
- If you have mutable state (like `Dictionary` or `ResizeArray`), I'd go for `while` (see the sketch below)
- If you have some immutable state (like a functional list or an integer), then recursion is nicer
- If your logic switches between multiple modes of operation then you can write it as two mutually recursive functions, which is not doable nicely with loops.
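For illustration, a minimal sketch of the mutable-state case (not from the course or the question; the message shape and names are made up): a `while` loop plus a `Dictionary` reads naturally because the agent body is the only code touching the dictionary.

```
open System.Collections.Generic

let wordCounter = MailboxProcessor<string * AsyncReplyChannel<int>>.Start(fun inbox ->
    async {
        // Local mutable state is safe here: only this agent's body ever touches it.
        let counts = Dictionary<string, int>()
        while true do
            let! (word, reply) = inbox.Receive()
            let found, n = counts.TryGetValue word
            let total = if found then n + 1 else 1
            counts.[word] <- total
            reply.Reply total
    })

// Usage: post a word and wait for its running count.
let count = wordCounter.PostAndReply(fun reply -> ("hello", reply))
```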
Why are the terms "automatic" and "dynamic" preferred over the terms "stack" and "heap" in C++ memory management? Related to a lot of questions and answers on SO, I've learned that it's better to refer to objects whose lifetime is managed as residing in automatic storage rather than the stack. Also, dynamically allocated objects shouldn't be referred to as residing on the heap, but in dynamic storage. I get that there is automatic, dynamic and static storage, but never really understood the difference between automatic-stack and dynamic-heap. Why are the former preferred? **I'm not asking what stack/heap mean or how memory management works. I'm asking why the terms automatic/dynamic storage are preferred over the terms stack/heap.**
**Automatic** tells me something about the lifetime of an object: specifically that it is bound *automatically* to the enclosing scope, and will be destroyed *automatically* when that scope exits. **Dynamic** tells me that the lifetime of an object is not controlled *automatically* by the compiler, but is under my direct control. **Stack** is an overloaded name for a type of container, and for the related popular instruction pointer protocol supported by common `call` and `ret` instructions. It doesn't tell me anything about the lifetime of an object, except through a historical association to object lifetimes in C, due to popular stack frame conventions. Note also that in some implementations, thread-local storage is *on the stack* of a thread, but is not limited to the scope of any single function. **Heap** is again an overloaded name, indicating either a type of sorted container or a free-store management system. This is *not the only* free store available on all systems, and nor does it tell me anything concrete about the lifetime of an object allocated with `new`.
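A minimal illustration of the two lifetimes (the names are made up for the example):

```
#include <memory>

void example() {
    int automatic_obj = 42;               // automatic storage: destroyed when this scope exits

    int* dynamic_obj = new int(42);       // dynamic storage: lifetime is under my direct control...
    delete dynamic_obj;                   // ...so I must end it explicitly

    auto managed = std::make_unique<int>(42); // dynamic storage, but its lifetime is tied to
                                              // the automatic 'managed' handle
}                                             // 'automatic_obj' and 'managed' are cleaned up here
```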
Dynamic mapping for destinations in grunt.js I have a project with several sub folders that contain JavaScript files I want to concatenate. what would be the right way to configure them? eg. source: /modules/$modulename/js/\*.js (several files) dest: /modules/$modulename/js/compiled.js So what I want to do is to compile js-files of an unknown/unconfigured count of subfolders ($modulename) into one file per subfolder. Is this possible? --- The following function (built after hereandnow78's instructions) does the job: ``` grunt.registerTask('preparemodulejs', 'iterates over all module directories and compiles modules js files', function() { // read all subdirectories from your modules folder grunt.file.expand('./modules/*').forEach(function(dir){ // get the current concat config var concat = grunt.config.get('concat') || {}; // set the config for this modulename-directory concat[dir] = { src: [dir + '/js/*.js', '!' + dir + '/js/compiled.js'], dest: dir + '/js/compiled.js' }; // save the new concat config grunt.config.set('concat', concat); }); }); ``` after that i put preparemodulejs before the concat job in my default configuration.
You will probably need to code your own task, where you iterate over your subfolders, and dynamically append to your concat configuration.

```
grunt.registerTask("your-task-name", "your description", function() {

    // read all subdirectories from your modules folder
    grunt.file.expand("./modules/*").forEach(function (dir) {

        // get the current concat config
        var concat = grunt.config.get('concat') || {};

        // set the config for this modulename-directory
        concat[dir] = {
            src: ['/modules/' + dir + '/js/*.js', '!/modules/' + dir + '/js/compiled.js'],
            dest: '/modules/' + dir + '/js/compiled.js'
        };

        // save the new concat configuration
        grunt.config.set('concat', concat);
    });

    // when finished run the concatenations
    grunt.task.run('concat');
});
```

run this with:

```
$ grunt your-task-name
```

This code is untested, but I think it should do your job.

HINT: you can put this code into an external file and include it in your gruntfile if you want to keep your gruntfile small, e.g. put this into a file inside a tasks directory:

```
module.exports = function(grunt) {
    grunt.registerTask("your-task-name", "your description", function() {
        ...
    });
};
```

and load it in your gruntfile:

```
grunt.loadTasks("./tasks");
```
How to create layout with 6 buttons like windows tiles I'm trying to create a layout with 6 buttons that automatically adapt to the screen size as the tiles of windows phone. In the code I create dynamically the 6 button, 2 for line but the button should fit the size of the screen filling the latter. how can I proceed? ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > <LinearLayout android:layout_width="match_parent" android:layout_height="0dip" android:orientation="horizontal" android:weightSum="2" > <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" android:background="@drawable/conv_up" /> <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" android:background="@drawable/conv_up" /> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="0dip" android:orientation="horizontal" android:weightSum="2" > <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" android:background="@drawable/conv_up" /> <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" android:background="@drawable/conv_up" /> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="0dip" android:orientation="horizontal" android:weightSum="2" > <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" android:background="@drawable/conv_up" /> <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" android:background="@drawable/conv_up" /> </LinearLayout> ```
I'd use a vertical `LinearLayout` with three rows of same weight as children, each row being a horizontal `LinearLayout` having two children of same weights, which will make sure the full area is filled. For six buttons performance shouldn't be an issue. If performance is a concern, you can make the rows as `RelativeLayout`s and use a strut to split in half and position the two children based on that. When I say a *strut*, I mean this: ``` <View android:id="@+id/strut" android:layout_width="0dp" android:layout_height="0dp" android:layout_centerHorizontal="true"/> ``` **Update:** Since you're trying the `LinearLayout`s, here's how you can deal with the heights and widths: The parent `LinearLayout` can have: ``` android:layout_width="match_parent" android:layout_height="match_parent" ``` The three `LinearLayout` children will have: ``` android:layout_width="match_parent" android:layout_height="0dip" ``` The `Button`s will have: ``` android:layout_width="0dip" android:layout_height="match_parent" ``` As you can notice, we have `0dip` for the property that weight is applied on (either on height if parent is vertical oriented, or width if parent is horizontal oriented), which will need to grow to fill in the space. Here's the full XML (buttons don't include drawables, so feel free to add yours): ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > <LinearLayout android:layout_width="match_parent" android:layout_height="0dip" android:orientation="horizontal" android:layout_weight="1" > <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" /> <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1"/> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="0dip" android:orientation="horizontal" android:layout_weight="1" > <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" /> <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1"/> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="0dip" android:orientation="horizontal" android:layout_weight="1" > <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" /> <Button android:layout_width="0dip" android:layout_height="match_parent" android:layout_weight="1" /> </LinearLayout> </LinearLayout> ``` And the result: ![enter image description here](https://i.stack.imgur.com/WkiTf.jpg)
combine load and resize plain JavaScript
I combine the load and the resize function in one.

```
$(window).on("resize", function () {
    if (window.innerWidth < 700) {
        alert('hello');
    }
}).resize();
```

But I am looking for code in plain JavaScript (without jQuery). How can I create this?
You can do this by adding an event listener and calling a function on resize:

```
window.addEventListener("resize", onResizeFunction);

function onResizeFunction (e){
  //do whatever you want to do on resize event
}
```

The same goes for the onLoad event:

```
window.addEventListener("load", onLoadFunction);

function onLoadFunction(e){
  //do the magic you want
}
```

If you want to trigger the resize function when the window loads:

```
window.addEventListener("load", onLoadFunction);

function onLoadFunction(e){
 //do the magic you want
 onResizeFunction();// if you want to trigger resize function immediately, call it
 window.addEventListener("resize", onResizeFunction);
}
function onResizeFunction (e){
  //do whatever you want to do on resize event
}
```
How to catch exceptions thrown in callbacks passed to jQuery? I'd like to catch exceptions thrown from callbacks passed to jQuery (either to event handlers like `click`, or to jqXHR methods such as `then` or `always`). I've identified two options: - `window.onerror` handler - this is only a partial solution because it isn't supported on Android which is one of my target platforms - handling exceptions within each individual callback - not DRY at all! The only other thing I can think of is overriding jQuery methods but that can result in problems any time I upgrade jQuery. For AJAX handlers, I could possibly use $.ajaxSetup (per the answer to [Exceptions thrown in jQuery AJAX callbacks swallowed?](https://stackoverflow.com/q/2721983/221619)) but I'm not sure that will allow me to catch everything. Are there any other options?
You can wrap each callback like this: ``` function cbWrapper(fn) { return function() { try { return(fn.apply(this, arguments)); } catch(e) { // handle all your exceptions here } }; } ``` So, when you go to pass a callback to an Ajax call, instead of passing the actual callback, you pass `cbWrapper(callback)`. ``` $.get(url, cbWrapper(myRealCallback)); ``` Or the inline anonymous version would be like this: ``` $.get(url, cbWrapper(function() { // actual callback code here })); ``` A more advanced way of doing it would be to override `$.get()` with your own implementation that wraps the callback automatically, but that gets tricky because jQuery is so flexible about what arguments are actually passed. Because of that and because you have to override one specific argument to make this work, you would have to duplicate all of their argument detection code that figures out which arguments are present and which correspond to which actual function parameters. That code is a bit messy. It's doable and probably won't break because if jQuery broke it, then existing jQuery code would break, but it isn't very clean. If it's all your own code, you could just make your own version of `$.get()` that isn't so flexible with argument positions and switch all of your code to use it instead of the actual `$.get()`: ``` jQuery.fn.myGet = function(url, fn) { return(jQuery.get(url, cbWrapper(fn))); } ```
Git Project Dependencies on GitHub I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want to have the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git - I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also greatly appreciated. If you need more info on my setup, just ask in the comments.
First, if you really want to use git for this, then consider using its [Submodule functionality](http://book.git-scm.com/5_submodules.html): > > Git's submodule support allows a repository to contain, as a subdirectory, a checkout of an external project. Submodules maintain their own identity; the submodule support just stores the submodule repository location and commit ID, so other developers who clone the containing project ("superproject") can easily clone all the submodules at the same revision. Partial checkouts of the superproject are possible: you can tell Git to clone none, some or all of the submodules. > > > The linked page contains a detailed discussion including examples of how to use it exactly. That said, I would recommend not to use your version control system for dependency management and rather start using a build tool that can handle these things for you, such as [Maven](http://maven.apache.org/) or [Ant](http://ant.apache.org/). There is even a PHP-specific build tool in development called [Phing](http://phing.info/), but I haven't used it myself yet. It is mentioned in an article that discusses your question: [Version Control != Dependency Management](http://www.tricode.nl/version-control-dependency-management/). The reason build tools may be a better fit in the long run is because they often also support different repository types, external libraries (and different locations) and extensive checking. If you however just want to integrate these two libraries and don't want any additional hassle, the submodule approach is probably sufficient.
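For the CMS/framework split in the question, the submodule workflow would look roughly like this (the repository URLs and the `framework` subdirectory name are just placeholders):

```
# inside the CMS repository: pull the framework in as a subdirectory
git submodule add https://github.com/you/framework.git framework
git commit -m "Add framework as a submodule"

# when someone clones the CMS later, they fetch the framework too
git clone --recursive https://github.com/you/cms.git

# to pick up a newer framework commit inside an existing clone
git submodule update --remote framework
git commit -am "Bump framework"
```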
How to make a form wait for ajax to finish before it submits?
So there is a form that I want to submit only when a condition is verified from the database using ajax. I am using the `preventDefault()` method if the condition is true, i.e. if a user is not a resident, a variable is set to true in the ajax `success` function and `preventDefault()` gets called. However, when doing this, the form always submits. It doesn't wait for the ajax to finish even when async is set to false. Here's the code.

```
$('#button').click(function(e) {
  if ($('#ca_resident').prop('checked') == true) {
    amount=$('#user-amount').val().replace(/[,]/g,"");
    project_name=$('#project_name').val();
    var not_resident = false;
    $.ajax({
        url: '/main/verify_residence/',
        type: 'POST',
        aysnc: false,
        data: {
            value: amount, name: project_name
        },
        success: function(data){
          $("#verify_residence").html(data);
          not_resident = true;
        },
        dataType: 'html'
    });
  }
  if(not_resident){
    e.preventDefault();
  }
});
```
that won't work. Success will fire **after**: ``` if(not_resident){ e.preventDefault(); } ``` As it's asynchronous. You need to always cancel the button click then submit the form once success is hit: ``` $('#button').click(function(e) { var $form = $(this).closest('form'); if ($('#ca_resident').prop('checked') == true) { amount=$('#user-amount').val().replace(/[,]/g,""); project_name=$('#project_name').val(); $.ajax({ url: '/main/verify_residence/', type: 'POST', aysnc: false, data: { value: amount, name: project_name }, success: function(data){ $("#verify_residence").html(data); $form.submit(); }, dataType: 'html' }); } e.preventDefault(); }); ```
What does "apt install" do? Having used `apt-get` and seeing `aptitude` for the first time, I thought that `apt` must just be short for `aptitude` so when told to type `aptitude install`, I just typed `apt install`. It seems to have worked, but have I done what needed to be done or has something been missed?
`aptitude install` means that you are invoking the install target of the `aptitude` program. `apt install` means you are invoking the install target of the `apt` binary. Note that the `apt` binary is very new. It arrived with the 1.0 release. And no, it is not short for `aptitude`, but is a separate binary. Both these commands install the packages that are given as arguments. However, `apt` and `aptitude` each use their own dependency resolution algorithms (which choose which packages to install to satisfy the request), which are different. This means in practice that they may choose different packages to install as a result of the same package arguments. E.g. ``` apt-get install foo ``` and ``` aptitude install foo ``` may choose to install different packages. Note also that one rather noticeable difference between the two commands is aptitudes interactive dependency resolver. This will give you different choices on how to install the package, ranging from the reasonable to the insane. Daniel Burrows, the author of aptitude, was [rather proud of having discovered this algorithm](http://algebraicthunk.net/~dburrows/blog/entry/from-blogspot/2005-05-09--21:30:00/). The `apt` binary is contained in the `apt` software binary package (deb), which also includes `apt-get` and `apt-cache`. `apt` is a newer command than the other two and is intended to be friendlier. As far as I know `apt-get install` and `apt install` are functionally equivalent. The `aptitude` binary is contained in the `aptitude` software binary package (deb). To find out more about these commands you can do e.g. ``` man apt ``` to see the man page and ``` apt --help ``` to see the help output, and similarly for the other commands mentioned here. Here is Michael Vogt, long time apt developer, [on the subject of the new `apt` binary](http://mvogt.wordpress.com/2014/04/04/apt-1-0/). He writes > > The big news for this version is that we included a new “apt” binary > that combines the most commonly used commands from apt-get and > apt-cache. The commands are the same as their apt-get/apt-cache > counterparts but with slightly different configuration options. > > > Currently the apt binary supports the following commands: > > > - list: which is similar to dpkg list and can be used with flags like > --installed or --upgradable. > - search: works just like apt-cache search but sorted alphabetically. > - show: works like apt-cache show but hide some details that people are > less likely to care about (like the hashes). The full record is still > available via apt-cache show of course. > - update: just like the regular apt-get update with color output > enabled. > - install,remove: adds progress output during the dpkg run. > - upgrade: the same as apt-get dist-upgrade –with-new-pkgs. > - full-upgrade: a more meaningful name for dist-upgrade. > - edit-sources: edit sources.list using $EDITOR. > > > PS: If the Super Cow Powers thing puzzles you, you're [not the only one](https://unix.stackexchange.com/q/92185/4671). PPS: NB: `aptitude`, `apt`, `apt-get`, `apt-cache` all use the shared apt library, which lives in (you guessed it) the apt package, so they have a lot of common code. Try running ``` ldd /usr/bin/apt ``` or ``` ldd /usr/bin/aptitude ``` and you'll see a line like ``` libapt-pkg.so.4.12 => /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12 (0x00007fd065330000) ``` That is apt/aptitude linking against the shared apt library. But the dependency resolver is not one of the things they share.
Find ToolTip Popup in Logical or Visual Tree Say I have a `ToolTip` with a style specified in XAML like this: ``` <Button Content="Click me" ToolTip="Likes to be clicked"> <Button.Resources> <Style TargetType="{x:Type ToolTip}" BasedOn="{StaticResource {x:Type ToolTip}}"> <Setter Property="OverridesDefaultStyle" Value="true" /> <Setter Property="HasDropShadow" Value="True" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ToolTip}"> <StackPanel Background="Wheat" Height="200" Width="200"> <TextBlock x:Name="TxbTitle" FontSize="24" Text="ToolTip" Background="BurlyWood" /> <ContentPresenter /> </StackPanel> </ControlTemplate> </Setter.Value> </Setter> </Style> </Button.Resources> </Button> ``` Given I have a reference to the `Button` and that the `ToolTip` is showing, how can I find the `Popup` of the `ToolTip` (and later look for its visual children, e.g. `TxbTitle`)? **Update:** Based on [pushpraj's](https://stackoverflow.com/a/24596864/617658) answer I was able to get a hold on the (full) visual tree, and it looks like this: ``` System.Windows.Controls.Primitives.PopupRoot System.Windows.Controls.Decorator System.Windows.Documents.NonLogicalAdornerDecorator System.Windows.Controls.ToolTip System.Windows.Controls.StackPanel System.Windows.Controls.TextBlock (TxbTitle) System.Windows.Controls.ContentPresenter System.Windows.Controls.TextBlock System.Windows.Documents.AdornerLayer ``` Here I can find the `TxbTitle` `TextBlock`. (The logical tree like this:) ``` System.Windows.Controls.Primitives.Popup System.Windows.Controls.ToolTip System.String ``` pushpraj's answer is however based on that I can get hold of the `ToolTip` instance. What I have got is the `Button` only, and `Button.ToolTip` property returns the string `"Likes to be clicked"`, not the `ToolTip` instance. So more specifically, the question is, can I get hold of the `ToolTip` *or* the `Popup` in some way when all I've got is the `Button`. (Crazy idea: is there some way to enumerate all open `Popup`s?)
A `ToolTip` is a kind of `Popup` which hosts the tooltip content And since a Popup is hosted in a separate window so it have it's own logical and visual tree for your information below are the Visual and Logical tree for a tooltip **Visual Tree** ``` System.Windows.Controls.Primitives.PopupRoot System.Windows.Controls.Decorator System.Windows.Documents.NonLogicalAdornerDecorator System.Windows.Controls.ToolTip ``` **Logical Tree** ``` System.Windows.Controls.Primitives.Popup System.Windows.Controls.ToolTip ``` Note: since popup has it's own root so it may not be accessible from main window's visual or logical tree. **To find a popup of tool tip** I have used attached properties to find the popup for a tooltip ``` namespace CSharpWPF { public class ToolTipHelper : DependencyObject { public static bool GetIsEnabled(DependencyObject obj) { return (bool)obj.GetValue(IsEnabledProperty); } public static void SetIsEnabled(DependencyObject obj, bool value) { obj.SetValue(IsEnabledProperty, value); } // Using a DependencyProperty as the backing store for IsEnabled. This enables animation, styling, binding, etc... public static readonly DependencyProperty IsEnabledProperty = DependencyProperty.RegisterAttached("IsEnabled", typeof(bool), typeof(ToolTipHelper), new PropertyMetadata(false,OnEnable)); private static void OnEnable(DependencyObject d, DependencyPropertyChangedEventArgs e) { ToolTip t = d as ToolTip; DependencyObject parent = t; do { parent = VisualTreeHelper.GetParent(parent); if(parent!=null) System.Diagnostics.Debug.Print(parent.GetType().FullName); } while (parent != null); parent = t; do { //first logical parent is the popup parent = LogicalTreeHelper.GetParent(parent); if (parent != null) System.Diagnostics.Debug.Print(parent.GetType().FullName); } while (parent != null); } } } ``` xaml ``` <Button Content="Click me" ToolTip="Likes to be clicked"> <Button.Resources> <Style TargetType="{x:Type ToolTip}" BasedOn="{StaticResource {x:Type ToolTip}}" xmlns:l="clr-namespace:CSharpWPF"> <Setter Property="OverridesDefaultStyle" Value="true" /> <Setter Property="HasDropShadow" Value="True" /> <Setter Property="l:ToolTipHelper.IsEnabled" Value="True"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ToolTip}"> <StackPanel Background="Wheat" Height="200" Width="200"> <TextBlock x:Name="TxbTitle" FontSize="24" Text="ToolTip" Background="BurlyWood" /> <ContentPresenter /> </StackPanel> </ControlTemplate> </Setter.Value> </Setter> </Style> </Button.Resources> </Button> ``` I have added the newly created attached property to tooltip style `<Setter Property="l:ToolTipHelper.IsEnabled" Value="True"/>` **Retrieve ToolTip instance from code behind** In event you can not specify the style or template of style from xaml then code behind is your way to retrieve the tooltip instance sample code ``` Style style = new Style(typeof(ToolTip), (Style)this.FindResource(typeof(ToolTip))); style.Setters.Add(new Setter(ToolTipHelper.IsEnabledProperty, true)); this.Resources.Add(typeof(ToolTip), style); ``` code above creates a style object for tooltip and adds a setter for `ToolTipHelper.IsEnabledProperty` and inject the same style to the resources of the window as result the property changed handler `OnEnable` will be invoked in the `ToolTipHelper` class when ever the tooltip is required to be displayed. and the dependency object in the handler will be the actual tooltip instance which you may further manipulate.
How can I better document these data relationships/transformations? I'm working on a project that uses `RxJS` to perform data transformations on varying sources of data, and I'm in the process of writing some documentation for it. I want to find an effective way to document the following: 1. An abstract way to describe the cardinality and relationships of the data. 2. An abstract description of the data transformations. Here are two examples of how I'm describing a data transformation. Table headers are the destination fields, the second row is the source data or a transformation done on the source data to get the desired data. [![Data transformation 1](https://i.stack.imgur.com/YS3v0.png)](https://i.stack.imgur.com/YS3v0.png) [![Data transformation 2](https://i.stack.imgur.com/xpm8b.png)](https://i.stack.imgur.com/xpm8b.png) I can see that the Github Markdown format is very limited for this purpose, which is why I'm asking for help on this. I also have a few ERD diagrams that looks like this: [![Schema](https://i.stack.imgur.com/y1udp.png)](https://i.stack.imgur.com/y1udp.png) I'm not sure of a clean way to document how the transformations relate to the schema, and what assumptions about cardinality are made within those transformations (`getStudentTestScoreDcid` in particular)
Data Flow Diagrams sound like what you need

[From Wikipedia](https://en.wikipedia.org/wiki/Data_flow_diagram):

> A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system, modelling its process aspects. A DFD is often used as a preliminary step to create an overview of the system, which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design).

> A DFD shows what kind of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. It **does not show information about the timing of process or information about whether processes will operate in sequence or in parallel** (which is shown on a flowchart)

*Emphasis above is mine*

The whole point of the DFD is to show the transformational aspects of data as it moves through the system. You will always have an input (from a user, data storage, or another process) that feeds into a process with an output (to the screen, data storage, or another process). If you don't have those three elements, you don't include it on the DFD.

One other item worth mentioning, a large number (I would say most) DFDs do not have a starting point or ending point on the complete diagram.

There are at least 2 different symbologies used (Gane-Sarson, and Yourdon & Coad). The example below shows how data from a Customer goes into the Process Order process which outputs data being stored in the Transaction data storage. Duplicated data stores are usually included to show the process more easily and are often marked with an altered symbol for the data store (a D in a gray box on this example).

[![Example data flow diagram showing how data moves from data stores through the processes of the system being documented.](https://i.stack.imgur.com/U6gwf.png)](https://i.stack.imgur.com/U6gwf.png)

Sample from [Visual Paradigm](http://www.visual-paradigm.com/VPGallery/bpmodeling/dfd.html)
Ruby on Rails instance vs class methods
I have studied the major difference between Ruby class and instance methods, and the main one I found is that for a class method we don't need to create an instance of the class; we can call the method on the class name directly.

```
class Notifier
  def reminder_to_unconfirmed_user(user)
    headers['X-SMTPAPI'] = '{"category": "confirmation_reminder"}'
    @user = user
    mail(:to => @user["email"], :subject => "confirmation instructions reminder")
  end
end
```

So here I defined the instance method `reminder_to_unconfirmed_user` in my `Notifier` class to send email to unconfirmed users, and when I run `Notifier.reminder_to_unconfirmed_user(User.last)` it gets called even though it's an instance method, not a class method.
To define a class method, use the `self` keyword in the method's definition (or the class' name): ``` class Notifier def self.this_is_a_class_method end def Notifier.this_a_class_method_too end def this_is_an_instance_method end end ``` --- In your case, `reminder_to_unconfirmed_user` should be defined as a class method: ``` class Notifier def self.reminder_to_unconfirmed_user(user) # ... end end ``` Then you can use it like this: ``` Notifier.reminder_to_unconfirmed_user(User.last) ```
Application.WorksheetFunction vs. WorksheetFunction This one is a rather short question and probably easy to answer, however I fail to do so myself at this point: --- **Sample data:** ``` A B C ``` --- **Sample code:** ``` With Sheet1 Debug.Print Application.WorksheetFunction.Match("D", .Columns(1), 0) 'Option1 Debug.Print Application.Match("D", .Columns(1), 0) 'Option2 Debug.Print WorksheetFunction.Match("D", .Columns(1), 0) 'Option3 End With ``` --- **Question:** I know that option2 lost intellisense and will not go into debug mode, however option1 and option3 behave the same - Intellisense works - Error is thrown and code goes into debug-mode Whereas documentation on the `WorksheetFunction` object [says](https://learn.microsoft.com/en-us/office/vba/api/excel.worksheetfunction) that we can use the `WorksheetFunction` property of the `Application` object, it seems to work just fine without doing so. So, what is the added value to use `Application` object reference in this regard and what is the disadvantage of leaving it out?
I'd say that `Application` is the global context, and when we use anything that the compiler can't find in its current context, it looks it up in `Application`, eventually finding `Application.WorksheetFunction` in your case. So both should be equivalent. (This is how it works in JavaScript.) BUT I might be wrong.

**UPDATE**

The [documentation](https://learn.microsoft.com/en-us/office/vba/api/excel.application(object)) states that some functions and properties can be called without `Application.`, so it is true that `Application.WorksheetFunction` is equivalent to `WorksheetFunction`, but it is false that `Application` serves as a global context.

**UPDATE**

[According to this interesting article](https://www.excelanytime.com/excel/index.php?option=com_content&view=article&id=180:excel-vba-application-object-the-default-object-in-excel&catid=79&Itemid=475), `Application` is indeed the default object:

> The Application object is the Default Object, Excel assumes it even when it is not specified.
Format and pretty print log via tail I have this log file that I check on a frequent basis and because of the format of it, it's quite easier to read when pretty printed. I'd like to do so in a tail. Logs in the file like: ``` 2019-07-04T09:53:04-07:00 some.package.placeholder.stderr {"log": "The content", "foo": "bar", "baz": "blah"} 2019-07-04T10:15:37-07:00 some.package.placeholder.stderr {"log": "I'm actually", "foo": "bar", "baz": "blah"} 2019-07-04T10:15:37-07:00 some.package.placeholder.stderr {"log": "Interested on", "foo": "bar", "baz": "blah"} ``` And I want to do something similar to ``` tail -f myLogFile | grep [...?...] | jq '.log' ``` So when tailing I get: ``` The content I'm actually Interested on ``` Or even: ``` 2019-07-04T09:53:04-07:00 The content 2019-07-04T10:15:37-07:00 I'm actually 2019-07-04T10:15:37-07:00 Interested on ```
With GNU grep for `-o`: ``` $ tail file | grep -o '{[^}]*}' | jq -r '.log' The content I'm actually Interested on ``` With any awk: ``` $ tail file | awk 'sub(/.*{/,"{")' | jq -r '.log' The content I'm actually Interested on $ tail file | awk '{d=$1} sub(/.*{/,""){$0="{\"date\": \""d"\", " $0} 1' | jq -r '.date + " " + .log' 2019-07-04T09:53:04-07:00 The content 2019-07-04T10:15:37-07:00 I'm actually 2019-07-04T10:15:37-07:00 Interested on ``` That last one works by merging the date field from the input into the json so then jq can just select and print it with the log field.
Not able to server multiple S3 buckets on a single Cloudfront Distribution **Case:** I have a few S3 buckets that store my media files. I wanted to map all these s3 buckets to a single CF distribution. (their files should be accessed from different paths). I have made a CF distribution and added 2 buckets. For the behaviour, the first bucket is on Default(\*) and the second bucket is on path **nature/\***. **Issues:** 1. I am able to access the primary bucket (one with default behaviour) but not able to access the secondary bucket (with path nature/\*). The error on accessing the secondary bucket is "Access Denied". **Additional details:** 1. Both my buckets are **not** available to global access and CF is accessing them from OAI. References: 1. <https://aswinkumar4018.medium.com/amazon-cloudfront-with-multiple-origin-s3-buckets-71b9e6f8936> 2. <https://vimalpaliwal.com/blog/2018/10/10f435c29f/serving-multiple-s3-buckets-via-single-aws-cloudfront-distribution.html> 3. <https://youtu.be/5r9Q-tI7mMw> [![enter image description here](https://i.stack.imgur.com/8pHlD.png)](https://i.stack.imgur.com/8pHlD.png) [![enter image description here](https://i.stack.imgur.com/kZtiK.png)](https://i.stack.imgur.com/kZtiK.png)
Your files in the second bucket must start with a top-level prefix `nature/`, or else they won't resolve. CF doesn't remove the matched path when routing; it is still there. The CloudFront behavior will correctly match and send the request to the second bucket, but the path will still be `nature/...`.

If you can't move the objects in the second bucket into a `nature/` prefix, then you need a CF Function to remove this part of the path from the object key before forwarding the request to the S3 origin.

Moving all the objects under the `nature/` prefix is simple but annoying. It is the best strategy because it is also the cheapest (from a monetary standpoint), but may require extra overhead on your side. A CF function is less work up front but costly from both a money and a performance standpoint, since the CF function has to run on every request.

An example CF function might be:

```
function handler(event) {
    var request = event.request;
    request.uri = request.uri.replace(/^\/nature/, '');
    return request;
}
```
Show content in title when text-overflow CSS3 introduces [`text-overflow`](http://www.css3.info/preview/text-overflow/) so you can hide overflowing text and even add ellipses. If the text is overflowing and hidden, I would like to show it as a tooltip when hovered. The easiest way to do this is to add the text to the `title` attribute of the element. However that will make the text show whether it is overflowing or not. I only want to show the tooltip when overflowed. so if I had this: ``` <span>some text here</span> <span>some more text here</span> ``` and it rendered like this: > > some text here > > > some more... > > > The first one would have no tooltip since there is no need and the second would have a tooltip that showed: > > some more text here > > > Is there any way to set this up?
You can't do this with CSS alone, and I think any JavaScript solution will depend on how your HTML is structured.

However, if you have HTML structured like this:

```
<div id="foo">
    <span class="bar">Lorem ipsum dolor sit amet, consectetur.</span>
    <span class="bar">Lorem ipsum dolor sit amet, consectetur adipisicing elit.</span>
</div>
```

With the `#foo` element having your `text-overflow` declaration, and the `bar` class having a `white-space: nowrap` declaration, you should be able to do something like this using jQuery:

```
var foo = $("#foo");
var max_width = foo.width();

foo.children().each(function() {
    var $this = $(this);
    if ($this.width() > max_width) {
        $this.attr("title", $this.text());
    }
});
```

See: <http://jsbin.com/alepa3>
How to set multiple folders for a server in VSCode SSH TARGETS? How to set multiple folders for a server in SSH TARGETS of VSCode Remote Explorer ? ``` SSH TARGETS 10.0.1.123 MyWebSite1 /var/www ``` I need to add MyWebSite2: ``` SSH TARGETS 10.0.1.123 MyWebSite1 /var/www MyWebSite2 /var/www ```
You have to connect to the same host again (e.g. 10.0.1.123) and, once logged in, select a folder to open, e.g. `/home/project1/src` or `/root/myapp/code`. Every "entry point" folder you select for that host will be listed in the explorer. More details [here](https://code.visualstudio.com/docs/remote/ssh#_remember-hosts-and-advanced-settings)

1. Right-click on the host or click the `Add` button
2. Connect to the SSH target (either in a new window or the same one)
3. Provide the path of the folder or choose `File > Open Folder`

[![SSH connections with multiple entrypoints](https://i.stack.imgur.com/b672P.png)](https://i.stack.imgur.com/b672P.png)
What / where is \_\_scrt\_common\_main\_seh? A third party library in my program is trying to call `__scrt_common_main_seh` by way of the Microsoft library `msvcrt.lib`, but is defined by some unknown library and therefore gives a linker error. I don't know what this function is supposed to do or where it is defined. I looked online for this function, but did not find any clues except for general descriptions of what linker errors are. I believe it might be doing some setup for win32 GUI applications. The library which defines it might be configured as project dependency by Visual Studio but my project is using Bazel.
## Summary For non-console applications having error `error LNK2019: unresolved external symbol main referenced in function "int __cdecl __scrt_common_main_seh(void)"` try adding linker flag `/ENTRY:wWinMainCRTStartup` or `/ENTRY:WinMainCRTStartup` For console applications having that error, make sure to implement a `main()` function. ## Details [This answer](https://stackoverflow.com/a/55336931/353407) shows that `__scrt_common_main_seh` is normally called during `mainCRTStartup` which is the default [entry point](https://learn.microsoft.com/en-us/cpp/build/reference/entry-entry-point-symbol?view=msvc-160) for windows **console** applications. `__scrt_common_main_seh` is then (indirectly) responsible for calling `main()`. My program did not have a `main()` function, which might have prevented the compiler from generating `__scrt_common_main_seh` (Just speculating. I am totally clueless about who defines `__scrt_common_main_seh`) I did find, however, that the library I was linking against defined a `wWinMain()` function. So I tried adding the linker flag `/ENTRY:wWinMainCRTStartup` and the linker error went away.
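Since the question mentions Bazel, passing the flag there could look something like the sketch below (target and file names are made up, and it assumes the MSVC toolchain forwards `linkopts` to the linker as-is; `/SUBSYSTEM:WINDOWS` is commonly paired with a GUI entry point):

```
cc_binary(
    name = "my_app",
    srcs = ["main.cc"],
    deps = ["//third_party:gui_library"],
    linkopts = [
        "/SUBSYSTEM:WINDOWS",
        "/ENTRY:wWinMainCRTStartup",
    ],
)
```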
What does the "corePoolSize" param of the newScheduledThreadPool() method mean?
I am not clear on what the "corePoolSize" parameter of the newScheduledThreadPool() method from the class java.util.concurrent.Executors means. What happens if I put a higher value and what happens if I put a lower value?

```
// corePoolSize = 1;
java.util.concurrent.Executors.newScheduledThreadPool(corePoolSize);
```

or

```
// corePoolSize = 5;
java.util.concurrent.Executors.newScheduledThreadPool(corePoolSize);
```

What is the correct way to define that value?
It is explained in detail in the javadoc of [ThreadPoolExecutor](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html) - extract:

> When a new task is submitted in method `execute(Runnable)`, and fewer than `corePoolSize` threads are running, a new thread is created to handle the request, even if other worker threads are idle. If there are more than `corePoolSize` but less than `maximumPoolSize` threads running, a new thread will be created only if the queue is full.

So it defines if threads should be created or not depending on the state of the executor.

In the case of a `ScheduledExecutorService`, if you don't plan to have more than one task running at a given time, a `corePoolSize` of 1 is probably more efficient. And it won't prevent more threads from being created if required.
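For example, a minimal sketch (the task, period, and pool size are arbitrary):

```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerDemo {
    public static void main(String[] args) {
        // One core thread is enough if the scheduled tasks never need to run concurrently.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick " + System.currentTimeMillis()),
                0, 1, TimeUnit.SECONDS);

        // With corePoolSize = 1, a second long-running task would wait for the single
        // thread; a larger pool lets scheduled tasks run in parallel.
    }
}
```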
Benchmarking programs on Linux
For an assignment we need to benchmark our implementations with different optimizations and parameters. Is there a feasible way of benchmarking little programs on the Linux command line (I know of time) with different parameters which gives me the time data as CSV or something similar?

Output could be something like:

```
Implementation Time
A 23s
B with -O3 2Threads 15s
B with -O3 4Threads 10s
```

I'm pretty sure that I've seen something like that on some professor's slides but I can't remember who or when it was...
Why not use the `time` command inside a `bash` script? Something like:

```
#!/bin/bash

NPROG=`cat proglist | wc -l`

for i in `seq 1 ${NPROG}`
do
PROG=`sed -n "${i}p" proglist`
ARG=`sed -n "${i}p" arglist`
TIME=`{ time ${PROG} ${ARG}; } 2>&1 | grep real | awk '{print $2}'`
echo "${TIME} ${PROG} ${ARG}"
done
```

where `proglist` is a text file containing the programs to execute

```
A
B
B
```

and `arglist` is a text file containing the arguments, something like:

```
-a 1 -b 2
-f "foo"
-f "bar"
```

The output of the script will look like:

```
0m32.000s A -a 1 -b 2
1m12.000s B -f "foo"
5m38.000s B -f "bar"
```
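Since the question asked for CSV, here is a sketch of the same idea adapted to write comma-separated rows, using bash's `TIMEFORMAT` to get the elapsed time as plain seconds (the `results.csv` file name is arbitrary):

```
#!/bin/bash
# Same proglist/arglist layout as above; writes one CSV row per run.
echo "program,arguments,seconds" > results.csv

NPROG=$(wc -l < proglist)
TIMEFORMAT=%R   # make the time keyword print plain seconds, e.g. 12.345

for i in $(seq 1 "${NPROG}")
do
PROG=$(sed -n "${i}p" proglist)
ARG=$(sed -n "${i}p" arglist)
SECS=$( { time ${PROG} ${ARG} >/dev/null 2>&1; } 2>&1 )
echo "${PROG},\"${ARG}\",${SECS}" >> results.csv
done
```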
Data Structure for Subsequence Queries In a program I need to efficiently answer queries of the following form: > > Given a set of strings `A` and a query string `q` return all `s ∈ A` such that q is a [subsequence](http://en.wikipedia.org/wiki/Subsequence) of `s` > > > For example, given `A = {"abcdef", "aaaaaa", "ddca"}` and `q = "acd"` exactly `"abcdef"` should be returned. --- The following is what I have considered considered so far: 1. For each possible character, make a sorted list of all string/locations where it appears. For querying interleave the lists of the involved characters, and scan through it looking for matches within string boundaries. This would probably be more efficient for words instead of characters, since the limited number of different characters will make the return lists very dense. 2. For each n-prefix `q` might have, store the list of all matching strings. `n` might realistically be close to 3. For query strings longer than that we brute force the initial list. This might speed things up a bit, but one could easily imagine some n-subsequences being present close to all strings in `A`, which means worst case is the same as just brute forcing the entire set. --- Do you know of any data structures, algorithms or preprocessing tricks which might be helpful for performing the above task efficiently for large `A`s? (My `s`s will be around 100 characters) --- **Update:** Some people have suggested using LCS to check if `q` is a subsequence of `s`. I just want to remind that this can be done using a simple function such as: ``` def isSub(q,s): i, j = 0, 0 while i != len(q) and j != len(s): if q[i] == s[j]: i += 1 j += 1 else: j += 1 return i == len(q) ``` **Update 2:** I've been asked to give more details on the nature of `q`, `A` and its elements. While I'd prefer something that works as generally as possible, I assume `A` will have length around 10^6 and will need to support insertion. The elements `s` will be shorter with an average length of 64. The queries `q` will only be 1 to 20 characters and be used for a live search, so the query "ab" will be sent just before the query "abc". Again, I'd much prefer the solution to use the above as little as possible. **Update 3:** It has occurred to me, that a data-structure with `O(n^{1-epsilon})` lookups, would allow you to solve OVP / disprove the SETH conjecture. That is probably the reason for our suffering. The only options are then to disprove the conjecture, use approximation, or take advantage of the dataset. I imagine quadlets and tries would do the last in different settings.
# Tests There have been four main proposals in this thread: 1. Shivam Kalra suggested creating an automaton based on all the strings in `A`. This approach has been tried slightly in the literature, normally under the name "Directed Acyclic Subsequence Graph" (DASG). 2. J Random Hacker suggested extending my 'prefix list' idea to all 'n choose 3' triplets in the query string, and merging them all using a heap. 3. In the note "Efficient Subsequence Search in Databases" Rohit Jain, Mukesh K. Mohania and Sunil Prabhakar suggest using a Trie structure with some optimizations and recursively search the tree for the query. They also have a suggestion similar to the triplet idea. 4. Finally there is the 'naive' approach, which wanghq suggested optimizing by storing an index for each element of `A`. To get a better idea of what's worth putting continued effort into, I have implemented the above four approaches in Python and benchmarked them on two sets of data. The implementations could all be made a couple of magnitudes faster with a well done implementation in C or Java; and I haven't included the optimizations suggested for the 'trie' and 'naive' versions. ## Test 1 `A` consists of random paths from my filesystem. `q` are 100 random `[a-z]` strings of average length 7. As the alphabet is large (and Python is slow) I was only able to use duplets for method 3. Construction times in seconds as a function of `A` size: ![Construction time](https://i.stack.imgur.com/Pnz6F.png) Query times in seconds as a function of `A` size: ![Query time](https://i.stack.imgur.com/FiyyL.png) ## Test 2 `A` consists of randomly sampled `[a-b]` strings of length 20. `q` are 100 random `[a-b]` strings of average length 7. As the alphabet is small we can use quadlets for method 3. Construction times in seconds as a function of `A` size: ![enter image description here](https://i.stack.imgur.com/7UrIX.png) Query times in seconds as a function of `A` size: ![enter image description here](https://i.stack.imgur.com/bWH5g.png) # Conclusions The double logarithmic plot is a bit hard to read, but from the data we can draw the following conclusions: - Automatons are very fast at querying (constant time), however they are impossible to create and store for `|A| >= 256`. It might be possible that a closer analysis could yield a better time/memory balance, or some tricks applicable for the remaining methods. - The dup-/trip-/quadlet method is about twice as fast as my trie implementation and four times as fast as the 'naive' implementation. I used only a linear amount of lists for the merge, instead of `n^3` as suggested by j\_random\_hacker. It might be possible to tune the method better, but in general it was disappointing. - My trie implementation consistently does better than the naive approach by around a factor of two. By incorporating more preprocessing (like "where are the next 'c's in this subtree") or perhaps merging it with the triplet method, this seems like todays winner. - If you can do with a magnitude less performance, the naive method does comparatively just fine for very little cost.
Difference between DbSet property and Set() function in EF Core? Given this kind of context: ``` public class FooContext : DbContext { public FooContext(DbContextOptions<FooContext> opts) : base(opts) { } public DbSet<Bar> Bars { get; set; } } ``` I can get to a `Bar` in two ways: ``` fooContext.Bars.Add(new Bar()); // Approach 1 ``` or ``` fooContext.Set<Bar>().Add(new Bar()); // Approach 2 ``` What is the difference between the two approaches? I've tried to answer my own question by: - Inspecting the intellisense for both (only tells me that `Set<T>()` also creates a `DbSet<T>`) - [Googling for "EF Core Set vs property"](https://www.google.nl/search?q=EF+Core+Set<T>+vs+property) but that doesn't seem to be the 'right' query - [Google for `DbSet<T>` specifically on the docs urls](https://www.google.nl/search?q=DbSet+site%3Ahttps%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fef%2Fcore%2F) but no relevant results here either it seems - Reading the intro of [the `DbSet<T>` docs](https://learn.microsoft.com/en-us/ef/core/api/microsoft.entityframeworkcore.dbset-1) which just suggests that you can get a set through either of the two methods (not if there is or isn't a difference) - Read [the `Set<T>()` docs](https://learn.microsoft.com/en-us/ef/core/api/microsoft.entityframeworkcore.dbcontext.set) which has no relevant info But I could not find any good explanation about which of the two is used for which purpose. What is the difference? Or perhaps more importantly: where and how should I be able to find this in the docs?
They do exactly the same thing. The real question is when you would use one over the other.

You use DbSet when you know the type of entity you want to play with. You simply write the DbContext name, then the entity type name, and you can create, read, update or delete entries for this entity with the entity methods available. You know what you want and you know where to do it.

You use Set when you don't know the entity type you want to play with. Let's say you wanted to build a class that does your repository functions for creating, reading, updating and deleting entries for an entity. You want this class to be reusable so that you can just pass a DbContext to it and it will use the same create, read, update and delete methods. You don't know for sure what DbContext it will be used on or what DbSets that DbContext will have. This is where you use generics, so that your class can be used with any DbContext for any DbSet.

Here's an example of a class you can use for creating any entity on any DbSet in any DbContext:

```
public class Repository<TDbContext> where TDbContext : DbContext
{
    private TDbContext _context { get; }

    public Repository(TDbContext context)
    {
        _context = context;
    }

    public TEntity Create<TEntity>(TEntity entity) where TEntity : class
    {
        if(entity != null)
        {
            var dataSet = _context.Set<TEntity>();

            if(entity is IEnumerable)
            {
                dataSet.AddRange(entity);
            }
            else
            {
                dataSet.Add(entity);
            }

            _context.SaveChanges();
        }

        return entity;
    }
}
```

And this is how to use it:

```
var dbContext01 = new DbContext01();
var dbContext02 = new DbContext02();

var repository01 = new Repository<DbContext01>(dbContext01);
var repository02 = new Repository<DbContext02>(dbContext02);

repository01.Create(new EntityOnDbContext01 {
    Property01A = "String",
    Property01B = "String"
});

repository02.Create(new EntityOnDbContext02 {
    Property02A = 12345,
    Property02B = 12345
});
```

Here's a link if you want to know more about generics. It's super awesome.

<https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/>
T-SQL Group By Problem

I've got the following problem (or maybe it's just a mental block): I've got a table (actually a view over a table) with the following columns and data:

![Sample data](https://i.stack.imgur.com/0NVPg.png)

Now I want to group this data by the column "Customer" and get the "CompetitorName" with the highest "CompetitorCount". Of course I can create a query like this:

`SELECT Customer, MAX(CompetitorCount) FROM MyTable GROUP BY Customer`

This will return two rows:

```
Foo; 12
Bar; 7
```

But I won't be able to get the CompetitorName that way. If I include it in the GROUP BY clause, the "Customer" will show up multiple times. Otherwise I have to use an aggregate function to select which "CompetitorName" I want, but of course MAX doesn't work here.

I'm sure this can be done somehow, but at the moment I've got no idea how. Thanks in advance for any help.
```
SELECT customer, competitorname, competitorcount
FROM (
    SELECT *,
           rn = ROW_NUMBER() OVER (
                    PARTITION BY customer
                    ORDER BY competitorcount DESC)
    FROM tbl ) X
WHERE rn = 1
```

If you want to show ties at the maximum value, change ROW\_NUMBER() to RANK().

You might even find the `cross apply` version below faster, but it cannot show ties: `TOP 1` returns a single record per customer even if several competitors share the maximum count.

```
SELECT C.*
FROM (SELECT DISTINCT customer FROM tbl) A
CROSS APPLY
    (SELECT TOP 1 B.*
     FROM tbl B
     WHERE A.customer = B.customer
     ORDER BY B.competitorcount DESC) C
```

It first selects all the distinct customers. Then, for each customer, it retrieves the record with the highest competitorcount for that customer.
Segmentation fault occurs while reading file content into an object in C++

In my code I first store the name and mobile number in one object; after that I write the object to a text file using the fstream `write()` method. The write works, but when I read the written content back into another object and call the display method, it displays the data correctly and then gives me a segmentation fault after printing. Here is my code:

```
#include<iostream>
#include<fstream>
using namespace std;

class Telephone
{
    private:
        string name="a";
        int phno=123;
    public:
        void getTelephoneData()
        {
            cout<<"Enter Name:";
            cin>>name;
            cout<<"Enter Phone Number:";
            cin>>phno;
        }
        void displayData()
        {
            cout<<"Name\t\tPhone no"<<endl;
            cout<<name<<"\t\t"<<phno<<endl;
        }
        void getData()
        {
            Telephone temp;
            ifstream ifs("Sample.txt",ios::in|ios::binary);
            ifs.read((char*)&temp,sizeof(temp));
            temp.displayData();
        }
};

int main()
{
    Telephone t1;
    t1.getTelephoneData();
    cout<<"----Writing Data to file------"<<endl;
    ofstream ofs("Sample.txt",ios::out|ios::binary);
    ofs.write((char*)&t1,sizeof(t1));
    ofs.close();
    t1.getData();
}
```

Please help me find where I'm wrong. Thanks in advance!
So, before I give you a solution, let's briefly talk about what is going on here:

`ofs.write((char*)&t1,sizeof(t1));`

What you are doing is casting `t1` to a pointer to `char`, and saying 'write to `ofs` the memory representation of `t1`, as is'. So we have to ask ourselves: what is this memory representation of `t1`?

1. You are storing an (implementation-defined, most probably 4-byte) integer.
2. You are also storing a complex `std::string` object.

Writing the 4-byte integer might be OK. It is definitely not portable (big-endian vs little-endian), and you might end up with the wrong int if the file is read on a platform with different endianness.

Writing the `std::string` is definitely not OK. Strings are complex objects, and they most often allocate storage on the heap (although there is such a thing as small string optimization). What this means is that you're going to serialize a pointer to a dynamically allocated object. This will never work, as reading the pointer back would give you some location in memory that you have absolutely no control over. This is a great example of undefined behavior. Anything goes, and anything might happen with your program, including 'appearing to work correctly' despite deeply seated problems.

In your specific example, because the Telephone object that was created is still in memory, what you get is two pointers to the same dynamically allocated memory. When your `temp` object goes out of scope, it deletes that memory. When you return to your main function and `t1` goes out of scope, it tries to delete the same memory again.

Serializing any kind of pointer is a big no-no. If your object's internals consist of pointers, you need to decide how the data behind those pointers will be stored in your stream and later read back to construct a new object. A common solution is to store the pointed-to data 'as if' it were stored by value and, when reading the object from storage, allocate memory dynamically and put the contents of the object into that memory. This will obviously not work if you are trying to serialize a case where multiple objects point to the same address in memory: if you try to apply this solution there, you would end up with multiple copies of your original object.

~~Fortunately, for the case of a `std::string` this problem is easily solved, as strings have overloaded `operator<<` and `operator>>`, and you don't need to implement anything to make them work.~~

*edit: Just using `operator<<` and `operator>>` won't work for `std::string`; I explain why a bit later.*

## How to make it work

There are many possible solutions, and I'm going to share one here. The basic idea is that you should serialize every member of your Telephone structure individually, and rely on the fact that every member knows how to serialize itself. I am going to ignore the problem of cross-endianness compatibility to keep the answer a bit briefer, but if you care about cross-platform compatibility, you should think about it.

My basic approach is to overload `operator<<` and `operator>>` for the class Telephone. I declare two free functions that are friends of the Telephone class. This allows them to poke at the internals of Telephone objects in order to serialize their members.

```
class Telephone
{
    friend ostream& operator<<(ostream& os, const Telephone& telephone);
    friend istream& operator>>(istream& is, Telephone& telephone);

    // ...
};
```

*edit: I initially had the code for serializing the strings wrong, so my comment that it's fairly straightforward was flat-out wrong.*

The code for implementing the functions has a surprising twist. Because `operator>>` for strings stops reading from the stream when it encounters whitespace, a name that is not a single word, or one that contains special characters, would not work: it would put the stream into an error state, and reading the phone number would then fail. To work around the problem, I followed the example by @Michael Veksler and stored the length of the string explicitly. My implementation looks as follows:

```
ostream& operator<<(ostream& os, const Telephone& telephone)
{
    const size_t nameSize = telephone.name.size();
    os << nameSize;
    os.write(telephone.name.data(), nameSize);
    os << telephone.phno;
    return os;
}

istream& operator>>(istream& is, Telephone& telephone)
{
    size_t nameSize = 0;
    is >> nameSize;
    telephone.name.resize(nameSize);
    is.read(&telephone.name[0], nameSize);
    is >> telephone.phno;
    return is;
}
```

Please note that you must make sure the data you write matches the data you're later going to read. If you store a different amount of information, or the arguments are in the wrong order, you will not end up with a valid object. If you later make any modifications to the Telephone class, adding new fields that you want saved, you'll need to modify *both* functions.

To support names with spaces in them, the way you read the names from cin should be modified as well. One way would be to use `std::getline(std::cin, name);` instead of `cin >> name`.

Finally, here is how you should serialize and deserialize to and from those streams: don't use the `ostream::write()` and `istream::read()` functions; use instead the `operator<<` and `operator>>` that we have overloaded.

```
void getData()
{
    Telephone temp;
    ifstream ifs("Sample.txt",ios::in|ios::binary);
    ifs >> temp;
    temp.displayData();
}

void storeData(const Telephone& telephone)
{
    ofstream ofs("Sample.txt",ios::out|ios::binary);
    ofs << telephone;
}
```
IllegalAccessError: class cannot access its superinterface

I have a class Assembly implementing IAssembly. I see the following error when starting the application:

```
Caused by: java.lang.IllegalAccessError: class <Assembly > cannot access its superinterface <IAssembly>
    at java.lang.ClassLoader.defineClass1(Native Method)
```

Assembly code

```
class package.Assembly implements IAssembly {

}
```

IAssembly

```
interface IAssembly {
   // note - this is not public, so it has default (package-private) access
}
```

Assembly and IAssembly exist in two different jars. The jars are loaded by different classloaders: the Assembly class is loaded by the child classloader, IAssembly by the parent. The classloaders use chaining.

In normal cases this works. The error occurs when I run my application after instrumenting the jars with Cobertura. Without instrumentation everything works fine. Could Cobertura instrumentation cause such an error? Or is this an error that was waiting to be detected anyway, and Cobertura just exposes it quickly?

Making the interface public makes the error go away.
It looks to me like package-private access fails with instrumentation and multiple classloaders, even if the loaders are chained. For package-private (default) access, the JVM requires both classes to be in the same *runtime package*, which means not just the same package name but also the same defining classloader; delegation between chained loaders is not enough.

This javadoc on [java.lang.instrument.Instrumentation](http://download.oracle.com/javase/6/docs/api/java/lang/instrument/Instrumentation.html#appendToBootstrapClassLoaderSearch(java.util.jar.JarFile)) isn't directly about your scenario, but it does describe a similar situation:

> 
> The agent should take care to ensure that the JAR does not contain any classes or resources other than those to be defined by the bootstrap class loader for the purpose of instrumentation. Failure to observe this warning could result in unexpected behaviour that is difficult to diagnose. For example, suppose there is a loader L, and L's parent for delegation is the bootstrap class loader. Furthermore, a method in class C, a class defined by L, makes reference to a non-public accessor class C$1. If the JAR file contains a class C$1 then the delegation to the bootstrap class loader will cause C$1 to be defined by the bootstrap class loader. In this example an IllegalAccessError will be thrown that may cause the application to fail. One approach to avoiding these types of issues, is to use a unique package name for the instrumentation classes.
> 
> The Java Virtual Machine Specification specifies that a subsequent attempt to resolve a symbolic reference that the Java virtual machine has previously unsuccessfully attempted to resolve always fails with the same error that was thrown as a result of the initial resolution attempt. Consequently, if the JAR file contains an entry that corresponds to a class for which the Java virtual machine has unsuccessfully attempted to resolve a reference, then subsequent attempts to resolve that reference will fail with the same error as the initial attempt.
> 
> 

Maybe check which loader is finding your instrumented classes, and see if there is a way to get both `Assembly` and `IAssembly` loaded by that same classloader.