I get an error when trying to convert video I'm trying to convert an mp4 video file to an avi file with ``` ffmpeg -i 10.mp4 -vcodec mpeg4 -vtag XVID -b 990k -bf 2 -g 300 -s 640x360 -acodec libmp3lame -ab 128k -ar 48000 -ac 2 -pass 2 -f avi 10.avi ``` but I get the following error ``` Cannot read file 'ffmpeg2pass-0.log': No such file or directory Error reading log file 'ffmpeg2pass-0.log' for pass-2 encoding ``` Any suggestions to avoid this problem?
You're doing a two-pass encoding, but you haven't made a first pass yet (which would have created the log file). Besides, you're using some options which are ambiguous. If you don't need a specific target file size, a two-pass encode is unnecessary. Try this: ``` ffmpeg -i 10.mp4 -c:v mpeg4 -vtag xvid -qscale:v 3 -c:a libmp3lame -b:a 128k -vf scale=640:360 10.avi ``` `-qscale:v 3` specifies the desired video quality: 1 would be very high quality / big file size, 31 the lowest quality / small file size. For a two-pass encoding, see [the example in the ffmpeg wiki](http://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20encode%20Xvid%20/%20DivX%20video%20with%20ffmpeg).
Set defaults header on AngularJS but don't use it on one specific request For sending OAuth2 token I am setting up defaults header on AngularJS like this: ``` $http.defaults.headers.common['Authorization'] = 'Bearer ' + access_token; ``` This works great but I don't need this header (I get an error) for one specific request. Is there a way of excluding defaults header when performing that request? Thanks! # SOLVED Thanks to [Riron](https://stackoverflow.com/users/2703657/riron) for getting me on a right path. Here's the answer: ``` $http({ method: 'GET', url: 'http://.../', transformRequest: function(data, headersGetter) { var headers = headersGetter(); delete headers['Authorization']; return headers; } }); ```
When you make your call with $http, you can override default headers by providing them directly in your request config: ``` $http({method: 'GET', url: '/someUrl', headers: {'Authorization' : 'NewValue'} }).success(); ``` Otherwise you could transform your request using the `transformRequest` parameter, still in your $http config. See the docs: > > transformRequest – `{function(data,headersGetter)|Array.<function(data, headersGetter)>}` – transform > function or an array of such functions. The transform function takes > the http request body and headers and returns its transformed > (typically serialized) version. > > > This way you could delete a header for a single request before it's sent: ``` $http({method: 'GET', url: '/someUrl', transformRequest: function(data,headersGetter){ //Headers change here } }).success(); ```
Mutate each row in group according to the first row of the group For example, I have a data frame: ``` df <- data.frame(grp = c(1,1,1,1,1,2,2,2,2,2), idx = c(1,2,3,4,5,1,2,3,4,5), val = c(4,6,1,7,2,8,5,3,9,1)) ``` I want to divide the val of each row by the val of the first row in each group. The only way I found is to introduce a new column: ``` df %>% group_by(grp) %>% arrange(idx) %>% mutate(t = ifelse(row_number(idx) == 1, val, 0)) %>% mutate(val = val / sum(t)) ``` Is there any simple way to do this?
We can do ``` df %>% group_by(grp) %>% arrange(idx) %>% mutate(val = val/sum((row_number() == 1)*val)) # A tibble: 10 x 3 # Groups: grp [2] # grp idx val # <dbl> <dbl> <dbl> # 1 1 1 1.000 # 2 2 1 1.000 # 3 1 2 1.500 # 4 2 2 0.625 # 5 1 3 0.250 # 6 2 3 0.375 # 7 1 4 1.750 # 8 2 4 1.125 # 9 1 5 0.500 #10 2 5 0.125 ``` --- If we need to divide by the first 'val' observation, just do `val[1L]` ``` df %>% group_by(grp) %>% arrange(idx) %>% mutate(val = val/val[1L]) ```
android uiautomator to click ListView I have an android app which uses the uiautomator to click the options in a listview. The ListView looks like this: ![enter image description here](https://i.stack.imgur.com/yQ79f.jpg) I am trying to click the Full Benchmark list item, but my code for it does not recognize the list item. This is what I have: ``` UiScrollable listView = new UiScrollable(new UiSelector().scrollable( true).className("android.widget.ListView")); UiObject item1 = listView.getChildByText(new UiSelector() .className(android.widget.TextView.class.getName()), "Full Benchmark"); item1.click(); ``` I would appreciate any help!
Here is what I use to find, and then click an item in a listview: ``` //Find and click a ListView item public void clickListViewItem(String name) throws UiObjectNotFoundException { UiScrollable listView = new UiScrollable(new UiSelector()); listView.setMaxSearchSwipes(100); listView.scrollTextIntoView(name); listView.waitForExists(5000); UiObject listViewItem = listView.getChildByText(new UiSelector() .className(android.widget.TextView.class.getName()), ""+name+""); listViewItem.click(); System.out.println("\""+name+"\" ListView item was clicked."); } ``` So in your case it would be ``` clickListViewItem("Full Benchmark") ``` Or: ``` UiScrollable listView = new UiScrollable(new UiSelector()); listView.setMaxSearchSwipes(100); listView.scrollTextIntoView(name); listView.waitForExists(5000); UiObject listViewItem = listView.getChildByText(new UiSelector() .className(android.widget.TextView.class.getName()), "Full Benchmark"); listViewItem.click(); ```
How to check whether the point is in the tetrahedron or not? I know all the coordinates of the tetrahedron and of the point I would like to check. So does anyone know how to do it? I've tried to determine whether the point belongs to each triangle of the tetrahedron, reasoning that if that's true for all triangles then the point is in the tetrahedron. But it's absolutely wrong.
You define a tetrahedron by four vertices, A, B, C and D. Therefore you also have the 4 triangles defining the surface of the tetrahedron. For each of those 4 planes, you check which side the point P lies on. The normal of each plane points away from the center of the tetrahedron, so you just have to test against 4 planes. Your plane equation looks like this: `a*x+b*y+c*z+d=0`. Just fill in the point values (x, y, z). If the sign of the result is > 0, the point is on the same side as the normal; if the result is == 0, the point lies in the plane; and in your case you want the third option: < 0 means it is on the back side of the plane. If this is fulfilled for all 4 planes, your point lies inside the tetrahedron.
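For illustration, here is a minimal Python/NumPy sketch of this four-plane sign test (the helper names are my own). Instead of assuming outward-pointing normals, it compares the query point's side of each face with the side of the tetrahedron's fourth vertex, so the vertex ordering doesn't matter:

```
import numpy as np

def same_side(v1, v2, v3, v4, p):
    # Plane through v1, v2, v3; v4 is the remaining vertex of the tetrahedron.
    normal = np.cross(v2 - v1, v3 - v1)
    # p is on the inner side of this face if it lies on the same side as v4.
    return np.dot(normal, v4 - v1) * np.dot(normal, p - v1) >= 0

def point_in_tetrahedron(a, b, c, d, p):
    a, b, c, d, p = (np.asarray(v, dtype=float) for v in (a, b, c, d, p))
    return (same_side(a, b, c, d, p) and same_side(b, c, d, a, p) and
            same_side(c, d, a, b, p) and same_side(d, a, b, c, p))

print(point_in_tetrahedron([0,0,0], [1,0,0], [0,1,0], [0,0,1], [0.1,0.1,0.1]))  # True
print(point_in_tetrahedron([0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,1]))        # False
```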
JEST @vue/composition-api + Jest Test suite failed to run [vue-composition-api] must call Vue.use(VueCompositionAPI) before using any function I use Vue 2 and `@vue/composition-api` plugin. I created a Jest test, but the test failed with errors: ``` Test suite failed to run [vue-composition-api] must call Vue.use(VueCompositionAPI) before using any function. 1 | import { ref } from '@vue/composition-api'; 2 | > 3 | const showSidebar = ref(false); | ^ 4 | export const breakPointSize = 1250; 5 | export const classLink = 'SidebarSectionItemRow'; 6 | export const idBtnNavbar = 'sidebarBtnNavbar'; ``` **test.spec.ts** ``` import VueCompositionApi from '@vue/composition-api' import { createLocalVue, mount } from '@vue/test-utils'; import MainMenuContent from '@/components/layouts/main/sidebar/menu/MainMenuContent.vue'; // create an extended `Vue` constructor const localVue = createLocalVue() // install plugins as normal localVue.use(VueCompositionApi) describe('MainMenuContent', () => { it('expect AdminSection ', () => { const wrapper = mount(MainMenuContent, { localVue, }); .... }); }); ``` I think the errors are caused by `ref` being outside a `setup()` function. **sidebarControl.ts** ``` import { ref } from '@vue/composition-api'; const showSidebar = ref(false); export function useControlSidebar() { const toggleSidebar = () => { showSidebar.value = !showSidebar.value; }; return { showSidebar, toggleSidebar, }; } ``` Is it possible to solve this somehow? ![Error](https://i.stack.imgur.com/QRYmA.png)
`MainMenuContent.vue` likely has a top-level import of `sidebarControl.ts`, which calls `ref` outside of the component's `setup()`, so importing `MainMenuContent.vue` would trigger the `ref` call before your test has a chance to setup `localVue` with `VueCompositionApi`. One way to solve this is to defer the component import until after you've setup `localVue` in your test: ``` // import MainMenuContent from '@/components/MainMenuContent.vue' // ❌ DON'T DO THIS import { mount, createLocalVue } from '@vue/test-utils' import VueCompositionApi from '@vue/composition-api' describe('MainMenuContent', () => { it('expect AdminSection', () => { const localVue = createLocalVue() localVue.use(VueCompositionApi) const MainMenuContent = require('@/components/MainMenuContent.vue').default // ✅ const wrapper = mount(MainMenuContent, { localVue }) }) }) ``` Alternatively, you could use a Jest [setup file](https://jestjs.io/docs/en/configuration#setupfiles-array) to initialize your test environment with `VueCompositionApi`: ``` // jest.config.js module.exports = { setupFiles: ['<rootDir>/tests/jest-setup.js'] } // jest-setup.js import Vue from 'vue' import VueCompositionApi from '@vue/composition-api' Vue.use(VueCompositionApi) ```
Function parameter always empty why? Can someone tell me why this function call does not work and why the argument is always empty? ``` function check([string]$input){ Write-Host $input #empty line $count = $input.Length #always 0 $test = ([ADSI]::Exists('WinNT://./'+$input)) #exception (empty string) return $test } check 'test' ``` I'm trying to find out whether a user or user group exists. Best regards
Perhaps use a `param` block for parameters. > > <https://technet.microsoft.com/en-us/magazine/jj554301.aspx> > > > **Update**: the problem is fixed if you don't use `$input` as a parameter name — `$input` is an automatic variable in PowerShell (it enumerates pipeline input), so it can't be used as a regular parameter. Maybe not a bad thing to have proper variable names ;) Also, a PowerShell function returns all uncaptured output, so you don't need a `return` statement — just emit the object as a statement by itself and it will be returned by the function: ``` function Get-ADObjectExists { param( [Parameter(Mandatory=$true, ValueFromPipeline=$true)] [string] $ObjectName ) #return the result by just emitting the object (an explicit return isn't needed in PowerShell) ([ADSI]::Exists('WinNT://./'+$ObjectName)) } Get-ADObjectExists -ObjectName 'test' ```
Booleans in ConfigParser always return True This is my example script: ``` import ConfigParser config = ConfigParser.ConfigParser() config.read('conf.ini') print bool(config.get('main', 'some_boolean')) print bool(config.get('main', 'some_other_boolean')) ``` And this is `conf.ini`: ``` [main] some_boolean: yes some_other_boolean: no ``` When running the script, it prints `True` twice. Why? It should be `False`, as `some_other_boolean` is set to `no`.
Use [`getboolean()`](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser.getboolean): ``` print config.getboolean('main', 'some_boolean') print config.getboolean('main', 'some_other_boolean') ``` From the [Python manual](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser.getboolean): > > > ``` > RawConfigParser.getboolean(section, option) > > ``` > > A convenience method which coerces the option in the specified section to a Boolean value. Note that the accepted values for the option are "1", "yes", "true", and "on", which cause this method to return True, and "0", "no", "false", and "off", which cause it to return False. These string values are checked in a case-insensitive manner. Any other value will cause it to raise ValueError. > > > Such as: ``` my_bool = config.getboolean('SECTION','IDENTIFIER') ``` The `bool()` constructor converts an empty string to False. Non-empty strings are True. `bool()` doesn't do anything special for "false", "no", etc. ``` >>> bool('false') True >>> bool('no') True >>> bool('0') True >>> bool('') False ```
Changes made to an object created outside of Vue component are not detected by Vue 3 I have a class Character : `Character.ts` ``` /// This is called when server responds public setAttributeByType(type: StatsTypes, value: number): void { switch (type) { case StatsTypes.STRENGTH: case StatsTypes.DEXTERITY: case StatsTypes.VITALITY: case StatsTypes.INTELIGENCE: this.stats[type] = value; break; default: break; } } .... ``` The class instance is created outside of Vue component in my "networking code": ``` public static onStartGame(data:any):void { Player.character = new Character(data.data); Game.displayGamePage(PagesIndex.GAME_PAGE); requestAnimationFrame(Game.loop); } ``` And used in main component: `Game.vue` ``` import { defineComponent } from 'vue' import Player from '@/Player/player'; import SceneManager, { Scenes } from '@/Render/scene_manager'; import Scene from '@/Render/scene'; import MainScene from "@/Render/scenes/main"; import MapScene from "@/Render/scenes/map"; import Game from '@/game/game'; // Components import VplayerBar from "@/vue/subs/playerBar.vue" import Vcharacter from "@/vue/subs/character.vue" export enum GamePages { MAIN_PAGE = 1, MAP_PAGE, } export default defineComponent({ name: "game", components: { VplayerBar, Vcharacter, }, data() { return { page: GamePages.MAIN_PAGE, scenes: Scenes, gamePages: GamePages, player: Player, character: Player.character, /* <------------ Reference to class */ pages: { character: false, } } }, }) ``` ...which pass it down as a prop to `character.vue` component ``` export default defineComponent({ name: "character", props: { character: { // <---- This prop does not change type: Character, required: true }, toggleCharacter: { type: Function, required: true } }, components: { VBackpack, VInventory }, data() { return { StatsTypes, } }, methods: { toglePage() { this.toggleCharacter() }, getPortrait(isMobile:boolean = false) { return Character.getCharacterPortrait(this.character.faction, this.character.gender, isMobile); }, addPoint(attribute:StatsTypes, value: number) { if (GKeyHandler.keys[Keys.SHIFT_LEFT]) value = 10; CharacterHandler.addAttributePoint(Player.character, attribute, value); //this.character.stats[StatsTypes.STRENGTH] += 1; } } }); ``` Problem is that whenever I change anything in character class instance outside the vue component (in my networking code) - for example `character.setAttributeByType(attribute, value)`, Vue does not see the change. If I do this directly inside `character.vue` component, it works (see commented code in `addPoint`) I tried to use Proxy & Watch and it did not help.
Your problem is an "identity" issue described [here](https://v3.vuejs.org/guide/reactivity.html#proxy-vs-original-identity). Vue 3 uses ES6 proxies to make objects reactive. If you do `const data = reactive(payload)`, then `data` is a different object than `payload` (unlike in Vue 2, where the object was just modified with reactive setters/getters). The same applies to the Options API (which you are using). If you do `character: Player.character` in `data()`, the result is that `this.character` (inside the Vue component) is a different object than `Player.character`. You can easily test it by doing `console.log(this.character === Player.character)` ...for example in `mounted()` - the result will be `false`. As a result, if you make any change using `this.character` (Vue's reactive proxy), Vue will detect the change and rerender (and propagate the change to the original object), but if you change the original object `Player.character`, the change is not detected by Vue... A simple fix is to use Vue's [Composition API](https://v3.vuejs.org/api/basic-reactivity.html#basic-reactivity-apis), which allows you to use Vue reactivity outside of Vue components. ``` import { reactive } from 'vue' Player.character = reactive(new Character(data.data)); ``` Now when you use `Player.character` to initialize `data()` in the Vue component, Vue sees it is already a reactive proxy and does not wrap it in a proxy again.
Better way to create a stream of functions? I wish to do lazy evaluation on a list of functions I've defined as follows; ``` Optional<Output> output = Stream.<Function<Input, Optional<Output>>> of( classA::eval, classB::eval, classC::eval) .map(f -> f.apply(input)) .filter(Optional::isPresent) .map(Optional::get) .findFirst(); ``` where as you see, each class (a, b & c) has an `Optional<Output> eval(Input in)` method defined. If I try to do ``` Stream.of(...).... ``` ignoring explicit type, it gives > > T is not a functional interface > > > compilation error. Not accepting functional interface type for `T` generic type in `.of(T... values)` --- Is there a snappier way of creating a stream of these functions? I hate to explicitly define `of` method with `Function` and its in-out types. Wouldn't it work in a more generic manner? This issue stems from the topic of the following question; [Lambda Expression and generic method](https://stackoverflow.com/questions/22588518/lambda-expression-and-generic-method)
You can break it into two lines: ``` Stream<Function<Input, Optional<Output>>> stream = Stream .of(classA::eval, classB::eval, classC::eval); Optional<Output> out = stream.map(f -> f.apply(input)) .filter(Optional::isPresent) .map(Optional::get) .findFirst(); ``` or use casting: ``` Optional<Output> out = Stream.of( (Function<Input, Optional<Output>>) classA::eval, classB::eval, classC::eval) .map(f -> f.apply(input)) .filter(Optional::isPresent) .map(Optional::get) .findFirst(); ``` but I don't think you can avoid specifying the type of the `Stream` element - `Function<Input, Optional<Output>>` - somewhere, since otherwise the compiler can't infer it from the method references.
Replace only if string exists in current line I have a line such as: ``` sed -i 's/mystring/newstring/' $target ``` This command will change all `mystring` to `newstring`. What I want now is: when the program sees `mystring`, how can I check for the current line if the string `searchstring` exists or not? If it exists, `newstring` is `1`; otherwise, `newstring` is `0`.
## Solution Assuming your input file $target contains the following: ``` some text mystring some other text some text mystring a searchstring just some more text ``` This command: ``` sed -i -e '/searchstring/ s/mystring/1/ ; /searchstring/! s/mystring/0/' $target ``` will change its content to: ``` some text 0 some other text some text 1 a searchstring just some more text ``` ## Explanation The script contains two substitute (**s**) commands separated by a semicolon. The substitute command accepts an optional address range that select which lines the substitution should take place. In this case *regexp* address was used to select lines containing the *searchstring* for the first command; and the lines that do not contain the *searchstring* (note the exclamation mark after the regexp negating the match) for the second one. ## Edit This command will perform better and produce just the same result: ``` sed -i -e '/searchstring/ s/mystring/1/ ; s/mystring/0/' $target ``` The point is that commands are executed sequentially and thus if there is still a *mystring* substring in the current line *after* the first command finished then there is no *searchstring* in it for sure. Kudos to user946850.
Changing enum to next value [C++11] What I want to do is to use enum to specify different draw modes easily. So far this is what I've got: ``` class Grid { enum drawMode { GRID, EROSION, RIVERS, HUMIDITY, ATMOSPHERE } drawMode activeDraw; void draw() { switch(activeDraw) { case GRID: drawGrid(); break; case EROSION: drawErosion(); break; // etc.. } void keyPressed(int key) { switch(key) { case ' ': // Cycle activeDraw to next drawMode } } ``` So if user hits spacebar the activeDraw will change to next value from enum. So if the current activeDraw is GRID after hitting space it will change to EROSION and if activeDraw is ATMOSPHERE it will change to GRID. Is there a simple solution to this? Thanks.
As noted by Maroš Beťko, to add 1 to a variable, you have to cast the value to `int` and back: ``` activeDraw = static_cast<drawMode>(static_cast<int>(activeDraw) + 1); ``` If the enum is defined without the C++11 `enum class` syntax (like in the question's text), the casting to `int` is not necessary: ``` activeDraw = static_cast<drawMode>(activeDraw + 1); ``` To make it cycle back to zero, use integer arithmetic, modulo operator: ``` activeDraw = static_cast<drawMode>((activeDraw + 1) % (ATMOSPHERE + 1)); ``` To eliminate one ugly `+1`, add another element to the enum: ``` enum drawMode { ..., ATMOSPHERE, NUM_DRAW_MODES }; ... activeDraw = static_cast<drawMode>((activeDraw + 1) % NUM_DRAW_MODES); ``` You can also stuff this code into a `operator++` if you use it very often: ``` drawMode operator++(drawMode& mode) { mode = static_cast<drawMode>((mode + 1) % NUM_DRAW_MODES); return mode; } drawMode operator++(drawMode& mode, int) // postfix operator { drawMode result = mode; ++mode; return result; } ``` Overloading operators for `enum`s is rarely used, and some people consider it overkill (bad), but it will make your code shorter (and arguably cleaner).
How do I systematically test and think like a real tester My friend asked me this question today: how would you test a vending machine, and what are its test cases? I am able to give some test cases, but those are just random thoughts. I want to know how to systematically test a product or a piece of software. There are lots of kinds of tests, like unit testing, functional testing, integration testing, stress testing, etc. But I would like to know how to systematically test and think like a real tester. Can someone please explain to me how all these kinds of testing can be differentiated and which one can be applied in a real scenario? For example, testing a file system.
Even long-time, well respected, professional testers will tell you: It is an art more than a science. My trick to designing new test cases starts with the various types of tests you mention, and it must include all those to be thorough, but I try to find a list of all the ways I can interact with the code/product. For the vending machine example, there are tons of parts, inside and out. Simple testing, as the product is designed to work, gives plenty of cases - Does it give the correct change - How fast can it process the request - What if an item is out of stock - What if it is overfilled - What if the change drawer is full - What if the items are too big, or badly racked - What if the user puts in too little money - What if it is out of change Then there are the interesting cases, which normal users wouldn't think about. - What if you try to tip it over - Give it a fake coin - Steal from it - Put a coin in with a string - Give it funny amounts of change - Give it half-ripped bills - Pry it open with a crow-bar - Feed it bad power/brownout - Turn it off in the middle of various operations The way to think like a tester is figure out every possible way you can attack it, from all the "funny cases" in usual scenarios, to all the methods that are completely outside of how it should be used. Any point of input, including ones you might think the developers/owners have control over, are fair game. You can also use many automated test tools, such as pairwise test selection, model-based test toolkits, or for software, various stress/load and security tools. --- I feel like this answer was a good start, but I now realize it was only half of the story. Coming up with every single way you can possibly test the system is important. You need to learn to stretch the limits of your imagination, your problem decomposition skills, your understanding of chains of functionality/failure, and your domain knowledge about the thing you are testing. This is the point I was attempting to make above. With the right mindset, and with enough vigilance, these skills will start to improve very quickly - within a year, or within a few years (depending on the complexity of the domain). The second level of becoming a very competent tester is to determine which tests you should care about. You will always be able to break every system, in a ton of different ways. Whether those failures are important or not is a more interesting question, and is often much more difficult to answer. The benefit to answering this question, though, is two-fold. First, if you know why it is important to fix pieces of the system that break (or to skip fixing them!), then you can understand where you should focus your efforts. You know what you can afford to spend less time testing, and what you must spend more time on. Second, and more importantly, you will help your team expose where they should be focusing *their* efforts. You will start to uncover things that are called "second-order unknowns". Your team doesn't know what it doesn't know. The primary trick that helps you accomplish this is to always ask "why?", until whoever you are asking is stumped. An example: **Q:** Why this test? **A:** Because I want to exercise all functionality in the system. **Q:** Why does this system function this way? **A:** Because of the decisions that the programmer made, based on the product specifications. **Q:** Why did our product specifications ask for this? 
**A:** Because the company that we are writing the software for had a requirement that the software works this way. **Q:** Why did that company we are contracting for add that as a requirement? **A:** Because their users need to do :thing: **Q:** Why do the users need to do :thing:? **A:** Because they are trying to accomplish :xyz: **Q:** Why do they need to accomplish :xyz: **A:** Because they save money by doing :abc: **Q:** Why did they choose :xyz: to solve :abc:? **A:** ... good question. **Q:** What could they do instead? **A:** ... now that I think about it, there's a ton of options! Maybe one of them works better? With practice, you will start knowing which specific "why" questions to ask, and which to focus on. You will also learn to start deeper down the chain, and be less mechanical in your approach. This is no longer just about ensuring that the product matches the specifications that the dev, pm, customer, or end user provided. It also helps determine if the solution you are providing is the highest quality solution that your team could provide. A hidden requirement of this is that you must learn that half your job as a tester is to ask questions all the time. You might think that your team mates will be annoyed at this, but hopefully I've shown that it is both crucial to your development, and the quality of the product you are testing. Smart and curious teammates who care about the product (who aren't busy and frustrated) will love your questions.
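To make "systematically" a bit more concrete in code: here is a minimal, hypothetical sketch (the `vend` function below is a toy stand-in, not a real vending-machine API) of how a checklist like the one above turns into table-driven test cases, one row per scenario:

```
import pytest

def vend(price, inserted):
    """Toy model of just the payment logic: return change due, or None if underpaid."""
    return inserted - price if inserted >= price else None

@pytest.mark.parametrize("price, inserted, expected", [
    (75, 100, 25),   # does it give the correct change
    (75, 75, 0),     # exact payment, no change due
    (75, 50, None),  # user puts in too little money
])
def test_payment_scenarios(price, inserted, expected):
    assert vend(price, inserted) == expected
```

Each new "way to attack it" you think of becomes one more row in the table rather than a whole new test function.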
how to save and read array of array in NSUserdefaults in swift? I need to create an array to add objects with this format, like a dictionary in Swift: ["key1": "value1", "key2": "value2"] When I try to save it with `NSUserDefaults` all is correct, but when I read `NSUserDefaults` with the key, it crashes. What type of data does my var obj need? ``` let def = NSUserDefaults.standardUserDefaults() var key = "keySave" var element: AnyObject! var array1: [AnyObject!] = [] array1.append(["key1": "val1", "key2": "val2"]) array1.append(["key1": "val1", "key2": "val2"]) //save var savestring : [AnyObject!] savestring = array1 var defaults = NSUserDefaults.standardUserDefaults() defaults.setObject(savestring, forKey: key) defaults.synchronize() //read var obj: [AnyObject!] = [] if(obj != nil){ print("size: ") print(obj.count) //vary long value confused.. element = obj[0] //crash print(element.objectForKey("key1")) } ```
The question reads "array of array" but I think most people probably come here just wanting to know how to save an array to `UserDefaults`. For those people I will add a few common examples. # String array Save array ``` let array = ["horse", "cow", "camel", "sheep", "goat"] let defaults = UserDefaults.standard defaults.set(array, forKey: "SavedStringArray") ``` Retrieve array ``` let defaults = UserDefaults.standard let myarray = defaults.stringArray(forKey: "SavedStringArray") ?? [String]() ``` # Int array Save array ``` let array = [15, 33, 36, 723, 77, 4] let defaults = UserDefaults.standard defaults.set(array, forKey: "SavedIntArray") ``` Retrieve array ``` let defaults = UserDefaults.standard let array = defaults.array(forKey: "SavedIntArray") as? [Int] ?? [Int]() ``` # Bool array Save array ``` let array = [true, true, false, true, false] let defaults = UserDefaults.standard defaults.set(array, forKey: "SavedBoolArray") ``` Retrieve array ``` let defaults = UserDefaults.standard let array = defaults.array(forKey: "SavedBoolArray") as? [Bool] ?? [Bool]() ``` # Date array Save array ``` let array = [Date(), Date(), Date(), Date()] let defaults = UserDefaults.standard defaults.set(array, forKey: "SavedDateArray") ``` Retrieve array ``` let defaults = UserDefaults.standard let array = defaults.array(forKey: "SavedDateArray") as? [Date] ?? [Date]() ``` # Object array Custom objects (and consequently arrays of objects) take a little more work to save to `UserDefaults`. See the following links for how to do it. - [Save custom objects into NSUserDefaults](https://stackoverflow.com/questions/29986957/save-custom-objects-into-nsuserdefaults) - [Docs for saving color to UserDefaults](https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/DrawColor/Tasks/StoringNSColorInDefaults.html#//apple_ref/doc/uid/20001693) - [Attempt to set a non-property-list object as an NSUserDefaults](https://stackoverflow.com/questions/19720611/attempt-to-set-a-non-property-list-object-as-an-nsuserdefaults) # Notes - The nil coalescing operator (`??`) allows you to return the saved array or an empty array without crashing. It means that if the object returns nil, then the value following the `??` operator will be used instead. - As you can see, the basic setup was the same for `Int`, `Bool`, and `Date`. I also tested it with `Double`. As far as I know, anything that you can save in a property list will work like this.
Open Facebook page from Android app? from my Android app, I would like to open a link to a Facebook profile in the official Facebook app (if the app is installed, of course). For iPhone, there exists the `fb://` URL scheme, but trying the same thing on my Android device throws an `ActivityNotFoundException`. Is there a chance to open a Facebook profile in the official Facebook app from code?
In Facebook version 11.0.0.11.23 (3002850) `fb://profile/` and `fb://page/` no longer work. I decompiled the Facebook app and found that you can use `fb://facewebmodal/f?href=[YOUR_FACEBOOK_PAGE]`. Here is the method I have been using in production: ``` /** * <p>Intent to open the official Facebook app. If the Facebook app is not installed then the * default web browser will be used.</p> * * <p>Example usage:</p> * * {@code newFacebookIntent(ctx.getPackageManager(), "https://www.facebook.com/JRummyApps");} * * @param pm * The {@link PackageManager}. You can find this class through {@link * Context#getPackageManager()}. * @param url * The full URL to the Facebook page or profile. * @return An intent that will open the Facebook page/profile. */ public static Intent newFacebookIntent(PackageManager pm, String url) { Uri uri = Uri.parse(url); try { ApplicationInfo applicationInfo = pm.getApplicationInfo("com.facebook.katana", 0); if (applicationInfo.enabled) { // http://stackoverflow.com/a/24547437/1048340 uri = Uri.parse("fb://facewebmodal/f?href=" + url); } } catch (PackageManager.NameNotFoundException ignored) { } return new Intent(Intent.ACTION_VIEW, uri); } ```
What is the purpose of android:minSdkVersion and android:targetSdkVersion in the AndroidManifest.xml with respect to phoneGap/Cordova? There is a line in the AndroidManifest.xml ``` android:minSdkVersion="10" android:targetSdkVersion="19" ``` Does it mean that if I include the minimum and maximum SDK versions in the AndroidManifest.xml file and build the APK using the **phonegap/cordova CLI** **(Command Line Interface)**, then a **SINGLE APK** file generated can be installed on **ALL** Android devices ranging from **Android 2.3.4** to **Android 4.4**? I have read posts saying that, when developing with the Android SDK (a **native** APP), this enables the APP to work on the range of devices. Is it true for a PhoneGap/Cordova generated APK file as well? (Note: I am not planning to use Google Play services for distributing the APP.) Do we need to generate an APK file for each SDK version?
The implications of these two variables are the same for both native apps and PhoneGap/Cordova apps. `minSdkVersion` sets the *minimum version of Android required to run your application*. If a user is running any version below this, they will not be able to install your application (regardless of whether or not you are distributing via the Play Store). `targetSdkVersion` specifies the latest version of Android that you have tested for. It will not change who can install your app, but it will change the behavior of your application. For example, if this is less than 14, you won't have an action bar. If it is less than 19, then users running KitKat and above will *not* see your content in a Chrome-backed WebView (it will be the older WebView implementation). Generally you just set targetSdkVersion to the latest available version of Android. > > Do we need to generate APK file for each SDK version? > > > No. You need one APK with `minSdkVersion` set to the minimum version you support and `targetSdkVersion` set to the latest version of Android you have tested against. You *can* specify a `maxSdkVersion`, which will actually limit the maximum version you support, but you generally should not do this unless you have a good reason to.
Camel check file's last modified date frequently using scheduling service I want to use Camel in my project to check a file's last modified date every xx minutes using Camel's scheduling/timer service. I read the documentation for the file component and it seems there is a polling function; however, there is also a timer component for Camel. Does anyone have a code example for this requirement?
I would use the file consumer endpoint. Something like this: ``` file:c:/foldername?delay=5000 ``` This will scan the folder every 5 seconds for files and for each file send a message on the route. You would probably need to store the previous times somewhere, such as a text file or database, and then compare the modified variable passed in the message to the one stored in the database or file. A rough example would look as follows: ``` <route id="CheckFileRoute"> <from uri="file:d:/RMSInbox?delay=5000&amp;readLock=changed"/> <log message="${file:modified}"/> <bean ref="CompareDates"/> </route> ``` The file consumer will place a lot of information regarding the file in the header, such as the modified date. Go read this link for more details on the variables in the header: <http://camel.apache.org/file2.html> The compare dates bean would be a Java class that acts like a processor, with a structure like this: ``` import java.util.Date; import java.util.Map; import org.apache.camel.Body; import org.apache.camel.Handler; import org.apache.camel.Headers; public class CompareDates { @Handler public void checkDates(@Body Object msgbody, @Headers Map hdr) { Date newDate = (Date) hdr.get("CamelFileLastModified"); Date oldDate = readFromFileOrDatabase(); // your own lookup of the previously stored date if (newDate.after(oldDate)) { //the date has changed look busy } } } ``` Hope this gets you going.
Modifying this simple array creation function in a more Pythonic way I have the following function, `createFreeSpaces(first_byte, last_byte)`, that takes two numbers `first_byte` and `last_byte` (always integers) and creates a list of the numbers between them in a specific format. It's very easy, but a bit hard for me to explain, so let's see my try and an example. Ex: `createFreeSpaces(4, 7)` Output: ``` 555555 0 0 "FREE: [5.0]" 555555 0 0 "FREE: [5.1]" 555555 0 0 "FREE: [5.2]" 555555 0 0 "FREE: [5.3]" 555555 0 0 "FREE: [5.4]" 555555 0 0 "FREE: [5.5]" 555555 0 0 "FREE: [5.6]" 555555 0 0 "FREE: [5.7]" 555555 0 0 "FREE: [6.0]" 555555 0 0 "FREE: [6.1]" 555555 0 0 "FREE: [6.2]" 555555 0 0 "FREE: [6.3]" 555555 0 0 "FREE: [6.4]" 555555 0 0 "FREE: [6.5]" 555555 0 0 "FREE: [6.6]" 555555 0 0 "FREE: [6.7]" ``` This is my try; as you can see, it's a bit dirty and not very *Pythonic*. ``` def createFreeSpaces(first_byte, last_byte): start_bit = 0 end_bit = 7 b_start = first_byte + 1 b_end = last_byte b_diff = b_end - b_start h = 0 final_list = [] while h < b_diff * 8: if start_bit == 8: start_bit = 0 b_start = b_start + 1 final_list.append('555555 0 0 "FREE: [' + str(b_start) + '.' + str(start_bit) + ']"') s_start = b_start + 1 start_bit = start_bit + 1 h = h + 1 return final_list ``` I'm cleaning up my code, so could someone give me a hand and tell me how I can write this simple function in a more Pythonic way?
Since you say that the inputs will always be integers (according to the comments), you can use a single-line list comprehension for this. Example - ``` def createFreeSpaces(first_byte, last_byte): return ['555555 0 0 "FREE: [{}.{}]"'.format(x,y) for x in range(first_byte + 1, last_byte) for y in range(8)] ``` Making the list comprehension line a bit smaller - ``` def createFreeSpaces(fbyte, lbyte): fmt = '555555 0 0 "FREE: [{}.{}]"' return [fmt.format(x,y) for x in range(fbyte + 1, lbyte) for y in range(8)] ``` Demo - ``` >>> def createFreeSpacesNew(first_byte, last_byte): ... return ['555555 0 0 "FREE: [{}.{}]"'.format(x,y) for x in range(first_byte + 1, last_byte) for y in range(8)] ... >>> pprint.pprint(createFreeSpacesNew(4,7)) ['555555 0 0 "FREE: [5.0]"', '555555 0 0 "FREE: [5.1]"', '555555 0 0 "FREE: [5.2]"', '555555 0 0 "FREE: [5.3]"', '555555 0 0 "FREE: [5.4]"', '555555 0 0 "FREE: [5.5]"', '555555 0 0 "FREE: [5.6]"', '555555 0 0 "FREE: [5.7]"', '555555 0 0 "FREE: [6.0]"', '555555 0 0 "FREE: [6.1]"', '555555 0 0 "FREE: [6.2]"', '555555 0 0 "FREE: [6.3]"', '555555 0 0 "FREE: [6.4]"', '555555 0 0 "FREE: [6.5]"', '555555 0 0 "FREE: [6.6]"', '555555 0 0 "FREE: [6.7]"'] ```
Matching IPv6 address to a CIDR subnet Is there a good way to match an IPv6 address to an IPv6 subnet using CIDR notation? What I am looking for is the IPv6 equivalent to this: [Matching an IP to a CIDR mask in PHP 5?](https://stackoverflow.com/questions/594112/matching-an-ip-to-a-cidr-mask-in-php5) The example given above can't be used since an IPv6 address is 128 bits long, preventing the bitwise left-shift from working properly. Can you think of any other way? EDIT: Added my own solution to the list of answers.
Since you cannot convert IPv6 addresses to integers, you should operate on bits, like this: ``` $ip='21DA:00D3:0000:2F3B:02AC:00FF:FE28:9C5A'; $cidrnet='21DA:00D3:0000:2F3B::/64'; // converts inet_pton output to string with bits function inet_to_bits($inet) { $splitted = str_split($inet); $binaryip = ''; foreach ($splitted as $char) { $binaryip .= str_pad(decbin(ord($char)), 8, '0', STR_PAD_LEFT); } return $binaryip; } $ip = inet_pton($ip); $binaryip=inet_to_bits($ip); list($net,$maskbits)=explode('/',$cidrnet); $net=inet_pton($net); $binarynet=inet_to_bits($net); $ip_net_bits=substr($binaryip,0,$maskbits); $net_bits =substr($binarynet,0,$maskbits); if($ip_net_bits!==$net_bits) echo 'Not in subnet'; else echo 'In subnet'; ``` Also, if you use a database to store IPs, it may already have all the functions needed to compare them. For example, Postgres has an inet type and can determine whether an IP is contained within a subnet like this: ``` SELECT '21DA:00D3:0000:2F3B:02AC:00FF:FE28:9C5A'::inet << '21DA:00D3:0000:2F3B::/64'::inet; ``` [9.11. Network Address Functions and Operators in PostgreSQL](http://www.postgresql.org/docs/8.2/interactive/functions-net.html)
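For comparison (not PHP, but the same prefix-containment idea), Python's standard `ipaddress` module does this check in one expression, which can be handy for sanity-checking the bit-level implementation above:

```
import ipaddress

def in_subnet(ip, cidr):
    # strict=False tolerates host bits being set in the network address
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

print(in_subnet('21DA:00D3:0000:2F3B:02AC:00FF:FE28:9C5A', '21DA:00D3:0000:2F3B::/64'))  # True
```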
extraction of text from pdf with pdfminer gives multiple copies I am trying to extract text from a PDF file using PDFMiner (the code found at [Extracting text from a PDF file using PDFMiner in python?](https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python)). I didn't change the code except path/to/pdf. Surprisingly, the code returns several copies of the same document. I got the same result with other pdf files. Do I need to pass other arguments, or am I missing something? Any help is highly appreciated. Just in case, I provide the code: ``` from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.converter import TextConverter from pdfminer.layout import LAParams from pdfminer.pdfpage import PDFPage from cStringIO import StringIO def convert_pdf_to_txt(path): rsrcmgr = PDFResourceManager() retstr = StringIO() codec = 'utf-8' laparams = LAParams() device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams) fp = file(path, 'rb') interpreter = PDFPageInterpreter(rsrcmgr, device) password = "" maxpages = 0 caching = True pagenos=set() fstr = '' for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password,caching=caching, check_extractable=True): interpreter.process_page(page) str = retstr.getvalue() fstr += str fp.close() device.close() retstr.close() return fstr print convert_pdf_to_txt("test.pdf") ```
My answer was a bit incorrect in the thread that you are referencing. I found the bug and forgot to update my answer. Because the documentation is pretty sparse with pdfminer, I'm not able to fully explain why this works the way it does. Hopefully someone who knows the pdfminer library a bit better can give us some insight. All I know is that you have to do `text = retstr.getvalue()` outside of the for loop. I can only assume that `retstr` is being updated as if we were doing `final_text += text` inside the for loop, so once it's all finished we just have to do `text = retstr.getvalue()` to get the text from all the pages. ``` from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.converter import TextConverter from pdfminer.layout import LAParams from pdfminer.pdfpage import PDFPage from cStringIO import StringIO def convert_pdf_to_txt(path): rsrcmgr = PDFResourceManager() retstr = StringIO() codec = 'utf-8' laparams = LAParams() device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams) fp = file(path, 'rb') interpreter = PDFPageInterpreter(rsrcmgr, device) password = "" maxpages = 0 caching = True pagenos=set() for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,password=password,caching=caching, check_extractable=True): interpreter.process_page(page) text = retstr.getvalue() fp.close() device.close() retstr.close() return text print convert_pdf_to_txt("test.pdf") ``` Hope this helped!
Dynamic select2 not firing change event I have a form with a couple of selects inside. I'm applying the select2 jQuery plugin over those selects like this: ``` $("select.company_select, select.positions_select").select2(); ``` The selects work fine, but I have this code to autosubmit my form (I have the autosubmit class on the form tag). ``` var currentData; $('.autosubmit input, .autosubmit select, .autosubmit textarea').live('focus', function () { currentData = $(this).val(); }); $('.autosubmit input, .autosubmit select, .autosubmit textarea').live('change', function () { console.log('autosubmiting...'); var $this = $(this); if (!currentData || currentData != $this.val()) { $($this.get(0).form).ajaxSubmit(function (response, status, xhr, $form) { currentData = ""; }); } }); ``` The thing is that with select2, the change and focus events don't fire at all. If I remove select2, the events fire perfectly. What am I doing wrong?
Select2 has only 2 events, `open` and `change` (<http://select2.github.io/select2/#events>), you are able to add listeners only to them. You can use `open` event instead of `focus` for `<select>` element. And please don't use `live()` method, as it is deprecated. Use `on()` instead. ``` var currentData; $(".autosubmit select").on("open", function() { currentData = $(this).val(); }); $(".autosubmit input").on("focus", function() { currentData = $(this).val(); }); $(".autosubmit input, .autosubmit select").on("change", function() { var $this = $(this); console.log('autosubmitting'); if (!currentData || currentData != $this.val()) { $($this.get(0).form).ajaxSubmit(function (response, status, xhr, $form) { currentData = ""; }); } }); ``` Here is the [Fiddle](http://jsfiddle.net/4BLAa/412/)
Telling \_\_import\_\_ where to look - am I stuck with sys.path? I have a project in pure Python with a rudimentary plugin system: you write a module that defines a class with a specific interface and name, and the program imports the module and subsequently instantiates the class as needed. Currently, the plugins all come from a specific folder (subdirectory of where the main .py file is located). I would like to be able to have them elsewhere on disk, and instruct the program to look for plugins in a specific place. Can I do this, for one-off dynamic imports, in a cleaner way than modifying `sys.path`? I don't want to pollute this global. Related: can I count on `sys.path[0]` being the path to the script, even if that differs from the current working directory (`os.getcwd()`)? EDIT: I forgot to mention - I want to be able to get plugins from several different folders, with the user specifying paths to plugin folders. Currently, each of these folders is set up as a package (with an `__init__.py`); I can trivially scrap this if it causes a problem.
This might seem weird, but you can modify a module's `__path__` variable and then import from it. Then you're not messing with the global import space in sys.path. Edit: If the directories are loaded at run time, then you don't need a plugins.py file to store them. You can create the module dynamically: main.py: ``` #create the plugins module (pseudo-package) import sys, os sys.modules['plugins'] = plugins = type(sys)('plugins') plugins.__path__ = [] for plugin_dir in ['plugins1', 'plugins2']: path = os.path.join(sys.path[0], 'addons', plugin_dir) plugins.__path__.append(path) ``` After creating the dynamic module, you can load the plugins as before, using either `import_module` or `__import__`: ``` from importlib import import_module myplugins = [] for plugin in ['myplugin1', 'myplugin2']: myplugins.append(import_module('plugins.' + plugin)) myplugins[-1].init() ##or using __import__: myplugins = [] for plugin in ['myplugin1', 'myplugin2']: myplugins.append(getattr(__import__('plugins.' + plugin), plugin)) myplugins[-1].init() ``` addons/plugins1/myplugin1.py: ``` def init(): print('myplugin1') ``` addons/plugins2/myplugin2.py: ``` def init(): print('myplugin2') ``` I've never used this, but it does work in both Python 2 & 3.
Reading static files under a library in Dart? I am writing a library in Dart and I have static files under the library folder. I want to be able to read those files, but I'm not sure how to retrieve the path to it... there is not `__FILE__` or `$0` like in some other languages. **Update:** It seems that I was not clear enough. Let this help you understand me: *test.dart* ``` import 'foo.dart'; void main() { print(Foo.getMyPath()); } ``` *foo.dart* ``` library asd; class Foo { static Path getMyPath() => new Path('resources/'); } ``` It gives me the wrong folder location. It gives me the path to `test.dart` + `resources/`, but I want the path to `foo.dart` + `resources/`.
As mentioned, you can use mirrors. Here's an example using what you wanted to achieve: *test.dart* ``` import 'foo.dart'; void main() { print(Foo.getMyPath()); } ``` *foo.dart* ``` library asd; import 'dart:mirrors'; class Foo { static Path getMyPath() => new Path('${currentMirrorSystem().libraries['asd'].url}/resources/'); } ``` It should output something like: > > /Users/Kai/test/lib/resources/ > > > There will probably be a better way to do this in a future release. I will update the answer when this is the case. **Update:** You could also define a private method in the library: ``` /** * Returns the path to the root of this library. */ _getRootPath() { var pathString = new Path(currentMirrorSystem().libraries['LIBNAME'].url).directoryPath.toString().replaceFirst('file:///', ''); return pathString; } ```
Map bitwise enum to sql column value I have a bitwise enum with FlagsAttribute set over it like this - ``` [FlagsAttribute] public enum MyEnum { None = 0, First = 1, Second = 2, Third = 4, Five = 8, Six = 16, Seven = 32, Eight = 64, Nine = 128 } ``` Now, in C# I am storing this value in a property, say MyProperty, and on save I write this property to an integer column in my SQL database. Suppose I select `First,Second,Five` from code; then in the database it will be saved as `'11'`. I know I can fetch the value from the DB and just typecast the int value to MyEnum and it will give me the values. But I want some manipulation to be done on the SQL data in a stored procedure, where obviously I can't typecast it to an enum value. So, is there a way which can let me know about the individual values? Like in the example, if 11 is stored, is there any way I can get it as `"1+2+8"`?
This may help to get you started: ``` Select 11 & 1 As 'First' , 11 & 2 As 'Second' , 11 & 4 As 'Third' , 11 & 8 As 'Five' , 11 & 16 As 'Six' , 11 & 32 As 'Seven' , 11 & 64 As 'Eight' , 11 & 128 As 'Nine'; ``` Where `11` is your stored value. This will return non-zero values for each flag that is set (i.e. `Select 11 & 1 As 'First'` returns `1`, `Select 11 & 2 As 'Second'` returns `2`, `Select 11 & 4 As 'Third'` returns `0`, and so on).
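The per-flag masking works the same way in any language. As a quick illustration (Python here, purely for demonstration — the real string-building would live in your stored procedure), this is how `11` decomposes back into the `"1+2+8"` form you asked for:

```
FLAGS = [1, 2, 4, 8, 16, 32, 64, 128]  # the MyEnum values

def decompose(value):
    # keep each power-of-two flag whose bit is set in the stored value
    return "+".join(str(flag) for flag in FLAGS if value & flag)

print(decompose(11))  # "1+2+8"
```

In T-SQL you would build the string the same way: one `value & flag` check per flag, concatenating the matches.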
Simple way to refresh power pivot from VBA in Excel 2010? I want to perform the equivalent actions of: - Power Pivot > Tables > Update All - Pivot Table Tools > Data > Refresh All using VBA. All the tables are Excel tables contained within the file. Is there a simple way to do this in Excel 2010?
For Pivot Tables update, this code will work smoothly: ``` ThisWorkbook.RefreshAll ``` Or, if your Excel version is too old: ``` Dim Sheet as WorkSheet, _ Pivot as PivotTable For Each Sheet in ThisWorkbook.WorkSheets For Each Pivot in Sheet.PivotTables Pivot.RefreshTable Pivot.Update Next Pivot Next Sheet ``` --- --- In Excel 2013, to refresh PowerPivot, it is a simple line `ActiveWorkbook.Model.Refresh`. In Excel 2010, ... It is FAR more complicated! [Here is the general code made by Tom Gleeson](http://www.tomgleeson.ie/posts/201404/PowerPivotVBARefresh.html) : ``` ' ================================================== ' Test PowerPivot Refresh ' Developed By: Tom http://www.tomgleeson.ie ' Based on ideas by Marco Rosso, Chris Webb and Mark Stacey ' Dedicated to Bob Phillips a most impatient man ... ' Sep 2011 ' ' ======================================================= Option Explicit #If Win64 Then Public Declare PtrSafe Sub Sleep Lib "Kernel32" (ByVal dwMilliseconds As Long) #Else Public Declare Sub Sleep Lib "Kernel32" (ByVal dwMilliseconds As Long) #End If Sub Refresh() Dim lDatabaseID As String Dim lDimensionID As String Dim lTable As String Dim RS As Object 'ADODB.Recordset Dim cnn As Object 'ADODB.Connection Dim mdx As String Dim xmla As String Dim cnnName As String Dim lSPID As String Dim lArray Dim i As Long On Error Resume Next ' For Excel 2013+ use connection name e.g. "Text InvoiceHeaders" ' For Excel 2010 use table name e.g. "InvoiceHeaders" lTable = [TableToRefresh] On Error GoTo 0 ' if Excel 2013 onwards: use Connections or Model refresh option via Object Model If Application.Version() > 14 Then ' "wake up" model ActiveWorkbook.Model.Initialize If lTable <> "" Then ActiveWorkbook.Connections(lTable).Refresh Else ActiveWorkbook.Model.Refresh End If ' For Excel 2013 that's all folks. Exit Sub End If cnnName = "PowerPivot Data" '1st "wake up" default PowerPivot Connection ActiveWorkbook.Connections(cnnName).Refresh '2nd fetch that ADO connection Set cnn = ActiveWorkbook.Connections(cnnName).OLEDBConnection.ADOConnection Set RS = CreateObject("ADODB.Recordset") ' then fetch the dimension ID if a single table specified ' FIX: need to exclude all rows where 2nd char = "$" mdx = "select table_id,rows_count from $System.discover_storage_tables where not mid(table_id,2,1) = '$' and not dimension_name = table_id and dimension_name='<<<<TABLE_ID>>>>'" If lTable <> "" Then mdx = Replace(mdx, "<<<<TABLE_ID>>>>", lTable) RS.Open mdx, cnn lDimensionID = fetchDIM(RS) RS.Close If lDimensionID = "" Then lDimensionID = lTable End If End If ' then fetch a valid SPID for this workbook mdx = "select session_spid from $system.discover_sessions" RS.Open mdx, cnn lSPID = fetchSPID(RS) If lSPID = "" Then MsgBox "Something wrong - cannot locate a SPID !" Exit Sub End If RS.Close 'Next get the current DatabaseID - changes each time the workbook is loaded mdx = "select distinct object_parent_path,object_id from $system.discover_object_activity" RS.Open mdx, cnn lArray = Split(lSPID, ",") For i = 0 To UBound(lArray) lDatabaseID = fetchDatabaseID(RS, CStr(lArray(i))) If lDatabaseID <> "" Then Exit For End If Next i If lDatabaseID = "" Then MsgBox "Something wrong - cannot locate DatabaseID - refresh PowerPivot connection and try again !" Exit Sub End If RS.Close 'msgbox lDatabaseID If doXMLA(cnn, lDatabaseID, lDimensionID) = "OK" Then Sleep 1000 ' refresh connections and any related PTs ...
ActiveWorkbook.Connections(cnnName).Refresh End If End Sub Private Function doXMLA(cnn, databaseID As String, Optional dimensionID As String = "", Optional timeout As Long = 30) Dim xmla As String Dim lRet Dim comm As Object ' ADODB.Command ' The XMLA Batch request If dimensionID = "" Then xmla = "<Batch xmlns=""http://schemas.microsoft.com/analysisservices/2003/engine""><Parallel><Process xmlns:xsd=""http://www.w3.org/2001/XMLSchema"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:ddl2=""http://schemas.microsoft.com/analysisservices/2003/engine/2"" xmlns:ddl2_2=""http://schemas.microsoft.com/analysisservices/2003/engine/2/2"" xmlns:ddl100_100=""http://schemas.microsoft.com/analysisservices/2008/engine/100/100""><Object><DatabaseID><<<DatabaseID>>></DatabaseID></Object><Type>ProcessFull</Type><WriteBackTableCreation>UseExisting</WriteBackTableCreation></Process></Parallel></Batch>" xmla = Replace(xmla, "<<<DatabaseID>>>", databaseID) Else xmla = "<Batch xmlns=""http://schemas.microsoft.com/analysisservices/2003/engine""><Parallel><Process xmlns:xsd=""http://www.w3.org/2001/XMLSchema"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:ddl2=""http://schemas.microsoft.com/analysisservices/2003/engine/2"" xmlns:ddl2_2=""http://schemas.microsoft.com/analysisservices/2003/engine/2/2"" xmlns:ddl100_100=""http://schemas.microsoft.com/analysisservices/2008/engine/100/100""><Object><DatabaseID><<<DatabaseID>>></DatabaseID><DimensionID><<<DimensionID>>></DimensionID></Object><Type>ProcessFull</Type><WriteBackTableCreation>UseExisting</WriteBackTableCreation></Process></Parallel></Batch>" xmla = Replace(xmla, "<<<DatabaseID>>>", databaseID) xmla = Replace(xmla, "<<<DimensionID>>>", dimensionID) End If Set comm = CreateObject("ADODB.command") comm.CommandTimeout = timeout comm.CommandText = xmla Set comm.ActiveConnection = cnn comm.Execute ' Make the request 'On Error Resume Next - comment out on error as most are not trappable within VBA !!! 'lRet = cnn.Execute(xmla) 'If Err Then Stop doXMLA = "OK" End Function Private Function fetchDatabaseID(ByVal inRS As Object, SPID As String) As String Dim i As Long Dim useThis As Boolean Dim lArray Dim lSID As String lSID = "Global.Sessions.SPID_" & SPID Do While Not inRS.EOF 'Debug.Print inRS.Fields(0) If CStr(inRS.Fields(0)) = lSID Then lArray = Split(CStr(inRS.Fields(1)), ".") On Error Resume Next If UBound(lArray) > 2 Then ' find database permission activity for this SPID to fetch DatabaseID If lArray(0) = "Permissions" And lArray(2) = "Databases" Then fetchDatabaseID = CStr(lArray(3)) Exit Function End If End If End If On Error GoTo 0 inRS.MoveNext Loop inRS.MoveFirst fetchDatabaseID = "" End Function Private Function fetchSPID(ByVal inRS As Object) As String Dim lSPID As String lSPID = "" Do While Not inRS.EOF If lSPID = "" Then lSPID = CStr(inRS.Fields(0).Value) Else lSPID = lSPID & "," & CStr(inRS.Fields(0).Value) End If inRS.MoveNext Loop fetchSPID = lSPID End Function Private Function fetchDIM(ByVal inRS As Object) As String Dim lArray Dim lSID As String If Not inRS.EOF Then fetchDIM = inRS.Fields(0) Else fetchDIM = "" End If End Function ```
Is it possible to open file in specific encoding in geany? My system locale is ru_RU.KOI8-R and I want Geany to create all new files in this encoding. In its settings I set "Default encoding (new files)" to "Cyrillic (KOI8-R)" and it works for new files. But when I open any file without Cyrillic characters, Geany thinks it's in Unicode. Is there any way to tell Geany to open all files in KOI8-R (even if there are no non-ASCII characters inside them), or is the only way to put a character from the upper half of the codepage into all source files?
I tried to work on this problem, but it seems the only way to force Geany to use an encoding is to have a corresponding line at the beginning of the file. If the file contains UTF-8 characters, Geany will use that encoding. Among other things, I tried the following. You can also give them a try yourself, in case they work better on your machine. - To switch my system locale to Greek `ISO-8859-7` (it was `en_US.UTF-8` before), I first had to install the new locale using `dpkg-reconfigure locales` - To convert a test file from `UTF-8` to the desired encoding (`ISO-8859-7` in my case), I used the command `iconv -f UTF-8 -t ISO-8859-7 c.txt --output=c2.txt` - To change Geany preferences: in the Preferences-Files tab, apply the `Greek ISO-8859-7` encoding both for new files and for "Default Encoding (existing non Unicode Files)" PS: The setting can also be verified/changed directly by editing the file `~/.config/geany/geany.conf` and looking for the lines `pref_editor_default_new_encoding=UTF-8 #changed to ISO-8859-7` and `pref_editor_default_open_encoding=None #changed to ISO-8859-7` - In the end, the only thing that really worked was to insert at the beginning of the file the line: `# geany_encoding=ISO-8859-7 #` This solution is described in the [Geany online manual - In-file encoding specification section.](http://www.geany.org/manual/#in-file-encoding-specification) As a workaround to avoid opening all your files one by one and prepending the above line, you could use: ``` echo -e "# geany_encoding=ISO-8859-7 #\n$(cat c.txt)" >c.txt ``` You can also make a loop to quickly "geany-convert" all your files; something like this (note the command substitution must not be quoted, or the whole result becomes a single word): ``` for file in $(find . -type f -name "*.txt"); do echo -e "# geany_encoding=ISO-8859-7 #\n$(cat "$file")" > "$file"; done ``` I hope the ideas above help you solve your problem. You could also ask the Geany devs whether forcing an encoding when opening a file is in Geany's future plans. PS: You could always open the file as it is in Geany, press `reload as <your encoding>`, and save. This should save the file in the new encoding.
webpack import firebase not working I'm having an issue getting firebase 3.0.1 to work. I have a feeling it's in regards to my webpack setup. My files are below. When running my app with webpack dev server I get the error: > > Uncaught TypeError: firebase.initializeApp is not a function > > > The interesting thing is that if I put a `debugger;` or breakpoint after `var firebase = require('firebase');` it seems to be an empty object. **webpack.config.js** ``` const webpack = require("webpack"); module.exports = { entry: './src/index.js', output: { path: 'public', filename: 'bundle.js' }, module: { loaders: [{ test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader?presets[]=es2015&presets[]=react' }] }, plugins: process.env.NODE_ENV === 'production' ? [ new webpack.optimize.DedupePlugin(), new webpack.optimize.OccurrenceOrderPlugin(), new webpack.optimize.UglifyJsPlugin() ] : [] }; ``` **package.json** ``` { "name": "burn", "version": "1.0.0", "description": "burn messaging", "main": "index.js", "scripts": { "start": "if-env NODE_ENV=production && npm run start:prod || npm run start:dev", "start:dev": "webpack-dev-server --inline --content-base public --history-api-fallback", "start:prod": "webpack && firebase deploy" }, "author": "James Gilchrist <james@burn.today>", "license": "ISC", "dependencies": { "compression": "^1.6.2", "express": "^4.13.4", "firebase": "^3.0.1", "if-env": "^1.0.0", "react": "^15.0.2", "react-dom": "^15.0.2", "react-router": "^2.4.0" }, "devDependencies": { "babel-core": "^6.9.0", "babel-loader": "^6.2.4", "babel-preset-es2015": "^6.9.0", "babel-preset-react": "^6.5.0", "webpack": "^1.13.0", "webpack-dev-server": "^1.14.1" } } ``` **index.js** ``` var firebase = require('firebase'); var config = { apiKey: "AIzaSyA9gUmSBu4SZ4P9H_4lXuN1ouD_GBKq3aw", authDomain: "burn-56840.firebaseapp.com", databaseURL: "https://burn-56840.firebaseio.com", storageBucket: "burn-56840.appspot.com" }; firebase.initializeApp(config); ```
I had the [same problem](https://stackoverflow.com/questions/37311500/firebase-reference-is-empty-after-requiring-new-version); there's a simple fix though:

```
var firebase = require('firebase/app');
```

This way you get the "real" firebase module. However, you must now require each module you'll need so it loads correctly, like so:

```
var firebase = require('firebase/app');

// all 3 are optional and you only need to require them at the start
require('firebase/auth');
require('firebase/database');
require('firebase/storage');
```

It seems to me that something is wrong with the current initialisation code; looking at the source, it should work. But then again, somewhat like you, I'm using browserify and haven't tested outside of it, so it might be related.
Slick 3 multiple outer joins
From the Slick documentation, it's clear how to make a single left join between two tables.

```
val q = for {
  (t, v) <- titles joinLeft volumes on (_.uid === _.titleUid)
} yield (t, v)
```

Query q will, as expected, have attributes: `_1` of type `Titles` and `_2` of type `Rep[Option[Volumes]]` to cover for non-existing volumes.

Further cascading is problematic:

```
val q = for {
  ((t, v), c) <- titles joinLeft volumes on (_.uid === _.titleUid) joinLeft chapters on (_._2.uid === _.volumeUid)
} yield /* etc. */
```

This won't work, because `_._2.uid === _.volumeUid` is invalid: `_._2` is a `Rep[Option[Volumes]]`, so `.uid` does not exist on it. According to various sources on the net, this shouldn't be an issue, but then again, sources tend to target different Slick versions, and 3.0 is still rather new. Does anyone have some clue on the issue?

To clarify, the idea is to use two left joins to extract data from 3 cascading 1:n:n tables.

Equivalent SQL would be:

```
Select * from titles
left join volumes on titles.uid = volumes.title_uid
left join chapters on volumes.uid = chapters.volume_uid
```
Your second left join is no longer operating on a `TableQuery[Titles]`, but instead on what is effectively a `Query[(Titles, Option[Volumes])]` (ignoring the result and collection type parameters). When you join the resulting query on your `TableQuery[Chapters]` you can access the second entry in the tuple using the `_2` field (since it's an `Option` you'll need to `map` to access the `uid` field): ``` val q = for { ((t, v), c) <- titles joinLeft volumes on (_.uid === _.titleUid) joinLeft chapters on (_._2.map(_.uid) === _.volumeUid) } yield /* etc. */ ``` ### Avoiding `TupleN` If the `_N` field syntax is unclear, you can also use [Slick's capacity for user-defined record types](http://slick.typesafe.com/doc/3.0.0/userdefined.html#monomorphic-case-classes) to map your rows alternatively: ``` // The `Table` variant of the joined row representation case class TitlesAndVolumesRow(title: Titles, volumes: Volumes) // The DTO variant of the joined row representation case class TitleAndVolumeRow(title: Title, volumes: Volume) implicit object TitleAndVolumeShape extends CaseClassShape(TitlesAndVolumesRow.tupled, TitleAndVolumeRow.tupled) ```
CSS issue with IE9 and floating divs aligning
I am having an issue where I have a left-floated div with an image in it and then a stack of a couple of divs on the right. It displays fine in FF, Chrome, and IE9 when in compatibility mode; however, the bottommost div gets pushed underneath the image when viewed in normal IE9...

Left div:

```
{
float: left;
clear: both;
margin-right: 10px;
}
```

Right div (the one that shows up underneath in IE9):

```
width: 350px;
float: right;
margin-left: 10px;
height: 150px;
overflow: hidden;
```

Here is what it looks like in IE9: <https://i.stack.imgur.com/JNqn6.png>

Here is what it looks like in Chrome or FF: <https://i.stack.imgur.com/S1RBY.png>
I think this is one of those clearfix issues. I also run into these problems every now and then. The fix (or hack) is to add a so-called pseudo-element to the containing element so that the floats inside it are cleared and it renders correctly. So:

```
/**
 * For modern browsers
 * 1. The space content is one way to avoid an Opera bug when the
 *    contenteditable attribute is included anywhere else in the document.
 *    Otherwise it causes space to appear at the top and bottom of elements
 *    that are clearfixed.
 * 2. The use of `table` rather than `block` is only necessary if using
 *    `:before` to contain the top-margins of child elements.
 */
.cf:before,
.cf:after {
    content: " "; /* 1 */
    display: table; /* 2 */
}

.cf:after {
    clear: both;
}

/**
 * For IE 6/7 only
 * Include this rule to trigger hasLayout and contain floats.
 */
.cf {
    *zoom: 1;
}
```

Here `cf` is the class you add to the element that contains your floated divs. It's from <http://nicolasgallagher.com/micro-clearfix-hack/>

I hope this fixes it for you, let me know.
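For illustration, applied to markup like the question's, the container around the floats gets the class (the inner class names here are made up):

```
<div class="cf">
    <div class="left-image"><img src="photo.jpg" alt=""></div>
    <div class="right-column">...</div>
</div>
```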
How can I decorate my ASP.NET MVC ViewModel property to render as a textarea when using EditorForModel()
What attribute can I add to a view model property so that `Html.EditorForModel()` renders it as a textarea?
You could decorate the model property with the [`[DataType(DataType.MultilineText)]`](http://msdn.microsoft.com/en-us/library/system.componentmodel.dataannotations.datatypeattribute.aspx) attribute: Model: ``` public class MyModel { [DataType(DataType.MultilineText)] public string MyProperty { get; set; } } ``` Controller: ``` public class HomeController : Controller { public ActionResult Index() { return View(new MyModel()); } } ``` View: ``` <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<SomeNs.Models.MyModel>" %> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <%: Html.EditorForModel() %> </asp:Content> ```
Haskell elem function definition has t a, can you explain that?
I was looking at the type annotation of the function `elem`. It looked like this:

```
elem :: Eq a => a -> t a -> Bool
```

I understand that `Eq a =>` is a type constraint, so `a` must support `==`. But I don't understand `t a`. Where does this `t` come from? What does it do? Why is it necessary?

Link: <https://hackage.haskell.org/package/base-4.16.0.0/docs/Prelude.html#v:elem>
The `t` is constrained by the [**`Foldable`** class](https://hackage.haskell.org/package/base-4.16.0.0/docs/Prelude.html#t:Foldable), where `elem` is defined, so the full signature is:

```
elem :: (Foldable t, Eq a) => a -> t a -> Bool
```

`elem` thus not only works on a list (where `t ~ []`), but on any `Foldable`, so `t ~ Maybe`, `t ~ NonEmpty`, `t ~ Tree`, etc.

Its [default implementation is [src]](https://hackage.haskell.org/package/base-4.16.0.0/docs/src/Data.Foldable.html#elem):

> ```
> elem :: Eq a => a -> t a -> Bool
> elem = any . (==)
> ```

It thus checks whether any element of the `Foldable` is equal to the query element. This means that it can also work for a `Maybe a`, for example, where `Nothing` can be seen as an empty collection and `Just x` as a collection with one element `x`; or it can check whether the element is one of the values in a rose tree [`Tree a`](https://hackage.haskell.org/package/containers-0.6.5.1/docs/Data-Graph.html#t:Tree).
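A quick GHCi session illustrating this; the `Maybe` cases rely on its `Foldable` instance:

```
ghci> 3 `elem` [1, 2, 3]   -- t ~ []
True
ghci> 3 `elem` Just 3      -- t ~ Maybe, one-element "collection"
True
ghci> 3 `elem` Nothing     -- t ~ Maybe, empty "collection"
False
```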
From where does a platform device get its name?
I am reading about the Linux device model, which is built around buses, devices and drivers. I understand a bit about how device/driver matches happen, but I am not clear about the role of buses here, i.e. how a bus matches a device with a driver. One more doubt I have is regarding where a platform device gets its name from.

"The platform bus simply compares the name of each device against the name of each driver; if they are the same, the device matches the driver."

I couldn't really understand the above point. I believe the device name is first defined in the dts file, and then the corresponding driver name is defined in the platform driver code. If these two names match, the driver's probe is called, which confirms that the device really exists. Could anybody explain the whole process, especially from the bus's point of view?
To add to @Federico's answer, which describes very well the general case, *platform devices* can be matched to *platform drivers* using four things (that are prioritized). Here is the [match function](https://github.com/torvalds/linux/blob/master/drivers/base/platform.c) of the platform "bus":

```
static int platform_match(struct device *dev, struct device_driver *drv)
{
    struct platform_device *pdev = to_platform_device(dev);
    struct platform_driver *pdrv = to_platform_driver(drv);

    /* Attempt an OF style match first */
    if (of_driver_match_device(dev, drv))
        return 1;

    /* Then try ACPI style match */
    if (acpi_driver_match_device(dev, drv))
        return 1;

    /* Then try to match against the id table */
    if (pdrv->id_table)
        return platform_match_id(pdrv->id_table, pdev) != NULL;

    /* fall-back to driver name match */
    return (strcmp(pdev->name, drv->name) == 0);
}
```

Here are two important ones.

## OF style match

Match using the device tree (`of_driver_match_device`). If you don't know the device tree concept yet, [go read about it](http://www.devicetree.org/Main_Page). In this data structure, each *device* has its own node within a tree representing the system. Each device also has a `compatible` property which is a list of strings. If any platform driver declares one of the `compatible` strings as being supported, there will be a match and the driver's probe will be called.

Here's an [example of a node](https://github.com/torvalds/linux/blob/master/arch/arm/boot/dts/am33xx.dtsi):

```
gpio0: gpio@44e07000 {
    compatible = "ti,omap4-gpio";
    ti,hwmods = "gpio1";
    gpio-controller;
    #gpio-cells = <2>;
    interrupt-controller;
    #interrupt-cells = <1>;
    reg = <0x44e07000 0x1000>;
    interrupts = <96>;
};
```

This describes a GPIO controller. It only has one compatible string, which is `ti,omap4-gpio`. Any registered platform driver declaring this same compatible string will be probed. Here's [its driver](https://github.com/torvalds/linux/blob/master/drivers/gpio/gpio-omap.c):

```
static const struct of_device_id omap_gpio_match[] = {
    {
        .compatible = "ti,omap4-gpio",
        .data = &omap4_pdata,
    },
    {
        .compatible = "ti,omap3-gpio",
        .data = &omap3_pdata,
    },
    {
        .compatible = "ti,omap2-gpio",
        .data = &omap2_pdata,
    },
    { },
};
MODULE_DEVICE_TABLE(of, omap_gpio_match);

static struct platform_driver omap_gpio_driver = {
    .probe      = omap_gpio_probe,
    .driver     = {
        .name   = "omap_gpio",
        .pm = &gpio_pm_ops,
        .of_match_table = of_match_ptr(omap_gpio_match),
    },
};
```

The driver is able to drive three types of GPIOs, including the one mentioned before. Please note that platform devices are not magically added to the platform bus. The architecture/board initialization will call `platform_device_add` or `platform_add_devices`, in this case with the help of OF functions to scan the tree.

## Name matching

If you look at `platform_match`, you will see that the match falls back to name matching. A simple string comparison is done between the driver name and the device name. This is how older platform drivers worked.
Some of them still do, like [this one here](https://github.com/torvalds/linux/blob/master/sound/soc/fsl/imx-ssi.c):

```
static struct platform_driver imx_ssi_driver = {
    .probe = imx_ssi_probe,
    .remove = imx_ssi_remove,

    .driver = {
        .name = "imx-ssi",
        .owner = THIS_MODULE,
    },
};

module_platform_driver(imx_ssi_driver);
```

Again, the board-specific initialization will have to call `platform_device_add` or `platform_add_devices` to add the platform devices, which in the case of name-matched ones are created entirely statically in C (the name is given in C, along with resources like IRQs, base addresses, etc.).
Node Express - Storage and retrieval of authentication tokens
I have an Express application set up and need some advice on storing tokens. I am receiving an access token from an OAuth 2 server after authenticating a user account, which I then need to use for subsequent API requests.

I want to hide the token value from the client, and I believe one way of doing this is to save the token on the server in an encoded cookie. When further requests are made, these can be routed through middleware; the cookie can then be used to retrieve the token stored server side, which is then used as a header value in the ongoing request to the actual API endpoint.

Someone has actually already asked this question - [How to store an auth token in an Angular app](https://stackoverflow.com/questions/23773408/how-to-store-an-auth-token-in-an-angular-app)

This is exactly the flow I am working with in my application, but the answer talks about using an Angular service, and I'm not so sure I would want to do this. Surely this can all be handled by Express so the client-side code doesn't need to know about the token, just any errors the API server returns back.

So, a summary of the flow I think I need:

- User submits login credentials
- OAuth 2 server returns access token
- Token is saved somewhere in Express, keyed by an id of sorts
- A cookie is generated and sent back in response to the client. The cookie contains the token value encoded, perhaps? Or maybe the id of the token value stored in an Express middleware component?
- Client makes an API request, which Express route middleware picks up.
- Express checks for the presence of the cookie and either decodes the token value, or somehow retrieves it from the server-side storage mechanism.
- The token value is then used as a header value between Express and the final API endpoint

There is probably middleware already out there that handles this kind of thing. I have already seen PassportJS, which seems to be the kind of thing I may want to use, but I'm not so sure it handles the OAuth2 token flow on the server I am working against (password grant); it seems more suited to the redirect login OAuth flow.

I surely need somewhere to save the token value in Express, so some form of storage (not in memory, I don't think). I am fairly new to Express, so I would appreciate any suggestions/advice on how to approach this. Thanks
The most secure way to do this is just as you described: - Get an OAuth token from some third party service (Google, Facebook, whatever). - Create a cookie using Express, and store that token in the cookie. Make sure you also set the `secure` and `httpOnly` cookie flags when you do this: this ensures the cookie CANNOT BE READ by client-side Javascript code, or over any non-SSL connection. - Each time the user makes a request to your site, that cookie can be read by your middleware in Express, and used to make whatever API calls you need to the third party service. If your service also needs to make asynchronous requests to Google / Facebook / etc. when the user is NOT actively clicking around on your site, you should also store their token in your user database somewhere as well -- this way you can make requests on behalf of the user whenever you need to. I'm the author of [express-stormpath](https://github.com/stormpath/express-stormpath), a Node auth library (similar to Passport), and this is how we do things over there to ensure maximal security!
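To make that concrete, here is a minimal sketch of the flow in Express (the route path, cookie name, and use of the `cookie-parser` middleware are my assumptions, not something prescribed above):

```
var express = require('express');
var cookieParser = require('cookie-parser');

var app = express();
app.use(cookieParser());

// After a successful OAuth2 exchange, store the token in a hardened cookie.
app.post('/login', function (req, res) {
  var accessToken = '...'; // value obtained from the OAuth2 server
  res.cookie('access_token', accessToken, {
    httpOnly: true, // not readable by client-side JavaScript
    secure: true    // only ever sent over SSL connections
  });
  res.sendStatus(204);
});

// On later requests, middleware reads the cookie back and forwards the
// token as a header to the third party API.
app.use(function (req, res, next) {
  var token = req.cookies.access_token;
  if (token) {
    req.headers['authorization'] = 'Bearer ' + token;
  }
  next();
});
```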
use for loop to visit all elements in a HashSet (Java)?
I have written the code as:

```
public class Solution {
    public int[] intersection(int[] nums1, int[] nums2) {
        HashSet<Integer> has1 = new HashSet(Arrays.asList(nums1));
        for (int i: has1)
            System.out.println(i);
        return nums1;
    }
}

num1: [1,2,4,2,3]
num2: [4,5,6,3]
```

On the for loop it says `java.lang.ClassCastException: [I cannot be cast to java.lang.Integer`.
You cannot do this directly; you need an indirect approach instead. The root cause of the exception is that `Arrays.asList(nums1)` called on an `int[]` produces a `List<int[]>` with a single element (the array itself), and the raw `HashSet` hides that type error until you iterate. Box the `int` values yourself:

```
int[] a = { 1, 2, 3, 4 };
Set<Integer> set = new HashSet<>();
for (int value : a) {
    set.add(value);
}

for (Integer i : set) {
    System.out.println(i);
}
```

Using Java 8:

```
// recommended
Set<Integer> newSet = IntStream.of(a).boxed().collect(Collectors.toSet());

// also applicable, if you only need to print the values
IntStream.of(a).boxed().forEach(i -> System.out.println(i));
```

The plain for-each loop above is sufficient for you; if you want to end up with a `Set`, go with the `Collectors.toSet()` variant.
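Since the original question is an intersection exercise, here is how the whole method could look once the boxing is handled; a sketch assuming Java 8, where `retainAll` keeps only the values common to both sets:

```
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class Solution {
    public int[] intersection(int[] nums1, int[] nums2) {
        Set<Integer> set1 = IntStream.of(nums1).boxed().collect(Collectors.toSet());
        Set<Integer> set2 = IntStream.of(nums2).boxed().collect(Collectors.toSet());
        set1.retainAll(set2); // keep only values present in both arrays
        return set1.stream().mapToInt(Integer::intValue).toArray();
    }
}
```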
How to sort a pandas data frame by value counts of a column? I'd like to sort the following pandas data frame by the result of `df['user_id'].value_counts()`. ``` import pandas as pd n = 100 df = pd.DataFrame(index=pd.Index(range(1, n+1), name='gridimage_id')) df['user_id'] = 2 df['has_term'] = True df.iloc[:10, 0] = 1 ``` The sort should be stable, meaning that whilst user 2's rows would come before user 1's rows, the user 2's rows and user 1's rows would be in the original order. I was thinking about using `df.groupby`, merging `df['user_id'].value_counts()` with the data frame, and also converting `df['user_id']` to ordered categorical data. However, none of these approaches seemed particularly elegant. Thanks in advance for any help!
## `transform` and `argsort` Use `kind='mergesort'` for stability ``` df.iloc[df.groupby('user_id').user_id.transform('size').argsort(kind='mergesort')] ``` --- ## `factorize`, `bincount`, and `argsort` Use `kind='mergesort'` for stability ``` i, r = pd.factorize(df['user_id']) a = np.argsort(np.bincount(i)[i], kind='mergesort') df.iloc[a] ``` --- ## Response to Comments > > Thank you @piRSquared. Is it possible to reverse the sort order, though? value\_counts is in descending order. In the example, user 2 has 90 rows and user 1 has 10 rows. I'd like user 2's rows to come first. Unfortunately, Series.argsort ignores the order kwarg. – Iain Dillingham 4 mins ago > > > ### Quick and Dirty Make the counts negative ``` df.iloc[df.groupby('user_id').user_id.transform('size').mul(-1).argsort(kind='mergesort')] ``` Or ``` i, r = pd.factorize(df['user_id']) a = np.argsort(-np.bincount(i)[i], kind='mergesort') df.iloc[a] ```
Handling Empty Nodes Using Java DOM I have a question concerning XML, Java's use of DOM, and empty nodes. I am currently working on a project wherein I take an XML descriptor file of abstract machines (for text parsing) and parse a series of input strings with them. The actual building and interpretation of these abstract machines is all done and working fine, but I have come across a rather interesting XML requirement. Specifically, I need to be able to turn an empty InputString node into an empty string ("") and still execute my parsing routines. The problem, however, occurs when I attempt to extract this blank node from my XML tree. This causes a null pointer exception and then generally bad things start happening. Here is the offending snippet of XML (Note the first element is empty): ``` <InputStringList> <InputString></InputString> <InputString>000</InputString> <InputString>111</InputString> <InputString>01001</InputString> <InputString>1011011</InputString> <InputString>1011000</InputString> <InputString>01010</InputString> <InputString>1010101110</InputString> </InputStringList> ``` I extract my strings from the list using: ``` //Get input strings to be validated xmlElement = (Element)xmlMachine.getElementsByTagName(XML_INPUT_STRING_LIST).item(0); xmlNodeList = xmlElement.getElementsByTagName(XML_INPUT_STRING); for (int j = 0; j < xmlNodeList.getLength(); j++) { //Add input string to list if (xmlNodeList.item(j).getFirstChild().getNodeValue() != null) { arrInputStrings.add(xmlNodeList.item(j).getFirstChild().getNodeValue()); } else { arrInputStrings.add(""); } } ``` How should I handle this empty case? I have found a lot of information on removing blank text nodes, but I still actually have to parse the blank nodes as empty strings. Ideally, I would like to avoid using a special character to denote a blank string. Thank you in advance for your time.
```
if (xmlNodeList.item(j).getFirstChild().getNodeValue() != null) {
```

`nodeValue` shouldn't be null; it would be `firstChild` itself that might be null and should be checked for:

```
Node firstChild = xmlNodeList.item(j).getFirstChild();
arrInputStrings.add(firstChild == null ? "" : firstChild.getNodeValue());
```

However, note that this is still sensitive to the content being only one text node. If you had an element with another element inside, or some text and a CDATA section, just getting the value of the first child isn't enough to read the whole text. What you really want is the [`textContent` property](http://www.w3.org/TR/DOM-Level-3-Core/core.html#Node3-textContent) from DOM Level 3 Core, which will give you all the text inside the element, however it is contained.

```
arrInputStrings.add(xmlNodeList.item(j).getTextContent());
```

This is available from [Java 1.5](http://download.oracle.com/javase/1.5.0/docs/api/org/w3c/dom/Node.html#getTextContent%28%29) onwards.
960 Grid System - 12 col - Touching the container edge
I am using the 960 grid system to try to create a layout... I have the following code:

```
<div class="container_12">
<div class="grid_3 alpha"></div>
<div class="grid_9 omega"></div>
</div>
```

I am using alpha and omega to remove the left and right margin respectively. This enables the divs to touch the left edge of the container. The problem, however, is that the right-hand grid_9 omega does not touch the right-hand side.

I understand why this is happening, but I do not know how to correct this behaviour using 960 methods.

Thanks,
It may help to understand the fundamentals behind the 960 grid framework. This framework is based on a very simple principle that combines fixed widths and margins to create a grid-like layout for rapid website development. The entire framework utilizes `float: left`, which allows the grids to display side by side, with a 20px buffer between each grid.

Thus, I believe you are misunderstanding the use of the `"alpha"` and `"omega"` classes. These classes are intended to remove margins on grids that are children of other grids, so that the margin is not multiplied. Take this code for example:

```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>960 Grid System &mdash; Demo</title>
    <link rel="stylesheet" href="css/reset.css" />
    <link rel="stylesheet" href="css/text.css" />
    <link rel="stylesheet" href="css/960.css" />
</head>
<body>
<div class="container_12" style="background:blue;">
    <div class="grid_3 alpha" style="background:yellow;">Grid_3 Alpha</div>
    <div class="grid_9 omega" style="background:green;">Grid_9 Omega</div>
</div>
</body>
</html>
```

This produces something similar to:

![Standard 960 layout](https://i.stack.imgur.com/0Pa17.png)

You will notice that there is no margin to the left of Grid_3, but there is a 20 pixel margin between Grid_3 and Grid_9. This is caused by Grid_3 having a `margin-right:10px` and Grid_9 having a `margin-left:10px`. When both divs are floated left, they produce this spacing. You will also notice that there is another 10px margin to the right of Grid_9. This is because the left margin has been removed from Grid_3, which has shifted the entire layout over 10px inside the container_12 div.

In order to achieve the layout you described, which from my understanding should look like this:

![960 layout with Float](https://i.stack.imgur.com/NQfJH.png)

you will need to either create a new class that applies `float:right` to Grid_9, or increase Grid_9's width. Doing this inline would look something like this:

```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>960 Grid System &mdash; Demo</title>
    <link rel="stylesheet" href="css/reset.css" />
    <link rel="stylesheet" href="css/text.css" />
    <link rel="stylesheet" href="css/960.css" />
</head>
<body>
<div class="container_12" style="background:blue;">
    <div class="grid_3 alpha" style="background:yellow;">Grid_3 Alpha</div>
    <div class="grid_9 omega" style="float:right; background:green;">Grid_9 Omega</div>
</div>
</body>
</html>
```
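If you'd rather avoid inline styles, a small helper class works too; the class name `pull-right` here is just an illustration, not part of 960.gs:

```
.pull-right {
    float: right;
}
```

```
<div class="container_12">
    <div class="grid_3 alpha">Grid_3 Alpha</div>
    <div class="grid_9 omega pull-right">Grid_9 Omega</div>
</div>
```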
Unlocking lock owned by another thread java
I have a LockManager that manages the locks of several threads. Sometimes the threads are bad boys, and I have to kill them and ask the LockManager to release all their locks. However, since I use ReentrantLock in Java, this is impossible: I cannot unlock a lock owned by another thread.

I am forced to use locks (I cannot use semaphores; that is the point of the homework). Is there any Java `Lock` implementation that allows me to unlock locks owned by other threads?

So far the options I have considered are:

- re-implementing ReentrantLock in a way that allows me to do this
- making some sort of mapping between Semaphores and ReentrantLocks

Extra sources you may find useful:

- [Reentrant locks - Unlocking from another thread](http://www.coderanch.com/t/431859/threads/java/Reentrant-locks-Unlocking-thread)
- [Unlocking a lock from a thread which doesn't own it, or redesigning to avoid this?](https://stackoverflow.com/questions/2196220/unlocking-a-lock-from-a-thread-which-doesnt-own-it-or-redesigning-to-avoid-thi/2228883#2228883)
Would you be allowed to use your own `Lock`? Here's a class that completely proxies the `Lock` but when it is told to force the unlock it merely replaces the lock it is proxying with a new one. This should have the effect you want. Sadly it still does not deal with the locks that are left dangling but that now becomes somebody else's problem. Your locks are now magically unlocked. ``` static class LockProxy<L extends Lock> implements Lock { // The actual lock. private volatile Lock lock; public LockProxy(L lock) { // Trap the lock we are proxying. this.lock = lock; } @Override public void lock() { // Proxy it. lock.lock(); } @Override public void lockInterruptibly() throws InterruptedException { // Proxy it. lock.lockInterruptibly(); } @Override public boolean tryLock() { // Proxy it. return lock.tryLock(); } @Override public boolean tryLock(long l, TimeUnit tu) throws InterruptedException { // Proxy it. return lock.tryLock(l, tu); } @Override public void unlock() { // Proxy it. lock.unlock(); } @Override public Condition newCondition() { // Proxy it. return lock.newCondition(); } // Extra functionality to unlock from any thread. public void forceUnlock() { // Actually just replace the perhaps locked lock with a new one. // Kinda like a clone. I expect a neater way is around somewhere. if (lock instanceof ReentrantLock) { lock = new ReentrantLock(); } else { throw new UnsupportedOperationException( "Cannot force unlock of lock type " + lock.getClass().getSimpleName()); } } } ```
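For completeness, usage might look something like this (the surrounding LockManager bookkeeping is my sketch, not part of the answer above):

```
LockProxy<ReentrantLock> lock = new LockProxy<ReentrantLock>(new ReentrantLock());

// Normal use by a well-behaved thread:
lock.lock();
try {
    // ... critical section ...
} finally {
    lock.unlock();
}

// Later, the LockManager can release the lock on a misbehaving thread's
// behalf; threads that call lock() afterwards use the fresh inner lock.
lock.forceUnlock();
```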
Import and not used error
I'm getting the error below with this import code:

Code:

```
package main

import (
    "log"
    "net/http"
    "os"

    "github.com/emicklei/go-restful"
    "github.com/emicklei/go-restful/swagger"

    "./api"
)
```

Error:

```
.\main.go:9: imported and not used: "_/c_/Users/aaaa/IdeaProjects/app/src/api"
```

Is there a reason why the import is not working, given that I have `package api` and its files stored under the api folder?

Below is how I use `api` in main.go:

```
func main() {
    // to see what happens in the package, uncomment the following
    restful.TraceLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile))

    wsContainer := restful.NewContainer()

    api := ApiResource{map[string]OxiResp{}}
    api.registerLogin(wsContainer)
    api.registerAccount(wsContainer)
    api.registerLostLogin(wsContainer)
    api.registerWallet(wsContainer)
}
```
The compiler looks for *actual use of a package* .. not the fact it exists. You need to use something from that package.. or remove the import. E.g: ``` v := api.Something ... ``` If you don't use anything from that package in your source file .. you don't need to import it. That is, unless you want the `init` function to run. In which case, you can use the ignore notation `import _`. EDIT: After your update, it appears you're overwriting the package import here: ``` api := ApiResource{map[string]OxiResp{}} ``` That declares a variable called `api`. Now the compiler thinks its a variable, and so you're not actually using the `api` package.. you're using the `api` variable. You have a few options. Firstly, you can call that variable something else (probably what I would do): ``` apiv := ApiResource{map[string]OxiResp{}} ``` Or, alias your import (not what I would do.. but an option nonetheless): ``` import ( // others here api_package "./api" ) ``` The problem is that the compiler is confused on what to use. The `api` package.. or the `api` variable you have declared. You should also import the package via the `GOPATH` instead of relatively.
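As a side note on the `import _` form mentioned above, a blank import looks like this; it runs the package's `init` functions without requiring any identifier from it to be used:

```
import (
    _ "./api" // imported for side effects (init) only
)
```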
What is os.log for?
I am going through a few Swift tutorials on how to build simple apps, as I am just starting to code. I want to make my app iOS 9 compatible, as I have an iPad 3. However, all the `os.log` statements generate an error in Xcode which tells me to add an `if #available` statement before any of the `os.log` statements.

What does os.log do, and if I need it, is there an issue with using an `if #available` statement for iOS 9 compatibility? If not, what is the equivalent code for iOS 9 to go in the `else` branch after the `if #available` statement? Thanks.
From [Apple's documentation](https://developer.apple.com/reference/os/logging):

> Unified logging is available in iOS 10.0 and later, macOS 10.12 and
> later, tvOS 10.0 and later, and watchOS 3.0 and later, and supersedes
> ASL (Apple System Logger) and the Syslog APIs. Historically, log
> messages were written to specific locations on disk, such as
> /etc/system.log. The unified logging system stores messages in memory
> and in a data store, rather than writing to text-based log files.

There is no iOS 9 equivalent. You could use a third-party logging tool like [CocoaLumberjack](https://github.com/CocoaLumberjack/CocoaLumberjack), which is very popular.

As a concrete example of how to use this logging:

```
if #available(iOS 10.0, *) {
    let bundleID:String = Bundle.main.bundleIdentifier ?? "unknown"
    let oslog = OSLog(subsystem: bundleID, category: "Model")
    os_log("%@", log: oslog, type: .info, message)
}
```
Splitting a command line into key/value pairs
The code below will split environment variables from a command line (they always appear at the end of the command line). Environment variables are represented by '-E key=value'. I've achieved this like so, but I'm wondering if there's a more elegant way:

```
public class TestSplit {

    public static void main(String... args) {
        String command = "-ps 4 -pe 5 -E opInstallDir=/home/paul -E opWD=/home/paul/remake -E opFam=fam -E opAppli=appli";

        int startPosition = command.indexOf("-E") + 2;
        String envVars = command.substring(startPosition);
        for(String pair: envVars.split("-E")) {
            String[] kv = pair.split("=");
            System.out.println(kv[0] + " " +kv[1]);
        }
    }
}
```

EDIT: Just to clarify, these aren't command-line arguments for launching the program from the console; they are command-line arguments for launching an external program, the details of which I haven't included.
Like @palacsint, I will recommend an external library. Apache commons-cli is a decent choice. Another choice (my preference) is [java gnu-getopt](http://www.urbanophile.com/arenn/coding/download.html) ... I like it because I am familiar with the notations and standards from previous work. It can be a little complicated the first time around otherwise. On the other hand, I tend not to use an external library unless the code is already going to be relatively complicated....

But, back to your code. Why do you have everything in a single String? Why is it not part of the `String... args`?

The first thing about command-line arguments is that they get complicated very fast. What if the argument was:

> ```
> String command = "-ps 4 -pe 5 -E opInstallDir=/opt/OSS-EVAL/thiscode -E opWD=/home/paul/remake -E opFam=fam -E opAppli=appli -Edocs='My Documents' -Eparse=key=value";
> ```

I have thrown in a few things there. First up, on one machine at work, we really do have the directory /opt/OSS-EVAL/ which we use to install/evaluate OSS software/libraries. The above will break your parsing because it has `-E` embedded in the name. Next up: 'POSIX-style' command-line arguments can have quoted values, and also values with an `=` in the value.

So, things I would recommend to you:

Locate the source of your command-line values. It will likely be available as an array, not a single string. Keep the data as an array! Second, with the array, it is easier to look for stand-alone values that are `-E`, or, if the input is `-Ekey=value`, to look for values that *start* with -E. Finally, when you split the key/value on the `=`, limit the split to 2:

```
String[] kv = pair.split("=", 2);
```

which will preserve any `=` tokens inside the value part.

**EDIT:** You have suggested in your edit that this is for sending data to an external command. If you are using Java to initialize the external command, then please, please, please use the version of [exec() that takes a command array](http://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#exec%28java.lang.String%5B%5D%29), or use the ProcessBuilder which allows you to send [all the command-line parameters as separate values in an array](http://docs.oracle.com/javase/7/docs/api/java/lang/ProcessBuilder.html#ProcessBuilder%28java.lang.String...%29)!!!
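Putting those recommendations together, here is a rough sketch of array-based parsing (the class and method names are mine, and it deliberately ignores getopt-style quoting subtleties):

```
import java.util.HashMap;
import java.util.Map;

public class EnvArgParser {

    public static Map<String, String> parse(String[] args) {
        Map<String, String> env = new HashMap<String, String>();
        for (int i = 0; i < args.length; i++) {
            String pair = null;
            if (args[i].equals("-E") && i + 1 < args.length) {
                pair = args[++i];            // stand-alone "-E key=value"
            } else if (args[i].startsWith("-E")) {
                pair = args[i].substring(2); // joined "-Ekey=value"
            }
            if (pair != null) {
                String[] kv = pair.split("=", 2); // limit 2 keeps '=' in values
                if (kv.length == 2) {
                    env.put(kv[0], kv[1]);
                }
            }
        }
        return env;
    }
}
```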
Multiple parameters for form\_for() I'm reading Beginning Rails 3. It creates a blog with Users who can post Articles and also post Comments to these Articles. They look like this: ``` class User < ActiveRecord::Base attr_accessible :email, :password, :password_confirmation attr_accessor :password has_many :articles, :order => 'published_at DESC, title ASC', :dependent => :nullify has_many :replies, :through => :articles, :source => :comments class Article < ActiveRecord::Base attr_accessible :body, :excerpt, :location, :published_at, :title, :category_ids belongs_to :user has_many :comments class Comment < ActiveRecord::Base attr_accessible :article_id, :body, :email, :name belongs_to :article ``` in app/views/comments/new.html.erb there's a form which begins like this: ``` <%= form_for([@article, @article.comments.new]) do |f| %> ``` My confusion lies in why form\_for() has two parameters. What do they resolve to and why are they necessary? thanks, mike
Actually, in your example, you are calling `form_for` with *one* parameter (which is an Array). If you check the documentation, you will see the parameters it expects: `form_for(record, options = {}, &proc)`. In this case, `record` can be an ActiveRecord object or an Array (it can also be a String, a Symbol, or an object that quacks like ActiveRecord).

And when do you need to pass it an Array? The simplest answer is: when you have a nested resource. In your example, you have defined an `Article has many Comments` association. When you call `rake routes`, and have correctly defined routes, you will see that Rails has defined different routes for your nested resource, like: `article_comments POST /article/:id/comments`. This is important, because you have to create a valid URI for your form tag (well, not you; Rails does it for you). For example:

```
form_for([@article, @comments])
```

What you are saying to Rails is: "Hey Rails, I am giving you an Array of objects as the first parameter, because you need to know the URI for this nested resource. I want to create a new comment in this form, so I will give you just an initial instance of `@comment = Comment.new`. And please create this comment for this very article: `@article = Article.find(:id)`."

This is roughly similar to writing:

```
form_for(@comments, {:url => article_comments_path(@article.id)})
```

Of course, there is more to the story, but it should be enough to grasp the idea.
Identifying the position in the original string from a given Boost token_iterator
If a string has been processed using a Boost tokenizer, is it possible to get the position in the original string that a given token iterator is pointing to?

```
boost::tokenizer<> tok("this is the original string");
for (boost::tokenizer<>::iterator it = tok.begin(); it != tok.end(); ++it)
{
    std::string strToken = *it;
    int charPos = it.? /* IS THERE A METHOD? */
}
```

I realize I could create a specific char_separator with a defined list of 'kept delimiters' and specify keep_empty_tokens to try and track the progression of the iterator myself, but I was hoping there was an easier way using just the iterator itself.
This appears to be what you're looking for:

```
#include <string>
#include <iostream>
#include <boost/tokenizer.hpp>

int main()
{
    typedef boost::tokenizer<> tok_t;

    std::string const s = "this is the original string";

    tok_t const tok(s);
    for (tok_t::const_iterator it = tok.begin(), it_end = tok.end(); it != it_end; ++it)
    {
        std::string::difference_type const offset = it.base() - s.begin() - it->size();
        std::cout << offset << "\t::\t" << *it << '\n';
    }
}
```

[**Online Demo**](http://coliru.stacked-crooked.com/a/cc2c103d930fc6f9)
Java - Reactive Programming vs Event Listener
How does reactive programming differ from calling a function in an event listener (mouse, key)? Both are kinds of asynchronous event streams, so what is the advantage of reactive programming over traditional event listener calls?
The event listener model actually offers only a subset of the functionality provided by [RxJava](https://github.com/ReactiveX/RxJava/wiki), and that's exactly the gap RxJava tries to close.

But let's back up a few steps: it's easier to understand what an [Observable](http://reactivex.io/documentation/observable.html) is if you compare it to an [Iterator](http://docs.oracle.com/javase/7/docs/api/java/util/Iterator.html) (push vs. pull).

`Iterator.next()` is equivalent to `Observable.onNext()` - when the next item/event occurs, consume it.

`Iterator.hasNext()` is equivalent to `Observable.onComplete()` - it allows the publisher to notify the subscriber that there are no more events to consume (one thing that was missing in the EventListener model).

For the third, `Observable.onError()`, there is no equivalent, because with an Iterator, when you call `next()` or `remove()` you know you might get `NoSuchElementException`, `UnsupportedOperationException` or `IllegalStateException`, and you can catch and handle any of them since you're working synchronously. For the publisher, if an error occurs, there is no way to notify the listener/subscriber other than to crash. `onError()` is the last missing piece, added so that an Observable can enable graceful handling of any error.

To sum up, Reactive Java came to fix parts that had been missing from the event model for a long time. By providing those missing parts, and by providing a functional programming style (it supports map, flatMap, filter, etc.), it helps compose async calls in a natural, readable way, reducing the boilerplate code that an imperative style requires (e.g. nested for-loops) and avoiding the [callback hell](https://stackoverflow.com/questions/28402376/how-to-compose-observables-to-avoid-the-given-nested-and-dependent-callbacks) created by nested and dependent callbacks.
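To make the three callbacks concrete, here is a minimal sketch using RxJava 2 (the example data is made up):

```
import io.reactivex.Observable;

public class RxDemo {
    public static void main(String[] args) {
        Observable<Integer> numbers = Observable.just(1, 2, 3, 4);

        numbers
            .filter(n -> n % 2 == 0)  // declarative, composable operators
            .map(n -> n * 10)
            .subscribe(
                n -> System.out.println("onNext: " + n),   // like Iterator.next()
                e -> System.err.println("onError: " + e),  // the error channel listeners lack
                () -> System.out.println("onComplete"));   // like hasNext() returning false
    }
}
```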
Column widths not aligned with table data in pander tables sent from R with sendmailr I'm working with the 'pander' and 'sendmailr' packages to send a small data frame in the body of an email, rather than as an attachment. I'd like to send it from and to a gmail account. I'm close, but the column headers won't align with the columns themselves in the email body the way they do in Rstudio for example- basically the column headers are too wide to line up with the data columns below them. It seems the problem is the way the dashes and whitespaces are compressed in various email clients (I tried this in gmail, yahoo and hotmail through the web and through the email client that ships with OS X Mavericks). I was able to remedy the problem in my OS X email client by going to 'preferences' and checking the box labeled 'use fixed-width font for plain-text messages' but I'd like it to work on multiple devices, with multiple clients, etc for many of my coworkers so I'm wondering if there's a way that doesn't involve global email settings. Here is the code to reproduce the problem: ``` library(sendmailR) # for emails from R library(pander) # for table-formatting that does not require HTML results <- head(iris) pander(results) # widths look great so far... a = pandoc.table.return(results) strsplit(a, "\n") # widths still look great... panderOptions('table.split.table', Inf) # show all columns on same line msg_content <- mime_part( pandoc.table.return(results, style = "multiline") ) # I'm using my own gmail address for email_from and email_to sendmail(from = email_from, to = email_to, subject = "test", msg = msg_content ) ``` … and the email received has the problem described above. Next you can see an image which illustrates the problem: ![Described problem](https://i.stack.imgur.com/pM98F.png)
The problem with plain text e-mails and markdown tables is that the e-mail client usually displays the text with a non-fixed-width font, and you would have to use custom settings in all your e-mail clients to override that (like you did with your OS X e-mail client). On the other hand, that's why HTML mails are trending :)

So let's create an HTML mail and include the markdown table in a `pre` block:

```
msg_content <- mime_part(paste('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
 <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
 <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
</head>
<body><pre>', paste(pander.return(results, style = "multiline"), collapse = '\n'), '</pre></body>
</html>'))
```

Due to a bug in `sendmailR`, we have to override the `Content-type` to HTML:

```
msg_content[["headers"]][["Content-Type"]] <- "text/html"
```

And now it's ready to be sent via the command you used in your example, resulting in:

![Pander markdown table in HTML mail](https://i.stack.imgur.com/6Vljv.png)

The table should look similarly fine in any other HTML-capable e-mail client.
Android tombstones are not always generated. How to enforce their generation?
I'm trying to debug an Unreal Engine 4 native application (a game I made). Sometimes I see that a "tombstone" is generated when the game crashes, but not always... Is there any way to enforce tombstone generation? Why isn't one generated every time my game crashes?

Thanks in advance
The dumps are created by debuggerd when a program crashes under Linux. When this happens, the kernel will send a signal to the dying program. This signal is caught by a special signal handler installed in every native Android application by the bionic C library (this is why tombstones aren't generated for Java-based Android apps). The signal handler contacts debuggerd (via a named pipe), which then connects back to the dying program using ptrace to read registers and memory to produce the tombstone and log entries. `./bionic/linker/debugger.cpp` installs the `debuggerd_signal_handler()` signal handler for several signals that when invoked will try to use the pipe to communicate to debuggerd to create the tombstone file. I suppose that if the pipe communication fails or the ptrace back to the crashing process fails then the tombstone won't be generated. But those failures should still be logged (in the tombstone file or logcat or maybe both - I'm not sure). There may be other problems that result in a tombstone file not being generated. See the following for how the signal handling is set up by the bionic linker (all function names and files mentioned here are from Android 4.4, but should be similar in other versions): - `__linker_init_post_relocation()` in `./bionic/linker/linker.cpp` - `debuggerd_init()` in `./bionic/linker/debugger.cpp` - `debuggerd_signal_handler()` in `./bionic/linker/debugger.cpp` And see the following for how `debuggerd` responds to the request to deal with a crashing process: - `do_server()` in `./system/core/debuggerd/debuggerd.c` // opens the pipe to receive requests - `handle_request()` in `./system/core/debuggerd/debuggerd.c` // handles the request to deal with a crashing process
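As an aside, if you want to verify that the tombstone pipeline works on a given device, you can deliberately raise one of the fatal signals that debuggerd handles from native code; a sketch, meant only for a test-only code path:

```
#include <csignal>

// Deliberately crash so that bionic's signal handler contacts debuggerd;
// a tombstone should then appear under /data/tombstones.
void forceCrashForTombstoneTest() {
    raise(SIGSEGV); // SIGABRT via abort() works as well
}
```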
How to prevent screen lock on iOS with Qt
I want to develop an app in Qt for iOS that contains a map. While the app is in use, the phone's screen lock should be disabled. But I can't find any solution for preventing the screen lock on iOS using Qt. How can that be done?
You must use the native iOS API. You can compile ObjC++ code directly with the clang compiler in your Qt application, so you can mix `.cpp` and `.mm` (ObjC++) files. QtCreator and `qmake` support this via the `OBJECTIVE_SOURCES` keyword.

In a `yourclass.mm` implementation:

```
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

void YourClass::setTimerDisabled()
{
    [[UIApplication sharedApplication] setIdleTimerDisabled: YES];
}
```

`yourclass.h`:

```
class YourClass
{
public:
    void setTimerDisabled();
};
```

Now you can call this from anywhere in your Qt app:

```
YourClass yc;
yc.setTimerDisabled();
```

In your project file (`.pro`), if you only want this file on iOS:

```
ios {
    OBJECTIVE_SOURCES += \
        yourclass.mm
}
```

And if you only want specified code on a single platform, use preprocessor directives in your source and header files like this:

```
#if defined(Q_OS_IOS)
// iOS stuff
#elif defined(Q_OS_ANDROID)
// Android stuff ...
#else
// Other stuff ...
#endif
```
How to resolve TypeError: Cannot convert undefined or null to object I've written a couple of functions that effectively replicate JSON.stringify(), converting a range of values into stringified versions. When I port my code over to JSBin and run it on some sample values, it functions just fine. But I'm getting this error in a spec runner designed to test this. My code: ``` // five lines of comments var stringify = function(obj) { if (typeof obj === 'function') { return undefined;} // return undefined for function if (typeof obj === 'undefined') { return undefined;} // return undefined for undefined if (typeof obj === 'number') { return obj;} // number unchanged if (obj === 'null') { return null;} // null unchanged if (typeof obj === 'boolean') { return obj;} // boolean unchanged if (typeof obj === 'string') { return '\"' + obj + '\"';} // string gets escaped end-quotes if (Array.isArray(obj)) { return obj.map(function (e) { // uses map() to create new array with stringified elements return stringify(e); }); } else { var keys = Object.keys(obj); // convert object's keys into an array var container = keys.map(function (k) { // uses map() to create an array of key:(stringified)value pairs return k + ': ' + stringify(obj[k]); }); return '{' + container.join(', ') + '}'; // returns assembled object with curly brackets } }; var stringifyJSON = function(obj) { if (typeof stringify(obj) != 'undefined') { return "" + stringify(obj) + ""; } }; ``` The error message I'm getting from the tester is: ``` TypeError: Cannot convert undefined or null to object at Function.keys (native) at stringify (stringifyJSON.js:18:22) at stringifyJSON (stringifyJSON.js:27:13) at stringifyJSONSpec.js:7:20 at Array.forEach (native) at Context.<anonymous> (stringifyJSONSpec.js:5:26) at Test.Runnable.run (mocha.js:4039:32) at Runner.runTest (mocha.js:4404:10) at mocha.js:4450:12 at next (mocha.js:4330:14) ``` It seems to fail with: stringifyJSON(null) for example
**Generic answer**

This error is caused when you call a function that expects an *Object* as its argument, but pass *undefined* or *null* instead, for example:

```
Object.keys(null)
Object.assign(window.UndefinedVariable, {})
```

As that usually happens by mistake, the solution is to check your code and fix the *null/undefined* condition so that the function either gets a proper *Object*, or does not get called at all.

```
Object.keys({'key': 'value'})
if (window.UndefinedVariable) {
    Object.assign(window.UndefinedVariable, {})
}
```

**Answer specific to the code in question**

The line `if (obj === 'null') { return null;} // null unchanged` will not match when given `null`, only when given the string `"null"`. So if you pass the actual `null` value to your script, it will be parsed in the Object part of the code, and `Object.keys(null)` throws the `TypeError` mentioned. To fix it, use `if (obj === null) {return null}`, without the quotes around null.
reCaptcha show "input error: invalid referer" I have registered an account on recaptcha.net with mydomain.com. While I'm developing on my `localhost` it works fine, but whenever I try to open the page from another machine in my local network it shows `"input error: invalid referer"` error message! I'm not using recaptcha plugins.
reCaptcha keys are tied to a certain domain + localhost (when you got your private/public keys from them). You can use it on another domain by requesting new private/public keys or by using a global key. From their [website](https://www.google.com/recaptcha/admin/create): > > - By default, your reCAPTCHA key is restricted to the specified domain, and any subdomains for additional security. A key for foo.com works on test.foo.com. > - If you wish to use your key across a large number of domains (e.g., if you are a hosting provider, OEM, etc.), select the global key option. You may want to use a descriptive domain name such as "global-key.mycompany.com" > - **If you own multiple domain names (foocars.com and footrucks.com), you can sign up for multiple keys, or use a global key.** > > >
SVG inside span isn't on the same line as the text I have an SVG file inside a span with text at the same time. The text and the SVG are the same height. However, the SVG isn't on the same line as the text. Relevant jsfiddle: <https://jsfiddle.net/tcrnjd53/> As you can see, the facebook logo needs to be on the red dotted line, just like the sample text. ``` span { font-size: 1em; border-bottom: 1px dotted red; zoom: 3; /* for easier readability */ } span svg { fill: #3b5998; height: 1em; } ``` ``` <span>Sample Text <svg viewBox="0 0 24 24"><path d="M22.676 0H1.324C.593 0 0 .593 0 1.324v21.352C0 23.408.593 24 1.324 24h11.494v-9.294H9.689v-3.621h3.129V8.41c0-3.099 1.894-4.785 4.659-4.785 1.325 0 2.464.097 2.796.141v3.24h-1.921c-1.5 0-1.792.721-1.792 1.771v2.311h3.584l-.465 3.63H16.56V24h6.115c.733 0 1.325-.592 1.325-1.324V1.324C24 .593 23.408 0 22.676 0"></path></svg></span> ```
The CSS [`vertical-align`](https://developer.mozilla.org/en-US/docs/Web/CSS/vertical-align) property can help, as shown in the snippet. Choosing the appropriate value is up to you. To better understand what I mean, try the values `top`, `text-top`, `middle`, `bottom` and `text-bottom` and see the differences. You can also apply a fixed or percentage value if that is better suited.

```
span {
  font-size: 1em;
  border-bottom: 1px dotted red;
  zoom: 3; /* for easier readability */
}

span svg {
  fill: #3b5998;
  height: 1em;
  /* vertical-align:text-top; */
  vertical-align:-0.1875em;
}
```

```
<span>Sample Text <svg viewBox="0 0 24 24"><path d="M22.676 0H1.324C.593 0 0 .593 0 1.324v21.352C0 23.408.593 24 1.324 24h11.494v-9.294H9.689v-3.621h3.129V8.41c0-3.099 1.894-4.785 4.659-4.785 1.325 0 2.464.097 2.796.141v3.24h-1.921c-1.5 0-1.792.721-1.792 1.771v2.311h3.584l-.465 3.63H16.56V24h6.115c.733 0 1.325-.592 1.325-1.324V1.324C24 .593 23.408 0 22.676 0"></path></svg></span>
```
Linux: ntohl does not work correctly
I have a project that needs to build on Windows, Linux, and VxWorks. The project is built on Linux and Windows but cross compiled for VxWorks. To handle endianness across multiple platforms, it uses ntoh.h. The Linux machine is little endian, but ntohl doesn't swap in my program.

I wrote a test program that directly includes netinet/in.h. That swaps appropriately. I wrote another test program that just includes ntoh.h. That also swaps appropriately. Both test programs link to lib64/libc.so.6. However, when I compile my project, ntohl doesn't swap. I can't break on ntohl using the gdb "break ntohl" command. When building, I see the **LITTLE ENDIAN** warning (see below) and do not see the **"SHOULDNT BE HERE"** error. Please help; I don't understand why this problem is occurring.

Below is ntoh.h:

```
#ifndef __ntoh__
#define __ntoh__

#include "basic_types.h"

#ifdef WIN32
#include <winsock2.h>
#elif LINUX
#include <netinet/in.h>

//This is here to determine what __BYTE_ORDER is set to in netinet/in.h.
//  Not in original code
#if __BYTE_ORDER == __BIG_ENDIAN
#warning BIG ENDIAN BYTE ORDER!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#endif

//This is here to determine what __BYTE_ORDER is set to in netinet/in.h.
//  Not in original code
#if __BYTE_ORDER == __LITTLE_ENDIAN
#warning YAY LITTLE ENDIAN!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#endif

#else
#error SHOULDNT BE HERE //added for debugging purposes
#define ntohl(x) (x)
#define ntohs(x) (x)
#define htonl(x) (x)
#define htons(x) (x)
#endif

#endif // __ntoh__
```

Part of my compile command:

```
g++ -DDAU_PARSER -DNO_MT -DTEST_CLOCK -DLINUX -g -Irelease/include -Irelease/include/Record_Data/ -Irelease/include/Utility -o dauParser DAU_Support_Tools/src/dau_parser.cpp DAU_Support_Tools/src/dau_parser_write_data_to_file.cpp Utility/src/Messaging/Communications/Message.cpp Utility/src/time_type.cpp Utility/src/collectable.cpp Utility/src/clist.cpp Utility/src/clock.cpp Utility/src/test_clock.cpp Utility/src/mutex.cpp Utility/src/ntoh.cpp ...
```

The error is demonstrated by the following lines:

```
int deadbeef = 0xDEADBEEF;
printf("TESTING DEADBEEF %x %x\n", deadbeef, ntohl(deadbeef) );
```

Those two lines produce the same output: TESTING DEADBEEF deadbeef deadbeef
> Those two lines produce the same output: TESTING DEADBEEF deadbeef deadbeef

Well, something *is* wrong, but we can't tell you what. *You* have to debug this problem, as you are the only one who can observe it.

Start with the simplest possible example:

```
cat t.c; gcc t.c && ./a.out
#include <netinet/in.h>
#include <stdio.h>

int main()
{
  int deadbeef = 0xDEADBEEF;
  printf("TESTING DEADBEEF %x %x\n", deadbeef, ntohl(deadbeef));
  return 0;
}

TESTING DEADBEEF deadbeef efbeadde
```

Did this produce the expected result?

- No: your toolchain and headers are busted.
- Yes: your toolchain is ok, but your *actual* code does something different from the example.

Run your code through the preprocessor: `gcc -dD -E -DLINUX ntoh.cpp`, and look at what the `ntohl` macro expands to, and where it's coming from. My guess is that you have something stupid in one of *your* headers, e.g.

```
#undef ntohl
#define ntohl(x) (x)
```
Rails override validator message
I have a site that's served in 2 flavours, English and French. Here's some code:

```
app/views/user/register.html.erb
-----------------
<% form_for .....>
  <%= f.text_field :first_name %>
<% end %>

app/models/user.rb
------------------
class User < ActiveRecord::Base
  validates_presence_of :first_name
end
```

Now, to display the error message when the site is being served in the French version, I have:

```
app/config/locales/fr.yml
-------------------------
activerecord:
  errors:
    messages:
      empty: "ne peut pas être vide"
```

So if someone does not fill in a first name, the validator takes the name of the field and appends the custom message for the empty clause, giving

```
"First name ne peut pas être vide"
```

which is incorrect, because 'First name' in French is 'Prénom'; hence it should be

```
"Prénom ne peut pas être vide"
```

Please can someone suggest a way of achieving the desired result.
From the Rails documentation for generate_full_message in the ActiveRecord::Error class...

> Wraps an error message into a full_message format.
>
> The default full_message format for any locale is "{{attribute}} {{message}}". One can specify a locale-specific default full_message format by storing it as a translation for the key :"activerecord.errors.full_messages.format".
>
> Additionally one can specify a validation-specific error message format by storing a translation for :"activerecord.errors.full_messages.[message_key]". E.g. the full_message format for any validation that uses :blank as a message key (such as validates_presence_of) can be stored to :"activerecord.errors.full_messages.blank".
>
> Because the message key used by a validation can be overwritten on the validates_* class macro level, one can customize the full_message format for any particular validation:
>
> ```
> # app/models/article.rb
> class Article < ActiveRecord::Base
>   validates_presence_of :title, :message => :"title.blank"
> end
> ```
>
> ```
> # config/locales/en.yml
> en:
>   activerecord:
>     errors:
>       full_messages:
>         title:
>           blank: This title is screwed!
> ```
preferredStatusBarStyle isn't called I followed [this thread](https://stackoverflow.com/questions/17678881/how-to-change-status-bar-text-color-in-ios-7) to override `-preferredStatusBarStyle`, but it isn't called. Are there any options that I can change to enable it? (I'm using XIBs in my project.)
## Possible root cause

I had the same problem, and figured out it was happening because I wasn't setting the root view controller in my application window.

The `UIViewController` in which I had implemented the `preferredStatusBarStyle` was used in a `UITabBarController`, which controlled the appearance of the views on the screen.

When I set the root view controller to point to this `UITabBarController`, the status bar changes started to work correctly, as expected (and the `preferredStatusBarStyle` method was getting called).

```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    ... // other view controller loading/setup code

    self.window.rootViewController = rootTabBarController;

    [self.window makeKeyAndVisible];
    return YES;
}
```

## Alternative method (Deprecated in iOS 9)

Alternatively, you can call one of the following methods, as appropriate, in each of your view controllers, depending on its background color, instead of having to use `setNeedsStatusBarAppearanceUpdate`:

```
[[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleLightContent];
```

or

```
[[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleDefault];
```

Note that you'll also need to set `UIViewControllerBasedStatusBarAppearance` to `NO` in the plist file if you use this method.
Error while using delegate: Invalid token 'void' in class, struct, or interface member declaration I am getting this error while declaring a delegate: "Invalid token 'void' in class, struct, or interface member declaration" ![enter image description here](https://i.stack.imgur.com/Zepaw.png) Can someone suggest a reason for this error?
Use `delegate` instead of `Delegate` ``` public delegate void ShowDel(string msg); ``` > > The Delegate class is the base class for delegate types. However, only > the system and compilers can derive explicitly from the Delegate class > or from the MulticastDelegate class. It is also not permissible to > derive a new type from a delegate type. The Delegate class is not > considered a delegate type; it is a class used to derive delegate > types. Most languages implement a delegate keyword, and compilers for > those languages are able to derive from the MulticastDelegate class; > therefore, users should use the delegate keyword provided by the > language. [MSDN](http://msdn.microsoft.com/en-us/library/system.delegate.aspx) > > >
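For completeness, here is a minimal sketch of declaring and invoking such a delegate (the `Show` method and the message are made up for illustration):

```
using System;

public delegate void ShowDel(string msg);

class Program
{
    // A method whose signature matches the delegate.
    static void Show(string msg)
    {
        Console.WriteLine(msg);
    }

    static void Main()
    {
        ShowDel del = Show; // bind the method to the delegate
        del("Hello");       // invoke it through the delegate
    }
}
```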
Process.Start in C# The system cannot find the file specified error This is a silly and tricky issue that I am facing. The below code works well (it launches Calculator): ``` ProcessStartInfo psStartInfo = new ProcessStartInfo(); psStartInfo.FileName = @"c:\windows\system32\calc.exe"; Process ps = Process.Start(psStartInfo); ``` However the below one for SoundRecorder does not work. It gives me "The system cannot find the file specified" error. ``` ProcessStartInfo psStartInfo = new ProcessStartInfo(); psStartInfo.FileName = @"c:\windows\system32\soundrecorder.exe"; Process ps = Process.Start(psStartInfo); ``` I am able to launch Sound Recorder by using Start -> Run -> "c:\windows\system32\soundrecorder.exe" command. Any idea whats going wrong? I am using C# in Visual Studio 2015 and using Windows 7 OS. **UPDATE 1**: I tried a `File.Exists` check and it shows me MessageBox from the below code: ``` if (File.Exists(@"c:\windows\system32\soundrecorder.exe")) { ProcessStartInfo psStartInfo = new ProcessStartInfo(); psStartInfo.FileName = @"c:\windows\system32\soundrecorder.exe"; Process ps = Process.Start(psStartInfo); } else { MessageBox.Show("File not found"); } ```
Most likely your app is 32-bit, and in 64-bit Windows references to `C:\Windows\System32` get transparently redirected to `C:\Windows\SysWOW64` for 32-bit apps. `calc.exe` happens to exist in both places, while `soundrecorder.exe` exists in the true `System32` only. When you launch from `Start / Run` the parent process is the 64-bit `explorer.exe` so no redirection is done, and the 64-bit `C:\Windows\System32\soundrecorder.exe` is found and started. From [File System Redirector](https://msdn.microsoft.com/en-us/library/windows/desktop/aa384187.aspx): > > In most cases, whenever a 32-bit application attempts to access %windir%\System32, the access is redirected to %windir%\SysWOW64. > > > --- **[ EDIT ]** From the same page: > > 32-bit applications can access the native system directory by substituting %windir%\Sysnative for %windir%\System32. > > > So the following would work to start `soundrecorder.exe` from the (real) `C:\Windows\System32`. ``` psStartInfo.FileName = @"C:\Windows\Sysnative\soundrecorder.exe"; ```
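Since `Sysnative` is only visible to 32-bit processes running on 64-bit Windows, code that might also run as a 64-bit process has to pick the directory accordingly. A minimal sketch of that check (the selection logic is an illustration, not part of the original answer):

```
using System;
using System.Diagnostics;
using System.IO;

class Launcher
{
    static void Main()
    {
        // Sysnative exists only for 32-bit processes on 64-bit Windows;
        // a 64-bit process already sees the real System32.
        string sysDir = (Environment.Is64BitOperatingSystem && !Environment.Is64BitProcess)
            ? @"C:\Windows\Sysnative"
            : @"C:\Windows\System32";

        ProcessStartInfo psStartInfo = new ProcessStartInfo();
        psStartInfo.FileName = Path.Combine(sysDir, "soundrecorder.exe");
        Process.Start(psStartInfo);
    }
}
```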
What is the return type of document.querySelectorAll Let's say I have the following list:

```
<ol>
  <li>Cookies
  <ol>
    <li>Coffee</li>
    <li>Milk</li>
    <li class="test1">Chocolate </li>
  </ol>
```

and I perform this selection at the end of my HTML:

```
var nodes = document.querySelectorAll('li:first-of-type');
```

When I tried `nodes.forEach` in Chrome, it gave me an error. When I looked at the value, it looked like an array. I actually was able to navigate it using a regular for loop:

```
for(var i=0;i<nodes.length;i++){
    nodes[i].onclick= function(){
        alert('Hello!');
    };
}
```

So what is the actual return type of `document.querySelectorAll`? Why don't the array methods work on it? It looks like an array, and I can work around it to treat it like an array, but it is not an array?
The type of the result is a NodeList. Since it is an Array-like object, you can run `map`, `forEach` and the other Array.prototype functions on it like this:

```
var result = document.querySelectorAll('a');
Array.prototype.map.call(result, function(t){ return t; })
```

The `map`, `forEach`, `some` and other functions in the Array prototype work on Array-like objects. For example, let's define an object literal with numerical indexes (0,1) and a length property:

```
var arrayLike = { '0': 'a', '1': 'b', length: 2};
```

The forEach method, applied to the `arrayLike` object, works as it would on a real Array:

```
Array.prototype.forEach.call(arrayLike, function(x){ console.log(x) } ); //prints a and b
```
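If you are in an ES6-capable environment, a shorter option (a sketch, not part of the original answer) is to copy the NodeList into a real array first with `Array.from`:

```
// Array.from copies the array-like NodeList into a genuine Array,
// after which every Array method is available directly.
var nodes = document.querySelectorAll('li:first-of-type');
Array.from(nodes).forEach(function (node) {
  node.onclick = function () { alert('Hello!'); };
});
```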
Multiple Entry Points in GWT I'm getting into Google Web Toolkit, and am a little confused about the Entry Points in GWT. Google's docs say: > > If you have multiple EntryPoints (the interface that defines onModuleLoad()) within a module, they will all be called in sequence as soon as that module (and the outer document) is ready. > If you are loading multiple GWT modules within the same page, each module's EntryPoint will be called as soon as both that module and the outer document is ready. Two modules' EntryPoints are not guaranteed to fire at the same time, or in the same order in which their selection scripts were specified in the host page. > > > So does each page in your website need an Entry Point defined for it? Do you only really NEED an entry point when you have javascript generated based on your Java classes? Are you able to combine multiple auto-generated-js definitions into a single \*.gwt.xml file? EDIT: Link to quoted source: <http://code.google.com/webtoolkit/doc/1.6/DevGuideOrganizingProjects.html> Thanks!
The most straightforward way to make a GWT app is to have a single page for the entire application, and a single top-level [module](http://code.google.com/webtoolkit/doc/latest/DevGuideOrganizingProjects.html#DevGuideModules) (defined in a [.gwt.xml file](http://code.google.com/webtoolkit/doc/latest/DevGuideOrganizingProjects.html#DevGuideModuleXml)). Each module has a single [EntryPoint](http://google-web-toolkit.googlecode.com/svn/javadoc/2.0/com/google/gwt/core/client/EntryPoint.html) class. Then all of your different "pages" are sub-sections of the same page, ideally using GWT's history mechanism to keep track of state changes that in a non-AJAX web app would be new pages. So if you set things up this way you'll need one EntryPoint for your whole app. The bit of the docs that you quoted (link?) discuss what I think is an advanced use case, where you've got more than one module that you're loading on a single page.
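To make that concrete, here is a minimal sketch of a single EntryPoint driving "pages" through the History mechanism (the class name and tokens are made up for illustration):

```
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.logical.shared.ValueChangeEvent;
import com.google.gwt.event.logical.shared.ValueChangeHandler;
import com.google.gwt.user.client.History;

public class MyApp implements EntryPoint {
    public void onModuleLoad() {
        // Each history token corresponds to a "page" of the app.
        History.addValueChangeHandler(new ValueChangeHandler<String>() {
            public void onValueChange(ValueChangeEvent<String> event) {
                String token = event.getValue();
                if ("settings".equals(token)) {
                    // show the settings view
                } else {
                    // show the default view
                }
            }
        });
        // Dispatch to whatever token the page was loaded with.
        History.fireCurrentHistoryState();
    }
}
```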
History of apropos command I am writing a paper on my project, the goal of which is to write a new implementation of the `apropos(1)` command. I realize that `apropos` was written in the early days of Unix, when computing resources were scarce, and hence its designers kept it simple; however, I am looking for a concrete source of information on this to back my point. Is there any historical document or artifact that describes when and why these commands were introduced into Unix? My Google searches have not returned anything useful, so I was wondering whether perhaps those of you who have been involved with Unix since the early days might have some knowledge about it.
According to [research by an OpenBSD committer](http://www.openssh.com/cgi-bin/cvsweb/src/usr.bin/apropos/apropos.1?f=c), the `apropos` command appeared in [2BSD](http://minnie.tuhs.org/cgi-bin/utree.pl?file=2BSD/man/apropos.u) and was written by [Bill Joy](http://en.wikipedia.org/wiki/Bill_Joy), like the rest of the `man` implementation. There's a theory floating around that `apropos` started out as an alias to `man -k`, but `man` in 2BSD didn't have a `-k` option, so it was presumably the other way round (ATT Research Unix had no `apropos` and a different meaning for [`man -k`](http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/man/man1/man.1)). So [2BSD `apropos.c`](http://minnie.tuhs.org/cgi-bin/utree.pl?file=2BSD/src/apropos.c) would be the earliest implementation of `apropos`.
find\_by\_sql to something that returns an ActiveRecord::Relation I need find_by_sql to return an ActiveRecord::Relation object. It seems that is not possible, so I want to write my query in pure Ruby instead. I have a table with a relationship with itself:

```
class Alarma < ApplicationRecord
  belongs_to :root, class_name: "Alarma", foreign_key: "relacionada_id", optional: true
  has_many :children, class_name: "Alarma", foreign_key: "relacionada_id"
```

This is the table:

```
create_table "alarmas", force: :cascade do |t|
    t.string "sysname"
    t.string "component"
    t.string "details"
    t.integer "estado_id"
    t.integer "relacionada_id"
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
    t.index ["estado_id"], name: "index_alarmas_on_estado_id", using: :btree
    t.index ["severity_id"], name: "index_alarmas_on_severity_id", using: :btree
  end
```

And this is my query in SQL:

```
Alarma.find_by_sql("SELECT a1.* FROM alarmas a1 LEFT OUTER JOIN alarmas a2 ON a1.relacionada_id=a2.id WHERE a1.estado_id IN (1,2) OR a2.estado_id IN (1,2)")
```

An Alarma can be a root, in which case it has child alarms, or it can be in state estado_id = 2, which means it is a child and has a root alarm (referenced by relacionada_id). Example: Root alarm, relacionada_id = null, id = 99. Child alarm, relacionada_id = 99, id = 112. What I need is the list of alarms in state estado_id = 1 or 2, plus the alarms whose root is in state estado_id 1 or 2. I could get it to work with my own SQL query using find_by_sql, but it returns an array, and I need an ActiveRecord::Relation object because I have to keep working with the result (I use a pagination gem, and the order and page methods don't work on an array).
See [Converting an array of objects to ActiveRecord::Relation](https://stackoverflow.com/questions/17331862/converting-an-array-of-objects-to-activerecordrelation) The trick is that you take the ids of the results of the raw SQL statement, and perform a second query using `where` so it returns a `ActiveRecord::Relation`. ``` as_array = Alarma.find_by_sql("SELECT a1.* FROM alarmas a1 LEFT OUTER JOIN alarmas a2 ON a1.relacionada_id=a2.id WHERE a1.estado_id IN (1,2) OR a2.estado_id IN (1,2)") as_relation = Alarma.where(id: as_array.map(&:id)) ```
How to use preg\_replace\_callback? I have the following HTML statement

```
[otsection]Wallpapers[/otsection]
WALLPAPERS GO HERE
[otsection]Videos[/otsection]
VIDEOS GO HERE
```

What I am trying to do is replace the [otsection] tags with an html div. The catch is I want to increment the id of the div from 1->2->3, etc.. So for example, the above statement should be translated to

```
<div class="otsection" id="1">Wallpapers</div>
WALLPAPERS GO HERE
<div class="otsection" id="2">Videos</div>
VIDEOS GO HERE
```

As far as I can tell, the best way to do this is via preg_replace_callback, incrementing the id variable between replacements. But after an hour of working on this, I just can't get it working. Any assistance with this would be much appreciated!
Use the following: ``` $out = preg_replace_callback( "(\[otsection\](.*?)\[/otsection\])is", function($m) { static $id = 0; $id++; return "<div class=\"otsection\" id=\"ots".$id."\">".$m[1]."</div>"; }, $in); ``` In particular, note that I used a `static` variable. This variable persists across calls to the function, meaning that it will be incremented every time the function is called, which happens for each match. Also, note that I prepended `ots` to the ID. Element IDs should not start with numbers. --- For PHP before 5.3: ``` $out = preg_replace_callback( "(\[otsection\](.*?)\[/otsection\])is", create_function('$m',' static $id = 0; $id++; return "<div class=\"otsection\" id=\"ots".$id."\">".$m[1]."</div>"; '), $in); ```
Time order of messages Read (skimmed enough to get coding) through [Erlang Programming](https://rads.stackoverflow.com/amzn/click/com/193435600X) and [Programming Erlang](https://rads.stackoverflow.com/amzn/click/com/0596518188). One question, which is as simple as it sounds: if you have a process `Pid1` on machine `m1` and a huge number of messages are sent to `Pid1`, are messages handled in parallel by that process (I get the impression no) and (answered below) is there any guarantee of order when processing messages, i.e. received in the order sent? If so, how is clock skew handled in high-traffic situations for ordering? Coming from the whole C/thread pools/shared state background ... I want to get this concrete. I understand distributing an application, but want to ensure the 'raw bones' are what I expect before building processes and distributing workload. Also, am I right in thinking the whole world is currently flicking through Erlang texts ;)
If process A sends two messages to process B, then the two messages are guaranteed to arrive in the order they are sent. If process A sends a message to process B and then a message to process C, there's no guarantee as to the ordering in which they are received. Similarly, if processes A & B send messages to C, there's no guarantee as to the order in which the messages are received. It's a fundamental property of the message passing model the ordering of calculations in different processes is undefined, you can only meaningfully speak about ordering where a message send is involved. One consequence of the above rules is that if A sends a message to C, then a message to B, and on receipt of the message B sends to C, then C can receive the two messages in any order. (In practice, I suspect this never reverses on a single node, but could easily happen if the three processes are on different nodes.)
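A minimal sketch of the guaranteed case, one sender and one receiver, where `one` is always received before `two` (module and message names are illustrative):

```
-module(order_demo).
-export([run/0]).

run() ->
    B = spawn(fun loop/0),
    %% Two sends from the same process to the same destination:
    %% guaranteed to arrive in this order.
    B ! one,
    B ! two.

loop() ->
    receive
        Msg ->
            io:format("got ~p~n", [Msg]),
            loop()
    end.
```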
Numpy masked operations I am a new Python user and I am quite interested in understanding in depth how the NumPy module works. I am writing a function able to use both masked and unmasked arrays as data input. I have noticed that there are several [numpy masked operations](http://docs.scipy.org/doc/numpy/reference/routines.ma.html) that look similar (and even work?) to their normal (unmasked) counterparts. One such pair of functions is `numpy.zeros` and `numpy.ma.zeros`. Could someone tell me the advantage of, say, creating an array using `numpy.ma.zeros` vs. `numpy.zeros`? Does it make an actual difference when you are using masked arrays? I have noticed that when I use `numpy.zeros_like` it works fine for creating either a masked or an unmasked array.
`np.ma.zeros` creates a masked array rather than a normal array which could be useful if some later operation on this array creates invalid values. An example from the manual: > > Arrays sometimes contain invalid or missing data. When doing > operations on such arrays, we wish to suppress invalid values, which > is the purpose masked arrays fulfill (an example of typical use is > given below). > > > For example, examine the following array: > > > > ``` > >>> x = np.array([2, 1, 3, np.nan, 5, 2, 3, np.nan]) > > ``` > > When we try to calculate the mean of the data, the result is > undetermined: > > > > ``` > >>> np.mean(x) nan > > ``` > > The mean is calculated using roughly `np.sum(x)/len(x)`, but since > any number added to `NaN` produces `NaN`, this doesn't work. > Enter masked arrays: > > > > ``` > >>> m = np.ma.masked_array(x, np.isnan(x)) > >>> m > masked_array(data = [2.0 1.0 3.0 -- 5.0 2.0 3.0 --], > mask = [False False False True False False False True], > fill_value=1e+20) > > ``` > > Here, we construct a masked array that suppress all `NaN` values. > We may now proceed to calculate the mean of the other values: > > > > ``` > >>> np.mean(m) > 2.6666666666666665 > > ``` > >
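A small sketch of the practical difference (the values are arbitrary), including why `zeros_like` covers both cases:

```
import numpy as np

a = np.zeros(3)      # plain ndarray, no mask machinery
m = np.ma.zeros(3)   # masked array, mask initially all False

m[1] = np.ma.masked  # mark an element invalid; only possible on the masked version
print(m)             # [0.0 -- 0.0]
print(m.sum())       # 0.0, the masked element is ignored

# zeros_like preserves the type of its input, which is why it
# works for masked and unmasked arrays alike:
print(type(np.zeros_like(a)))  # <class 'numpy.ndarray'>
print(type(np.zeros_like(m)))  # <class 'numpy.ma.core.MaskedArray'>
```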
data-\* attributes do not work with Html::a() in Yii 2 I have the following: ``` Html::a('Link', ['some/route'], [ 'class' => 'btn btn-lg btn-primary', // WORKS 'style' => 'padding: 100px;', // WORKS 'data-id' => 123, // DOES NOT WORK 'data' => [ 'id' => 123, // DOES NOT WORK ], ]); ``` As per [docs](https://www.yiiframework.com/doc/api/2.0/yii-helpers-basehtml#a()-detail), both of the specified `data-*` attributes in `Html::a` helper should render their respective attributes in the HTML output, but they do not, and I do not understand why. Yii 2 documentation on [renderTagAttributes](https://www.yiiframework.com/doc/api/2.0/yii-helpers-basehtml#renderTagAttributes()-detail) also states the following: > > **Renders the HTML tag attributes.** > > > Attributes whose values are of boolean type will be treated as boolean > attributes. > > > Attributes whose values are null will not be rendered. > > > The values of attributes will be HTML-encoded using encode(). > > > The "data" attribute is specially handled when it is receiving an > array value. In this case, the array will be "expanded" and a list > data attributes will be rendered. For example, if 'data' => ['id' => > 1, 'name' => 'yii'], then this will be rendered: data-id="1" > data-name="yii". Additionally 'data' => ['params' => ['id' => 1, > 'name' => 'yii'], 'status' => 'ok'] will be rendered as: > data-params='{"id":1,"name":"yii"}' data-status="ok". > > > --- **EDIT:** I am trying to do this inside `GridView` column.
Okay, since I have used `Html::a` inside a `GridView` column, you will have to change the output format of that column. `html` will not work for data attributes, so you will need to switch to `raw`: ``` [ 'label' => 'Actions', 'format' => 'raw', 'value' => function($model) { return Html::a('Link', ['some/route'], [ 'class' => 'btn btn-lg btn-primary', // WORKS 'style' => 'padding: 100px;', // WORKS 'data-id' => 123, // WORKS 'data' => [ 'id-second' => 123, // WORKS ], ]); }, ] ```
Can the FTP client control FTP server time-out settings? I am using the **Apache Commons Net** library in Java. What I want is to control the FTP server's connection timeout from the client side, in Java code. For example: if I look at the FTP server's vsftpd.conf settings file, there is an `idle_session_timeout=600` setting. I wonder whether this idle timeout can be controlled by the FTP client using Java code. I tried the methods below, but none of them worked:

```
FTPClient.setControlKeepAliveTimeout(sec);
FTPClient.setConnectTimeout(ms);
FTPClient.setDataTimeout(ms);
FTPClient.connect();
FTPClient.setSoTimeout(ms);
```

Please help me :)
The FTP client cannot control the settings of the FTP server. But what you are asking looks more like an [XY problem](https://en.wikipedia.org/wiki/XY_problem) where X is probably that you want to prevent the server from closing an idle connection, and Y is the idea of a solution you came up with: controlling the server's timeout from the client side. Only, this solution will not work. Instead you need to tackle the real reason why the server is closing the connection: no activity from the client. This problem can be tackled simply by the client interacting with the server. This way is even [documented](https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/ftp/FTPClient.html). To cite:

> You should keep in mind that the FTP server may choose to prematurely close a connection if the client has been idle for longer than a given time period (usually 900 seconds). ... You may **avoid server disconnections while the client is idle by periodically sending NOOP commands to the server**.
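A minimal sketch of that keep-alive idea with Commons Net (the host, credentials, and the "long work" are placeholders): whenever the client is busy between FTP commands, send a `NOOP` so the server's idle timer is reset.

```
import org.apache.commons.net.ftp.FTPClient;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com"); // placeholder host
        ftp.login("user", "pass");      // placeholder credentials

        for (int i = 0; i < 10; i++) {
            doSomeLongLocalWork(); // placeholder for your own processing
            ftp.sendNoOp();        // resets the server's idle timer
        }

        ftp.logout();
        ftp.disconnect();
    }

    private static void doSomeLongLocalWork() throws InterruptedException {
        Thread.sleep(60_000); // pretend to be busy for a minute
    }
}
```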
Different Prometheus scrape URL for every target Every instance of my application has a different URL. How can I configure prometheus.yml so that it takes the path of a target along with the host name?

```
scrape_configs:
  - job_name: 'example-random'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:8090','localhost:8080']
        labels:
          group: 'dummy'
```
You currently can't configure the `metrics_path` per target within a job but you can create separate jobs for each of your targets so you can define `metrics_path` per target. Your config file would look something like this: ``` scrape_configs: - job_name: 'example-target-1' scrape_interval: 5s metrics_path: /target-1-path-to-metrics static_configs: - targets: ['localhost:8090'] labels: group: 'dummy' - job_name: 'example-target-2' scrape_interval: 5s metrics_path: /totally-different-path-for-target-2 static_configs: - targets: ['localhost:8080'] labels: group: 'dummy-2' ```
g++ and clang++ - delete pointer acquired by overloaded conversion operator ambiguity I was trying to post this code as an answer to [this question](https://stackoverflow.com/questions/34371345/how-to-prevent-a-function-which-filters-vector-of-pointers-from-non-const-access), by making this pointer wrapper (replacing raw pointer). The idea is to delegate `const` to its pointee, so that the `filter` function can't modify the values. ``` #include <iostream> #include <vector> template <typename T> class my_pointer { T *ptr_; public: my_pointer(T *ptr = nullptr) : ptr_(ptr) {} operator T* &() { return ptr_; } operator T const*() const { return ptr_; } }; std::vector<my_pointer<int>> filter(std::vector<my_pointer<int>> const& vec) { //*vec.front() = 5; // this is supposed to be an error by requirement return {}; } int main() { std::vector<my_pointer<int>> vec = {new int(0)}; filter(vec); delete vec.front(); // ambiguity with g++ and clang++ } ``` Visual C++ 12 and 14 compile this without an error, but GCC and [Clang on Coliru](http://coliru.stacked-crooked.com/a/a22d22faa0b0c986) claim that there's an ambiguity. I was expecting them to choose non-const `std::vector::front` overload and then `my_pointer::operator T* &`, but no. Why's that?
[expr.delete]/1: > > The operand shall be of pointer to object type or of class type. If of > class type, the operand is contextually implicitly converted (Clause > [conv]) to a pointer to object type. > > > [conv]/5, emphasis mine: > > Certain language constructs require conversion to a value having one > of a specified set of types appropriate to the construct. An > expression `e` of class type `E` appearing in such a context is said > to be *contextually implicitly converted* to a specified type `T` and > is well-formed if and only if e can be implicitly converted to a type > `T` that is determined as follows: `E` is searched for non-explicit > conversion functions whose return type is `cv T` or reference to `cv T` > such that `T` is allowed by the context. **There shall be exactly > one such `T`.** > > > In your code, there are two such `T`s (`int *` and `const int *`). It is therefore ill-formed, before you even get to overload resolution. --- Note that there's a change in this area between C++11 and C++14. C++11 [expr.delete]/1-2 says > > The operand shall have a pointer to object type, or a class type > having a single non-explicit conversion function (12.3.2) to a pointer > to object type. [...] > > > If the operand has a class type, the operand is converted to a pointer type by calling the above-mentioned conversion function, [...] > > > Which would, if read literally, permit your code and always call `operator const int*() const`, because `int* &` is a reference type, not a pointer to object type. In practice, implementations consider conversion functions to "reference to pointer to object" like `operator int*&()` as well, and then reject the code because it has more than one qualifying non-explicit conversion function.
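If you want to keep both conversion operators, one way around the ambiguity in the question's `main` (a sketch; the right fix depends on the intended design) is to name the target pointer type explicitly, so that only one conversion function is viable:

```
// vec is non-const here, so operator T*&() is usable, and the explicit
// target type int* makes the conversion choice unambiguous.
delete static_cast<int*>(vec.front());
```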
Why not always assign return values to const reference? Let's say I have some function:

```
Foo GetFoo(..) {
   ...
}
```

Assume that we neither know how this function is implemented nor the internals of Foo (it can be a very complex object, for example). However we do know that the function returns Foo by value and that we want to use this return value as const. Question: would it always be a good idea to store the return value of this function as `const &`?

```
const Foo& f = GetFoo(...);
```

instead of,

```
const Foo f = GetFoo(...);
```

I know that compilers would do return value optimizations and may move the object instead of copying it, so in the end `const &` might not have any advantages. However my question is, are there any *disadvantages*? Why shouldn't I just develop muscle memory to *always* use `const &` to store return values, given that I don't have to rely on compiler optimizations, and the fact that even a move operation can be expensive for complex objects. Stretching this to the extreme, why shouldn't I *always* use `const &` for all variables that are immutable in my code? For example,

```
const int& a = 2;
const int& b = 2;
const int& c = a + b;
```

Besides being more verbose, are there any disadvantages?
These have semantic differences and if you ask for something other than you want, you will be in trouble if you get it. Consider [this code](http://ideone.com/l48m54): ``` #include <stdio.h> class Bar { public: Bar() { printf ("Bar::Bar\n"); } ~Bar() { printf ("Bar::~Bar\n"); } Bar(const Bar&) { printf("Bar::Bar(const Bar&)\n"); } void baz() const { printf("Bar::Baz\n"); } }; class Foo { Bar bar; public: Bar& getBar () { return bar; } Foo() { } }; int main() { printf("This is safe:\n"); { Foo *x = new Foo(); const Bar y = x->getBar(); delete x; y.baz(); } printf("\nThis is a disaster:\n"); { Foo *x = new Foo(); const Bar& y = x->getBar(); delete x; y.baz(); } return 0; } ``` Output is: > > This is safe: > > Bar::Bar > > Bar::Bar(const Bar&) > > Bar::~Bar > > Bar::Baz > > Bar::~Bar > > > This is a disaster: > > Bar::Bar > > Bar::~Bar > > Bar::Baz > > > Notice we call `Bar::Baz` after the `Bar` is destroyed. Oops. Ask for what you want, that way you're not screwed if you get what you ask for.
postgresql: Force Quotes on Header I am trying to force quotes on the header.

```
Copy ( SELECT * FROM "table" ) TO <path> CSV HEADER FORCE QUOTE *;
```

With this, I get the header and everything is quoted except the header, but I want the header quoted as well. Both

```
TO <path> CSV HEADER FORCE QUOTE HEADER;
```

and

```
TO <path> CSV FORCE QUOTE HEADER;
```

did not work. Any idea how to manage this?
No such option is available in PostgreSQL's `COPY` support, at least in 9.2 and older. As you've already observed, the headers aren't quoted - or rather, they're always in auto-quote mode, so they're only quoted if they contain a delimiter or other special character:

```
\copy (SELECT 1 AS "First Value", 2 AS "Second value", 3 AS "Third value, with comma") TO '/tmp/test.csv' WITH (FORMAT CSV, HEADER TRUE, FORCE_QUOTE *);
```

produces:

```
First Value,Second value,"Third value, with comma"
"1","2","3"
```

You will need to transform the CSV via a more flexible external tool that understands all the weird and wonderful flavours of CSV in use, or use an external tool to produce it directly. In situations like this I usually write a simple Perl or Python script that queries the database using Perl's DBI and DBD::Pg or Python's psycopg2 and uses the appropriate CSV library to output the desired CSV dialect. Such scripts tend to be simple and they're generally more efficient than `\copy`ing the CSV then parsing and rewriting it.
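A minimal sketch of that script approach with psycopg2 (the connection string, query, and output path are placeholders), using `csv.QUOTE_ALL` so the header row is quoted along with the data:

```
import csv
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = conn.cursor()
cur.execute('SELECT * FROM "table"')    # placeholder query

with open("/tmp/out.csv", "w", newline="") as f:
    # QUOTE_ALL forces quotes around every field, header included.
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow([col[0] for col in cur.description])  # quoted header row
    writer.writerows(cur)  # the cursor iterates over the result rows

cur.close()
conn.close()
```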
Vue.js, composition API, "Mixins" and life-cycle hooks I've been looking all around (and I couldn't find) an answer for the following. In Vue 2.x, you could use mixins for life-cycle hooks, i.e., for instance: I could create a Mixins.js with ``` export default { created() { console.log('test'); } } ``` and then, in a component, do the following: ``` import mixins from "../misc/mixins"; export default { name: "My component", mixins: [mixins], created() { console.log('Another test'); } } ``` And if I ran "My component", I would get in the console both "Another test" and "test". I cannot find the way of doing something similar with the Composition API (of course, I can execute inside "onMounted" a functions that I imported from another file, but that's not that elegant). Is there a way? Thanks!
With Composition API you have to import lifecycles you need. Docs with list: <https://v3.vuejs.org/guide/composition-api-lifecycle-hooks.html> *Component.vue* ``` <script> import { onMounted } from 'vue' export default { setup(props) { console.log('CREATED') onMounted(() => { console.log('MOUNTED') }); return {}; }, } </script> ``` Note that there is no `onCreated()`. From docs: > > Because setup is run around the beforeCreate and created lifecycle hooks, you do not need to explicitly define them. In other words, any code that would be written inside those hooks should be written directly in the setup function. > > > ## But what about using this as Mixins alternative? Now if you want you can simply extract this to separate file, often called composable. *demoLifehooks.js* ``` import { onMounted } from 'vue' export default () => { console.log('Created') onMounted(() => { console.log('Mounted') }) } ``` Now simply import it and execute. *Component.vue* ``` <script> import useDemoLifecycles from './demoLifecycles.js' export default { setup(props) { useDemoLifecycles() return {}; }, } </script> ``` or even shorter thanks to [new script setup syntax.](https://github.com/vuejs/rfcs/blob/script-setup-2/active-rfcs/0000-script-setup.md) ``` <script setup> import useDemoLifecycles from './demoLifecycles.js' useDemoLifecycles() </script> ``` Log in console: > > Created > > Mounted > > > [**Live example**](https://codesandbox.io/s/peaceful-kare-5ndrk?file=/src/App.vue) Naming it as `useSomething` is just convention. It will be not a bad idea to force it by exporting not default function but named one: ``` export const useDemoLifecycles = () => { console.log('code here') } ``` and then ``` import { useDemoLifecycles } from './demoLifecycles' ``` Also, if you want refs or other data from that file, it will be ``` const { a, b } = useDemoLifecycles() ``` **Notice that actually in my examples there is not much Vue's "magic", like it was with Mixins. This is pretty much pure JS stuff and not Vue specific code. So it is actually simpler than old Options API + Mixins.**
How do I find which files are missing from a list? I have a list of files that I want to check if they exist on my filesystem. I thought of doing this using `find` as in: ``` for f in $(cat file_list); do find . -name $f > /dev/null || print $f done ``` (using `zsh`) but that doesn't work as `find` seems to exit `0` whether or not it finds the file. I guess I could pass it through some other test which tests to see if `find` produces any output (crude but effective would be to replace the `> /dev/null` with `|grep ''`) but this feels like using a troll to catch a goat (other nationalities might say something about sledgehammers and walnuts). Is there a way to coerce `find` in to giving me a useful exit value? Or at least to get a list of those files that *weren't* found? (I can imagine the latter being perhaps easier by some cunning choice of logical connectives, but I seem to always get tied up in knots when I try to figure it out.) **Background/Motivation:** I have a "master" backup and I want to check that some files on my local machine exist on my master backup before deleting them (to create a bit of space). So I made a list of the files, `ssh`ed them to the master machine, and was then at a loss for figuring out the best way to find the missing files.
`find` considers finding nothing a special case of success (no error occurred). A general way to test whether files match some `find` criteria is to test whether the output of `find` is empty. For better efficiency when there are matching files, use `-quit` on GNU find to make it quit at the first match, or `head` (`head -c 1` if available, otherwise `head -n 1` which is standard) on other systems to make it die of a broken pipe rather than produce long output.

```
while IFS= read -r name; do
  [ -n "$(find . -name "$name" -print | head -n 1)" ] || printf '%s\n' "$name"
done <file_list
```

In bash ≥4 or zsh, you don't need the external `find` command for a simple name match: you can use `**/$name`. Bash version:

```
shopt -s globstar nullglob
while IFS= read -r name; do
  set -- **/"$name"
  [ $# -ge 1 ] || printf '%s\n' "$name"
done <file_list
```

Zsh version on a similar principle:

```
while IFS= read -r name; do
  set -- **/"$name"(N)
  [ $# -ge 1 ] || print -- "$name"
done <file_list
```

Or here's a shorter but more cryptic way of testing the existence of a file matching a pattern. The glob qualifier `N` makes the output empty if there is no match, `[1]` retains only the first match, and `e:REPLY=true:` changes each match to expand to `true` instead of the matched file name. So `**/"$name"(Ne:REPLY=true:[1]) false` expands to `true false` if there is a match, or to just `false` if there is no match.

```
while IFS= read -r name; do
  **/"$name"(Ne:REPLY=true:[1]) false || print -- "$name"
done <file_list
```

It would be more efficient to combine all your names into one search. If the number of patterns is not too large for your system's length limit on a command line, you can join all the names with `-o`, make a single `find` call, and post-process the output. If none of the names contain whitespace or shell metacharacters (so that the names are `find` patterns as well), here's a way to post-process with awk (untested):

```
set -o noglob
set -- $(<file_list sed -e 's/^/-name /' -e '2,$s/^/-o /')
set +o noglob
find . \( "$@" \) -print |
awk -F/ '
    BEGIN {while ((getline name <"file_list") > 0) {found[name]=0}}
    $NF in found {found[$NF]=1}
    END {for (f in found) {if (found[f]==0) {print f}}}
'
```

Another approach would be to use Perl and `File::Find`, which makes it easy to run Perl code for all the files in a directory.

```
perl -MFile::Find -l -e '
    %missing = map {chomp; $_, 1} <STDIN>;
    find(sub {delete $missing{$_}}, ".");
    print foreach sort keys %missing' <file_list
```

An alternate approach is to generate a list of file names on both sides and work on a text comparison. Zsh version:

```
comm -23 <(<file_list sort) <(print -rl -- **/*(:t) | sort)
```
Tabs of equal width in TabControl I have 4 tabs at the top of a tab control. I would like for each tab to use 25% of the TabControl's width. What is the correct way, using XAML, to do that? Here is what I have tried: ``` <Grid HorizontalAlignment="Left" Height="458" Margin="10,65,0,0" VerticalAlignment="Top" Width="276"> <TabControl Grid.IsSharedSizeScope="True" HorizontalAlignment="Stretch"> <TabItem Header="Cameras"> <Grid Background="#FFE5E5E5"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="tabControl"/> </Grid.ColumnDefinitions> </Grid> </TabItem> <TabItem Header="MultiCam"> <Grid Background="#FFE5E5E5"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="tabControl"/> </Grid.ColumnDefinitions> </Grid> </TabItem> <TabItem Header="Search"> <Grid Background="#FFE5E5E5"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="tabControl"/> </Grid.ColumnDefinitions> </Grid> </TabItem> <TabItem Header="Admin" Margin="-2,-2,-10,-1"> <Grid Background="#FFE5E5E5"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="tabControl"/> </Grid.ColumnDefinitions> </Grid> </TabItem> </TabControl> </Grid> ```
Here's another trick: a `Grid` can overlap any number of elements:

```
<Grid>
    <UniformGrid Columns="4" Margin="5,0">
        <FrameworkElement x:Name="c1"/>
        <!-- no need to add the other three -->
    </UniformGrid>
    <TabControl>
        <TabItem Header="header" Width="{Binding ElementName=c1, Path=ActualWidth}"/>
        <TabItem Header="header" Width="{Binding ElementName=c1, Path=ActualWidth}"/>
        <TabItem Header="header" Width="{Binding ElementName=c1, Path=ActualWidth}"/>
        <TabItem Header="header" Width="{Binding ElementName=c1, Path=ActualWidth}"/>
    </TabControl>
</Grid>
```

A `UniformGrid` the same size as the `TabControl` is used to measure the width of each column. Add only one `FrameworkElement`, since all `TabItems` are the same size.
Open a window on a remote X display (why "Cannot open display")? Once upon a time, ``` DISPLAY=:0.0 totem /path/to/movie.avi ``` after ssh 'ing into my desktop from my laptop would cause totem to play `movie.avi` on my desktop. Now it gives the error: > > > ``` > No protocol specified > Cannot open display: > > ``` > > I reinstalled Debian squeeze when it went stable on both computers, and I guess I broke the config. I've googled on this, and cannot for the life of me figure out what I'm supposed to be doing. (VLC has an HTTP interface that works, but it isn't as convenient as ssh.) The same problem arises when I try to run this from a cron job.
*(Adapted from [Linux: wmctrl cannot open display when session initiated via ssh+screen](https://superuser.com/questions/190801/linux-wmctrl-cannot-open-display-when-session-initiated-via-sshscreen#190878))* ## DISPLAY and AUTHORITY An X program needs two pieces of information in order to connect to an X display. - It needs the address of the display, which is typically `:0` when you're logged in locally or `:10`, `:11`, etc. when you're logged in remotely (but the number can change depending on how many X connections are active). The address of the display is normally indicated in the `DISPLAY` environment variable. - It needs the password for the display. X display passwords are called *magic cookies*. Magic cookies are not specified directly: they are always stored in X authority files, which are a collection of records of the form “display `:42` has cookie `123456`”. The X authority file is normally indicated in the `XAUTHORITY` environment variable. If `$XAUTHORITY` is not set, programs use `~/.Xauthority`. You're trying to act on the windows that are displayed on your desktop. If you're the only person using your desktop machine, it's very likely that the display name is `:0`. Finding the location of the X authority file is harder, because with gdm as set up under Debian squeeze or Ubuntu 10.04, it's in a file with a randomly generated name. (You had no problem before because earlier versions of gdm used the default setting, i.e. cookies stored in `~/.Xauthority`.) ## Getting the values of the variables Here are a few ways to obtain the values of `DISPLAY` and `XAUTHORITY`: - You can systematically start a screen session from your desktop, perhaps automatically in your login scripts (from `~/.profile`; but do it only if logging in under X: test if `DISPLAY` is set to a value beginning with `:` (that should cover all the cases you're likely to encounter)). In `~/.profile`: ``` case $DISPLAY in :*) screen -S local -d -m;; esac ``` Then, in the ssh session: ``` screen -d -r local ``` - You could also save the values of `DISPLAY` and `XAUTHORITY` in a file and recall the values. In `~/.profile`: ``` case $DISPLAY in :*) export | grep -E '(^| )(DISPLAY|XAUTHORITY)=' >~/.local-display-setup.sh;; esac ``` In the ssh session: ``` . ~/.local-display-setup.sh screen ``` - You could detect the values of `DISPLAY` and `XAUTHORITY` from a running process. This is harder to automate. You have to figure out the PID of a process that's connected to the display you want to work on, then get the environment variables from `/proc/$pid/environ` (`eval export $(</proc/$pid/environ tr \\0 \\n | grep -E '^(DISPLAY|XAUTHORITY)=')`¹). ## Copying the cookies Another approach (following a suggestion by [Arrowmaster](https://unix.stackexchange.com/users/3459/arrowmaster)) is to not try to obtain the value of `$XAUTHORITY` in the ssh session, but instead to make the X session copy its cookies into `~/.Xauthority`. Since the cookies are generated each time you log in, it's not a problem if you keep stale values in `~/.Xauthority`. There can be a security issue if your home directory is accessible over NFS or other network file system that allows remote administrators to view its contents. They'd still need to connect to your machine somehow, unless you've enabled X TCP connections (Debian has them off by default). So for most people, this either does not apply (no NFS) or is not a problem (no X TCP connections). 
To copy cookies when you log into your desktop X session, add the following lines to `~/.xprofile` or `~/.profile` (or some other script that is read when you log in): ``` case $DISPLAY:$XAUTHORITY in :*:?*) # DISPLAY is set and points to a local display, and XAUTHORITY is # set, so merge the contents of `$XAUTHORITY` into ~/.Xauthority. XAUTHORITY=~/.Xauthority xauth merge "$XAUTHORITY";; esac ``` ¹ In principle this lacks proper quoting, but in this specific instance `$DISPLAY` and `$XAUTHORITY` won't contain any shell metacharacter.
Measuring execution time of a function in C++ I want to find out how much time a certain function takes to execute in my C++ program on **Linux**. Afterwards, I want to make a speed comparison. I saw several time functions but ended up with this one from Boost.Chrono:

```
process_user_cpu_clock, captures user-CPU time spent by the current process
```

Now, I am not clear: if I use the above function, will I get only the time which the CPU spent on that function? Secondly, I could not find any example of using the above function. Can anyone please help me with how to use the above function? P.S: Right now, I am using `std::chrono::system_clock::now()` to get the time in seconds, but this gives me different results due to different CPU load every time.
It is a very easy-to-use method in C++11. You have to use `std::chrono::high_resolution_clock` from `<chrono>` header. Use it like so: ``` #include <chrono> /* Only needed for the sake of this example. */ #include <iostream> #include <thread> void long_operation() { /* Simulating a long, heavy operation. */ using namespace std::chrono_literals; std::this_thread::sleep_for(150ms); } int main() { using std::chrono::high_resolution_clock; using std::chrono::duration_cast; using std::chrono::duration; using std::chrono::milliseconds; auto t1 = high_resolution_clock::now(); long_operation(); auto t2 = high_resolution_clock::now(); /* Getting number of milliseconds as an integer. */ auto ms_int = duration_cast<milliseconds>(t2 - t1); /* Getting number of milliseconds as a double. */ duration<double, std::milli> ms_double = t2 - t1; std::cout << ms_int.count() << "ms\n"; std::cout << ms_double.count() << "ms\n"; return 0; } ``` This will measure the duration of the function `long_operation`. Possible output: ``` 150ms 150.068ms ``` Working example: <https://godbolt.org/z/oe5cMd>
What are the correct usage/parameter values for HoughCircles in OpenCV for Iris detection? I've been reading about the subject but cannot get the idea in "plain English" about the usage and parameters for `HoughCircles` (especially the ones after `CV_HOUGH_GRADIENT`). What's an accumulator threshold? Are 100 "votes" a right value? I could find and "mask" the pupil, and worked my way through the `Canny` function, but I'm struggling beyond that and my problem is the `HoughCircles` function. It seems to be failing at finding the iris' circle and I don't know why.

![This is what I have so far. LEFT: masked pupil RIGHT: canny result](https://i.stack.imgur.com/EMQxw.png)

And this is the function I'm working on:

```
def getRadius(area):
    r = 1.0
    r = math.sqrt(area/3.14)
    return (r)

def getIris(frame):
    grayImg = cv.CreateImage(cv.GetSize(frame), 8, 1)
    cv.CvtColor(frame,grayImg,cv.CV_BGR2GRAY)
    cv.Smooth(grayImg,grayImg,cv.CV_GAUSSIAN,9,9)
    cv.Canny(grayImg, grayImg, 32, 2)
    storage = cv.CreateMat(grayImg.width, 1, cv.CV_32FC3)
    minRad = int(getRadius(pupilArea))
    circles = cv.HoughCircles(grayImg, storage, cv.CV_HOUGH_GRADIENT, 2, 10,32,200,minRad, minRad*2)
    cv.ShowImage("output", grayImg)
    while circles:
        cv.DrawContours(frame, circles, (0,0,0), (0,0,0), 2)
        # this message is never shown, therefore I'm not detecting circles
        print "circle!"
        circles = circles.h_next()
    return (frame)
```
`HoughCircles` can be kind of tricky, I suggest looking through [this thread](https://stackoverflow.com/questions/9860667/writing-robust-color-and-size-invariant-circle-detection-with-opencv-based-on), where a bunch of people, including me ;), discuss how to use it. The key parameter is `param2`, the so-called `accumulator threshold`. Basically, the higher it is the fewer circles you get, and these circles have a higher probability of being correct. The best value is different for every image. I think the best approach is to use a parameter search on `param2`, i.e. keep on trying values until your criteria is met (such as: there are 2 circles, or max. number of circles that are non-overlapping, etc.). I have some code that does a binary search on 'param2', so it meets the criteria quickly.

The other crucial factor is pre-processing: try to reduce noise, and simplify the image. Some combination of blurring/thresholding/canny is good for this.

Anyhow, I get this:

![enter image description here](https://i.stack.imgur.com/AfzzU.jpg)

From your uploaded image, using this code:

```
import cv
import numpy as np

def draw_circles(storage, output):
    circles = np.asarray(storage)
    for circle in circles:
        Radius, x, y = int(circle[0][2]), int(circle[0][0]), int(circle[0][1])
        cv.Circle(output, (x, y), 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
        cv.Circle(output, (x, y), Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

orig = cv.LoadImage('eyez.png')
processed = cv.LoadImage('eyez.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)
#use canny, as HoughCircles seems to prefer ring like circles to filled ones.
cv.Canny(processed, processed, 5, 70, 3)
#smooth to reduce noise a bit more
cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 7, 7)

cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 2, 32.0, 30, 550)

draw_circles(storage, orig)

cv.ShowImage("original with circles", orig)
cv.WaitKey(0)
```

**Update**

I realise I somewhat misread your question! You actually want to find the **iris** edges. They are not so clearly defined as the pupils. So we need to help `HoughCircles` as much as possible. We can do this by:

1. Specifying a size range for the iris (we can work out a plausible range from the pupil size).
2. Increasing the minimum distance between circle centres (we know two irises can never overlap, so we can safely set this to our minimum iris size).

And then we need to do a param search on `param2` again. Replacing the 'HoughCircles' line in the above code with this:

```
cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 2, 100.0, 30, 150,100,140)
```

Gets us this:

![enter image description here](https://i.stack.imgur.com/sQxgT.jpg)

Which isn't too bad.
Is there an equivalent to C#’s Process.Start in PHP? 1. In .NET, the Process class contains several useful properties/methods that allow developers to access process-related information. Is there any equivalent method or class in PHP? 2. Is there any equivalent in PHP of the C# method "Process.Start()"?
`1.` See [Program execution Functions](http://www.php.net/manual/en/ref.exec.php)

Except there's no concept of methods/classes/properties/namespaces in the PHP standard functions. PHP is essentially a procedural programming language with few OOP constructs, and namespace support was added as a **new** feature only as of the last major release (5.3). It's one of the reasons why people criticise it as a 'toy' language. You can access all of the PHP built-in functions all the time, no pesky namespaces to get in the way ;), just be careful of name collisions.

`2.` @YSJaitawat has the right answer but a bad reference. Click this link for the [exec function documentation in the PHP manual](http://www.php.net/manual/en/function.exec.php).

*Note: Also, if you're migrating from C# to PHP and looking for info, the [PHP manual](http://www.php.net/manual/en/index.php) has some surprisingly good info, including user-submitted comments at the bottom of each entry where people usually post use-cases or extensions to the standard uses. It's probably the easiest language to learn because of the wealth of info to be found in the manual.*
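A minimal sketch of using `exec()` as the closest analogue to `Process.Start()` (the commands are placeholders; note that `exec` blocks until the command finishes, unlike `Process.Start`):

```
<?php
// Run a command, capturing its stdout lines and exit code.
$output = array();
$returnCode = 0;
exec('ls -la /tmp', $output, $returnCode);

echo "exit code: $returnCode\n";
foreach ($output as $line) {
    echo $line, "\n";
}

// For a fire-and-forget launch (closer to Process.Start), a common
// trick on Unix-like systems is to background the command:
exec('some_long_task > /dev/null 2>&1 &');
?>
```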
Inheritance Confusion Alright so I've been working on a game engine, and I'm running into a problem... Currently I have a hierarchy of classes (image->animation->object), all of which have a method `get_type()`, and I'm having this problem: if I declare a derived class using the new keyword, or statically, I get the desired results:

```
object instance;
cout << instance.get_type(); //returns an int value
object* pointer = new(object);
cout << pointer->get_type();
```

From the above code, the console outputs 3. The method is declared like this in each class:

**Image Class:**

```
class image{
    public:
    virtual int get_type();
};
int image::get_type(){
    return 1;
}
```

**Animation Class:**

```
class animation: public image{
    public:
    virtual int get_type();
};
int animation::get_type(){
    return 2;
}
```

**Object Class:**

```
class object: public animation{
    public:
    virtual int get_type();
};
int object::get_type(){
    return 3;
}
```

Now the problem arises when I do something like this:

```
object* t = &object();
cout << t->get_type();
```

The result is now 1. If I remove the virtual keyword from the class declaration of the *object* class, it works as I expect (the last bit of code returns 3). **Question:** how can I utilize my virtual methods from the object\* pointer without using the new keyword? Reason being, I'm filling a vector with pointers to these objects and creating them as I'm adding them, so I hope to be able to do something like the following:

```
vector<object*> objects; //not imgs or animations
for (int a = 0; a < 20; a++){
    objects.push_back(&object());
}
```

And have any object[x]->get_type(); return the *correct* class type. The reason for this is an event system that I'm using in my game that needs the origination class type... **TLDR;** I really stink at inheritance, can you help?
This is because both the constructor *and destructor* have run in your code:

```
object* t = &object();
cout << t->get_type();
```

The `&object()` creates an (anonymous, temporary) object, immediately destructs it (because it's temporary), and then assigns the address of the now-destructed object to `t`. (Your compiler may warn you about this expression if you turn the warning level up high enough.)

Try creating an object that sticks around during the time you're using it:

```
object my_object;
object *t = &my_object;
cout << t->get_type();
```

When you're using non-virtual functions, the type of the pointer `t` is used at compile time to determine which method to call. When you're using virtual functions, information inside the object is used at runtime during virtual dispatch to decide which function to call. Since your object is not correctly constructed at the time of the call (since it has already been destructed), the result is strictly undefined but ends up calling the wrong function.
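For the vector-filling part of the question, the same rule applies: the objects must outlive the container that points to them, so allocate them with `new` and remember to free them. A sketch based on the question's code:

```
#include <vector>

int main() {
    std::vector<object*> objects; // not imgs or animations
    for (int a = 0; a < 20; a++) {
        objects.push_back(new object()); // heap objects outlive the loop
    }

    // ... objects[x]->get_type() now dispatches to the correct override ...

    for (size_t i = 0; i < objects.size(); i++) {
        delete objects[i]; // avoid leaking the heap objects
    }
    return 0;
}
```

(In C++11 and later, a `std::vector<std::unique_ptr<object>>` avoids the manual `delete` loop.)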
DataGrid Column XAML I am using a DataGrid to show items from a database table, and I am using EF CodeFirst so the database query automatically generates an object. Here is my XAML: ``` <DataGrid Name="details" Margin="0,20,0,0" ItemsSource="{Binding}"> </DataGrid> ``` And this is the code behind it: ``` data = new DbLayer(); int cardNumId = (from dataCardNum in data.creditCards where dataCardNum.creditCardNumber == cardNum select dataCardNum.Id).First(); debits =new ObservableCollection<Debit>(( from billings in data.charges where billings.creditCardNumber.Id == cardNumId select billings).ToList()); DataContext = debits; ``` That resolves in filling my DataGrid with all the information from my database. The only problem is that I have two columns that I don't want to show. I tried to create a dataTemplate that will generate the grid with the columns I want, but when I bind it to the datacontext it showed no information. Here is my dataTemplate: ``` <DataTemplate x:Key="debitShow" DataType="DataTemplate:MonthBill.Debit"> <DataGrid> <DataGrid.Columns> <DataGridTextColumn Header="amount" Binding="{Binding amount}"/> <DataGridTextColumn Header="charge date" Binding="{Binding chargeDate}"/> <DataGridCheckBoxColumn Header="charged" Binding="{Binding charged}"/> <DataGridTextColumn Header="store name" Binding="{Binding storeName}"/> <DataGridTextColumn Header="purchase date" Binding="{Binding debitDate}"/> <DataGridTextColumn Header="description" Binding="{Binding description}"/> </DataGrid.Columns> </DataGrid> </DataTemplate> ``` window xaml: Debit class(the key attribute is for the codefirst database creation): ``` class Debit { [Key] public int Id { get; set; } public int amount { get; set; } public string storeName { get; set; } public DateTime debitDate { get; set; } public DateTime chargeDate { get; set; } public string description { get; set; } public creditCard creditCardNumber { get; set; } public bool charged { get; set; } } ``` Any ideas?
If your objective is to display your data without the two columns that you don't need, I would suggest taking the simpler approach of just specifying the columns of your grid explicitly: ``` <DataGrid ItemsSource="{Binding}" AutoGenerateColumns="False"> <DataGrid.Columns> <DataGridTextColumn Header="amount" Binding="{Binding amount}"/> <DataGridTextColumn Header="charge date" Binding="{Binding chargeDate}"/> <DataGridCheckBoxColumn Header="charged" Binding="{Binding charged}"/> <DataGridTextColumn Header="store name" Binding="{Binding storeName}"/> <DataGridTextColumn Header="purchase date" Binding="{Binding debitDate}"/> <DataGridTextColumn Header="description" Binding="{Binding description}"/> </DataGrid.Columns> </DataGrid> ``` Notice the `AutoGenerateColumns="False"` [attribute.](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.autogeneratecolumns.aspx) I would only use a data template if I wanted to control the way the cells are rendered. If you are happy with the default presentation I think you don't need a template.
Neither ng-submit or ng-click responding to submit button in a form I'm trying to make a very simple login form in angularjs, but it seems that neither the ng-submit() nor the ng-click directive seems to be working. I've created a very basic plunker example here: <http://plnkr.co/edit/BrLIxSZggZofCBoZpjT4?p=preview> In which either/both the ng-click or the ng-submit should open a simple alert window with an 'a' letter. However nothing happens when I click on the sign in button. What's more interesting, that if I change ng-app to ng-app="test" then the form get submitted but the alert doesn't get called either. What am I doing wrong? The basic example: ``` <head> <script data-require="angular.js@1.2.9" data-semver="1.2.9" src="http://code.angularjs.org/1.2.9/angular.js"></script> <link rel="stylesheet" href="style.css" /> <script src="script.js"></script> </head> <body ng-app> <!-- Login --> <form data-title="Sign in" data-value="login" class="tab-pane" name="login" ng-submit="alert('a')" ng-controller="Ctrl"> <input type="text" name="username" value="" placeholder="Username" ng-model="model.username" ng-minlength="3" required autocapitalize="false" /> <input type="password" name="password" value="" placeholder="Password" ng-model="model.password" required autocapitalize="false" autocorrect="false" /> <input type="submit" ng-click="alert('a')" name="login" value="Sign in" /> </form> <!-- / Login --> </body> </html> ```
The `ng-submit` directive evaluates an Angular expression against the current scope, not against `window`, so `alert('a')` in the template silently does nothing; `alert` is not defined on the scope. Invoke a function that is defined in the controller instead. Note that with the `required` attributes in place, the browser's own validation can also block submission until the form fields are valid. **Form/HTML** ``` <form data-title="Sign in" data-value="login" class="tab-pane" name="login" ng-submit="submit()" ng-controller="Ctrl"> <input type="text" name="username" value="" placeholder="Username" ng-model="model.username" ng-minlength="3" required autocapitalize="false" /> <input type="password" name="password" value="" placeholder="Password" ng-model="model.password" required autocapitalize="false" autocorrect="false" /> <input type="submit" name="login" value="Sign in" /> </form> ``` **Controller** ``` function Ctrl($scope) { $scope.model = {}; $scope.submit = function(){ alert('a'); } } ``` **[Plunkr](http://plnkr.co/edit/yWoW4lhjJjbTYfm0S1Vw?p=preview)**
How to emulate 'question mark (?)' generics from Java in C++? I have Java code where the return type of a function has an unbounded wildcard type (?). How can I emulate something like this in C++? e.g. ``` public GroupHandlerSetting<?> handleGroupProcessingFor(final EventHandler<T> eventHandler) { return new GroupHandlerSetting<T>(eventHandler, eventProcessors); } ```
In C++ all type arguments must have a name, whether you use it or not, so there is no question mark. Just make it a template argument to the function, give it a name, and you should be fine. ``` template <typename T> struct templ { template <typename U> void assign( templ<U> & u ); // public void assign<?>( templ<U> u ) }; ``` That's the trivial part; the more complex part is enforcing constraints on the type, and for that you can use SFINAE: ``` template <typename T> struct templ { template <typename U, typename _ = typename std::enable_if< std::is_base_of<U,T>::value >::type > void super( templ<U> & u ); // public void super_<? super T>( templ<?> u ) template <typename U, typename _ = typename std::enable_if< std::is_base_of<T,U>::value >::type > void extends( templ<U> & u ); // public void extends_<? extends T>( templ<?> u ) }; ``` That is using C++11 for the SFINAE; in C++03 it is a bit more convoluted (as if this version were simple), since C++03 does not allow default template arguments on function templates, so the SFINAE needs to be applied to either the return type or extra function arguments. SFINAE is a much more powerful solution; it can be used not only to provide `super` and `extends` but with many other features of types or compile-time values. Google for SFINAE and you will find many cases of SFINAE being used, many of them in C++03 style. There was a proposal for concepts that would have greatly simplified the syntax, but no agreement was reached and, in a move to push the standard to completion, it was deferred for a later standard. Now, this is really not as common in C++ as it is in Java, so I recommend that you provide a different question with what you want to do, and you will get ideas for designs in more idiomatic C++.
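To see the constraints in action, here is a minimal self-contained sketch (the `Base`/`Derived` types and the empty function bodies are illustrative only): ``` #include <type_traits> struct Base {}; struct Derived : Base {}; template <typename T> struct templ { // accepts templ<U> only when U is a base of T (? super T) template <typename U, typename = typename std::enable_if< std::is_base_of<U,T>::value >::type> void super( templ<U> & ) {} // accepts templ<U> only when U derives from T (? extends T) template <typename U, typename = typename std::enable_if< std::is_base_of<T,U>::value >::type> void extends( templ<U> & ) {} }; int main() { templ<Derived> d; templ<Base> b; d.super(b); // compiles: Base is a base of Derived b.extends(d); // compiles: Derived derives from Base // b.super(d); // would not compile: Derived is not a base of Base } ```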
Why is script.onload not working in a Chrome userscript? I want to load another script file into a site using a userscript, but the `js.onload` event doesn't work correctly. The userscript file: ``` // ==UserScript== // @name Code highlight // @description Test // @include http://localhost/* // @version 1.0 // ==/UserScript== var js = document.createElement('script'); js.src = "http://localhost/test/js/load.js"; document.getElementsByTagName("head")[0].appendChild(js); js.onload = function(){ console.log(A) } ``` the load.js file: ``` var A = { name:'aa' } ``` In Chrome, the console outputs "undefined", but load.js has loaded completely. I tested it in Firefox, and it outputs `A` correctly.
**Never use `.onload`, `.onclick`, etc. from a userscript.** (It's also poor practice in a regular web page). The reason is that userscripts operate in a sandbox (["isolated world"](https://stackoverflow.com/a/10828021/331508)), and you cannot set or use page-scope javascript objects in a Chrome userscript or content-script. Always use `addEventListener()` (or an equivalent library function, like jQuery `.on()`). Also, you should set `load` listeners before adding `<script>` nodes to the DOM. Finally, if you wish to access variables in the page scope (`A` in this case), you must [inject the code](https://stackoverflow.com/a/13485650/331508) that does so. (Or you could switch to Tampermonkey and use `unsafeWindow`, but [Chrome 27 is causing problems with that](http://tampermonkey.net/faq.php#Q404).) Use something like: ``` addJS_Node (null, "http://localhost/test/js/load.js", null, fireAfterLoad); function fireAfterLoad () { addJS_Node ("console.log (A);"); } //-- addJS_Node is a standard(ish) function function addJS_Node (text, s_URL, funcToRun, runOnLoad) { var D = document; var scriptNode = D.createElement ('script'); if (runOnLoad) { scriptNode.addEventListener ("load", runOnLoad, false); } scriptNode.type = "text/javascript"; if (text) scriptNode.textContent = text; if (s_URL) scriptNode.src = s_URL; if (funcToRun) scriptNode.textContent = '(' + funcToRun.toString() + ')()'; var targ = D.getElementsByTagName ('head')[0] || D.body || D.documentElement; targ.appendChild (scriptNode); } ``` Or perhaps: ``` addJS_Node (null, "http://localhost/test/js/load.js", null, fireAfterLoad); function fireAfterLoad () { addJS_Node (null, null, myCodeThatUsesPageJS); } function myCodeThatUsesPageJS () { console.log (A); //--- PLUS WHATEVER, HERE. } //-- Plus the same addJS_Node () function as defined in the first snippet. ```
How to achieve a more accurate distance from device to Beacon? I am sorry if this has been asked in one way, shape, or another. I have started working with beacons, and in Xcode (Swift) - using CoreLocation. I really need a more accurate determination of the distance between the device and a beacon, though. So far I have been using the standard proximity region values (Far, Near, and Immediate); however, this just isn't cutting it at all. It seems far too unstable for the solution I am looking for, which is a simple one at best. My scenario: I need to display notifications, adverts, images etc to the user's device when they are approximately 4 meters away from the beacon. This sounds simple enough, but when I found out that the only real solutions for beacons are the aforementioned proximity regions, I started to get worried, because I need to display only to devices that are 3-5 meters away, no more. I am aware of the accuracy property of the CLBeacon class; however, Apple states it should not be used for accurate positioning of beacons, which I believe is what I am trying to achieve. Is there a solution to this? Any help is appreciated! Thanks, Olly
**There are limitations of physics when it comes to estimating distance with Bluetooth radio signals.** Radio noise, signal reflections, and obstructions all affect the ability to estimate distance based on radio signal strength. It's OK to use beacons for estimating distance, but **you must set your expectations appropriately.** Apple's algorithms in CoreLocation take a running average of the measured signal strength over 20 seconds or so, then come up with a distance estimate in meters that is put into the `CLBeacon` accuracy field. The results of this field are then used to come up with the proximity field. (0.5 meters or less means immediate, 0.5-3 meters means near, etc.) When Apple recommends against using the accuracy field, it is simply trying to protect you against unrealistic expectations. This will never be an exact estimate in meters. Best results will come with a phone out of a pocket, with no obstructions between the beacon and the phone, and the phone relatively stationary. **Under best conditions, you might expect to get distance estimates of +/- 1 meter at close distances of 3 meters or less. The further you get away, the more variation you will see.** You have to decide if this is good enough for your use case. If you can control the beacons there are a few things you can do to make the results as good as possible: 1. **Turn the beacon transmitter power setting up as high as possible.** This gives you a higher signal to noise ratio, hence better distance estimates. 2. **Turn the advertising rate up as high as possible.** This gives you more statistical samples, hence better distance estimates. 3. **Place your beacons in locations where there will be as few obstructions as possible.** 4. **Always calibrate your beacon** after making the changes like above. Calibration involves measuring the signal level at 1 meter and storing this as a calibration constant inside the beacon. Consult your beacon manufacturer instructions for details of how to do this calibration.
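If the accuracy field proves good enough for your 3-5 meter trigger under those best-case conditions, a minimal sketch of the ranging side might look like this (Swift, pre-iOS 13 ranging API; the UUID, the 3...5 window, and `showAdvert` are placeholders you would tune and replace): ``` import CoreLocation class BeaconRanger: NSObject, CLLocationManagerDelegate { let manager = CLLocationManager() let region = CLBeaconRegion(proximityUUID: UUID(uuidString: "YOUR-BEACON-UUID")!, identifier: "myBeacons") func start() { manager.delegate = self manager.requestWhenInUseAuthorization() manager.startRangingBeacons(in: region) } func locationManager(_ manager: CLLocationManager, didRangeBeacons beacons: [CLBeacon], in region: CLBeaconRegion) { for beacon in beacons { // accuracy is a rough distance estimate in meters; negative means unknown if beacon.accuracy > 0 && (3.0...5.0).contains(beacon.accuracy) { showAdvert(for: beacon) // your own display logic goes here } } } func showAdvert(for beacon: CLBeacon) { /* placeholder */ } } ``` Remember that `accuracy` is already a roughly 20 second running average, so expect a noticeable lag between the user crossing your 4 meter line and the estimate catching up.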
Setting defaults for empty arguments (Python) Let's say we have the function `f` and I need the argument `b` to default to an empty list, but can't set b=[] because of the issue around mutable default args. Which of these is the most Pythonic, or is there a better way? ``` def f(a, b=None): if not b: b = [] pass def f(a, b=None): b = b or [] pass ```
The first form, as it reads more easily. Without any specific context, you should explicitly test for the default value, to avoid potential truthiness issues with the passed-in value. ``` def f(a, b=None): if b is None: b = [] pass ``` From [PEP 8, Programming Recommendations](https://www.python.org/dev/peps/pep-0008/#programming-recommendations): > > Also, beware of writing if x when you really mean if x is not None -- > e.g. when testing whether a variable or argument that defaults to None > was set to some other value. The other value might have a type (such > as a container) that could be false in a boolean context! > > > You can see examples of this approach throughout the `cpython` repository: - [Lib/bdb.py](https://github.com/python/cpython/blob/971235827754eee6c0d9f7d39b52fecdfd4cb7b4/Lib/bdb.py#L570) - [Lib/argparse.py](https://github.com/python/cpython/blob/971235827754eee6c0d9f7d39b52fecdfd4cb7b4/Lib/argparse.py#L345) - [Lib/base64.py](https://github.com/python/cpython/blob/971235827754eee6c0d9f7d39b52fecdfd4cb7b4/Lib/base64.py#L59)
Strange behaviour of ww in SimpleDateFormat Can anyone explain why I get these values when trying to parse a date? I've tried three different inputs, as follows: 1) Third week of 2013 ``` Date date = new SimpleDateFormat("ww.yyyy").parse("02.2013"); Calendar cal = Calendar.getInstance(); cal.setTime(date); System.out.println(cal.get(Calendar.WEEK_OF_YEAR) + "." + cal.get(Calendar.YEAR)); ``` Which outputs: `02.2013` (as I expected) 2) First week of 2013 ``` Date date = new SimpleDateFormat("ww.yyyy").parse("00.2013"); Calendar cal = Calendar.getInstance(); cal.setTime(date); System.out.println(cal.get(Calendar.WEEK_OF_YEAR) + "." + cal.get(Calendar.YEAR)); ``` Which outputs: `52.2012` (which is fine for me, since the first week of 2013 is also the last one of 2012) 3) Second week of 2013 ``` Date date = new SimpleDateFormat("ww.yyyy").parse("01.2013"); Calendar cal = Calendar.getInstance(); cal.setTime(date); System.out.println(cal.get(Calendar.WEEK_OF_YEAR) + "." + cal.get(Calendar.YEAR)); ``` Which outputs: `1.2012` (which makes absolutely **no** sense to me) Does anyone know why this happens? I need to parse a date in the format (week of year).(year). Am I using the wrong pattern?
You're using `ww`, which is "week of week-year", but then `yyyy` which is "calendar year" rather than "week year". Setting the week-of-week-year and then setting the calendar year is a recipe for problems, because they're just separate numbering systems, effectively. You should be using `YYYY` in your format string to specify the week-year... although unfortunately it looks like you can't then *get* the value in a sane way. (I'd expect a `Calendar.WEEKYEAR` constant, but there is no such thing.) Also, week-of-year values start at 1, not 0... and no week is in two week-years; it's *either* the first week of 2013 *or* it's the last week of 2012... it's not both. I would personally avoid using week-years and weeks if you possibly can - they can be very confusing, particularly when a date in one calendar year is in a different week year.
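For example, on Java 7+ (where the week-year pattern letter `Y` and `GregorianCalendar.getWeekYear()` are available), a sketch of the corrected round trip looks like this; note that week numbering is locale-sensitive (first day of week, minimal days in the first week), so verify the output against your locale: ``` import java.text.SimpleDateFormat; import java.util.Date; import java.util.GregorianCalendar; public class WeekYearDemo { public static void main(String[] args) throws Exception { // ww = week of week-year, YYYY = week year (not calendar year) SimpleDateFormat fmt = new SimpleDateFormat("ww.YYYY"); Date date = fmt.parse("01.2013"); GregorianCalendar cal = new GregorianCalendar(); cal.setTime(date); // Read the week year back with getWeekYear(), not Calendar.YEAR System.out.println(cal.get(GregorianCalendar.WEEK_OF_YEAR) + "." + cal.getWeekYear()); } } ```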
Extract positions 2-7 from a fasta sequence for each gene using Bash I've got a file with a subset of geneIDs in it, and a fasta file with all geneIDs and their sequences. For each gene in the subset file, I want to get positions 2-7 from the start of each fasta sequence. Ideally the output file would be 'pos 2-7' '\t' 'geneID'. Example subset: ``` mmu-let-7g-5p MIMAT0000121 mmu-let-7i-5p MIMAT0000122 ``` Fasta file: ``` >mmu-let-7g-5p MIMAT0000121 UGAGGUAGUAGUUUGUACAGUU >mmu-let-7i-5p MIMAT0000122 UGAGGUAGUAGUUUGUGCUGUU >mmu-let-7f-5p MIMAT0000525 UGAGGUAGUAGAUUGUAUAGUU ``` wanted output: ``` GAGGUA mmu-let-7g-5p MIMAT0000121 GAGGUA mmu-let-7i-5p MIMAT0000122 ``` The first part (pulling out fasta sequences for the subset of genes) I've done using `grep -w -A 1 -f`. Not sure how to get pos 2-7 and make the output look like that now using Bash.
Could you please try the following, written and tested with the shown samples in GNU `awk`. ``` awk ' FNR==NR{ a[$1]=$2 next } /^>/{ ind=substr($1,2) } /^>/ && (ind in a){ found=1 val=ind OFS a[ind] next } found{ print substr($0,2,6) OFS val val=found="" } ' gene fastafile ``` ***Explanation:*** Adding a detailed explanation for the above. ``` awk ' ##Starting awk program from here. FNR==NR{ ##Checking condition FNR==NR which will be TRUE when gene Input_file is being read. a[$1]=$2 ##Creating array a with index of $1 and value of $2 here. next ##next will skip all further statements from here. } /^>/{ ##Checking condition if line starts with > then do following. ind=substr($1,2) ##Creating ind which holds the substring of the first field from the 2nd character onwards. } /^>/ && (ind in a){ ##Checking if line starts with > and ind is present in array a then do following. found=1 ##Setting found to 1 here. val=ind OFS a[ind] ##Creating val which has ind, OFS and the value of a with index ind. next ##next will skip all further statements from here. } found{ ##Checking condition if found is NOT NULL then do following. print substr($0,2,6) OFS val ##Printing substring from 2nd to 7th character, OFS and val here. val=found="" ##Nullifying val and found here. } ' gene fastafile ##Mentioning Input_file names here. ```
Why is it that parseInt(8,3) == NaN and parseInt(16,3) == 1? I'm reading [this](http://jibbering.com/faq/notes/type-conversion/) but I'm confused by what is written in the **parseInt with a radix argument** chapter [![table of parseInt(_, 3) outcomes](https://i.stack.imgur.com/b8ABb.png)](https://i.stack.imgur.com/b8ABb.png) Why is it that `parseInt(8, 3)` → `NaN` and `parseInt(16, 3)` → `1`? AFAIK 8 and 16 are not base-3 numbers, so `parseInt(16, 3)` should return `NaN` too [![the first ten base-3 natural numbers](https://i.stack.imgur.com/RzkKJ.png)](https://i.stack.imgur.com/RzkKJ.png)
This is something people trip over all the time, even when they know about it. :-) You're seeing this for the same reason `parseInt("1abc")` returns 1: `parseInt` stops at the first invalid character and returns whatever it has at that point. If there are no valid characters to parse, it returns `NaN`. `parseInt(8, 3)` means "parse `"8"` in base 3" (note that it converts the number `8` to a string; [details in the spec](http://www.ecma-international.org/ecma-262/7.0/index.html#sec-parseint-string-radix)). But in base 3, the single-digit numbers are just `0`, `1`, and `2`. It's like asking it to parse `"9"` in octal. Since there were **no** valid characters, you got `NaN`. `parseInt(16, 3)` is asking it to parse `"16"` in base 3. Since it can parse the `1`, it does, and then it stops at the `6` because it can't parse it. So it returns `1`. --- Since this question is getting a lot of attention and might rank highly in search results, here's a rundown of options for converting strings to numbers in JavaScript, with their various idiosyncrasies and applications (lifted from another answer of mine here on SO): - `parseInt(str[, radix])` - Converts as much of the beginning of the string as it can into a whole (integer) number, ignoring extra characters at the end. So `parseInt("10x")` is `10`; the `x` is ignored. Supports an optional radix (number base) argument, so `parseInt("15", 16)` is `21` (`15` in hex). If there's no radix, assumes decimal unless the string starts with `0x` (or `0X`), in which case it skips those and assumes hex. *(Some browsers used to treat strings starting with `0` as octal; that behavior was never specified, and was [specifically disallowed](http://ecma-international.org/ecma-262/5.1/#sec-E) in the ES5 specification.)* Returns `NaN` if no parseable digits are found. - `parseFloat(str)` - Like `parseInt`, but does floating-point numbers and only supports decimal. Again extra characters on the string are ignored, so `parseFloat("10.5x")` is `10.5` (the `x` is ignored). As only decimal is supported, `parseFloat("0x15")` is `0` (because parsing ends at the `x`). Returns `NaN` if no parseable digits are found. - Unary `+`, e.g. `+str` - *(E.g., implicit conversion)* Converts the *entire* string to a number using floating point and JavaScript's standard number notation (just digits and a decimal point = decimal; `0x` prefix = hex; `0o` prefix = octal [ES2015+]; **some** implementations extend it to treat a leading `0` as octal, but not in strict mode). `+"10x"` is `NaN` because the `x` is **not** ignored. `+"10"` is `10`, `+"10.5"` is `10.5`, `+"0x15"` is `21`, `+"0o10"` is `8` [ES2015+]. Has a gotcha: `+""` is `0`, not `NaN` as you might expect. - `Number(str)` - Exactly like implicit conversion (e.g., like the unary `+` above), but slower on some implementations. *(Not that it's likely to matter.)*
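Back to the original examples, a quick way to convince yourself of the stop-at-first-invalid-character behavior is to run a few values directly: ``` // parseInt converts its first argument to a string, then consumes // as many leading valid digits as it can in the given base. console.log(parseInt(8, 3)); // NaN ("8" contains no valid base-3 digits) console.log(parseInt(16, 3)); // 1 (parses "1", stops at "6") console.log(parseInt(120, 3)); // 15 ("120" in base 3 = 1*9 + 2*3 + 0) console.log(parseInt("1abc")); // 1 (same rule in base 10) ```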
How can I make a hidden file/folder? > > **Possible Duplicate:** > > [How can I hide directories without changing their names?](https://askubuntu.com/questions/2034/how-can-i-hide-directories-without-changing-their-names) > > > I'm using Ubuntu 11.10 and I want to make a file hidden. How do I make, for example, a *.docx* file or a directory hidden? *Please include a way to do it **with** and **without** the terminal.*
Hiding files and directories in Linux is very simple. All you have to do is add a period at the beginning of the name of the file/directory. **With terminal:** ``` mv filename .filename ``` (This command also works with directories) *Some additional information about the `mv` command: if the second argument is a directory, the first argument will be moved into that directory, rather than being renamed to it (regardless of whether the first argument was a file or a directory). If the first argument is a directory and the second is a file, you will get an error.* **If both the first and the second arguments are existing files, the second will be overwritten without prompting you!** *If you wish to be prompted before overwriting, add `-i` to the command before either argument.* To view hidden files and directories in a terminal, use `ls -a`. **Without terminal:** Click on the file, press the `F2` key and add a period at the beginning of the name. To view hidden files and directories in Nautilus (Ubuntu's default file explorer), press `Ctrl`+`H`. The same keys will also re-hide revealed files.
05:00:00 - 28:59:59 time format I have a dataset where `time.start` varies from 5:00:00 to 28:59:59 (i.e. 01.01.2013 28:00:00 is actually 02.01.2013 04:00:00). Dates are in `%d.%m.%Y` format. ``` Date Time.start 01.01.2013 22:13:07 01.01.2013 22:52:23 01.01.2013 23:34:06 01.01.2013 23:44:25 01.01.2013 27:18:48 01.01.2013 28:41:04 ``` I want to convert it to a normal date format. ``` dates$date <- paste(dates$Date,dates$Time.start, sep = " ") dates$date <- as.POSIXct(strptime(dates$date, "%m.%d.%Y %H:%M:%S")) ``` But obviously I get `NA` for times > 23:59:59. How should I modify my code?
E.g. add the time as seconds to the date: ``` df <- read.table(header=T, text=" Date Time.start 01.01.2013 22:13:07 01.01.2013 22:52:23 01.01.2013 23:34:06 01.01.2013 23:44:25 01.01.2013 27:18:48 01.01.2013 28:41:04", stringsAsFactors=FALSE) as.POSIXct(df$Date, format="%d.%m.%Y") + sapply(strsplit(df$Time.start, ":"), function(t) { t <- as.integer(t) t[3] + t[2] * 60 + t[1] * 60 * 60 }) # [1] "2013-01-01 22:13:07 CET" "2013-01-01 22:52:23 CET" "2013-01-01 23:34:06 CET" # [4] "2013-01-01 23:44:25 CET" "2013-01-02 03:18:48 CET" "2013-01-02 04:41:04 CET" ```
jackson why do I need JsonTypeName annotation on subclasses At [this link](https://github.com/Sergey80/scala-samples/blob/master/src/main/scala/json/jackson/SubClasses.scala) I'm trying to understand **why** I (may) need `@JsonTypeName` on subclasses (like everyone on the internet [suggests](https://stackoverflow.com/questions/11798394/polymorphism-in-jackson-annotations-jsontypeinfo-usage)) if it works without it. ``` @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "aType") @JsonSubTypes(Array( new Type(value = classOf[ModelA], name = "ModelA"), new Type(value = classOf[ModelB], name = "ModelB") )) class BaseModel(val modelName:String) //@JsonTypeName("SomeModel") // Commented. Do I need this? class ModelA(val a:String, val b:String, val c:String, commonData:String) extends BaseModel(commonData) { def this() = this("default", "default", "default" ,"default") } //@JsonTypeName("SomeModel") // Commented. Do I need this? class ModelB(val a:String, val b:String, val c:String, commonData:String) extends BaseModel(commonData) { def this() = this("default", "default", "default" ,"default") } ```
You don't need them. The [documentation](http://fasterxml.github.io/jackson-annotations/javadoc/2.3.0/com/fasterxml/jackson/annotation/JsonSubTypes.Type.html) of `@JsonSubTypes.Type` explains > > Definition of a subtype, along with optional name. If name is missing, class of the type will be checked for JsonTypeName annotation; and if that is also missing or empty, a default name will be constructed by type id mechanism. Default name is usually based on class name. > > > You should have either ``` @JsonSubTypes(Array( new Type(value = classOf[ModelA], name = "ModelA") ... class ModelA ``` or ``` @JsonSubTypes(Array( new Type(value = classOf[ModelA]) ... @JsonTypeName("ModelA") class ModelA ```
Proper way to change contents of a VCL component Quite often when I make VCL programs, I run into a scenario like this: - I have a number of components on the form, that the users are allowed to fiddle with. Most commonly a bunch of edit boxes. - The contents of these edit boxes need to be verified by the OnChange event when the user types in stuff manually. - Somewhere else on the form, there's some component that the user can click on to get some default values loaded into the edit boxes (in TEdit::Text). Now what I want is that whenever the user is typing something in the TEdit::Text, the OnChange event must process the user input. But when my program is setting the TEdit::Text to a default value, this isn't necessary, because then I know that the value is correct. Unfortunately, writing code like `myedit->Text = "Default";` triggers the OnChange event. I tend to solve this with what I think is a rather ugly approach: by creating a bool variable `is_user_input`, which `TEdit::OnChange` checks. If it is true, the TEdit::Text will get validated, otherwise it will get ignored. But of course, this doesn't prevent the program from launching `TEdit::OnChange` when it is unnecessary. Is there a better or cleaner way to achieve this? Is there a way for OnChange to check who called it? Or I suppose, a way of disabling the OnChange event temporarily would be even better. `TEdit::Enabled` doesn't seem to affect whether `OnChange` gets triggered or not.
You could simply unassign the `OnChange` event handler temporarily: ``` template <typename T> void SetControlTextNoChange(T *Control, const String &S) { TNotifyEvent event = Control->OnChange; Control->OnChange = NULL; try { Control->Text = S; } __finally { Control->OnChange = event; } } ``` ``` SetControlTextNoChange(myedit, "Default"); ``` Alternatively, RAII is good for this kind of thing: ``` template <typename T> class DisableChangeEvent { private: T *m_control; TNotifyEvent m_event; public: DisableChangeEvent(T *control) { m_control = control; m_event = control->OnChange; control->OnChange = NULL; } ~DisableChangeEvent() { m_control->OnChange = m_event; } T* operator->() { return m_control; } }; ``` ``` // Pre-C++17 compilers cannot deduce the class template argument, so spell it out; // the temporary restores OnChange at the end of the full expression: DisableChangeEvent<TEdit>(myedit)->Text = "Default"; ```
Why am I getting InputMismatchException? So far I have this: ``` public double checkValueWithin(int min, int max) { double num; Scanner reader = new Scanner(System.in); num = reader.nextDouble(); while (num < min || num > max) { System.out.print("Invalid. Re-enter number: "); num = reader.nextDouble(); } return num; } ``` and this: ``` public void askForMarks() { double marks[] = new double[student]; int index = 0; Scanner reader = new Scanner(System.in); while (index < student) { System.out.print("Please enter a mark (0..30): "); marks[index] = (double) checkValueWithin(0, 30); index++; } } ``` When I test this, it can't take a double number and I get this message: ``` Exception in thread "main" java.util.InputMismatchException at java.util.Scanner.throwFor(Scanner.java:909) at java.util.Scanner.next(Scanner.java:1530) at java.util.Scanner.nextDouble(Scanner.java:2456) at MarkingSystem.checkValueWithin(MarkingSystem.java:25) at MarkingSystem.askForMarks(MarkingSystem.java:44) at World.main(World.java:6) Java Result: 1 ``` How do I fix this?
Here you can see the nature of [Scanner](http://www.cs.utexas.edu/users/ndale/Scanner.html): > > double nextDouble() > > > Returns the next token as a double. **If the next token is not a float or > is out of range, InputMismatchException is thrown.** > > > Try to catch the exception ``` try { // ... } catch (InputMismatchException e) { System.out.print(e.getMessage()); //try to find out the specific reason. } ``` **UPDATE** **CASE 1** I tried your code and there is nothing wrong with it. You are getting that error because **you must have entered a `String` value**. When I entered a numeric value, it ran without any errors. But once I entered a `String`, it threw the same `Exception` which you have mentioned in your question. **CASE 2** You have entered something which is **out of range**, as I have mentioned above. I'm really wondering what you could have tried to enter. In my system, it is running perfectly without changing a single line of code. Just copy it as it is and try to compile and run it. ``` import java.util.*; public class Test { public static void main(String... args) { new Test().askForMarks(5); } public void askForMarks(int student) { double marks[] = new double[student]; int index = 0; Scanner reader = new Scanner(System.in); while (index < student) { System.out.print("Please enter a mark (0..30): "); marks[index] = (double) checkValueWithin(0, 30); index++; } } public double checkValueWithin(int min, int max) { double num; Scanner reader = new Scanner(System.in); num = reader.nextDouble(); while (num < min || num > max) { System.out.print("Invalid. Re-enter number: "); num = reader.nextDouble(); } return num; } } ``` As you said, you have tried to enter `1.0`, `2.8`, etc. Please try with this code. If it still fails on inputs like `1.0`, check your default locale: some locales expect a comma (e.g. `1,0`) as the decimal separator, and `nextDouble()` follows the locale. **Note: Please enter the numbers one by one, on separate lines. I mean, enter `2.7`, press enter, and then enter the second number (e.g. `6.7`).**
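A minimal sketch of forcing dot-decimal parsing regardless of the system locale (this assumes the locale really is the culprit): ``` import java.util.Locale; import java.util.Scanner; public class LocaleScannerDemo { public static void main(String[] args) { Scanner reader = new Scanner(System.in); // Make the scanner parse numbers with '.' as the decimal separator reader.useLocale(Locale.US); double num = reader.nextDouble(); // "1.0" now parses on comma-decimal locales System.out.println(num); } } ```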
Node.js and Multer - Handle the destination of the uploaded file in callback function (req,res) I'm new to Node.js and I recently ran into a simple problem. I'm using the `multer` module to upload images. In my web app all the users have a unique id, and I want the uploaded images to be stored in a directory named after their id. Example: ``` .public/uploads/3454367856437534 ``` Here is a part of my `routes.js` file: ``` // load multer to handle image uploads var multer = require('multer'); var saveDir = multer({ dest: './public/uploads/' + req.user._id, //error, we can not access this id from here onFileUploadStart: function (file) { return utils.validateImage(file); //validates the image file type } }); module.exports = function(app, passport) { app.post('/', saveDir, function(req, res) { id : req.user._id, //here i can access the user id }); }); } ``` How can I access the `req.user._id` attribute outside the callback `function(req, res)`, so I can use it with `multer` to generate the proper directory based on the id? **EDIT** Here is what I have tried and it didn't work: ``` module.exports = function(app, passport) { app.post('/', function(req, res) { app.use(multer({ dest: './public/uploads/' + req.user._id })); }); } ```
**Update** Quite a few things have changed since I posted the original answer; the following applies to `multer 1.2.1`. 1. You need to use `DiskStorage` to specify *where* and *how* the file is stored. 2. By default, multer will use the operating system's default directory. In our case, since we are particular about the location, we need to ensure that the folder exists before we save the file there. > > Note: You are responsible for creating the directory when providing destination as a function. > > > More [here](https://github.com/expressjs/multer#diskstorage) ``` 'use strict'; let multer = require('multer'); let fs = require('fs-extra'); let upload = multer({ storage: multer.diskStorage({ destination: (req, file, callback) => { let userId = req.user._id; let path = `./public/uploads/${userId}`; fs.mkdirsSync(path); callback(null, path); }, filename: (req, file, callback) => { //originalname is the uploaded file's name with extension callback(null, file.originalname); } }) }); app.post('/', upload.single('file'), (req, res) => { res.status(200).send(); }); ``` `fs-extra` for creating the directory, [just in case it doesn't exist](https://www.npmjs.com/package/fs-extra#mkdirsdir-callback) **Original answer** You can use [changeDest](https://github.com/expressjs/multer#changedestdest-req-res) > > Function to rename the directory in which to place uploaded files. > > > It is available from [v0.1.8](https://github.com/expressjs/multer/issues/58#issuecomment-75315556) ``` app.post('/', multer({ dest: './public/uploads/', changeDest: function(dest, req, res) { var newDestination = dest + req.user._id; var stat = null; try { stat = fs.statSync(newDestination); } catch (err) { fs.mkdirSync(newDestination); } if (stat && !stat.isDirectory()) { throw new Error('Directory cannot be created because an inode of a different type exists at "' + dest + '"'); } return newDestination } }), function(req, res) { //set your response }); ```
Is there a way to hide or show inputs when using Vue Formulate Schemas? I have been trying to create a form using Vue Formulate schemas. Specifically, I want two radio buttons, A and B. When A is clicked, an extra input field must appear below. When B is clicked, this input field must be hidden. It is important that I use a schema. Any ideas?
In Vue Formulate, the schema itself is reactive, so the recommendation for doing conditional fields using a schema is to pass the schema through a computed prop first. For example: ``` <template> <FormulateForm v-model="formValues" :schema="conditionalSchema" /> </template> <script> const schema = [ { type: "radio", options: { a: 'A', b: 'B' }, label: 'Do you like a or b?', name: 'question', id: 'question' }, { type: 'text', name: 'c', id: 'c', label: 'If you like b, then you must like c!' } ] export default { data () { return { formValues: {} } }, computed: { conditionalSchema() { if (this.formValues.question !== 'b') { return schema.filter(({ name }) => name !== 'c') } return schema } } } </script> ``` Here's that code in CodePen: <https://codepen.io/justin-schroeder/pen/dyXPGQL>
Specflow use parameters in a table with a Scenario Context I am using SpecFlow in C# to build automated client-side browser testing with Selenium. The goal of these tests is to simulate the business scenario where a client enters our website on specific pages and is then directed to the right page. I want to use parameters inside a Scenario Context, for example: ``` When I visit url | base | page | parameter1 | parameter2 | | http://www.stackoverflow.com | questions | <questionNumber> | <questionName> | Then browser contains test <questionNumber> Examples: | <questionNumber> | <questionName> | | 123 | specflow-q1 | | 456 | specflow-q2 | | 789 | specflow-q3 | ``` **Note**: the step "When I visit url" takes base+page+parameter1+parameter2, creates the URL "base/page/parameter1/parameter2" and goes to this URL. The problem is that the input table in the step "I visit url" is passed as-is, without substituting the equivalent values from the Examples section. It means that the table built by the above syntax has a row whose data is the parameter names themselves: <http://www.stackoverflow.com>, questions, questionNumber, questionName Instead of using their values: <http://www.stackoverflow.com>, questions, 123, specflow-q1 Do you know how I can use it correctly?
It is not possible to mix data tables and scenario outlines. Instead I'd rewrite your scenario as follows: ``` When I visit the URL <base>/<page>/<questionNumber>/<questionName> Then the browser contains test <questionNumber> Examples: | base | page | questionNumber | questionName | | http://www.stackoverflow.com | questions | 123 | specflow-q1 | | http://www.stackoverflow.com | questions | 456 | specflow-q2 | | http://www.stackoverflow.com | questions | 789 | specflow-q3 | ``` Inside the "When I visit the URL" step definition you'd construct the URL from the captured step argument (much as you are doing currently with the table values); see the sketch below. Whilst the "base" and "page" values are repeated in the "Examples" section, it is clear to see what exactly is being tested. A non-technical user (e.g. business user) will also be able to easily understand what this test is trying to achieve.
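A sketch of what that step definition could look like (the `IWebDriver` constructor injection shown is one common way to share the Selenium driver between bindings; adapt it to however you manage the driver): ``` using OpenQA.Selenium; using TechTalk.SpecFlow; [Binding] public class NavigationSteps { private readonly IWebDriver _driver; // SpecFlow's context injection supplies the driver registered elsewhere public NavigationSteps(IWebDriver driver) { _driver = driver; } [When(@"I visit the URL (.*)")] public void WhenIVisitTheUrl(string url) { // The outline placeholders are substituted before matching, so this // step receives the fully assembled URL as a single argument. _driver.Navigate().GoToUrl(url); } } ```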
How to calculate if the difference between two dates is greater than 20 years? I have integers representing the month, day and year of two dates. How do I calculate whether the difference between them is greater than 20 years? I use this during registration of new users.
Use [Joda-Time](http://joda-time.sourceforge.net/), period! :) Although "greater than 20 years" might be somewhat dependent on which kind of Calendar you're talking about, when you start counting, leap years, or daylight savings, Joda-Time will give you more flexibility than the util.Calendar class. Working with util.Date is not recommended and counting milliseconds (or other things like that) will probably lead to buggy code. **JODA-TIME Code Samples:** Given: ``` int year1 = 2012, month1 = 2, day1 = 7; int year2 = 1987, month2 = 7, day2 = 23; //You can include a TimeZone if needed in the constructors below DateTime dateTime1 = new DateTime(year1, month1, day1, 0, 0); //2012-02-07T00:00:00.000-05:00 DateTime dateTime2 = new DateTime(year2, month2, day2, 0, 0); //1987-07-23T00:00:00.000-04:00 ``` Option 1, boring... ``` DateTime twentyYearsBefore = dateTime1.minusYears(20); //1992-02-07T00:00:00.000-05:00 if(dateTime2.compareTo(twentyYearsBefore) == -1) System.out.println("The difference between the dates is greater than 20 years"); ``` Option 2, good stuff! ``` Days d = Days.daysBetween(dateTime1, dateTime2); int days = d.getDays(); //-8965 days System.out.println("There are " + days + " days between the two dates"); ``` Option 3, rocket science!!! ;) ``` Period periodDifference = new Period(dateTime1, dateTime2); System.out.println(periodDifference); //prints: P-24Y-6M-2W-1D ``` Of course the [Period](http://joda-time.sourceforge.net/apidocs/org/joda/time/Period.html) class has a ton of methods to get only the relevant fields. Click the following for the APIs of [DateTime](http://joda-time.sourceforge.net/apidocs/org/joda/time/DateTime.html) and [Days](http://joda-time.sourceforge.net/apidocs/org/joda/time/Days.html)
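For the literal "greater than 20 years" test, the most direct route is the `Years` class (a short sketch, reusing `dateTime1`/`dateTime2` from above; note it counts *whole* years between the instants): ``` import org.joda.time.Years; Years diff = Years.yearsBetween(dateTime2, dateTime1); if (diff.getYears() > 20) { System.out.println("More than 20 whole years between the dates"); } ```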
How to make n() not count NAs in tidyverse? Consider the MWE below, where we have `Amt` indicating different amounts (from 1 to 40, with NAs) for each `Food` item and another variable indicating the `Site` of that food item. I wanted a summary median and a count `n()` of food items, but only counting those without `NA`. **MWE** ``` mwe <- data.frame( Site = sample(rep(c("Home", "Office"), size = 884)), Food = sample(rep(c("Banana","Apple","Egg","Berry","Tomato","Potato","Bean","Pea","Nuts","Onion","Carrot","Cabbage","Eggplant"), size=884)), Amt = sample(seq(1, 40, by = 0.25), size = 884, replace = TRUE) ) random <- sample(seq(1, 884, by = 1), size = 100, replace = TRUE) # to randomly introduce 100 NAs to Amt vector mwe$Amt[random] <- NA ``` **Data frame** ``` Site Food Amt 1 Office Cabbage 16.50 2 Home Apple 36.00 3 Office Egg 7.25 4 Home Onion 16.00 5 Office Eggplant 36.50 6 Home Nuts NA ``` **Summary Code** ``` dfsummary <- mwe %>% dplyr::group_by(Food, Site) %>% dplyr::summarise(Median = round(median(Amt, na.rm=TRUE), digits=2), N = n()) %>% ungroup() ``` **Output** ``` # A tibble: 6 x 4 Food Site Median N <fct> <fct> <dbl> <int> 1 Apple Home 17 34 2 Apple Office 22.2 34 3 Banana Home 19.5 34 4 Banana Office 19.9 34 5 Bean Home 20 34 6 Bean Office 18 34 ``` Some food items showed NA values, yet these still made their way into the `N` count. I simply do not want to count rows with `NA`s in the `Amt` vector.
We can `filter` at the top and then do the `summarise` without changing the code ``` library(dplyr) mwe %>% filter(!is.na(Amt)) %>% dplyr::group_by(Food, Site) %>% dplyr::summarise(Median = round(median(Amt, na.rm=TRUE), digits=2), N = n()) %>% ungroup() ``` --- Or another option is to change the `n()` to `sum(!is.na(Amt))` ``` mwe %>% dplyr::group_by(Food, Site) %>% dplyr::summarise(Median = round(median(Amt, na.rm=TRUE), digits=2), N = sum(!is.na(Amt))) %>% ungroup() ```
Margin-left and position left give different values with the same percentage I have a div with: ``` width:100%; max-width:100%; position:relative; overflow:hidden; ``` An immediate child of this div is: ``` .my-class { position:absolute; bottom:6px; padding-left:12px; } ``` I want the child div to line up with some other content. Nothing outside of this div is affecting it. When I use `left:30%` I get one number; when I use `margin-left:30%` I get a different one (which in this case is what I want). Does margin-left take padding into account and left doesn't? Or is there some other factor I've not considered?
Yes: the parent's padding is affecting the result. When `left` is `auto` (the default), the absolutely positioned child keeps its static position, which sits inside the parent's padding, and `margin-left` then offsets it from there. `left: 30%`, by contrast, is measured from the containing block's padding edge, so the two outcomes differ by the parent's left padding (both percentages resolve against the containing block's width). Take a look at this example: ``` div, span { border: 1px solid #000; height: 80px; width: 80px; } .left, .marginLeft { background: #aaf; margin: 10px 0 0 10px; padding: 10px; position: relative; } .abs { background: #faa; position: absolute; top: 0; } .left .abs { left: 100px; } .marginLeft .abs { margin-left: 100px; } ``` ``` <h3>Left</h3> <div class="left"> parent <div class="abs">left</div> </div> <h3>Margin left</h3> <div class="marginLeft"> parent <div class="abs">margin left</div> </div> ```
How can I add a request id in apache and forward it to underlying systems? I would like to add a unique id to each request done through apache which will be used in the access log and is forwarded to underlying systems in a header to be used in their logs. [![Request id](https://i.stack.imgur.com/vHYsP.jpg)](https://i.stack.imgur.com/vHYsP.jpg) What is the best solution to accomplish this?
[mod\_unique\_id](http://httpd.apache.org/docs/current/mod/mod_unique_id.html) will provide an environment variable `UNIQUE_ID` with a unique identifier for each request. You can add it to request headers with: ``` RequestHeader set uniqueid %{UNIQUE_ID}e ``` If you add that header to apache logs, for example: ``` LogFormat "%h %l %u %t \"%r\" %>s %b \"%{uniqueid}i\"" combined ``` you will get something like: ``` 10.0.2.2 - - [01/Nov/2016:23:12:40 +0000] "GET /index.html HTTP/1.1" 404 208 "WBkhaJRMNmj7U9aiFl2pzQAAAAA" ```
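If the underlying systems sit behind `mod_proxy`, the header set with `RequestHeader` is forwarded to the backend along with the proxied request, so each system can log it on its side. A rough sketch (the backend host, port and header name are placeholders): ``` RequestHeader set X-Request-Id %{UNIQUE_ID}e <Location "/app"> ProxyPass http://backend.internal:8080/app ProxyPassReverse http://backend.internal:8080/app </Location> ```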
How to find out how many physically mounted filesystems there are? I recently learned that you can mount two different types of filesystems: logical filesystems and physical filesystems. From my understanding, physical filesystems are located on disks that are physically connected to the machine, whereas logical filesystems exist somewhere remotely and are mounted through a network. A day ago, I sshed into a server and ran the command `mount | wc -l` and got the answer 17. Today I ran the same command and got 70. I am guessing `mount` lists both physically and logically mounted filesystems. Is there a way that I can count the number of just the physical filesystems that are mounted? Preferably with a short command?
“Logical” file systems aren’t necessarily mounted over the network; for example on your system with 70 mounted file systems, it’s likely most of those were file systems corresponding to kernel features rather than network file systems. Logical file systems include `sysfs`, `proc`, all the cgroup file systems, `tmpfs`, `devtmpfs`, etc., which are all “local” file systems. “Non-physical” file systems are identified by the kernel in `/proc/filesystems` using `nodev`, so you can use that to list “physical” file systems only, using `findmnt`: ``` findmnt -t $(grep -v nodev /proc/filesystems | paste -sd, - | tr -d \\t) ``` To count the file systems, drop the header and feed the output to `wc -l`: ``` findmnt -n -t $(grep -v nodev /proc/filesystems | paste -sd, - | tr -d \\t) | wc -l ``` It is possible to mount such file system types from image files and other non-device files, even remote block devices over the network; however this approach will give you good results on most systems. Another approach is to start from the disk devices themselves, using `lsblk`; `lsblk -f` will output the tree of physical devices through however many layers are required to reach actual mounted file systems. You can combine *that* with the above information about physical file systems to list only file systems which match a block device on the system: ``` lsblk -f | grep -F -f <(grep -v nodev /proc/filesystems | tr -d \\t) ``` Counting that gives the desired result: ``` lsblk -f | grep -F -f <(grep -v nodev /proc/filesystems | tr -d \\t) | wc -l ```
Does using custom data attributes produce browser compatibility issues? I have to choose between custom data tags or ids. I would like to choose custom data tags, but I want to be sure that they do not cause browser compatibility issues in the most widely used browsers today. I'm using jQuery 1.6 and my particular scenario involves a situation where I need to reference a commentId for several actions. ``` <div data-comment-id="comment-1" id="comment-1"> <a class="foo"></a> </div> ``` It's easier to extract data tags in jQuery: `$('.foo').data('commentId');` Extracting a substring from the id seems a bit complicated and could break for one reason or another: `<a id="comment-1"` Are there any sweeping merits or fatal flaws for either approach?
I would advise in favor of data attributes for the following reasons: - ids need to be unique document-wide. Thus they are limited in the semantics they can carry - you can have multiple data-attributes per element and probably less relevant in your case: - changing ids might break idrefs However, I'm not sure whether I understand your specs completely as extracting the element id in jQuery is as trivial as getting the data attribute: `$('.foo').attr('id');`. You might be interested in [Caniuse.com](http://caniuse.com/), a browser compatibility site for web technologies. If XHTML is an issue to you, you might also be interested in how to use custom data attributes in XHTML: see [here](https://stackoverflow.com/questions/4189036/custom-data-in-xhtml-1-0-strict) for a discussion on SO and [here](http://www.bennadel.com/blog/1453-Using-jQuery-With-Custom-XHTML-Attributes-And-Namespaces-To-Store-Data.htm) for an XHTML-compatible approach using namespaces.
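As a small illustration of the "multiple data attributes per element" point (the attribute names here are just examples): ``` <div class="foo" data-comment-id="comment-1" data-author-id="42"></div> <script> // jQuery camelCases the attribute names after the data- prefix: var commentId = $('.foo').data('commentId'); // "comment-1" var authorId = $('.foo').data('authorId'); // 42 (numeric strings are coerced) </script> ```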
The morass of Exceptions related to opening a FileStream Ok, so I have searched in many places for the answer to this question, but I'm open to any links if I missed something obvious. I am interested in producing reasonable error messages to the user when they attempt to open a particular file, but for whatever reason the program cannot access that file. I would like to distinguish between the following cases: - The file was locked by another process such that this process cannot write to it. - The user does not have the appropriate access privileges to write to the file (as in, their user permissions, as seen in the Properties screen for a file in Windows explorer, do not give the user write permission) - The file requires "elevated" permission to access the file. I am using a FileStream object. I have looked at the [msdn documentation for instantiating a FileStream](http://msdn.microsoft.com/en-us/library/5h0z48dh.aspx), and it is not at all clear to me which Exception does what for the above, and how to distinguish between them. I admit that my experience with Windows programming is limited, so I may be missing something obvious. My apologies if so.
Here's what you could do: 1) You could test whether you have rights to access the file **before** trying to access it. From [this SO thread](https://stackoverflow.com/a/1281638/870604), here is a method that should return true if the user has `Write` rights (i.e. the permissions shown when right-clicking on a file -> Properties -> Security). This covers your point (2) for inappropriate access privileges (do note that there may be something more robust/error-proof to get this information than the code below): ``` public static bool HasWritePermissionOnFile(string path) { bool writeAllow = false; bool writeDeny = false; FileSecurity accessControlList = File.GetAccessControl(path); if (accessControlList == null) { return false; } var accessRules = accessControlList.GetAccessRules(true, true, typeof(SecurityIdentifier)); if (accessRules == null) { return false; } foreach (FileSystemAccessRule rule in accessRules) { if ((FileSystemRights.Write & rule.FileSystemRights) != FileSystemRights.Write) { continue; } if (rule.AccessControlType == AccessControlType.Allow) { writeAllow = true; } else if (rule.AccessControlType == AccessControlType.Deny) { writeDeny = true; } } return writeAllow && !writeDeny; } ``` 2) Do try to instantiate your `FileStream`, and catch exceptions: ``` try { string file = "..."; bool hasWritePermission = HasWritePermissionOnFile(file); using (FileStream fs = new FileStream(file, FileMode.Open)) { } } catch (UnauthorizedAccessException ex) { // Insert some logic here } catch (FileNotFoundException ex) { // Insert some logic here } catch (IOException ex) { // Insert some logic here } ``` In your case (3) (file requires elevation), `UnauthorizedAccessException` is thrown. In your case (1) (file is locked by another process), `IOException` is thrown. You can then check the HRESULT of the exception for more details: ``` catch (IOException ex) { // Gets the HRESULT int hresult = Marshal.GetHRForException(ex); // See http://msdn.microsoft.com/en-us/library/windows/desktop/ms681382(v=vs.85).aspx // for system error code switch (hresult & 0x0000FFFF) { case 32: //ERROR_SHARING_VIOLATION Console.WriteLine("File is in use by another process"); break; } } ``` Now you should be able to distinguish your 3 use cases.