Cucumber step with numerous parameters Is there a way to group the parameters into a single one, e.g. pass a data structure that yields all of them? e.g. I want to avoid having methods with too many arguments: ``` Scenario Outline: My scenario has too many parameters When I perform my request with these inputs: <param1>, <param2>, <param3>, <param4>, <param5>, <param6>, <param7>, <param8>, <param9>, <param10>, <param11> Examples: | param1 | param2 | param3 | param4 | param5 | param6 | param7 | param8 | param9 | param10 | param11 | | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | ``` and then the method: ``` @Given("^When I perform my request with these inputs: (.+), (.+), (.+), (.+), (.+), (.+), (.+), (.+), (.+), (.+), (.+)$") public void tooMany(String param1, String param2, String param3, String param4, String param5, String param6, String param7, String param8, String param9, String param10, String param11) { ... ``` Are there better approaches to transfer that many inputs? Thank you
Using a data table as part of your step can help organize this information: ``` Scenario Outline: My scenario has too many parameters When I perform my request with the following inputs: | Field | Value | | param1 | <param1> | | param2 | <param2> | | param3 | <param3> | | param4 | <param4> | | param5 | <param5> | | param6 | <param6> | | param7 | <param7> | | param8 | <param8> | | param9 | <param9> | | param10 | <param10> | | param11 | <param11> | Examples: | param1 | param2 | param3 | param4 | param5 | param6 | param7 | param8 | param9 | param10 | param11 | | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | dummy | ``` And your step definition becomes (note that the `When` keyword is not part of the step text): ``` @When("I perform my request with the following inputs:") public void notTooManyAnymore(DataTable table) { // Use table to get the params } ``` Now you can extract the params from the `table` (see [Data Tables in Cucumber](https://www.toolsqa.com/cucumber/data-tables-in-cucumber/) and [Cucumber Data Tables](https://www.tutorialspoint.com/cucumber/cucumber_data_tables.htm)). You should be able to map the table to a POJO for some compile-time safety.
How to give session idle timeout in angular 6? We are maintaining a session based on user role. We want to implement timeout functionality when the session is idle for 5 min. We are using @ng-idle/core npm module to do that. My Service file: ``` import { ActivatedRouteSnapshot } from '@angular/router'; import { RouterStateSnapshot } from '@angular/router'; import {Idle, DEFAULT_INTERRUPTSOURCES, EventTargetInterruptSource} from '@ng-idle/core'; @Injectable() export class LoginActService implements CanActivate { constructor(private authService: APILogService, private router: Router,private idle: Idle) { idle.setIdle(10); idle.setTimeout(10); } canActivate( next: ActivatedRouteSnapshot, state: RouterStateSnapshot ): Observable<boolean>|Promise<boolean>|boolean { let role = localStorage.getItem('currentUser'); if (localStorage.getItem('currentUser')) { if(next.data[0] == role){ },600000) return true; } } else{ this.router.navigate(['/'], { queryParams: { returnUrl: state.url }}); return false; } } } ``` For sample, I have used setIdle timeout for 5 seconds, But it is not happening. Can somebody guide me how to do this?
You can use the [bn-ng-idle](https://www.npmjs.com/package/bn-ng-idle) npm package for user idle / session timeout detection in Angular apps. This blog post will help you: [Learn how to Handle user idleness and session timeout in Angular](https://prodevhub.com/2020/03/22/how-to-handle-user-idleness-and-session-timeout-in-angular/) ``` npm install bn-ng-idle ``` app.module.ts ``` import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; import { BnNgIdleService } from 'bn-ng-idle'; // import bn-ng-idle service @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule ], providers: [BnNgIdleService], // add it to the providers of your module bootstrap: [AppComponent] }) export class AppModule { } ``` app.component.ts ``` import { Component } from '@angular/core'; import { BnNgIdleService } from 'bn-ng-idle'; // import it to your component @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { constructor(private bnIdle: BnNgIdleService) { // initiate it in your component constructor this.bnIdle.startWatching(300).subscribe((res) => { if(res) { console.log("session expired"); } }) } } ``` In the above example, I have invoked the `startWatching(timeOutSeconds)` method with **300 seconds (5 minutes)** and subscribed to the observable; once the user has been idle for five minutes, the subscribe callback is invoked with the `res` parameter's value (a boolean) as true. By checking whether `res` is true, you can show your session timeout dialog or message. For brevity, I just logged the message to the console.
How does "Run Script" work in VS Code for Node.js applications when debugging? [![screen](https://i.stack.imgur.com/5EEyv.png)](https://i.stack.imgur.com/5EEyv.png) I added a config `start:debug` manually, but VS Code shows another one as well. Both execute the application, but when I run mine it does not show all the app console outputs in the terminal (e.g. errors, logs, etc.), while when I run VS Code's one everything works perfectly, and I'd prefer to use that config across our team. The problem is I can't check in the config, so on another machine it does not show up as expected. How does VS Code get that config and execute it? If I can replicate that in my config then I can check it in to my repo for others to use.
Here are the steps to solve your mystery: by following along, you'll both discover the task configuration settings for the elusive option, and discover how it was added to your list: 1. Create an empty folder (I named mine `so-70196209` after this question ID), and open it in a new VS Code workspace. 2. Create a `package.json` file in the folder. Make sure it has a `start:debug` script entry like this: `package.json`: ``` { "name": "so-70196209", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "start:debug": "echo \"Success\"", "test": "echo \"Error: no test specified\" && exit 1" }, "author": "", "license": "MIT" } ``` 3. In the VS Code menu, select "Run" > "Add Configuration..." [![run > add configuration](https://i.stack.imgur.com/KCna5.png)](https://i.stack.imgur.com/KCna5.png) 4. In the list that appears, select "Node.js": [![node.js](https://i.stack.imgur.com/44WDE.png)](https://i.stack.imgur.com/44WDE.png) A file at `.vscode/launch.json` will be created with a default task like this: `.vscode/launch.json`: ``` { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "type": "pwa-node", "request": "launch", "name": "Launch Program", "skipFiles": [ "<node_internals>/**" ], "program": "${file}" } ] } ``` > > You can delete this default task later if you don't want to keep it, but just leave it for now and follow along to the end. > > > 5. Select the "Run and Debug" icon in the [Activity Bar](https://code.visualstudio.com/docs/getstarted/userinterface). 6. In the "Run and Debug" Side Bar, select the dropdown menu and choose "Node.js...": [![node.js](https://i.stack.imgur.com/N6Jxn.png)](https://i.stack.imgur.com/N6Jxn.png) 7. In the list that appears, find the entry with the text "Run Script: start:debug". 
Find the **gear icon** on the right, and **select the gear**. > > If you hover over the gear, a tooltip will appear with the text "Edit Debug Configuration in launch.json" > > > [![run script: start:debug](https://i.stack.imgur.com/6U3Wf.png)](https://i.stack.imgur.com/6U3Wf.png) This will add a new entry to `.vscode/launch.json`, and this entry is the one that you've been searching for. (The reason why it wasn't in your launch config, but was in your dropdown list, is because you previously clicked the entry line at some point, but not the gear. I don't know why this adds it to the dropdown and not the config, but that's how it works right now.) The config file now looks like this: `.vscode/launch.json`: ``` { "version": "0.2.0", "configurations": [ { "type": "pwa-node", "request": "launch", "name": "Launch Program", "skipFiles": [ "<node_internals>/**" ], "program": "${file}" }, { "type": "node-terminal", "name": "Run Script: start:debug", "request": "launch", "command": "npm run start:debug", "cwd": "${workspaceFolder}" } ] } ``` The "Run and Debug" dropdown menu now has the entry you want: [![run and debug dropdown menu](https://i.stack.imgur.com/xhFSo.png)](https://i.stack.imgur.com/xhFSo.png) Problem solved!
Flask Testing - How to retrieve variables that were passed to Jinja? In Flask, how can I test what variables were returned to a Jinja template with `render_template`? ``` @app.route('/foo/'): def foo(): return render_template('foo.html', foo='bar') ``` In this example, I want to test that `foo` is equal to `"bar"`. ``` import unittest from app import app class TestFoo(unittest.TestCase): def test_foo(self): with app.test_client() as c: r = c.get('/foo/') # Prove that the foo variable is equal to "bar" ``` How can I do this?
This can be done using [signals](http://flask.pocoo.org/docs/0.11/signals/). I will reproduce the code snippet here: ``` import unittest from app import app from flask import template_rendered from contextlib import contextmanager @contextmanager def captured_templates(app): recorded = [] def record(sender, template, context, **extra): recorded.append((template, context)) template_rendered.connect(record, app) try: yield recorded finally: template_rendered.disconnect(record, app) class TestFoo(unittest.TestCase): def test_foo(self): with app.test_client() as c: with captured_templates(app) as templates: r = c.get('/foo/') template, context = templates[0] self.assertEquals(context['foo'], 'bar') ``` --- Here is another implementation that removes the `template` part and turns it into an iterator. ``` import unittest from app import app from flask import template_rendered from contextlib import contextmanager @contextmanager def get_context_variables(app): recorded = [] def record(sender, template, context, **extra): recorded.append(context) template_rendered.connect(record, app) try: yield iter(recorded) finally: template_rendered.disconnect(record, app) class TestFoo(unittest.TestCase): def test_foo(self): with app.test_client() as c: with get_context_variables(app) as contexts: r = c.get('/foo/') context = next(contexts) self.assertEquals(context['foo'], 'bar') r = c.get('/foo/?foo=bar') context = next(contexts) self.assertEquals(context['foo'], 'foo') # This will raise a StopIteration exception because I haven't rendered # any new templates next(contexts) ```
Why is computing the partition function expensive? The joint distribution of a undirected graph can be factorized as a product of potential functions over the maximal cliques of an undirected graph. $$ p(\mathsf{x} \mid \theta) = \frac {1} {Z(\theta)} \prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C) $$ - $\mathsf{x\_C}$ is a set of variables in the clique $C$ - $Z$ normalizes the distribution and is called the *partition function* given by, $$ Z(\theta) \triangleq \sum\_\mathsf{x} \prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C) $$ Why exactly is calculating $Z(\theta)$ expensive and how is this situation resolved? I'm a little confused about this.
A probability distribution needs to integrate to one. $$1 = \int\_{x\_1 \in \omega\_1} \int\_{x\_2 \in \omega\_2} \dots \int\_{x\_N \in \omega\_N} \frac {1} {Z(\theta)} \underbrace{\prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C) }\_{\text{this part is often known}}\,\text{d} x\_1 \text{d} x\_2 \dots \text{d} x\_N $$ And often you know the expression $\prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C) $ based on some theoretical grounds, but the normalisation constant $Z(\theta)$ is missing. We can multiply both sides of the equation above with $Z(\theta)$ giving $$\begin{array}{} Z(\theta) &=& Z(\theta)\iiint\_{{\bf x} \in \boldsymbol{\omega}} \frac {1} {Z(\theta)} \prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C) \,\text{d} {\bf x} \\ &=&\iiint\_{{\bf x} \in \boldsymbol{\omega}} \prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C) \,\text{d} {\bf x}\end{array}$$ and that integral is not so easy. --- Luckily you don't always need to know $Z(\theta)$. For instance when we compute a probability (density) ratio for two different values $\mathsf{x}$ and $\mathsf{x}^\prime$ $$\frac{p(\mathsf{x} \mid \theta)}{p(\mathsf{x}^\prime \mid \theta)} = \frac{\frac {1} {Z(\theta)} \prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C)}{\frac {1} {Z(\theta)} \prod\_C \psi\_C (\mathsf{x\_C}^\prime \mid \theta\_C)} = \frac{ \prod\_C \psi\_C (\mathsf{x\_C} \mid \theta\_C)}{ \prod\_C \psi\_C (\mathsf{x\_C}^\prime \mid \theta\_C)}$$ This is used for instance in Markov chain monte carlo sampling methods.
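To see why the sum is expensive, here is an illustrative brute-force sketch (the chain model, the `potential` function, and `theta` are hypothetical choices for illustration, not from the text above): for $N$ binary variables, $Z$ is a sum over every joint configuration, so the loop runs $2^N$ times.

```python
import itertools
import math

def potential(x_i, x_j, theta=1.0):
    # A toy pairwise clique potential that favours agreement.
    return math.exp(theta if x_i == x_j else -theta)

def partition_function(n, theta=1.0):
    # Brute force: enumerate all 2**n joint configurations of n binary
    # variables; the cliques here are the adjacent pairs of a chain.
    z = 0.0
    for x in itertools.product([0, 1], repeat=n):
        p = 1.0
        for i in range(n - 1):
            p *= potential(x[i], x[i + 1], theta)
        z += p
    return z

print(partition_function(4))  # already sums 16 terms; n = 30 would need ~1e9
```

In practice this is resolved by exploiting the graph structure (for a chain like this, the transfer-matrix / variable-elimination trick makes it linear in $n$), or by methods that avoid $Z(\theta)$ altogether, such as the probability-ratio trick above.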
Mysterious problem with floating point in LISP - time axis generation OK, I do know how floating point works. But it doesn't stop puzzling me whenever I deal with it. I am trying to write a time axis generation function. The idea is simple: to make something like this ``` (defun make-time-axis (start step stop) ...) ``` So, when you call it with e.g. ``` (make-time-axis 0 0.1 1.2) ``` the result will be ``` 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 ``` But whenever I do ``` (loop for i from 0.0 below 1.2 by 0.1 collect i) ``` or ``` (defun make-time-axis (last step stop) (cond ((< last stop ) (cons last (make-time-axis (+ last step) step stop ))) (t nil))) ``` etc., I get these results ``` (0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.70000005 0.8000001 0.9000001 1.0000001 1.1000001) ``` Can anybody give me a clue how to get what I want?
If you know how floating point works (you have read [What Every Computer Scientist Should Know About Floating Point Arithmetic](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.6768), right?), you should realize that you got exactly what you asked for. Mathematical numbers (be they integers, rationals, reals, complex, whatever) have many useful properties. For example, x/10 + y/10 = (x+y)/10. These properties don't hold for floating point numbers. Given your numerical results, it seems that in your implementation, the value of the floating-point number 0.1 is slightly above the value of the mathematical number 0.1 (which isn't exactly representable as floating point). When you add it up several times, the error eventually rises above the printing precision. Since you're using Lisp, it's easy to retain numbers in exact form (e.g. the rational 1/10 instead of the float 0.1) and convert them at the last minute. ``` (loop for i from 0 below 12/10 by 1/10 collect (float i)) (mapcar #'float (make-time-axis 0 1/10 12/10)) ```
Dynamic Programming technique for solving problems Is it possible to solve any Dynamic Programming problem using recursion+memoization instead of using tabulation/iteration? Or are there some problems where it is a must to use tabulation/iteration? Also, can we obtain the same time complexity when solving any problem using recursion+memoization? (I know the space complexity differs and a recursion overhead cost exists.)
Every ***Dynamic Programming problem can be expressed as a recurrence relation*** which can be **solved using recursion+memoization**, which in turn can be **converted into tabulation+iteration**. When you solve a DP problem using tabulation you solve the problem **bottom up**, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the original problem is then computed. When you solve a DP problem using memoization, you do it by maintaining a map of already solved sub-problems. You do it **top down** in the sense that you solve the "top" problem first (which typically recurses down to solve the sub-problems). The time complexity of a DP solution which uses tabulation+iteration is the same as that of a correct, equivalent memoization+recursion version. It is usually easier to determine the time complexity of a tabulation+iteration method. On the other hand, the memoization+recursion version of a DP solution is often more intuitive and readable.
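A minimal sketch of the equivalence, using Fibonacci as a stand-in recurrence (names are illustrative): both versions do O(n) work, one top-down with a cache, one bottom-up with a table.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down: recurse from the "top" problem; the cache stores
    # already-solved sub-problems so each n is computed once.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    # Bottom-up: fill a table from the smallest sub-problems upward.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_memo(30) == fib_tab(30) == 832040
```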
syscall read/write 1 byte only? I am new to assembly and trying to write a version of the "echo" built-in but only operating on 1 byte at a time. I have the following which works the way I want, except it overflows more than 1 byte on both read and write even though I explicitly say 1 byte in x2 on both syscalls. What am I doing wrong? Example run: ``` sh-4.2$ ./echo1b f f o o b b bar bar bazbazbaz bazbazbaz q sh-4.2$ ``` Here is the code: ``` .data temp: .byte 1 .text .globl _start _start: /* read one byte from stdin, store to temp */ mov x0, #0x0 adr x1, temp mov x2, #0x1 mov x8, #0x3F svc #0x0 /* write newline to stdout */ mov x0, #0x1 mov x1, #0xA mov x2, #0x1 mov x8, #0x40 svc #0x0 /* if byte in temp is "q", exit */ mov x5, #0x71 ldr x1, temp cmp x1, x5 beq exit /* otherwise, write it to stdout and repeat */ mov x0, #0x1 adr x1, temp mov x2, #0x1 mov x8, #0x40 svc #0x0 b _start exit: /* exit cleanly */ eor x0, x0, x0 eor x1, x1, x1 eor x2, x2, x2 mov x8, #0x5D svc #0x0 ```
There are several issues in your code: - As mentioned in the comments, the output buffer's address has to be in `x1` when calling `sys_write`, as you did with `temp` - When comparing `temp` with the `'q'` character, you have to use `ldrb w1, [x0]` instead of `ldr x1, temp`, where `x0` points to `temp`. The latter would read 8 bytes into the 64-bit register, and it's not guaranteed that the upper seven bytes are zero. I also improved some parts of your code: - `cmp` can be used with a 12 bit immediate so there's no need to put `0x71` in a register. - Moving the second `sys_write` call before `_start` avoids the unconditional jump. - `sys_exit` uses only `x0` as parameter so there's no need to set `x1` and `x2` to zero. Here's the final code, tested on Raspbian 4.19 (debian based): ``` .data temp: .byte 1 newline: .byte 0x0A .text .globl _start loop: // 4: Otherwise, write it to stdout and repeat mov x0, #0x1 // int fd adr x1, temp // void* buf mov x2, #0x1 // size_t count mov x8, #0x40 // sys_write svc #0x0 _start: // 1: Read one byte from stdin and store to temp (including newline) mov x0, #0x0 // int fd adr x1, temp // void* buf mov x2, #0x1 // size_t count mov x8, #0x3F // sys_read svc #0x0 // 2: If byte in temp is 'q', exit adr x0, temp ldrb w1, [x0] // instead of temp cmp x1, #0x71 bne loop // 5: Exit cleanly eor x0, x0, x0 // int status mov x8, #0x5D // sys_exit svc #0x0 ``` **Edit after comment:** To flush stdin at exit, you could add these lines before step 5: ``` // 5: Flush stdin (read until newline) flush: mov x0, #0x0 // int fd adr x1, temp // void* buf mov x2, #0x1 // size_t count mov x8, #0x3F // sys_read svc #0x0 adr x0, temp ldrb w1, [x0] cmp x1, #0x0A bne flush // loop until the byte read == 0x0A ```
Why does tesseract fail to read text off this simple image? I have read mountains of posts on pytesseract, but I cannot get it to read text off a dead simple image; It returns an empty string. Here is the image: [![TestImage](https://i.stack.imgur.com/J2ojU.png)](https://i.stack.imgur.com/J2ojU.png) I have tried scaling it, grayscaling it, and adjusting the contrast, thresholding, blurring, everything it says in other posts, but my problem is that I don't know what the OCR wants to work better. Does it want blurry text? High contrast? Code to try: ``` import pytesseract from PIL import Image print pytesseract.image_to_string(Image.open(IMAGE FILE)) ``` As you can see in my code, the image is stored locally on my computer, hence `Image.open()`
Trying something along the lines of ``` import pytesseract from PIL import Image import requests import io response = requests.get('https://i.stack.imgur.com/J2ojU.png') img = Image.open(io.BytesIO(response.content)) text = pytesseract.image_to_string(img, lang='eng', config='--psm 7') print(text) ``` with `--psm` values equal or larger than 6 did yield "Gm" for me. If the image is stored locally (and in your working directory), just drop the `response` variable and change the definition of `text` with the lines ``` image_name = "J2ojU.png" # or whatever appropriate text = pytesseract.image_to_string(Image.open(image_name), lang='eng', config='--psm 7') ```
Incrementing state value by one using React In React I am trying to make a button increment a value stored in state. However, with the code below, my value is set to undefined or NaN in handleClick. ``` class QuestionList extends React.Component { constructor(props) { super(props); this.state = {value: 0}; // This binding is necessary to make `this` work in the callback this.handleClick = this.handleClick.bind(this); } handleClick = (prevState) => { this.setState({value: prevState.value + 1}); console.log(this.state.value) } ``` Can you tell me why this is happening? It should be correct according to the docs here: <https://facebook.github.io/react/docs/state-and-lifecycle.html>
Because you are using the handleClick function incorrectly. Here: ``` handleClick = (prevState) => { .... } ``` `prevState` will be an event object passed to handleClick function, you need to use prevState with setState, like this: ``` handleClick = () => { this.setState(prevState => { return {count: prevState.count + 1} }) } ``` Another issue is, setState is async so `console.log(this.state.value)` will not print the updated state value, you need to use callback function with setState. Check more details about [async behaviour of setState](https://stackoverflow.com/questions/42593202/why-calling-setstate-method-doesnt-mutate-the-state-immediately/42593250#42593250) and how to check updated value. Check the working solution: ``` class App extends React.Component { constructor(props){ super(props); this.state={ count: 1} } onclick(type){ this.setState(prevState => { return {count: type == 'add' ? prevState.count + 1: prevState.count - 1} }); } render() { return ( <div> Count: {this.state.count} <br/> <div style={{marginTop: '100px'}}/> <input type='button' onClick={this.onclick.bind(this, 'add')} value='Inc'/> <input type='button' onClick={this.onclick.bind(this, 'sub')} value='Dec'/> </div> ) } } ReactDOM.render( <App />, document.getElementById('container') ); ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script> <div id='container'></div> ```
What does 3 '/'s in a row do in C#? I was coding along in lala land when suddenly I typed a 3rd '/' in a comment. Originally a comment goes **green** with 2 '/'s **:** ![Subject1](https://i.stack.imgur.com/YvNS8.png) But with the third slash, it goes **grey** '///' **:** ![Subject2](https://i.stack.imgur.com/ldDs7.png) Does this have any specific meaning? Or did it just change color?
It denotes an XmlComment, more of which can be read about on [MSDN](http://msdn.microsoft.com/en-us/magazine/cc302121.aspx). For example, ``` /// <summary> /// Method summary - a method that does something /// </summary> /// <param name="i">Description of param</param> /// <param name="s">Description of param</param> /// <returns>An object of some sort or other</returns> public object AMethod(int i, string s) { } ``` Edit: as suggested below, it also provides [IntelliSense](http://en.wikipedia.org/wiki/IntelliSense) guidance, as the image below demonstrates. ![Enter image description here](https://i.stack.imgur.com/yPJiA.png)
How to invoke other Cloud Firebase Functions from a Cloud Function Let's say I have a [Cloud Firebase Function](https://firebase.google.com/docs/functions/) - called by a cron job - that produces 30+ tasks every time it's invoked. These tasks are quite slow (5 - 6 second each in average) and I can't process them directly in the original because it would time out. So, the solution would be invoking another "worker" function, once per task, to complete the tasks independently and write the results in a database. So far I can think of three strategies: 1. Pubsub messages. That would be amazing, but it seems that [you can only listen on pubsub messages from within a Cloud Function](https://firebase.google.com/docs/reference/functions/functions.pubsub), not create one. Resorting to external solutions, like having a GAE instance, is not an option for me. 2. Call the worker http-triggered Firebase Cloud Function from the first one. That won't work, I think, because I would need to wait for a response from the all the invoked worker functions, after they finish and `send`, and my original Function would time out. 3. Append tasks to a real time database list, then have a worker function triggered by each database change. The worker has to delete the task from the queue afterwards. That would probably work, but it feels there are a lot of moving parts for a simple problem. For example, what if the worker throws? Another cron to "clean" the db would be needed etc. 
Another solution that comes to mind is [firebase-queue](https://github.com/firebase/firebase-queue), but its README explicitly states: > > "There may continue to be specific use-cases for firebase-queue, > however if you're looking for a general purpose, scalable queueing > system for Firebase then it is likely that building on top of Google > Cloud Functions for Firebase is the ideal route" > > > It's not officially supported and they're practically saying that we should use Functions instead (which is what I'm trying to do). I'm a bit nervous on using in prod a library that might be abandoned tomorrow (if it's not already) and would like to avoid going down that route.
## Sending Pub/Sub messages from Cloud Functions Cloud Functions are run in a fairly standard Node.js environment. Given the breadth of the Node/NPM ecosystem, the amount of things you can do in Cloud Functions is quite broad. > > it seems that you can only listen on pubsub messages from within a Cloud Function, not create one > > > You can publish new messages to Pub/Sub topics from within Cloud Functions using the regular Node.js module for Pub/Sub. See the [Cloud Pub/Sub documentation for an example](https://cloud.google.com/nodejs/getting-started/using-pub-sub#publishing_events_to_cloud_pubsub). ## Triggering new actions from Cloud Functions through Database writes This is also a fairly common pattern. I usually have my subprocesses/workers clean up after themselves at the same moment they write their result back to the database. This works fine in my simple scenarios, but your mileage may of course vary. If you're having a concrete cleanup problem, post the code that reproduces the problem and we can have a look at ways to make it more robust.
Meteor.Collection.ObjectID() vs MongoDB ObjectId() Why does Meteor.js use its own algorithm for IDs? Why doesn't it use MongoDB's ObjectId()?
Meteor uses the same method for object IDs if you choose to use it: `Meteor.Collection.ObjectID()` is the same as MongoDB's `ObjectID`; it's just under the `Meteor.Collection` namespace. It uses EJSON to hold object IDs in ordinary JSON on the client end. That's because there are basically two databases with Meteor: **Minimongo** This is a sort of cache of MongoDB on the client end. The data is downloaded from the main MongoDB on the server to this one when the browser loads up. When changes are made they are pushed up to the server. **Server MongoDB** This is the original MongoDB from 10gen on the server. So because of these two databases, Meteor needs to wrap MongoDB functionality in `Meteor.Collection` and let you use the same code on both the client and server. By default Meteor won't use Object IDs; it'll use a sort of random alphanumeric text. This is done so you can easily use IDs in your URLs and in your HTML attributes. If you do use `new Meteor.Collection.ObjectID()` you will get an `ObjectID` object that follows MongoDB's ObjectID specification on the server end. The timestamp value in the Object ID isn't preserved, but this shouldn't really do any harm.
Approximate the distribution of the sum of ind. Beta r.v. If $X\_i$ has a Beta distribution $\beta(1,K)$, what is the best approximation for the distribution of $ S=\sum\_{i=1}^N X\_i$ when the $X\_{i}$ are independent and $N$ is finite?
If you want better approximations than what you get from the central limit theorem, there are results in a book dedicated exclusively to the beta distribution: [http://www.amazon.com/Handbook-Beta-Distribution-Applications-Statistics/dp/0824753968/ref=sr\_1\_1?s=books&ie=UTF8&qid=1403444915&sr=1-1&keywords=beta+distribution](http://rads.stackoverflow.com/amzn/click/0824753968) (On the amazon.com website you can search within this book!) Around page 70 there are exact results for the sum of two independent beta distributions, and there they also find an approximation by assuming the sum has a generalized beta distribution and then equating moments. On page 85 they give approximations for general sums using the same method, equating moments. Around pages 85-87 they give references you can follow up.
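As a quick sanity check for any such moment-matching approximation (a simulation sketch, not code from the book; the choices of `N`, `K`, and sample count are arbitrary), you can compare the empirical moments of $S$ against the closed-form moments $N/(1+K)$ and $N\,K/((1+K)^2(2+K))$:

```python
import random
import statistics

def simulate_sums(n_terms, k, n_samples, seed=0):
    # Draw n_samples realisations of S = sum of n_terms iid Beta(1, k) variables.
    rng = random.Random(seed)
    return [sum(rng.betavariate(1, k) for _ in range(n_terms))
            for _ in range(n_samples)]

def beta_moments(k):
    # Mean and variance of a single Beta(1, k) term.
    mean = 1.0 / (1.0 + k)
    var = k / ((1.0 + k) ** 2 * (2.0 + k))
    return mean, var

N, K = 10, 3.0
sums = simulate_sums(N, K, 20000)
mean, var = beta_moments(K)
print(statistics.mean(sums), N * mean)      # empirical vs. theoretical mean
print(statistics.variance(sums), N * var)   # empirical vs. theoretical variance
```

Any candidate approximating distribution (normal from the CLT, or a generalized beta fitted by equating moments) should reproduce these two moments before you trust its tails.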
Laravel 4 database seed doesn't work I followed this tutorial: <http://fideloper.com/post/41750468389/laravel-4-uber-quick-start-with-auth-guide?utm_source=nettuts&utm_medium=article&utm_content=api&utm_campaign=guest_author> And this tutorial: <http://laravelbook.com/laravel-database-seeding/> But when I try to run `php artisan db:seed`, nothing happens. I try this: ``` <?php // app/database/seeds/groups.php return array( 'table' => 'groups', array( 'name' => 'Administrador', 'description' => '<p>Permissão total no sistema</p>', 'created_at' => new DateTime, 'updated_at' => new DateTime ), array( 'name' => 'Moderadores', 'description' => '<p>Podem apenas postar e moderar comentários</p>', 'created_at' => new DateTime, 'updated_at' => new DateTime ) ); ``` And next: `php artisan db:seed`. ``` php artisan db:seed --env=local Database seeded! ``` But: ``` mysql> select * from groups; Empty set (0.00 sec) ```
The example in the tutorial is wrong - because there was a change to the way seeds work between Beta 1 and Beta 2. Change your `DatabaseSeeder.php` file to this - and it will work for the tutorial: ``` <?php class DatabaseSeeder extends Seeder { /** * Run the database seeds. * * @return void */ public function run() { $this->call('UserTableSeeder'); } } class UserTableSeeder extends Seeder { public function run() { DB::table('users')->delete(); User::create(array( 'id' => 1, 'username' => 'firstuser', 'password' => Hash::make('first_password'), 'created_at' => new DateTime, 'updated_at' => new DateTime )); User::create(array( 'id' => 2, 'username' => 'seconduser', 'password' => Hash::make('second_password'), 'created_at' => new DateTime, 'updated_at' => new DateTime )); } } ``` Now run `php artisan db:seed` - and it will work.
Accessing model through Varien\_Event\_Observer I have a custom observer in Magento 1.6.2.0 that is called when a CMS page is saved or deleted (events cms\_page\_delete\_before/cms\_page\_save\_before). I have verified (using Mage::log()) that the observer is working, however when I try the following: ``` public function getCmsUrl(Varien_Event_Observer $observer) { $url = $observer->getEvent()->getPage()->getIdentifier(); return $url; } ``` I get nothing returned (rather than "about-us" or "enable-cookies" or whatever URL path the CMS page has). The following code, however, works perfectly fine: ``` public function getProductUrl(Varien_Event_Observer $observer) { $baseUrl = $observer->getEvent()->getProduct()->getBaseUrl(); return $baseUrl; } ``` Can someone let me know what the correct way of accessing a CMS page is when passed via an observer? Thanks in advance for any help/tips/pointers :-)
The events `cms_page_delete_before` and `cms_page_save_before` are fired in `Mage_Core_Model_Abstract`. This is how it looks in the `beforeSave` function:

```
Mage::dispatchEvent($this->_eventPrefix.'_save_before', $this->_getEventData());
```

As you can see, it uses a variable `_eventPrefix` to construct the event key. In the CMS page model, this is set to `cms_page`. Also notice the part `$this->_getEventData()`. This is how the model, in this case the CMS page, is passed to the observer:

```
protected function _getEventData()
{
    return array(
        'data_object'       => $this,
        $this->_eventObject => $this,
    );
}
```

As you can see, the object has two names, `data_object` and a name defined in a variable, `_eventObject`. In the product model, the name is set to `product`, but in the CMS page model, the variable is missing. Apparently the Magento team forgot to put this in, and as a result, the default name from the core model is used:

```
protected $_eventObject = 'object';
```

That means you can get the CMS page in your observer by using `getObject`:

```
public function myObserver(Varien_Event_Observer $observer)
{
    $page = $observer->getEvent()->getObject();
}
```
.NET Core WebAPI VueJS template publish issue I'm trying out the "VueJS with Asp.Net Core 3.1 Web API Template" found [here](https://marketplace.visualstudio.com/items?itemName=alexandredotnet.netcorevuejs) and it works quite smoothly during development. However, I wanted to see how it handles publishing, and I can't manage to get it working. When running publish to folder, it doesn't move the clientapp/dist folder to the output directory, which is OK, so I thought I'd do it manually. So I've tried moving the contents of the dist folder to the output directory with the following paths:

- "/publish/clientapp/dist"
- "/publish/dist"
- "/publish/clientapp"

But none of the above seems to work; I get the following error when running the .dll file:

```
fail: Microsoft.AspNetCore.Server.Kestrel[13] Connection id "0HLTD93CRG52F", Request id "0HLTD93CRG52F:00000001": An unhandled exception was thrown by the application. System.InvalidOperationException: The SPA default page middleware could not return the default page '/index.html' because it was not found, and no other middleware handled the request. Your application is running in Production mode, so make sure it has been published, or that you have built your SPA manually. Alternatively you may wish to switch to the Development environment.
at Microsoft.AspNetCore.SpaServices.SpaDefaultPageMiddleware.<>c__DisplayClass0_0.<Attach>b__1(HttpContext context, Func`1 next) at Microsoft.AspNetCore.Builder.UseExtensions.<>c__DisplayClass0_1.<Use>b__1(HttpContext context) at Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware.TryServeStaticFile(HttpContext context, String contentType, PathString subPath) at Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware.Invoke(HttpContext context) at Microsoft.AspNetCore.Builder.UseExtensions.<>c__DisplayClass0_2.<Use>b__2() at Microsoft.AspNetCore.SpaServices.SpaDefaultPageMiddleware.<>c__DisplayClass0_0.<Attach>b__0(HttpContext context, Func`1 next) at Microsoft.AspNetCore.Builder.UseExtensions.<>c__DisplayClass0_1.<Use>b__1(HttpContext context) at Microsoft.AspNetCore.Routing.EndpointMiddleware.Invoke(HttpContext httpContext) at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context) at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application) ``` This is my "UseSpa" in Startup.cs: ``` app.UseSpa(spa => { if (env.IsDevelopment()) spa.Options.SourcePath = "ClientApp"; else spa.Options.SourcePath = "clientapp/dist"; if (env.IsDevelopment()) { spa.UseVueCli(npmScript: "serve"); } }); ``` With the above code, I would assume my dist folder should be located in /publish/clientapp/dist, which I've tried, but even then, I get the error mentioned above. I hope someone can point me in the right direction - thanks in advance :)
There seems to be a bug in the template: the `ClientApp` folder is actually named `clientapp`, but all the related code in the startup treats it as `ClientApp`.

1. The template doesn't configure a task that builds the Vue.js app for you. To do that, add a task in your `csproj` file:

```
<PropertyGroup>
  <SpaRoot>clientapp\</SpaRoot>
  <DefaultItemExcludes>$(DefaultItemExcludes);$(SpaRoot)node_modules\**</DefaultItemExcludes>
</PropertyGroup>

<ItemGroup>
  <!-- Don't publish the SPA source files, but do show them in the project files list -->
  <Content Remove="$(SpaRoot)**" />
  <None Remove="$(SpaRoot)**" />
  <None Include="$(SpaRoot)**" Exclude="$(SpaRoot)node_modules\**" />
</ItemGroup>

<Target Name="DebugEnsureNodeEnv" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('$(SpaRoot)node_modules') ">
  <!-- Ensure Node.js is installed -->
  <Exec Command="node --version" ContinueOnError="true">
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
  </Exec>
  <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />
  <Message Importance="high" Text="Restoring dependencies using 'npm'. This may take several minutes..." />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
</Target>

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="$(SpaRoot)dist\**; $(SpaRoot)dist-server\**" />
    <DistFiles Include="$(SpaRoot)node_modules\**" Condition="'$(BuildServerSideRenderer)' == 'true'" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
      <ExcludeFromSingleFile>true</ExcludeFromSingleFile>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>
```

2. The code above is mostly copied from the standard ASP.NET Core Angular template. It will build your Vue.js app into the `clientapp/dist` folder when publishing. In order to make ASP.NET Core aware of this, configure your SpaStaticFiles service as below:

```
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "clientapp/dist";
});
```

3. Finally, you don't need the dev-server source path in the production environment, since the app has already been built automatically:

```
app.UseSpa(spa =>
{
    if (env.IsDevelopment())
        spa.Options.SourcePath = "clientapp";
    else
        spa.Options.SourcePath = "dist";
});
```
How can I access cookie-session from client side? I am building a single-page application using NodeJS, and want to use my cookie session (cookie-session npm) to verify if the user is logged in or not. From my Node server side I can get and set the session cookie, but I do not know how to get it from my client side. This is how I am setting it up from my server side:

```
req.session.user_id = user[0]._id;
```

Where `user[0]._id` is my user id that I get from my mongodb.
So let's assume you've configured `cookie-session` something like this:

```
var cookieSession = require('cookie-session');

app.use(cookieSession({
  keys: ['secret']
}));
```

Then let's store some data in the session:

```
req.session.user_id = 123;
```

If you look in your browser's dev tools you'll see 2 cookies set:

```
express:sess = eyJ1c2VyX2lkIjoxMjN9
express:sess.sig = 01I_Rx2gACezZI1tdl2-NvxPq6w
```

The cookie `express:sess` is base64 encoded. If we decode it we get `{"user_id":123}`. It's important to appreciate that the session data is being stored in the cookie itself - this isn't just an id for the session. The other cookie, `express:sess.sig`, is the signature. This signature is generated using the key (`secret` in this example) and is used to help prevent tampering. It's easy for anyone to modify `express:sess`, but unless they can also generate the corresponding `express:sess.sig` the server will know it's been changed. All that said, I suggest you take a look at the `express-session` middleware. That also uses cookies, but it only uses them to store the session id. No data is stored in the cookie; that is all stored on the server. This is much more akin to how sessions work in most other web frameworks, but I can't say for certain which approach is best suited to your needs. Whichever approach you use, the cookie will be set to `httponly` by default. You'll be able to verify this in your browser's dev tools. This means that it's included on HTTP requests but isn't accessible via client-side JavaScript. This is a security measure designed to make it more difficult for malicious code to steal the cookie. You can disable this security feature in `cookie-session` using:

```
app.use(cookieSession({
  httpOnly: false,
  keys: ['secret']
}));
```

You'll then be able to access those cookies using `document.cookie`. I reiterate that this is a security measure and turning it off isn't recommended.
It's impossible for me to judge whether this is a genuine concern in your application. It isn't clear from your question whether you actually want to parse the values out of the cookie or just check for its existence. If you need to parse it then you'll need to base64 decode the relevant cookie value and then JSON decode it. There are various alternative approaches you might adopt to keep the cookies `httponly`. Without knowing more about what you're going to do with this information it's difficult to be specific. If you're using Express views (i.e. template rendering) then you can do all the work in the template. If you're in SPA territory then you could maybe use an AJAX request to gather the relevant information. At a pinch you could even use another cookie to give you the information you need while keeping the session cookies safe.
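If you do need to parse it, here is a minimal sketch of the decoding step described above (Node's `Buffer` shown; in a browser you'd use `atob` instead). Note this only reads the payload and does not verify the signature:

```javascript
// Decode the base64 payload of an express:sess style cookie value.
// Read-only inspection: the .sig cookie is NOT verified here.
function decodeSessionCookie(cookieValue) {
  const json = Buffer.from(cookieValue, 'base64').toString('utf8');
  return JSON.parse(json);
}

// Using the example value from above:
const session = decodeSessionCookie('eyJ1c2VyX2lkIjoxMjN9');
console.log(session.user_id); // 123
```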
Why instanceof sometimes doesn't compile and sometimes returns false? In the following example - test instanceof java.util.Map returns false - test instanceof java.util.HashMap doesn't compile but - mymap instanceof Set returns false and - mymap instanceof HashSet returns false (why does it compile?!?) Why? They look so similar to each other! ``` import java.util.Map; import java.util.HashMap; import java.util.Set; import java.util.HashSet; public class InstanceofTest { public static class Test {} public static void main(String[] args) { // -- left operand references a Class instance Test test = null; // 1. outputs: false System.out.println(test instanceof Map); // 2. COMPILATION ERROR System.out.println(test instanceof HashMap); // -- left operand references an Interface instance Map mymap = new HashMap(); // 3. outputs: false System.out.println(mymap instanceof Set); // 4. outputs: false System.out.println(mymap instanceof HashSet); } } ```
The instanceof operator is used in Java to test if a reference points to an object that is an instance of a specific class or interface. For example:

```
String myString = "test string";
System.out.println(myString instanceof String); // true, myString is a String
System.out.println(myString instanceof Object); // true, myString is a String, and so it is an Object, too
```

For **null** references, **instanceof** always returns **false**:

```
System.out.println(null instanceof Object); // false, null doesn't reference any object
```

Sometimes the compiler knows for sure that a reference can never be an instance of a specific class, because the type of the reference is not in the hierarchy tree of the specific class. For example, in the following example the compiler complains: "***Incompatible conditional operand types String and Map***"

```
String myString = "test string";
System.out.println(myString instanceof java.util.Map);
```

Now, things get interesting. In the following example, we have one **non-final class Test** and one **final class TestFinal**.

```
public class InstanceofTest {
    public static class Test {}
    public static final class TestFinal {}

    public static void main(String[] args) {
        Test test = null;
        // 1. outputs: false
        System.out.println(test instanceof java.util.Map);
        // 2. COMPILATION ERROR
        System.out.println(test instanceof java.util.HashMap);

        TestFinal testFinal = null;
        // 3. COMPILATION ERROR
        System.out.println(testFinal instanceof java.util.Map);
        // 4. COMPILATION ERROR
        System.out.println(testFinal instanceof java.util.HashMap);
    }
}
```

Why does it return false in 1., but fail to compile in 2., 3., 4.? In 1., we are testing the reference test against an **Interface** (java.util.Map). The compiler cannot be sure that test isn't an instance of java.util.Map. In fact, it may happen that test references an object whose class implements java.util.Map and extends the class Test.
So, there isn't a compilation error, but it returns false at runtime. In 2., we are testing the reference test against a **Class**. In this case, the compiler can be sure that the object referenced by the test variable cannot extend java.util.Map, because the Test class doesn't extend java.util.Map, and every subclass of Test will extend the class Test (or one of its subclasses), so it cannot extend java.util.Map at the same time. In 3., we are testing the reference testFinal against an **Interface**. It looks similar to 1., but it's quite different, because the class TestFinal cannot be subclassed, so there's no way that an instance of TestFinal could be an instance of java.util.Map too. In 4., we are testing the reference testFinal against a **Class**. As in 2., the compiler can be sure that the object referenced by the testFinal variable cannot extend java.util.Map. There is still another case that is worth considering:

```
List myList = new ArrayList();
// 5. outputs: false
System.out.println(myList instanceof java.util.Map);
// 6. outputs: false
System.out.println(myList instanceof java.util.HashMap);

ArrayList myArrayList = new ArrayList();
// 7. outputs: false
System.out.println(myArrayList instanceof java.util.Map);
// 8. COMPILATION ERROR
System.out.println(myArrayList instanceof java.util.HashMap);
```

In 5. and 6., myList is a reference of an interface type; theoretically, there could exist an instance of List that implements Map or that extends HashMap. 7. is analogous to 1. 8. is analogous to 2.

Conclusion:

A. ***null instanceof AnyClass*** (or ***AnyInterface***) always returns false

B. ***myreferenceToAClass instanceof MyInterface*** may return true or false, depending on the context

C. ***myreferenceToAnInterface instanceof AnyClass*** (or ***AnyInterface***) may return true or false, depending on the context

D.
***myreferenceToAClass instanceof MyClass***:

- compilation error if myreference's class doesn't belong to MyClass's hierarchy tree
- returns true or false, depending on the context, if myreference's class belongs to MyClass's hierarchy tree
How to get every nth column in pandas? I have a dataframe which looks like this:

```
   a1   b1   c1   a2   b2   c2   a3  ...
x  1.2  1.3  1.2  ...  ...  ...  ...
y  1.4  1.2  ...  ...  ...  ...  ...
z  ...
```

What I want is to group by every nth column. In other words, I want a dataframe with all the as, one with bs and one with cs:

```
   a1   a2   a3
x  1.2  ...  ...
y
z
```

In another SO question I saw that it is possible to do `df.iloc[::5,:]`, for example, to get every 5th row. I could of course do `df.iloc[:,::3]` to get every 3rd column, but it doesn't work for getting all of a, b and c. Any ideas?
slice the columns: ``` df[df.columns[::2]] ``` To get every nth column Example: ``` In [2]: cols = ['a1','b1','c1','a2','b2','c2','a3'] df = pd.DataFrame(columns=cols) df Out[2]: Empty DataFrame Columns: [a1, b1, c1, a2, b2, c2, a3] Index: [] In [3]: df[df.columns[::3]] Out[3]: Empty DataFrame Columns: [a1, a2, a3] Index: [] ``` You can also filter using `startswith`: ``` In [5]: a = df.columns[df.columns.str.startswith('a')] df[a] Out[5]: Empty DataFrame Columns: [a1, a2, a3] Index: [] ``` and do the same for b cols and c cols etc.. You can get a set of all the unique col prefixes using the following: ``` In [19]: df.columns.str.extract(r'([a-zA-Z])').unique() Out[19]: array(['a', 'b', 'c'], dtype=object) ``` You can then use these values to filter the columns using `startswith`
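Putting the two ideas together, here is a sketch (the column names are just the example's) that builds one sub-DataFrame per prefix with a plain dict comprehension:

```python
import pandas as pd

df = pd.DataFrame(columns=['a1', 'b1', 'c1', 'a2', 'b2', 'c2', 'a3'])

# Unique single-letter prefixes, then one filtered DataFrame per prefix
prefixes = df.columns.str.extract(r'([a-zA-Z])', expand=False).unique()
groups = {p: df[[c for c in df.columns if c.startswith(p)]] for p in prefixes}

print(list(groups['a'].columns))  # ['a1', 'a2', 'a3']
```

Each value in `groups` is a view of the matching columns, so `groups['b']` gives you all the b columns in their original order.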
Efficient way to implement multiple dispatch for many similar functions I am writing some software that involves a library of various functional forms of a quantity. I want to leverage Julia's multiple dispatch, but want to know if there's a more efficient way to implement this procedure. Consider, for example, a library that contains the following two functions:

```
function firstfunction(x::Float64)
    return 2*x
end

function secondfunction(x::Float64)
    return x^2
end
```

I would also like to implement multiple dispatch methods that can apply these functional forms to a vector of values, or an array of vectors (matrix). I could do this as follows:

```
function firstfunction(x::Float64)
    return 2*x
end

function firstfunction(xs::Vector{Float64})
    f = similar(xs)
    for i = 1:size(xs, 1)
        f[i] = 2*xs[i]
    end
    return f
end

function firstfunction(xss::Matrix{Float64})
    f = similar(xss)
    for i = 1:size(xss, 1)
        for j = 1:size(xss, 2)
            f[i, j] = 2*xss[i, j]
        end
    end
    return f
end

function secondfunction(x::Float64)
    return x^2
end

function secondfunction(xs::Vector{Float64})
    f = similar(xs)
    for i = 1:size(xs, 1)
        f[i] = xs[i]^2
    end
    return f
end

function secondfunction(xss::Matrix{Float64})
    f = similar(xss)
    for i = 1:size(xss, 1)
        for j = 1:size(xss, 2)
            f[i, j] = xss[i, j]^2
        end
    end
    return f
end
```

But since all three versions of the function use the same kernel, and the actions of the various dispatches are the same across all functional forms, I'd like to know if there's a more efficient way to write this such that defining a new function for the library (e.g. `thirdfunction`) only involves explicitly writing the kernel function, rather than having to type out `2*n` essentially identical functions for `n` functional forms in the library.
Just do:

```
function thirdfunction(x::Union{Number, Array{<:Number}})
    return x.^0.5
end
```

This is the beauty of multiple-dispatch in Julia:

```
julia> thirdfunction(4)
2.0

julia> thirdfunction([4,9])
2-element Array{Float64,1}:
 2.0
 3.0

julia> thirdfunction([4 9; 16 25])
2×2 Array{Float64,2}:
 2.0  3.0
 4.0  5.0
```

Note, however, that in your case it might make sense to have only a single representation of a function and let the user decide to vectorize it using the dot operator (`.`):

```
function fourthfunction(x::Real)
    min(x, 5)
end
```

And now the user just needs to add a dot when needed:

```
julia> fourthfunction(4)
4

julia> fourthfunction.([4,9])
2-element Array{Int64,1}:
 4
 5

julia> fourthfunction.([4 9; 16 25])
2×2 Array{Int64,2}:
 4  5
 5  5
```

Since vectorizing in Julia is so easy, you should consider this design whenever possible.
Haskell: Deleting white space from a list of strings The question is: Write a function that will delete leading white space from a string. Example: `cutWhitespace [" x","y"," z"]` Expected answer: `["x","y","z"]` Here's what I have:

```
cutWhitespace (x:xs) = filter (\xs -> (xs /=' ')) x:xs
```

This returns `["x", " y"," z"]` when the input is `[" x"," y", " z"]`. Why is it ignoring the space in the second and third strings, and how do I fix it? We are allowed to use higher-order functions, which is why I implemented filter.
The reason the OP `cutWhitespace` function only works on the first string, is that due to operator precedence, it's actually this function: ``` cutWhitespace (x:xs) = (filter (\xs -> (xs /=' ')) x) : xs ``` Here, I've put brackets around most of the body to make it clear how it evaluates. The `filter` is only applied on `x`, and `x` is the first element of the input list; in the example input `" x"`. If you filter `" x"` as given, you get `"x"`: ``` Prelude> filter (\xs -> (xs /=' ')) " x" "x" ``` The last thing `cutWhitespace` does, then, is to take the rest of the list (`[" y", " z"]`) and cons it on `"x"`, so that it returns `["x"," y"," z"]`. In order to address the problem, you could write the function with the realisation that a list of strings is a nested list of characters, i.e. `[[Char]]`. As a word of warning, pattern-matching on `(x:xs)` without also matching on `[]` is dangerous, as it'll fail on empty lists.
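For completeness, here is a sketch of one way to write it once the precedence issue is understood: `dropWhile` removes only the leading spaces of a single string, and `map` applies that to every string in the list (which also handles the empty list without a partial pattern match):

```haskell
cutWhitespace :: [String] -> [String]
cutWhitespace = map (dropWhile (== ' '))

-- cutWhitespace [" x", "y", " z"] == ["x", "y", "z"]
```

Unlike `filter (/= ' ')`, which deletes every space in a string, `dropWhile` stops at the first non-space character, so interior spaces are preserved.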
GitLab 7.2.1 with Apache Server instead of Nginx I have installed GitLab `7.2.1` with the .deb package from [GitLab.org](https://about.gitlab.com/downloads) for Debian 7 on a virtual server where I have root access. On this virtual server I have already installed Apache, version `2.2.22`, and I don't want to use Nginx for GitLab. Now I have no idea where the public folders of GitLab are, what I have to do, or what I have to pay attention to. So my question is: How do I have to configure my vhost for Apache, or what else do I have to do, so that I can use a subdomain like "gitlab.example.com" on my Apache web server?
With two things in mind: 1. Unicorn is listening on 8080 (you can check this with `sudo netstat -pant | grep unicorn`) 2. Your document root is `/opt/gitlab/embedded/service/gitlab-rails/public` You can create a new vhost for gitlab in apache with the following configuration: ``` <VirtualHost *:80> ServerName gitlab.example.com ServerSignature Off ProxyPreserveHost On <Location /> Order deny,allow Allow from all ProxyPassReverse http://127.0.0.1:8080 ProxyPassReverse http://gitlab.example.com/ </Location> RewriteEngine on RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f RewriteRule .* http://127.0.0.1:8080%{REQUEST_URI} [P,QSA] # needed for downloading attachments DocumentRoot /opt/gitlab/embedded/service/gitlab-rails/public </VirtualHost> ```
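One assumption worth making explicit: the vhost above relies on `mod_proxy`, `mod_proxy_http` and `mod_rewrite`, which are not enabled by default on Debian. A sketch of the remaining setup steps, assuming Debian's `a2enmod`/`a2ensite` tooling and that you saved the vhost as `gitlab.example.com` under `sites-available`:

```shell
# Enable the Apache modules the vhost depends on
a2enmod proxy proxy_http rewrite

# Enable the new vhost and reload Apache
a2ensite gitlab.example.com
service apache2 reload
```

If the proxy modules are missing you will typically see "Invalid command 'ProxyPreserveHost'" when Apache parses the config, which is a quick way to confirm this is the problem.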
Why does my SPARQL query duplicate results? I have been doing some searching and learning more about SPARQL, but it is not as easy as SQL. I just want to know why my query duplicates results and how to fix it. This is my SPARQL query:

```
PREFIX OQ:<http://www.owl-ontologies.com/Ontology1364995044.owl#>
SELECT ?x ?ys ?z ?Souhaite
WHERE {
    ?y OQ:hasnameactivite ?x.
    ?y OQ:AttenduActivite ?Souhaite.
    ?y OQ:SavoirDeActivite ?z.
    ?y OQ:hasnamephase ?ys.
    ?y OQ:Activitepour ?v.
    ?ro OQ:hasnamerole ?nr.
    ?y OQ:avoirrole ?ro.
    FILTER regex (?nr ,"Concepteur").
    FILTER regex (?v,"Voiture").
}
```

This gives me these results:

![enter image description here](https://i.stack.imgur.com/O6xkv.png)

The expected result is:

![enter image description here](https://i.stack.imgur.com/bqvsD.png)
While first reading your question, I was going to respond that you can change `SELECT` in your query to `SELECT DISTINCT` (using the [`DISTINCT` modifier](http://www.w3.org/TR/2013/REC-sparql11-query-20130321/#modDistinct)) to remove duplicate results. However, looking at your result set, I don't actually see any duplicated answers. Each row appears to be unique. The values for `?xs` and `?ys` all happen to be the same, but the combinations of `?z` and `?Souhaite` make the rows distinct. Your results are essentially the product `{ xs1 } × { ys1 } × { z1, z2, z3 } × { S1, S2, S3 }`, and don't contain any duplicates. I just looked a bit more closely at the query and the results you are showing, and there are some discrepancies. For instance, your results have a variable named `?xs` but your query does not use such a variable. I will assume that `?x` is supposed to be `?xs`. Also, the variable names `?xs`, `?ys`, `?z`, and `?Souhaite` are not very descriptive at all. It's hard to talk about these when we don't know what role they play in the result. Regarding the results that you are expecting, `?xs` and `?ys` really should be bound for each row. The second row of your desired results, for instance, have a `?z` and a `?Souhaite`, but no `?xs` and `?ys`, but they probably do not make any sense without a corresponding `?xs` and `?ys`, correct? As such, I will not try to address the issue of those columns being blank in your second and third rows; they should not be blank. In your expected results, you have removed the rows that included many `?z/?Souhaite` combinations, such as `"Besoins …" "Schemas …"` and `"Volume …" "Fourchette …"`. These appeared in the results because they are in your data. If you want help cleaning your data so that these are not present, we will need to see your data, and know something about from where it came.
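If what you actually want is one row per `?xs`/`?ys` pair, a possible rewrite (a sketch; it assumes your endpoint supports SPARQL 1.1 aggregates, and it reuses the variable names from your query) collapses the multiple `?z`/`?Souhaite` values with `GROUP_CONCAT`:

```
PREFIX OQ: <http://www.owl-ontologies.com/Ontology1364995044.owl#>
SELECT ?x ?ys
       (GROUP_CONCAT(DISTINCT ?z; separator=", ") AS ?zs)
       (GROUP_CONCAT(DISTINCT ?Souhaite; separator=", ") AS ?Souhaites)
WHERE {
  ?y OQ:hasnameactivite ?x .
  ?y OQ:AttenduActivite ?Souhaite .
  ?y OQ:SavoirDeActivite ?z .
  ?y OQ:hasnamephase ?ys .
  ?y OQ:Activitepour ?v .
  ?y OQ:avoirrole ?ro .
  ?ro OQ:hasnamerole ?nr .
  FILTER regex(?nr, "Concepteur")
  FILTER regex(?v, "Voiture")
}
GROUP BY ?x ?ys
```

This gives one row per activity/phase, with the `?z` and `?Souhaite` values joined into comma-separated strings rather than expanded into a cross product.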
How to make a scrolling menu in python-curses Is there a way to make a scrolling menu in python-curses? I have a list of records that I got from a query in sqlite3 and I have to show them in a box, but there are more of them than the max number of rows. Can I make a little menu to show them all without making curses crash?
This code allows you to create a little menu in a box from a list of strings. You can also use this code getting the list of strings from a sqlite query or from a csv file. To edit the max number of rows of the menu you just have to edit `max_row`. If you press enter the program will print the selected string value and its position. ``` from __future__ import division #You don't need this in Python3 import curses from math import * screen = curses.initscr() curses.noecho() curses.cbreak() curses.start_color() screen.keypad( 1 ) curses.init_pair(1,curses.COLOR_BLACK, curses.COLOR_CYAN) highlightText = curses.color_pair( 1 ) normalText = curses.A_NORMAL screen.border( 0 ) curses.curs_set( 0 ) max_row = 10 #max number of rows box = curses.newwin( max_row + 2, 64, 1, 1 ) box.box() strings = [ "a", "b", "c", "d", "e", "f", "g", "h", "i", "l", "m", "n" ] #list of strings row_num = len( strings ) pages = int( ceil( row_num / max_row ) ) position = 1 page = 1 for i in range( 1, max_row + 1 ): if row_num == 0: box.addstr( 1, 1, "There aren't strings", highlightText ) else: if (i == position): box.addstr( i, 2, str( i ) + " - " + strings[ i - 1 ], highlightText ) else: box.addstr( i, 2, str( i ) + " - " + strings[ i - 1 ], normalText ) if i == row_num: break screen.refresh() box.refresh() x = screen.getch() while x != 27: if x == curses.KEY_DOWN: if page == 1: if position < i: position = position + 1 else: if pages > 1: page = page + 1 position = 1 + ( max_row * ( page - 1 ) ) elif page == pages: if position < row_num: position = position + 1 else: if position < max_row + ( max_row * ( page - 1 ) ): position = position + 1 else: page = page + 1 position = 1 + ( max_row * ( page - 1 ) ) if x == curses.KEY_UP: if page == 1: if position > 1: position = position - 1 else: if position > ( 1 + ( max_row * ( page - 1 ) ) ): position = position - 1 else: page = page - 1 position = max_row + ( max_row * ( page - 1 ) ) if x == curses.KEY_LEFT: if page > 1: page = page - 1 position = 1 
+ ( max_row * ( page - 1 ) ) if x == curses.KEY_RIGHT: if page < pages: page = page + 1 position = ( 1 + ( max_row * ( page - 1 ) ) ) if x == ord( "\n" ) and row_num != 0: screen.erase() screen.border( 0 ) screen.addstr( 14, 3, "YOU HAVE PRESSED '" + strings[ position - 1 ] + "' ON POSITION " + str( position ) ) box.erase() screen.border( 0 ) box.border( 0 ) for i in range( 1 + ( max_row * ( page - 1 ) ), max_row + 1 + ( max_row * ( page - 1 ) ) ): if row_num == 0: box.addstr( 1, 1, "There aren't strings", highlightText ) else: if ( i + ( max_row * ( page - 1 ) ) == position + ( max_row * ( page - 1 ) ) ): box.addstr( i - ( max_row * ( page - 1 ) ), 2, str( i ) + " - " + strings[ i - 1 ], highlightText ) else: box.addstr( i - ( max_row * ( page - 1 ) ), 2, str( i ) + " - " + strings[ i - 1 ], normalText ) if i == row_num: break screen.refresh() box.refresh() x = screen.getch() curses.endwin() exit() ``` ![Screenshot of the code](https://i.stack.imgur.com/R3M6Z.png)
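The trickiest part of the code above is the paging arithmetic. Stripped of the curses calls, the mapping between an absolute 1-based position and its page/row can be sketched in isolation (pure Python, same `max_row` idea):

```python
from math import ceil

def num_pages(row_num, max_row):
    """Pages needed to show row_num items, max_row per page."""
    return int(ceil(row_num / max_row))

def page_of(position, max_row):
    """1-based page containing the 1-based absolute position."""
    return (position - 1) // max_row + 1

def row_on_page(position, max_row):
    """1-based row within its page for the 1-based absolute position."""
    return (position - 1) % max_row + 1

# 12 items, 10 per page -> 2 pages; item 11 is row 1 of page 2
print(num_pages(12, 10), page_of(11, 10), row_on_page(11, 10))  # 2 2 1
```

Testing these three helpers on their own makes it much easier to debug the key-handling branches, which otherwise mix screen drawing with the same arithmetic.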
xrandr set offset to monitor I've read [this question](https://superuser.com/questions/485120/how-do-i-align-the-bottom-edges-of-two-monitors-with-xrandr), but it's not helping. I have two monitors, a 1920x1200 (DP-1), and a 1920x1080 (eDP-1) below *and a bit to the side* of the previous one. `xrandr`'s man page and some online reading made me believe that I could offset the bottom monitor with the option `--pos`. This, however:

```
xrandr --output DP-1 --above eDP-1 --auto --primary --pos 0x0 --output eDP-1 --pos 940x1200
```

does not seem to add the x-offset to eDP-1. Adding the `--verbose` flag confirms this; I get:

```
crtc 0: 1920x1200 59.95 +0+0 "DP-1"
crtc 1: 1920x1080 59.93 +0+1200 "eDP-1"
```

The x-offset is 0.
It seems that the `--above`, `--below`, `--right-of`, and `--left-of` options are not compatible with `--pos`. Apparently, `--pos` has lower precedence and the other ones take over (a warning would have been nice). Modifying my command and removing the `--above` portion works: ``` $ xrandr --output DP-1 --auto --primary --pos 0x0 --output eDP-1 --pos 940x1200 --verbose screen 0: 2860x2280 755x602 mm 96.20dpi crtc 0: 1920x1200 59.95 +0+0 "DP-1" crtc 1: 1920x1080 59.93 +940+1200 "eDP-1" ```
Recommended libraries/howtos for using WMI with java? Hey Folks. I have a reasonably large commercial project (so for-pay licensing is always an option) that is moving into the Windows space, and I am planning on doing a lot of polling with WMI. This is a Java 1.6 client app. Are there any good resources or books out there to get me started in accomplishing this in Java? I feel my .NET fu is pretty strong, but I often don't know where to start to accomplish the same tasks in Java. Thanks!
I don't know of any books specific to Java, but If I can offer some advice it would be to just start with WMI MSDN [documentation](http://msdn.microsoft.com/en-us/library/aa394582%28v=vs.85%29.aspx). Also, since WMI is a COM based API, I would look into using a Java to COM bridge for accessing the API. Any one of the following should work: - [Java2Com](http://www.java2com.com/) - [JACOB](http://sourceforge.net/projects/jacob-project/) - A WMI [example](http://www.dreamincode.net/code/snippet3297.htm). - [J-Integra](http://j-integra.intrinsyc.com/) - A WMI [example](http://j-integra.intrinsyc.com/support/com/doc/#other_examples/WMI_Scripting_from_Java.htm). - [J-Interop](http://www.j-interop.org/) - A WMI [example](https://stackoverflow.com/questions/310355/how-do-i-access-windows-event-viewer-log-data-from-java). - Another [Java2Com](http://apollosoft.net/products/java2com/) tool For a Java wrapper around WMI, there is the [jWMI](http://henryranch.net/software/jwmi-query-windows-wmi-from-java/) library.
Javascript Array lookup efficiency: associative vs. stored associative? I've been reading, and they're saying that associative arrays won't give you the same efficiency as arrays. An associative array can look things up in O(N) time, where an array can look things up in O(1). Here's my question: which one would be more efficient in terms of looking up values quickly and not hogging too much memory? Associative: ``` var myVars=new Array(); myVars['test1'] = a; myVars['test2'] = b; myVars['test3'] = c; ... (up to 200+ values) echo myVars['test2']; ``` Stored Associative: ``` var myVars=new Array(); var TEST1 = 1; var TEST2 = 2; var TEST3 = 3; ... (up to 200+ values) myVars[TEST1] = a; myVars[TEST2] = b; myVars[TEST3] = c; ... (up to 200+ values) echo myVars[TEST2]; ```
First, the first usage of `Array` is wrong. Although it is *possible* to do it, it does not mean you should. You are "abusing" the fact that arrays are objects too. This can lead to unexpected behaviour, e.g. although you add 200 values, `myVars.length` will be `0`. Don't use a JavaScript array as associative array. Use plain objects for that: ``` var myVars = {}; myVars['test1'] = a; myVars['test2'] = b; myVars['test3'] = c; ``` Second, in JavaScript there is no real difference between the two (objects and arrays). Arrays extend objects and add some behaviour, but they are still objects. The elements are stored as properties of the array. You can find more information in the [specification](http://ecma262-5.com/ELS5_HTML.htm#Section_15.4): > > Array objects give special treatment to a certain class of property names. A property name P (in the form of a String value) is an array index if and only if ToString(ToUint32(P)) is equal to P and ToUint32(P) is not equal to 232−1. (...) > > > So both: ``` var obj = {'answer': 42}; obj['answer']; ``` and ``` var arr = [42]; arr[0]; ``` have the same access time†, which is definitely **not** `O(n)`. †: It is better to say *should* have. Apparently this varies in different implementations. --- Apart from that, your second example is horrible to maintain. If you assign numbers to variables, why not use the numbers directly? ``` var myVars = []; myVars[0] = a; myVars[1] = b; myVars[2] = c; ``` --- **Update:** More importantly: You have to choose the right data structure for your needs and this is not only determined by the access time of a single element, but also: - Are the keys consecutive numbers or arbitrary strings/numbers? - Do you have to access all (i.e. loop over all) elements of the collection? Numerical arrays (arrays) and associative arrays (or hash tables/maps (objects in JS)) provide different solutions for different problems.
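The `length` pitfall mentioned above is easy to demonstrate in Node or a browser console:

```javascript
// An array "abused" as an associative array: string keys become
// plain object properties and do not affect length.
var myVars = [];
myVars['test1'] = 'a';
myVars['test2'] = 'b';
console.log(myVars.length); // 0

// A plain object is the right structure for string keys
var obj = { test1: 'a', test2: 'b' };
console.log(Object.keys(obj).length); // 2
console.log(obj['test2']); // b
```

Array methods like `forEach`, `map` and `JSON.stringify` all rely on `length` and the numeric indices, so they silently skip the string-keyed entries, which is exactly the kind of surprise that makes the object form preferable.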
Get parent SKU (configurable or bundle) from child SKU On the cart page I need to be able to obtain the parent SKU using a child SKU. I've tried several code snipped from both the Magento forums and similar questions here on StackOverflow without success. I'm able to determine if a product is just a simple product without a parent by using getTypeId() but after that everything I try fails to result in getting at the parent SKU. Magento Version: 1.4.2.0
Take a look at the Mage\_Catalog\_Model\_Product\_Type\_Configurable and Mage\_Bundle\_Model\_Product\_Type classes. They have useful methods for getting parent and child products. You want getParentIdsByChild(): For configurable products: ``` $parent_ids = Mage::getModel('catalog/product_type_configurable')->getParentIdsByChild($childId); ``` For bundle products: ``` $parent_ids = Mage::getModel('bundle/product_type')->getParentIdsByChild($childId); ``` These only work with ids. You'll need to convert the child SKU to an id and then the parent id back to a SKU. A simple way to get the id from the SKU is: ``` Mage::getModel('catalog/product')->getIdBySku($sku); ``` Also, you can have multiple parent ids, so you'll have to be aware of that. Here's an example: ``` $child_id = Mage::getModel('catalog/product')->getIdBySku($child_sku); $parent_ids = Mage::getModel('catalog/product_type_configurable')->getParentIdsByChild($child_id); $parent_collection = Mage::getResourceModel('catalog/product_collection') ->addFieldToFilter('entity_id', array('in'=>$parent_ids)) ->addAttributeToSelect('sku'); $parent_skus = $parent_collection->getColumnValues('sku'); ```
Open AutoCompleteBox in WPF on control focus I'm trying to open `System.Windows.Controls.AutoCompleteBox` on control focus. The event triggers but nothing happens:/ When I start entering some text, the autocomplete box works fine. What am I doing wrong? ``` AutoCompleteBox box = new AutoCompleteBox(); box.Text = textField.Value ?? ""; box.ItemsSource = textField.Proposals; box.FilterMode = AutoCompleteFilterMode.Contains; box.GotFocus += (sender, args) => { box.IsDropDownOpen = true; }; ```
I settled on a quick workaround, which is satisfactory for my program.

```
AutoCompleteBox box = new AutoCompleteBox();
box.Text = textField.Value ?? "";
if (textField.Proposals != null)
{
    box.ItemsSource = textField.Proposals;
    box.FilterMode = AutoCompleteFilterMode.None;
    box.GotFocus += (sender, args) =>
        {
            if (string.IsNullOrEmpty(box.Text))
            {
                box.Text = " "; // when empty, we put a space in the box to make the dropdown appear
            }
            // WPF's Dispatcher.BeginInvoke takes a Delegate, so wrap the lambda in an Action
            box.Dispatcher.BeginInvoke(new Action(() => box.IsDropDownOpen = true));
        };
    box.LostFocus += (sender, args) =>
        {
            box.Text = box.Text.Trim();
        };
    box.TextChanged += (sender, args) =>
        {
            if (!string.IsNullOrWhiteSpace(box.Text) && box.FilterMode != AutoCompleteFilterMode.Contains)
            {
                box.FilterMode = AutoCompleteFilterMode.Contains;
            }
            if (string.IsNullOrWhiteSpace(box.Text) && box.FilterMode != AutoCompleteFilterMode.None)
            {
                box.FilterMode = AutoCompleteFilterMode.None;
            }
        };
}
```
Reading data only when present I'm trying to read the data from the COM3 port. I'm using this code: ``` in = fscanf(s); if(in == 'A') fclose(s); break; end ``` The problem is that when no data is sent to the com3 port, the `fscanf()` will wait for a certain time interval and then give a timeout. Is there a way to read data only when it is present?
## Read only when data present You can read out the `BytesAvailable`-property of the serial object `s` to know how many bytes are in the buffer ready to be read: ``` bytes = get(s,'BytesAvailable'); % using getter-function bytes = s.BytesAvailable; % using object-oriented-addressing ``` Then you can check the value of `bytes` to match your criteria. Assuming a char is 1 byte, then you can check for this easily before reading the buffer. ``` if (bytes >= 1) in = fscanf(s); % do the handling of 'in' here end ``` ## Minimize the time to wait You can manually set the `Timeout`-property of the serial object `s` to a lower value to continue execution earlier as the default timeout. ``` set(s,'Timeout',1); % sets timeout to 1 second (default is 10 seconds) ``` Most likely you will get the following warning: > > Unsuccessful read: A timeout occurred before the Terminator was > reached.. > > > It can be suppressed by executing the following command before `fscanf`. ``` warning('off','MATLAB:serial:fscanf:unsuccessfulRead'); ``` **Here is an example:** ``` s = serial('COM3'); set(s,'Timeout',1); % sets timeout to 1 second (default is 10 seconds) fopen(s); warning('off','MATLAB:serial:fscanf:unsuccessfulRead'); in = fscanf(s); warning('on','MATLAB:serial:fscanf:unsuccessfulRead'); if(in == 'A') fclose(s); break; end ```
R - Stock market data from csv to xts I have this data in a CSV: ``` Date ALICORC1 ALT ATACOBC1 AUSTRAC1 CONTINC1 BVN DNT 40886 5.8 0.1 0.9 0.28 5.45 38.2 1.11 40889 5.8 0.1 0.88 0.28 5.37 37.7 1.04 40890 5.8 0.09 0.87 0.27 5.33 37.4 0.99 40891 5.7 0.1 0.85 0.27 5.3 37.5 0.91 ``` These are stock closing prices from the Peruvian Stock Market, and I want to convert them to xts so I can find the optimal portfolio and other stuff, but I can't find the way to convert this CSV to xts. I've checked out the answer to many of the questions here but none of them worked. Some of the errors I've got are: - Index has XXXX bad entries at data rows - Ambiguous data. Can anybody help me?
csv stands for *comma*-separated-values so the layout shown in the question is not csv. We will assume that the data really is in csv form and not in the form shown the question. If it truly is in the form shown in the question rather than csv then omit the `sep=","` argument in `read.zoo` below. Also if there are other deviations you may need to modify the arguments further. See `?read.zoo` and the [Reading Data in Zoo](http://cran.r-project.org/web/packages/zoo/index.html) vignette in the zoo package. Here we use `read.zoo` in the zoo package to read in the data as a zoo object, `z`, and then we convert it to xts, `x`. See [R News 4/1](http://cran.r-project.org/doc/Rnews/Rnews_2004-1.pdf) which specifically treats date handling of Excel dates noting that we may need to modify the code below slightly if the Mac version of Excel is being used (as described there in the reference). ``` library(xts) # this also loads zoo which has read.zoo toDate <- function(x) as.Date(x, origin = "1899-12-30") z <- read.zoo("myfile.csv", header = TRUE, sep = ",", FUN = toDate) x <- as.xts(z) ``` ## Update zoo now has read.csv.zoo so the read.zoo line could be written: ``` z <- read.csv.zoo("myfile.csv", FUN = toDate) ```
Interpretation of "stat\_summary = mean\_cl\_boot" at ggplot2? a perhaps simple question I tried to make an errorgraph like the one shown in page 532 of Field's "Discovering Statistics Using R". The code can be found here <http://www.sagepub.com/dsur/study/DSUR%20R%20Script%20Files/Chapter%2012%20DSUR%20GLM3.R> : ``` line <- ggplot(gogglesData, aes(alcohol, attractiveness, colour = gender)) line + stat_summary(fun.y = mean, geom = "point") + stat_summary(fun.y = mean, geom = "line", aes(group= gender)) + stat_summary(fun.data = mean_cl_boot, geom = "errorbar", width = 0.2) + labs(x = "Alcohol Consumption", y = "Mean Attractiveness of Date (%)", colour = "Gender") ``` I produced the same graph; my y-axis variable has only 4-points (it is a discrete scale, 1-4), now the y-axis has the points 1.5, 2, 2.5 in which the lines vary. And the question is: what do these points and graphs describe? I assume that the important part is `stat_summary(fun.data = mean_cl_boot, geom = "errorbar", width = 0.2)` are they count of observations for that group and that level(x-axis)? Are they frequencies? Or, are they proportions? I found this <http://docs.ggplot2.org/0.9.3/stat_summary.html> but it did not help me Thank you
Here is what the ggplot2 [book](https://rads.stackoverflow.com/amzn/click/com/0387981403) on page 83 says about `mean_cl_boot()`:

| Function | Hmisc original | Middle | Range |
|----------|----------------|--------|-------|
| `mean_cl_boot()` | `smean.cl.boot()` | Mean | Standard error from bootstrap |

I think it is the `smean.cl.boot()` function from the Hmisc package, renamed to `mean_cl_boot()` in ggplot2. And [here](http://www.inside-r.org/packages/cran/hmisc/docs/smean.sd) is the description of the original function from the Hmisc package:

> `smean.cl.boot` is a very fast implementation of the basic nonparametric bootstrap for obtaining confidence limits for the population mean without assuming normality.
std::unordered\_map::emplace issue with private/deleted copy constructor The following code compiles fine with gcc 4.7.2 (mingw) ``` #include <unordered_map> #include <tuple> struct test { test() =default; private: test(test const&) =delete; }; int main() { std::unordered_map<char, test> map; map.emplace( std::piecewise_construct, std::forward_as_tuple('a'), std::forward_as_tuple() ); } ``` If I change the copy constructor in `test` from `test(test const&) =delete;` to `test(test const&) =default;` however, the template error vomit seems to complain about `const test&` not being convertible to `test` (text [here](http://pastebin.com/iGTUHvZN "here")). Shouldn't either work? Or if not, shouldn't they both give an error?
If you look at the template error vomit more carefully you'll see this chunk of carrot in it:

```
test.exe.cpp:8:3: error: 'constexpr test::test(const test&)' is private
```

This is the clue to the problem. GCC 4.7.2 doesn't do access checking as part of template argument deduction (as was required by C++03). The `is_convertible` trait is implemented using SFINAE, which relies on template argument deduction, and if overload resolution chooses a private constructor, argument deduction succeeds, but then access checking fails because the chosen constructor is private.

This is a problem with GCC 4.7 because it hadn't been changed to follow the new C++11 rule in 14.8.2 [temp.deduct], which says:

> 
> -8- If a substitution results in an invalid type or expression, type deduction fails. An invalid type or expression is one that would be ill-formed if written using the substituted arguments. *[ Note:* Access checking is done as part of the substitution process. *—end note ]*
> 
> 

This is a huge change to the previous deduction rules; previously that paragraph said:

> 
> -8- If a substitution results in an invalid type or expression, type deduction fails. An invalid type or expression is one that would be ill-formed if written using the substituted arguments. Access checking is not done as part of the substitution process. Consequently, when deduction succeeds, an access error could still result when the function is instantiated.
> 
> 

The change was made quite late in the C++0x process by [DR 1170](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1170), and makes SFINAE totally awesome in C++11 :)

GCC 4.8 implements the new rules, so `is_convertible` and similar traits give the right answer for inaccessible constructors.
Java - Jackson nested arrays Given the following data ``` { "version" : 1, "data" : [ [1,2,3], [4.5,6]] } ``` I tried the following definitions and used `ObjectMapper.readValue(jsonstring, Outer.class)` ``` class Outer { public int version; public List<Inner> data } class Inner { public List<Integer> intlist; } ``` I got: > > Can not deserialize instance of Inner out of START\_ARRAY token" > > > In the Outer class, if I say ``` List<List<Integer> data; ``` then deserialization works. But in my code, the Outer and Inner classes have some business logic related methods and I want to retain the class stucture. I understand that the issue is that Jackson is unable to map the inner array to the 'Inner' class. Do I have to use the Tree Model in Jackson? Or is there someway I can still use the DataModel here ?
Jackson needs to know how to create an `Inner` instance from an array of ints. The cleanest way is to declare a corresponding constructor and mark it with [the @JsonCreator annotation.](http://www.cowtowncoder.com/blog/archives/2011/07/entry_457.html) Here is an example:

```
public class JacksonIntArray {
    static final String JSON = "{ \"version\" : 1, \"data\" : [ [1,2,3], [4.5,6]] }";

    static class Outer {
        public int version;
        public List<Inner> data;

        @Override
        public String toString() {
            return "Outer{" + "version=" + version + ", data=" + data + '}';
        }
    }

    static class Inner {
        public List<Integer> intlist;

        @JsonCreator
        public Inner(final List<Integer> intlist) {
            this.intlist = intlist;
        }

        @Override
        public String toString() {
            return "Inner{" + "intlist=" + intlist + '}';
        }
    }

    public static void main(String[] args) throws IOException {
        final ObjectMapper mapper = new ObjectMapper();
        System.out.println(mapper.readValue(JSON, Outer.class));
    }
}
```

Output:

```
Outer{version=1, data=[Inner{intlist=[1, 2, 3]}, Inner{intlist=[4, 6]}]}
```
How to set MouseOver event/trigger for border in XAML? I want the border to turn green when the mouse is over it and then to return to blue when the mouse is no longer over the border. I attempted this without any luck: ``` <Border Name="ClearButtonBorder" Grid.Column="1" CornerRadius="0,3,3,0" Background="Blue"> <Border.Triggers> <Trigger Property="Border.IsMouseOver" Value="True"> <Setter Property="Border.Background" Value="Green" /> </Trigger> <Trigger Property="Border.IsMouseOver" Value="False"> <Setter Property="Border.Background" Value="Blue" /> </Trigger> </Border.Triggers> <TextBlock HorizontalAlignment="Center" VerticalAlignment="Center" Text="X" /> </Border> ``` How can one set a trigger or events for MouseOver?
Yes, this is confusing... According to [this blog post](https://web.archive.org/web/20101001072702/http://blogs.charteris.com/blogs/patl-closed/archive/2006/11/02/Using-Property-Triggers-on-a-UIElement.aspx), it looks like this is an omission from WPF. To make it work you need to use a style: ``` <Border Name="ClearButtonBorder" Grid.Column="1" CornerRadius="0,3,3,0"> <Border.Style> <Style> <Setter Property="Border.Background" Value="Blue"/> <Style.Triggers> <Trigger Property="Border.IsMouseOver" Value="True"> <Setter Property="Border.Background" Value="Green" /> </Trigger> </Style.Triggers> </Style> </Border.Style> <TextBlock HorizontalAlignment="Center" VerticalAlignment="Center" Text="X" /> </Border> ``` I guess this problem isn't that common as most people tend to factor out this sort of thing into a style, so it can be used on multiple controls.
How can I find all Markdown links using regular expressions? In Markdown there are two ways to place a link: one is to just type the raw link in, like `http://example.com`; the other is to use the `()[]` syntax: `(Stack Overflow)[http://example.com]`. I'm trying to write a regular expression that can match both of these and, if it's the second form, to also capture the display string. So far I have this:

```
(?P<href>http://(?:www\.)?\S+.com)|(?<=\((.*)\)\[)((?P=href))(?=\])
```

![Regular expression visualization](https://www.debuggex.com/i/P8VEk6XYDfWFZ9pt.png)

[Debuggex Demo](https://www.debuggex.com/r/P8VEk6XYDfWFZ9pt)

But this doesn't seem to match either of my two test cases in Debuggex:

```
http://example.com
(Example)[http://example.com]
```

Really not sure why the first one isn't matched at the very least; is it something to do with my use of the named group? Which, if possible, I'd like to keep using because this is a simplified expression to match the link, and in the real example it is too long for me to feel comfortable duplicating it in two different places in the same pattern.

What am I doing wrong? Or is this not doable at all?

**EDIT:** I'm doing this in Python so will be using their regex engine.
The reason your pattern doesn't work is here: `(?<=\((.*)\)\[)`, since the re module of Python doesn't allow variable-length lookbehind.

You can obtain what you want in a more convenient way using [the new regex module of Python](https://pypi.python.org/pypi/regex) *(since the re module has fewer features in comparison)*.

Example:

`(?|(?<txt>(?<url>(?:ht|f)tps?://\S+(?<=\P{P})))|\(([^)]+)\)\[(\g<url>)\])`

[An online demo](http://regex101.com/r/mL3hA8/1)

pattern details:

```
(?|             # open a branch reset group
                # first case: there is only the url
    (?<txt>     # in this case, the text and the url
        (?<url> # are the same
            (?:ht|f)tps?://\S+(?<=\P{P})
        )
    )
  |             # OR
                # the (text)[url] format
    \( ([^)]+) \)    # this group will be named "txt" too
    \[ (\g<url>) \]  # this one "url"
)
```

This pattern uses the branch reset feature `(?|...|...|...)`, which allows capturing group names (or numbers) to be preserved in an alternation. In the pattern, since the `?<txt>` group is opened first in the first member of the alternation, the first group in the second member will have the same name automatically. The same goes for the `?<url>` group.

`\g<url>` is a reference to the named subpattern `?<url>` (like an alias; this way there is no need to rewrite it in the second member).

`(?<=\P{P})` checks that the last character of the url is not a punctuation character (useful to avoid the closing square bracket, for example). *(I'm not sure of the syntax; it may be `\P{Punct}`.)*
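If you'd rather stay with the standard `re` module, a sketch that avoids variable-length lookbehind entirely is to put the `(text)[url]` form first in an alternation and inspect which groups matched (the function and group names here are my own):

```python
import re

# (text)[url] form first, bare URL second; no lookbehind needed.
pattern = re.compile(
    r'\((?P<txt>[^)]+)\)\[(?P<url>https?://[^\]\s]+)\]'  # (text)[url]
    r'|(?P<bare>https?://\S+)'                           # bare url
)

def find_links(s):
    """Return (display_text, url) pairs for both link styles."""
    links = []
    for m in pattern.finditer(s):
        if m.group('bare') is not None:
            # Bare link: the URL doubles as its own display text.
            links.append((m.group('bare'), m.group('bare')))
        else:
            links.append((m.group('txt'), m.group('url')))
    return links

print(find_links('(Example)[http://example.com]'))  # [('Example', 'http://example.com')]
print(find_links('http://example.com'))             # [('http://example.com', 'http://example.com')]
```

Because the `(text)[url]` alternative is tried first at each position, a decorated link is never mistaken for a bare URL.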
How to build a two-way table summarizing a third variable in R (kable package) I am working with RMarkdown and trying to use kable package. I have a three-variable data frame: gender (factor), age\_group (factor), and test\_score(scale). I want to create two-way tables with factor-variables (gender and age\_groups) as table rows and columns, and summary statistics of test\_scores as cell contents. These summary statistics are mean, standard deviation, and percentiles (median, 1st decile, 9th decile, and 99th percentile). Is there an easy way of building those tables in a beautiful way (like with kable package), without needing to input all those values into a matrix first? I searched the kable help file, but could not find how to do it. ``` # How my data looks like: gender <- rep(c(rep(c("M", "F"), each=3)), times=3) age <- as.factor(rep(seq(10,12, 1), each=6)) score <- c(4,6,8,4,8,9,6,6,9,7,10,13,8,9,13,12,14,16) testdata <-data.frame(gender,age,score) | gender | age | score | |--------|-----|-------| | M | 10 | 4 | | M | 10 | 6 | | M | 10 | 8 | | F | 10 | 4 | | F | 10 | 8 | | F | 10 | 9 | | M | 11 | 6 | | M | 11 | 6 | | M | 11 | 9 | | F | 11 | 7 | | F | 11 | 10 | | F | 11 | 13 | | M | 12 | 8 | | M | 12 | 9 | | M | 12 | 13 | | F | 12 | 12 | | F | 12 | 14 | | F | 12 | 16 | ``` I would like a table that looks like below (but calculated directly from my dataset and with a beautiful publishing format): ``` Mean score by gender & age | | 10yo | 11yo | 12yo | Total | |--------|:----:|:----:|:----:|:-----:| | Male | 6 | 7 | 10 | 7.7 | | Female | 7 | 10 | 14 | 10.3 | | Total | 6.5 | 88.5 | 12 | 9 | ``` I tried to use kable package, which indeed provided me some beautiful tables (nicely formatted), but I am only able to produce frequency tables with it. But I cannot find any argument in it to choose for summaries of variables. If anyone has a suggestion of a better package to build a table like above specified, I would appreciate it a lot. 
``` kable(data, "latex", booktabs = T) %>% kable_styling(latex_options = "striped") ```
Absent a reproducible example, multi-way tables including a variety of statistics can be created with the `tables::tabular()` function. Here is an example from the `tables` documentation, page 38 that illustrates multiple variables in a table that prints means and standard deviations. ``` set.seed(1206) q <- data.frame(p = rep(c("A","B"),each = 10,len = 30), a = rep(c(1,2,3),each = 10), id = seq(30), b = round(runif(30,10,20)), c = round(runif(30,40,70))) library(tables) tab <- tabular((Factor(p)*Factor(a)+1) ~ (N = 1) + (b + c) * (mean + sd), data = q) tab[ tab[,1] > 0, ] ``` A Stackoverflow friendly version of the output is: ``` b c p a N mean sd mean sd A 1 10 14.40 3.026 55.70 6.447 3 10 14.50 2.877 52.80 8.954 B 2 10 14.40 2.836 56.30 7.889 All 30 14.43 2.812 54.93 7.714 > ``` One can render the table to HTML with the `html()` function. The output from the following code, when rendered in an HTML browser looks like the following illustration. ``` html(tab[ tab[,1] > 0, ]) ``` [![enter image description here](https://i.stack.imgur.com/bl92x.png)](https://i.stack.imgur.com/bl92x.png) `tables` includes capabilities to calculate other statistics, including quantiles. For details on quantile calculations, see pp. 29 - 30 of the [tables package manual](https://cran.r-project.org/web/packages/tables/tables.pdf). The package also works with `knitr`, `kable`, and `kableExtra`.
What is the meaning of 'attachment' when speaking about the Vulkan API? In relation to the Vulkan API, what does an 'attachment' mean? I see that word used in relation to render passes (i.e.: color attachments). I have vague idea of what I think they are, but would like to hear the definition from an expert. I'm doing graphics programming for the first time and decided to jump straight into the deep end by starting with Vulkan.
To understand attachments in Vulkan, you first need to understand render passes and sub-passes.

A **render pass** is a general description of the steps your drawing commands are divided into and of the resources used during rendering. We can't render anything in Vulkan without a render pass. And each render pass must have one or more steps. These steps are called **sub-passes**, and each sub-pass uses a (sub)collection of the resources defined for the render pass.

A render pass's resources may include render-targets (color, depth/stencil, resolve) and input data (resources that, potentially, were render-targets in previous sub-passes of the same render pass). These resources are called **attachments** (they don't include descriptors/textures/samplers and buffers).

Why don't we call them just render-targets or images? Because we not only render into them (input attachments) and because they are only descriptions (metadata). Images that should be used as attachments inside render passes are provided through framebuffers.

So, in general, we can call them images, because (as far as I know) only images can be used as attachments. But if we want to be fully correct: images are specific Vulkan resources that can be used for many purposes (descriptors/textures, attachments, staging resources); attachments are descriptions of resources used during rendering.
Dotnetnuke partial rendering makes my jQueryUI widget stop working I want to use the tab widget of jQuery UI in DotNetNuke 5.6.3. I registered jQuery UI in my module and it works fine, but when I use partial rendering on my page it fails to load. Here is my code:

```
$(document).ready(function () {
    rastaAdmin();
});

function rastaAdmin() {
    var tabdiv = $('#tabul');
    var tabvar = tabdiv.tabs();
}
```

[this site](http://www.shawnduggan.com/?p=102) has a method to solve my problem but it doesn't work in my script. After reading the above site I changed my code to:

```
$(document).ready(function () {
    rastaAdmin();
});

function pageLoad(sender, args) {
    rastaAdmin();
}

function rastaAdmin() {
    var tabdiv = $('#tabul');
    var tabvar = tabdiv.tabs();
}
```

This doesn't work for me. What can I do?

Thank you
I've had issues using the `pageLoad` function as well (though I don't remember now where it ended up breaking down). However, something like the other method should work fine (see the new jQuery UI setup in the core modules in DNN 6): ``` $(document).ready(function () { setupDnnSiteSettings(); Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function () { setupDnnSiteSettings(); }); }); ``` The one caveat here is that this registers the setup code to happen after returning from *any* `UpdatePanel`-initiated request, not just your specific `UpdatePanel`. Calling `tabs` again on the same element shouldn't cause any issue, but you'll want to figure out a way to differentiate if you're doing something that should only be called once.
How convert unsigned int to unsigned char array I just need to extract those bytes using bitwise & operator. 0xFF is a hexadecimal mask to extract one byte. For 2 bytes, this code is working correctly: ``` #include <stdio.h> int main() { unsigned int i = 0x7ee; unsigned char c[2]; c[0] = i & 0xFF; c[1] = (i>>8) & 0xFF; printf("c[0] = %x \n", c[0]); printf("c[1] = %x \n", c[1]); return 0; } ``` output: ``` c[0] = ee; c[1] = 7; ``` What should I do for 4 bytes to work correctly? ``` unsigned int i = 0x557e89f3; unsigned char c[4]; ``` my code: ``` unsigned char c[4]; c[0] = i & 0xFF; c[1] = (i>>8) & 0xFF; c[2] = (i>>16) & 0xFF; c[3] = (i>>24) & 0xFF; printf("c[0] = %x \n", c[0]); printf("c[1] = %x \n", c[1]); printf("c[2] = %x \n", c[2]); printf("c[3] = %x \n", c[3]); ```
``` #include <stdio.h> int main() { unsigned int i = 0x557e89f3; unsigned char c[4]; c[0] = i & 0xFF; c[1] = (i>>8) & 0xFF; c[2] = (i>>16) & 0xFF; c[3] = (i>>24) & 0xFF; printf("c[0] = %x \n", c[0]); printf("c[1] = %x \n", c[1]); printf("c[2] = %x \n", c[2]); printf("c[3] = %x \n", c[3]); return 0; } ```
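As a sketch of a more general variant (the function name is my own): a loop indexed by byte position replaces the four hard-coded shifts and works for any number of bytes up to the width of the integer:

```c
/* Extract the n low bytes of v, least significant first.
   Generalizes the unrolled shifts above to any count. */
void split_bytes(unsigned int v, unsigned char *out, int n)
{
    for (int k = 0; k < n; k++)
        out[k] = (unsigned char)((v >> (8 * k)) & 0xFFu);
}
```

Calling `split_bytes(0x557e89f3u, c, 4)` fills `c` with `f3, 89, 7e, 55`, matching the unrolled version above.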
How do I create a javascript object like this? Hi guys I'm trying to create a javascript object like this: ``` data = { values:[ { X: "Apples", Y: 120 }, { X: "Oranges", Y: 280 }, { X: "Chocolates", Y: 180 }, { X: "Bananas", Y: 340 }, { X: "Tomatoes", Y: 400 }, ]}; ``` I've tried this but it doesn't work? ``` var data = {}; for (i = 0; i < json.Answers.length; i++){ data.values[i].X = json.Answers[i].AnswerText data.values[i].Y = json.Answers[i].Responses.length } ``` The json part is fine, any ideas on constructiong the object?
First, you're trying to use a `values` property of `data` that you've never defined. Change ``` var data = {}; ``` to ``` var data = {values: []}; ``` That creates the `values` array. Also, in your loop you're trying to assign to objects that don't exist. Change the content of the loop to: ``` data.values[i] = { X: json.Answers[i].AnswerText, Y: json.Answers[i].Responses.length }; ``` That creates each object that goes in the `values` array as you build it. So: ``` var data = {values: []}; for (i = 0; i < json.Answers.length; i++){ data.values[i] = { X: json.Answers[i].AnswerText, Y: json.Answers[i].Responses.length }; } ``` --- (Side note: Don't forget to declare `i`, if it's not already declared, lest you fall prey to [*The Horror of Implicit Globals*](http://blog.niftysnippets.org/2008/03/horror-of-implicit-globals.html).)
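An equivalent, more declarative sketch builds the whole `values` array in one step with `map` (the `json` fixture here is my own, shaped like the one in the question):

```javascript
const json = {
  Answers: [
    { AnswerText: 'Apples',  Responses: [1, 2] },
    { AnswerText: 'Oranges', Responses: [1, 2, 3] }
  ]
};

// One object per answer; no index bookkeeping needed.
const data = {
  values: json.Answers.map(a => ({ X: a.AnswerText, Y: a.Responses.length }))
};

console.log(data.values); // [ { X: 'Apples', Y: 2 }, { X: 'Oranges', Y: 3 } ]
```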
Content Security Policy does not work in Internet Explorer 11 In my asp.net core application for each response i'm adding content security policy header. I understand that for IE, the header name is `X-Content-Security-Policy` and for other browsers like chrome its `Content-Security-Policy` The header value looks something like below where `nonce` is different for each response. ``` default-src 'none'; script-src 'self' 'nonce-somerandomvalue-differnt-foreach-reasone' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; font-src 'self'; object-src 'self'; connect-src 'self'; report-uri /csp/report; ``` The application is using inline javascript on few pages. So to fix inline-script violation i am adding same `nonce` value in script tag. `<script type="text/javascript" nonce="somerandomvalue-differnt-foreach-reasone">` Important thing here is the nonce value needs to match with the nonce value in header. [some details here](http://www.cspplayground.com/compliant_examples) I implemented middleware & tag-helper which adds nonce into header & script tag respectively. And i made sure that both `nonce` values does match when page renders. Then just for testing purpose on a page i added script **without nonce** ``` <script type="text/javascript"> $(function () { alert('i am hacker'); }) </script> ``` Google chrome detects this violation and blocks the above script as expected. However in IE 11 above script gets executed without any violation. Again, I made sure the header in IE is `X-Content-Security-Policy` Why IE 11 is not blocking script?
IE 11 doesn’t support use of the `nonce` attribute and `nonce-` source value at all. [The only CSP directive IE11 supports is the `sandbox` directive](https://blogs.msdn.microsoft.com/ie/2011/07/14/defense-in-depth-locking-down-mash-ups-with-html5-sandbox/). It ignores all other CSP directives. So you could just completely drop the `'nonce-somerandomvalue-differnt-foreach-reasone'` part from your `X-Content-Security-Policy` header and IE11 will still allow inline scripts. IE11 will allow inline scripts no matter what you do, unless you have your server send the response with a `X-Content-Security-Policy: sandbox` header, in which case it will disallow *all* scripts. And the only way to relax that is to send `X-Content-Security-Policy: sandbox allow-scripts`, but that will allow all scripts, including inline scripts. So I think that with IE11 there’s no way to tell it to disallow just inline scripts. You can only tell IE11 to either allow all scripts, or to allow none. --- Also note: IE11 was released in 2013, long before the `nonce` attribute was specified anywhere. I think the first CSP draft spec that the `nonce` attribute was specified in was some time in 2014. <http://caniuse.com/#feat=contentsecuritypolicy> has details on browser support for [CSP1 directives](https://www.w3.org/TR/2012/CR-CSP-20121115/): > > Partial support in Internet Explorer 10-11 refers to the browser only supporting the 'sandbox' directive by using the `X-Content-Security-Policy` header. > > > The `nonce` attribute is [a CSP2 feature](https://www.w3.org/TR/CSP2/). See <http://caniuse.com/#feat=contentsecuritypolicy2> [Support for `nonce` and other CSP2 features was added in Edge 15](https://blogs.windows.com/msedgedev/2017/01/10/edge-csp-2/). So Edge 14 and earlier have no support for `nonce` or other new-in-CSP2 features. But Edge12+ has full support for [all of CSP1](https://www.w3.org/TR/2012/CR-CSP-20121115/).
Shader optimization for retina screen on iOS I'm making a 3D iPhone application which uses many billboards. My framebuffer is twice as large on the Retina screen because I want to increase the billboards' quality on the iPhone 4. The problem is that the fragment shaders consume much more time due to the framebuffer size. Is there a way to handle the Retina screen and high-definition textures without increasing shader precision?
If you're rendering with a framebuffer at the full resolution of the Retina display, it will have four times as many pixels to raster over when compared with the same physical area of a non-Retina display. If you are fill-rate limited due to the complexity of your shaders, this will cause each frame to take that much longer to render.

First, you'll want to verify that you are indeed limited by the fragment processing part of the rendering pipeline. Run the OpenGL ES Driver instrument against your application and look at the Tiler and Renderer Utilization statistics. If the Renderer Utilization is near 100%, that indicates that you are limited by your fragment shaders and your overall ability to push pixels. However, if you see your Tiler Utilization percentage up there, that means that you are geometry limited and changes in screen resolution won't affect performance as much as reducing the complexity and size of your vertex data.

Assuming that you are limited by your fragment shaders, there are a few things you can do to significantly improve performance on the iOS GPUs.

In your case, it sounds like texture size might be an issue. The first thing I'd do is use PowerVR Texture Compression (PVRTC) textures instead of standard bitmap sources. PVRTC textures are stored in a compressed format in memory, and can be much smaller than equivalent bitmaps. This might allow for much faster access by increasing cache hits on texture reads.

Make your textures a power of two in size, and enable mipmaps. I've seen mipmaps really help out for larger textures that often get shrunken down to appear on smaller objects. This definitely sounds like the case for your application, which might need to support Retina and non-Retina devices.

Avoid dependent texture reads in your fragment shaders like the plague.
Anything that performs a calculation to determine a texture coordinate, or any texture reads that fall within a branching statement, triggers a dependent texture read, which can be more than an order of magnitude slower to perform on the iOS GPUs. During normal texture reads, the PowerVR GPUs can do a little reading ahead of texture values, but if you cause a dependent texture read you can lose that optimization.

I could go on about various optimizations (using lowp or mediump precision instead of highp where appropriate, etc.), and I've [had a little help in this area myself](https://stackoverflow.com/questions/6051237/how-can-i-improve-the-performance-of-my-custom-opengl-es-2-0-depth-texture-gener), but these seem like the first things I'd focus on.

Finally, you can also try running your shaders through [PowerVR's profiling editor](https://stackoverflow.com/a/6051739/19679), which can give you cycle time estimates for the best and worst case performance of these shaders.

The Retina display devices are not even the worst offenders when it comes to fragment shader limitations. Try getting something rendering to the full screen of the iPad 1 to be performant, because it has more pixels than the iPhone 4 / 4S, yet a far slower GPU than the iPad 2/3 or iPhone 4S. If you can get something to run well on the iPad 1, it will be good on everything else out there (even the Retina iPad).
NullPointerException in ActivityThread.handleBindApplication I have received this stack trace report that does not mention my app at all: ``` java.lang.NullPointerException at android.app.ActivityThread.handleBindApplication(ActivityThread.java:3979) at android.app.ActivityThread.access$1300(ActivityThread.java:130) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1255) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:137) at android.app.ActivityThread.main(ActivityThread.java:4745) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:511) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553) at dalvik.system.NativeStart.main(Native Method) ``` Does anybody have any idea how to prevent this exception?
As always when this happens I have to dig through the Android source to assess the problem, and I thought I would post my findings here so it would save others time. This error corresponds to this code in [**4.1.1\_r1**](http://grepcode.com/file_/repository.grepcode.com/java/ext/com.google.android/android/4.1.1_r1/android/app/ActivityThread.java/#3979) and [**4.1.2\_r1**](http://grepcode.com/file_/repository.grepcode.com/java/ext/com.google.android/android/4.1.2_r1/android/app/ActivityThread.java/#3979): ``` final ContextImpl appContext = new ContextImpl(); appContext.init(data.info, null, this); final File cacheDir = appContext.getCacheDir(); // Provide a usable directory for temporary files System.setProperty("java.io.tmpdir", cacheDir.getAbsolutePath()); // line 3979 setupGraphicsSupport(data.info, cacheDir); ``` This is happening because [`appContext.getCacheDir()`](http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.1.1_r1/android/app/ContextImpl.java#ContextImpl.getCacheDir%28%29) returns null in some instances: ``` @Override public File getCacheDir() { synchronized (mSync) { if (mCacheDir == null) { mCacheDir = new File(getDataDirFile(), "cache"); } if (!mCacheDir.exists()) { if(!mCacheDir.mkdirs()) { Log.w(TAG, "Unable to create cache directory"); return null; } FileUtils.setPermissions( mCacheDir.getPath(), FileUtils.S_IRWXU|FileUtils.S_IRWXG|FileUtils.S_IXOTH, -1, -1); } } return mCacheDir; } ``` Related: <https://groups.google.com/forum/?fromgroups=#!topic/android-developers/-694j87eXVU> --- However this *seems* to be being handled properly in version **4.2.1**: ``` final ContextImpl appContext = new ContextImpl(); appContext.init(data.info, null, this); if (!Process.isIsolated()) { final File cacheDir = appContext.getCacheDir(); if (cacheDir != null) { // Provide a usable directory for temporary files System.setProperty("java.io.tmpdir", cacheDir.getAbsolutePath()); setupGraphicsSupport(data.info, cacheDir); } else { 
Log.e(TAG, "Unable to setupGraphicsSupport due to missing cache directory"); } } ```
How to convert String[] to String and vice versa in Android I want to convert a `String` Array to `String` so that later on while retrieving I can parse `String` to `String[]` with the help of (`,`) separator. ``` String [] ------------> String //and later String ---------------> String[] ``` Can someone guide on how to do this? ``` for (int i = 0; i <= count; i++) { Log.d(TAG, "arrayData == " +arrayData[i]); // Joining: String joined = String.join(",", arrayData); //This will give error "The method join(String, String[]) is undefined for the type String" } ```
You can use a [`StringBuilder`](https://docs.oracle.com/javase/8/docs/api/java/lang/StringBuilder.html) for joining and [`String.split`](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#split-java.lang.String-) for splitting. ([`String.join`](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#join-java.lang.CharSequence-java.lang.CharSequence:A-) would be the one-liner, but as the error in your question shows, it isn't available to you — it requires Java 8, which on Android means API level 26.)

```
// Joining (assumes stringArr is non-empty):
StringBuilder buffer = new StringBuilder();
for (String each : stringArr)
    buffer.append(",").append(each);
String joined = buffer.deleteCharAt(0).toString(); // drop the leading comma

// Splitting:
String[] splitted = joined.split(",");
```
Race condition for shared variable I have a shared variable of type `double`. This variable will be accessed by two threads. One thread will ever only write the variable, whereas the other thread will ever only read the variable. Do I still get race condition here? If yes, is there an "easy" way in C++ to implement atomic access? How do I implement it efficiently if there is going to be much much more reads than writes? Do I need to mark the variable as `volatile`? EDIT: OK the "reader" thread works periodically on batches of data and the propagation of the new value is not time-sensitive. Instead of implementing complicated interlocking that I have no good way to test, I can just declare another temp variable the writer thread will write to. Then when the reader is finished with one batch, it can atomically propagate the temp value to the actual variable. Would that be race-condition-free?
Yes, there's a race condition, since `double` variables are not atomic on most processors. Use 3 doubles (possibly an array with extra padding in between, to avoid false sharing that kills performance). One is owned by the reader, one is owned by the writer, one is being handed off. To write: write to the write slot, then atomically swap (e.g. with `InterlockedExchange`) the pointer/index for the write slot with the index for the handoff slot. Since the index is pointer-sized or smaller, atomically swapping is easy as long as the variable is properly aligned. If by chance your platform offers interlocked-exchange with and without memory barriers, use the one with. To read: atomically swap the pointer/index for the read slot with the index for the handoff variable. Then read the read slot. You should actually include a version number also, since the read thread will tend to bounce between the latest and previous slot. When reading, read both before and after the swap, then use the one with the later version. Or, in C++11, just use `std::atomic`. Warning: the above only works for single writer/single reader (the particular case in this question). If you have multiple, think about a reader-writer lock or similar protecting all access to the variable.
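The bookkeeping in that scheme is easy to get wrong, so here is a minimal single-threaded sketch of the slot/version logic in Python (illustrative only — Python is not the language in question, and the plain tuple swaps below stand in for the atomic index exchanges, e.g. `InterlockedExchange` or `std::atomic<int>::exchange`, that a real implementation requires):

```python
# Single-writer/single-reader triple buffer sketch. Names are illustrative.
class TripleBuffer:
    def __init__(self):
        self.slots = [(0, None)] * 3  # (version, value) pairs
        self.write = 0    # slot owned by the writer
        self.handoff = 1  # slot in transit between the threads
        self.read = 2     # slot owned by the reader
        self.version = 0

    def publish(self, value):
        """Writer thread: fill the write slot, then swap it into handoff."""
        self.version += 1
        self.slots[self.write] = (self.version, value)
        self.write, self.handoff = self.handoff, self.write  # atomic swap in C++

    def latest(self):
        """Reader thread: take the handoff slot and read it."""
        self.read, self.handoff = self.handoff, self.read    # atomic swap in C++
        return self.slots[self.read]

buf = TripleBuffer()
buf.publish(1.5)
first = buf.latest()   # (1, 1.5)
buf.publish(2.5)
second = buf.latest()  # (2, 2.5)
# Without a new publish, the reader bounces back to the previous slot;
# the version number is what lets it detect and discard the stale value.
stale = buf.latest()   # (1, 1.5)
```

This shows why the answer recommends carrying a version number: two consecutive reads can return the latest and the previous slot, and the reader keeps whichever version is newer.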
changing loop index within loop I am relatively new to R. I am iterating over a vector in R using a for() loop. However, based on a certain condition, I need to skip some values in the vector. The first thought that comes to mind is to change the loop index within the loop. I have tried that, but somehow it's not changing. There must be some way to achieve this in R. Thanks in advance. Sami
You can change the loop index within a for loop, but it will not affect the execution of the loop; see the Details section of `?"for"`: ``` The ‘seq’ in a ‘for’ loop is evaluated at the start of the loop; changing it subsequently does not affect the loop. If ‘seq’ has length zero the body of the loop is skipped. Otherwise the variable ‘var’ is assigned in turn the value of each element of ‘seq’. You can assign to ‘var’ within the body of the loop, but this will not affect the next iteration. When the loop terminates, ‘var’ remains as a variable containing its latest value. ``` Use a while loop instead and index it manually: ``` i <- 1 while(i < 100) { # do stuff if(condition) { i <- i+3 } else { i <- i+1 } } ```
Initialise prop with value passed from Parent component but map to reactive variable/value from store later on In Parent.svelte: ``` <Child {initialName}/> ``` In Child.svelte ``` export let initialName; <p>{initialName}</p> ``` This works fine, initialName is rendered from value passed from Parent. But I'd like to map it to a reactive variable from store later on ( once the initial comp is rendered). ``` import {_name} from './store.js'; $: initialName = $_name; ``` wouldn't work as initialName would be immediately overwritten by value from store. Currently I'm doing below workaround. ``` let initialized = false; let l_name; $: if (true || $_name) { if (initialized) { l_name = $_name; } else { l_name = initialName; initialized = true; } } <p>{l_name}</p> ``` this works, but I find it bit hacky and too much boilerplate. Any other clean way to accomplish the same? One option is to set value directly to store (in Parent comp) and use it `<p>$_name</p>`. But this had a problem of retaining the old value from store when I refresh the page for a split second.
You simply setup 2 individual reactive statements: ``` $: name = $_name; // `$_name` mutation will write to `name` $: name = initialName; // `initialName` will override `name`, if it also mutates ``` OR, if you only want `initialName` to override on first mount, but not on subsequent update, use `onMount(callback)` instead of reactive statement: ``` $: name = $_name; // `$_name` mutation is always sync'ed to `name` onMount(() => { name = initialName; // but `initialName` only writes once on mount }) ``` Sample code, see for yourself. ``` <!-- Child.svelte --> <script context="module"> import { writable, derived } from 'svelte/store' export const _name = writable("zoo") </script> <script> export let initialName; $: name = $_name; $: name = initialName; </script> <p>{name}</p> ``` ``` <!-- Parent.svelte --> <script> import Child, { _name } from './Child.svelte' let initialName = "foobar" </script> <Child {initialName} /> <label>initialName: <input bind:value={initialName} /></label> <label>store value: <input bind:value={$_name} /></label> ```
Oracle SQL unsigned integer In Oracle, what is the equivalent to MySQL's UNSIGNED? I have the following query, only it doesn't work: ``` CREATE TABLE foo ( id INT UNSIGNED NOT NULL PRIMARY KEY ); ``` Edit: My purpose is to save space, because for some fields I won't be using negative values. Alternatively, it would answer my question of someone confirms that what I'm asking for is impossible in Oracle 11g, or if it's possible, but not straightforward to do (more than 3 lines of code per unsigned int). Also, It's not necessarily about *int*. I also use smallint and tinyint.
If you want to match the restrictions shown [here](https://dev.mysql.com/doc/refman/5.0/en/integer-types.html), you can use a check constraint: ``` SQL> create table foo (id number primary key, constraint foo_uint_id check (id between 0 and 4294967295)); Table created. SQL> insert into foo (id) values (-1); insert into foo (id) values (-1) * ERROR at line 1: ORA-02290: check constraint (SCOTT.FOO_UINT_ID) violated SQL> insert into foo (id) values (0); 1 row created. SQL> insert into foo (id) values (4294967295); 1 row created. SQL> insert into foo (id) values (4294967296); insert into foo (id) values (4294967296) * ERROR at line 1: ORA-02290: check constraint (SCOTT.FOO_UINT_ID) violated SQL> select * from foo; ID ---------- 0 4294967295 ```
Prevent a file from being opened I am writing a Python logger script which writes to a CSV file in the following manner: 1. Open the file 2. Append data 3. Close the file (I think this is necessary to save the changes, to be safe after every logging routine.) **PROBLEM**: The file is very much accessible through Windows Explorer (I'm using XP). If the file is opened in Excel, access to it is locked by Excel. When the script tries to append data, obviously it fails, then it aborts altogether. **OBJECTIVE:** Is there a way to lock a file using Python so that any access to it remains exclusive to the script? Or perhaps my methodology is poor in the first place?
Rather than closing and reopening the file after each access, just flush its buffer: ``` theloggingfile.flush() ``` This way, you keep it open for writing in Python, which should lock the file from other programs opening it for writing. I *think* Excel will be able to open it as read-only while it's open in Python, but I can't check that without rebooting into Windows. **EDIT:** I don't think you need the step below. `.flush()` should send it to the operating system, and if you try to look at it in another program, the OS should give it the cached version. Use `os.fsync` to force the OS to really write it to the hard drive, e.g. if you're concerned about sudden power failures. ``` os.fsync(theloggingfile.fileno()) ```
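As a rough sketch of the keep-it-open-and-flush approach (file name and fields are illustrative):

```python
import os

LOG_PATH = "sensor_log.csv"  # illustrative name

# Open once and keep the handle for the lifetime of the logger.
logfile = open(LOG_PATH, "a", newline="")

def log_row(*fields):
    """Append one CSV row, then push it out of Python's buffer."""
    logfile.write(",".join(str(f) for f in fields) + "\n")
    logfile.flush()                # hand the bytes to the OS
    os.fsync(logfile.fileno())     # optional: force them onto the disk

log_row("2012-06-01", "sensor1", 42)
log_row("2012-06-02", "sensor1", 43)

# A separate reader sees the flushed rows even though the logger
# never closed its handle:
with open(LOG_PATH) as reader:
    rows = reader.read().splitlines()
```

The `os.fsync` call is the "paranoid" step from above; plain `flush()` is enough for other processes on the same machine to see the data.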
How to make Azure Service Bus client to not participate to the ambient transaction when sending a message I understand that DTC's are not supported by Azure Service Bus and if you try to do so you get an exception like this: '*Local transactions are not supported with other resource managers/DTC.*' My problem is that I need to send a message to a service bus and the code might be executed within a transaction scope together with possible DB-operations. But the service bus doesn't need to be particularly part of this transaction; so, DTC is not really needed here. However, the service bus client seems to participate automatically to the ambient transaction which elevates the transaction to a DTC. Examples: This runs correctly (service bus code is the only one in the transaction): ``` using (var tx = new TransactionScope()) { //A simple Azure Bus operation var builder = new ServiceBusConnectionStringBuilder(connectionString); var queueClient = new QueueClient(builder); var messageBody = new Message(Encoding.UTF8.GetBytes("Hello")); messageBody.MessageId = Guid.NewGuid().ToString("N"); queueClient.SendAsync(messageBody).GetAwaiter().GetResult(); tx.Complete(); } ``` But from the moment another system participates (here an Sql connection) the "*DTC are not supported by Azure Service Bus*"-exception is thrown: ``` using (var tx = new TransactionScope()) { //A simple DB operation SqlConnection sqlConnection = new SqlConnection(dbConnectionString); sqlConnection.Open(); SqlCommand cmd = new SqlCommand("INSERT INTO [dbo].[Table_1]([Name]) values ('Hello')", sqlConnection); cmd.ExecuteNonQuery(); //A simple Azure Bus operation var builder = new ServiceBusConnectionStringBuilder(connectionString); var queueClient = new QueueClient(builder); var messageBody = new Message(Encoding.UTF8.GetBytes("Hello")); messageBody.MessageId = Guid.NewGuid().ToString("N"); queueClient.SendAsync(messageBody).GetAwaiter().GetResult(); queueClient.CloseAsync().GetAwaiter().GetResult(); 
sqlConnection.Close(); tx.Complete(); } ``` This error is understandable and already explained [here](https://stackoverflow.com/questions/26298271/problems-with-transactionscope-and-servicebus-in-production-environment). But is there a way to tell the service bus client to ignore the ambient transaction?
You will need to suppress the ambient transaction and wrap your Service Bus code with the following (note the inner scope needs its own variable name; a nested local cannot also be called `tx`):

```
public async Task Method()
{
    SqlConnection sqlConnection = new SqlConnection(dbConnectionString);
    sqlConnection.Open();

    using (var tx = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        SqlCommand cmd = new SqlCommand("INSERT INTO [dbo].[Table_1]([Name]) values ('Hello')", sqlConnection);
        await cmd.ExecuteNonQueryAsync();

        using (var suppressed = new TransactionScope(TransactionScopeOption.Suppress, TransactionScopeAsyncFlowOption.Enabled))
        {
            var builder = new ServiceBusConnectionStringBuilder(connectionString);
            var queueClient = new QueueClient(builder);

            var messageBody = new Message(Encoding.UTF8.GetBytes("Hello"));
            messageBody.MessageId = Guid.NewGuid().ToString("N");

            await queueClient.SendAsync(messageBody);
            await queueClient.CloseAsync();

            suppressed.Complete();
        }

        tx.Complete();
    }

    sqlConnection.Close();
}
```

Note that

1. You should not be recreating your queue client each time. Keep it around for performance considerations.
2. Do not use asynchronous APIs in synchronous code. Rather, convert your method to be async. SQL operations are IO-bound just as Service Bus operations are. It's better for the method to be asynchronous.
How should I cast generic type manually? I have a generic class with parameter that extends Paint. I really don't understand why I should cast it manually to T in first constructor. What am i doing wrong? Or this is the case when the compiler can't determine a safe cast itself? ``` public class XYPlot <T extends Paint> extends AbsPlot implements XYChartElement { public XYPlot(AbsSeries series){ setUp(series, (T) new Paint(DEFAULT_PAINT));//TODO } public XYPlot(AbsSeries series, T paint){ setUp(series, paint); } private void setUp(AbsSeries series, T paint){ if(series == null) throw new NullPointerException("Series is null"); setSeries(series); setPaint(paint); } ```
> > I really don't understand why I should cast it manually to T in first constructor. > > > You shouldn't - you shouldn't be creating an instance of just `Paint` in the first place. That `Paint` *won't* be an instance of `T`, unless `T` is *exactly* `Paint`. A generic class which only works properly for a single type argument shouldn't be generic in the first place. If you need an instance of `T` on construction, you'll *either* need the caller to pass one in, *or* take a `Class<T>` so that you can look through the constructors using reflection and call an appropriate one. Let's look at a simpler version of what you're doing, and hopefully you'll see why it's wrong: ``` public class Wrapper<T extends Object> { private final T value; public Wrapper() { value = (T) new Object(); } public T getValue() { return value; } } ``` Here we're using `Object` instead of `Paint` - but otherwise, it's basically similar. Now if we call it: ``` Wrapper<String> wrapper = new Wrapper<String>(); String text = wrapper.getValue(); ``` ... what would you expect that to do? Fundamentally it's not clear why you have made your class generic in the first place - but the approach you're taking is inherently flawed.
Mysql ONLY\_FULL\_GROUP\_BY mode issue - Pick single Image URL(any) for each Album I need help in a mysql query, here are the details: Three Tables, Album\_Master , Album\_Photo\_Map, Photo\_Details Album\_Master Table Structure ``` album_id(P) | album_name | user_id(F) 1 abc 1 2 xyz 1 3 pqr 1 4 e3e 2 ``` Album\_Photo\_Map Table Structure ``` auto_id(P) | album_id(F) | photo_id 1 1 123 2 1 124 3 2 123 4 2 125 5 1 127 6 3 127 ``` Photo\_Details Table Structure ``` auto_id(P) | image_id(F) | image_url 1 123 http.... 2 124 http.... 3 125 http... ``` I want to write a query to get the album name with image url for user\_id 1 The output I am expecting here is ``` album_id | album_name | image_url 1 abc http.. (either 123 or 124 or 127 - only one url) 2 xyz http.. (either 123 or 125 - only one) 3 pqr http.. (127 only) ``` The query I am using is taking too much time to execute, almost 8s. ``` SELECT A.album_id , A.album_name , D.image_url from Album_Master A , Album_Photo_Map E LEFT JOIN Photo_Details D ON (D.image_id = ( SELECT P.photo_id FROM Album_Photo_Map P , Photo_Details Q WHERE P.photo_id = Q.image_id AND P.album_id = E.album_id limit 0,1) ) WHERE A.album_id = E.album_id AND A.user_id = 1 group by A.album_id , E.album_id , D.image_url; ``` I am looking for an optimize version of the query, any help will be really appreciated. If I use `image_url` in group by it is creating multiple records, also if I remove `D.image_url` it gives me error > > D.image\_url' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql\_mode=only\_full\_group\_by > > > Note: user can assign one photo in multiple albums, result should pick only one photo per album, there might be 100 of photos in an album.
If it doesn't matter which url it should include in resultset then you can just do group by `album_id` only and you are good to go. ``` SELECT A.album_id, A.album_name, D.image_url FROM album_master A INNER JOIN album_photo_map P ON A.album_id = P.album_id INNER JOIN photo_details D ON P.photo_id = D.image_id GROUP BY A.album_id; ``` Note: If you want album info even there is no photo attached to it then use `LEFT JOIN` instead of `INNER JOIN` in query. **[Functional Dependency Issue due to ONLY\_FULL\_GROUP\_BY](https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html#:%7E:text=If%20the%20ONLY_FULL_GROUP_BY%20SQL%20mode,are%20functionally%20dependent%20on%20them.)** > > MySQL 5.7.5 and later implements detection of functional dependence. If the ONLY\_FULL\_GROUP\_BY SQL mode is enabled (which it is by default), MySQL rejects queries for which the select list, HAVING condition, or ORDER BY list refer to nonaggregated columns that are neither named in the GROUP BY clause nor are functionally dependent on them > > > > > Specific Issue: D.image\_url' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql\_mode=only\_full\_group\_by > > > ***Solution 1:*** If possible use `aggregate` function for the other columns which are not included in `group by` clause. ***Solution 2:*** If you know that, for a given data set, each `album_id` value in fact uniquely determines the `image_url` value that means `image_url` is effectively functionally dependent on `album_id`. 
So you can do ``` SELECT A.album_id, A.album_name, ANY_VALUE(D.image_url) AS image_url FROM album_master A INNER JOIN album_photo_map P ON A.album_id = P.album_id INNER JOIN photo_details D ON P.photo_id = D.image_id GROUP BY A.album_id; ``` ***Solution 3: Alternatively Disable `only_full_group_by` in mysql*** ``` SET GLOBAL sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY','')); SET SESSION sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY','')); ``` A more info check [`Solve Query Failures Regarding ONLY_FULL_GROUP_BY SQL MODE`](https://www.percona.com/blog/2019/05/13/solve-query-failures-regarding-only_full_group_by-sql-mode/)
In Python, why do lambdas in list comprehensions overwrite themselves in retrospect? Consider the following code: ``` def f (a): return 1 def g (a): return 2 [lambda a: i(a) for i in (f,g)][0](0) ``` The result is 2. Where it should clearly be 1! I would expect this code to execute as follows. Create a list. Copy the function f into the first index. Copy g into the second index. Fin. Why does this execute the way it does?!
The lambda didn't overwrite itself, it is `i` that was overwritten. This is a common mistake regarding variable scopes. Try this: ``` [lambda a, i=i: i(a) for i in (f,g)][0](0) ``` (the difference is binding the value of `i` at the time the lambda is created) See also: - [Weird behavior: Lambda inside list comprehension](https://stackoverflow.com/q/7368522/1172714) - [What is “lambda binding” in Python?](https://stackoverflow.com/a/160898/1172714) - [Python lambda's binding to local values](https://stackoverflow.com/q/10452770/1172714)
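You can see the difference between late and early binding directly, using the `f` and `g` from the question:

```python
def f(a): return 1
def g(a): return 2

# Late binding: each lambda looks up i when it is *called*, and by then
# the comprehension has finished, leaving i bound to g.
late = [lambda a: i(a) for i in (f, g)]

# Early binding: the default argument captures i's value when each
# lambda is *created*.
early = [lambda a, i=i: i(a) for i in (f, g)]

late_results = [fn(0) for fn in late]    # [2, 2] -- both call g
early_results = [fn(0) for fn in early]  # [1, 2] -- f, then g
```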
Why does a return statement break the conditional operator? Experimenting with the conditional operator in ruby, ``` def nada false ? true : nil end def err false ? true : raise('false') end ``` work as expected but ``` def reflection false ? true : return false end ``` produces a `syntax error, unexpected keyword_false, expecting keyword_end` ``` def reflection false ? true : return(false) end ``` and attempted with brackets `syntax error, unexpected tLPAREN, expecting keyword_end` yet ``` def reflection false ? true : (return false) end ``` works as expected, and the more verbose `if`...`then`...`else`...`end` ``` def falsy if false then true else return false end end ``` also works as expected. ### So what's up with the conditional (ternary) operator?
You can use it like this, by putting the entire `return` expression in parentheses: ``` def reflection false ? true : (return false) end ``` Of course, it does not make much sense used like this, but since you're experimenting (good!), the above works! The error is because of the way the Ruby grammar works I suppose - it expects a certain structure to form a valid expression. **UPDATE** Quoting some information from a [draft specification](http://www.ipa.go.jp/osc/english/ruby/ruby_draft_specification_agreement.html): > > An expression is a program construct which make up a statement (see 12 > ). A single expression can be a statement as an expression-statement > (see 12.2).12 > > > NOTE A difference between an expression and a statement is that an > expression is ordinarily used where its value is required, but a > statement is ordinarily used where its value is not necessarily > required. However, there are some exceptions. For example, a > jump-expression (see 11.5.2.4) does not have a value, and the value > of the last statement of a compound-statement can be used. > > > NB. In the above, *jump-expression* includes `return` among others.
C# not catching unhandled exceptions from unmanaged C++ dll I've got an unmanaged C++ dll which is being called from a C# app, I'm trying to get the C# app to catch all exceptions so that in the event of the dll failing due to an unmanaged exception then the user will get a half-decent error message (the C# app is a web service implementing it's own http handler). The problem I have is that not all types are being caught. So if I create the following and execute the C# app then the dll throws an error and the entire application terminates. Any ideas? This is being created in VS2005 and using .Net framework v2 C++ - Test.h ``` #ifndef INC_TEST_H #define INC_TEST_H extern "C" __declspec(dllexport) void ProcessBadCall(); #endif ``` C++ - Test.cpp ``` #include <iostream> #include <vector> using namespace std; void ProcessBadCall() { vector<int> myValues; int a = myValues[1]; cout << a << endl; } ``` C# - Program.cs ``` class Program { [DllImport("Test.dll", EntryPoint="ProcessBadCall")] static extern void ProcessBadCall(); static void Main(string[] args) { try { ProcessBadCall(); } catch (SEHException ex) { Console.WriteLine("SEH Exception: {0}", ex.Message); } catch (Exception ex) { Console.WriteLine("Exception: {0}", ex.Message); } } } ``` The dll is being compiled under the release configuration with the following compiler flags. > > /O2 /GL /D "WIN32" /D "NDEBUG" /D > "\_CRT\_SECURE\_NO\_WARNINGS" /D > "\_UNICODE" /D "UNICODE" /D "\_WINDLL" > /FD /EHa /MD /Fo"Release\" > /Fd"Release\vc80.pdb" /W4 /WX /nologo > /c /Wp64 /Zi /TP /errorReport:prompt > > >
Try catching using the `ExternalException` class: <http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.externalexception%28v=VS.100%29.aspx>

And, try compiling your unmanaged C++ DLL with asynchronous exception handling (/EHa). It looks like you're getting a Read Access Violation in your DLL, which is a type of async exception. AFAIK, only .NET v4 and above disables the delivery of async exceptions by default. Even then, you could add `legacyCorruptedStateExceptionsPolicy=true` to the app.config to enable it. Prior to that, it's automatically enabled (check that you have got it set to false in app.config).

Note that it's my personal belief that AVs in your unmanaged DLL are inherently bad (and dangerous) anyway, and it's probably the right behavior for .NET to simply terminate the app. Try throwing std::exception instead. If you insist on catching async exceptions, the best way would be to have a thunking DLL which wraps try-catch-all around calls to potentially buggy functions. Again, highly /not/ recommended (although I can see how it would be useful in debugging the misbehaving DLL).
Eclipse Equinox, how to configure auto load the bundles in plugin folder I've followed <http://www.eclipse.org/equinox/documents/quickstart-framework.php> but it seems to be old and no longer valid. There is no such bundle as the described org.eclipse.update.configurator\_3.2.100.jar. I tried with org.eclipse.equinox.simpleconfigurator\_1.0.200.v20100503, but it doesn't work. Can anyone tell me how to make Equinox auto-load the bundles inside the plugins folder?
Simplest approach would be to use [Apache Felix File Install](http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html). It works just fine with Equinox, you only need to put File Install configuration parameters into the configuration/config.ini. Note though that if you launch Equinox via launcher JAR or via binary, the working directory would be parent of configuration/ or plugins/ directory. Excerpt from our project config.ini: ``` # Start File Install itself osgi.bundles=reference\:file\:org.apache.felix.fileinstall_3.1.0.jar@1\:start # The name of the directory to watch felix.fileinstall.dir=./plugins # A regular expression to be used to filter file names # We have all bundles in plugins/ directory, this regexp # forbids monitoring bundles that are started via osgi.bundles property felix.fileinstall.filter=^(?!org.apache.felix.fileinstall|org.eclipse.osgi).* # Determines if File Install waits felix.fileinstall.poll milliseconds before doing an initial scan or not. felix.fileinstall.noInitialDelay=true # Not sure why we have this... felix.fileinstall.start.level=2 ``` Other possible solution would be to use [Eclipse P2](http://wiki.eclipse.org/Equinox/p2 "Eclipse P2"). It is much more advanced and powerful, though I find it quite difficult to use. Good thing is that if your application is agnostic to the way bundles are provisioned (and it should be this way), you can always change your mind later.
App crashes on launch with < 256 RAM iOS Devices ## The Info I recently launched an app on the AppStore. After testing on the simulator thousands of times, and actual devices hundreds of times we finally released our app. ## The Problem Reviews started popping up about app crashes when the user launches the app. We figured that the **app crashes on launch on iOS devices with less than (or equal to) 256 Mb of RAM**. The following devices are devices our app supports with less than 256: - iPod Touch 4G - iPhone 3GS - iPad 1 The app doesn't always crash. Sometimes it launches fine and runs smoothly. Other times it crashes. The time from launch (when the user taps the icon) to crash is usually two seconds, which would mean that the system isn't shutting it down. ## Findings When using Instruments to test on certain devices, I find the following: 1. There are no memory leaks (I'm using ARC), but there are memory warnings 2. Items are being allocated like crazy. There are so many allocated items, and even though I'm using ARC it's **as if ARC isn't doing what it's supposed to be doing** 3. Because of what I see as "over-allocation", the result is: **This app takes (on average) 60 MB of Real Memory** and 166 MB of Virtual. When the app launches the memory being used quickly increases until it reaches about 60 MB at which point the view has been loaded. Here is a snapshot of the Activity Monitor in Instruments: ![enter image description here](https://i.stack.imgur.com/9PPeB.png) I know that those figures are WAYY to high (although the CPU % never really gets up there). I am worried that ARC is not working properly, or the more likely case: I'm not allocating objects correctly. **What could possibly be happening?** ## The Code and Warnings In Xcode, there are only a few warnings, none of which pertain to the app launch or any files associated with the launching of the app. 
I placed breakpoints in both the App Delegate and my `viewDidLoad` method to check and see if the crash occurred there; it didn't.

## More Background Info

Also, Xcode never generates any errors or messages in the debugger. There are also no crash reports in iTunes Connect; it just says, "Too few reports have been submitted for a report to be shown." I've added crash reporting to my app, but I haven't released that version.

## A Few Questions

I started using Obj-C just as ARC arrived, so I'm new to dealing with memory, allocation, etc. (that is probably obvious), but I'd like to know a few things:

How can I use `@autoreleasepool` to reduce my memory impact?

What should I do with memory warnings, and what do I write in `didReceiveMemoryWarning` since I'm using ARC?

Would removing NSLog statements help speed things up?

And the most important question: **Why does my app take up so much memory and how can I reduce my whopping 60 MB footprint?**

I'd really appreciate any help! Thanks in advance!

**EDIT:** After testing on the iPhone 4 (A4), we noticed that the app doesn't crash when run, whereas on devices with less than 256 MB of RAM it does.
I finally solved the issue. I spent a few hours pondering why my application could possibly take up more RAM than Angry Birds or Doodle Jump. That just didn't make sense, because my app does no CALayer drawing, complex OpenGL graphics rendering, or heavy web connections.

I found this [slideshow](http://www.slideshare.net/360conferences/managing-and-optimizing-memory-usage) while searching for answers, and slide 17 lists ways to reduce memory footprint. One thing that stuck out was PNGCrush (graphics compression). My app contains a lot of custom graphics (PNG files), but I hadn't thought of them affecting my app in any way; apparently **images (when not optimized properly) severely increase an application's memory footprint**.

After installing **PNGCrush** and using it on a particularly large image (3.2 MB) and then deleting a few unused images, I ended up reducing my app's memory footprint from 60+ MB and severe lag to 35 MB and no lag. That took a whopping five minutes. I haven't finished "crushing" all my images, but when I do I'll update everyone on the final memory footprint.

For all those interested, here is a link to a [blog that explains how to install PNGCrush](http://www.mactricksandtips.com/2012/02/installing-and-using-pngcrush-on-your-mac.html) (it's rather complicated).

**UPDATE:** Instead of using the PNGCrush process (which is very helpful, although time consuming with lots of images) I now use a program called *[ImageOptim](http://imageoptim.com)* that provides a GUI for multiple scripts like PNGCrush. Here's a short description:

> ImageOptim seamlessly integrates various optimisation tools: PNGOUT, AdvPNG, **PNGCrush**, extended OptiPNG, JpegOptim, jpegrescan, jpegtran, and Gifsicle.

Here's a [link](http://imageoptim.com) to the website with a free download for OS X 10.6 - 10.8. Note, I am not a developer, publisher or advertiser of this software.
Problem with C# events

I need to implement the Pause and Resume events of an MP3 player inside the same button click event. Following is the code I have tried, and it's not working. Can anyone give me a solution?

```
private void button3_Click(object sender, EventArgs e)
{
    if (button3.Text == "Pause")
    {
        CommandString = "pause mp3file";
        mciSendString(CommandString, null, 0, 0);
        Status = true;
        button3.Text = "Resume";
    }
    if (button3.Text == "Resume")
    {
        CommandString = "resume mp3file";
        mciSendString(CommandString, null, 0, 0);
    }
}
```
You are changing the `button3.Text` property within the first `if` block, so by the time the second `if` statement is tested it is also true: both blocks run on the same button click whenever the `Text` property starts as "Pause".

Use `if`/`else if` so that only one block runs per click. You should also take account of the possibility that neither of these cases is true.

```
if (button3.Text == "Pause")
{
    CommandString = "pause mp3file";
    mciSendString(CommandString, null, 0, 0);
    Status = true;
    button3.Text = "Resume";
}
else if (button3.Text == "Resume")
{
    CommandString = "resume mp3file";
    mciSendString(CommandString, null, 0, 0);
    button3.Text = "Pause";
}
```
What is the character set if default\_charset is empty In PHP 5.6 onwards the `default_charset` string is set to `"UTF-8"` as explained e.g. [in the `php.ini` documentation](http://php.net/manual/en/ini.core.php#ini.default-charset). It says that the string is empty for earlier versions. As I am creating a Java library to communicate with PHP, I need to know which values I should expect when a string is handled as bytes internally. What happens if the `default_charset` string is empty and a (literal) string contains characters outside the range of ASCII? Should I expect the default character encoding of the platform, or the character encoding used for the source file?
### Short answer

For literal strings it is always the source file encoding. The `default_charset` value does nothing here.

### Longer answer

PHP strings are "binary safe", meaning they do not have any internal string encoding. Basically, strings in PHP are just buffers of bytes.

For literal strings, e.g. `$s = "Ä"`, this means that the string will contain whatever bytes were saved in the file between the quotes. If the file was saved in **UTF-8** this will be equivalent to `$s = "\xc3\x84"`; if the file was saved in **ISO-8859-1** (latin1) this will be equivalent to `$s = "\xc4"`. Setting the `default_charset` value does not affect the bytes stored in strings in any way.

### What does `default_charset` do then?

Some functions that have to deal with strings as *text* and are encoding aware accept `$encoding` as an argument (usually optional). This tells the function what encoding the text in a string is encoded in.

Before PHP 5.6, the default values of these optional `$encoding` arguments were either in the function definition (e.g. `htmlspecialchars()`) or configurable in various php.ini settings for each extension separately (e.g. `mbstring.internal_encoding`, `iconv.input_encoding`).

In PHP 5.6 the new php.ini setting `default_charset` was introduced. The old settings were deprecated, and all functions that accept an optional `$encoding` argument should now default to the `default_charset` value when the encoding is not specified explicitly. However, the developer is left responsible for making sure that the text in a string is actually encoded in the encoding that was specified.

---

Links:

- [Details of the String Type](http://php.net/manual/en/language.types.string.php#language.types.string.details) More details on the nature of PHP strings (does not mention `default_charset` at the time of writing).
- [New features in PHP 5.6: Default character encoding](http://php.net/manual/en/migration56.new-features.php#migration56.new-features.default-encoding) Short introduction of the new `default_charset` option in the PHP 5.6 release notes.
- [Deprecated features in PHP 5.6: iconv and mbstring encoding settings](http://php.net/manual/en/migration56.deprecated.php#migration56.deprecated.iconv-mbstring-encoding) List of deprecated php.ini options in favour of the `default_charset` option.
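The byte-level consequence described in the longer answer is easy to demonstrate outside PHP as well. Here is a small illustration (Python, used only to print the raw bytes; the point is what a consumer such as a Java library will see):

```python
# The literal "Ä" saved in a UTF-8 source file vs. an ISO-8859-1 one
# yields different byte buffers; PHP simply stores whichever bytes
# were in the file, regardless of default_charset.
utf8_bytes = "Ä".encode("utf-8")         # bytes a UTF-8 source file contains
latin1_bytes = "Ä".encode("iso-8859-1")  # bytes a latin1 source file contains

print(utf8_bytes)    # b'\xc3\x84'
print(latin1_bytes)  # b'\xc4'

# A consumer must therefore know the *source file* encoding to decode
# literal strings correctly:
assert utf8_bytes.decode("utf-8") == latin1_bytes.decode("iso-8859-1")
```

So from the Java side, the encoding to assume for literal strings is whatever the PHP source files were saved in, not any PHP runtime setting.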
Delphi XE: bogus "Never-build package must be recompiled" / "F2084 Internal Error: U10346" errors when building packages I'm trying to build a package (package A) that contains the DWS compiler. It works, but when I then have a second package (package B) that **requires** package A, containing any unit that **uses** a specific unit from DWS, I get the error: > > [DCC Fatal Error] E2225 Never-build package 'Package A' must be recompiled > > > If I change package A to recompile-as-needed, the error doesn't go away. I instead get > > [DCC Fatal Error] F2084 Internal Error: U10346 > > > at the same point. I posted a bug report at <http://code.google.com/p/dwscript/issues/detail?id=419> and it appears that the problem isn't reproducible on the author's end, which means that something's going wrong on my end. He offered some helpful advice: "I've sometimes seen this error because of a stray DCU." Searching for and deleting all DCUs that could possibly be relevant did not resolve the issue, even after restarting both the IDE and the computer. Running a trace on BDS using Process Monitor during the compilation process did not reveal anything that looked relevant. Does anyone have any ideas on how to track this down? Using Delphi XE, update 1. Cross posting to the Delphi forums. Let's see who comes up with a solution first.
I also had the problem and found a workaround. It seems the problem is caused by the compiler's inlining feature. I remember that the bug is triggered when an inlined method is too big. This bug is exclusive to Delphi XE and it seems it was never fixed.

Check the following options of the packages:

1) dwsLibRuntime package

- Delphi Compiler/Code generation/Code inlining Control = Auto or On or OFF
- Description/Build control/Rebuild as needed

2) dwsLib package

- Delphi Compiler/Code generation/Code inlining Control = OFF (this is the important thing)
- Description/Build control/Rebuild as needed

Recompile the packages and it seems to work: it installs correctly and I can compile the DwsIdeDemo.

I have also posted the solution in the bug report you made at: <http://code.google.com/p/dwscript/issues/detail?id=419>
Calculate result of two values on Property Changed MVVM I am trying to Calculate the "NetAmount" by the following simple formula in MVVM > > GrossAmount + Carriage - Discount = NetAmount > > > I am using MVVM Light Toolkit and declared properties as follows ``` public const string DiscountPropertyName = "Discount"; private double _discount; public double Discount { get { return _discount; } set { if (_discount == value) { return; } _discount = value; // Update bindings, no broadcast RaisePropertyChanged(DiscountPropertyName); } } public const string CarriagePropertyName = "Carriage"; private double _carriage; public double Carriage { get { return _carriage; } set { if (_carriage == value) { return; } _carriage = value; RaisePropertyChanged(CarriagePropertyName); } } public const string NetAmountPropertyName = "NetAmount"; private double _netAmount; public double NetAmount { get { _netAmount = Carriage + Discount; return _netAmount; } set { if (_netAmount == value) { return; } _netAmount = value; RaisePropertyChanged(NetAmountPropertyName); } } public const string GrossAmountPropertyName = "GrossAmount"; private double _grossAmount; public double GrossAmount { get { return _grossAmount; } set { if (_grossAmount == value) { return; } _grossAmount = value; RaisePropertyChanged(GrossAmountPropertyName); } } ``` I Bind these properties in XAML with text boxes like follows: ``` <TextBox Text="{Binding GrossAmount, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" DataContext="{Binding Mode=OneWay}"/> <TextBox Text="{Binding Carriage, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" DataContext="{Binding Mode=OneWay}"/> <TextBox Text="{Binding Discount, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" DataContext="{Binding Mode=OneWay}"/> ``` And I bind a Text Block with the `NetAmount` Property like follows: ``` <TextBlock Text="{Binding NetAmount}" /> ``` The ViewModel is `SalesOrderViewModel`. 
I don't know where to put the above-mentioned formula so that when any of the text boxes' values is changed, the `NetAmount` property changes as well. I am not new to C#, but I am new to MVVM and `PropertyChanged` events. I know there is some really small, silly thing I am doing wrong but can't get my head around it. Any help will be highly appreciated.
Since `NetAmount` is a calculation, it makes sense to model it as a read-only property in your view model. Accessing the property actually performs the calculation. The last trick is to call `RaisePropertyChanged(NetAmountPropertyName)` whenever any of the factors that affect `NetAmount` changes.

```
public const string GrossAmountPropertyName = "GrossAmount";
private double _grossAmount;
public double GrossAmount
{
   get { return _grossAmount; }
   set
   {
      if (_grossAmount == value)
         return;
      _grossAmount = value;
      RaisePropertyChanged(GrossAmountPropertyName);
      RaisePropertyChanged(NetAmountPropertyName);
   }
}

public double Discount{} ... //Implement same as above
public double Carriage {} ... //Implement same as above

public const string NetAmountPropertyName = "NetAmount";
public double NetAmount
{
    get
    {
        return GrossAmount + Carriage - Discount;
    }
}
```

---

Edit: If you don't want to add a call to `RaisePropertyChanged` to every property that affects `NetAmount`, then you could modify `RaisePropertyChanged` so that it also raises a `PropertyChanged` event for the `NetAmount` property. It will cause some unnecessary `PropertyChanged` events to be raised but will be more maintainable.

```
private void RaisePropertyChanged(string propertyName)
{
    var handler = PropertyChanged;
    if (handler != null)
    {
        handler(this, new PropertyChangedEventArgs(propertyName));
        handler(this, new PropertyChangedEventArgs(NetAmountPropertyName));
    }
}
```
Simplified Monte Carlo procedure in Chi square test

In R, the function `chisq.test` has an option `simulate.p.value`. The help page provides little explanation but refers to [Hope (1968)](http://www.jstor.org/stable/pdf/2984263.pdf?refreqid=excelsior%3Aa75f32ab2d89a004c081ce5799393341).

Could you please help me out by offering a for-dummies explanation of how p-values are estimated when `simulate.p.value` is set to `TRUE`? I have a basic understanding of Monte Carlo procedures, so there is no need to give a big intro on this subject.
The basic idea is to fix the margins and simulate from the set of tables with those margins. Consider the 2x2 table: ``` 2 1 0 4 ``` The margins are: ``` x x 3 x x 4 2 5 ``` The possible tables with those margins are: ``` 0 3 1 2 2 1 2 2 1 3 0 4 ``` and their probabilities under the hypothesis of no association can be computed (consider the top left cell, say, and it will be hypergeometric). Consequently we can simulate from that distribution over $2\times 2$ tables under the null and compute the distribution of any statistic we wish, and so obtain p-values in the usual fashion when sampling the null distribution\*. The case for $r\times c$ tables is an extension of this but hopefully this is enough to get the idea. [There's some discussion of ways that $r \times c$ tables might be simulated here [How to simulate effectiveness of treatment in R?](https://stats.stackexchange.com/questions/134293/how-to-simulate-effectiveness-of-treatment-in-r) and gung gives discussion of the situation if you don't have both margins fixed] Note that in R this simulation is done using the algorithm of Patefield (1981) [1], as is explained in the help. (the function `r2dtable` will simulate for you, if you wanted to check the performance of the chi-square against some other statistic under fixed margins). \* it's also possible to generate all tables (if your tables aren't too big) and get an exact permutation test. Clever algorithms exist for just looking at some statistic for tables in the tail -- and doing so quite efficiently -- which makes it feasible to do exact tests for surprisingly large tables (to me at least, considering the scale of the combinatorial explosion). [1] Patefield, W. M. (1981) "*Algorithm AS159*. An efficient method of generating r x c tables with given row and column totals" *Applied Statistics*, *30*, 91-97.
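To make the $2\times 2$ case above concrete, here is a rough sketch of the simulation in plain Python (rather than R; the variable names are made up for the illustration). Only the top-left cell needs to be drawn; the rest of the table is forced by the margins:

```python
import random

random.seed(0)

observed = [[2, 1], [0, 4]]
row = [sum(r) for r in observed]        # row margins: [3, 4]
col = [sum(c) for c in zip(*observed)]  # column margins: [2, 5]
n = sum(row)                            # grand total: 7

def chisq(table):
    """Pearson chi-square statistic against the fixed-margin expectations."""
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

obs_stat = chisq(observed)

# Drawing row[0] labels without replacement from the column "population"
# makes the top-left cell hypergeometric under the null of no association.
population = [0] * col[0] + [1] * col[1]

n_sim = 10000
count_extreme = 0
for _ in range(n_sim):
    a = random.sample(population, row[0]).count(0)  # simulated top-left cell
    table = [[a, row[0] - a],
             [col[0] - a, n - row[0] - col[0] + a]]
    if chisq(table) >= obs_stat - 1e-9:
        count_extreme += 1

p_value = count_extreme / n_sim  # tail probability is exactly 1/7 for this table
print(p_value)
```

R's implementation uses the much more efficient Patefield algorithm for general $r\times c$ tables, but the estimated p-value has the same interpretation: the fraction of simulated fixed-margin tables whose statistic is at least as extreme as the observed one.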
Java records with nullable components I really like the addition of records in Java 14, at least as a preview feature, as it helps to reduce my need to use lombok for simple, immutable "data holders". But I'm having an issue with the implementation of nullable components. I'm trying to avoid returning `null` in my codebase to indicate that a value might not be present. Therefore I currently often use something like the following pattern with lombok. ``` @Value public class MyClass { String id; @Nullable String value; Optional<String> getValue() { // overwrite the generated getter return Optional.ofNullable(this.value); } } ``` When I try the same pattern now with records, this is not allowed stating `incorrect component accessor return type`. ``` record MyRecord (String id, @Nullable String value){ Optional<String> value(){ return Optional.ofNullable(this.value); } } ``` Since I thought the usage of `Optional`s as return types is now preferred, I'm really wondering why this restriction is in place. Is my understanding of the usage wrong? How can I achieve the same, without adding another accessor with another signature which does not hide the default one? Should `Optional` not be used in this case at all?
A `record` comprises attributes that primarily define its state. The derivation of the accessors, constructors, etc. is completely based on this state of the records.

Now in your example, the state of the attribute `value` is `null`, hence the access using the default implementation ends up providing the true state. To provide customized access to this attribute you are instead looking for an overridden API that wraps the actual state and further provides an `Optional` return type.

Of course, as you mentioned one of the ways to deal with it would be to have a custom implementation included in the record definition itself

```
record MyClass(String id, String value) {

    Optional<String> getValue() {
        return Optional.ofNullable(value());
    }
}
```

Alternatively, you could decouple the read and write APIs from the data carrier in a separate class and pass on the record instance to them for custom accesses.

The most relevant quote from [JEP 384: Records](https://openjdk.java.net/jeps/384) that I found would be (formatting mine):

> A record declares its state -- the group of variables -- and commits to an API that matches that state. This means that *records give up a freedom that classes usually enjoy -- the ability to decouple a class's API from its internal representation* -- but in return, records become significantly more concise.
Lead Programmer definition clarification I have been working on PHP and MySQL based web application for more than 5 years now. I started my career as an Intern, and worked my way up through Jr Developer and Software Developer to Sr. Software Engineer (Team Lead), and that's what I am nowadays. I was looking at the link at [Wikipedia](http://en.wikipedia.org/wiki/Lead_programmer) regarding who is a lead programmer. The link states the following: > > A lead programmer is a software engineer in charge of one or more > software projects. Alternative titles include Development Lead, > Technical Lead, Senior Software Engineer, Software Design Engineer > Lead (SDE Lead), Software Manager, or Senior Applications Developer. > When primarily contributing in a high-level enterprise software design > role, the title Software Architect (or similar) is often used. All of > these titles can have different meanings depending on the context. > > > My current job responsibilities are more or less like a *Development Lead* and to some extent near *Software Architect* because I usually design the core structure of new products, and manage 2-3 project simultaneously while assisting other teams regarding the structural design of their projects. I am usually on call with clients along with project managers. I code most of the time when my team is stuck somewhere, has a heavy workload, is integrating some third party API, etc. Does what I do qualify for a **Development Lead** title in accordance with my above mentioned job descriptions?
I found myself in the same situation, as I got promoted rapidly from normal developer to development lead, because sometimes the bus factor can be very small. If the former lead leaves and you're the one who knows the application best, you become responsible for it.

That is given as an entry statement, but to be clear: titles are not worth much, as everybody may give himself a fancy title such as senior chief development lead (in a one-person project). Some companies even say "Make up the coolest name ever for your card as long as you're comfortable with it"... It depends more on the jobs you do and how well you do them.

IMHO, there are two types of developers: Executioners and Designers. Both classes may overlap. In a binary 1/0 world, executioners only do what they are told (by other developers), designers do (or delegate) what customers told them. But even executioners may delegate work to others.

The point in "lead developer" is that there is someone you can lead. Know your team, know what they can do, and delegate work to them (and jump in if somewhere the roof is on fire).

All other titles (Senior, Junior, Prehistoric) may be nice, but they don't say very much about what your abilities are. That's the 7\*1 year vs. 1\*7 year experience thing. A senior Java developer who never did anything other than work in a single framework for seven years may not be as good as someone who worked with different frameworks over a shorter period of time.
How to override a filter:none in CSS

I have an **IE7**-specific stylesheet which applies `filter:none;` I have no access to this file, so I cannot simply remove the line.

I need to somehow override it via **CSS**, to ignore the `filter:none;` being set.

I have tried using `filter:;` `filter: -;` and `filter: !important;` which should cause the filter attribute to be invalid, but the filter is still being set.

Is it possible to do this without removing the line in the IE7-specific stylesheet or using JavaScript/jQuery?

---

**Answer:** to fix my specific problem, it was **not possible to simply override the** `filter` with a `null` equivalent as I was asking. As an answer below suggests, it must be overridden by applying the desired filter directly where I wanted to override.

**IE7 Specific Stylesheet:**

```
.div.example {
    filter:none;
}
```

overridden by:

**Generic Stylesheet:**

```
.div.example {
    filter:progid:DXImageTransform.Microsoft.gradient(startColorstr=#7F000000,endColorstr=#7F000000);
}
```
***Extracted from [this answer](https://stackoverflow.com/questions/6900647/ms-filter-vs-filter-whats-the-difference#answer-6901105)***

> Microsoft introduced -ms-filter to make Internet Explorer more standards-compliant (CSS 2.1 requires vendor extensions to have a vendor prefix). As the syntax of the original filter property is not CSS 2.1 compliant, IE8+ requires the value of the -ms-filter property to be enclosed in quotation marks.

```
-ms-filter: "progid:DXImageTransform.Microsoft.gradient(enabled=false)" /* IE 8+ */;
filter: none !important; /* IE 7 and the rest of the world */
```

**As you said, you need to override an existing style, so append `!important`** (after the quoted value, not inside it):

```
-ms-filter: "progid:DXImageTransform.Microsoft.gradient(enabled=false)" !important;
```

If you were wondering, quotation marks **ARE** required for this Microsoft (-ms) vendor prefix. This use case uses MS's gradients; replace that with whatever filter property you wish to override.
Set Polygon Colors Matplotlib I have a list of 10,000+ Matplotlib Polygon objects. Each polygon belongs to 1 of 20 groups. I want to differentiate which group a polygon belongs to by mapping each unique group to a unique color. Here are some posts I've found with similar issues to mine: [Fill polygons with multiple colors in python matplotlib](https://stackoverflow.com/questions/27432367/fill-polygons-with-multiple-colors-in-python-matplotlib) [How to fill polygons with colors based on a variable in Matplotlib?](https://stackoverflow.com/questions/32141476/how-to-fill-polygons-with-colors-based-on-a-variable-in-matplotlib) [How do I set color to Rectangle in Matplotlib?](https://stackoverflow.com/questions/10550477/how-do-i-set-color-to-rectangle-in-matplotlib) These solutions simply apply a random color to each shape in the list. That is not what I am looking for. For me, each shape belonging to a particular group should have the same color. Any ideas? Sample code: ``` from matplotlib.collections import PatchCollection from matplotlib.patches import Polygon from matplotlib import pyplot as plt patches = [] colors = [] num_polys = 10000 for i in range(num_polys): patches.append(Polygon(poly_vertices[i], closed=True)) colors.append(poly_colors[i]) # This line is pointless, but I'm letting you know that # I have a particular color for each polygon fig, ax = plt.subplots() p = PatchCollection(patches, alpha=0.25) ax.add_collection(p) ax.autoscale() plt.show() ``` Note that if you run this code, it won't work because poly\_vertices and poly\_colors haven't been defined. For now, just assume that poly\_vertices is a list of polygon vertices, and poly\_colors is a list of RGB colors and each list has 10000 entries. For example: poly\_vertices[0] = [(0, 0), (1, 0), (0, 1)], colors[0] = [1, 0, 0] Thank you!
Okay, I figured out what I was trying to do. I'll post the answer for anyone who may be having similar issues. For some reason, setting the color in the polygon itself doesn't work. i.e. ``` Polygon(vertices, color=[1, 0, 0]) ``` does not work. Instead, after adding all the polygons to a collection, use ``` p = PatchCollection(patches) p.set_color([1, 0, 0]) ``` But I still want to group polygons by color. Therefore I need to add multiple PatchCollections -- one for each group type! My original list of polygons were in no particular order, so the first polygon may belong to group 5, while its neighbor belongs to group 1, etc. So, I first sorted the list by group number such that all polygons belonging to a particular group were right next to each other. I then iterated through the sorted list and appended each polygon to a temporary list. Upon reaching a new group type, I knew it was time to add all the polygons in the temporary list to their own PatchCollection. Here's some code for this logic: ``` a = [x for x in original_groups] # The original group numbers (unsorted) idx = sorted(range(len(a)), key=lambda k: a[k]) # Get indices of sorted group numbers current_group = original_groups[idx[0]] # Set the current group to the be the first sorted group number temp_patches = [] # Create a temporary patch list for i in idx: # iterate through the sorted indices if current_group == original_groups[i]: # Detect whether a change in group number has occured temp_patches.append(original_patches[i]) # Add patch to the temporary variable since group number didn't change else: p = PatchCollection(temp_patches, alpha=0.6) # Add all patches belonging to the current group number to a PatchCollection p.set_color([random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)]) # Set all shapes belonging to this group to the same random color ax.add_collection(p) # Add all shapes belonging this group to the axes object current_group = original_groups[i] # The group number has 
changed, so update the current group number temp_patches = [original_patches[i]] # Reset temp_patches, to begin collecting patches of the next group number p = PatchCollection(temp_patches, alpha=0.6) # temp_patches currently contains the patches belonging to the last group. Add them to a PatchCollection p.set_color([random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)]) ax.add_collection(p) ax.autoscale() # Default scale may not capture the appropriate region plt.show() ```
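For what it's worth, the reason `Polygon(vertices, color=[1, 0, 0])` appears to do nothing is that `PatchCollection` overrides the patches' own colors by default; passing `match_original=True` makes the collection keep them, which can avoid the per-group collections entirely. A minimal sketch (using a headless backend; the triangle coordinates are made up for the example):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon

# Two patches, each carrying its own (group) color
tri_red = Polygon([(0, 0), (1, 0), (0, 1)], closed=True, color=(1, 0, 0))
tri_blue = Polygon([(1, 1), (2, 1), (1, 2)], closed=True, color=(0, 0, 1))

fig, ax = plt.subplots()
# match_original=True tells the collection to keep each patch's own color
p = PatchCollection([tri_red, tri_blue], match_original=True)
ax.add_collection(p)
ax.autoscale()
```

With this, the 10,000 polygons could stay in a single collection, colored per patch according to their group.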
setTimeout fails to bind to "this" prototype function I wrote a code and I want to see `"Hello, world!"` each second, but I've got `undefined` and I can't find where is my mistake. ``` function Greeting(message, delay) { this.message = message; setTimeout(this.blowUp, delay * 1000); } Greeting.prototype.blowUp = function () { console.log(this.message); }; new Greeting("Hello, world!", 1); ```
Because when the `setTimeout` callback is executed, it is executed with `window` as its context (the object referred to by [`this`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this) in the function) by default.

You can pass a custom context to the callback by using [Function.bind()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind):

```
function Greeting(message, delay) {
  this.message = message;
  setInterval(this.blowUp.bind(this), delay * 1000); //use setInterval since you want to repeat the callback execution
}

Greeting.prototype.blowUp = function() {
  snippet.log(this.message);
};

new Greeting("Hello, world!", 1);
```

```
<!-- Provides the `snippet` object, see http://meta.stackexchange.com/a/242144/134069 -->
<script src="http://tjcrowder.github.io/simple-snippets-console/snippet.js"></script>
```

Note: You will have to use `setInterval` instead of `setTimeout` if you want to repeat the callback execution.
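The loss of context is easy to see even without timers: a detached method reference gets the default context, while a bound one keeps the object (a small standalone sketch, separate from the snippet above):

```javascript
function Greeting(message) {
  this.message = message;
}

Greeting.prototype.blowUp = function () {
  return this.message;
};

const g = new Greeting("Hello, world!");
const detached = g.blowUp;       // plain function reference: loses `g`
const bound = g.blowUp.bind(g);  // bound reference: keeps `g`

console.log(bound()); // "Hello, world!"
// Calling detached() would read `this.message` off the default
// context instead of `g`, which is exactly what setTimeout does
// with an unbound callback.
```

Passing `bound` (or an arrow function such as `() => g.blowUp()`) to `setTimeout` therefore behaves as expected.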
Vim - prevent saving/writing a file if it contains certain string I'd like to prevent Vim from saving a file if it contains the following text ``` :style=> ``` This could potentially be in multiple places in the file. As a bonus if it could come up with an error message like "stop putting styles inline!" that would be great too ;) Thanks! PS : I would like this prevent action to be triggered upon attempting to write the file :w
### *One* way to do this is to "bind" the save (`:w`) command to a function that checks for your pattern:

```
autocmd BufWriteCmd * call CheckWrite()
```

where your `CheckWrite()` function could look like this:

```
function! CheckWrite()
    let contents=getline(1,"$")
    if match(contents,":style=>") >= 0
        echohl WarningMsg | echo "stop putting styles inline!" | echohl None
    else
        call writefile(contents, bufname("%"))
        set nomodified
    endif
endfunction
```

Note that in this case you have to provide a "save-file" mechanism yourself (probably not such a good idea, but it works well).

---

### A safer way would be to `set readonly` when your pattern appears:

```
autocmd InsertLeave * call CheckRO()
```

and issue the warning when you try to save:

```
autocmd BufWritePre * call Warnme()
```

where `CheckRO()` and `Warnme()` would be something like:

```
function! CheckRO()
    if match(getline(1,"$"),":style=>") >= 0
        set ro
    else
        set noro
    endif
endfunction

function! Warnme()
    if match(getline(1,"$"),":style=>") >= 0
        echohl WarningMsg | echo "stop putting styles inline!" | echohl None
    endif
endfunction
```

---

### Highlight

It is also probably a good idea to highlight your pattern with a `hi`+`syntax match` command:

```
syntax match STOPPER /:style=>/
hi STOPPER ctermbg=red
```

---

Finally, have a look at [this script](http://www.vim.org/scripts/script.php?script_id=1431).
Remove typing cursor from combobox

I am using an ExtJS combobox. There is a typing cursor when focusing on the combobox. I tried setting `editable: false` when creating the combobox, but it helped only in Chrome. I also tried the `clearListeners()` function to see if it works on that cursor; it didn't help, and the cursor still appears in Firefox and IE.

The other idea is to set `disabled` on the input field in the combobox. When I did it manually, it helped. But when I wrote `Ext.get('bu-encodingcount-combobox').select('input').set({disabled:'disabled'});` it didn't help. I don't know, maybe the expression is wrong.
The reason you see a cursor is because the combobox gets the focus, so the easiest way to handle this is to move the focus onto the drop down picker whenever the combo gets the focus. Simply add this `onFocus` config to your combobox configuration: ``` // example combobox config xtype: 'combo', allowBlank: false, forceSelection: true, valueField:'id', displayField:'name', store: myStore, // add this "onFocus" config onFocus: function() { var me = this; if (!me.isExpanded) { me.expand() } me.getPicker().focus(); }, ``` Also, I would only recommend doing this if this is a `forceSelection: true` combobox. It will ruin a users ability to type anything into the field.
Why Rust's Box Clone implementation requires T to be Clone? I was writing a library for a generic `Box<T>` and on one part of the code I needed to clone the `Box<T>`, so I did something like this: ``` impl<T> OnTheFlySwap<T> where T: ?Sized + Send + Sync, Box<T>: Clone { ``` I added `Box<T>: Clone` thinking this is not a big deal, because only objects that explicitly prohibits `Box<T>: Clone` would have a problem. But no, apparently if the object does not explicitly implements `Box<T>: Clone` then I have a problem, because this is the official `impl Clone for Box`: ``` impl<T, A> Clone for Box<T, A> where T: Clone, A: Allocator + Clone, ``` It requires `T` to be `Clone.` Why? Wouldn't every `Box<T>` be `Clone`, since cloning a `Box` requires no time? If an object does not want its box to be clone then it could implement `!Clone` for it, but the default should be `impl Clone for Box<T>` for any `T`.
A `Box<T>` in Rust represents a box which *owns* a `T`. If you could clone a `Box` and get a second box pointing to the same `T`, then which one would own the `T` value? `Box` isn't just Rust's version of C++ pointers; it represents a concept of ownership that a language like C++ doesn't enforce. As an exercise, consider trying to write the function you're suggesting for `Box::clone` yourself. Its signature would be ``` fn my_clone<T>(value: &Box<T>) -> Box<T> { ... } ``` Try writing a function with that signature without dipping into `unsafe`. The compiler will let you know pretty quickly what went wrong. The only *safe* way to clone a box is to clone everything inside of the box as well, and that requires `T : Clone`.
How to add style - like margin - to react component? So, consider two simple components that I have built: ``` import {Input} from 'semantic-ui-react'; import {Select} from 'semantic-ui-react'; const CategoriesDropdown = ({categories, onCategorySelected, selectedCategory}) => { const handleChange = (e, {value})=>{ onCategorySelected(value); }; return ( <Select placeholder="Select category" search options={categories} onChange={handleChange} value={selectedCategory} /> ); }; const IdentifiersInput = ({identifiers, onIdentifiersChanged}) => { return ( <Input placeholder="Enter identifiers..." value={identifiers} onChange={onIdentifiersChanged}/> ); }; ``` Nothing fancy so far. But now, I am building another component that displays those two in a flexbox row: ``` <Box> <CategoriesDropdown categories={categories} selectedCategory={selectedCategoryId} onCategorySelected={this.selectCategory}/> <IdentifiersInput identifiers={identifiers} onIdentifiersChanged={this.changeIdentifiers}/> </Box> ``` Unfortunately they are both displayed right next to each other without any margin in between. Usually, I would just add a `margin-left` style to the second element, but because it is a React component, that doesn't work. Using `style={{marginLeft: '20px'}}` doesn't work either, because the `IdentifiersInput` component doesn't use it. I know that I can fix it by doing this: `<Input style={style} ...` inside the `IdentifiersInput` component. However, this seems to be a very tedious way of achieving this goal. Basically, I have to add this to every single component I am writing. **I clearly must be missing something here**. How am I supposed to apply such layout CSS properties to React components?
I think I understand. 1) Applying CSS directly to React Components does not work--I can confirm that. 2) Passing props down to the low level elements is tedious, confirmed but viable. Notice `hasMargin` prop: ``` <Box> <CategoriesDropdown categories={categories} selectedCategory={selectedCategoryId} onCategorySelected={this.selectCategory} /> <IdentifiersInput identifiers={identifiers} onIdentifiersChanged={this.changeIdentifiers} hasMargin /> </Box> ``` Possible input: ``` const IdentifiersInput = ({identifiers, onIdentifiersChanged, className, hasMargin }) => { return ( <Input className={className} placeholder="Enter identifiers..." value={identifiers} onChange={onIdentifiersChanged} style={hasMargin ? ({ marginLeft: '0.8rem' }) : ({})} /> ); }; ``` *NOTE*: I do not like style as much as I like adding an additional class, because classes can be adjusted via media queries: ``` const IdentifiersInput = ({identifiers, onIdentifiersChanged, className, hasMargin }) => { const inputPosition = hasMargin ? `${className} margin-sm` : className return ( <Input className={inputPosition} placeholder="Enter identifiers..." value={identifiers} onChange={onIdentifiersChanged} /> ); }; ``` If you find `inputPosition` too verbose as shown above: ``` className={hasMargin ? `${className} margin-sm` : className} ``` 3) You could accomplish it using a divider component, sacrilegious yet rapidly effective: ``` <Box> <CategoriesDropdown categories={categories} selectedCategory={selectedCategoryId} onCategorySelected={this.selectCategory} /> <div className="divider" /> <IdentifiersInput identifiers={identifiers} onIdentifiersChanged={this.changeIdentifiers} /> </Box> ``` You can use media queries and control padding at any breakpoints if desired. 4) CSS pseudo-elements or pseudo-classes - I don't see any mention of them in the answers so far.
- MDN: <https://developer.mozilla.org/en-US/docs/Web/CSS/Pseudo-classes> - CSS Tricks: <https://css-tricks.com/pseudo-class-selectors/> Usually, when you have a random collection of DOM elements, you can calculate a way using CSS to wrangle them into the correct position. The list of available pseudo-classes is in that MDN link. It honestly helps to just look at them and reason about potential combinations. My current issue is I don't know what is in `<Box />` other than it probably has a div with `display: flex;` on it. If all we have to go on is that and the div is called `<div className="Box">`, maybe some CSS like this will fix it: ``` .Box { display: flex; } .Box:first-child { margin-right: 0.8rem; } ``` This is why it is extremely important to know exactly what the surrounding elements will or can be, and exactly which CSS classes/IDs are nearby. We are basically trying to hook into something and correctly identify the left child in Box and add margin to the right of it, or target the right child and add margin to the left of it (or depending on everything, target both and split the additional margin onto both). Remember there is also `::before` and `::after`. You are welcome to get creative and find a solution that involves `position:relative` and `position: absolute` and adds no markup. I will leave my answer like that for now, because I think either you already thought about pseudo-selectors, or you will quickly find something that works :) That or the divider is actually quite viable. The fact you can use media queries alleviates you from concern of future management or scalability of the components. I would not say the same about `<div style={{}} />`.
Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique? Based on the little knowledge that I have on MCMC (Markov chain Monte Carlo) methods, I understand that sampling is a crucial part of the aforementioned technique. The most commonly used sampling methods are Hamiltonian and Metropolis. Is there a way to utilise machine learning or even deep learning to construct a more efficient MCMC sampler?
Yes. Unlike what other answers state, 'typical' machine-learning methods such as nonparametrics and (deep) neural networks *can* help create better MCMC samplers. The goal of MCMC is to draw samples from an (unnormalized) target distribution $f(x)$. The obtained samples are used to approximate $f$ and mostly allow to compute expectations of functions under $f$ (i.e., high-dimensional integrals) and, in particular, properties of $f$ (such as moments). Sampling usually requires a large number of evaluations of $f$, and possibly of its gradient, for methods such as Hamiltonian Monte Carlo (HMC). If $f$ is costly to evaluate, or the gradient is unavailable, it is sometimes possible to build a less expensive *surrogate function* that can help guide the sampling and is evaluated in place of $f$ (in a way that still preserves the properties of MCMC). For example, a seminal paper ([Rasmussen 2003](http://www.kyb.mpg.de/fileadmin/user_upload/files/publications/pdfs/pdf2080.pdf)) proposes to use [Gaussian Processes](http://www.gaussianprocess.org/) (a nonparametric function approximation) to build an approximation to $\log f$ and perform HMC on the surrogate function, with only the acceptance/rejection step of HMC based on $f$. This reduces the number of evaluation of the original $f$, and allows to perform MCMC on pdfs that would otherwise too expensive to evaluate. The idea of using surrogates to speed up MCMC has been explored a lot in the past few years, essentially by trying different ways to build the surrogate function and combine it efficiently/adaptively with different MCMC methods (and in a way that preserves the 'correctness' of MCMC sampling). Related to your question, these two very recent papers use advanced machine learning techniques -- random networks ([Zhang et al. 2015](http://arxiv.org/abs/1506.05555)) or adaptively learnt exponential kernel functions ([Strathmann et al. 2015](https://arxiv.org/abs/1506.02564)) -- to build the surrogate function. 
HMC is not the only form of MCMC that can benefit from surrogates. For example, [Nishihara et al. (2014)](http://jmlr.org/papers/volume15/nishihara14a/nishihara14a.pdf) build an approximation of the target density by fitting a multivariate Student's $t$ distribution to the multi-chain state of an ensemble sampler, and use this to perform a generalized form of [elliptical slice sampling](http://www.jmlr.org/proceedings/papers/v9/murray10a/murray10a.pdf). These are only examples. In general, a number of distinct ML techniques (mostly in the area of function approximation and density estimation) can be used to extract information that *might* improve the efficiency of MCMC samplers. Their *actual* usefulness -- e.g. measured in number of "effective independent samples per second" -- is conditional on $f$ being expensive or somewhat hard to compute; also, many of these methods may require tuning of their own or additional knowledge, restricting their applicability. **References:** 1. Rasmussen, Carl Edward. "Gaussian processes to speed up hybrid Monte Carlo for expensive Bayesian integrals." *Bayesian Statistics* 7. 2003. 2. Zhang, Cheng, Babak Shahbaba, and Hongkai Zhao. "Hamiltonian Monte Carlo Acceleration using Surrogate Functions with Random Bases." *arXiv preprint* arXiv:1506.05555 (2015). 3. Strathmann, Heiko, et al. "Gradient-free Hamiltonian Monte Carlo with efficient kernel exponential families." *Advances in Neural Information Processing Systems.* 2015. 4. Nishihara, Robert, Iain Murray, and Ryan P. Adams. "Parallel MCMC with generalized elliptical slice sampling." Journal of Machine Learning Research 15.1 (2014): 2087-2112.
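To make the surrogate idea concrete, here is a deliberately tiny, hedged Python sketch (standard library only) of two-stage "delayed-acceptance" Metropolis -- a generic pattern related to, but much simpler than, the methods cited above. The function names and the toy densities are illustrative assumptions, not any paper's actual algorithm: a cheap surrogate log-density screens proposals, and only survivors pay for an evaluation of the expensive target, in a way that still leaves the target distribution invariant.

```python
import math
import random

def delayed_acceptance_mh(log_f, log_g, x0, n_steps, step=1.0, rng=None):
    """Metropolis sampling of exp(log_f) helped by a cheap surrogate exp(log_g).

    Stage 1 screens proposals with the surrogate; stage 2 corrects for the
    surrogate's error, so the chain's stationary distribution is exp(log_f).
    """
    rng = rng or random.Random()
    x, samples, target_evals = x0, [], 1
    lf_x, lg_x = log_f(x), log_g(x)          # one expensive eval at the start
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)         # symmetric random-walk proposal
        lg_y = log_g(y)
        # Stage 1: cheap surrogate screen.
        if math.log(rng.random()) < lg_y - lg_x:
            lf_y = log_f(y)                  # expensive target evaluation
            target_evals += 1
            # Stage 2: second accept/reject fixes up the surrogate's error.
            if math.log(rng.random()) < (lf_y - lf_x) - (lg_y - lg_x):
                x, lf_x, lg_x = y, lf_y, lg_y
        samples.append(x)
    return samples, target_evals

# Toy example: "expensive" target is a standard normal; the surrogate is a
# slightly wider normal (sd 1.2), standing in for e.g. a GP approximation.
log_f = lambda x: -0.5 * x * x
log_g = lambda x: -0.5 * x * x / 1.44

samples, evals = delayed_acceptance_mh(log_f, log_g, 0.0, 20000,
                                       rng=random.Random(42))
mean = sum(samples) / len(samples)
```

Rejections at stage 1 cost only a surrogate evaluation, which is the entire point when `log_f` is expensive; the better `log_g` tracks `log_f`, the fewer wasted expensive evaluations.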
Is there any way to make jQuery BBQ Google Indexable? I'm working on a AJAX powered web site and I decided to use [Ben Alman's BBQ plugin](http://benalman.com/projects/jquery-bbq-plugin/) for hashchange event. But, with this plugin, I can't make Hashchanges for Google search (!#) Is there any other plugin for it? Thanks
There are two ways to make an ajax website SEO friendly. 1. Graceful upgradation. This involves coding your website to work without any ajax, then merely using AJAX to gracefully upgrade the website's functionality. [Example here](https://gist.github.com/854622) 2. The other way is to code your entire website in AJAX, and not care about SEO until it is too late. You can then use [Google's HashBang proposal](http://code.google.com/intl/tr-TR/web/ajaxcrawling/) to implement a [server-side hack](http://code.google.com/intl/tr-TR/web/ajaxcrawling/docs/getting-started.html) to serve the static content for your website. You can read more about the comparison of these two solutions here: <https://github.com/browserstate/history.js/wiki/Intelligent-State-Handling>
Sed get xml attribute value I have next xml file: ``` <AutoTest> <Source>EBS FX</Source> <CreateCFF>No</CreateCFF> <FoXML descriptor="pb.fx.spotfwd.trade.feed" version="2.0"> <FxSpotFwdTradeFeed> <FxSpotFwd feed_datetime="17-Dec-2014 10:20:09" cpty_sds_id="EBS" match_id="L845586141217" original_trade_id_feed="L80107141217" value_date="20141218" trade_id_external="001-002141880445/5862" match_sds_id="EBSFeedCpty" counter_ccy="USD" trade_id_feed="107" trade_type="S" feed_source_id="80" quoting_term="M" deal_ccy="GBP" rate="1.5" trade_date="20141217" modified_by="automation" cpty_side="B" counter_amt="1500000" smart_match="0" booking_status_id="10" trade_status_id="22" deal_amt="1000000" trade_direction="B"> <Notes /> </FxSpotFwd> </FxSpotFwdTradeFeed> <TestCases /> </FoXML> </AutoTest> ``` How to get value of **trade\_id\_external** attribute by using sed? I tried with this expression: `sed -n '/trade_id_external/s/.*=//p' ./file.xml` but no luck
You don't even need a pattern `/trade_id_external/` before the `s///` ``` $ sed -n 's/.*trade_id_external="\([^"]*\).*/\1/p' file 001-002141880445/5862 ``` In basic sed, `\(...\)` is a capturing group; it captures the characters you want to print at the end. Through **grep**, ``` $ grep -oP 'trade_id_external="\K[^"]*' file 001-002141880445/5862 ``` `-P` turns on Perl-regex (PCRE) mode in grep, so any PCRE regex can be used with the `-P` option enabled. `\K` in the above regex discards the previously matched characters, that is, the characters matched by the pattern before the `\K` won't be part of the output.
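As an aside, the same capture-group idea carries over to most regex engines; a purely illustrative Python sketch (the sample line is abbreviated from the question's XML) would be:

```python
import re

# Abbreviated copy of the attribute-laden element from the question.
xml_line = ('<FxSpotFwd feed_datetime="17-Dec-2014 10:20:09" '
            'trade_id_external="001-002141880445/5862" trade_type="S">')

# \(...\) in basic sed corresponds to (...) here; [^"]* stops at the
# closing quote, so the match never spills into the next attribute.
match = re.search(r'trade_id_external="([^"]*)"', xml_line)
value = match.group(1)
print(value)   # 001-002141880445/5862
```

For one-off extraction this is fine; for anything more involved, a real XML parser is the safer tool.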
VCL.graphics library to FMX I have been trying to convert a unit of mine from VCL to FMX. One of the libraries I use is `VCL.Graphics`, for `TPenStyle`: ``` wallstyle,pathstyle,solvedpathstyle:TPenStyle; ``` How can I convert this to be usable in FireMonkey?
Start with the documentation for [FMX.Types.TCanvas](http://docwiki.embarcadero.com/Libraries/en/FMX.Types.TCanvas). Note that it says: > > ... Specifying the type of brush, stroke, and font to use. > > > We recognise brush and font from VCL times, but what about stroke? Click on the Properties link at the top of the documentation link I gave above. Notice the various properties that have stroke in their name: Stroke, StrokeCap, StrokeDash, StrokeJoin, StrokeThickness. Also take a look at the methods by clicking on the Methods link of the TCanvas documentation page. There you will find, amongst many others: DrawRect and FillRect. The documentation for these methods tell you which properties influence the output. For DrawRect, the outline is determined by Stroke, StrokeDash etc. For FillRect, the fill is determined by the Fill property. So, to answer your question you need to specify a value for [StrokeDash](http://docwiki.embarcadero.com/Libraries/en/FMX.Types.TCanvas.StrokeDash). More generally, the equivalent to the VCL Pen property are the Stroke properties. And the equivalent to the VCL Brush property is Fill. I hope I've also given you some clues as to how to navigate the documentation to find out the answers to such questions for yourself.
Project file window is yellow in PyCharm I'm working with PyCharm 2019 and Django, in Windows 10 in a project that I haven't opened in a year. The Project files window is showing up as yellow, which seems new. What does this mean and how to I get the files to appear as white. [![enter image description here](https://i.stack.imgur.com/oXHNL.jpg)](https://i.stack.imgur.com/oXHNL.jpg) [![enter image description here](https://i.stack.imgur.com/EkIfZ.jpg)](https://i.stack.imgur.com/EkIfZ.jpg)
What the yellow background usually means is that the files are excluded from the project (it can also mean the files are *"read-only"*). This might happen for several reasons: the `.idea` folder might have become corrupted, in which case you need to delete it and recreate the project. If your project is installed in a venv, sometimes the source files are marked read-only (which means the source files being edited are the versions installed in the venv). So here it gets complicated, because it can depend on the specifics of the project itself. My usual steps for this problem are: 1. Close and reopen the project. 2. See if marking one of the directories as sources root changes the file color in the project tree. (Files might have been marked as excluded from the project for whatever reason.) 3. Just to help diagnose the issue, open a search, go to custom scopes, and see what scope those directories are associated with. 4. Check if file permissions are *read-only*. This can happen if you logged into PyCharm (or the OS) with a user account that doesn't have editing permissions on those files. 5. Delete the `.idea` folder (so the IDE recreates it) and create a new project with those files. (Remember to make a backup copy.)
Can't find the ASP.net ListView control I want to extend the `ListView` control: ``` using System; using System.Data; using System.Collections; using System.Web.UI; using System.Web.UI.WebControls; using System.ComponentModel; using System.Resources; using System.ComponentModel.Design; using System.Globalization; namespace My.WebControls { /// <summary> /// my ListView /// </summary> [Designer(typeof(System.Web.UI.Design.WebControls.DataBoundControlDesigner))] public class MyListView : ListView { } } ``` But the compiler does not see this control. I included the namespace `System.Web.UI.WebControls`. Do I have to check anything else? I use framework 3.5.
The `ListView` documentation [can be found on MSDN](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.listview.aspx) and you can see that the control exists in the **`System.Web.Extensions.dll`** ![enter image description here](https://i.stack.imgur.com/K7vD6.png) Re-check if you have this assembly in your references. Here's the full namespaces of a working `MyList` class ``` [System.ComponentModel.Designer(typeof(System.Web.UI.Design.WebControls.DataBoundControlDesigner))] public class MyCustomListView : System.Web.UI.WebControls.ListView { // ... } ``` Because of the [`DataBoundControlDesigner`](http://msdn.microsoft.com/en-us/library/system.web.ui.design.webcontrols.databoundcontroldesigner.aspx) I also had to reference the **`System.Design.dll`**, verify if you also have such reference.
Count from another table in Sequelize I've got to rewrite this SQL to Sequilize: ``` SELECT table1.id, table1.name, (SELECT COUNT(*) FROM table2 WHERE table2.t1_id = table1.id) AS t1_count FROM table1 WHERE .... LIMIT ..... OFFSET .....; ``` Thanks :)
To generate the exact SQL in your example, use `sequelize.literal()` to generate the subquery (note the subquery condition matches the question's schema: `table2.t1_id = table1.id`). ``` const result = await Table1.findAll({ attributes: [ 'id', 'name', [ sequelize.literal(`( SELECT COUNT(*) FROM table2 WHERE table2.t1_id = table1.id )`), 't1_count', ], ], where: { // ... }, limit: { // ... }, offset: { // ... }, }); ``` You can also do this by using a `LEFT JOIN` and then using `sequelize.fn()` to call `COUNT()` on the joined results (this variant needs a `group` so the count collapses per `table1` row). ``` const result = await Table1.findAll({ attributes: [ 'id', 'name', [ sequelize.fn('COUNT', sequelize.col('table2.id')), 't1_count', ], ], include: { model: Table2, attributes: [], where: { t1_id: { [Op.col]: sequelize.col('table1.id'), }, }, required: false, }, where: { // ... }, group: ['table1.id'], limit: { // ... }, offset: { // ... }, }); ``` ``` SELECT table1.id, table1.name, COUNT(table2.id) AS t1_count FROM table1 LEFT JOIN table2 ON table2.t1_id = table1.id WHERE .... GROUP BY table1.id LIMIT ..... OFFSET .....; ```
Refresh a django subtemplate with javascript - reload only part of the page I'm trying to refresh a subtemplate by calling a view with js. Nothing seems to happen, though the server gets hit and seems to return a response. The goal is to refresh the included part without refreshing the whole page. Minimal example: `views.py` ``` def ajax_view(request): context = {} context['data'] = 'data' return render(request, "test.html", context)// <-- reaches here, but no rendering ``` `test.html` ``` {{data}} ``` `main.html` ``` <script type="text/javascript"> function getQuery() { var request = new XMLHttpRequest(), method = 'GET', url = '/ajax/'; request.open(method, url); request.send(); } </script> {% include "test.html" %} // <-- does not update ```
With the help of a friend, I solved this. I'm going to answer my question with some detail, because I am 100% sure someone is going to need to figure out how to do this later. Happy to save you the trouble. I had to change my mental model about how this works. I still send a request to the server with javascript, and the view returns the templated HTML. However, and this is the important part, what I do is take the rendered HTML (from the response, what is returned with `return render(response, "your_template.html", context)`) and replace that part of my page with that HTML. Here's the Javascript. ``` function getQuery(item) { // Sends a request to the API endpoint to fetch data let request = new XMLHttpRequest(); let method = 'GET'; let query = // omitted let url = '/ajax/?' + query; request.open(method, url); request.onload = function () { // the response is the rendered HTML // which django sends as return render(response, "your_template.html", context) let myHTML = request.response; // This is the important part // Set that HTML to the new, templated HTML returned by the server document.getElementById('flex-container-A').innerHTML = myHTML; }; request.send(); } ``` and for completeness, here's the view: `views.py` ``` def ajax_view(request): """ Server API endpoint for fetching data """ context = {} queryset = Form.objects if request.method == 'GET': query = request.GET.get('q') # do some stuff and get some data context['data'] = data return render(request, "my_template.html", context) ``` Hope this helps you! This is a common pattern but not often mentioned so explicitly.
What's the use of specifying the type of a HashSet when working with the generic Set interface? If you instantiate a new HashSet, you usually use the Set interface to work with it afterwards. Just like ``` Set<T> set = new HashSet(); ``` So what is the use of specifying the type of the HashSet explicitly, too? For example: ``` Set<T> set = new HashSet<T>(); ``` I've seen it in quite a couple of books, but I can't think of any use at all. If you need access to the set, you'll work with the interface (which is already parameterized) anyway.
If you say this: ``` Set<T> set = new HashSet(); ``` you'll get an unchecked conversion warning. The static type of `new HashSet()` is a raw HashSet, and converting from that to a generified type is potentially unsafe. There are other circumstances where doing an assignment will cause type inference to take place. Calling a static method is the canonical example. eg: ``` Set<T> set = Collections.emptySet(); ``` Java doesn't do inference on the `new` operator, however, as this would introduce an ambiguity into the language. If you don't like the redundancy you can use a wrapper static method, and inference will take place. Google's [Guava](http://code.google.com/p/guava-libraries/) does this, so you can say: ``` Set<T> set = Sets.newhashSet(); ```
Why does changing $PATH affect child shells, but changing $foo not? ``` $ unset foo $ unset bar $ echo $foo $ echo $bar $ echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games $ foo=a $ bar=b $ export bar $ echo $foo a $ echo $bar b $ PATH= $ echo $PATH $ /bin/bash bash: lesspipe: No such file or directory bash: dircolors: No such file or directory bash: ls: No such file or directory $ echo $foo $ echo $bar b $ echo $PATH $ ``` As we can see, changing `$PATH` affects the subshell, whereas another variable needs to be `export`ed. Why?
There are really two types of variable: 1. Environment variables 2. Shell variables To make things more complicated, they both look the same, and a shell variable can be converted to an environment variable with the `export` command. The `env` command will show the current set of environment variables. ``` $ myvar=100 $ env | grep myvar $ export myvar $ env | grep myvar myvar=100 ``` Variables can also be temporarily exported for the life of a command. ``` $ env | grep anothervar $ anothervar=100 env | grep anothervar anothervar=100 $ env | grep anothervar $ ``` When the shell starts up it inherits a number of environment variables (which may be zero). Startup scripts (e.g. `.bash_profile`, `.bashrc`, files in the `/etc` directory) can also set and export variables. Finally, the shell itself may provide default values for certain environment variables (such as `PATH`) if they are missing from the environment, e.g.: ``` $ PATH=foo /bin/bash -c 'echo $PATH' foo $ PATH= /bin/bash -c 'echo $PATH' $ unset PATH $ /bin/bash -c 'echo $PATH' /usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:. ```
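The export-vs-shell-variable distinction is visible from any language that spawns children, not just bash; as a small illustrative sketch in Python (standard library only), a plain variable behaves like an unexported shell variable, while an entry in the process environment is inherited by the child:

```python
import os
import subprocess
import sys

# A plain Python variable is like an unexported shell variable:
# the child process never sees it.
shell_only = "not inherited"

# Putting a value into os.environ is the analogue of `export`:
# it becomes part of the environment passed to child processes.
os.environ["MYVAR"] = "100"

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('MYVAR'))"],
    capture_output=True, text=True,
).stdout.strip()
print(out)  # 100
```

The child here is another Python interpreter, but the rule is the same for any program: only the environment (the "exported" part) crosses the process boundary.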
Powershell: Combining Variables in Output I'm trying to put together a very simple code that outputs the server name, the last reboot date, and then the difference in hours and days. I've tried several iterations of Write-Host, but I can't seem to get the output I expect. The output should be something like this: ServerName | Reboot Date | Time Running (Day, Hours) Here is the code: ``` begin {} process { foreach($server in (gc d:\win_IP.txt)){ if(Test-Connection -ComputerName $server -Count 1 -ea 0) { $strFQDN = [System.Net.Dns]::GetHostbyAddress($server) | Select-Object HostName -ErrorAction "SilentlyContinue" $wmi = Get-WmiObject -Class Win32_OperatingSystem -Computer $server $strDate = $wmi.ConvertToDateTime($wmi.LastBootUpTime) $strDiff = [DateTime]::Now - $wmi.ConvertToDateTime($wmi.LastBootUpTime) | Format-Table Days, Hours } else { Write-Verbose "$server is offline" } } } end {} ``` If someone could explain just how combining variables works, as well as how to format the output, I'd really appreciate it. Thanks in advance.
Try this: ``` foreach ($server in (Get-Content "d:\win_IP.txt")){ if (Test-Connection -ComputerName $server -Count 1 -Quiet) { $strFQDN = [System.Net.Dns]::GetHostbyAddress($server) $wmi = Get-WmiObject -Class Win32_OperatingSystem -Computer $server $strDate = $wmi.ConvertToDateTime($wmi.LastBootUpTime) $strDiff = [DateTime]::Now - $strDate [PSCustomObject]@{ "ServerName" = $strFQDN.HostName "Reboot Date" = $strDate "Time Running" = "$($strDiff.Days) days, $($strDiff.Hours) hours" } } else { Write-Verbose "$server is offline" } } ``` What I did was store each field in a single object, then just outputted that object without formatting it. Typically not a good idea to format objects because then they are converted to strings and cannot be used for anything other than outputting to host/file.
Selenium wait for Ajax content to load - universal approach Is there a universal approach for Selenium to wait till all ajax content has loaded? (not tied to a specific website - so it works for every ajax website)
You need to wait for Javascript and jQuery to finish loading. Execute Javascript to check if `jQuery.active` is `0` and `document.readyState` is `complete`, which means the JS and jQuery load is complete. ``` public boolean waitForJSandJQueryToLoad() { WebDriverWait wait = new WebDriverWait(driver, 30); // wait for jQuery to load ExpectedCondition<Boolean> jQueryLoad = new ExpectedCondition<Boolean>() { @Override public Boolean apply(WebDriver driver) { try { return ((Long)((JavascriptExecutor)driver).executeScript("return jQuery.active") == 0); } catch (Exception e) { // no jQuery present return true; } } }; // wait for Javascript to load ExpectedCondition<Boolean> jsLoad = new ExpectedCondition<Boolean>() { @Override public Boolean apply(WebDriver driver) { return ((JavascriptExecutor)driver).executeScript("return document.readyState") .toString().equals("complete"); } }; return wait.until(jQueryLoad) && wait.until(jsLoad); } ```
What's the difference between an HTTP request and writing HTTP request text to a TCP/IP socket on port 80 Can someone explain the difference between an HTTP request (and its handling) and socket requests on port 80? As I understand it, an HTTP server listens on port 80, and when someone sends an HTTP request to this port, the server handles it. So when we place a socket listener on port 80 and then write an HTTP-formatted message to it - does that mean we sent a usual HTTP request? But as Fiddler says - it's false. What's the difference on a packet level? Or another *lower than presentation*-level difference between an HTTP request and HTTP-formed writing to a socket? Thanks.
First of all, port 80 is the default port for HTTP, it is not required. You can have HTTP servers listening on other ports as well. Regarding the difference between "regular" HTTP requests and the ones you make yourself over a socket - there is no difference. The "regular" HTTP requests you are referring to (made by a web browser for example) are also implemented over sockets, just like you would do it manually yourself. And the same goes for the server. The implementation of the HTTP server listens for incoming socket connections and parses the data that passes there just like you would. As long as you send in your socket valid HTTP protocol (according to the RFC), there should be no difference in the packet level (if the lower network stack is identical). Keep in mind that the socket layer is just the layer the HTTP data always passes over. It doesn't really matter who put the data there, it just comes out from the other side the same way it was put in. Please note that you have some degree of freedom when implementing an HTTP yourself. There are many optional fields and the order of the headers doesn't matter. So it is possible that two different HTTP implementations will be different in the packet level, but will behave basically the same. The best way to actually see what's going on in the packet level, is by using a network sniffer - like wireshark or packetyzer. A sniffer actually records the packets of the network and shows you their content. So if you record several HTTP implementations (from various browsers) and your own socket implementation, you can make the required changes to make them identical in the packet level.
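To see that "an HTTP request" and "text written to a socket" are literally the same bytes, here is a minimal, self-contained Python sketch. It is illustrative only: it binds an ephemeral local port rather than 80, and the tiny one-shot server is an assumption for the demo, not a real HTTP implementation -- it just shows that valid protocol text pushed into a raw socket *is* an HTTP request.

```python
import socket
import threading

def tiny_http_server(server_sock):
    """Accept one connection, read whatever arrives, send a fixed response."""
    conn, _ = server_sock.accept()
    with conn:
        conn.recv(4096)                    # the raw bytes the client wrote
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Length: 2\r\n"
                     b"\r\n"
                     b"ok")

srv = socket.socket()
srv.bind(("127.0.0.1", 0))                 # port 0: any free port, not just 80
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=tiny_http_server, args=(srv,), daemon=True).start()

# "Sending an HTTP request" is nothing more than writing this text:
client = socket.create_connection(("127.0.0.1", port))
client.settimeout(5)
client.sendall(f"GET / HTTP/1.1\r\nHost: 127.0.0.1:{port}\r\n"
               f"Connection: close\r\n\r\n".encode())
reply = client.recv(4096).decode()
client.close()
print(reply.splitlines()[0])   # HTTP/1.1 200 OK
```

Point a browser at the same port and it will send essentially the same text; capture either exchange with a sniffer and the packets look alike, because the socket layer neither knows nor cares who produced the HTTP text.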
Extract several columns from 3d matrix I currently have an array A which has dimensions N x t x t. I want to create a 2D matrix, N x t, of the form: ``` B = [ A[:,1,1] A[:,2,2],...,A[:,t,t]] ``` Obviously, 2 ways I could do this are writing it out in full (impractical since t is large) and loops (potentially slow). Is there a way to do this without loops. I figured it would work if I did: ``` B = A[:,[1:end],[1:end]] ``` but that just gave me back the original matrix.
Basically, you need to start thinking about how to reorganize your matrix. From ``` A = randn([5 3 3]); ``` Let's look at ``` A(:,:) ``` Basically you want columns 1, 5, 9. Thinking about it, knowing t = 3, from the present column you want to increment by t + 1. The formula is basically: ``` ((1:3)-1)*(3+1)+1 %(or (0:2)*(3+1) + 1) ``` Which plugged in A yields your solution ``` A(:,((1:3)-1)*(3+1)+1) ``` In a general format, you can do: ``` A(:,((1:t)-1)*(t+1)+1) ``` EDIT: Amro basically just put me to shame. The idea is still the same, it just becomes more readable thanks to `end` Therefore use: ``` A(:,1:t+1:end) ```
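For readers coming from other languages, the same "take column i of page i" extraction can be sketched outside MATLAB; this is a purely illustrative Python version with nested lists (the MATLAB indexing above remains the canonical answer), where `B[n][i] = A[n][i][i]` plays the role of `B(:,i) = A(:,i,i)`:

```python
# Build a small N x t x t array A as nested lists, with a formula that
# makes each entry easy to predict: A[n][i][j] = 10*n + 3*i + j.
N, t = 2, 3
A = [[[10 * n + 3 * i + j for j in range(t)] for i in range(t)]
     for n in range(N)]

# Equivalent of B = [A(:,1,1) A(:,2,2) ... A(:,t,t)]:
B = [[A[n][i][i] for i in range(t)] for n in range(N)]
print(B)   # [[0, 4, 8], [10, 14, 18]]
```

The MATLAB trick `A(:,1:t+1:end)` achieves the same thing without a loop by exploiting column-major flattening: consecutive diagonal columns are exactly `t+1` columns apart in `A(:,:)`.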
Kendo theme doesn't change for charts I want to change my kendo ui theme from default. The problem is it changes but only for controls: grid etc. but charts stay exactly the same. I'm adding this styles in the bundle. ``` bundles.Add(new StyleBundle("~/Content/kendoUi").Include( "~/Content/kendo/2016.1.112/kendo.common.min.css", "~/Content/kendo/2016.1.112/kendo.mobile.all.min.css", "~/Content/kendo/2016.1.112/kendo.metro.min.css" )); ``` Am I missing something ?
I was struggling with the same thing today. For some reason, the theme of a chart must be set via the widget configuration. From the [Documentation of the Kendo Client Library](http://docs.telerik.com/kendo-ui/controls/charts/appearance): > > The Kendo UI Chart widgets come with a set of predefined themes. Use > the theme option to select a theme, as demonstrated in the example > below. The theme name is case insensitive. > > > ``` $("#chart").kendoChart({ theme: "blueOpal", //... }); ``` There is no documentation for the server wrappers. However, it will work this way: ``` @(Html.Kendo().Chart().Theme("blueOpal")) ``` The reason for this seems to be [explained here](http://docs.telerik.com/kendo-ui/styles-and-layout/appearance-styling): > > Kendo UI Gauges, Charts, Barcodes, Diagrams, and Maps use a mix of > browser technologies to attain the required precision and > responsiveness. **Visualization is rendered as vector graphics with > computed layout**. In contrast, interactive features are built using > traditional HTML elements. **As a result, the appearance settings of > these widgets are split between declarative options and traditional > CSS**. > > > If you want to do it globally, you need to [override kendo](http://blog.falafel.com/custom-skin-kendo-ui-dataviz-components/): ``` var themable = ["Chart", "TreeMap", "Diagram", "StockChart", "Sparkline", "RadialGauge", "LinearGauge"]; if (kendo.dataviz) { for (var i = 0; i < themable.length; i++) { var widget = kendo.dataviz.ui[themable[i]]; if (widget) { widget.fn.options.theme = "blueOpal"; } } } ```
JTable - fire column data changed event I would like to fire an event denoting that the values in a given column have changed, i.e. somewhere in column X, some values changed. How can I do this, since all the events concern either a cell, a row, or the whole data table?
Looking in `AbstractTableModel`, it looks like constructing a `TableModelEvent` explicitly might do the trick: ``` model.fireTableChanged(new TableModelEvent(model, 0, lastRow, columnIndex, TableModelEvent.UPDATE)); ``` See also the javadoc for `TableModelEvent`: ``` /** * Depending on the parameters used in the constructors, the TableModelevent * can be used to specify the following types of changes: <p> * * <pre> * TableModelEvent(source); // The data, ie. all rows changed * TableModelEvent(source, HEADER_ROW); // Structure change, reallocate TableColumns * TableModelEvent(source, 1); // Row 1 changed * TableModelEvent(source, 3, 6); // Rows 3 to 6 inclusive changed * TableModelEvent(source, 2, 2, 6); // Cell at (2, 6) changed * TableModelEvent(source, 3, 6, ALL_COLUMNS, INSERT); // Rows (3, 6) were inserted * TableModelEvent(source, 3, 6, ALL_COLUMNS, DELETE); // Rows (3, 6) were deleted * </pre> * * It is possible to use other combinations of the parameters, not all of them * are meaningful. (...) ```
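To make the mechanics concrete, here is a small runnable sketch (the model and its data are invented for illustration; any `AbstractTableModel` subclass, such as the stock `DefaultTableModel` used here, exposes the public `fireTableChanged` method):

```java
import javax.swing.event.TableModelEvent;
import javax.swing.table.DefaultTableModel;

public class ColumnChangedDemo {
    public static void main(String[] args) {
        // Hypothetical two-column model; no GUI needed to fire model events.
        DefaultTableModel model = new DefaultTableModel(
                new Object[][] { { "a", 1 }, { "b", 2 }, { "c", 3 } },
                new Object[] { "name", "value" });

        model.addTableModelListener(e ->
                System.out.println("rows " + e.getFirstRow() + ".." + e.getLastRow()
                        + ", column " + e.getColumn()));

        // Tell listeners that all values in column 1 were updated.
        int lastRow = model.getRowCount() - 1;
        model.fireTableChanged(new TableModelEvent(
                model, 0, lastRow, 1, TableModelEvent.UPDATE));
        // prints: rows 0..2, column 1
    }
}
```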
How to change the startup activity in android? I have two activities in my application, namely `login` and `calendar`. Currently my startup activity is `calendar`. I want `login` to run as the first activity, not `calendar`.
The startup activity [launcher activity] is declared in the project's AndroidManifest.xml file. Look for the activity tag in the manifest that looks like this:

```
<activity android:name=".Main"
          android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
```

Look at the attribute android:name. Main is the class which is launched when the app starts. Currently your calendar activity's name should be there. Change that to the fully qualified class name of the activity that you want to launch. That should do it. You may also want to do the hello world application in the [tutorials](http://developer.android.com/training/index.html) and go through the docs a little to see how Android applications work.
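For this concrete case, the manifest would change along these lines (the activity class names below are placeholders — substitute the actual class names of your `login` and `calendar` activities):

```xml
<!-- The MAIN/LAUNCHER intent-filter moves to the login activity -->
<activity android:name=".LoginActivity"
          android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<!-- The calendar activity keeps its declaration, minus the intent-filter -->
<activity android:name=".CalendarActivity" />
```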
Making whole card clickable in Reactstrap I'm trying to create a card that, when clicked, performs an action. I've managed to make this work by adding a button to the card, which is bound to an event handler, and works as expected. I'm trying to get the whole card to work with the same event handler, as opposed to using the button, but I can't seem to get this to work as I would expect. ``` const SiteCard = props => { const { site, siteSelectedCallback } = props; return ( <Card onClick={siteSelectedCallback} className="card-item"> <CardBody> <CardTitle>{site.name}</CardTitle> <CardText className="text-muted">{site.address}</CardText> <Button color="primary" className="float-right" value={site.id}> CHOOSE ME </Button> </CardBody> </Card> ); }; ``` I've tried wrapping it in an `<a>` tag, but that also doesn't work. With the example, I'd expect the card to be clickable, but actually the button still works with the event handler. I've also tried removing the button, but that doesn't make the card clickable.
Note that adding `onClick` on the `Card` component is enough to make it clickable. Changing the `cursor` through style makes it more obvious to the user. ``` <Card onClick={onClick} style={{ cursor: "pointer" }}> <CardBody>This is a clickable card.</CardBody> </Card> ``` [![Edit pedantic-leftpad-c0yoz](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/pedantic-leftpad-c0yoz) Wrapping the card with an `a` tag will also work, though, it won't have the pointer cursor without a `href` which can be changed easily with CSS. ``` const SiteCard = ({ site, siteSelectedCallback }) => ( <a style={{ cursor: 'pointer' }} onClick={siteSelectedCallback}> <Card className="card-item"> <CardBody> <CardTitle>{site.name}</CardTitle> <CardText className="text-muted">{site.address}</CardText> </CardBody> </Card> </a> ); ``` [![Edit eloquent-haslett-kz62z](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/eloquent-haslett-kz62z) *Tested it just now with a `console.log`, so if that doesn't work, it's because the callback isn't working as you're expecting it to.* --- Another way would be to make the `Card` an `a` tag by passing a `tag` prop. ``` <Card tag="a" onClick={siteSelectedCallback} style={{ cursor: "pointer" }}> ``` [![Edit xenodochial-curie-jkvon](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/xenodochial-curie-jkvon) All the options available are clearly defined in the [source of the reactstrap's `Card` component](https://github.com/reactstrap/reactstrap/blob/25690dcbb1b3a5109b8aac7a2a55223fe38d18b6/src/Card.js). --- I also tested with a button inside the `Card` without any problems.
What is the name for `<(...)` 'operator' in bash? I'm familiar with creating 'temporary files' for command input in bash, e.g. ``` cat file_1 <(echo hello) file_2 ``` I want to read more about the subject, but I don't know the name of the `<( )` operator. I suspect it's a kind of IO redirect. Is there a name for this notation/operator?
This is called [process substitution](http://wiki.bash-hackers.org/syntax/expansion/proc_subst): > > Process substitution is a form of redirection where the input or > output of a process (some sequence of commands) appear as a temporary > file. > > > Also from [Bash Reference Manual → 3.5.6 Process Substitution](https://www.gnu.org/software/bash/manual/bash.html#Process-Substitution): > > Process substitution allows a process’s input or output to be referred to using a filename. It takes the form of > > > > ``` > <(list) > > ``` > > or > > > > ``` > >(list) > > ``` > > The process list is run asynchronously, and its input or output appears as a filename. This filename is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Note that no space may appear between the < or > and the left parenthesis, otherwise the construct would be interpreted as a redirection. Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. > > > When available, process substitution is performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion. > > >
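A couple of quick demonstrations (run in bash; each `<(...)` expands to a filename such as `/dev/fd/63` whose contents are the inner command's output):

```shell
# Each <(...) expands to a filename backed by the command's output:
echo <(echo hi)                      # prints something like /dev/fd/63

# Feed two command outputs to a command that expects file arguments:
cat <(echo hello) <(echo world)
# hello
# world

# Classic use: compare two streams without explicit temporary files
diff <(printf '1\n2\n' | sort) <(printf '2\n1\n' | sort) && echo "same"
```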
How to preprocess data for machine learning? I just wanted some general tips on how data should be preprocessed prior to feeding it into a machine learning algorithm. I'm trying to further my understanding of why we make different decisions at preprocessing time, and if someone could please go through all of the different things we need to consider when cleaning up data, removing superfluous data, etc., I would find it very informative. I have searched the net a lot for canonical answers or rules of thumb, and there don't seem to be any. I have a set of data in a .tsv file available [here](https://www.kaggle.com/c/stumbleupon/data). The training set amounts to 7,000 rows, the test set 3,000. What different strategies should I use for handling badly-formed data if 100 rows are not readable in each? 500? 1,000? Any guidelines to help me reason about this would be very much appreciated. Sample code would be great to see, but is not necessary if you don't feel like it; I just want to understand what I should be doing! :) Thanks
There are a lot of things which need to be decided according to the actual data. It is not as simple as naming a few steps you always need to take when you get data. However, I can try to name a few things which usually help a lot. Still, the first and most important thing is to thoroughly analyze the data and do your best to "understand" them. Understanding the data and all the background behind how they were crawled and collected is an essential part. If you understand why data are missing or noisy, then you have a clue how to handle it. I will try to give you a few hints, though:

1. **Normalize values** - It is not always necessary to normalize all the features. But generally, normalization can't hurt and it can help a lot. Thus, if you are not limited, give it a try and use normalization for all the features except those which clearly make no sense to normalize. The most common normalization methods are *linear normalization* (mapping the feature values to the <0,1> range) and *z-normalization*, which means that you subtract the mean of the feature values and divide the result by the standard deviation. It is not possible to say in general which one is better. (We are getting back to understanding the data.)

2. **Missing values** - It is necessary to decide what to do with missing values, and there are a few ways to handle them. *Remove the samples* with missing values: if you have enough data samples, perhaps it is not necessary to care about the samples with missing values; they may only bring noise to your results. In the case where only one feature value is missing in a sample, you can fill the value with the *mean of that feature* (but be careful, because again, this can just bring noise to the results).

3. **Outliers** - In many cases, you will come across samples which are far away from other samples, i.e. the outliers.
The outliers are usually just noise or mistakes in the data, but they can also be a signal of special behavior (e.g. when something violates the usual behavior pattern, it can be a signal of actions caused by an attacker - e.g. in bank networks). In most cases, it is a good idea to just *remove* the outliers, as the number of outliers is usually really low and they may have a big influence on your results. *Considering a histogram as an example - I would just cut off, let's say, the 0-2.5 percentile and the 97.5-100 percentile*.

4. **Mistakes** - It is very likely there will be mistakes in the data. This is the part where I can't give you any hints, as it is necessary to *really understand all the background* and to know how it could have happened that there are mistakes.

5. **Nominal values** - If there are any nominal values which can be ordered, then just *replace* the nominal values with numbers (0, 1, 2, 3, 4, 5). If it is not possible to order the values (e.g. color = blue, black, green...), then the best way is to *split the feature* into as many features as the cardinality of the set of possible values, and just transform the feature into binary values - "Is green?" "Yes/No" (0/1).

In summary, it is really hard to answer generally. A good way to avoid "making things worse" is to start by removing all the "bad values": just remove all the rows with missing or wrong values, transform all the other values as mentioned before, and try to get your first results. Then you will have a better understanding of all the data and a better idea where to look for any improvements. If you have any further questions regarding particular "pre-processing problems", I will be happy to edit this answer and add more ideas on how to handle them.
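For points 1 and 2 above, here is a stdlib-only sketch (the feature column is invented; in practice, libraries such as scikit-learn provide `StandardScaler` and imputers for exactly this):

```python
import statistics

def z_normalize(values):
    """Point 1: subtract the mean and divide by the (population) std deviation."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

def fill_missing_with_mean(values):
    """Point 2: replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mu = statistics.mean(observed)
    return [mu if v is None else v for v in values]

raw = [2.0, 4.0, None, 6.0]        # invented feature column with one gap
filled = fill_missing_with_mean(raw)
print(filled)                       # [2.0, 4.0, 4.0, 6.0]
print(z_normalize(filled))          # mean 4, pstdev sqrt(2) -> [-1.414..., 0.0, 0.0, 1.414...]
```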
Reading a "flipped" table into a data.frame correctly I have a tab-delimited file that looks like this:

```
AG-AG AG-CA AT-AA AT-AC AT-AG ... 0.0142180094786 0.009478672985781 0.0142180094786 0.4218009478672 ...
```

When I read this into R using read.table I get:

```
nc.tab <- read.table("./percent_splice_pair.tab", sep="\t", header=TRUE) AG.AG AG.CA AT.AA AT.AC AT.AG ... 1 0.01421801 0.009478673 0.01421801 0.4218009 0.03317536 ...
```

This feels somewhat awkward to me, because I am much more used to working with data if it's like this:

```
splice.pair counts AG.AG 0.01421801 AG.CA 0.009478673 AT.AA 0.01421801 AT.AG 0.03317536 ... ...
```

so far, my attempts at trying to coerce the table into a data frame like this (using `data.frame()`) have caused very odd results. I can't work out how to get each row of the table I have as a simple list, which I could then use as columns for the data frame. `colnames(nc.tab)` works for the headers, but things like `nc.tab[1,]` just give me the table + headers again. Am I missing something obvious? --edit-- Whilst @Andrie's answer gave me the data.frame I needed, I had to do a bit of extra work to coerce the count values into numeric values so that they would work correctly in ggplot:

```
nc.tab <- read.table("./percent_splice_pair.tab", header=FALSE, sep="\t") nc.mat <- t(as.matrix(nc.tab)) sp <- as.character(nc.tab[,2]) c <- as.numeric(as.character(nc.tab[,2])) nc.dat <- data.frame(Splice.Pair=sp, count=c) Splice.Pair count 1 AG-AG 0.014218009 2 AG-CA 0.009478673 3 AT-AA 0.014218009 4 AT-AC 0.421800948 5 AT-AG 0.033175355 ```
You need the following to read and reshape your data in the way you want: - use `read.table` with the parameter `header=FALSE` - then transpose the data with the function `t()` - rename the columns Here is the code: ``` x <- read.table(..., header=FALSE) df <- as.data.frame(t(x)) names(df) <- c("splice.pair", "counts") df splice.pair counts V1 AG-AG 0.0142180094786 V2 AG-CA 0.009478672985781 V3 AT-AA 0.0142180094786 V4 AT-AC 0.4218009478672 ```
Please help me understand the type attribute of web.config custom settings? I am trying to define custom settings in my web.config file, and I'm pretty sure I have most of it correct and it all makes sense, except for the one crucial part where I don't understand what I'm supposed to use. The tutorial I used to create my SectionHandler didn't go into an explanation of it, and MSDN isn't really helping me fully understand it either. This comes from the tutorial I used:

```
<section name="BlogSettings" type="Fully.Qualified.TypeName.BlogSettings, AssemblyName" />
```

[Link to Tutorial](http://haacked.com/archive/2007/03/11/custom-configuration-sections-in-3-easy-steps.aspx) This is from MSDN:

```
type="System.Configuration.SingleTagSectionHandler"
```

Yes, I am very new to ASP.NET and I'm trying to learn. I would be happy with any good references that explain what's going on here.
### Description

The `type` attribute of the section in web.config is the "path" to the corresponding `ConfigurationSection` class you have implemented. The ConfigurationSection class is the class that defines the section and its possible configuration content. `ConfigurationSection` is the base class of all configuration sections. This applies to .NET configuration in general, not only web.config. Check out [Unraveling the Mysteries of .NET 2.0 Configuration](http://www.codeproject.com/Articles/16466/Unraveling-the-Mysteries-of-NET-2-0-Configuration)

### More Information

- [MSDN - ConfigurationSection Class](http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx)
- [Unraveling the Mysteries of .NET 2.0 Configuration](http://www.codeproject.com/Articles/16466/Unraveling-the-Mysteries-of-NET-2-0-Configuration)
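Tying it back to the tutorial's `BlogSettings` example, the hypothetical complete picture looks like this (attribute names follow the linked tutorial; the namespace and assembly name are placeholders — use whatever your project actually compiles to):

```xml
<configuration>
  <configSections>
    <!-- type = "Namespace.ClassName, AssemblyName":
         tells .NET which ConfigurationSection class parses <BlogSettings> -->
    <section name="BlogSettings"
             type="MyApp.Configuration.BlogSettings, MyApp" />
  </configSections>

  <!-- The section itself; its attributes map to properties of that class -->
  <BlogSettings frontPagePostCount="10" title="My Blog" />
</configuration>
```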
Weird set index error I am stuck on this chunk of code:

```
hdiag = zeros(Float64,2)

hdiag = [0,0]
println(hdiag)
hdiag[1] = randn()
```

In the last line I obtain an `InexactError`. It is strange because `randn()` is a `Float64`, but for some reason I have to do `hdiag = randn(2)`, and then there is no problem.
The line: ``` hdiag = [0,0] ``` *changes* `hdiag` to refer to a completely new and different array than what it was before. In this case, that new array is an integer array, and so any subsequent assignments into it need to be convertible to integers. Indexed assignment is different; it changes the contents of the existing array. So you can use `hdiag[:] = [0,0]` and it will change the contents, converting the integers to floats as it does so. This gets even easier in version 0.5, where you can use the new `.=` dot assignment syntax to assign into an existing array: ``` hdiag .= [0,0] ``` will do what you want. For more details on arrays, bindings, and assignment, I recommend reading this blog post: [Values vs. Bindings: The Map is Not the Territory.](http://www.johnmyleswhite.com/notebook/2014/09/06/values-vs-bindings-the-map-is-not-the-territory/)
run non web java application on tomcat I have a simple Java application that I need to be running at all times (and also to start automatically on server restart). I have thought about a service wrapper, but the Windows version is paid. Is there a way to configure Tomcat to run a specific class from a project automatically, or any other solution that could give the same result?
I think what you need is an application (whether web or non-web) that starts together with Tomcat. Well, you need a simple web application that registers a listener (one that listens to the application start event, i.e. the Tomcat start event) and launches your class. It's very simple: in your web.xml you declare a listener like this:

```
<listener>
    <description>application startup and shutdown events</description>
    <display-name>ApplicationListener</display-name>
    <listener-class>com.myapp.server.config.ApplicationListener</listener-class>
</listener>
```

And in your ApplicationListener class you implement the ServletContextListener interface. Here is an example:

```
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.log4j.Logger; // assuming log4j; substitute your own logging framework

/**
 * Class to listen for application startup and shutdown
 *
 * @author HBR
 *
 */
public class ApplicationListener implements ServletContextListener {

    private static Logger logger = Logger.getLogger(ApplicationListener.class);

    @Override
    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        logger.info("class : context destroyed");
    }

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        ServletContext context = servletContextEvent.getServletContext();
        ///// HERE you launch your class
        logger.info("myapp : context Initialized");
    }
}
```
R Split string and keep substrings righthand of match? How do I do this `strsplit()` in R? Stop splitting when no first names separated by slashes remain. Keep the right-hand side substring as given in the results.

```
a <- c("tim/tom meyer XY900 123kncjd", "sepp/max/peter moser VK123 456xyz")
# result:
c("tim meyer XY900 123kncjd", "tom meyer XY900 123kncjd",
"sepp moser VK123 456xyz", "max moser VK123 456xyz", "peter moser VK123 456xyz")
```
Here is one possibility using a few of the different base string functions. ``` ## get the lengths of the output for each first name len <- lengths(gregexpr("/", sub(" .*", "", a), fixed = TRUE)) + 1L ## extract all the first names ## using the fact that they all end at the first space character fn <- scan(text = a, sep = "/", what = "", comment.char = " ") ## paste them together paste0(fn, rep(regmatches(a, regexpr(" .*", a)), len)) # [1] "tim meyer XY900 123kncjd" "tom meyer XY900 123kncjd" # [3] "sepp moser VK123 456xyz" "max moser VK123 456xyz" # [5] "peter moser VK123 456xyz" ``` **Addition:** Here is a second possibility, using a little less code. Might be a little faster too. ``` s <- strsplit(a, "\\/|( .*)") paste0(unlist(s), rep(regmatches(a, regexpr(" .*", a)), lengths(s))) # [1] "tim meyer XY900 123kncjd" "tom meyer XY900 123kncjd" # [3] "sepp moser VK123 456xyz" "max moser VK123 456xyz" # [5] "peter moser VK123 456xyz" ```
Second function is not running in jquery I have two functions and I want to call one function after the first is completed. I wrote this:

```
$(document).ready(function () {
    FetchProducts('@Model.ProductId', function () {
        SimilarProducts('@Model.Class.Group.SectionId', '@Model.ProductId', '@TempData["Min"]', '@TempData["Max"]');
    });
});
```

The `FetchProducts` function runs an ajax call that fills `TempData["Min"]` and `TempData["Max"]` and returns a list of products. `SimilarProducts` wants to make another ajax request using min and max to get some similar products. `FetchProducts` runs properly, but `SimilarProducts` is not running. What's the problem? **Update** This is the `FetchProducts` function:

```
function FetchProducts(productId) {
    $.getJSON("/product/jsonproducts", { productId: productId }, function (data) {
        var i = 0;
        $(".section-items-details").empty();
        for (var i = 0; i < data.length; i++) {
        ...
        }
    });
}
```

And this is the `SimilarProducts` function:

```
function SimilarProducts(sectionId,productId, minimum, maximum) {
    $.getJSON("/product/getsimilarproducts", { sectionId: sectionId, productId: productId, min: minimum, max: maximum }, function (data) {
        var i = 0;
        for (var i = 0; i < data.length; i++) {
        ...
        }
    });
}
```
Oh well, since your update I can tell you where the error is :) So, let's talk about your `FetchProducts` function:

```
function FetchProducts(productId) {
    $.getJSON("/product/jsonproducts", { productId: productId }, function (data) {
        var i = 0;
        $(".section-items-details").empty();
        for (var i = 0; i < data.length; i++) {
        ...
        }
    });
}
```

As we can see, your `FetchProducts` function only accepts one argument, but in its call you added an anonymous function. In order to make it work properly, you should edit your `FetchProducts` function like this:

```
function FetchProducts(productId, callback) {
    $.getJSON("/product/jsonproducts", { productId: productId }, function (data) {
        var i = 0;
        $(".section-items-details").empty();
        for (var i = 0; i < data.length; i++) {
        ...
        }
        ...
        // Everything is ok, let's call our callback function!
        if ($.isFunction(callback)) callback();
    });
}
```
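Stripped of jQuery, the underlying pattern is just this (the function names and the fake delay below are illustrative only): the function doing the async work must *accept* the callback and *invoke* it when the work is done.

```javascript
// The async "fetch" accepts a callback and invokes it once its own work is done.
function fetchProducts(productId, callback) {
  setTimeout(function () {                    // stands in for the async $.getJSON call
    console.log("fetched products for", productId);
    if (typeof callback === "function") {     // same guard as $.isFunction(callback)
      callback();
    }
  }, 0);
}

// The second request only fires after the first one has finished.
fetchProducts(42, function () {
  console.log("now fetching similar products");
});
```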
Converting dates with PHP for DATETIME in SQL I have a form in PHP which takes a date in the format `dd/mm/yyyy hh:mm:ss`. However, I need to insert it into SQL as a DATETIME in the format `yyyy-mm-dd hh:mm:ss`. How can I convert this data?
Your date time format is wrong: `dd/mm/yyyy hh:mm:ss`. You probably mean `d/m/Y H:i:s`. If you have PHP 5.3+, there is a safe way to convert the date time into another format. Here's an example:

```
$timestamp = '31/05/2001 12:22:56';
$timestamp = DateTime::createFromFormat('d/m/Y H:i:s', $timestamp);
echo $timestamp->format('Y-m-d H:i:s');
```

or if you prefer a more procedural way:

```
$timestamp = '31/05/2001 12:22:56';
$timestamp = date_create_from_format('d/m/Y H:i:s', $timestamp);
echo date_format($timestamp, 'Y-m-d H:i:s');
```

Be careful with previous suggestions. Some are completely wrong and others could lead to errors.