C++11 uniform initialization: Field initializer is not constant
I'm trying to instantiate a set of strings like this:
```
class POI {
public:
...
static const std::set<std::string> TYPES { "restaurant", "education", "financial", "health", "culture", "other" };
...
}
```
Now, when I do this I get these errors (all on this line):
```
error: field initializer is not constant
static const std::set<std::string> TYPES { "restaurant", "education", "financial", "health", "culture", "other" };
error: in-class initialization of static data member 'const std::set<std::basic_string<char> > POI::TYPES' of non-literal type
error: non-constant in-class initialization invalid for static member 'POI::TYPES'
error: (an out of class initialization is required)
error: 'POI::TYPES' cannot be initialized by a non-constant expression when being declared
```
That would make sense to my eyes if I assumed that the strings inside the set are not treated as being const. Is that really the problem here? Unfortunately, I cannot find a way of declaring those strings inside the initializer as const. Is that possible?
| You must initialize your static variable out-of-line, as in:
```
#include <set>
#include <string>
class POI {
public:
static const std::set<std::string> TYPES;
};
const std::set<std::string> POI::TYPES { "restaurant", "education", "financial", "health", "culture", "other" };
```
In-class initialization would work only for a const integral or const enumeration type, as specified by the standard (section 9.4.2):
>
> If a static data member is of const integral or const enumeration type, its declaration in the class definition can specify a constant-initializer which shall be an integral constant expression. In that case, the member can appear in integral constant expressions within its scope.
>
>
>
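As a side note, if C++17 is available, `inline` static data members allow the in-class form even for non-literal types. A minimal sketch:
```
#include <set>
#include <string>

class POI {
public:
    // C++17: an inline static member may be defined in-class,
    // even for a non-literal type such as std::set
    inline static const std::set<std::string> TYPES {
        "restaurant", "education", "financial", "health", "culture", "other"
    };
};
```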
|
Why does is_array() leak memory in PHP?
According to my xdebug output, is_array() leaks the amount of memory that gets passed to it as an argument. If I pass it a large array, it leaks a ton of memory just in order to check if it's an array.
Is there a workaround for this?
```
17.4313 21858520 +70004 -> is_array() [...]/app/app_model.php:526
```
Here's the code snippet that causes the leak:
```
$ret = $this->behaviors[$b[$i]]->afterFind($this, $results, true);
if (is_array($ret)) {
    $results = $ret;
}
```
I'm running this on Linux (Ubuntu 9.04)
PHP: 5.3.2
xdebug: 2.0.5
uname -a gives me this:
Linux linux8 2.6.28-19-server #64-Ubuntu SMP Wed Aug 18 21:57:33 UTC 2010 i686 GNU/Linux
| My first reaction:
[Select isn't broken](http://pragprog.com/the-pragmatic-programmer/extracts/tips).
My second reaction:
You can conclude three things:
- a widely spread piece of software (`is_array`) is broken and you are the first one to notice
- xdebug is broken and reports a leak where there is none
- xdebug and PHP don't work together nicely as far as memory management is concerned
A widely spread and used function is most often not the problem. Try to narrow down the occurrence of the 'xdebug leak report' by running simpler code:
```
$arr = array_fill( 0, 10000, "content" );
$mallocbytes = true; // set to true to get process (real) memory usage
$usage = memory_get_usage(!$mallocbytes);
for( $i=0; $i!=1000000; $i=$i+1) {
    is_array($arr);
    $newusage = memory_get_usage(!$mallocbytes);
    if( $newusage != $usage ) {
        print( "diff after $i'th is_array: ".($newusage-$usage)."\n" );
    }
    $usage = $newusage;
}
```
Take a look at the actual memory consumption of your PHP runtime. I bet it won't grow.
|
In C, why do some people cast the pointer before freeing it?
I'm working on an old code base and pretty much every invocation of free() uses a cast on its argument. For example,
```
free((float *)velocity);
free((float *)acceleration);
free((char *)label);
```
where each pointer is of the corresponding (and matching) type. I see no point in doing this at all. It's very old code, so I'm left wondering if it's a K&R thing. If so, I actually wish to support the old compilers that may have required this, so I don't want to remove them.
Is there a technical reason to use these casts? I don't even see much of a pragmatic reason to use them. What's the point of reminding ourselves of the data type right before freeing it?
EDIT: This question is *not* a duplicate of the other question. The other question is a special case of this question, which I think is obvious if the close voters would have read all the answers.
Colophon: I'm giving the checkmark to the answer that gave a reason why this might still need to be done; however, the answer about it being a pre-ANSI C custom (at least among some programmers) seems to be the reason it was used in my case. If there were two checkmarks to give they'd both get one. Lots of good points by many people here. Thank you for your contributions.
| Casting may be required to resolve compiler warnings if the pointers are `const`. Here is an example of code that causes a warning without casting the argument of free:
```
const float* velocity = malloc(2*sizeof(float));
free(velocity);
```
And the compiler (gcc 4.8.3) says:
```
main.c: In function ‘main’:
main.c:9:5: warning: passing argument 1 of ‘free’ discards ‘const’ qualifier from pointer target type [enabled by default]
free(velocity);
^
In file included from main.c:2:0:
/usr/include/stdlib.h:482:13: note: expected ‘void *’ but argument is of type ‘const float *’
extern void free (void *__ptr) __THROW;
```
If you use `free((float*) velocity);` the compiler stops complaining.
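Alternatively, casting to `void *` (the parameter type of `free`) silences the warning just as well and avoids repeating the element type. A minimal sketch:
```
#include <stdlib.h>

int main(void) {
    const float *velocity = malloc(2 * sizeof(float));
    /* the explicit cast discards the const qualifier, so no warning */
    free((void *)velocity);
    return 0;
}
```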
|
f-strings vs str.format()
I'm using `.format()` a lot in my Python 3.5 projects, but I'm afraid that it will be deprecated in future Python versions because of f-strings, the new kind of string literal.
```
>>> name = "Test"
>>> f"My app name is {name}."
'My app name is Test.'
```
Does the formatted string feature come to fully replace the old `.format()`? And from now on, would it be better to use the new style in all cases?
I understand that it's based on the idea that "Simple is better than complex." However, what about performance; is there any difference between them? Or is it just a simpler-looking form of the same feature?
|
>
> *I'm afraid that it will be deprecated during the next Python versions*
>
>
>
Don't be, `str.format` does not appear (nor has any reason) to be leaving any time soon; the PEP that introduced `f`-prefixed strings even [states in its Abstract](https://www.python.org/dev/peps/pep-0498/#abstract):
>
> This PEP does not propose to remove or deprecate any of the existing string formatting mechanisms.
>
>
>
Formatted strings were introduced to address some of the shortcomings other methods for formatting strings had; not to throw the old methods away and force god-knows how many projects to use f-strings if they want their code to work for Python 3.6+.
---
As for the performance of these, it seems my initial suspicion that they might be slower is wrong; f-strings seem to easily outperform their `.format` counterparts:
```
➜ cpython git:(master) ./python -m timeit -s "a = 'test'" "f'formatting a string {a}'"
500000 loops, best of 5: 628 nsec per loop
➜ cpython git:(master) ./python -m timeit "'formatting a string {a}'.format(a='test')"
100000 loops, best of 5: 2.03 usec per loop
```
These were done against the master branch of the CPython repository as of this writing; they are definitely subject to change:
- `f-strings`, as a new feature, might receive further optimizations
- Optimizations to CPython might make `.format` faster (e.g. [Speedup method calls 1.2x](https://bugs.python.org/issue26110))
But really, don't worry about speed so much, worry about what is more readable to you and to others.
In many cases, that's going to be `f-strings`, but [there's some cases](https://stackoverflow.com/questions/44780357/how-to-use-newline-n-in-f-string-to-format-output-in-python-3-6) where `format` is better.
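One small illustration of such a case: a `.format` template can be stored once and rendered later with different values, whereas an f-string is evaluated immediately where it is written:
```
template = "My app name is {name}."
print(template.format(name="Test"))   # 'My app name is Test.'
print(template.format(name="Other"))  # 'My app name is Other.'
```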
|
Can I lose "constness" in the return type of an override virtual function?
The following code compiles and runs, and no warning is emitted by either gcc or clang:
```
#include <iostream>
#include <string>

struct Base {
    virtual ~Base() = default;
    virtual std::string const& get() = 0;
};

struct Derived: Base {
    virtual std::string& get() override { return m; }
    std::string m;
};

int main()
{
    Derived d;
    d.get() = "Hello, World";
    Base& b = d;
    std::cout << b.get() << "\n";
}
```
Is `std::string&` covariant with `std::string const&` then?
| **Yes**
This is specified in **class.virtual**; in the latest draft (n4606) we see:
>
> **§10.3 7/** The return type of an overriding function shall be either identical to the return type of the overridden function
> or *covariant* with the classes of the functions. If a function `D::f` overrides a function `B::f`, the return types
> of the functions are covariant if they satisfy the following criteria:
>
>
> - both are pointers to classes, both are lvalue references to classes, or both are rvalue references to classes
> - the class in the return type of `B::f` is the same class as the class in the return type of `D::f`, or is an unambiguous and accessible direct or indirect base class of the class in the return type of `D::f`
> - both pointers or references have the same cv-qualification and the class type in the return type of `D::f` has the same cv-qualification as or less cv-qualification than the class type in the return type of `B::f`.
>
>
>
Specifically, the last point addresses exactly the case here: it is acceptable for an overriding type to lose the `const` and/or `volatile` qualifiers (it cannot, however, gain them).
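A minimal sketch of the reverse direction, which compilers reject because the overriding function would *gain* cv-qualification:
```
#include <string>

struct B {
    virtual std::string& get() = 0;
};

struct D : B {
    // error: the return type is not covariant -- it gains const
    std::string const& get() override;
};
```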
---
Note: as mentioned by [@george](https://stackoverflow.com/users/471160/george) above, paragraph 8/ used to prevent this from working with incomplete class types, but this was [since fixed](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1250).
|
Conditional Compilation in Perl
How do I get the following code to work?
```
use strict;
use warnings;

if ($^O eq 'MSWin32' || $^O eq 'MSWin64') {
    use Win32;
    Win32::MsgBox("Aloha!", MB_ICONINFORMATION, 'Win32 Msgbox');
}
else {
    print "Do not know how to do msgbox under UNIX!\n";
}
```
The above runs under Windows. But under UNIX, there is a compilation error as Win32 cannot be found. Replacing "use" with "require" makes things worse -- the code would fail to compile under both Windows and UNIX because the line containing MB_ICONINFORMATION is always compiled and "MB_ICONINFORMATION" would be an undeclared bareword.
So how do I get around this problem?
| Perl compiles code first to an intermediate representation, then executes it. Since the `if` is evaluated at runtime but the `use` is handled during compilation, you are not importing the module conditionally.
To fix this, there are a number of possible strategies:
- conditional import with the `use if` pragma
- conditional import with a BEGIN block
- `require` the module
- defer compilation with `eval`
To import a module only when a certain condition is met, you can use the [`if` pragma](https://metacpan.org/pod/if):
```
use if $^O eq 'MSWin32', 'Win32';
```
You can also run code during compilation by putting it into a BEGIN block:
```
BEGIN {
    if ($^O eq 'MSWin32') {
        require Win32;
        Win32->import; # probably not necessary
    }
}
```
That BEGIN block behaves exactly the same as the above `use if`.
Note that we have to use `require` here. With a `use Win32`, the module would have been loaded during the compile time of the begin block, which bypasses the `if`. With `require` the module is loaded during runtime of the begin block, which is during compile time of the surrounding code.
In both these cases, the `Win32` module will only be imported under Windows. That leaves the `MB_ICONINFORMATION` constant undefined on non-Windows systems. In this kind of code, it is better not to import any symbols. Instead, use the fully qualified name for everything and use parentheses for a function call (here: `Win32::MB_ICONINFORMATION()`). With that change, just using a `require` instead of a `use if` may also work.
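Putting those two points together, a sketch (using the fully qualified constant call suggested above):
```
use strict;
use warnings;
use if $^O eq 'MSWin32', 'Win32';

if ($^O eq 'MSWin32') {
    # fully qualified names with parentheses: nothing is imported,
    # so this also compiles cleanly on non-Windows systems
    Win32::MsgBox("Aloha!", Win32::MB_ICONINFORMATION(), 'Win32 Msgbox');
}
else {
    print "Do not know how to do msgbox under UNIX!\n";
}
```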
If you need code to be run later, you can use a string-eval. However, this potentially leads to security issues, is more difficult to debug, and is often slower. For example, you could do:
```
if ($^O eq 'MSWin32') {
    eval q{
        use Win32;
        Win32::MsgBox("Aloha!", MB_ICONINFORMATION, 'Win32 Msgbox');
        1;
    } or die $@; # forward any errors
}
```
- Because `eval` silences any errors by default, you must check success and possibly rethrow the exception. The `1` statement makes sure that the eval'ed code returns a true value if successful. `eval` returns `undef` if an error occurs. The `$@` variable holds the last error.
- `q{...}` is an alternative quoting construct. Aside from the curly braces as string delimiters, it is exactly the same as `'...'` (single quotes).
If you have a lot of code that only works on a certain platform, using the above strategies for each snippet is tedious. Instead, create a module for each platform. E.g.:
Local/MyWindowsStuff.pm:
```
package Local::MyWindowsStuff;
use strict;
use warnings;
use Win32;

sub show_message {
    my ($class, $message, $title) = @_;
    Win32::MsgBox($message, MB_ICONINFORMATION, $title);
}

1;
```
Local/MyPosixStuff.pm:
```
package Local::MyPosixStuff;
use strict;
use warnings;

sub show_message {
    warn "messagebox only supported on Windows";
}

1;
```
Here I've written them to be usable as classes. We can then conditionally load one of these classes:
```
sub load_stuff {
    if ($^O eq 'MSWin32') {
        require Local::MyWindowsStuff;
        return 'Local::MyWindowsStuff';
    }
    require Local::MyPosixStuff;
    return 'Local::MyPosixStuff';
}

my $stuff = load_stuff();
```
Finally, instead of putting a conditional into your code, we invoke the method on the loaded class:
```
$stuff->show_message('Aloha!', 'Win32 Msgbox');
```
If you don't want to create extra packages, one strategy is to eval a code ref:
```
sub _eval_or_throw {
    my ($code) = @_;
    my $result = eval $code; # returns the value of the last statement: the code ref
    die $@ if $@;            # forward any compilation errors
    return $result;
}

my $show_message =
    ($^O eq 'MSWin32') ? _eval_or_throw q{
        use Win32;
        sub {
            Win32::MsgBox("Aloha!", MB_ICONINFORMATION, 'Win32 Msgbox');
        }
    } : _eval_or_throw q{
        sub {
            warn "messagebox only supported on Windows";
        }
    };
```
Then: `$show_message->()` to invoke this code. This avoids repeatedly compiling the same code with `eval`. Of course that only matters when this code is run more than once per script, e.g. inside a loop or in a subroutine.
|
What is the Snap packaging format?
I have very little knowledge about the 'Snap packaging format'. What I know is that 'Snap' is an alternative packaging format like .deb.
What I don't know is
- Why did Canonical choose it?
- What are the main advantages of 'Snap' over .deb?
- Will .deb be abandoned, or is it already abandoned?
| ## Why did Canonical choose snaps?
To quote the [Ubuntu website](https://developer.ubuntu.com/en/snappy/):
>
> We originally created the snappy technology and application
> confinement system to ensure a carrier-grade update experience for
> Ubuntu mobile users and set a new standard for application security in
> the mobile era.
>
>
>
The essential idea was to fix issues present in `.deb` packages and to provide a new method for updating packages (so-called transactional updates, very similar to how Android apps are updated). As [Mark Shuttleworth](http://www.markshuttleworth.com/) explains:
>
> Whenever we make a fix to packages in Ubuntu, we’ll publish the same
> fix to Ubuntu Core, and systems can get that fix transactionally. In
> fact, updates to Ubuntu Core are even smaller than package updates
> because we only need to send the precise difference between the old
> and new versions, not the whole package.
>
>
>
### What are the main advantages of .snap packages over .deb packages?
The biggest advantage is the improved security. PPAs and `.deb` packages are typically installed with root privileges, which opens an avenue for security risks.
Snappy apps are isolated, meaning that if some app breaks, it won't break your system. To quote Mark Shuttleworth:
>
> Snappy packages are automatically confined to ensure that a bug in one
> app doesn’t put your data elsewhere at risk
>
>
>
### Will .deb be abandoned?
As of Ubuntu 16.04 LTS, both methods are available to users.
To quote [OMG! Ubuntu!](http://www.omgubuntu.co.uk/2016/04/ubuntu-16-04-lts-snap-packages):
>
> Canonical also say that “…the tens of thousands of applications and
> packages in .deb format will continue to be supported in 16.04 and
> beyond, and deb archives in particular will continue to be available
> for all to use and distribute software.”
>
>
>
|
derived instance in base class
```
class baseClass
{
    derivedClass nm = new derivedClass();
}

class derivedClass : baseClass
{
}
```
This code builds fine. What might be the possible reason for C# to allow creating `derivedClass` objects in `baseClass`? Can you think of any specific reasons for doing this?
|
>
> This code builds fine.
>
>
>
Yes - why do you think it wouldn't?
>
> What might be the possible reason for C# to allow creating derivedClass objects in baseClass.
>
>
>
Because there's no reason to prohibit it?
>
> Can you think of any specific reasons for doing this?
>
>
>
Static factory methods, for example?
```
// BaseClass gets to decide which concrete class to return
public static BaseClass GetInstance()
{
    return new DerivedClass();
}
```
That's actually a pretty common pattern. We use it a lot in [Noda Time](http://noda-time.googlecode.com) for example, where `CalendarSystem` is a public abstract class, but all the concrete derived classes are internal.
Sure, it's crazy to have the exact example you've given - with an instance field initializing itself by creating an instance of a derived class - because it would blow up the stack due to recursion - but that's not a matter of it being a derived class. You'd get the same thing by initializing the *same* class:
```
class Bang
{
    // Recursively call constructor until the stack overflows.
    Bang bang = new Bang();
}
```
|
Why do forums store posts in a database?
From looking at the way some forum software stores data in a database (e.g. phpBB uses MySQL databases for storing just about everything) I started to wonder why they do it that way. Couldn't it be just as fast and efficient to use, maybe, XSL with XSLT to store forum topics and posts? Or to at least store the posts in a topic?
| There are loads of reasons why they use databases and not flat files. Here are a few off the top of my head.
[Referential integrity](http://en.wikipedia.org/wiki/Referential_integrity)
[Indexes](http://en.wikipedia.org/wiki/Index_%28database%29) and efficient searching
[SQL Joins](http://en.wikipedia.org/wiki/Join_%28SQL%29)
Here are a couple more posts you can look at for more information:
[If i can store my data in text files and easily can deal with these files, why should i use database like Mysql, oracle etc](https://stackoverflow.com/questions/3395870/if-i-can-store-my-data-in-text-files-and-easily-can-deal-with-these-files-why-sh)
[Why use MySQL over flatfiles?](https://stackoverflow.com/questions/2667850/why-use-mysql-over-flatfiles)
[Why use SQL database?](https://stackoverflow.com/questions/2900324/why-use-sql-database)
|
How to draw a SimpleWeightedGraph on a JPanel?
I have a SimpleWeightedGraph and I want to draw it on a JPanel in a JFrame.
Unfortunately nothing is drawn.
I read [this article](http://jgrapht.org/visualizations.html). They are using a `ListenableDirectedGraph` so I tried a `ListenableUndirectedGraph` with no success.
```
public class DisplayGraphForm extends javax.swing.JFrame {

    public DisplayGraphForm(SimpleWeightedGraph g) {
        initComponents(); // graphPanel added to JFrame with BorderLayout (Center)

        JGraphModelAdapter adapter = new JGraphModelAdapter(g);
        JGraph jgraph = new JGraph(adapter);
        graphPanel.add(jgraph);
    }
}
```
| It looks like you're leaving some important details out of your question, and without a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve) it is hard to say where the problem is.
However, note that the sample you're trying to adopt is very old. `JGraph` has moved on to `JGraphX`. Consider the following sample that demonstrates linking `JGraphT` and `JGraphX` using `JGraphXAdapter`.
```
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

import org.jgrapht.ListenableGraph;
import org.jgrapht.ext.JGraphXAdapter;
import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.ListenableDirectedWeightedGraph;

import com.mxgraph.layout.mxCircleLayout;
import com.mxgraph.layout.mxIGraphLayout;
import com.mxgraph.swing.mxGraphComponent;

public class DemoWeightedGraph {

    private static void createAndShowGui() {
        JFrame frame = new JFrame("DemoGraph");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        ListenableGraph<String, MyEdge> g = buildGraph();
        JGraphXAdapter<String, MyEdge> graphAdapter =
                new JGraphXAdapter<String, MyEdge>(g);

        mxIGraphLayout layout = new mxCircleLayout(graphAdapter);
        layout.execute(graphAdapter.getDefaultParent());

        frame.add(new mxGraphComponent(graphAdapter));

        frame.pack();
        frame.setLocationByPlatform(true);
        frame.setVisible(true);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                createAndShowGui();
            }
        });
    }

    public static class MyEdge extends DefaultWeightedEdge {
        @Override
        public String toString() {
            return String.valueOf(getWeight());
        }
    }

    public static ListenableGraph<String, MyEdge> buildGraph() {
        ListenableDirectedWeightedGraph<String, MyEdge> g =
                new ListenableDirectedWeightedGraph<String, MyEdge>(MyEdge.class);

        String x1 = "x1";
        String x2 = "x2";
        String x3 = "x3";

        g.addVertex(x1);
        g.addVertex(x2);
        g.addVertex(x3);

        MyEdge e = g.addEdge(x1, x2);
        g.setEdgeWeight(e, 1);

        e = g.addEdge(x2, x3);
        g.setEdgeWeight(e, 2);

        e = g.addEdge(x3, x1);
        g.setEdgeWeight(e, 3);

        return g;
    }
}
```
![enter image description here](https://i.stack.imgur.com/NJhP6.png)
Note that `MyEdge` extends `DefaultWeightedEdge` to provide a custom `toString()` that displays the edge weight. A cleaner solution would probably be to override `mxGraph.convertValueToString`, examine the content of the cells and provide custom labels as needed. `toString` is a shortcut for the demo, and I also noticed that `DefaultWeightedEdge.getWeight()` is protected, so the extension is needed anyway :)
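For reference, a sketch of that cleaner alternative (assuming `MyEdge` adds a public accessor for its weight, since `getWeight()` is protected):
```
// assumes MyEdge defines: public double weight() { return getWeight(); }
JGraphXAdapter<String, MyEdge> graphAdapter = new JGraphXAdapter<String, MyEdge>(g) {
    @Override
    public String convertValueToString(Object cell) {
        Object value = getModel().getValue(cell);
        if (value instanceof MyEdge) {
            return String.valueOf(((MyEdge) value).weight());
        }
        return super.convertValueToString(cell);
    }
};
```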
|
Proving $\mathcal{H}_{Singleton}$ is PAC-learnable
I'm referring to Section 3.5, ex. 2 in [Understanding machine learning](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf).
To my understanding, given $\varepsilon, \delta$, I need to find the minimum sample size $n$ s.t.
$$P[e_P(ERM(S_n)) > \varepsilon] < \delta$$
where $S_n$ is a sample of size $n$, and $ERM$ is an algorithm that, given the sample, returns a hypothesis with minimum empirical error.
I tried to count the number of hypotheses that are "bad", in the sense that their true error is more than $\varepsilon$, and to show that the probability that the $ERM$ algorithm will choose one of those is less than $\delta$, but that wasn't successful.
I also tried doing the opposite - counting all possible hypotheses that the $ERM$ algorithm can output and showing that the probability that any of them has true error larger than $\varepsilon$ is less than $\delta$. That wasn't successful either.
Is there a way to prove it without using VC-dimension arguments?
| For completeness, the full question is:
>
> Let $X$ be a discrete domain, and let $\mathcal{H}_{Singleton} = \{h_z : z \in X\} \cup \{h_-\}$, where for each $z \in X$, $h_z$ is the function defined by $h_z(x) = 1$ iff $x = z$ and $0$ otherwise.
>
> $h_-$ is simply the all-negative hypothesis, namely, for all $x \in X, h_-(x) = 0$.
>
> The realizability assumption here implies that the true hypothesis $f$ labels
> negatively all examples in the domain, perhaps except one.
>
> 1. Describe an algorithm that implements the ERM rule for learning $\mathcal{H}_{Singleton}$
> in the realizable setup.
> 2. Show that $\mathcal{H}_{Singleton}$ is PAC learnable. Provide an upper bound on the
> sample complexity.
I will first describe a learning algorithm for $\mathcal{H}_{Singleton}$, and then show that this learning algorithm outputs a hypothesis that satisfies the requirements of PAC learning.
Assume an arbitrary distribution $D$ over $X$.
Given the training set $S$ consisting of $m$ samples independently sampled from $D$ and labelled by the true hypothesis $f$, i.e., $S = \{(x_1, f(x_1)) \ldots (x_m, f(x_m))\}$,
our learning algorithm is:
1. Suppose $y_i = 1$ for some $(x_i, y_i) \in S$. Note that this can happen for exactly one $i$, if it does, because of the realizability condition. Then, output $h_{x_i}$.
2. Otherwise, all $y_i = 0$. Then, output $h_-$.
Note that in both cases, if the output hypothesis is $h_S$, the loss over the training set for this hypothesis, $L_S(h_S)$, is always $0$. Thus, this is an ERM rule, because $0$ is the lowest possible loss.
This concludes part 1. We will now analyse the error of this learning algorithm.
For $S = \{(x_1, f(x_1)) \ldots (x_m, f(x_m))\}$, define $S_x = \{ x_1, x_2, \ldots x_m \},$ the list of unlabelled samples obtained from $X$ through sampling from $D$ exactly $m$ times. Note that $S_x$ completely determines $S$.
We wish to bound the probability of a *bad* sample (parametrised by the accuracy $\epsilon$) by the confidence parameter $\delta$.
More precisely, given some $\epsilon, \delta$ in $(0, 1)$, we want to show that there is an $m$, such that when our learning algorithm is trained on $m$ samples, we have,
$$ P(\{S_x \mid L_{(D, f)}(h_S) > \epsilon\}) < \delta $$
Note that $m$ will be a function of $\epsilon$ and $\delta$. $P$ above is the probability measure given by the distribution $D^m$ over $X^m.$
The true error $L_{(D, f)}$ is defined as,
$$ L_{(D, f)}(h) = P(\{h(x) \neq f(x)\}) $$
Now, suppose the true hypothesis (or, the actual labelling function) $f$ is $h_-$. Then, our learning algorithm outputs $h_-$ as well. The true error of the output hypothesis will be $0$, and hence will be greater than any $\epsilon > 0$ with probability $0$, which is less than $\delta$. In this case, our learning algorithm outputs a suitable hypothesis.
Otherwise, the true hypothesis $f$ is $h_{x_0}$ for some $x_0$ in $X$. Here, we have two cases:
1. $x_0 \in S_x$ : Our learning algorithm will output $h_{x_0}$ here, and hence, will have zero true error as $f = h_{x_0}$.
2. $x_0 \not\in S_x$ : This is the only case where we can have a non-zero true error, because our algorithm will output $h_-$. Thus,
$$ P(\{S_x \mid L_{(D, f)}(h_S) > \epsilon\}) \leq P(\{x_0 \not\in S_x\}) $$
Note that, as the samples in $S$ are identically and independently sampled from $D$, the probability that $x_0 \not\in S_x$ is the same as the probability that we never sample $x_0$ in $m$ independent trials from the distribution $D$, which is $(1 - P(\{x_0\}))^m$.
Note that $f$ and $h_-$ differ only at one point, $x_0$. Thus,
$$ \epsilon < L_{(D, f)}(h_-) = P(\{h_-(x) \neq f(x)\}) = P(\{x_0\}) $$
It follows that, $$ P(\{x_0 \not\in S_x\}) = (1 - P(\{x_0\}))^m < (1 - \epsilon)^m .$$
Thus, if we have $(1 - \epsilon)^m \leq \delta$, we are done. This is equivalent to,
$$ m \geq \left\lceil{\frac{\ln(\delta)}{\ln(1 - \epsilon)}}\right\rceil $$
and the right-hand side is an upper bound on the sample complexity of the hypothesis class $\mathcal{H}_{Singleton}$ by definition of the sample complexity.
This shows that the output hypothesis satisfies the PAC learning requirements, given a sufficiently large but finite $m$. Thus, $\mathcal{H}_{Singleton}$ is PAC-learnable.
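As a final remark, the bound can be simplified with the standard inequality $1 - \epsilon \leq e^{-\epsilon}$, which gives $(1 - \epsilon)^m \leq e^{-\epsilon m}$; requiring $e^{-\epsilon m} \leq \delta$ yields the slightly weaker but more familiar sample complexity bound
$$ m \geq \left\lceil \frac{\ln(1/\delta)}{\epsilon} \right\rceil. $$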
|
pytorch skip connection in a sequential model
I am trying to wrap my head around skip connections in a sequential model. With the functional API I would be doing something as easy as (quick example, it may not be 100% syntactically correct but you should get the idea):
```
x1 = self.conv1(inp)
x = self.conv2(x1)
x = self.conv3(x)
x = self.conv4(x)
x = self.deconv4(x)
x = self.deconv3(x)
x = self.deconv2(x)
x = torch.cat((x, x1), 1))
x = self.deconv1(x)
```
I am now using a sequential model and trying to do something similar, create a skip connection that brings the activations of the first conv layer all the way to the last convTranspose. I have taken a look at the U-Net architecture implemented [here](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) and it's a bit confusing; it does something like this:
```
upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
                            kernel_size=4, stride=2,
                            padding=1, bias=use_bias)
down = [downrelu, downconv, downnorm]
up = [uprelu, upconv, upnorm]

if use_dropout:
    model = down + [submodule] + up + [nn.Dropout(0.5)]
else:
    model = down + [submodule] + up
```
Isn't this just adding layers to the sequential model, well, sequentially? There is the `down` conv which is followed by `submodule` (which recursively adds inner layers) and is then concatenated to `up`, which is the upconv layer. I am probably missing something important about how the `Sequential` API works, but how does the code snippet from U-Net actually implement the skip?
| Your observations are correct, but you may have missed the definition of `UnetSkipConnectionBlock.forward()` (`UnetSkipConnectionBlock` being the `Module` defining the U-Net block you shared), which may clarify this implementation:
(from [`pytorch-CycleGAN-and-pix2pix/models/networks.py#L259`](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/2ecf15f8a7f87fa56e784e0504136e9daf6b93d6/models/networks.py#L259))
```
# Defines the submodule with skip connection.
# X -------------------identity---------------------- X
#   |-- downsampling -- |submodule| -- upsampling --|
class UnetSkipConnectionBlock(nn.Module):
    # ...
    def forward(self, x):
        if self.outermost:
            return self.model(x)
        else:
            return torch.cat([x, self.model(x)], 1)
```
The last line is the key (applied to all inner blocks). The skip layer is simply done by concatenating the input `x` and the (recursive) block output `self.model(x)`, with `self.model` the list of operations you mentioned -- so not so different from the functional code you wrote.
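As an aside, a minimal sketch of the same idea as a standalone, reusable block (the class name is mine, not from the repository):
```
import torch
import torch.nn as nn

class SkipConnection(nn.Module):
    """Wraps a submodule and concatenates the block input with its output."""
    def __init__(self, submodule):
        super(SkipConnection, self).__init__()
        self.submodule = submodule

    def forward(self, x):
        # channel-wise concatenation, exactly like the U-Net block above
        return torch.cat([x, self.submodule(x)], 1)
```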
|
Looking for a C# code parser
I'm looking for a set of classes (preferably in the .net framework) that will parse C# code and return a list of functions with parameters, classes with their methods, properties etc. Ideally it would provide all that's needed to build my own intellisense.
I have a feeling something like this should be in the .net framework, given all the reflection stuff they offer, but if not then an open source alternative is good enough.
What I'm trying to build is basically something like Snippet Compiler, but with a twist. I'm trying to figure out how to get the code dom first.
I tried googling for this but I'm not sure what the correct term for this is so I came up empty.
Edit: Since I'm looking to use this for intellisense-like processing, actually compiling the code won't work since it will most likely be incomplete. Sorry I should have mentioned that first.
| While .NET's CodeDom namespace provides the [basic API for code language parsers](http://msdn.microsoft.com/en-us/library/system.codedom.compiler.codedomprovider.parse.aspx), the parsers themselves are not implemented. Visual Studio does this through its own language services. These are not available in the redistributable framework.
You could either...
1. Compile the code then use reflection on the resulting assembly
2. Look at something like the Mono C# compiler which creates these syntax trees. It won't be a high-level API like CodeDom but maybe you can work with it.
There may be [something on CodePlex](http://www.codeplex.com/csparser) or a similar site.
**UPDATE**
See this related post. [Parser for C#](https://stackoverflow.com/questions/81406/parser-for-c)
|
Is it possible in chrome to make the browser look like a print page
Long story short: I'm working on a huge project. Yes, I have a print.css and it works very nicely. But in the end it is so frustrating to always have to preview the print page.
And even when it works, you cannot inspect the page. Some elements look totally different. I've been overriding them and now have an extra 160 lines in my print.css, but they just keep displaying vertically instead of horizontally.
It would be so nice, and I would be so happy, if Chrome had an extension or something I could use to inspect a print preview, or to make my browser act as if it were printing.
| See this answer, I believe this is what you are looking for.
[Using Chrome's Element Inspector in Print Preview Mode?](https://stackoverflow.com/questions/9540990/using-chromes-element-inspector-in-print-preview-mode)
>
> Chrome v46+:
>
>
> - Open the Developer Tools (CTRL+SHIFT+I or F12)
> - Click the Toggle device mode button in the left top corner (1).
> - Make sure the console is shown by clicking the menu button (2) > Show console (3) or pressing the ESC key to toggle the console (only
> works when Developer Toolbar has the focus).
> - Open the Emulation (4) > Media (5) tabs, check CSS media and select print (3).
>
>
> [![enter image description here](https://i.stack.imgur.com/Gyoil.png)](https://i.stack.imgur.com/Gyoil.png)
>
>
>
|
Perl: Is this a correct way of creating a unique array?
I'm trying to create a unique array regardless of its original order and using no modules; this is what I've come up with so far:
```
my @arr = qw(b a a c d g e f);
my %hash;
@hash{@arr}=();
say keys %hash;
```
| Yes. Since hash keys are unique, this is one idiomatic way to do it. There are many ways to accomplish the same thing.
You may also use a module, such as [List::MoreUtils](http://search.cpan.org/perldoc?List%3a%3aMoreUtils)
```
use strict;
use warnings;
use List::MoreUtils qw(uniq);
print join ":", uniq qw(a a a b b c d);
```
**Output:**
```
a:b:c:d
```
Some different ways to dedupe:
```
my @arr = keys { map { $_ => 1 } qw(b a a c d g e f) };
```
The curly braces create an anonymous hash for `keys`; the map statement creates a list of key/value pairs.
---
```
my @arr = dedupe(qw(a a b c d d e));

sub dedupe {
    my %hash = map { $_ => 1 } @_;
    return keys %hash;
}
```
Same thing, but in subroutine form, and split into two lines. Note that both lists will be in a semi-random order, since hashes are unordered.
The subroutine used by `List::MoreUtils` is equally simple, and perhaps preferable, since it will preserve the order of arguments. It still uses a hash, though.
```
sub uniq {
    my %seen = ();
    grep { not $seen{$_}++ } @_;
}
```
|
dict.update overwrites existing keys, how to avoid?
When using the update function to merge two dictionaries in Python, any keys the two dictionaries share are apparently overwritten.
A simple example:
```
simple_dict_one = {'name': "tom", 'age': 20}
simple_dict_two = {'name': "lisa", 'age': 17}
simple_dict_one.update(simple_dict_two)
```
After the dicts are merged the following dict remains:
```
{'age': 17, 'name': 'lisa'}
```
So if you have the same key in both dicts, only one value remains (apparently the last one).
If I have a lot of names from several sources, I would probably want a temp dict from each of those and then want to add it to a bigger dict holding everything.
Is there a way to merge two dicts and still keep all the keys? I guess you are only supposed to have one unique key, but then how would I merge two dicts without losing data?
|
>
> Well i have several sources i gather information from, for example an
> ldap database and other sources where i have python functions that
> create a temp dict each but i want a complete dict at the end that
> sort of concatenates or displays all information gathered from all the
> sources.. so i would have one dict holding all the info
>
>
>
What you are trying to do with the 'merging' is not quite ideal. As you said yourself
>
> I guess you are only suppose to have one unique key
>
>
>
Which makes it relatively and unnecessarily hard to gather all your information in one dict.
What you could do, instead of calling `.update()` on the existing dict, is add a sub-dict. Its key could be the name of the source from which you gathered the information. The value could be the dict you receive from the source, and if you need to store more than 1 dict of the same source you can store them in a list.
Example
```
>>> data = {}
>>> person_1 = {'name': 'lisa', 'age': 17}
>>> person_2 = {'name': 'tom', 'age': 20}
>>> data['people'] = [person_1, person_2]
>>> data
{'people': [{'age': 17, 'name': 'lisa'}, {'age': 20, 'name': 'tom'}]}
```
Then whenever you need to add newly gathered information, you just add a new entry to the `data` dict
```
>>> ldap_data = {'foo': 1, 'bar': 'baz'} # just some dummy data
>>> data['ldap_data'] = ldap_data
>>> data
{'people': [{'age': 17, 'name': 'lisa'}, {'age': 20, 'name': 'tom'}],
'ldap_data': {'foo': 1, 'bar': 'baz'}}
```
The source-specific data is easily extractable from the `data` dict
```
>>> data['people']
[{'age': 17, 'name': 'lisa'}, {'age': 20, 'name': 'tom'}]
>>> data['ldap_data']
{'foo': 1, 'bar': 'baz'}
```
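If you nevertheless need one flat dict in which colliding keys keep every value, a sketch using `collections.defaultdict`:
```
>>> from collections import defaultdict
>>> merged = defaultdict(list)
>>> for source in (simple_dict_one, simple_dict_two):
...     for key, value in source.items():
...         merged[key].append(value)
...
>>> dict(merged)
{'name': ['tom', 'lisa'], 'age': [20, 17]}
```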
|
Is using std::optional as efficient as using int?
I have a quad-/octree data structure. I'm storing the child indexes/pointers of a cell in an array. Each position in the array represents the location of a child with respect to its parent, e.g. in 2D:
```
// _____________
// | | |
// | 2 | 3 |
// |_____|_____|
// | | |
// | 0 | 1 |
// |_____|_____|
// for each cell, 4 children are always stored in row-major order
std::vector<std::array<Integer,4>> children;
```
I know that the max number of children is a subset of the values that an `Integer` type can represent. Thus I can identify whether a cell is missing a child by using a "magic" value like `-1` for `Integer = int`, or `std::numeric_limits<unsigned>::max()` for `Integer = unsigned`. This is something that `std::optional<Integer>` cannot assume.
As far as I understand, this usage of magic values is one of the raisons d'être of `std::optional`. Still, I'm worried about the performance of `std::vector<std::optional<int>>` in inner loops.
So,
- Will the performance of `std::vector<std::optional<int>>` be worse than that of `std::vector<int>`? (I'm already doing the comparison for "non-existent" value).
- Or, can the implementation of `std::optional` be optimized to offer the same performance as a raw `int`? And how?
Mixing `std::optional` in the return type of my functions and magic values in my data structure sounds like a very bad idea. I prefer to be consistent and either use one or the other (at least within the same context). Although I could overload the function that performs the comparison with the magic number:
```
template <typename T> bool is_valid(const T& t) {
    return /* comparison with magic value for t */;
}
```
for optional types.
| `std::optional` is going to require additional storage and fit fewer values into cache (it appears you already know the reason for this).
I don't think it's wrong to have a different value stored internally in your data structure from the one exposed by the public API, as long as the internal representation is completely hidden from users.
Furthermore, I suggest you isolate the magic number into a single pair of `inline` conversion functions.
The compiler should help you remember to use the conversion functions consistently, by generating type errors if you forget. You might even use a thin struct wrapper for an `int` in your internal data structure, to ensure that no implicit conversion exists (or define a user-defined conversion).
```
class CompressedOptionalUInt
{
    static const unsigned SENTINEL_MISSING = std::numeric_limits<unsigned>::max();
    unsigned value;

public:
    CompressedOptionalUInt(std::optional<unsigned> val) : value(!val ? SENTINEL_MISSING : *val) {}

    operator std::optional<unsigned>() const
    {
        // map the sentinel back to an empty optional
        return value == SENTINEL_MISSING ? std::optional<unsigned>{} : std::optional<unsigned>{value};
    }
};
```
and then use `std::array<CompressedOptionalUInt, 4>`.
Making that into a template, with just the sentinel needing to be defined for each type, should be pretty straightforward.
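For illustration, a sketch of that template (the names are mine):
```
#include <limits>
#include <optional>

template <typename T, T Sentinel>
class CompressedOptional
{
    T value;

public:
    CompressedOptional(std::optional<T> val) : value(!val ? Sentinel : *val) {}

    operator std::optional<T>() const
    {
        return value == Sentinel ? std::optional<T>{} : std::optional<T>{value};
    }
};

using CompressedOptionalUInt =
    CompressedOptional<unsigned, std::numeric_limits<unsigned>::max()>;
```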
|
How does the integer overflow work in C?
I am a bit confused about how arithmetic calculations are handled in a fixed-point environment.
Consider following lines of code:
```
/* unsigned short is 16 bit.*/
unsigned short x = 1000;
unsigned short res;
/* Case1: The following yields correct result in res */
res = (x*544/100);
/* Case2: The following yields wrong result in res*/
res = (x*544); /* expected overflow here */
res = res/100;
```
So, my question is:
I can see why case 2 yields wrong result. But
- What is it that the compiler does in case 1 that yields the correct result?
- Isn't the arithmetic operation essentially the same thing in case 1? Except, it's split into two statements?
- Can I expect a different behavior from a different compiler?
| This is due to the usual arithmetic conversion being applied to the operands of multiplication and then to division, which causes the *short* to be promoted to a larger integer type for the purpose of the calculation and then converted to *short* on assignment.
The [draft C99 standard](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf) in section `6.5.5` *Multiplicative operators* says:
>
> The usual arithmetic conversions are performed on the operands.
>
>
>
We need to also note that the *integer constants*, `544` and `100` will have type *int*, we can find the details on why in the question [what are default integer values?](https://stackoverflow.com/questions/21382916/what-are-default-integer-values).
We then can go to section `6.3.1.8` *Usual arithmetic conversions* and we end up at the paragraph that says:
>
> Otherwise, the integer promotions are performed on both operands. Then the
> following rules are applied to the promoted operands:
>
>
>
and we end up at the following rule:
>
> Otherwise, if the type of the operand with signed integer type can
> represent all of the values of the type of the operand with unsigned
> integer type, then the operand with unsigned integer type is converted
> to the type of the operand with signed integer type.
>
>
>
So the result of the calculation is an *int*.
Using the `-Wconversion` flag, `gcc` (but surprisingly not `clang`) produces a warning:
```
warning: conversion to 'short unsigned int' from 'int' may alter its value [-Wconversion]
res = (x*544/100);
^
```
This leads to what you term the *correct* result in the first case, since all the calculations are done as *int*. In your second case, you lose the intermediate result from the multiplication, since you assign it back to `res` and the value is converted to one that fits into a *short*.
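To make the second case behave like the first, keep the intermediate result in the promoted type. A small sketch:
```
#include <stdio.h>

int main(void) {
    unsigned short x = 1000;
    int tmp = x * 544;               /* operands promoted to int: 544000 fits */
    unsigned short res = tmp / 100;  /* 5440 fits into 16 bits */
    printf("%u\n", (unsigned)res);   /* prints 5440 */
    return 0;
}
```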
|
Is there a canonical definition of SPA or is SPA a broadly agreed-on architecture with fuzzy edges?
Is there a *canonical definition* of **SPA** which would exclude the software architecture model described below?
I'm working on an app with a new web-architecture model (new to me, at any rate) which has features that differentiate it from a **Single Page Application (SPA)** as the term is conventionally understood.
Since the model uses *server-side and client-side variables which always automatically mirror each other*, I would like, at least provisionally, to give the model a label something like **Reflective SPA** (or **veSPA** for short) but I'm concerned that due to its reliance on server-side processes, it may not qualify as an **SPA** at all and, as such, this name would be misleading and/or nonsensical.
Towards the end of the [Wikipedia entry on Single Page Applications](https://en.wikipedia.org/wiki/Single-page_application) there is a statement:
>
> A SPA is fully loaded in the initial page load and then page regions
> are replaced or updated with new page fragments loaded from the server
> on demand. To avoid excessive downloading of unused features, a SPA
> will often progressively download more features as they become
> required, either small fragments of the page, or complete screen
> modules.
>
>
>
I'd strongly subscribe to this as a conventional definition of an **SPA**. This is absolutely what I think of when I read the acronym **SPA**.
---
What differentiates **veSPA** is that *instead of*:
>
> *page regions are replaced or updated with new page fragments loaded from the server on demand*
>
>
>
**veSPA** repeatedly responds to user interactions by updating the `queryString`, either via:
**1.** **Updating the URL** (including the `queryString`), using:
- `window.history.pushState({}, document.title, 'https://example.com/' + queryString);` **(Javascript)**
**2.** **Reloading the URL** (using a new `queryString`), using:
- `window.location.href = 'https://example.com/?' + queryString;` **(Javascript)**
**3.** **Redirecting the URL Request** (using a new `queryString`) at server level, using:
- `header('Location: https://example.com/'.$Query_String);` **(PHP)**
When the page is actually reloaded, various *custom-data* attributes in the root HTML element are populated from the `queryString`, using the super global variable `$_GET` in **PHP**.
When the URL `queryString` is updated, the same *custom-data* attributes in the root HTML element are populated from the `queryString`, using `new URLSearchParams(window.location.search)` in **Javascript**.
Either way - and this is *the most important culmination* of everything described above - the app-view is ***then*** rendered **via CSS** from the values in the *custom-data* attributes in the root HTML element.
Does this repeated use of page-reloads and server-side **PHP** (described above) mean this model is *too differentiated* from **SPA** (as conventionally understood) to have the term **SPA** meaningfully applied to it (or to use **SPA** in its name)?
Is there a canonical definition of **SPA** which would exclude the model described above?
| There is no canonical definition of a single page application, since there is no governing body that defined it. Instead, it is a name that got applied to web applications that exhibit a number of characteristics about how client and server interact.
- **Reduced or eliminated page reloads:** this is the quintessential difference between traditional web apps and SPAs. The user does not navigate away from the page, or does so infrequently, when they perform an action.
- **Rendering logic is pushed to the client:** HTML is rendered on the server in traditional web apps. This logic is written in JavaScript and is executed on the client in SPAs.
- **Service-oriented or micro services architecture on the server:** with clients responsible for rendering logic, server logic is reduced, and the data exchange format is most often changed to JSON. Web API end points are exposed as data services.
- **More complex client side architecture:** with more logic on the client, JavaScript code tends to adhere to one of the MVC or MVVM style architectures in order to promote testability and organization.
Your application does not need all of these attributes. In fact, it may have more than exist in this list. The definition of a single page application is not exact, so don't get too hung up on it. If you use a client side framework like ReactJS or AngularJS, then you could be creating an SPA — if it has the characteristics above. Then again, you can have an SPA without using one of those frameworks if your application has the characteristics above.
The definition is in the behavior and architectural style — how client and server interact — rather than the specific code you write or the frameworks you use.
>
> Since the model includes both server-side processes and client-side processes, ... I'm concerned that due to its reliance on server-side processes, it may not qualify as an SPA at all...
>
>
>
It is common for both server and client to have models. Client side models tend to have more data than logic. Server side models tend to have more business logic. What you describe does not eliminate "SPA" as a description of your application.
>
> Does this use of page-reloads and PHP (described above) mean this model is too differentiated from SPA (as conventionally understood) to have the term SPA meaningfully applied to it (or to use SPA in its name)?
>
>
>
The answer is "it depends." If every user interaction, or a majority of interactions, causes a page reload, then your application is not an SPA. Page reloads are common in SPAs when the user navigates from one major "application module" to another. Here again we have some fuzziness. What constitutes an application module largely depends on the application and the business functions it encapsulates.
|
Create Nested Dictionary From Flat Dictionary
I have the following dictionary in which `keys` are parent classes and `values` are a list of child classes which inherit from them.
```
{
    "Animal": ["Dog"],
    "Dog": ["Labrador"],
    "Vehicle": ["PetrolCar", "DieselCar"],
    "DieselCar": ["Hyundai"],
    "PetrolCar": ["Hyundai", "Ford"]
}
```
As you can see, some of the parent classes are also children of another parent class (deep inheritance), i.e. `Animal -> Dog -> Labrador`
How can I format this so that the output represents the levels of inheritance, something like this:
```
{
"Animal": {
"Dog": {
"Labrador": []
}
},
"Vehicle": {
"PetrolCar": {
"Hyundai": [],
"Ford": []
},
"DieselCar": {
"Hyundai": []
}
}
}
```
---
I also want to be able to extend the provided dataset by adding more parents or children. For example: adding `ElectricCar` as a child of `Vehicle` and `Tesla` as a child of `ElectricCar`, and adding `Cat` as a child of `Animal`, with no children of its own.
Input:
```
{
    "Animal": ["Dog", "Cat"],
    "Dog": ["Labrador"],
    "Vehicle": ["PetrolCar", "DieselCar", "ElectricCar"],
    "DieselCar": ["Hyundai"],
    "PetrolCar": ["Hyundai", "Ford"],
    "ElectricCar": ["Tesla"]
}
```
Output:
```
{
"Animal": {
"Dog": {
"Labrador": []
},
"Cat": []
},
"Vehicle": {
"PetrolCar": {
"Hyundai": [],
"Ford": []
},
"DieselCar": {
"Hyundai": []
},
"ElectricCar": {
"Tesla": []
}
}
}
```
| You can use recursion to produce the nested dictionary, and then filter the top-level keys so that only the roots of the hierarchy remain:
```
data = {'Animal': ['Dog', 'Cat'], 'Dog': ['Labrador'], 'Vehicle': ['PetrolCar', 'DieselCar', 'ElectricCar'], 'DieselCar': ['Hyundai'], 'PetrolCar': ['Hyundai', 'Ford'], 'ElectricCar': ['Tesla']}

def build(key):
    return {i: [] if i not in data else build(i) for i in data[key]}

results = {i: build(i) for i in data}
```
---
```
import json
print(json.dumps({a:b for a, b in results.items() if any(h for h in b.values())}, indent=4))
```
Output:
```
{
"Animal": {
"Dog": {
"Labrador": []
},
"Cat": []
},
"Vehicle": {
"PetrolCar": {
"Hyundai": [],
"Ford": []
},
"DieselCar": {
"Hyundai": []
},
"ElectricCar": {
"Tesla": []
}
}
}
```
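Note that the final filter works here because each non-root class has only leaf children while each root has at least one deeper subtree. A sketch of a more direct filter keeps exactly the classes that never appear as another class's child:
```
children = {child for kids in data.values() for child in kids}
results = {root: build(root) for root in data if root not in children}
```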
|
How to get the large picture from feed with graph api?
When loading the Facebook feed from a page, if a picture exists in the feed, I want to display the large picture.
How can I get it with the `graph API`? The picture link in the feed is not the large one.
Thanks.
| This is a new method to get a big image. It was created after the previous method stopped working:
```
/**
 * Returns a big image URL from Facebook.
 * Works only for posts of type PHOTO.
 * @param picture the picture URL from the feed
 * @param link whether the post is of type link
 * @return url of image
 */
@Transactional
public String getBigImageByFacebookPicture(String picture, Boolean link) {
    if (link && picture.contains("url=http")) {
        String url = picture.substring(picture.indexOf("url=") + 4);
        try {
            url = java.net.URLDecoder.decode(url, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            StringBuffer sb = new StringBuffer("Big image for Facebook link not found: ");
            sb.append(link);
            loggerTakePost.error(sb.toString());
            return null;
        }
        return url;
    } else {
        try {
            Document doc = Jsoup.connect(picture).get();
            return doc.select("#fbPhotoImage").get(0).attr("src");
        } catch (Exception e) {
            StringBuffer sb = new StringBuffer("Big image for Facebook link not found: ");
            sb.append(link);
            loggerTakePost.error(sb.toString());
            return null;
        }
    }
}
```
Enjoy your large image :)
|
Difference between typedef and C++11 type alias
I'm reading template aliases here: <http://en.cppreference.com/w/cpp/language/type_alias>
And I'm wondering, even though it's addressed on the very first line of the page I linked, what's the difference between a typedef and a type alias (`using mytype = T;`)?
Aren't they interchangeable?
| **There is absolutely no difference between the two.**
If you take a look at the standard :
>
> **7.1.3 The typedef specifier [dcl.typedef ]**
>
>
> A *typedef-name* can also be introduced by an *alias-declaration*. The *identifier* following the `using` keyword becomes a *typedef-name*. **It has the same semantics as if it were introduced by the `typedef` specifier.** In particular, it does not define a new type and it shall not appear in the *type-id*.
>
>
> **7.3.3 The using declaration [namespace.udecl]**
>
>
> If a *using-declaration* uses the keyword *typename* and specifies a dependent name (14.6.2), the name introduced by the *using-declaration* is treated as a *typedef-name*.
>
>
>
---
However from this page : <http://en.cppreference.com/w/cpp/language/type_alias>
It is said :
>
> Type aliases are similar to [typedefs](http://en.cppreference.com/w/cpp/language/typedef), **however, have the advantage of working with templates.**
>
>
>
It seems that this
```
// template type alias
template<class T> using ptr = T*;
// the name 'ptr<T>' is now an alias for pointer to T
ptr<int> x;
```
is only possible with the `using` syntax (an alias template).
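For comparison, the closest `typedef`-based equivalent is the classic workaround of nesting the typedef inside a struct, at the cost of spelling out `::type` at every use:
```
// the pre-C++11 "template typedef" workaround
template<class T>
struct ptr_t {
    typedef T* type;
};

ptr_t<int>::type x; // pointer to int
```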
---
And do not forget that this is a C++11 feature. Some compilers do not support it yet.
|
Create table and query json data using Amazon Athena?
I want to query JSON data of the following format using Amazon Athena:
```
[{"id":"0581b7c92be",
"key":"0581b7c92be",
"value":{"rev":"1-ceeeecaa040"},
"doc":{"_id":"0581b7c92be497d19e5ab51e577ada12","_rev":"1ceeeecaa04","node":"belt","DeviceId":"C001"}},
{"id":"0581b7c92be49",
"key":"0581b7c92be497d19e5",
"value":{"rev":"1-ceeeecaa04031842d3ca"},
"doc":{"_id":"0581b7c92be497","_rev":"1ceeeecaa040318","node":"belt","DeviceId":"C001"}
}
]
```
| Athena DDL is based on Hive, so you will want each JSON object in your array to be on a separate line:
```
{"id": "0581b7c92be", "key": "0581b7c92be", "value": {"rev": "1-ceeeecaa040"}, "doc": {"_id": "0581b7c92be497d19e5ab51e577ada12", "_rev": "1ceeeecaa04", "node": "belt", "DeviceId": "C001"} }
{"id": "0581b7c92be49", "key": "0581b7c92be497d19e5", "value": {"rev": "1-ceeeecaa04031842d3ca"}, "doc": {"_id": "0581b7c92be497", "_rev": "1ceeeecaa040318", "node": "belt", "DeviceId": "C001"} }
```
You might have problems with the nested fields ("value", "doc"), so if you can flatten the JSON you will have an easier time. (See for example: [Hive for complex nested Json](https://stackoverflow.com/questions/23220759/hive-for-complex-nested-json).)
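For reference, a sketch of matching DDL (the table name and S3 location are placeholders, and as noted above the nested struct fields may give you trouble):
```
CREATE EXTERNAL TABLE IF NOT EXISTS docs (
  id string,
  key string,
  value struct<rev:string>,
  doc struct<_id:string, _rev:string, node:string, deviceid:string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://your-bucket/your-prefix/';
```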
|
Executing specific testng group using build.gradle
I have checked following questions but none of them helped -
[Gradle + TestNG Only Running Specified Group](https://stackoverflow.com/questions/28772744/gradle-testng-only-running-specified-group)
[Gradle command syntax for executing TESTNG tests as a group](https://stackoverflow.com/questions/49563007/gradle-command-syntax-for-executing-testng-tests-as-a-group)
The project I am using is available at - <https://github.com/tarun3kumar/gradle-demo>
It is a standard Maven project and I am not using a testng.xml file.
The test method `com.org.corpsite.LandingPageTest` is grouped as `smoke`.
I am running the test as `gradle clean test` and the test is executed. The test fails for a genuine reason; let's ignore that.
Then I passed the test group from the command line as -
`gradle clean test -P testGroups='doesnotexist'`
Notice that 'doesnotexist' is not a valid group but it still executes the test.
Following this I added `includeGroups` in `build.gradle` as -
```
test {
    useTestNG() {
        includeGroups 'smoke'
    }
}
```
and now `gradle clean test -P testGroups='doesnotexist'` fails with an NPE in one of the Java classes - `java.lang.NullPointerException at com.org.pageobjects.BasePage.findElements(BasePage.java:24)`
Questions -
1. What is the right flag to specify a test group from the command line? It seems `-P` is wrong, else `gradle clean test -P testGroups='doesnotexist'` would not execute the test.
2. What is wrong with specifying `includeGroups 'smoke'`?
I am using `Gradle 5.1` on macbook pro
| Here is the set of things that need to be done to get this to work.
1. You need to add the attribute `alwaysRun=true` to your `@BeforeMethod` and `@AfterMethod` annotations from your base class `com.org.core.SelTestCase`. This is to ensure that TestNG executes these configuration methods all the time irrespective of what group is chosen.
2. Alter the `test` task in your `build.gradle` to look like below:
```
test {
    def groups = System.getProperty('groups', 'smoke')
    useTestNG() {
        includeGroups groups
    }
}
```
This ensures that we try to extract the JVM argument `groups` value. If it's not specified, we default to `smoke`.
We now execute the tests by specifying the groups needed using the below command:
```
./gradlew clean test --info -Dgroups=smoke
```
Now if we execute the below command, you would notice that no tests are executed.
```
./gradlew clean test --info -Dgroups=smoke1
```
Here's a patch that you can apply to your project
```
From 25133a5d2a0f96d4a305f34e1f5a17e70be2bb54 Mon Sep 17 00:00:00 2001
From: Krishnan Mahadevan <krishnan.mahadevan@stackoverflow.com>
Date: Mon, 14 Jan 2019 22:38:27 +0530
Subject: [PATCH] Fixing the bug
---
build.gradle | 2 ++
src/main/java/com/org/core/SelTestCase.java | 5 +++--
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/build.gradle b/build.gradle
index 10ba91d..2d08991 100644
--- a/build.gradle
+++ b/build.gradle
@@ -38,7 +38,9 @@ task smokeTests(type: Test) {
}*/
test {
+ def groups = System.getProperty('groups', 'smoke')
useTestNG() {
+ includeGroups groups
}
}
diff --git a/src/main/java/com/org/core/SelTestCase.java b/src/main/java/com/org/core/SelTestCase.java
index 80cad09..651529a 100644
--- a/src/main/java/com/org/core/SelTestCase.java
+++ b/src/main/java/com/org/core/SelTestCase.java
@@ -22,7 +22,7 @@ public class SelTestCase {
private WebDriver webDriver;
- @BeforeMethod
+ @BeforeMethod(alwaysRun = true)
@Parameters({"browser", "url"})
public void setUp(@Optional("firefox") String browser, @Optional("https://www.google.com/") String URL) {
switch (browser) {
@@ -40,8 +40,9 @@ public class SelTestCase {
webDriver.get(URL);
}
- @AfterMethod
+ @AfterMethod(alwaysRun = true)
public void tearDown() {
webDriver.quit();
}
+
}
--
2.20.1
```
You can save the above contents to a file, say `mypatch.patch`, and then apply the patch using the instructions detailed in [this StackOverFlow post](https://stackoverflow.com/a/2250170).
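For instance, since the patch is in `git format-patch` style, something like this from the repository root should do it (a sketch; use `git am mypatch.patch` instead if you also want the commit message preserved):
```
git apply mypatch.patch
```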
|
Iterating an HTMLCollection object using for-of loop
I'm using [babel-polyfill](https://babeljs.io/docs/usage/polyfill/) and I'm trying to iterate an [`HTMLCollection`](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCollection) object using for-of loop:
```
const elements = document.getElementsByClassName('some-class')
for (const element of elements) {
console.log(element)
}
```
It's not working. I'm getting an error `elements[Symbol.iterator] is not a function`. How to make it work correctly?
| From ["Iterable DOM collections" on the core-js GitHub page](https://github.com/zloirock/core-js#iterable-dom-collections):
>
> Some DOM collections should have [iterable
> interface](https://heycam.github.io/webidl/#idl-iterable) or should be
> [inherited from
> `Array`](https://heycam.github.io/webidl/#LegacyArrayClass). That mean
> they should have `keys`, `values`, `entries` and `@@iterator` methods
> for iteration. So add them. Module
> [`web.dom.iterable`](https://github.com/zloirock/core-js/blob/v2.4.1/modules/web.dom.iterable.js):
>
>
>
> ```
> {
> NodeList,
> DOMTokenList,
> MediaList,
> StyleSheetList,
> CSSRuleList
> }
> #values() -> iterator
> #keys() -> iterator
> #entries() -> iterator
> #@@iterator() -> iterator (values)
>
> ```
>
>
As you can see, that list doesn't include `HTMLCollection`. In order to be able to use for-of loop with `HTMLCollection`, you have to manually assign `Array.prototype.values` to `HTMLCollection.prototype[Symbol.iterator]`. See this example:
```
HTMLCollection.prototype[Symbol.iterator] = Array.prototype.values
for (const element of document.getElementsByTagName('a')) {
console.log(element.href)
}
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/core-js/2.4.1/core.min.js"></script>
<a href="//www.google.com">Google</a>
<a href="//www.github.com">GitHub</a>
```
Alternatively, you can just use [`document.querySelectorAll()`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll), which returns a `NodeList` object.
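For example, a minimal sketch of that alternative (the class name is taken from the question; `NodeList` iteration is covered by the core-js module quoted above):
```
for (const element of document.querySelectorAll('.some-class')) {
  console.log(element)
}
```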
|
Swift : OverlayView that appears at the bottom of the screen
I'd like to create a very common effect as in the picture below :
[![view that appears at the bottom of the screen](https://i.stack.imgur.com/26FBM.png)](https://i.stack.imgur.com/26FBM.png)
Explanation: the effect I'd like to accomplish consists of a view that appears (slides in) at the bottom of the screen when the user clicks a button: you can still see the screen behind this view, but it applies a "dark layer" (a black view with, let's say, 60% opacity) on top of it. When the user clicks on Block or Report abuse (as in this example) it would perform the respective actions. Now when the user clicks on Cancel, and also when he clicks anywhere on the "dark layer", it would bring him back to the screen.
What I tried: presenting a new view controller (but it would use more data than necessary), and using an overlay layer, but I didn't even get close to the effect we usually see in apps. I'm not sure, but I'd say that the best way to get that effect is using views?
Does anyone have an idea please ?
Thanks and have a good day,
J
| You are looking for a native element called `UIActionSheet`. It has become part of `UIAlertController`, and does exactly what you are looking for.
Here is a little help on how to set one up:
```
// Create you actionsheet - preferredStyle: .actionSheet
let actionSheet = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
// Create your actions - take a look at different style attributes
let reportAction = UIAlertAction(title: "Report abuse", style: .default) { (action) in
// observe it in the buttons block, what button has been pressed
print("didPress report abuse")
}
let blockAction = UIAlertAction(title: "Block", style: .destructive) { (action) in
print("didPress block")
}
let cancelAction = UIAlertAction(title: "Cancel", style: .cancel) { (action) in
print("didPress cancel")
}
// Add the actions to your actionSheet
actionSheet.addAction(reportAction)
actionSheet.addAction(blockAction)
actionSheet.addAction(cancelAction)
// Present the controller
self.present(actionSheet, animated: true, completion: nil)
```
It should produce the following output:
[![enter image description here](https://i.stack.imgur.com/qDWgZ.png)](https://i.stack.imgur.com/qDWgZ.png)
|
UIPageViewController, how do I correctly jump to a specific page without messing up the order specified by the data source?
I've found a few questions about how to make a `UIPageViewController` jump to a specific page, but I've noticed an added problem with jumping that none of the answers seem to acknowledge.
Without going into the details of my iOS app (which is similar to a paged calendar), here is what I'm experiencing. I declare a `UIPageViewController`, set the current view controller, and implement a data source.
```
// end of the init method
pageViewController = [[UIPageViewController alloc]
initWithTransitionStyle:UIPageViewControllerTransitionStyleScroll
navigationOrientation:UIPageViewControllerNavigationOrientationHorizontal
options:nil];
pageViewController.dataSource = self;
[self jumpToDay:0];
}
//...
- (void)jumpToDay:(NSInteger)day {
UIViewController *controller = [self dequeuePreviousDayViewControllerWithDaysBack:day];
[pageViewController setViewControllers:@[controller]
direction:UIPageViewControllerNavigationDirectionForward
animated:YES
completion:nil];
}
- (UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerAfterViewController:(UIViewController *)viewController {
NSInteger days = ((THDayViewController *)viewController).daysAgo;
return [self dequeuePreviousDayViewControllerWithDaysBack:days + 1];
}
- (UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerBeforeViewController:(UIViewController *)viewController {
NSInteger days = ((THDayViewController *)viewController).daysAgo;
return [self dequeuePreviousDayViewControllerWithDaysBack:days - 1];
}
- (UIViewController *)dequeuePreviousDayViewControllerWithDaysBack:(NSInteger)days {
return [[THPreviousDayViewController alloc] initWithDaysAgo:days];
}
```
Edit Note: I added simplified code for the dequeuing method. Even with this blasphemous implementation I have the exact same problem with page order.
The initialization all works as expected. The incremental paging all works fine as well. The issue is that if I ever call `jumpToDay` again, the order gets jumbled.
If the user is on day -5 and jumps to day 1, a scroll to the left will reveal day -5 again instead of the appropriate day 0. This seems to have something to do with how `UIPageViewController` keeps references to nearby pages, but I can't find any reference to a method that would force it to refresh it's cache.
Any ideas?
| [Programming iOS6](http://www.apeth.com/iOSBook/ch19.html#_page_view_controller), by Matt Neuburg documents this exact problem, and I actually found that his solution feels a little better than the currently accepted answer. That solution, which works great, has a negative side effect of animating to the image before/after, and then jarringly replacing that page with the desired page. I felt like that was a weird user experience, and Matt's solution takes care of that.
```
__weak UIPageViewController* pvcw = pvc;
[pvc setViewControllers:@[page]
direction:UIPageViewControllerNavigationDirectionForward
animated:YES completion:^(BOOL finished) {
UIPageViewController* pvcs = pvcw;
if (!pvcs) return;
dispatch_async(dispatch_get_main_queue(), ^{
[pvcs setViewControllers:@[page]
direction:UIPageViewControllerNavigationDirectionForward
animated:NO completion:nil];
});
}];
```
|
perforce connect history of two different files
I have a problem, in a refactoring attempt I have copied files from one place to another and added them in my scm (perforce). When I was done and everything was working I deleted the old (moved) files.
Can I connect the file histories with each other? The best would be to see the "move" as if it had been done properly.
Thankful for any help!
| Suppose your original file is `//source/old/file.c#5` and you moved it to `//source/new/file.c`, then deleted the old file in revision `//source/old/file.c#6`. You need to integrate from the old file to the new file, using the `-i` flag so Perforce will allow you to integrate between two files that it doesn't otherwise know of an integration history:
```
p4 integrate -i //source/old/file.c#5 //source/new/file.c
```
then resolve the files. Normally while integrating you'll want to accept a merged version of the file, but in this case you're mostly interested in letting Perforce know you already did the integration, so you can use `-ay` to "accept yours", discarding the old version of the file:
```
p4 resolve -ay //source/new/file.c
```
then submit the revision.
(Ideally you would have integrated first, then made any changes, and submitted everything, but this way the files will be linked in Perforce's integration history.)
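Putting it together, a minimal sketch of the whole sequence (file paths from the example above; the `-d` description text is just an illustration):
```
p4 integrate -i //source/old/file.c#5 //source/new/file.c
p4 resolve -ay //source/new/file.c
p4 submit -d "Link history of moved file.c"
```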
|
SwiftUI: Remove 'Focus Ring' Highlight Border from macOS TextField
I used the below code to create a custom search bar in SwiftUI. It works great on iOS / Catalyst:
[![SearchTextView on iOS / Catalyst](https://i.stack.imgur.com/XnD8z.png)](https://i.stack.imgur.com/XnD8z.png)
...but when running natively on macOS, the 'focus ring' highlighted border styling (when the user selects the text field) rather ruins the effect:
[![SearchTextView on Native macOS](https://i.stack.imgur.com/M56Bm.png)](https://i.stack.imgur.com/M56Bm.png)
Using `.textFieldStyle(PlainTextFieldStyle())` has removed most of the default styling from the underlying field (which I believe is an `NSTextField`), but *not* the focus ring.
Is there a way to remove this too? I tried creating a custom `TextFieldStyle` and applying that, but couldn't find any modifier to style that border.
```
public struct SearchTextView: View {
@Binding var searchText: String
#if !os(macOS)
private let backgroundColor = Color(UIColor.secondarySystemBackground)
#else
private let backgroundColor = Color(NSColor.controlBackgroundColor)
#endif
public var body: some View {
HStack {
Spacer()
#if !os(macOS)
Image(systemName: "magnifyingglass")
#else
Image("icons.general.magnifyingGlass")
#endif
TextField("Search", text: self.$searchText)
.textFieldStyle(PlainTextFieldStyle())
.foregroundColor(.primary)
.padding(8)
Spacer()
}
.foregroundColor(.secondary)
.background(backgroundColor)
.cornerRadius(12)
.padding()
}
public init(searchText: Binding<String>) {
self._searchText = searchText
}
}
```
| As stated in an answer by [Asperi](https://stackoverflow.com/users/12299030/asperi) to a similar question [here](https://stackoverflow.com/a/60286113/2272431), it's not (yet) possible to turn off the focus ring for a specific field using SwiftUI; however, **the following workaround will disable the focus ring for all `NSTextField` instances in the app**:
```
extension NSTextField {
open override var focusRingType: NSFocusRingType {
get { .none }
set { }
}
}
```
If you want to replace this with your own custom focus ring within the view, the `onEditingChanged` parameter can help you achieve this (see below example); however, it's unfortunately called on macOS when the user types the first letter, not when they first click on the field (which isn't ideal).
In theory, you could use the `onFocusChange` closure in the `focusable` modifier instead, but that doesn't appear to get called for these macOS text fields currently (as of macOS 10.15.3).
```
public struct SearchTextView: View {
@Binding var searchText: String
@State private var hasFocus = false
#if !os(macOS)
private var backgroundColor = Color(UIColor.secondarySystemBackground)
#else
private var backgroundColor = Color(NSColor.controlBackgroundColor)
#endif
public var body: some View {
HStack {
Spacer()
#if !os(macOS)
Image(systemName: "magnifyingglass")
#else
Image("icons.general.magnifyingGlass")
#endif
TextField("Search", text: self.$searchText, onEditingChanged: { currentlyEditing in
self.hasFocus = currentlyEditing // If the editing state has changed to be currently edited, update the view's state
})
.textFieldStyle(PlainTextFieldStyle())
.foregroundColor(.primary)
.padding(8)
Spacer()
}
.foregroundColor(.secondary)
.background(backgroundColor)
.cornerRadius(12)
.border(self.hasFocus ? Color.accentColor : Color.clear, width: self.hasFocus ? 3 : 0)
.padding()
}
public init(searchText: Binding<String>) {
self._searchText = searchText
}
}
```
|
Is there a way for PHP to validate an SQL syntax without executing it?
I would like to build a PHP script that will validate an SQL query, but does not execute it. Not only should it validate syntax, but should, if possible, let you know if the query can be executed given the command that is in the query. Here's Pseudocode of what I would like it to do:
```
<?php
//connect user
//connect to database
//v_query = $_GET['usrinput'];
if(validate v_query == true){
echo "This query can be executed";
}
else{
echo "This query can't be executed because the table does not exist.";
}
//disconnect
?>
```
Something like this. I want it to simulate the query without it executing it. That's what I want and I can't find anything on this.
An example of why we wouldn't want the query to be executed is if the query adds something to a database. We just want it to simulate it without modifying the database.
Any links or examples would be greatly appreciated!
| From MySQL 5.6.3 on you can use EXPLAIN for most queries
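For example, the following sketch (hypothetical table) parses and validates the statement without actually inserting anything:
```
EXPLAIN INSERT INTO users (username) VALUES ('john');
```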
I made this and it works lovely:
```
function checkMySqlSyntax($mysqli, $query) {
if ( trim($query) ) {
        // Replace characters within string literals that may mess up the process
$query = replaceCharacterWithinQuotes($query, '#', '%') ;
$query = replaceCharacterWithinQuotes($query, ';', ':') ;
// Prepare the query to make a valid EXPLAIN query
// Remove comments # comment ; or # comment newline
// Remove SET @var=val;
// Remove empty statements
// Remove last ;
// Put EXPLAIN in front of every MySQL statement (separated by ;)
$query = "EXPLAIN " .
preg_replace(Array("/#[^\n\r;]*([\n\r;]|$)/",
"/[Ss][Ee][Tt]\s+\@[A-Za-z0-9_]+\s*:?=\s*[^;]+(;|$)/",
"/;\s*;/",
"/;\s*$/",
"/;/"),
Array("","", ";","", "; EXPLAIN "), $query) ;
foreach(explode(';', $query) as $q) {
$result = $mysqli->query($q) ;
$err = !$result ? $mysqli->error : false ;
if ( ! is_object($result) && ! $err ) $err = "Unknown SQL error";
if ( $err) return $err ;
}
return false ;
}
}
function replaceCharacterWithinQuotes($str, $char, $repl) {
if ( strpos( $str, $char ) === false ) return $str ;
$placeholder = chr(7) ;
$inSingleQuote = false ;
$inDoubleQuotes = false ;
$inBackQuotes = false ;
for ( $p = 0 ; $p < strlen($str) ; $p++ ) {
switch ( $str[$p] ) {
           case "'": if ( ! $inDoubleQuotes && ! $inBackQuotes ) $inSingleQuote = ! $inSingleQuote ; break ;
           case '"': if ( ! $inSingleQuote && ! $inBackQuotes ) $inDoubleQuotes = ! $inDoubleQuotes ; break ;
           case '`': if ( ! $inSingleQuote && ! $inDoubleQuotes ) $inBackQuotes = ! $inBackQuotes ; break ;
case '\\': $p++ ; break ;
case $char: if ( $inSingleQuote || $inDoubleQuotes || $inBackQuotes) $str[$p] = $placeholder ; break ;
}
}
return str_replace($placeholder, $repl, $str) ;
}
```
It will return `false` if the query is OK (multiple `;`-separated statements are allowed), or an error message stating the problem if there is a syntax or other MySQL error (like a non-existent table or column).
[PHP Fiddle](http://phpfiddle.org/main/code/gt6p-49m1)
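A hypothetical usage sketch matching the pseudocode from the question (the connection credentials are placeholders):
```
$mysqli = new mysqli('localhost', 'user', 'password', 'database');
$err = checkMySqlSyntax($mysqli, $_GET['usrinput']);
if ( ! $err ) {
    echo "This query can be executed";
} else {
    echo "This query can't be executed: " . $err;
}
```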
KNOWN BUGS:
- MySQL errors with line numbers: the line numbers mostly will not match.
- Does not work for MySQL statements other than SELECT, UPDATE, REPLACE, INSERT, DELETE
|
Upgrade to 16.10 causes desktop backlight flickering
I just upgraded to Ubuntu 16.10 and after the restart the display light flickers constantly, regardless of the application running. The flickering
starts when the login screen is shown. I am using a Lenovo Thinkpad E540.
I have NVIDIA GeForce GT740M with the driver
```
X.Org X server -- Nouveau display driver from xserver-xorg-video-nouveau
```
Other driver options, which I'm not eager to try (since last time the entire graphics display stopped working):
```
NVIDIA binary driver 367.57 (proprietary, tested)
NVIDIA binary driver 340.98 (proprietary)
```
Output of lspci:
```
lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
```
My current kernel version is
```
uname -r
4.8.0-26-generic
```
I have tried this solution [here](https://ubuntuforums.org/showthread.php?t=2243912) but it didn't work.
Does anyone have a workaround for this very annoying issue?
| This bug first appeared in bug reports in Kernel version 4.6.2 and users found downgrading to 4.5.4 fixed it. Upgrading to 4.7 did not fix it.
# Panel Self Refresh (psr) bug
Links to other bug reports say it can be fixed by modifying grub's kernel boot command line with:
```
i915.enable_psr=0
```
To do this you need to `gksu gedit /etc/default/grub`.
Search for `quiet splash` and insert `i915.enable_psr=0` in front of the last double quote. There may be other options but minimally it should look like this:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.enable_psr=0"
```
Save the file and type `sudo update-grub`. Then reboot and the flickering should be gone.
You can read more at: ([LCD flickering on Thinkpad T440p (Haswell) with kernel 4.6-rc4 (PSR enabled)](https://bugs.freedesktop.org/show_bug.cgi?id=95176#c28))
# Frame Buffer Compression
At the same time `psr` was introduced in the `4.6` kernel major revision, `fbc` (Frame Buffer Compression) support was also introduced. It prevents repainting the screen when it doesn't change, an imperceptible power-savings feature (0.06 watts). To turn it off, update the grub kernel command line (as described above) by adding:
```
i915.enable_fbc=0
```
The final solution is to turn off i915 mode setting altogether with the grub kernel command line option:
```
i915.modeset=0
```
Please note these can't be tested on my system and I can only go by bug reports from users with similar systems to yours.
|
Using Rule to Insert Into Secondary Table Auto-Increments Sequence
To automatically add a column in a second table to tie it to the first table via a unique index, I have a rule such as follows:
```
CREATE OR REPLACE RULE auto_insert AS ON INSERT TO user DO ALSO
INSERT INTO lastlogin (id) VALUES (NEW.userid);
```
This works fine if *user.userid* is an integer. However, if it is a sequence (e.g., type **serial** or **bigserial**), what is inserted into table *lastlogin* is the next sequence id. So this command:
```
INSERT INTO user (username) VALUES ('john');
```
would insert column [1, 'john', ...] into *user* but column [2, ...] into *lastlogin*. The following 2 workarounds do work except that the second one consumes twice as many serials since the sequence is still auto-incrementing:
```
CREATE OR REPLACE RULE auto_insert AS ON INSERT TO user DO ALSO
INSERT INTO lastlogin (id) VALUES (lastval());
CREATE OR REPLACE RULE auto_insert AS ON INSERT TO user DO ALSO
INSERT INTO lastlogin (id) VALUES (NEW.userid-1);
```
Unfortunately, the workarounds do not work if I'm inserting multiple rows:
```
INSERT INTO user (username) VALUES ('john'), ('mary');
```
The first workaround would use the same id, and the second workaround is all kind of screw-up.
Is it possible to do this via postgresql rules or should I simply do the 2nd insertion into *lastlogin* myself or use a row trigger? Actually, I think the row trigger would also auto-increment the sequence when I access *NEW.userid*.
| Forget rules altogether. They're **bad**.
Triggers are way better for you, and better in 99% of the cases where someone thinks they need a rule. Try this:
```
create table users (
userid serial primary key,
username text
);
create table lastlogin (
userid int primary key references users(userid),
lastlogin_time timestamp with time zone
);
create or replace function lastlogin_create_id() returns trigger as $$
begin
insert into lastlogin (userid) values (NEW.userid);
return NEW;
end;
$$
language plpgsql volatile;
create trigger lastlogin_create_id
after insert on users for each row execute procedure lastlogin_create_id();
```
Then:
```
insert into users (username) values ('foo'),('bar');
select * from users;
```
```
userid | username
--------+----------
1 | foo
2 | bar
(2 rows)
```
```
select * from lastlogin;
```
```
userid | lastlogin_time
--------+----------------
1 |
2 |
(2 rows)
```
|
how to show some markers on a static image using openlayers 3
I am trying to show some markers on a static image, i.e.
given a static image of a certain size in feet and a set of points in feet, how do I place an image or a marker on the static image using OpenLayers 3?
I understand we have a provision in OpenLayers 3 to use a static image as the base layer of the map.
I am not getting how to show the marker on the static image (base layer) for certain given points on the image.
Any help would be much appreciated; please suggest a way to do it.
I am showing the static image as the map as shown below:
```
<!DOCTYPE html>
<html>
<head>
<title>Static image example</title>
<script src="https://code.jquery.com/jquery-1.11.2.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/ol3/3.6.0/ol.css" type="text/css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/ol3/3.6.0/ol.js"></script>
</head>
<body>
<div class="container-fluid">
<div class="row-fluid">
<div class="span12">
<div id="map" class="map"></div>
</div>
</div>
</div>
<script>
// Map views always need a projection. Here we just want to map image
// coordinates directly to map coordinates, so we create a projection that uses
// the image extent in pixels.
var extent = [0, 0, 1024, 968];
var projection = new ol.proj.Projection({
code: 'xkcd-image',
units: 'pixels',
extent: extent
});
var map = new ol.Map({
layers: [
new ol.layer.Image({
source: new ol.source.ImageStatic({
attributions: [
new ol.Attribution({
html: '© <a href="http://xkcd.com/license.html">xkcd</a>'
})
],
url: 'colorful-triangles-background.jpg',
projection: projection,
imageExtent: extent
})
})
],
target: 'map',
view: new ol.View({
projection: projection,
center: ol.extent.getCenter(extent),
zoom: 2
})
});
</script>
</body>
</html>
```
But I have no idea how to plot the markers. The points to plot are given as JSON, something like below:
```
[{
  "x": 1.234,
  "y": 3.34,
  "units": "feet"
},
{
  "x": 2.234,
  "y": 4.34,
  "units": "feet"
},
{
  "x": 7.234,
  "y": 9.34,
  "units": "feet"
}]
```
| - Create an Icon Style
- Create Icon Feature
- Setup a New Vector layer with vector source
- Add the vector layer in Map's layer
- I have displayed the marker on click of the map at the mouse position; you can add markers on whatever event you want.
- Also, since I did not have the images you were referring to, I have used the OpenLayers example image.
```
<!DOCTYPE html>
<html>
<head>
<title>Static image example</title>
<script src="https://code.jquery.com/jquery-1.11.2.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/ol3/3.6.0/ol.css" type="text/css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/ol3/3.6.0/ol.js"></script>
</head>
<body>
<div class="container-fluid">
<div class="row-fluid">
<div class="span12">
<div id="map" class="map"></div>
</div>
</div>
</div>
<script>
// Map views always need a projection. Here we just want to map image
// coordinates directly to map coordinates, so we create a projection that uses
// the image extent in pixels.
var extent = [0, 0, 1024, 968];
var projection = new ol.proj.Projection({
code: 'xkcd-image',
units: 'pixels',
extent: extent
});
var iconStyle = new ol.style.Style({
image: new ol.style.Icon(({
anchor: [15, 24],
size: [32, 48],
anchorXUnits: 'pixels',
anchorYUnits: 'pixels',
opacity: 0.75,
src: 'http://www2.psd100.com/ppp/2013/11/0501/Map-marker-icon-1105213652.png'
}))
});
//Create a Feature
var iconFeature = new ol.Feature({
geometry: new ol.geom.Point([72.5800, 23.0300])
});
//Setup a Vector Source
var vectorSource = new ol.source.Vector({
features: [iconFeature]
});
//Setup a Vector Layer
var vectorLayer = new ol.layer.Vector({
source: vectorSource
});
iconFeature.setStyle(iconStyle);
var map = new ol.Map({
layers: [
new ol.layer.Image({
source: new ol.source.ImageStatic({
attributions: [
new ol.Attribution({
html: '© <a href="http://xkcd.com/license.html">xkcd</a>'
})
],
url: 'http://imgs.xkcd.com/comics/online_communities.png',
projection: projection,
imageExtent: extent
})
}), vectorLayer //Add Vector in layers
],
target: 'map',
view: new ol.View({
projection: projection,
center: ol.extent.getCenter(extent),
zoom: 2
})
});
//On Map click setup marker
map.on('click', function (evt) {
var feature = new ol.Feature(new ol.geom.Point(evt.coordinate));
feature.setStyle(iconStyle);
vectorSource.clear();
vectorSource.addFeature(feature);
selectedlong = evt.coordinate[0];
selectedlat = evt.coordinate[1];
});
</script>
</body>
</html>
```
|
"You tried to execute a query that does not include the specified aggregate function"
```
SELECT SUM(orders.quantity) AS num, fName, surname
FROM author
INNER JOIN book ON author.aID = book.authorID;
```
I keep getting the error message: "you tried to execute a query that does not include the specified expression "fName" as part of an aggregate function. What do I do?
| The error is because `fName` is included in the `SELECT` list, but is not included in a `GROUP BY` clause and is not part of an aggregate function (`Count()`, `Min()`, `Max()`, `Sum()`, etc.)
You can fix that problem by including `fName` in a `GROUP BY`. But then you will face the same issue with `surname`. So put both in the `GROUP BY`:
```
SELECT
fName,
surname,
Count(*) AS num_rows
FROM
author
INNER JOIN book
        ON author.aID = book.authorID
GROUP BY
fName,
surname
```
Note I used `Count(*)` where you wanted `SUM(orders.quantity)`. However, `orders` isn't included in the `FROM` section of your query, so you must include it before you can `Sum()` one of its fields.
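If the `orders` table joins to `book` (the linking column names below are assumptions), the original `Sum()` version would look something like this; note that Access SQL requires parentheses when there are multiple joins:
```
SELECT
    fName,
    surname,
    Sum(orders.quantity) AS num
FROM
    (author
    INNER JOIN book
        ON author.aID = book.authorID)
    INNER JOIN orders
        ON book.bookID = orders.bookID
GROUP BY
    fName,
    surname
```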
If you have Access available, build the query in the query designer. It can help you understand what features are possible and apply the correct Access SQL syntax.
|
Inheriting parent assignment operator when child's is implicitly deleted
In GCC 4.6, it is possible to inherit a parent's assignment operators even when the child's assignment operators are implicitly deleted due to a move constructor. In later versions of GCC (as well as Clang), this is no longer possible. What is the proper way to have the child class use the parent's assignment operators?
```
struct A
{
A & operator=(A const & other) = default;
};
struct B : public A
{
B() {}
B(B && other) {}
using A::operator=;
};
int main()
{
B b1, b2;
b1 = b2; // error: use of deleted function because B's operator= is implicitly deleted due to move constructor
return 0;
}
```
| A function that is deleted is still *declared*, only the *definition* is deleted. Expanding that in your class definition:
```
struct B : A {
using A::operator=; // A& operator=(const A&)
B& operator=(const B&) = delete;
};
```
At this point, you can note that there are two declarations for `operator=` in the derived type, the first one (brought into scope by means of a *using-declaration*) takes a `const A&` argument, while the second one takes a `const B&` and is *deleted*.
When you later try the assignment:
```
B b1, b2;
b1 = b2;
```
Both declarations are seen by the compiler, and the second one is a better match. Because it is marked as *deleted*, you get the error. If you had, on the other hand, assigned an `A` object, it would have worked as expected:
```
B b1, b2;
b1 = static_cast<A&>(b2); // works, BUT...
```
The problem with this approach is that it is only copying the base subobjects which is probably not what you want. If you just want the same behavior you would have had if the assignment had been generated by the compiler you need to ask for it:
```
struct B : A {
// ...
B& operator=(const B&) = default;
};
```
|
How to count total number of watches on a page?
Is there a way, in JavaScript, to count the number of angular watches on the entire page?
We use [Batarang](https://chrome.google.com/webstore/detail/angularjs-batarang/ighdmehidhipcmcojjgiloacoafjmpfk?hl=en), but it doesn't always suit our needs. Our application is big and we're interested in using automated tests to check if the watch count goes up too much.
It would also be useful to count watches on a per-controller basis.
**Edit**: here is my attempt. It counts watches in everything with class ng-scope.
```
(function () {
var elts = document.getElementsByClassName('ng-scope');
var watches = [];
var visited_ids = {};
for (var i=0; i < elts.length; i++) {
var scope = angular.element(elts[i]).scope();
if (scope.$id in visited_ids)
continue;
visited_ids[scope.$id] = true;
watches.push.apply(watches, scope.$$watchers);
}
return watches.length;
})();
```
| ### (You may need to change `body` to `html` or wherever you put your `ng-app`)
```
(function () {
var root = angular.element(document.getElementsByTagName('body'));
var watchers = [];
var f = function (element) {
angular.forEach(['$scope', '$isolateScope'], function (scopeProperty) {
if (element.data() && element.data().hasOwnProperty(scopeProperty)) {
angular.forEach(element.data()[scopeProperty].$$watchers, function (watcher) {
watchers.push(watcher);
});
}
});
angular.forEach(element.children(), function (childElement) {
f(angular.element(childElement));
});
};
f(root);
// Remove duplicate watchers
var watchersWithoutDuplicates = [];
angular.forEach(watchers, function(item) {
if(watchersWithoutDuplicates.indexOf(item) < 0) {
watchersWithoutDuplicates.push(item);
}
});
console.log(watchersWithoutDuplicates.length);
})();
```
- Thanks to erilem for pointing out this answer was missing the `$isolateScope` searching and the watchers potentially being duplicated in his/her answer/comment.
- Thanks to Ben2307 for pointing out that the `'body'` may need to be changed.
---
### Original
I did the same thing except I checked the data attribute of the HTML element rather than its class. I ran yours here:
<http://fluid.ie/>
And got 83. I ran mine and got 121.
```
(function () {
var root = $(document.getElementsByTagName('body'));
var watchers = [];
var f = function (element) {
if (element.data().hasOwnProperty('$scope')) {
angular.forEach(element.data().$scope.$$watchers, function (watcher) {
watchers.push(watcher);
});
}
angular.forEach(element.children(), function (childElement) {
f($(childElement));
});
};
f(root);
console.log(watchers.length);
})();
```
I also put this in mine:
```
for (var i = 0; i < watchers.length; i++) {
for (var j = 0; j < watchers.length; j++) {
if (i !== j && watchers[i] === watchers[j]) {
console.log('here');
}
}
}
```
And nothing printed out, so I'm guessing that mine is better (in that it found more watches) - but I lack intimate angular knowledge to know for sure that mine isn't a proper subset of the solution set.
|
Julia metaprogramming return symbol
I'm trying to figure out how to have a quote block, when evaluated, return a symbol. See the example below.
```
function func(symbol::Symbol)
quote
z = $symbol
symbol
end
end
a = 1
eval(func(:a)) #this returns :symbol. I would like it to return :a
z
```
| What your function returned was the `symbol` function, because the last `symbol` in your quote did not have `$` in front. The second problem is that you would like to return the symbol itself, which requires making a quote inside the quote, similar to this question:
[Julia: How do I create a macro that returns its argument?](https://stackoverflow.com/questions/30756701/julia-how-do-i-create-a-macro-that-returns-its-argument/30757496#30757496)
```
function func(s::Symbol)
quote
z = $s
$(Expr(:quote, s)) # This creates an expresion inside the quote
end
end
a = 1
eval(func(:a)) #this returns :a
z
```
|
Why is the formula for the density of a transformed random variable expressed in terms of the derivative of the inverse?
In this very nice [answer](https://stats.stackexchange.com/questions/14483/intuitive-explanation-for-density-of-transformed-variable), the intuitive explanation of the formula for the density of a transformed random variable, $Y = g(X)$, leads naturally to an expression like
$$f_Y(y) = \frac{f_X(g^{-1}(y))}{g'(g^{-1}(y))},$$
where $f_X(x)$ is the density function of $X$ (and assuming for simplicity that $g(x)$ is monotone increasing).
However, this formula is often presented (without much explanation) as
$$f_Y(y) = f_X(g^{-1}(y)) (g^{-1})'(y),$$
which follows from an application of the [Inverse Function Theorem](https://math.libretexts.org/Bookshelves/Calculus/Book%3A_Calculus_(OpenStax)/03%3A_Derivatives/3.7%3A_Derivatives_of_Inverse_Functions). I have seen this pattern in several places: expositions yield the first expression (for example [here](https://www2.stat.duke.edu/courses/Spring11/sta114/lec/114mvnorm.pdf)), but the canonical result seems to be communicated in terms of the second expression, such as the [Wikipedia reference](https://en.wikipedia.org/wiki/Probability_density_function#Scalar_to_scalar). Some [write-ups](http://stla.github.io/stlapblog/posts/ChangeOfVariables.html) motivate it in terms of the former and then explicitly invoke the substitution $$\frac{1}{g'(g^{-1}(y))} = (g^{-1})'(y).$$
Is there anything pedagogically interesting to say about this? Is there a reason to disprefer what seems to be the more "intuitive" expression? Is the more standard version in terms of the derivative of the inverse simply easier for students to remember and calculate with?
| It seems that the heuristic described by @whuber in their answer to the linked problem can be modified slightly to yield the change of variables formula for the density in its more familiar form. Consider a finite sum approximation to the probability elements; the "conservation of mass" requirement stipulates that $$h_X(x_j) \Delta_X(x_j) = h_Y(y_j) \Delta_Y(y_j).$$ Here $h_X(x_j)$ is the height and $\Delta(x_j)$ is the width of the interval on which $x_j$ is the center.
Suppose that $h_X(x)$ is known and $y = g(x)$ for a monotone continuous function $g(\cdot)$. The goal is to solve for $h_Y(y)$ in terms of $g(\cdot)$ and $h_X(\cdot)$. To do so, we will fix either $\Delta_X(x_j)$ *or* $\Delta_Y(y_j)$ to be some constant $\Delta$ for all values of its argument. Then we will solve for $h_Y(y)$ and take a limit as $\Delta \rightarrow 0$. Which of $\Delta_X(x_j)$ *or* $\Delta_Y(y_j)$ is set to the constant determines which of the two forms of the formula is arrived at.
Setting $\Delta_Y(y_j) = \Delta$ gives the more common form.
$$\begin{aligned}
h_Y(y) \Delta &= h_X(x)\left [g^{-1} \left(y + \dfrac{\Delta}{2} \right) - g^{-1} \left(y - \dfrac{\Delta}{2} \right) \right ],\\
h_Y(y) &= h_X(g^{-1}(y))\frac{\left [g^{-1} \left(y + \dfrac{\Delta}{2} \right) - g^{-1} \left(y - \dfrac{\Delta}{2} \right) \right ]}{\Delta},\\
h_Y(y) &\rightarrow h_X(g^{-1}(y)) (g^{-1})'(y).
\end{aligned}
$$
Setting $\Delta_X(x_j) = \Delta$ gives the other (equivalent) expression.
$$\begin{aligned}
h_X(x) \Delta &= h_Y(y) \left [g \left(x + \dfrac{\Delta}{2} \right) - g \left(x - \dfrac{\Delta}{2} \right) \right ],\\
h_Y(y) &= h_X(g^{-1}(y)) \frac{ \Delta}{g \left(x + \dfrac{\Delta}{2} \right) - g \left(x - \dfrac{\Delta}{2} \right) },\\
h_Y(y) &\rightarrow \frac{h_X(g^{-1}(y))}{g'(g^{-1}(y))}.
\end{aligned}
$$
Presumably this argument fails when Riemann sums fail and more measure theory is called for, but this line of reasoning satisfies my curiosity well enough. Specifically, the first approach, setting $\Delta_Y(y) = \Delta$ at the outset, inherits the same intuition as explained in @whuber's answer to the other question, but arrives at an expression that will match most other texts (which is desirable to me for pragmatic reasons). Of course, intuition is very personal, so YMMV.
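As a quick numerical sanity check (not part of the original argument), here is a sketch in Python taking $X \sim \text{Exp}(1)$ and $g(x) = x^2$, so the formula gives $f_Y(y) = e^{-\sqrt{y}}/(2\sqrt{y})$:
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)
y = x ** 2                    # Y = g(X), with g monotone increasing on x >= 0

# empirical density of Y near y0, via a small window of half-width h
y0, h = 2.0, 0.05
empirical = np.mean(np.abs(y - y0) < h) / (2 * h)

# change-of-variables formula: f_X(g^{-1}(y0)) * (g^{-1})'(y0)
formula = np.exp(-np.sqrt(y0)) / (2 * np.sqrt(y0))

print(empirical, formula)     # both are approximately 0.086
```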
|
How to POST Data into website using Jsoup
I am trying to POST data to a website to log in to the site using Jsoup, but it's not working.
I am trying the code
```
Document docs = Jsoup.connect("http://some.com/login")
.data("cmd", "login","username", "xxxx","password", "yyyyy")
.referrer("http://some.com/login/").post();
```
Here it gives the normal login page in the page source.
i have also tried the code
```
Document docs = (Document) Jsoup.connect("http://some.com/login")
.data("cmd", "login","username", "xxxx","password", "yyyyy")
.referrer("http://some.com/login/").method(Method.POST).execute().parse();
```
Here it also gives the normal login page in the page source.
Any suggestions would be highly appreciated!
Thanks....
| I will give the answer of your question by taking an example.
Suppose you want to login to facebook.
Then apart from username and password there are many other parameters that are also passed through `POST` request. Those all parameters are hidden and are passed similarly like username and password.
For Example :
If you will open the `html source` of facebook , then you can see there is one parameter which is hidden is `lgnrnd` and its value is `071129_5D7M`.
So there are many other parameter similar like this.You need to pass all the parameters.
You should also specify the `userAgent.`
```
Document doc = Jsoup.connect("http://www.facebook.com")
.data("email", "myemailid")
.data("pass", "mypassword")
// and other hidden fields which are being passed in post request.
.userAgent("Mozilla")
.post();
System.out.println(doc); // will print html source of homepage of facebook.
```
|
reassignment python modules's variable, but function use that var as default don't change
there is example code:
```
# b.py
c = False
def d(i=c):
print(i, c)
```
I want to write `a.py` to let the output of `b.d` to be `True, True`:
```
# a.py
import b
b.c = True
b.d()
```
but the output is `False, True`.
So, *why* does this happen, and *how* do I get it?
---
**write after answer**
to why:
```
# `inspect` may be useful
import inspect
v = True
def f(i=v):
print(i, v)
s = inspect.signature(f)
s.parameters
Out[6]: mappingproxy({'i': <Parameter "i=True">})
```
| This is unnecessarily complicated -- we can boil down your question to a few lines:
```
default = "Before"
def foo(bar=default):
print(bar)
foo() # "Before"
default = "After"
foo() # "Before"
```
The behavior it seems you expect is that after `default = "After"`, calling `foo()` will print "After". But it continues to print "Before".
Python will evaluate the default argument for a function *once* and "lock it in". Reassigning the name of `default` to something else later has no effect (as we see in the snippet above).
Instead, you can use an approach that's commonly suggested when people want lists as default arguments:
```
default = "Before"
def foo(bar=None):
if bar is None:
bar = default
print(bar)
foo() # "Before"
default = "After"
foo() # "After"
```
In this case, you're not trying to change the default argument, but rather change what is assigned to `bar` when no argument is specified. Each time you call `foo()` with no argument, it'll be assigned `None` and then the logic inside the function will look up and use the value of the global `default`.
|
Deserialize a property as an ExpandoObject using JSON.NET
For example, there's an object like the next one:
```
public class Container
{
public object Data { get; set; }
}
```
And it's used this way:
```
Container container = new Container
{
Data = new Dictionary<string, object> { { "Text", "Hello world" } }
};
```
If I deserialize a JSON string obtained from serializing the above instance, the `Data` property, even if I provide the `ExpandoObjectConverter`, it's not deserialized as an `ExpandoObject`:
```
Container container = JsonConvert.Deserialize<Container>(jsonText, new ExpandoObjectConverter());
```
**How can I deserialize a class property assigned with an anonymous object, or at least, not concrete type as an `ExpandoObject`?**
## EDIT:
*Someone answered that I could just use the dynamic object. This won't work for me. I know I could go this way, but this isn't the case because I need an ExpandoObject.
Thanks.*
## EDIT 2:
*Some other user answered I could deserialize a JSON string into an `ExpandoObject`. This isn't the goal of this question. I need to deserialize a concrete type having a dynamic property. In the JSON string this property could be an associative array.*
| Try this:
```
Container container = new Container
{
Data = new Dictionary<string, object> { { "Text", "Hello world" } }
};
string jsonText = JsonConvert.SerializeObject(container);
var obj = JsonConvert.DeserializeObject<ExpandoObject>(jsonText, new ExpandoObjectConverter());
```
I found that doing this got me an `ExpandoObject` from the call to `DeserializeObject`. I think the issue with the code you have provided is that while you are supplying an `ExpandoObjectConverter`, you are asking `Json.Net` to deserialize a `Container`, so I would imagine that the `ExpandoObjectConverter` is not being used.
**Edit:**
If I decorate the `Data` property with `[JsonConverter(typeof(ExpandoObjectConverter))]` and use the code:
```
var obj = JsonConvert.DeserializeObject<Container>(jsonText);
```
Then the `Data` property is deserialized to an `ExpandoObject`, while `obj` is a `Container`.
|
Fill multidimensional array by row
Before presenting the question, I will point out that something similar was asked [here](https://stackoverflow.com/questions/23409209/r-filling-array-by-rows), but that thread doesn't really answer my question.
Consider the following dimensional arrays:
```
1D: [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]
2D: [[1,2,3,4,5,6,7,8], [9,10,11,12,13,14,15,16]]
3D: [[[1,2,3,4],[5,6,7,8]], [[9,10,11,12],[13,14,15,16]]]
4D: [[[[1,2],[3,4]], [[5,6],[7,8]], [[[9,10],[11,12]], [[13,14],[15,16]]]]
...
```
- The 1D array is length 16
- The 2D array is 2x8
- The 3D array is 2x2x4
- The 4D array is 2x2x2x2
Suppose I want to create the arrays. For the first two, I could do something like this in R
```
oneD <- array(1:16, dim=16) # class(oneD) = array
twoD <- array(1:16, dim=c(2,8)) # class(twoD) = matrix
```
However, the twoD array is now represented as
```
[[1,3,5,7,9,11,13,15], [2,4,6,8,10,12,14,16]]
```
I am aware of two ways around this.
```
twoD <- aperm(array(1:16, dim=c(8,2)))
twoD <- matrix(1:16, nrow=2, byrow=TRUE)
```
However, these methods won't work for filling the 3D and 4D arrays. I fill them below, but I would like them to match my definitions above.
```
threeD <- array(1:16, dim=c(2,2,4)) # class(threeD) = array
fourD <- array(1:16, dim=c(2,2,2,2)) # class(fourD) = array
```
**EDIT**
bgoldst's answer made me realize that in fact aperm does work for what I want.
```
threeD <- aperm(array(1:16, dim=c(2,2,4)))
# threeD[1,1,1] = 1
# threeD[1,1,2] = 2
# threeD[1,2,1] = 3
# threeD[1,2,2] = 4
# threeD[2,1,1] = 5
# ....
```
| The way you've written your data, you need to fill your arrays across the deepest dimension first, and then across shallower dimensions. This is the opposite of the way R normally fills matrices/arrays.
It also needs to be said that this is slightly different from simply filling *by row*. To use your 3D array as an illustration of this, you've indicated it requires 4 z-slices, and the innermost "subarrays" have length 4. This means you need to fill across z-slices first, then across columns, then across rows. This is not merely filling *by row*, but by deepest dimension to shallowest dimension (or greatest to least, if you prefer). Admittedly, this concept is often referred to as "by row" or "row-major order", but I don't care for those terms, since they're too 2D, and they're also misleading IMO, since rows are considered to be the shallowest dimension.
To elaborate: It's better to think of fill order as being *across* dimensions rather than *along* dimensions. Think of an *r*×*c*×*z* cube. If you're facing the front of the cube (that is, facing the *r*×*c* matrix formed from *z* = 1), if you move *along* row *r* = 1, that is, from left to right along the top row, then you're also moving *along* (or *within*) z-slice *z* = 1. The idea of moving along a dimension is not helpful. But if you think of such left-to-right movement as being *across* columns, then that is completely unambiguous. Thus, across rows means up-down, across columns means left-right, and across z-slices means front-back. Another way of thinking about this is each respective movement is along the dimension "axis", although I don't usually like to think of it that way, because then you have to introduce the idea of axes. Anyway, this is why I don't care for the terms "by row" and "row-major order" (and similarly "column-major order"), since the proper way to think about that movement (IMO) is *across columns* for 2D, or across the deepest dimension (followed by shallower dimensions) for higher dimensionalities.
You can achieve the requirement by first building the arrays with reversed dimensionality, and then transposing them to "dereverse" (?) the dimensionality. This will lay out the data as you need. Of course, for 1D, no transposition is necessary, and for 2D we can just use [`t()`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/t.html), but for higher dimensionalities we'll need [`aperm()`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/aperm.html). And conveniently, when you call `aperm()` without specifying the `perm` argument, by default it reverses the dimensionality of the input; this is just like calling `t()`.
```
array(1:16,16);
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
t(array(1:16,c(8,2))); ## alternatives: matrix(1:16,2,byrow=T), aperm(array(1:16,c(8,2)))
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
## [1,] 1 2 3 4 5 6 7 8
## [2,] 9 10 11 12 13 14 15 16
aperm(array(1:16,c(4,2,2))); ## same as aperm(array(1:16,c(4,2,2)),3:1)
## , , 1
##
## [,1] [,2]
## [1,] 1 5
## [2,] 9 13
##
## , , 2
##
## [,1] [,2]
## [1,] 2 6
## [2,] 10 14
##
## , , 3
##
## [,1] [,2]
## [1,] 3 7
## [2,] 11 15
##
## , , 4
##
## [,1] [,2]
## [1,] 4 8
## [2,] 12 16
##
aperm(array(1:16,c(2,2,2,2))); ## same as aperm(array(1:16,c(2,2,2,2)),4:1)
## , , 1, 1
##
## [,1] [,2]
## [1,] 1 5
## [2,] 9 13
##
## , , 2, 1
##
## [,1] [,2]
## [1,] 3 7
## [2,] 11 15
##
## , , 1, 2
##
## [,1] [,2]
## [1,] 2 6
## [2,] 10 14
##
## , , 2, 2
##
## [,1] [,2]
## [1,] 4 8
## [2,] 12 16
##
```
|
Can you use a ngFor with a modulus operator in Angular 2
I'm writing out images to the web page. Every three images I would like to start a new row. Does angular 2 support this?
| You can achieve it by doing the following:
```
<div *ngFor="let t of temp(math.ceil(arr.length/3)).fill(); let i = index" class="row">
<div *ngFor="let item of arr.slice(3*i,3*i + 3);" class="item">
{{item}}
</div>
</div>
```
And in your component:
```
export class App {
temp = Array;
math = Math;
arr= [1,2,3,4,5,6,7,8,9,10,11];
}
```
Here's a working **[Plunker](https://plnkr.co/edit/hKa7P82xZvocU7Jh9HZz?p=preview)**
|
Does C# app exit automatically dispose managed resources?
***I am fully aware that using statements are the way to handle `IDisposable`s. Please do not repeat this advice in comments.***
When a C# .NET 4.5 (or higher) application closes, what happens to the `IDisposable`s which were not properly disposed?
I know some have a finalizer for disposing unmanaged resources.
But let's say I have a console app, with a static `Stream` variable. Is it disposed when I close the console app?
What about an `HttpClient`? And how do you know in which situations it does and in which it does not?
Alright, now some actual background info. I often store certain `IDisposable`s as fields, forcing my class to implement `IDisposable`. The end user should use `using`. But what if that does not happen?
Is it merely unnecessary memory until GC? Or do you suddenly have a memory leak?
| It's important to distinguish between objects implementing `IDisposable` and objects with finalizers. In most cases (probably preferably all), objects with finalizers also implement `IDisposable` but they are in fact two distinct things, most often used together.
A finalizer is a mechanism to say to the .NET Runtime that before it can collect the object, it has to execute the finalizer. This happens when the .NET Runtime detects that an object is eligible for garbage collection. Normally, if the object does not have a finalizer, it will be collected during this collection. If it has a finalizer, it will instead be placed onto a list, the "freachable queue", and there is a background thread that monitors this queue. Sometime after the collection has placed the object onto this queue, the finalizer thread will process the object from this queue and call the finalizer method.
Once this has happened, the object is again eligible for collection, but it has also been marked as finalized, which means that when the garbage collector finds the object in a future collection cycle, it no longer places it on this queue but collects it normally.
Note that in the above paragraphs of text, `IDisposable` is not mentioned once, and there is a good reason for that. None of the above relies on `IDisposable` **at all**.
Now, objects implementing `IDisposable` may or may not have a finalizer. The general rule is that if the object itself owns unmanaged resources it probably should and if it doesn't it probably shouldn't. *(I'm hesitant to say always and never here since there always seems to be someone that is able to find a cornercase where it makes sense one way or another but breaks the "typically" rule)*
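As an illustration, here is a sketch of the standard dispose pattern (not code from the question) showing how the two mechanisms are typically combined when a class owns an unmanaged handle:
```
using System;

class HandleOwner : IDisposable
{
    private IntPtr handle = IntPtr.Zero; // stand-in for some unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // already cleaned up, so skip the freachable queue
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here (only safe during a deterministic Dispose() call)
        }
        // release the unmanaged handle here (runs for both Dispose() and the finalizer)
        handle = IntPtr.Zero;
        disposed = true;
    }

    ~HandleOwner() // finalizer: safety net if Dispose() was never called
    {
        Dispose(false);
    }
}
```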
A *TL;DR* summary of the above could be that a finalizer is a way to get a (semi-)guaranteed cleanup of the object when it is collected, but exactly when that happens is not directly under the programmers control, whereas implementing `IDisposable` is a way to control this cleanup directly from code.
Anyway, with all that under our belt, let's tackle your specific questions:
>
> When a C# .NET 4.5 (or higher) application closes, what happens to the IDisposables which were not properly disposed?
>
>
>
**Answer:** Nothing. If they have a finalizer, the finalizer thread will try to pick them up, since when the program terminates, *all* objects become eligible for collection. The finalizer thread is not allowed to run "forever" to do this, however, so it may also run out of time. If, on the other hand, the object implementing `IDisposable` does not have a finalizer it will simply be collected normally (again, `IDisposable` has no bearing at all on garbage collection).
>
> But let's say I have a console app, with a static Stream variable. Is it disposed when I close the console app?
>
>
>
**Answer:** No, it will not be *disposed*. `Stream` by itself is a base class, so depending on the concrete derived class it may or may not have a finalizer. It follows the same rule as above, however, so if it doesn't have a finalizer it will simply be collected. Examples, [MemoryStream](https://referencesource.microsoft.com/#mscorlib/system/io/memorystream.cs) does not have a finalizer, whereas [FileStream](https://referencesource.microsoft.com/#mscorlib/system/io/filestream.cs,1308) does.
>
> What about a HttpClient? And how do you know in which situations it does and in which is does not
>
>
>
**Answer:** The [reference source for HttpClient](https://github.com/dotnet/corefx/blob/master/src/System.Net.Http/src/System/Net/Http/HttpClient.cs) seems to indicate that `HttpClient` does not have a finalizer. It will thus simply be collected.
>
> Alright, now some actual background info. I often store certain IDisposables as fields, forcing my class to implement IDisposable. The end user should use using. But what if that does not happen?
>
>
>
**Answer:** If you forget/don't call `IDisposable.Dispose()` on objects implementing `IDisposable`, everything I've stated here regarding finalizers will still happen, once the object is eligible for collection. Other than that, nothing special will happen. Whether the object implements `IDisposable` or not have no bearing on the garbage collection process, only the presence of a finalizer has.
>
> Is it merely unnecessary memory until GC? Or do you suddenly have a memory leak
>
>
>
**Answer:** Undetermined from this simple information. It depends on what the `Dispose` method would do. For instance, if the object has registered itself somewhere so that there is a reference to it, somewhere, for some code to stop using the object may not actually make the object eligible for collection. The `Dispose` method might be responsible for unregistering it, removing the last reference(s) to it. So this depends on the object. Merely the fact that the object implements `IDisposable` does not create a memory leak. If the last reference to the object is removed, the object becomes eligible for collection and will be collected during a future collection cycle.
---
**Remarks:**
- Note that the above text is also probably simplified. A full collection cycle to actually "collect memory" is probably not done on application termination as there is no point. The operating system will free the memory allocated by the process when it terminates anyway. When an application terminates, .NET Framework makes every reasonable effort to call finalizers for objects that haven't yet been garbage collected, unless such cleanup has been suppressed (by a call to the library method GC.SuppressFinalize, for example). .NET 5 (including .NET Core) and later versions don't call finalizers as part of application termination.[1](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/finalizers#:%7E:text=When%20an%20application,of%20application%20termination.) (I have no additional knowledge one way or another about what kind of optimizations are done here)
- The more important part here is that you need to distinguish between memory (or other) leaks **during** program execution and **after** program execution
- When the process terminates, the operating system will reclaim all memory allocated to it, it will close all handles (which may keep sockets, files, etc. open), all threads will be terminated. In short, the program is completely removed from memory
- The process may have left tidbits of itself around though, which are *not* cleaned up unless the process took care to do this beforehand. An open file is closed, as stated above, but it may not have been completely written and thus may be corrupt in some way.
- During program execution, leaks may make the program grow in terms of allocated memory, it may allocate too many handles because it fail to close the ones it no longer needs, etc. and this is important in terms of handling `IDisposable` and finalizers correctly, but when the process terminates, this is no longer a problem.
|
Substitute AltTab in one key
Is it possible to substitute Alt+Tab with only one key press?
I tried this one, but it doesn't work:
```
`::AltTab
```
| I use this; you may need to change the Sleep delay.
```
`::
Send {Alt Down}{Tab}
Sleep 100
Send {Alt Up}
return
```
I am running Windows 8.1 64-bit and AutoHotkey v1.1.16.05. And my `C:\Program Files\AutoHotkey\AutoHotkeyU64.exe` is digitally signed by running the script described [here](http://www.autohotkey.com/board/topic/70449-enable-interaction-with-administrative-programs/) ([EnableUIAccess.zip](https://dl.dropbox.com/u/20532918/EnableUIAccess.zip)) so that Windows allows it to simulate Alt+Tab. The digital signature is required if you are using Windows Vista and onwards.
Download the zip file and extract it. Then run `EnableUIAccess.ahk`:
1. It will ask which AutoHotkey executable to sign. Pick one that you need (`AutoHotkeyA32.exe`, `AutoHotkeyU32.exe`, `AutoHotkeyU64.exe` or `AutoHotkey.exe`).
2. Then it will ask to save the new executable. You can choose to overwrite the original file or save as another executable.
3. Finally it will ask to create a "`Run Script with UI Access`" context menu item. If you choose 'Yes', then you can right-click a .ahk file and choose "`Run Script with UI Access`", which will use the digitally signed executable to run the .ahk file. But if you choose to overwrite the original file in step 2, then it is not necessary to create this context menu item.
|
where to release a \_\_block variable?
I have following code snippet:
```
-(void) doSomething
{
__block NSMutableArray *objArray = [[NSMutableArray alloc] initWithCapacity:0];
[self performOperationWithBlock:^(void)
{
//adding objects to objArray
.
.
//operation with objArray finished
// 1. should objArray be released here?
}];
//2. should objArray be released here?
}
```
Should I autorelease the objArray?
| If it's an asynchronous call, it would make sense to create the `NSMutableArray` inside the actual block:
```
[self performOperationWithBlock:^(void)
{
NSMutableArray *objArray = [[NSMutableArray alloc] initWithCapacity:0];
//adding objects to objArray
.
.
//operation with objArray finished
// 1. should objArray be released here?
}];
```
As you won't be needing it after the block (it only makes sense for the duration of the async operation), release it after you have used it. Or, you can simply:
```
NSMutableArray *objArray = [NSMutableArray array];
```
And in this case you don't need to release it.
If it's a sync call, you should `release` it after the block.
---
**Note:** I am assuming you are populating the `NSMutableArray` before it is used in the block, which means it makes sense to create it before the block starts.
**Async approach:**
```
-(void) doSomething
{
// Remove the `__block` qualifier, you want the block to `retain` it so it
// can live after the `doSomething` method is destroyed
NSMutableArray *objArray = // created with something useful
[self performOperationWithBlock:^(void)
{
// You do something with the objArray, like adding new stuff to it (you are modifying it).
// Since you removed the __block qualifier (in non-ARC it has a different meaning than in ARC), the block retains the array.
// Finally, you need to be a good citizen and release it.
}];
// By the time execution reaches this point, the block might or might not have been executed (it's an async call).
// With this in mind, you cannot just release the array. So you release it inside the block
// when the work is done
}
```
**Sync Approach**:
It assumes that you need the result immediately, and it makes sense when you do further work with the array after the block has been executed, so:
```
-(void) doSomething
{
// Keep the `__block` qualifier; you don't want the block to `retain` the array, as you
// will release it afterwards
__block NSMutableArray *objArray = // created with something useful
[self performOperationWithBlock:^(void)
{
// You do something with the objArray, like adding new stuff to it (you are modifying it).
}];
// Since it's a sync call, when you reach this point, the block has been executed and you are sure
// that you won't be doing anything else inside the block with the array, so it's safe to release it
// Do something else with the array
// Finally release it:
[objArray release];
}
```
|
Can the "Starred" folder in the left pane of Files (nautilus) be removed?
On Ubuntu 19.10, I disable tracker because I do not like how my computer is overheating for several minutes after startup, and because I prefer full text search not be enabled in the file manager by default. The "Star" feature [relies on Tracker](https://askubuntu.com/questions/1148262/where-is-the-starred-directory-in-the-nautilus-sidebar-stored) and therefore does not work when tracker is disabled.
No option to disable the "Starred" folder is exposed in the Nautilus preferences, nor is a dconf setting available. The file `user-dirs.dirs` determines the "special user folders" displayed in the left pane, but not the "Recent" or "Starred" items.
The question is: can the "Starred" item in the left pane (bookmark pane) of Files (nautilus) be removed?
| There are [a couple of not-so-trivial ways](https://superuser.com/questions/1359253/how-to-remove-starred-tab-in-gnomes-nautilus) to remove the "Starred" item in the left bar of Nautilus. The second option involves editing source code and recompiling, so I will only cover the first way here.
1 - Create a folder to store the override
```
mkdir ~/.config/nautilus/ui
```
2 - Extract the resource description of the main window:
```
gresource extract /bin/nautilus \
/org/gnome/nautilus/ui/nautilus-window.ui \
> ~/.config/nautilus/ui/nautilus-window.ui
```
3 - Edit the properties of the GtkPlacesSidebar object: open the file you created in the previous step:
```
gedit ~/.config/nautilus/ui/nautilus-window.ui
```
and change the property `show-starred-location` to `false` as in following code snippet:
```
<object class="GtkPlacesSidebar" id="places_sidebar">
...
<property name="show-recent">False</property>
<property name="show-starred-location">False</property>
...
</object>
```
4 - Set the environment variable to make GLib use this override:
```
export G_RESOURCE_OVERLAYS="/org/gnome/nautilus/ui=$HOME/.config/nautilus/ui"
```
5 - You also need to set this via ~/.pam\_environment, because Nautilus is started via D-Bus:
```
gedit ~/.pam_environment
```
and add following line
```
G_RESOURCE_OVERLAYS DEFAULT="/org/gnome/nautilus/ui=/home/confetti/.config/nautilus/ui"
```
where you replace "confetti" with your own login name.
(with thanks to JusticeforMonica and DK Bose for the hints)
You need to log out and back in before this will take effect.
|
How to modify EXIF data in python
I am trying to edit/modify existing metadata within Python 2.7. More specifically, I have GPS coordinates in my metadata; however, the altitude field is incorrect. Is there a way of changing this?
I have had a look at `PIL`, `piexif` and `pyexif`, but I cannot seem to find a way to modify existing fields.
Has anyone managed to do this? It sounds like it would be very simple, but I can't seem to work it out.
|
```
import piexif
from PIL import Image
img = Image.open(fname)
exif_dict = piexif.load(img.info['exif'])
altitude = exif_dict['GPS'][piexif.GPSIFD.GPSAltitude]
print(altitude)
```
`(550, 1)`: some values are saved in a fractional format. This means 550 m; `(51, 2)` would be 25.5 m.
```
exif_dict['GPS'][piexif.GPSIFD.GPSAltitude] = (140, 1)
```
This sets the altitude to 140m
```
exif_bytes = piexif.dump(exif_dict)
img.save('_%s' % fname, "jpeg", exif=exif_bytes)
```
|
Fill Empty cells from the below row values in r
I have data as below
```
col1 col2 col3


56   78   89

67   76   43
```
I want to fill the empty cells as below in R:
```
col1 col2 col3
56 78 89
56 78 89
56 78 89
67 76 43
67 76 43
```
| We need to change the blank cells (`""`) to `NA` and then use `na.locf` from `zoo`
```
library(zoo)
df1[] <- lapply(df1, function(x) as.numeric(na.locf(replace(x, x=="", NA), fromLast=TRUE)))
df1
# col1 col2 col3
#1 56 78 89
#2 56 78 89
#3 56 78 89
#4 67 76 43
#5 67 76 43
```
### data
```
df1 <- structure(list(col1 = c("", "", "56", "", "67"), col2 = c("",
"", "78", "", "76"), col3 = c("", "", "89", "", "43")), .Names = c("col1",
"col2", "col3"), row.names = c(NA, -5L), class = "data.frame")
```
|
font-awesome not working bundleconfig in MVC5
If I refer directly to `font-awesome.css` in the \_Layout page, it works:
```
<link href="~/Content/font-awesome-4.0.3/css/font-awesome.css" rel="stylesheet" />
```
But when I use it in `BundleConfig.cs`, the icon is not showing.
```
bundles.Add(new StyleBundle("~/Content/css").Include(
"~/Content/font-awesome-4.0.3/css/font-awesome.css",
"~/Content/bootstrap.css",
"~/Content/body.css",
"~/Content/site.css",
"~/Content/form.css"
));
```
I also observed a browser console error for the font directory:
`Failed to load resource: the server responded with a status of 404 (Not Found) http://localhost:51130/fonts/fontawesome-webfont.woff?v=4.0.3`
what could be the problem?
| Try using `CssRewriteUrlTransform` when bundling:
```
bundles.Add(new StyleBundle("~/Content/css").Include(
"~/Content/bootstrap.css",
"~/Content/body.css",
"~/Content/site.css",
"~/Content/form.css"
).Include("~/Content/font-awesome-4.0.3/css/font-awesome.css", new CssRewriteUrlTransform()));
```
This changes any urls for assets from within the css file to absolute urls so the bundling doesn't mess up the relative path.
Docs for [CssRewriteUrlTransform](https://learn.microsoft.com/en-us/previous-versions/aspnet/dn202167(v=vs.110))
|
Replacing C with JavaScript as an introductory programming language
I am a new teacher at polytechnic where we teach web development and basic software programming.
For years, the institution where I teach has taught C as the introductory programming language to people who are assumed to have no knowledge of programming.
I am investigating whether there is a better option, and was thinking about JavaScript because it covers much of the same structure as what we teach in C.
The reason I was thinking of JS is that a big part of the pathway that we teach goes into web development, and JavaScript is used quite heavily in today's web development world.
The only direct difference I would encounter would be missing the user input via console that can be done in C.
I suppose that would have to be replaced by a simple HTML page, so that is not a biggie.
Is there any reason, in your experience, why JavaScript would be a bad way to start, given the limited information I have given you? Or why would it be a good start?
I can see some pros and cons to the change, so I am after some other programmers' opinions.
Also, even though C is the intro language and does not leave the command line interface in the paper, the students are then expected to transition that knowledge into JavaScript and C# in the following year in 2 other subjects.
| I think JavaScript is not the best language to start learning programming concepts, mainly because of its "unusual" prototype approach.
C is meanwhile where assembler was in earlier days, so I think it is not very interesting for most of the students. I think C++ has now taken over the role of C, but C++ is not that interesting or fancy for students either.
What I think is important to teach your students are the basic concepts: debuggers, breakpoints, memory, etc.
I think Microsoft's Visual Studio Community Edition together with C# as the language would be a pretty good choice. Every student can also install it for free on their own machine.
If you dive directly into JavaScript you miss a lot of the fundamental parts that anyone who works with software should know. A C# console application, I think, is a good start.
Another point: diving directly into web development is not a good strategy, I think; it is too complex. Your students would miss the point of what it really means to code and debug something.
If you still want to use JavaScript I think NodeJS development could be pretty interesting for your students.
|
How to create hyperlink using XSLT?
I'm new at XSLT. I want to create a hyperlink using XSLT.
Should look like this:
Read our **privacy policy.**
"privacy policy" is the link and upon clicking this, should redirect to example "www.privacy.com"
Any ideas? :)
| **This transformation**:
```
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:template match="/">
<html>
<a href="www.privacy.com">Read our <b>privacy policy.</b></a>
</html>
</xsl:template>
</xsl:stylesheet>
```
**when applied on *any* XML document (not used), produces the wanted result**:
```
<html><a href="www.privacy.com">Read our <b>privacy policy.</b></a></html>
```
**and this is displayed by the browser as**:
Read our **privacy policy.**
**Now imagine that nothing is hardcoded in the XSLT stylesheet -- instead the data is in the source XML document**:
```
<link url="www.privacy.com">
Read our <b>privacy policy.</b>
</link>
```
**Then this transformation**:
```
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:strip-space elements="*"/>
<xsl:template match="node()|@*">
<xsl:copy>
<xsl:apply-templates select="node()|@*"/>
</xsl:copy>
</xsl:template>
<xsl:template match="link">
<a href="{@url}"><xsl:apply-templates/></a>
</xsl:template>
</xsl:stylesheet>
```
**when applied on the above XML document, produces the wanted, correct result**:
```
<a href="www.privacy.com">
Read our <b>privacy policy.</b>
</a>
```
|
NavigationBar setShadowImage not always working
I'm trying to set a custom shadow image for the navigation bar in my table views, but it's only showing in some views. I've created a super class to set the styles for my table views.
```
- (void)viewDidLoad
{
[super viewDidLoad];
// Set navigation bar background
[self.navigationController.navigationBar setBackgroundImage:[UIImage imageNamed:@"navigationbarbackground.png"] forBarMetrics:UIBarMetricsDefault];
// Set navigation bar shadow image
[self.navigationController.navigationBar setShadowImage:[UIImage imageNamed:@"navigationbarshadow.png"]];
}
```
In the view I see when starting my app, no shadow is shown. But when I touch the [+] button in my navigation bar to open my '*add new item*' table view, it does show a shadow.
Could someone point me in the right direction here?
| The Appearance proxy should work.
Just call it somewhere (e.g. in your AppDelegate) upon startup.
```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
[self customizeAppearance];
return YES;
}
- (void) customizeAppearance
{
// Set the background image for *all* UINavigationBars
[[UINavigationBar appearance] setBackgroundImage:[UIImage imageNamed:@"navigationbarbackground"] forBarMetrics:UIBarMetricsDefault];
// Set the shadow image for *all* UINavigationBars
[[UINavigationBar appearance] setShadowImage:[UIImage imageNamed:@"navigationbarshadow.png"]];
//add other appearance stuff here...
}
```
However, if you create a storyboard with multiple UINavigationControllers in it and a bunch of segues pushing navigation controllers, you might get a corrupt view controller structure, which might be the problem here.
Another possible issue might be the `Clip Subviews` option of a Navigation Bar somewhere in your nib file or your storyboard. Make sure it is turned off if you want the shadow (image)!
![ClipSubviews](https://i.stack.imgur.com/2j9b3.png)
By the way, if you use imageNamed you don't need to include the file extension.
|
Divide a region into parts efficiently Python
I have a square grid with some points marked off as being the centers of the subparts of the grid. I'd like to be able to assign each location within the grid to the correct subpart. For example, if the subparts of the region were centered on the black dots, I'd like to be able to assign the red dot to the region in the lower right, as it is the closest black dot.
[![enter image description here](https://i.stack.imgur.com/dYMjn.png)](https://i.stack.imgur.com/dYMjn.png)
Currently, I do this by iterating over each possible red dot, and comparing its distance to each of the black dots. However, the width, length, and number of black dots in the grid are very high, so I'd like to know if there's a more efficient algorithm.
My particular data is formatted as such, where the numbers are just placeholders to correspond with the given example:
```
black_dots = [(38, 8), (42, 39), (5, 14), (6, 49)]
grid = [[0 for i in range(0, 50)] for j in range(0, 50)]
```
For reference, in the sample case, I hope to be able to fill `grid` up with integers 1, 2, 3, 4, depending on whether they are closest to the 1st, 2nd, 3rd, or 4th entry in black\_dots, so that I end up with something that would allow me to create something similar to the following picture, where each integer corresponds to a color (dots are left on for show).
[![enter image description here](https://i.stack.imgur.com/J9eJb.png)](https://i.stack.imgur.com/J9eJb.png)
To summarize, is there / what is the more efficient way to do this?
| You can use a breadth-first traversal to solve this problem.
1. Create a first-in, first-out queue. (A queue makes a traversal breadth-first.)
2. Create a Visited mask indicating whether a cell in your grid has been added to the queue or not. Set the mask to false.
3. Create a Parent mask indicating what black dot the cell ultimately belongs to.
4. Place all the black dots into the queue, flag them in the Visited mask, and assign them unique ids in the Parent mask.
5. Begin popping cells from the queue one by one. For each cell, iterate over the cell's neighbours. Place each neighbour that has not yet been visited into the queue, flag it in Visited, and set its value in Parent equal to that of the cell you just popped.
6. Continue until the queue is empty.
The breadth-first traversal makes a wave which expands outward from each source cell (black dot). Since the waves all travel at the same speed across your grid, each wave gobbles up those cells closest to its source.
This solves the problem in *O(N)* time.
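Here is a minimal sketch of that traversal in Python (the function name `assign_regions` is mine, and I'm assuming 4-connected neighbours and the `black_dots`/`grid` format from the question, with dots given as `(x, y)` pairs):

```
from collections import deque

def assign_regions(black_dots, width, height):
    parent = [[0] * width for _ in range(height)]  # 0 doubles as the Visited mask
    queue = deque()
    # Seed the queue with all sources, giving each a unique 1-based id
    for i, (x, y) in enumerate(black_dots, start=1):
        parent[y][x] = i
        queue.append((x, y))
    # Breadth-first traversal: each wave expands one cell per step from its source
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and parent[ny][nx] == 0:
                parent[ny][nx] = parent[y][x]  # inherit the id of the nearest source
                queue.append((nx, ny))
    return parent

black_dots = [(38, 8), (42, 39), (5, 14), (6, 49)]
grid = assign_regions(black_dots, 50, 50)
```

Note that with 4-connected moves the waves measure Manhattan rather than Euclidean distance, which is usually an acceptable approximation for this kind of region assignment.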
|
Remove Blank spaces in a string in O(n)
How to remove blank spaces in a string with a complexity of O(n).
My approach uses two indexes. One traverses to the end of the string; the other is incremented only when a non-blank character is encountered.
But I am not sure about this approach.
TIA,
Praveen
| This approach is fine. The O(n) requirement simply means that the running time is proportional to the number of items, which in this case means the number of characters in the string (assuming you mean time complexity, which is a fairly safe bet here).
The pseudocode:
```
def removeSpaces (str):
src = pointer to str
dst = src
while not end-of-string marker at src:
if character at src is not space:
set character at dst to be character at src
increment dst
increment src
place end-of-string marker at dst
```
is basically what you're trying to do.
Because it has a single loop dependent only on the number of characters, it is indeed O(n) time complexity.
---
The following C program shows this in action:
```
#include <stdio.h>
// Removes all spaces from a (non-const) string.
static void removeSpaces (char *str) {
// Set up two pointers.
char *src = str;
char *dst = src;
// Process all characters to end of string.
while (*src != '\0') {
// If it's not a space, transfer and increment destination.
if (*src != ' ')
*dst++ = *src;
// Increment source no matter what.
src++;
}
// Terminate the new string.
*dst = '\0';
}
```
```
// Test program.
int main (void)
{
char str[] = "This is a long string with lots of spaces... ";
printf ("Old string is [%s]\n", str);
removeSpaces (str);
printf ("New string is [%s]\n", str);
return 0;
}
```
Running this gives you:
```
Old string is [This is a long string with lots of spaces... ]
New string is [Thisisalongstringwithlotsofspaces...]
```
Note that, if there are no spaces in the string, it simply copies every character over itself. You might think that you could optimise it by checking if `src == dst` and not copying but you'll probably find the check is as expensive as the copy. And, unless you're frequently copying multi-megabyte strings, performance won't be an issue here.
Also keep in mind this will be undefined behaviour with `const` strings but that would be the case in any in-place modification.
|
How to understand virtual memory?
How to understand the sentence
>
> we can generalize and allow each data object to have multiple independent addresses, each chosen from a different address space. This is the basic idea of virtual memory (《CSAPP》)
>
>
>
Why doesn't a data object have a unique address?
| Virtual memory is typically used to simplify programming on computers where there are *several* programs running.
If you have several programs running on your computer, they each see a virtual address space of memory which suits their needs. As far as that program is concerned, life is simple. They can use whatever memory they want, at any particular address, and they don't have to make any effort to avoid memory used by other programs.
Moreover, for security, they *can't* see the memory used by other programs even if they want to. Their memory address space is entirely their own to play with as they wish.
So, in the simple case, where each data object belongs to exactly one program, then each data object *does* have exactly one address.
However, programs may refer to common resources. For example two spell-checkers might need to use a big file full of spellings on disk. Rather than load that into memory twice, an operating system will typically load it once - but it may be seen at different virtual addresses by the two programs that use it. So, in this case, one data object may indeed have several virtual addresses.
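This is visible from user space too. Here is a small Python sketch (my own illustration, not from CSAPP) that maps the same file twice and gets two different virtual addresses for the same underlying bytes:

```
import ctypes
import mmap
import os
import tempfile

# Create a small file: this is the single underlying "data object"
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")

# Map the same file twice; each mapping gets its own virtual address
m1 = mmap.mmap(fd, 5)
m2 = mmap.mmap(fd, 5)

addr1 = ctypes.addressof(ctypes.c_char.from_buffer(m1))
addr2 = ctypes.addressof(ctypes.c_char.from_buffer(m2))
print(hex(addr1), hex(addr2))  # two different virtual addresses...

m2[0:1] = b"H"
print(m1[:], m2[:])            # ...but one object: b'Hello' b'Hello'
```

A write through one mapping is visible through the other, because both virtual address ranges are backed by the same physical pages.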
|
Risks of Network Partitioning When a Split Brain Creates a Security Flaw
I'm looking to create a high-availability, scalable networking solution by using a distributed system of data. A node here describes a network that has control over one copy of the data. These nodes might contain more than one machine but have one copy of the data.
The nodes will contain data records which can be in a spent state or an unspent state. A client can request a transition for a record to go from an unspent state to a spent state (a request to spend). There is a security risk if they can successfully do this more than once.
A single node, if it has a connection to all other nodes, can tell the nodes that a spend has been requested and can ensure no other nodes want to access the data and that the spend has not occurred already. The node can change the state of the data to spent and other nodes will not do this since they know one of the nodes is updating it and processing the spend. All nodes will change the data, so the record is in the spent state.
If a node cannot reach another node, it can assume the other node is down and will continue operating with the other nodes until the other node comes back up. In this case the node will send all updates to the node that came back up. If this failed node was in the middle of a spend operation that was incomplete at the time, it can complete it then. This would cause minor downtime for some operations. This would be in the case where a node tells the other nodes it will spend and then fails before it can complete the spend process. In this case the other nodes are blocked from updating it so the failed node needs to come back online before it can be completed.
The problem is, the processing for the spend can only happen once. If the network was partitioned, an attacker knowing this could request the spend on one partition and also on the other. Each partition of the network would assume the other to be down and so would operate independently. This could cause the spend to be processed more than once.
This would not be a problem if the request to the two sides of the network was not being made during the time they were partitioned. The network would become eventually consistent when the connections are re-established. If an attack was successful, the nodes would learn about the attack when they re-establish connections because two sides of the network would announce the same change.
So it is a detectable attack, but is it practically possible?
An attacker would need to be deliberately trying to do this. The software is not designed to make several spend requests at once. There is a time cost to the attack. If the attacker fails, it will take time before they can recreate an unspent record. Creating unspent records requires money, and more money will need to be used in a single attack to get a higher benefit. The reason there is a time cost is that it would take time to receive the money back to try again. They could afford many smaller attacks, but then the benefit to them would be less, and the damage caused would be less too.
Surely partitions are so rare naturally, that an attacker would have to be ridiculously lucky to win, if attempting attacks at any time?
If a connection is naturally lost, a node can halt all operations and try a reconnection. Using a low timeout for the connection to the node means it doesn't have to cause any downtime (Perhaps only rare increased latency). If the reconnection fails then it will continue trying but then restart operations (assuming the node is down). Would something like that protect against occasional connection errors?
So would an attacker be able to detect/cause a partition in the network? How likely is it that partitions will occur and for how long? What ways can issues be resolved if possible?
Thank you.
| Having dealt with similar issues in Clustering scenarios, I'm familiar with the situation you describe. Such systems frequently have the concept of a quorum, which is why such systems require an odd number of member nodes. The quorum is used to determine the majority and minority partitions.
The quorum is the number, greater than half, that defines the minimum number of available nodes that needs to be present to provide services. If a network partition happens, only one partition will have quorum; the others stop services until the partition goes away. If a *multiple partition* event happens, it can lead to no services being provided at all. However, it does guarantee that only one partition is serving, and that's how consistency is provided.
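As a rough illustration (a hypothetical Python sketch, not taken from any particular clustering product), the quorum test each side of a partition would run is just a strict-majority check:

```
def has_quorum(reachable_nodes, total_nodes):
    """A partition may keep serving only if it can see a strict majority."""
    return reachable_nodes > total_nodes // 2

# With 5 member nodes, a 3/2 split leaves exactly one side serving:
assert has_quorum(3, 5)      # majority partition keeps serving
assert not has_quorum(2, 5)  # minority partition halts
```

The odd member count matters precisely because it makes a tie impossible: no two disjoint partitions can both hold a strict majority.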
As for the likelihood of a partition, that depends on your infrastructure and how your nodes are communicating availability state to each other.
As for their ability to detect a partition event, that depends on your code. The main thing that would make such an attack possible is if both partitions are *independently addressable* during a partition, which may not be the case. In my experience, network partitions frequently exclude end-users from one partition as well as the other nodes. If the partitions are not addressable, then this attack is a lot less likely to succeed.
|
What is Chrome Canary and how is it different from Google Chrome?
As the question title says it all, I read Google support and a few things here and there, but the only thing they say is it's for developers and it is updated at a rapid speed. How exactly is Chrome Canary different from Google Chrome?
| Chrome has four [release channels](https://www.chromium.org/getting-involved/dev-channel) – stable, beta, dev and canary. Stable is the regular Chrome most users use. Canary is simply a much newer release that's not as well tested, but has the latest shiny stuff. After a while, the version that was released in the canary channel gets any bugs that are found fixed, then filters downward to dev, and then to the beta and stable releases. Other than the lack of testing, and possibly not having all the bugs fixed, canary is merely Chrome *FROM THE FUTURE* (except for those features that might get scrapped due to lack of quality).
In short, you get cool stuff, but it might crash horribly. On the other hand, you don't have to use it as a primary browser (in fact, you cannot set it as default). It's mainly useful if you like living dangerously and want to test bleeding edge features.
|
How to continue javascript execution when an error occurs
I use the WP native function wp\_enqueue\_script() for all my script loading in both WP front and back-end so it can handle duplicated calls to the same script and so on.
One of the issues is that other programmers don't use this function and load their scripts directly from their code, which causes jQuery or jQuery-UI to be loaded twice, leading to a bunch of errors.
The other issue is that code not owned by me triggers an error and stops the execution of JavaScript beyond this point.
In short:
A Javascript error occurs in code not owned by me.
My code doesn't execute due to that error.
I want my code to bypass that error and still execute.
Is there a way to handle these issues?
|
```
function ShieldAgainstThirdPartyErrors($) {
// Code you want to protect here...
}
// First shot.
// If no error happened, this code is executed when DOMContentLoaded is triggered.
jQuery(ShieldAgainstThirdPartyErrors);
// Backup shot.
// If a third-party script throws or provokes an unhandled exception, the above
// function call might never be executed, so let's catch the exception and run the code.
window.onerror = function () {
ShieldAgainstThirdPartyErrors(jQuery);
return true;
}
```
If you want to pull the trigger of your gun twice only when necessary ;) set a flag to signal that the first shot was successful and skip the backup shot. I think that under some circumstances your first shot could be executed even though third-party code gets in trouble and triggers the second shot.
|
What does a wedge-like shape of the PCA plot indicate?
In their [paper on autoencoders for text classification](https://www.cs.toronto.edu/~hinton/science.pdf) Hinton and Salakhutdinov demonstrated the plot produced by 2-dimensional LSA (which is closely related to PCA): ![2-dim LSA](https://danluu.com/images/linear-hammer/PCA.png).
Applying PCA to completely different, moderately high-dimensional data, I obtained a similar-looking plot: ![2-dim PCA](https://i.stack.imgur.com/xpMzF.png) (except in this case I really wanted to know if there is any internal structure).
If we feed random data into PCA we obtain a disk-shaped blob, so this wedge shape is not random. Does it mean anything by itself?
| Assuming the variables are positive or non-negative, the edges of the wedge are just the points beyond which the data would become 0 or negative, respectively. As real-life data tend to be right-skewed, we see a greater density of points at the low end of their distribution and hence a greater density at the "point" of the wedge.
More generally, PCA is simply a rotation of the data and constraints on those data will generally be visible in the principal components in the same manner as shown in the question.
Here is an example using several log-normally distributed variables:
```
library("vegan")
set.seed(1)
df <- data.frame(matrix(rlnorm(5*10000), ncol = 5))
plot(rda(df), display = "sites")
```
[![enter image description here](https://i.stack.imgur.com/CSd2D.png)](https://i.stack.imgur.com/CSd2D.png)
Depending on the rotation implied by the first two PCs, you might see the wedge or you might see a somewhat different version, shown here in 3d (using `ordirgl()` in place of `plot()`)
[![enter image description here](https://i.stack.imgur.com/BMI0J.png)](https://i.stack.imgur.com/BMI0J.png)
Here, in 3d we see multiple spikes protruding from the centre mass.
For Gaussian random variables ($X_i \sim \mathcal{N}(\mu = 0, \sigma = 1)$) where each has the same mean and variance we see a sphere of points
```
set.seed(1)
df2 <- data.frame(matrix(rnorm(5*10000), ncol = 5))
plot(rda(df2), display = "sites")
```
[![enter image description here](https://i.stack.imgur.com/VyAQr.png)](https://i.stack.imgur.com/VyAQr.png)
[![enter image description here](https://i.stack.imgur.com/bq7sC.png)](https://i.stack.imgur.com/bq7sC.png)
And for uniform positive random variables we see a cube
```
set.seed(1)
df3 <- data.frame(matrix(runif(3*10000), ncol = 3))
plot(rda(df3), display = "sites")
```
[![enter image description here](https://i.stack.imgur.com/XtV2v.png)](https://i.stack.imgur.com/XtV2v.png)
[![enter image description here](https://i.stack.imgur.com/TBsPq.png)](https://i.stack.imgur.com/TBsPq.png)
Note that here, for illustration, I show the uniform case using just 3 random variables, hence the points describe a cube in 3d. With higher dimensions/more variables we can't represent the 5d hypercube perfectly in 3d and hence the distinct "cube" shape gets distorted somewhat. Similar issues affect the other examples shown, but it's still easy to see the constraints in those examples.
For your data, a log transformation of the variables prior to PCA would pull in the tails and stretch out the clumped data, just as you might use such a transformation in a linear regression.
Other shapes can crop up in PCA plots; one such shape is an artefact of the metric representation preserved in the PCA and is known as the *horseshoe*. Data with a long or dominant gradient (samples arranged along a single dimension, with variables increasing from 0 to a maximum and then decreasing again to 0 along portions of the data) are well known to generate such artefacts. Consider
```
ll <- data.frame(Species1 = c(1,2,4,7,8,7,4,2,1,rep(0,10)),
Species2 = c(rep(0, 5),1,2,4,7,8,7,4,2,1, rep(0, 5)),
Species3 = c(rep(0, 10),1,2,4,7,8,7,4,2,1))
rownames(ll) <- paste0("site", seq_len(NROW(ll)))
matplot(ll, type = "o", col = 1:3, pch = 21:23, bg = 1:3,
ylab = "Abundance", xlab = "Sites")
```
[![enter image description here](https://i.stack.imgur.com/s1iXQ.png)](https://i.stack.imgur.com/s1iXQ.png)
which produces an extreme horseshoe, where points at the ends of the axes bend back into the middle.
[![enter image description here](https://i.stack.imgur.com/9fsYy.png)](https://i.stack.imgur.com/9fsYy.png)
|
xsl: how to split strings?
I want to split an address on semicolons (`;`) into rows separated by `<br />`:
e.g. if `address`=`123 Elm Street`, I want to output `123 Elm Street`,
but if `address`=`123 Elm Street;PO Box 222`, I want to output
```
123 Elm Street<br />PO Box 222
```
and if `address`=`123 Elm Street;PO Box 222;c/o James Jones`, I want to output
```
123 Elm Street<br />PO Box 222<br />c/o James Jones
```
Is there a way to do this? (probably easy but I'm not that familiar with XSLT)
The plain XSL selector is
```
<xsl:value-of select="address"/>
```
and I would like to modify this XSLT fragment to split on semicolon.
---
**update**: Apparently the answer involves the use of `<xsl:call-template>` and the functions `substring-before()` and `substring-after()`.
But I'm a beginner to XSLT and I could really use some help for how to do this.
| I. Plain XSLT 1.0 solution:
**This transformation**:
```
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:template match="text()" name="split">
<xsl:param name="pText" select="."/>
<xsl:if test="string-length($pText)">
<xsl:if test="not($pText=.)">
<br />
</xsl:if>
<xsl:value-of select=
"substring-before(concat($pText,';'),';')"/>
<xsl:call-template name="split">
<xsl:with-param name="pText" select=
"substring-after($pText, ';')"/>
</xsl:call-template>
</xsl:if>
</xsl:template>
</xsl:stylesheet>
```
**when applied on this XML document**:
```
<t>123 Elm Street;PO Box 222;c/o James Jones</t>
```
**produces the wanted, correct result**:
```
123 Elm Street<br />PO Box 222<br />c/o James Jones
```
**II. FXSL 1 (for XSLT 1.0):**
Here we just use the **[FXSL](http://fxsl.sf.net)** template `str-map` (and do not have to write a recursive template for the 999th time):
```
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:f="http://fxsl.sf.net/"
xmlns:testmap="testmap"
exclude-result-prefixes="xsl f testmap"
>
<xsl:import href="str-dvc-map.xsl"/>
<testmap:testmap/>
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:template match="/">
<xsl:variable name="vTestMap" select="document('')/*/testmap:*[1]"/>
<xsl:call-template name="str-map">
<xsl:with-param name="pFun" select="$vTestMap"/>
<xsl:with-param name="pStr" select=
"'123 Elm Street;PO Box 222;c/o James Jones'"/>
</xsl:call-template>
</xsl:template>
<xsl:template name="replace" mode="f:FXSL"
match="*[namespace-uri() = 'testmap']">
<xsl:param name="arg1"/>
<xsl:choose>
<xsl:when test="not($arg1=';')">
<xsl:value-of select="$arg1"/>
</xsl:when>
<xsl:otherwise><br /></xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
```
**when this transformation is applied on any XML document (not used), the same, wanted correct result is produced**:
```
123 Elm Street<br/>PO Box 222<br/>c/o James Jones
```
**III. Using XSLT 2.0**
```
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:template match="text()">
<xsl:for-each select="tokenize(.,';')">
<xsl:sequence select="."/>
<xsl:if test="not(position() eq last())"><br /></xsl:if>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
```
**when this transformation is applied on this XML document**:
```
<t>123 Elm Street;PO Box 222;c/o James Jones</t>
```
**the wanted, correct result is produced**:
```
123 Elm Street<br />PO Box 222<br />c/o James Jones
```
|
Does a SqlDataSource run a select query on load if no control is bound to it?
How can I configure a gridview and datasource on a page to only execute the query if the user clicks a button?
The datasource will return over 1 million records and the page will be accessed by a lot of people at the same time. One potential way to achieve this is to set up the datasource with a connection string and query, but do not assign it to the grid view. Then assign the gridview to the datasource and call databind when it is needed.
In this scenario, will the datasource run the query on page load? Or will it only run the query when I call databind on the gridview?
| The short answer is no; the datasource select statement is only called when the bind method is called.
see details <http://msdn.microsoft.com/en-us/library/dz12d98w%28v=vs.80%29.aspx>
<http://msdn.microsoft.com/en-us/library/w1kdt8w2%28v=vs.100%29.aspx>
Transcribed from the second link:
The data source control executes the commands when its corresponding Select, Update, Delete, or Insert method is called. The Select method is called automatically when you call the DataBind method of the page or of a control bound to the data source control. You can also call any of the four methods explicitly when you want the data source control to execute a command. Some controls, such as the GridView control, can call the methods automatically, without requiring that you call the methods or that you explicitly call the DataBind method.
|
Drop-caps using CSS
How can I make the first character of each paragraph look like this:
![enter image description here](https://i.stack.imgur.com/yc7Qk.gif)
I'd prefer using CSS only.
|
```
p:first-letter {
float: left;
font-size: 5em;
line-height: 0.5em;
padding-bottom: 0.05em;
padding-top: 0.2em;
}
```
```
<p> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
```
Tweak the font, padding, `line-height` as needed.
Example: <http://jsfiddle.net/RLdw2/>
|
Postgresql 'select distinct on' in hibernate
Does Hibernate support `SELECT DISTINCT ON` queries?
```
SELECT DISTINCT ON (location) location, time, report
FROM weather_reports
ORDER BY location, time DESC;
```
Is there a way to create Hibernate criteria for that query?
| Since there was all "suspecting" and "guessing" here when I stumbled across this question, I'll answer it definitively:
No, Hibernate does not support a DISTINCT ON query.
```
org.hibernate.hql.internal.ast.QuerySyntaxException:
unexpected token: ON near line 1, column 17
[SELECT DISTINCT ON (location) location, time, report FROM weather_reports ORDER BY location, time DESC]
```
Tested under Hibernate 4.3.9-final
Please note that this doesn't apply to normal DISTINCT queries.
See [https://stackoverflow.com/questions/263850...](https://stackoverflow.com/questions/263850/how-do-you-create-a-distinct-query-in-hql)
|
Long notes after stop AKSequencer
Sometimes I need some long notes to keep playing after the sequencer stops.
```
akSequencer.stop() // Need to put some code to ask the question
```
Is there any way to keep the sound of a long note when AKSequencer stops?
| Instead of connecting your AKMusicTrack's MIDI output directly to your sampler (or oscillator bank or whatever), send it to an `AKCallbackInstrument`. In the callback function, you can check the status of the sequencer's MIDI messages and send the noteOn and noteOff messages to your sampler from there. In the callback you can add conditional logic; for example, you could use some flag to ignore the noteOff messages under certain conditions.
For the record, this is how I always set up my sequencers, since you can send output not only to your sampler, but also to external MIDI, Audiobus MIDI and so on, as well as trigger UI updates, from the same AKMusicTrack using a callback.
```
var seq = AKSequencer()
var sampler = AKAppleSampler()
var callbackInst: AKCallbackInstrument!
var track: AKMusicTrack!
var allowNoteOff: Bool = true
func setupSequencerCallback() {
track = seq.newTrack()
callbackInst = AKCallbackInstrument()
track.setMIDIOutput(callbackInst.midiIn)
callbackInst.callback = { status, note, vel in
switch status {
case .noteOn:
try? self.sampler.play(noteNumber: note, velocity: vel, channel: 0)
case .noteOff:
if self.allowNoteOff {
try? self.sampler.stop(noteNumber: note, channel: 0)
}
default:
return
}
}
}
```
|
ngClass - Dynamically add class name based on @input
I would like to dynamically add a class based on an input parameter but only if the input is an 'approved' string.
I have a component with an input and a class array against which I want to check the input:
```
@Input() modalSize?: string;
sizeClassList = ['xs', 'small', 'medium', 'large', 'xl'];
```
I have tried the following method within the component:
```
sizingMethod() {
const isValid = this.sizeClassList.indexOf(this.modalSize) >= 0;
if (isValid) {
return 'modal__dialog--' + this.modalSize;
}
}
```
Within the template:
```
<div class="modal__dialog" [ngClass]="sizingMethod()"> ... </div>
```
Essentially I would like to add an additional sizing class based on an input where the user only has to input the size.
If the user inputs [modalSize]="small", the class added will be 'modal\_\_dialog--small' and if the user inputs [modalSize]="derp", no class will be added.
What is a good way to go about this?
\*edit: Title edited to be more clear
| Your approach is correct, although it can be slightly improved:
```
export type Size = 'xs' | 'small' | 'medium' | 'large' | 'xl';
const sizeClassList: Array<Size> = ['xs', 'small', 'medium', 'large', 'xl'];
export class YourComponent implements OnChanges {
@Input() modalSize: Size;
modalClassName: string;
ngOnChanges(changes: SimpleChanges) {
if (changes['modalSize']) {
this.updateModalSize();
}
}
private updateModalSize() {
const isValid = sizeClassList.includes(this.modalSize);
this.modalClassName = 'modal__dialog--' + (isValid ? this.modalSize : 'medium');
}
}
```
In template:
```
<div class="modal__dialog" [ngClass]="modalClassName"> ... </div>
```
When you do something like `[ngClass]="sizingMethod()"`, especially without `ChangeDetectionStrategy.OnPush`, the method gets invoked every time Angular detects changes, so it can have poor performance.
|
Why destructuring works differently than in classic assignation in Javascript (ES6)?
As you can see here, we set "fibonacci" as an "iterable" object and we loop over it with a for..of:
```
let fibonacci = {
[Symbol.iterator]() {
let pre = 0, cur = 1;
return {
next() {
[pre, cur] = [cur, pre + cur];
return { done: false, value: cur }
}
}
}
}
for (var n of fibonacci) {
// truncate the sequence at 1000
if (n > 1000)
break;
console.log(n);
}
```
As expected in the for...of loop, console.log writes *1,2,3,5,8,...*
**BUT**
if I write `pre = cur; cur = pre + cur;` instead of `[pre, cur] = [cur, pre + cur];`
console.log will write *2,4,8,16,...*
**Why? Isn't destructuring just a way to set multiple values in a single line? How can we explain the difference in assignment?**
|
```
pre = cur; cur = pre + cur;
```
With the assignment to `pre`, you lost the old value of `pre` and the next assignment is wrong.
>
>
> ```
> pre cur comment values
> --- --- ---------------- -------
> 0 1 start values *
> 1 1 pre = cur
> 1 2 cur = pre + cur *
> 2 2 pre = cur
> 2 4 cur = pre + cur *
>
> ```
>
>
```
[pre, cur] = [cur, pre + cur];
```
The [destructuring assignment](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) keeps the values until the assignment of the whole array.
|
How to get HTML content text of a Wikipedia Page (via Wikipedia API)?
I just want to get the content (no links, no categories, no images... just text).
| There is no way to get "just the text" from the Wikipedia API. You can either download the HTML of the page (if you do this via index.php rather than api.php, use [`action=render`](http://www.mediawiki.org/wiki/Manual%3aParameters_to_index.php#Actions) to avoid downloading all the skin content) or the wikitext (which you can do via the API or by passing `action=raw` to index.php); you will then have to parse it yourself to remove the bits you don't want to keep.
In the HTML output, MediaWiki is generally good about adding classes to various interface elements you might want to filter out; the templates and such created by users are perhaps less so (e.g. the [hack for table sorting](http://en.wikipedia.org/wiki/Template%3aNts) just puts some text in a `display:none` span, no class).
To get the wikitext via the API, use [`prop=revisions`](http://www.mediawiki.org/wiki/API%3aProperties#revisions_.2F_rv). To get the rendered HTML, use [`action=parse`](http://www.mediawiki.org/wiki/API%3aParsing_wikitext#parse).
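For example, here is a minimal Python sketch of both approaches (the helper names are mine; it assumes the `requests` library and the standard `api.php` endpoint):

```
import requests

API = "https://en.wikipedia.org/w/api.php"

def get_wikitext(title):
    # prop=revisions with rvprop=content returns the raw wikitext
    params = {"action": "query", "prop": "revisions", "rvprop": "content",
              "rvslots": "main", "titles": title,
              "format": "json", "formatversion": "2"}
    data = requests.get(API, params=params).json()
    return data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

def get_rendered_html(title):
    # action=parse returns the rendered HTML, which you then filter yourself
    params = {"action": "parse", "page": title, "prop": "text",
              "format": "json", "formatversion": "2"}
    data = requests.get(API, params=params).json()
    return data["parse"]["text"]
```

Either way, stripping the unwanted links, categories and images out of the result is still up to you.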
|
DataFrame to HDFS in spark scala
I have a Spark data frame of the format org.apache.spark.sql.DataFrame = [user\_key: string, field1: string]. When I use saveAsTextFile to save the file in HDFS, the results look like [12345,xxxxx]. I don't want the opening and closing brackets written to the output file. If I use .rdd to convert it into an RDD, the brackets are still present in the RDD.
Thanks
| Just concatenate the values and store strings:
```
import org.apache.spark.sql.functions.{concat_ws, col}
import org.apache.spark.sql.Row
val expr = concat_ws(",", df.columns.map(col): _*)
df.select(expr).map(_.getString(0)).saveAsTextFile("some_path")
```
Or even better use `spark-csv`:
```
selectedData.write
.format("com.databricks.spark.csv")
.option("header", "false")
.save("some_path")
```
Another approach is to simply `map`:
```
df.rdd.map(_.toSeq.map(_.toString).mkString(","))
```
and save afterwards.
|
How to iterate over MultiKeyMap?
I'm using the MultiKeyMap from commons-collections, which provides multikey-value pairs. I have 3 keys which are Strings. I have two problems which I don't see how to solve.
How can I iterate over all multikey-value pairs? With a simple HashMap I know how to do it.
Second, how can I get all multikey-value pairs with the first two keys fixed? That means I would like to get something like this `multiKey.get("key1","key2",?);` Where the third key is not specified.
| Iteration over key-value pairs for MultiKeyMap is similar to a hash map:
```
MultiKeyMap<String, String> multiKeyMap = new MultiKeyMap();
multiKeyMap.put( "a1", "b1", "c1", "value1");
multiKeyMap.put( "a2", "b2", "c2", "value1");
for(Map.Entry<MultiKey<? extends String>, String> entry: multiKeyMap.entrySet()){
System.out.println(entry.getKey().getKey(0)
+" "+entry.getKey().getKey(1)
+" "+entry.getKey().getKey(2)
+ " value: "+entry.getValue());
}
```
For your second request you can write your own method based on the previous iteration.
```
public static Set<Map.Entry<MultiKey<? extends String>, String>> match2Keys(String first, String second,
MultiKeyMap<String, String> multiKeyMap) {
Set<Map.Entry<MultiKey<? extends String>, String>> set = new HashSet<>();
for (Map.Entry<MultiKey<? extends String>, String> entry : multiKeyMap.entrySet()) {
if (first.equals(entry.getKey().getKey(0))
&& second.equals(entry.getKey().getKey(1))) {
set.add(entry);
}
}
return set;
}
```
|
Collapse runs of consecutive numbers to ranges
Consider the following comma-separated string of numbers:
```
s <- "1,2,3,4,8,9,14,15,16,19"
s
# [1] "1,2,3,4,8,9,14,15,16,19"
```
Is it possible to collapse runs of consecutive numbers to its corresponding ranges, e.g. the run `1,2,3,4` above would be collapsed to the range `1-4`. The desired result looks like the following string:
```
s
# [1] "1-4,8,9,14-16,19"
```
| I took some heavy inspiration from the answers in [this question](https://stackoverflow.com/a/14868742/1465387).
```
findIntRuns <- function(run){
rundiff <- c(1, diff(run))
difflist <- split(run, cumsum(rundiff!=1))
unlist(lapply(difflist, function(x){
if(length(x) %in% 1:2) as.character(x) else paste0(x[1], "-", x[length(x)])
}), use.names=FALSE)
}
s <- "1,2,3,4,8,9,14,15,16,19"
s2 <- as.numeric(unlist(strsplit(s, ",")))
paste0(findIntRuns(s2), collapse=",")
[1] "1-4,8,9,14-16,19"
```
### EDIT: Multiple solutions: benchmarking time!
```
Unit: microseconds
expr min lq median uq max neval
spee() 277.708 295.517 301.5540 311.5150 1612.207 1000
seb() 294.611 313.025 321.1750 332.6450 1709.103 1000
marc() 672.835 707.549 722.0375 744.5255 2154.942 1000
```
@speendo's solution is the fastest at the moment, but none of these have been optimised yet.
|
Detecting folders/directories in javascript FileList objects
I have recently contributed some code to Moodle which uses some of the capabilities of HTML5 to allow files to be uploaded in forms via drag and drop from the desktop (the core part of the code is here: <https://github.com/moodle/moodle/blob/master/lib/form/dndupload.js> for reference).
This is working well, except for when a user **drags** a **folder / directory** instead of a real file. Garbage is then uploaded to the server, but with the filename matching the folder.
What I am looking for is an easy and reliable way to **detect** the presence of a **folder** in the **FileList** object, so I can skip it (and probably return a friendly error message as well).
I've looked through the documentation on MDN, as well as a more general web search, but not turned up anything. I've also looked through the data in the Chrome developer tools and it appears that the **'type'** of the File object is consistently set to **""** for folders. However, I'm not quite convinced this is the most reliable, cross-browser detection method.
Does anyone have any better suggestions?
| You cannot rely on `file.type`. A file without an extension will have a type of `""`. Save a text file with a `.jpg` extension and load it into a file control, and its type will display as `image/jpeg`. And, a folder named "someFolder.jpg" will also have its type as `image/jpeg`.
Instead, try to read the first byte of the file. If you are able to read the first byte, you have a file. If an error is thrown, you probably have a directory:
```
try {
await file.slice(0, 1).arrayBuffer();
// it's a file!
}
catch (err) {
// it's a directory!
}
```
If you are in the unfortunate position of supporting IE11, the file will not have the `arrayBuffer` method. You have to resort to the `FileReader` object:
```
// use this code if you support IE11
var reader = new FileReader();
reader.onload = function (e) {
// it's a file!
};
reader.onerror = function (e) {
// it's a directory!
};
reader.readAsArrayBuffer(file.slice(0, 1));
```
|
How to use two Twitter Bootstrap themes on one page?
I'm developing a single page web application using HTML, CSS, and JavaScript (without iframes).
I have a slide out menu on the left, which I want to contain elements styled according to a dark Bootstrap theme (from Bootswatch).
On the main area of the app, however, I want to place elements styled using another, light, Bootstrap theme.
Is there a way I can do that?
| I would suggest manually adding both themes into a CSS Scope using the `>` operator which is explained very well [in this post](https://stackoverflow.com/questions/4459821/css-selector-what-is-it).
For example, for bootstrap button class:
```
.light > /* this is the scope */
.btn {
...
}
```
This way, you can use the following syntaxes:
```
<div class="light">
<a href="#" class="btn btn-primary">Light Themed Link</a>
</div>
<div class="dark">
<a href="#" class="btn btn-primary">Dark Themed Link</a>
</div>
```
Since > means direct child, it only affects the children inside the marked scope element. This means that you don't have to repeat `class="light"` or `class="dark"` in every element you want to stylize. Instead you select a scope element (it may be body, a div, or even a span) and then you use Bootstrap classes as usual.
You can do this manually, but I'd suggest using `LESS`, which already comes integrated with the latest Bootstrap source code, or `SASS`, which you can find [here](https://github.com/twbs/bootstrap-sass).
Maybe there is a better option, but this is the only I can think about right now.
|
SimpleDateFormat shows wrong local Time
I want to store a string in a database (SQLite) for an Android app with the current time and date. For that purpose I am using SimpleDateFormat. Unfortunately, it does not show the correct time. I tried two options.
First Option (from [SimpleDateFormat with TimeZone](https://stackoverflow.com/questions/37747467/simpledateformat-with-timezone/37747640))
```
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss Z", Locale.getDefault());
sdf.format(new Date());
```
Second option (from [Java SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'") gives timezone as IST](https://stackoverflow.com/questions/19112357/java-simpledateformatyyyy-mm-ddthhmmssz-gives-timezone-as-ist))
```
SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss'Z'");
sdf2.setTimeZone(TimeZone.getTimeZone("CEST"));
```
In both cases the time is just wrong. It is not the local time that my laptop or phone is showing but the output time is 2 hours earlier. How can I change that? I would like to have the current time of Berlin (CEST) that is also shown on my computer. I appreciate every comment.
| Use `Europe/Berlin` instead of `CEST` and you will get the expected result.
```
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
public class Main {
public static void main(String[] args) {
SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss Z");
sdf2.setTimeZone(TimeZone.getTimeZone("Europe/Berlin"));
System.out.println(sdf2.format(new Date()));
}
}
```
**Output:**
```
2020-09-27 18:38:04 +0200
```
## A piece of advice:
I recommend you switch from the outdated and error-prone `java.util` date-time API and `SimpleDateFormat` to the [modern](https://www.oracle.com/technical-resources/articles/java/jf14-date-time.html) `java.time` date-time API and the corresponding formatting API (package, `java.time.format`). Learn more about the modern date-time API from **[Trail: Date Time](https://docs.oracle.com/javase/tutorial/datetime/index.html)**. If your Android API level is still not compliant with Java-8, check [Java 8+ APIs available through desugaring](https://developer.android.com/studio/write/java8-support-table) and [How to use ThreeTenABP in Android Project](https://stackoverflow.com/questions/38922754/how-to-use-threetenabp-in-android-project).
**Using the modern date-time API:**
```
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
public class Main {
public static void main(String[] args) {
ZonedDateTime zdt = ZonedDateTime.now(ZoneId.of("Europe/Berlin"));
// Default format
System.out.println(zdt);
// Some custom format
System.out.println(zdt.format(DateTimeFormatter.ofPattern("EEEE dd uuuu hh:mm:ss a z")));
}
}
```
**Output:**
```
2020-09-27T18:42:53.620168+02:00[Europe/Berlin]
Sunday 27 2020 06:42:53 pm CEST
```
## The modern API will alert you whereas the legacy API may fail over:
```
import java.time.ZoneId;
import java.time.ZonedDateTime;
public class Main {
public static void main(String[] args) {
ZonedDateTime zdt = ZonedDateTime.now(ZoneId.of("CEST"));
// ...
}
}
```
**Output:**
```
Exception in thread "main" java.time.zone.ZoneRulesException: Unknown time-zone ID: CEST
at java.base/java.time.zone.ZoneRulesProvider.getProvider(ZoneRulesProvider.java:279)
at java.base/java.time.zone.ZoneRulesProvider.getRules(ZoneRulesProvider.java:234)
at java.base/java.time.ZoneRegion.ofId(ZoneRegion.java:120)
at java.base/java.time.ZoneId.of(ZoneId.java:408)
at java.base/java.time.ZoneId.of(ZoneId.java:356)
at Main.main(Main.java:6)
```
As you can see, you get an exception in this case, whereas `SimpleDateFormat` will give you an undesirable result, as shown below:
```
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
public class Main {
public static void main(String[] args) {
SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss Z");
sdf2.setTimeZone(TimeZone.getTimeZone("CEST"));
System.out.println(sdf2.format(new Date()));
}
}
```
**Output:**
```
2020-09-27 16:47:45 +0000
```
You might be wondering what this undesirable result refers to. **The answer is**: when `SimpleDateFormat` doesn't understand a time-zone, it falls back (defaults) to `GMT` (same as `UTC`), i.e. it has ignored `CEST` and applied `GMT` in this case (not a good feature IMHO).
|
Netbeans while/for blocks to collapse folding
I use NetBeans 8 RC1. As in older versions, NetBeans does not fold while/for blocks. Is there any way to enable this?
Thanks.
| This is not possible in java (and c/c++) at the moment (see enhancement bugs [#209041](https://netbeans.org/bugzilla/show_bug.cgi?id=209041), [#222493](https://netbeans.org/bugzilla/show_bug.cgi?id=222493), [#233225](https://netbeans.org/bugzilla/show_bug.cgi?id=233225) and [#209784](https://netbeans.org/bugzilla/show_bug.cgi?id=209784)).
As a workaround you can use NB's code folding:
```
//<editor-fold defaultstate="collapsed" desc="An example of a loop">
while( what != ever )
{
ever++;
}
//</editor-fold>
```
Just mark the code you want to wrap into a folding and click on the yellow "hint bulb" and you can select folding there.
Btw, you can change the values of `defaultstate` and `desc`.
|
How would I move a message from Gmail Inbox to a label?
I'm trying to move messages away from Inbox into Processed label with this code:
```
$inbox = imap_open($host,$user,$pass) or die('Error: ' . imap_last_error());
if( $emails = imap_search($inbox,'ALL') )
{
foreach($emails as $email_number) {
imap_mail_move($inbox, $email_number, 'Processed') or die('Error');
}
}
imap_expunge($inbox);
imap_close($inbox);
```
Unfortunately, while the messages get the Processed label, they're still left in Inbox too.
How would I make them go away from Inbox?
| Actually... The reason why the emails were left in the inbox was that when imap\_mail\_move did its thing, the IDs of all the leftover messages got decremented by one, so when the foreach loop moved to the next message, one message was left behind. This skipping of a message repeated for every iteration. That's why it seemed that imap\_mail\_move was not working.
The solution is to use unique message UIDs instead of potentially repeating IDs:
```
$inbox = imap_open( $host, $user, $pass );
$emails = imap_search( $inbox, 'ALL', SE_UID );
if( $emails ) {
foreach( $emails as $email_uid ) {
imap_mail_move($inbox, $email_uid, 'processed', CP_UID);
}
}
```
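One more detail (an addition of mine, using the same functions the question's original code already calls): `imap_mail_move` only flags the messages as deleted in INBOX, so you still need to expunge before closing:

```
imap_expunge($inbox); // permanently remove the moved (flagged) messages
imap_close($inbox);
```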
|
Getting a list from a config file with ConfigParser
I have something like this in my config file (a config option that contains a list of strings):
```
[filters]
filtersToCheck = ['foo', '192.168.1.2', 'barbaz']
```
Is there a more elegant (built-in) way to get a list from filtersToCheck instead of removing the brackets, single-quotes, spaces and then using `split()` to do that? Maybe a different module?
(Using python3.)
| You cannot use a Python object like a list as the value in the config file. But you can of course store the items as comma-separated values and split once you read the value:
```
[filters]
filtersToCheck = foo,192.168.1.2,barbaz
```
and do
```
filtersToCheck = value.split(',')
```
The other approach is, of course, subclassing the SafeConfigParser class, stripping the [ and ], and constructing the list. You termed this ugly, but it is a viable solution.
The third way is to use a Python module as the config file. Some projects do this. Just expose filtersToCheck as a variable from your config.py module and use the list object directly. That is a clean solution. Some people are concerned about using a Python file as a config file (calling it a security hazard, which is a somewhat unfounded fear), and there is also a group who believe that users should edit config files, not Python files that serve as config files.
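For completeness, a minimal sketch (my addition, not built into `configparser`) that parses the original bracketed form directly by treating it as a Python literal with `ast.literal_eval`:

```
import ast
import configparser

parser = configparser.ConfigParser()
parser.read('filters.cfg')  # assumed file name

# Safely evaluate the Python-style list literal from the raw string value.
filters_to_check = ast.literal_eval(parser.get('filters', 'filtersToCheck'))
print(filters_to_check)  # ['foo', '192.168.1.2', 'barbaz']
```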
|
Alternating Background Color react-bootstrap-table
I am using react-bootstrap-table and I am trying to alternate the background color. The documentation leaves it a bit unclear what type of data in particular goes into its implementation of the conditional rendering function, so I cannot get the correct result. What am I doing wrong?
```
// Customization Function
function rowClassNameFormat(row, rowIdx) {
// row is whole row object
// rowIdx is index of row
return rowIdx % 2 === 0 ? 'backgroundColor: red' : 'backgroundColor: blue';
}
// Data
var products = [
{
id: '1',
name: 'P1',
price: '42'
},
{
id: '2',
name: 'P2',
price: '42'
},
{
id: '3',
name: 'P3',
price: '42'
},
];
// Component
class TrClassStringTable extends React.Component {
render() {
return (
<BootstrapTable data={ products } trClassName={this.rowClassNameFormat}>
<TableHeaderColumn dataField='id' isKey={ true }>Product ID</TableHeaderColumn>
<TableHeaderColumn dataField='name'>Product Name</TableHeaderColumn>
<TableHeaderColumn dataField='price'>Product Price</TableHeaderColumn>
</BootstrapTable>
);
}
}
```
| You can customize the inline styles with `trStyle` instead of `trClassName`. The inlined styles should also be returned in object form, not as a string.
**Example**
```
function rowStyleFormat(row, rowIdx) {
return { backgroundColor: rowIdx % 2 === 0 ? 'red' : 'blue' };
}
class TrClassStringTable extends React.Component {
render() {
return (
<BootstrapTable data={ products } trStyle={rowStyleFormat}>
<TableHeaderColumn dataField='id' isKey={ true }>Product ID</TableHeaderColumn>
<TableHeaderColumn dataField='name'>Product Name</TableHeaderColumn>
<TableHeaderColumn dataField='price'>Product Price</TableHeaderColumn>
</BootstrapTable>
);
}
}
```
|
c# DataGrid BindingListCollectionView custom filter throwing invalid usage of aggregate function mean
I have a collection view in which I would like to apply a "greater than average" filter.
The issue is that the column type is string.
A normal greater-than comparison against any number works perfectly after converting to double; the issue is how to do it for the average.
I tried the following code:
```
collectionView.CustomFilter = $"CONVERT({col}, 'System.Double') > AVG([{col}])";
```
As expected, it breaks since AVG can't be applied to a string type. But when I tried to put
```
AVG([CONVERT({col}, 'System.Double')])
```
it doesn't evaluate the conversion.
Any suggestions for overcoming this?
| It's actually a limitation of the underlying `DataView.RowFilter` (and `DataColumn.Expression`) supported [Aggregates](https://learn.microsoft.com/en-us/dotnet/api/system.data.datacolumn.expression?view=netframework-4.8#aggregates):
>
> An aggregate can only be applied to a single column and no other expressions can be used inside the aggregate.
>
>
>
The only way to overcome it I see is to add (dynamically) calculated column to the underlying `DataTable` which performs the `CONVERT`, and then use that column inside the filter expression.
Something like this:
```
var dataView = collectionView.SourceCollection as DataView;
if (dataView.Table.Columns[col].DataType == typeof(string))
{
var calcCol = col + "_Double";
if (!dataView.Table.Columns.Contains(calcCol))
dataView.Table.Columns.Add(calcCol, typeof(double), $"CONVERT({col}, 'System.Double')");
col = calcCol;
}
collectionView.CustomFilter = $"{col} > AVG({col})";
```
|
User editable slugs with Friendly ID
**Case:**
My station forms contain a slug field; if a value is entered, it should be used as the slug.
EDIT: some clarification:
What I want is much like how slugs work in wordpress:
- If no slug is provided -> slug the name
- If slug is provided -> use the user entered slug
- If slug is updated -> push old slug to history
**My problem:**
I can't figure out how to get FriendlyId to use the user-provided slug.
```
class Station < ActiveRecord::Base
extend FriendlyId
belongs_to :user
has_many :measures
validates_uniqueness_of :hw_id
validates_presence_of :hw_id
class_attribute :zone_class
self.zone_class ||= Timezone::Zone
friendly_id :name, :use => [:slugged, :history]
before_save :set_timezone!
....
def should_generate_new_friendly_id?
name_changed? or slug_changed?
end
end
```
---
edit:
```
<%= form_for(@station) do |f| %>
<%=
f.div_field_with_label(:name) do |key|
f.text_field(key)
end
%>
<%=
f.div_field_with_label(:slug) do |key|
f.text_field(key)
end
%>
<%=
f.div_field_with_label(:hw_id, 'Hardware ID') do |key|
f.text_field(key)
end
%>
<%=
f.div_field_with_label(:latitude) do |key|
f.text_field(key)
end
%>
<%=
f.div_field_with_label(:longitude) do |key|
f.text_field(key)
end
%>
<%= f.div_field_with_label(:user_id, "Owner") do |key|
f.select(:user_id, options_from_collection_for_select(User.all, :id, :email), { include_blank: true })
end
%>
<div class="actions">
<%= f.submit %>
</div>
<% end %>
```
| This is how I solved it:
```
class Station < ActiveRecord::Base
extend FriendlyId
belongs_to :user
has_many :measures
validates_uniqueness_of :hw_id
validates_presence_of :hw_id
class_attribute :zone_class
self.zone_class ||= Timezone::Zone
friendly_id :name, :use => [:slugged, :history]
before_save :evaluate_slug
before_save :set_timezone!
def should_generate_new_friendly_id?
if !slug?
name_changed?
else
false
end
end
end
```
**And the tests:**
`/spec/models/station_spec.rb`
```
describe Station do
...
let(:station) { create(:station) }
describe "slugging" do
it "should slug name in absence of a slug" do
station = create(:station, name: 'foo')
expect(station.slug).to eq 'foo'
end
it "should use slug if provided" do
station = create(:station, name: 'foo', slug: 'bar')
expect(station.slug).to eq 'bar'
end
end
...
end
```
`/spec/controllers/stations_controller.rb`
```
describe StationsController do
...
describe "POST create" do
it "creates a station with a custom slug" do
valid_attributes[:slug] = 'custom_slug'
post :create, {:station => valid_attributes}
get :show, id: 'custom_slug'
expect(response).to be_success
end
...
end
describe "PUT update" do
it "updates the slug" do
put :update, {:id => station.to_param, :station => { slug: 'custom_slug' }}
get :show, id: 'custom_slug'
expect(response).to be_success
end
...
end
...
end
```
|
Confused about "super" keyword in this Java example
In this example from the Java website's tutorial [page](https://docs.oracle.com/javase/tutorial/java/IandI/override.html), two interfaces define the same default method `startEngine()`. A class `FlyingCar` implements both interfaces and must override `startEngine()` because of the obvious conflict.
```
public interface OperateCar {
// ...
default public int startEngine(EncryptedKey key) {
// Implementation
}
}
public interface FlyCar {
// ...
default public int startEngine(EncryptedKey key) {
// Implementation
}
}
public class FlyingCar implements OperateCar, FlyCar {
// ...
public int startEngine(EncryptedKey key) {
FlyCar.super.startEngine(key);
OperateCar.super.startEngine(key);
}
}
```
I don't understand why, from `FlyingCar`, `super` is used to refer to both versions of `startEngine()` in the `OperateCar` and `FlyCar` interfaces. As I understand it, `startEngine()` was not defined in any super class, therefore it shouldn't be referred to as residing in one. I also do not see any relationship between `super` and the two interfaces as implemented in `FlyingCar`.
|
>
> As I understand it, startEngine() was not defined in any super class, therefore it shouldn't be referred to as residing in one.
>
>
>
Yes it was defined. It's the default implementation, for example:
>
>
> ```
> public interface OperateCar {
> // ...
> default public int startEngine(EncryptedKey key) {
> // Implementation
> }
> }
>
> ```
>
>
`OperateCar.super.startEngine(key)` will execute the default implementation.
If there was no default implementation, just an interface method,
then the statement wouldn't make sense, as the interface wouldn't contain an implementation, like this:
```
public interface OperateCar {
// ...
int startEngine(EncryptedKey key);
}
```
---
>
> I also do not see any relationship between super and the two interfaces as implemented in FlyingCar
>
>
>
Not sure I understand what you're asking.
`super` is a way to call the implementation in the parent interface.
Without `super`, there's just no other way to express that.
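Here is a minimal, self-contained sketch (the method bodies are my own illustration, not from the tutorial) that you can run to see both default implementations invoked through `InterfaceName.super`:

```
interface OperateCar {
    default String startEngine() { return "OperateCar default"; }
}

interface FlyCar {
    default String startEngine() { return "FlyCar default"; }
}

public class FlyingCar implements OperateCar, FlyCar {
    // The compiler forces this override; super picks the parent interface.
    public String startEngine() {
        return OperateCar.super.startEngine() + " + " + FlyCar.super.startEngine();
    }

    public static void main(String[] args) {
        // prints: OperateCar default + FlyCar default
        System.out.println(new FlyingCar().startEngine());
    }
}
```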
|
Why does `this` inside filter() get undefined in VueJS?
I am creating a DOB Form.
I am using VueJS in the form. The user should select their month of birth first so that the day options match the number of days in the respective month.
I am using `filter()` and the problem is `this` inside `filter()` is undefined. How can I fix this?
```
new Vue ({
el: '.app',
data: {
months: [
{month: 'January', days: 31},
{month: 'February', days: 28},
{month: 'March', days: 31},
{month: 'April', days: 30},
{month: 'May', days: 31},
{month: 'June', days: 30},
{month: 'July', days: 31},
{month: 'August', days: 31},
{month: 'September', days: 30},
{month: 'October', days: 31},
{month: 'November', days: 30},
{month: 'December', days: 31},
],
selectedMonth: []
},
computed: {
filterDays() {
return this.months.filter(function(value) {
return value.month === this.selectedMonth;
});
}
},
});
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.16/vue.js"></script>
<div class="app">
<select id="dobd">
<option v-for="day in filterDays[0].days" :value="day">{{ day }}</option>
</select>
</div>
```
I know using a global variable might be the solution but I want to use `selectedMonth` inside `data()` due to my own needs.
| With `function () {}` the context (`this`) is lost. Which means, inside the `filter` argument function, `this` will not be the Vue instance.
There are some possible solutions:
- Use arrow functions (**preferred**):
```
filterDays() {
return this.months.filter((value) => {
return value.month === this.selectedMonth;
});
}
```
- Use `.bind()`:
```
filterDays() {
return this.months.filter(function(value) {
return value.month === this.selectedMonth;
}.bind(this));
}
```
- Use a local variable outside the function:
```
filterDays() {
let vm = this;
return this.months.filter(function(value) {
return value.month === vm.selectedMonth;
});
}
```
Demo:
```
new Vue ({
el: '.app',
data: {
months: [
{month: 'January', days: 31},
{month: 'February', days: 28},
{month: 'March', days: 31},
{month: 'April', days: 30},
{month: 'May', days: 31},
{month: 'June', days: 30},
{month: 'July', days: 31},
{month: 'August', days: 31},
{month: 'September', days: 30},
{month: 'October', days: 31},
{month: 'November', days: 30},
{month: 'December', days: 31},
],
selectedMonth: 'January' // changed to a valid month
},
computed: {
filterDays() {
return this.months.filter((value) => {
return value.month === this.selectedMonth;
});
}
},
});
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.16/vue.js"></script>
<div class="app">
<select id="dobd">
<option v-for="day in filterDays[0].days" :value="day">{{ day }}</option>
</select>
</div>
```
|
Accessing Python dict values with the key start characters
I was wondering: would it be possible to access dict values with incomplete keys (as long as there is no more than one matching entry for a given string)? For example:
```
my_dict = {'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'}
print my_dict['Date']
>> '15th july'
```
Is this possible? How could it be done?
| You can't do that directly with `dict[keyword]`. You have to iterate through the `dict`, match each key against the keyword, and return the corresponding value if a match is found.
This is going to be an `O(N)` operation.
```
>>> my_dict = {'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'}
>>> next(v for k,v in my_dict.items() if 'Date' in k)
'15th july'
```
To get all such values use a list comprehension:
```
>>> [ v for k, v in my_dict.items() if 'Date' in k]
['15th july']
```
use `str.startswith` if you want only those values whose keys starts with 'Date':
```
>>> next( v for k, v in my_dict.items() if k.startswith('Date'))
'15th july'
>>> [ v for k, v in my_dict.items() if k.startswith('Date')]
['15th july']
```
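If you need this often, you could wrap it in a small helper that also enforces the question's "no more than one entry" condition (a sketch of mine, not a built-in):

```
def get_by_prefix(d, prefix):
    """Return the value whose key starts with prefix; fail on 0 or >1 matches."""
    matches = [v for k, v in d.items() if k.startswith(prefix)]
    if len(matches) != 1:
        raise KeyError('%d keys start with %r' % (len(matches), prefix))
    return matches[0]

my_dict = {'name': 'Klauss', 'age': 26, 'Date of birth': '15th july'}
print(get_by_prefix(my_dict, 'Date'))  # 15th july
```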
|
is there any way to get samples under each leaf of a decision tree?
I have trained a decision tree using a dataset. Now I want to see which samples fall under which leaf of the tree.
From the tree shown here, I want the red-circled samples.
[![enter image description here](https://i.stack.imgur.com/DYhwf.png)](https://i.stack.imgur.com/DYhwf.png)
I am using Python's scikit-learn implementation of decision trees.
| If you want only the leaf for each sample you can just use
```
clf.apply(iris.data)
```
>
> array([ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 1,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5,
> 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
> 5, 5, 14, 5, 5, 5, 5, 5, 5, 10, 5, 5, 5, 5, 5, 10, 5,
> 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 16, 16,
> 16, 16, 16, 16, 6, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
> 8, 16, 16, 16, 16, 16, 16, 15, 16, 16, 11, 16, 16, 16, 8, 8, 16,
> 16, 16, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16])
>
>
>
If you want to get all samples for each node you could calculate all the decision paths with
```
dec_paths = clf.decision_path(iris.data)
```
Then loop over the decision paths, convert them to arrays with `toarray()`, and check whether each sample belongs to a node or not. Everything is stored in a `defaultdict` where the key is the node number and the values are the sample numbers.
```
for d, dec in enumerate(dec_paths):
for i in range(clf.tree_.node_count):
if dec.toarray()[0][i] == 1:
samples[i].append(d)
```
**Complete code**
```
import sklearn.datasets
import sklearn.tree
import collections
clf = sklearn.tree.DecisionTreeClassifier(random_state=42)
iris = sklearn.datasets.load_iris()
clf = clf.fit(iris.data, iris.target)
samples = collections.defaultdict(list)
dec_paths = clf.decision_path(iris.data)
for d, dec in enumerate(dec_paths):
for i in range(clf.tree_.node_count):
if dec.toarray()[0][i] == 1:
samples[i].append(d)
```
**Output**
```
print(samples[13])
```
>
> [70, 126, 138]
>
>
>
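Since `decision_path` returns a sparse indicator matrix, the same grouping can also be read off without the inner Python loop (a short sketch of mine, reusing the fitted `clf` and `iris` data from above):

```
node_indicator = clf.decision_path(iris.data)  # shape: (n_samples, n_nodes)
# Row indices (samples) whose path passes through node 13:
print(node_indicator[:, 13].nonzero()[0])  # should match samples[13] above
```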
|
Using Forms in HTML to calculate total price
How can I write a function that will calculate the total price of the computer components selected by the user? This is what I have so far, but now I seem to be stuck. Any ideas? I am trying to create an array for the memory prices, HDD prices and network prices, but I don't know where to go from here.
```
<script type = "text/javascript">
function calculatePrice(myform)
{
var memPrice=myForm.memoryItem.selectedIndex;
}
</script>
</head>
<body>
<table width="80%" align="center" cellpadding="0" cellspacing="0">
<tr>
<td height="75" colspan="2"><img src="CCLogo.jpg" width="515" height="79"></td>
</tr>
<tr>
<td width="29%" height="103"><img src="computer.jpg" width="120" height="83"></td>
<td width="71%"><p class="c">The base price of this Computer is $499.<br />Because this is back-to-school special<br /> the manufacturer offers limited options.</p></td>
</tr>
<tr>
<td height="56" colspan="2" class="d">Intel i5 Pentium, 4GB RAM, 500 GB HD, DVD/CDROM Drive, 2GB 3D AGP<br /> graphics adapter, 15 inch monitor, 32-bit Wave sound card and speakers</td>
</tr>
<tr>
<td>Optional Upgrades</td>
<td> </td>
</tr>
<tr>
<td height="50">
<FORM Name="myform">
<SELECT NAME="memoryItem" onChange="calculatePrice(myform)">
<OPTION>Select One Choice from List-Memory Upgrade
<OPTION>8 GB add $49
<OPTION>12 GB add $98
<OPTION>16 GB add $159
</SELECT>
</td>
<td> </td>
</tr>
<tr>
<td height="48">
<SELECT NAME="hddItem" onChange="calculatePrice(myform)">
<OPTION>Select One Choice from List-HDD Upgrade
<OPTION>1 TB HD add $109
<OPTION>1.5 TB HD add $150
<OPTION>2 TB HD add $199
<OPTION>250 GB SSD add $299
</SELECT>
</td>
<td> </td>
</tr>
<tr>
<td height="48">
<SELECT NAME="networkItem" onChange="calculatePrice(myform)">
<OPTION>Select One Choice from List- Network Upgrade
<OPTION>56K V90 or X2 Modem add $109
<OPTION>10/100 NIC add $79
<OPTION>Combo Modem and NIC add $279
</SELECT>
</FORM>
</td>
<td> </td>
</tr>
<tr>
<td height="58">
<button type="button" onclick="caculatePrice()">Calculate</button>
</td>
<td> </td>
</tr>
<tr>
<td height="73">The new calculated price:<INPUT Type="Text" Name="PicExtPrice" Size=8> </td>
<td> </td>
</tr>
</table>
</body>
```
| - Add ids to the select elements
- Add a value attribute to each option tag
- Fix the JavaScript
Something like this:
```
function calculatePrice(){
//Get selected data
var elt = document.getElementById("memoryItem");
var memory = elt.options[elt.selectedIndex].value;
elt = document.getElementById("hddItem");
var hdd = elt.options[elt.selectedIndex].value;
elt = document.getElementById("networkItem");
var network = elt.options[elt.selectedIndex].value;
//convert data to integers
memory = parseInt(memory);
hdd = parseInt(hdd);
network = parseInt(network);
//calculate total value
var total = memory+hdd+network;
//print value to PicExtPrice
document.getElementById("PicExtPrice").value=total;
}
```
And html
```
<FORM Name="myform">
<SELECT NAME="memoryItem" onChange="calculatePrice()" id="memoryItem">
<OPTION value="0">Select One Choice from List-Memory Upgrade</OPTION>
<OPTION value="49">8 GB add $49</OPTION>
<OPTION value="98">12 GB add $98</OPTION>
<OPTION value="159">16 GB add $159</OPTION>
</SELECT>
<SELECT NAME="hddItem" onChange="calculatePrice()" id="hddItem">
<OPTION value="0">Select One Choice from List-HDD Upgrade</OPTION>
<OPTION value="109">1 TB HD add $109</OPTION>
<OPTION value="150">1.5 TB HD add $150</OPTION>
<OPTION value="199">2 TB HD add $199</OPTION>
<OPTION value="299">250 GB SSD add $299</OPTION>
</SELECT>
<SELECT NAME="networkItem" onChange="calculatePrice()" id="networkItem">
<OPTION value="0">Select One Choice from List- Network Upgrade</OPTION>
<OPTION value="109">56K V90 or X2 Modem add $109</OPTION>
<OPTION value="79">10/100 NIC add $79</OPTION>
<OPTION value="279">Combo Modem and NIC add $279</OPTION>
</SELECT>
</FORM>
<button type="button" onclick="calculatePrice()">Calculate</button>
The new calculated price:<INPUT type="text" id="PicExtPrice" Size=8>
```
Try it here <http://jsfiddle.net/Wm6zC/>
|
CSS triangle and box shadow removing from a specific area
I have this code that makes a box and shows a triangle attached to it on the left side:
CSS:
```
.triangle-box{
width: 0;
height: 0;
margin-top: 10px;
border-top: 15px solid transparent;
border-right: 15px solid #fff;
border-bottom: 15px solid transparent;
float:left;
}
.triangle-box-content{
background-color: white;
-webkit-border-radius: 2px;
-moz-border-radius: 2px;
border-radius: 2px;
border-bottom-color: #989898;
height: 140px;
width: 530px;
float:left;
text-align: left;
}
```
Now I want to attach a shadow to this element as a whole, so I added this code to the triangle-box and triangle-box-content classes:
```
-webkit-box-shadow: 0 0 3px 5px #7a7a7a;
-moz-box-shadow: 0 0 3px 5px #7a7a7a;
box-shadow: 0 0 3px 5px #7a7a7a;
```
But this makes the shadow go around the box and the triangle, making it look like two different divs. I want to remove the shadow from the region where the triangle and the box meet. Is there any way to do that?
HTML:
```
<div class="triangle-box"></div>
<div class="triangle-box-content"></div>
```
| I took a bit longer, but I'll post it all the same. This technique rotates a pseudo-element by 45 degrees with a bottom-left shadow that sticks to the arrow.
**----UPDATE----**
This technique works without `.triangle-box`.
---
**[FIDDLE](http://jsfiddle.net/webtiki/xfVeh/)**
HTML :
```
<div class="triangle-box-content"></div>
```
CSS :
```
.triangle-box-content:before, .triangle-box-content:after{
content:"";
position:absolute;
background:#fff;
}
.triangle-box-content:before {
z-index:-1;
top:13px;
left:-10px;
height:25px;
width:25px;
-moz-box-shadow: -5px 5px 5px 0px #7a7a7a;
-webkit-box-shadow: -5px 5px 5px 0px #7a7a7a;
-o-box-shadow: -5px 5px 5px 0px #7a7a7a;
box-shadow: -5px 5px 5px 0px #7a7a7a;
transform:rotate(45deg);
-ms-transform:rotate(45deg);
-webkit-transform:rotate(45deg);
}
.triangle-box-content {
height: 140px;
width: 530px;
float:left;
margin-left:50px;
text-align: left;
position:relative;
}
.triangle-box-content:after {
width:100%;
height:100%;
z-index:-2;
left:0;
top:0;
-webkit-border-radius: 2px;
-moz-border-radius: 2px;
border-radius: 2px;
-webkit-box-shadow: 0 0 3px 5px #7a7a7a;
-moz-box-shadow: 0 0 3px 5px #7a7a7a;
box-shadow: 0 0 3px 5px #7a7a7a;
}
```
|
Is there any limit on a cookie's expiry date in the browser?
I am trying to set cookies in a JavaScript file, and I need to set the "max-age" of the cookies to 2 years.
The cookie probably only gets set to about 1 year and a few days.
Does someone know if there is any limit on the expiry date?
My test in dev-tools that doesn't work:
document.cookie='test=res;path=/;max-age=31619000';
| 400 days (or less depending on the browser).
[HTTP Working Group Specification](https://httpwg.org/http-extensions/draft-ietf-httpbis-rfc6265bis.html#name-the-expires-attribute) states the following:
>
> The user agent MUST limit the maximum value of the Expires attribute. The limit SHOULD NOT be greater than 400 days (34560000 seconds) in the future. The RECOMMENDED limit is 400 days in the future, but the user agent MAY adjust the limit (see [Section 7.2](https://httpwg.org/http-extensions/draft-ietf-httpbis-rfc6265bis.html#cookie-policy)). Expires attributes that are greater than the limit MUST be reduced to the limit.
>
>
>
Explanation from [chrome status](https://chromestatus.com/feature/4887741241229312)
>
> When cookies are set with an explicit Expires/Max-Age attribute the
> value will now be capped to no more than 400 days in the future.
> Previously, there was no limit and cookies could expire as much as
> multiple millennia in the future.
>
>
> 400 days was
> chosen as a round number close to 13 months in duration. 13 months was
> chosen to ensure that sites one visits roughly once a year (e.g.,
> picking health insurance benefits) will continue to work.
>
>
>
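If you want to observe the cap yourself, the (Chromium-only, at the time of writing) Cookie Store API exposes the stored expiry, which plain `document.cookie` does not — a quick sketch for the dev-tools console:

```
// Ask for ~2 years; the browser silently clamps it to its limit (e.g. 400 days).
await cookieStore.set({
  name: 'test',
  value: 'res',
  expires: Date.now() + 2 * 365 * 24 * 60 * 60 * 1000
});
const cookie = await cookieStore.get('test');
console.log(new Date(cookie.expires)); // at most ~400 days from now
```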
|
What is JB used for here?
I'm trying to understand some example asm that came with a development board (XL400), with a view to converting it to C.
The asm code is included below, unfortunately the documentation, such as it is, is translated very badly from Chinese, which makes it worse than useless.
Also, it's a long time since I've used asm!
In the code there is a JB instruction in the DELAY subroutine (third-last line of code). I cannot understand what it does or how it is supposed to operate. Googling JB turns up an explanation for a different form (JB label), so I'm not sure the asm is even right.
Help much appreciated, Ta
```
RS EQU P2.0
RW EQU P2.1
E EQU P2.2
ORG 0080H
MOV P0,#00000001B ;- Screen
ACALL ENABLE
MOV P0,#00000001B ;- Screen
ACALL ENABLE
MOV P0,#00111000B ;Display
ACALL ENABLE
MOV P0,#00001111B ;Show switch control
ACALL ENABLE
MOV P0,#00000110B ;Input mode
ACALL ENABLE
MOV P0,#0C0H ;Data memory address
ACALL ENABLE
mov p0,#01000001b ;ASCII code
SETB RS
CLR RW
CLR E
ACALL DELAY
SETB E
AJMP $
ENABLE: CLR RS ;Send orders
CLR RW
CLR E
ACALL DELAY
SETB E
RET
DELAY: MOV P0,#0FFH
CLR RS
SETB RW
CLR E
NOP
SETB E
JB P0.7,DELAY ;Judgement busy signs
RET
END
```
| The JB instruction in 8051 assembler jumps to the address indicated by the label in the second operand if the bit specified by the first operand is set. So in your case it will jump to `MOV P0,#0FFH` if `P0.7` is set.
The `JB label` instruction you are referring to is an 8086 instruction (jump below based on the result of the CMP instruction just before) so you were looking on the wrong page.
EDIT: I don't know exactly what type of LCD they're using but I think it's the busy flag - as these displays are all rather closely related to the venerable [Hitachi 44780](http://en.wikipedia.org/wiki/Hitachi_HD44780_LCD_controller). In the board's [schematic](http://www.51c51.com/enweb/down/xl400sch.pdf) P0.7 is connected to display pin 14, which is commonly DB7, and that's where the busy flag lives. Of course it's always best to use the documentation of the actual display, but [this one](http://www.sparkfun.com/datasheets/LCD/GDM1602K-Extended.pdf) is probably pretty close and could get you started. Also, that display is *so popular* that it's very easy to find code in all possible languages on how to program it. Might be easier to follow that route than to reverse engineer the assembly.
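Since your end goal is converting to C: the DELAY routine is just a busy-wait on the LCD busy flag, which maps naturally onto a polling loop. A sketch in Keil C51 style (the `sbit` names mirror the EQUs in the assembly; treat it as a starting point, not verified against the board):

```
#include <reg51.h>      /* standard 8051 SFR definitions (Keil C51) */

sbit RS = P2^0;
sbit RW = P2^1;
sbit E  = P2^2;

/* C equivalent of the DELAY routine: poll the LCD busy flag on P0.7 */
void lcd_wait_busy(void)
{
    do {
        P0 = 0xFF;       /* MOV P0,#0FFH - release port 0 for reading */
        RS = 0;          /* CLR RS */
        RW = 1;          /* SETB RW - read mode */
        E  = 0;          /* CLR E */
        E  = 1;          /* SETB E */
    } while (P0 & 0x80); /* JB P0.7,DELAY - loop while busy flag is set */
}
```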
|
svchost.exe using lots of memory slowing my PC down
On my Windows 7 32-bit machine, `svchost.exe` is using a lot of memory and slowing my PC down big time.
I already have automatic updates turned off and set to manual mode.
How can I fix this problem?
Thanks
| There's no way for us to know what is causing a `svchost.exe` high memory (or CPU) usage problem on any given machine because:
`svchost.exe` is a host process that contains running DLLs as services in Windows XP and beyond. At any given time, there are multiple services running inside `svchost.exe`. You could kill the process, but you would never be able to tell which service is causing the problem, because you would be killing all of them.
To determine which one is causing high CPU usage, you can try a few methods:
- Open Task Manager, right-click the `svchost.exe` that is causing problems, then click the last option - "Go to Services"
![enter image description here](https://i.stack.imgur.com/mjxKN.png)
You will get a list of all the services that are running in that particular `svchost`.
![enter image description here](https://i.stack.imgur.com/1LJxG.png)
- You can also use [Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) to view which services are running in a particular `svchost` instance:
![enter image description here](https://i.stack.imgur.com/ebl9d.png)
[Source](http://www.howtogeek.com/howto/windows-vista/what-is-svchostexe-and-why-is-it-running/)
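You can also get the same service-to-process mapping from a command prompt using the built-in `tasklist` command:

```
tasklist /svc /fi "imagename eq svchost.exe"
```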
---
*While a virus could certainly cause this problem, it's not any more likely than just a poorly written software program hogging resources, or a poor choice of antivirus suite.*
|
What are some good machine learning programming exercises?
Ideally, they would have the following characteristics:
1. They can be completed in just an evening of coding. It will not require a week or more to get interesting results. That way, I can feel like I've learned and accomplished something in just one (possibly several hour long) sitting.
2. The problems are from the real world, or they are at least toy versions of a real world problems.
3. If the problem requires data to test the solution, there are real-world datasets readily available, or it is trivial to generate interesting test data myself.
4. It is easy to evaluate how good of a job I've done. When I test my solution, it will be clear from the results that I've accomplished something nontrivial, either by simple inspection, or by a quantifiable measure of the quality of the results.
| Implement the following algorithms:
- Perceptron, margin perceptron: you can try to detect images of faces (classify images of faces and non-faces) using any face database. Try for example the [MIT CBCL face database](http://cbcl.mit.edu/cbcl/software-datasets/FaceData2.html). You can also try the [MNIST data](http://yann.lecun.com/exdb/mnist/) and write a poor man's OCR system.
- LVQ, Kohonen map: you can try to compress images. You can download large images from any wallpaper site.
- Naive bayes classifier: you can classify spam and not spam. There are also more scientific datasets, such as [Reuters](http://www.daviddlewis.com/resources/testcollections/reuters21578/) and Newsgroups, etc. which you have to determine the topic, given the article.
- Backpropagation, multi layer perceptron: you can try this with the faces, or with the spam, or [with the text/histogram data](http://people.csail.mit.edu/jrennie/20Newsgroups/).
- Primal SVM linear learning using SGD: you can try this with [MNIST](http://yann.lecun.com/exdb/mnist/) digits, for example.
There are a bunch of projects, some of them take a couple hours, some a couple of days, but you will definitely learn a lot.
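To make the first item concrete, here is a minimal perceptron sketch (my own illustration; assumes `X` is an (n, d) numpy array and `y` holds labels of -1/+1):

```
import numpy as np

def perceptron(X, y, epochs=20):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified: update
                w += yi * xi
                b += yi
    return w, b

# Toy usage: two linearly separable blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + 2, rng.randn(50, 2) - 2])
y = np.array([1] * 50 + [-1] * 50)
w, b = perceptron(X, y)
print(np.mean(np.sign(X @ w + b) == y))  # should print 1.0 on separable data
```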
|
Hide Status Bar and Increase the height of UINavigationBar
I am using a storyboard to create the navigation bar.
My requirement is to hide the status bar and increase the height of the navigation bar. When I hide the status bar, the navigation bar sticks to the top and its height is 44 px. I need a navigation bar height of 64 px (44 px + status bar height). Is there any way to do this?
**With status bar**
![enter image description here](https://i.stack.imgur.com/fwZFa.png)
**Without status bar**
![Without Status bar](https://i.stack.imgur.com/8U73C.png)
| To start off, you hide your `statusBar` by following these steps:
First, put this code in `viewWillAppear`:
```
[[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationFade];
```
Second, set your `info.plist` file as the below image shows:
![enter image description here](https://i.stack.imgur.com/tJ1B7.png)
Next, you can make a `Category` of `UINavigationBar` and in it set the height of the `navigationBar`.
**Objective-c**
in .h file
```
@interface UINavigationBar (Custom)
- (CGSize)sizeThatFits:(CGSize)size;
@end
```
and in .m file
```
@implementation UINavigationBar (Custom)
- (CGSize)sizeThatFits:(CGSize)size {
CGFloat width = [UIScreen mainScreen].bounds.size.width;
CGSize newSize = CGSizeMake(width, 100);
return newSize;
}
@end
```
**Swift**
```
extension UINavigationBar {
public override func sizeThatFits(size: CGSize) -> CGSize {
let width = UIScreen.mainScreen().bounds.width
let newSize = CGSize(width: width, height: 64)
return newSize
}
}
```
|
Why does ToString() method of Exception class in .Net not use a StringBuilder to build the string?
I would expect the code below (from the Exception class in the .Net framework) to use a StringBuilder to build a string and then return a builder.ToString(), instead of using the "+" operator on the existing string, which ends up creating a new string every time.
I expect folks writing .Net source code to follow best practices wherever possible, so I want to double-check whether this is still somehow optimal.
```
private String ToString(bool needFileLineInfo, bool needMessage) {
String message = (needMessage ? Message : null);
String s;
if (message == null || message.Length <= 0) {
s = GetClassName();
}
else {
s = GetClassName() + ": " + message;
}
if (_innerException!=null) {
s = s + " ---> " + _innerException.ToString(needFileLineInfo, needMessage)
+ Environment.NewLine
+ " "
+ Environment.GetResourceString("Exception_EndOfInnerExceptionStack");
}
string stackTrace = GetStackTrace(needFileLineInfo);
if (stackTrace != null)
{
s += Environment.NewLine + stackTrace;
}
return s;
}
```
| It's impossible to know for sure why the author of the code chose this implementation, without asking the author of the code.
That said, it's worth keeping in mind that the general guidance to prefer `StringBuilder` over simple `string` concatenation applies mainly to looping scenarios where you have arbitrarily large iteration counts.
In the example above, the main part of the method should wind up calling the `Concat()` method, which should do a better job of creating a single new `string` object from the inputs than appending each part iteratively using `StringBuilder`. To introduce `StringBuilder` for the other concatenation, especially when they are not always going to happen (since they are conditional), could very well be less optimal in the common scenarios.
See e.g. [String concatenation vs String Builder. Performance](https://stackoverflow.com/a/1612835), where this difference is specifically called out.
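A rough micro-benchmark sketch of my own (not from the original answer) to convince yourself — a handful of parts joined with `+` compiles to a single `String.Concat` call, which is typically at least as fast as spinning up a `StringBuilder`:

```
using System;
using System.Diagnostics;
using System.Text;

class ConcatVsBuilder
{
    static void Main()
    {
        const int N = 1000000;
        // Non-constant inputs so the compiler can't fold the concatenation.
        string cls = Guid.NewGuid().ToString();
        string msg = Guid.NewGuid().ToString();
        long sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            sink += (cls + ": " + msg).Length;   // one String.Concat call
        Console.WriteLine("concat:  {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < N; i++)
            sink += new StringBuilder().Append(cls).Append(": ").Append(msg)
                                       .ToString().Length;
        Console.WriteLine("builder: {0} ms (sink={1})", sw.ElapsedMilliseconds, sink);
    }
}
```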
|
How to fire a command on double-click listbox item using MVVM?
I'm trying to launch an ICommand when the user double-clicks on a listbox item. Also, I'm trying to do this using the MVVM pattern.
In this XAML, the key press "p" works perfectly. When I double-click on the list box, the command never starts. I've set a breakpoint to confirm "PlayVideoCommand" is not called with a double-click. Am I missing something, or do I have to use a Setter (which I'm not familiar with)?
```
<ListBox Name="SmallVideoPreviews" Grid.Column="1" MaxHeight="965"
ItemsSource="{Binding BrowseVideos}"
ItemTemplate="{StaticResource BrowseTemplate}">
<ListBox.InputBindings>
<KeyBinding Key="p"
Command="{Binding PlayVideoCommand}"
CommandParameter="{Binding ElementName=SmallVideoPreviews, Path=SelectedItem}"/>
<MouseBinding Gesture="LeftDoubleClick"
Command="{Binding PlayVideoCommand}"
CommandParameter="{Binding ElementName=SmallVideoPreviews, Path=SelectedItem}"/>
</ListBox.InputBindings>
</ListBox>
```
Both double-click and "p" should execute the same command. When using the mouse, I can see the listboxitem is selected. I have a hunch that the MouseBinding Command property is not a dependency property but I don't know how to confirm this.
| What's happening in your sample is that the listbox itself is reacting to the double-click, but only in the part of its area that is not covered by a list box item.
You need the event handler to be tied to the listboxitem.
Some ways to do it are here:
[Double Click a ListBox item to open a browser](https://stackoverflow.com/questions/821564/double-click-a-listbox-item-to-open-a-browser)
And some discussion about why a little code-behind in MVVM is not necessarily a terrible thing:
[Firing a double click event from a WPF ListView item using MVVM](https://stackoverflow.com/questions/1035023/firing-a-double-click-event-from-a-wpf-listview-item-using-mvvm)
More discussion:
<http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/9fb566a2-0bd6-48a7-8db3-312cd3e93340/>
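For reference, the usual compromise looks something like this (a sketch of mine — the handler lives in code-behind but only forwards to the view model's command; `MyViewModel` is a placeholder name):

```
<ListBox Name="SmallVideoPreviews"
         MouseDoubleClick="SmallVideoPreviews_MouseDoubleClick" ... >
```

and in code-behind:

```
// requires: using System.Windows.Input;
private void SmallVideoPreviews_MouseDoubleClick(object sender, MouseButtonEventArgs e)
{
    var vm = (MyViewModel)DataContext;        // assumes the VM is the DataContext
    var item = SmallVideoPreviews.SelectedItem;
    if (item != null && vm.PlayVideoCommand.CanExecute(item))
        vm.PlayVideoCommand.Execute(item);
}
```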
|
How to enable DDoS protection?
DDoS (Distributed Denial of Service) attacks are generally blocked on a server level, right?
Is there a way to block it on a PHP level, or at least reduce it?
If not, what is the fastest and most common way to stop DDoS attacks?
| DDOS is a family of attacks which overwhelm key systems in the datacenter including:
- The hosting center's network connection to the internet
- The hosting center's internal network and routers
- Your firewall and load balancers
- Your web servers, application servers and database.
Before you start on building your DDOS defence, consider what the worst-case value-at-risk is. For a non-critical, free-to-use service for a small community, the total value at risk might be peanuts. For a paid-for, public-facing, mission-critical system for an established multi-billion dollar business, the value might be the worth of the company. In this latter case, you shouldn't be using StackExchange :) Anyway, to defend against DDOS, you need a defence in-depth approach:
1. **Work with your hosting center** to understand the services they offer, including IP and port filtering at their network connections to the internet and firewall services they offer. This is critical: Many sites are pulled from the internet *by the hosting company* as the hosting company deals with the data center-wide disruption caused by the DDOS to one customer. Also, during an DDOS attack, you will be working very closely with the hosting center's staff, so know their emergency numbers and be on good terms with them :) They should be able to block of whole international regions, completely block specific services or network protocols and other broad-spectrum defensive measures, or alternatively allow only whitelisted IPs (depending on your business model)
2. While on the hosting center - use a **[Content Delivery Network](http://en.wikipedia.org/wiki/Content_delivery_network)** to distribute (mainly static) services close to your end users and hide your real servers from the DDOS architects. The full CDN is too big for a DDOS to take out all nodes in all countries; if the DDOS is focused on one country, at least other users are still OK.
3. Keep all your systems and software packages **updated with the latest security patches** - and I mean all of them:
- Managed switches - yup these sometimes need updating
- Routers
- Firewalls
- Load balancers
- Operating systems
- Web servers
- Languages and their libraries
4. Ensure that you have a **good firewall or security appliance** set up *and regularly reviewed by a qualified security expert*. Strong rules on the firewall are a good defence against many simple attacks. It's also useful to be able to manage bandwidth available for each open service.
5. Have good **[network monitoring tools](http://en.wikipedia.org/wiki/Network_monitoring)** in place - this can help you understand:
- That you're under attack rather than simply being under heavy load
- Where the attack is coming from (which may include countries you don't normally do business with) and
- What the attack actually is (ports, services, protocols, IPs and packet contents)
6. The attack might simply be heavy use of legitimate web site services (eg hitting 'legal' URIs running queries or inserting/updating/deleting data) - thousands or millions of requests coming from tens to millions of different IP addresses will bring a site to its knees. Alternatively, some services might be so expensive to run that only a few requests cause a DOS - think a really expensive report. So you need good **application level monitoring** of what is going on:
- Which services have been invoked and what arguments/data are sent (i.e. logging in your application)
- Which users are doing the invoking and from which IPs (i.e. logging in your application)
- What queries and inserts/updates/deletes the DB is performing
- Load average, CPU utilization, disk i/o, network traffic on all computers (and VMs) in your system
- Making sure that all this information is easily retrievable and that you can correlate logs from different computers and services (i.e. ensure all computers are time synchronized using ntp).
7. **Sensible constraints and limits in your application**. For example, you might:
- Use a QoS feature in the load balancer to send all anonymous sessions to separate application servers in your cluster, while logged-on users use another set. This prevents an application-level anonymous DDOS taking out valuable customers
- Using a strong CAPCHA to protect anonymous services
- Session timeouts
- Have a session-limit or rate-limit on certain types of request like reports. Ensure that you can turn off anonymous access if necessary
- Ensure that a user has a limit to the number of concurrent sessions (to prevent a hacked account logging on a million times)
- Have different database application users for different services (eg transactional use vs. reporting use) and use database resource management to prevent one type of web request from overwhelming all others
- If possible make these constraints dynamic, or at least configurable. This way, while you are under attack, you can set aggressive temporary limits in place ('throttling' the attack), such as only one session per user, and no anonymous access. This is certainly not great for your customers, but a lot better than having no service at all.
8. Last, but not least, write a **DOS Response Plan** document and get this internally reviewed by all relevant parties: Business, Management, the SW dev team, the IT team and a security expert. The process of writing the document will cause you and your team to think through the issues and help you to be prepared if the worst should happen at 3am on your day off. The document should cover (among other things):
- What is at risk, and the cost to the business
- Measures taken to protect the assets
- How an attack is detected
- The planned response and escalation procedure
- Processes to keep the system and this document up-to-date
So, preamble aside, here are some specific answers:
>
> DDOS are generally blocked on a server level, right?
>
>
>
Not really - most of the worst DDOS attacks are low-level (at the IP packet level) and are handled by routing rules, firewalls, and security devices developed to handle DDOS attacks.
>
> Is there a way to block it on a PHP level, or at least reduce it?
>
>
>
Some DDOS attacks are aimed at the application itself, sending valid URIs and HTTP requests. When the rate of requests goes up, your server(s) begin to struggle and you will have an SLA outage. In this case, there are things you can do at the PHP level:
- Application level monitoring: Ensure each service/page logs requests in a way that you can see what is going on (so you can take actions to mitigate the attack). Some ideas:
- Have a log format that you can easily load into a log tool (or Excel or similar), and parse with command-line tools (grep, sed, awk). Remember that a DDOS will generate millions of lines of log. You will likely need to slice'n'dice your logs (especially with respect to URI, time, IP and user) to work out what is going on, and need to generate data such as:
- What URIs are being accessed
- What URIs are failing at a high rate (a likely indicator of the specific URIs the attackers are attacking)
- Which users are accessing the service
- How many IPs are each user accessing the service from
- What URIs are anonymous users accessing
- What arguments are being used for a given service
- Audit a specific users actions
- Log the IP address of each request. DON'T reverse DNS this - ironically the cost of doing this makes a DDOS easier for the attackers
- Log the whole URI and HTTP method, eg "GET <http://example.com/path/to/service?arg1=ddos>"
- Log the User ID if present
- Log important HTTP arguments
- Sensible rate limits: You might implement limits on how many requests a given IP or User can make in a given time period. Could a legitimate customer make more than 10 requests per second? Can anonymous users access expensive reports at all? (A minimal PHP sketch of this follows the list.)
- CAPTCHA for anonymous access: Implement a CAPTCHA for all anonymous requests to verify that the user is a person, not a DDOS bot.
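As a concrete illustration of the rate-limit point above, a minimal per-IP throttle sketch in PHP (my addition; assumes the APCu extension, the limits are made-up numbers, and the fetch/store pair is not atomic — good enough as a sketch):

```
<?php
// Allow at most 10 requests per IP per 60-second window.
$key = 'rate:' . $_SERVER['REMOTE_ADDR'];
$hits = apcu_fetch($key);
if ($hits === false) {
    apcu_store($key, 1, 60);     // first hit: open a 60 s window
} elseif ($hits >= 10) {
    http_response_code(429);     // Too Many Requests
    exit('Rate limit exceeded');
} else {
    apcu_inc($key);
}
```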
>
> What's the fastest and most common way to stop DDOS attacks?
>
>
>
The fastest is probably to give in to the blackmail, although this might not be desirable.
Otherwise, the first thing to do is contact your hosting and/or CDN provider and work with them (if they haven't contacted you already asking what the hell is going on...). When a DDOS occurs, it will likely collaterally affect other customers of the hosting provider, and the provider may be under considerable pressure to shut down your site simply to protect their resources. Be prepared to share your logs (any and all information) with the provider; these logs, combined with their network monitors, may together provide enough information to block/mitigate the attack.
If you are expecting a DDOS, it's a very good idea to qualify your hosting provider on the level of protection they can provide. They should have DDOS experience and tools to mitigate it - understand their tools, processes and escalation procedures. Also ask about what support the hosting provider has from *their* upstream providers. These services might mean more up-front or monthly cost, but treat this as an insurance policy.
While under attack, you will need to grab your logs and mine them - try and work out the pattern of the attack. You should consider switching off anonymous access and throttling the services under attack (i.e. decrease the application's rate limit for the service).
If lucky and you have a small, fixed customer-base, you might be able to determine your valid customers IP addresses. If this is the case, you might switch to a white-list approach for a short while. Make sure all your customers know this is going on so they can call if they need to access from a new IP :)
---
[Doug McClean](https://stackoverflow.com/users/11173/doug-mcclean) has some great advice at: <https://stackoverflow.com/a/1029613/1395668>
|
Sequential feature selection Matlab
Can somebody explain how to use this function in Matlab?
"sequentialfs"
It looks straightforward, but I do not know how we can design a function handle for it.
Any clue?
| Here's a simpler example than the one in the documentation.
First let's create a very simple dataset. We have some class labels `y`. 500 are from class `0`, and 500 are from class `1`, and they are randomly ordered.
```
>> y = [zeros(500,1); ones(500,1)];
>> y = y(randperm(1000));
```
And we have 100 variables `x` that we want to use to predict `y`. 99 of them are just random noise, but one of them is highly correlated with the class label.
```
>> x = rand(1000,99);
>> x(:,100) = y + rand(1000,1)*0.1;
```
Now let's say we want to classify the points using linear discriminant analysis. If we were to do this directly without applying any feature selection, we would first split the data up into a training set and a test set:
```
>> xtrain = x(1:700, :); xtest = x(701:end, :);
>> ytrain = y(1:700); ytest = y(701:end);
```
Then we would classify them:
```
>> ypred = classify(xtest, xtrain, ytrain);
```
And finally we would measure the error rate of the prediction:
```
>> sum(ytest ~= ypred)
ans =
0
```
and in this case we get perfect classification.
To make a function handle to be used with `sequentialfs`, just put these pieces together:
```
>> f = @(xtrain, ytrain, xtest, ytest) sum(ytest ~= classify(xtest, xtrain, ytrain));
```
And pass all of them together into `sequentialfs`:
```
>> fs = sequentialfs(f,x,y)
fs =
Columns 1 through 16
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 17 through 32
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 33 through 48
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 49 through 64
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 65 through 80
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 81 through 96
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 97 through 100
0 0 0 1
```
The final `1` in the output indicates that variable 100 is, as expected, the best predictor of `y` among the variables in `x`.
The example in the documentation for `sequentialfs` is a little more complex, mostly because the predicted class labels are strings rather than numerical values as above, so `~strcmp` is used to calculate the error rate rather than `~=`. In addition it makes use of cross-validation to estimate the error rate, rather than direct evaluation as above.
|
Entity Framework selection query
I am making a simple application for inserting, updating, deleting and selecting data with `Entity Framework`.
I have already implemented insert, delete and select-all.
Now I want to select with a where condition filtering on two fields.
**For example:** I have a table with
```
userid
username
password
email
```
Now I need a selection like `where email = "" and password = ""`.
I know how to write the query in SQL but have no clue how to do it with `entity framework`.
I also need to store the result in a DataTable and loop over it, both for learning purposes.
This can help many beginners.
| Using Linq To Entities with lambda expression:
```
var result = dBContext.Account.Where(a=> a.email == "" && a.password =="").ToList();
```
Using Linq To Entities the less fancy way:
```
var result = (from a in dBContext.Account
where a.email == "" && a.password ==""
select a).ToList();
```
Lambda expressions are used most of the time.
Some people find lambdas less readable.
I think it's more a matter of personal taste that depends on your background.
**EDIT:**
dbContext should be replaced by the name you gave your dbContext/Entities when setting up your Entity Framework EDMX or Code First classes.
Account should be replaced by the name of your table/Entity
To loop and edit the results you can do:
```
foreach(var account in result)
{
//do something with the properties
account.email = "test@updated.com";
}
//store the changes again in the db
dbContext.SaveChanges();
```
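For the DataTable part of the question, a minimal sketch (the column names are assumed from the table described in the question):

```
using System.Data;

var table = new DataTable("Accounts");
table.Columns.Add("userid", typeof(int));
table.Columns.Add("username", typeof(string));
table.Columns.Add("email", typeof(string));

// Copy each entity from the query result into a DataRow.
foreach (var account in result)
{
    table.Rows.Add(account.userid, account.username, account.email);
}
```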
|
Linux distro for a developer
I am mainly using Linux for programming. I basically started with Archlinux and Manjaro and I kinda like it.
What I really like is the package management. It has a huge collection of new software and the updates are coming out really fast.
For example when GCC 4.8 was released I instantly had it 2 days after the release which was pretty neat.
Even small libraries such as "OpenAssetImporter" are in the repos.
It is so convenient because if you have a huge collection of libraries that are coming out frequently, all you have to do is a system update.
What bugs me is that my system breaks really often, and I don't want to spend so much time to fix stuff.
Basically all I want is up to date libraries such as gcc etc. I don't really care if I have up to date Gnome etc.
Any recommendations that you can give me?
| I'd recommend Gentoo for programming. I use it myself and it's very convenient:
- latest updates, with a powerful system to prevent you from breaking all the dependencies
- rolling release, so there is no jumping from one version to another
- it's a compiled distribution, so they are particularly concerned with the packaging of the toolchains, and the fact that you compile all your packages yourself gives you great control over the compilation options and may optimize your software a little
- tools for cross-development are very handy
- you can install several versions of the same library at the same time in different "slots", which can be useful sometimes, when there are huge changes between two versions and you want to be able to use both. For example, I've got three versions of python and two versions of gcc (see the example commands below).
It's a matter of choice, of course, but I used Fedora before and I can tell you that it's a lot easier to start developing on Gentoo.
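For example, the slots mentioned above are switched with standard Gentoo helpers:

```
eselect python list   # show installed Python slots
eselect python set 2  # make the second one active
gcc-config -l         # list installed GCC profiles
```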
|
Can I "prime" the CLR GC to expect profligate memory use?
We have a server app that does a lot of memory allocations (both short lived and long lived). We are seeing an awful lot of GC2 collections shortly after startup, but these collections calm down after a period of time (even though the memory allocation pattern is constant).
These collections are hitting performance early on.
I'm guessing that this could be caused by GC budgets (for Gen2?). Is there some way I can set this budget (directly or indirectly) to make my server perform better at the beginning?
One counter-intuitive set of results I've seen: We made a big reduction to the amount of memory (and Large Object Heap) allocations, which saw performance over the long term improve, but early performance gets worse, and the "settling down" period gets longer.
The GC apparently needs a certain period of time to realise our app is a memory hog and adapt accordingly. I already know this fact, how do I convince the GC?
**Edit**
- OS: 64-bit Windows Server 2008 R2
- We're using .Net 4.0 ServerGC Batch Latency. Tried 4.5 and the 3 different latency modes, and while average performance was improved slightly, worst case performance actually deteriorated
**Edit2**
- A GC spike can double time taken (we're talking seconds) going from acceptable to unacceptable
- Almost all spikes correlate with gen 2 collections
- My test run causes a final 32GB heap size. The initial frothiness lasts for the 1st 1/5th of the run time, and performance after that is actually better (less frequent spikes), even though the heap is growing. The last spike near the end of the test (with largest heap size) is the same height as (i.e. as bad as) 2 of the spikes in the initial "training" period (with much smaller heaps)
| Allocation of an extremely large heap in .NET can be insanely fast, and the number of blocking collections will not prevent it from being that fast. Problems that you observe are caused by the fact that you don't just allocate, but also have code that causes dependency reorganizations and actual garbage collection, all at the same time while allocation is going on.
There are a few techniques to consider:
- try using LatencyMode (<http://msdn.microsoft.com/en-us/library/system.runtime.gcsettings.latencymode(v=vs.110).aspx>), set it to LowLatency while you are actively loading the data - see the comments to this answer, and the combined sketch after this list
- use multiple threads
- **do not populate cross-references to newly allocated objects while actively loading;** first go through active allocation phase, use only integer indexes to cross-reference items, but not managed references; then force full GC couple times to have everything in Gen2, and only then populate your advanced data structures; you may need to re-think your deserialization logic to make this happen
- try forcing your biggest root collections (arrays of objects, strings) to second generation as early as possible; do this by preallocating them and forcing full GC two times, before you start populating data (loading millions of small objects); if you are using some flavor of generic Dictionary, make sure to preallocate its capacity early on, to avoid reorganizations
- any big array of references is a big source of GC overhead - until both array and referenced objects are in Gen2; the bigger the array - the bigger the overhead; prefer arrays of indexes to arrays of references, especially for temporary processing needs
- **avoid having many utility or temporary objects deallocated or promoted** while in active loading phase on any thread, carefully look through your code for string concatenation, boxing and 'foreach' iterators that can't be auto-optimized into 'for' loops
- if you have an array of references and a hierarchy of function calls that have some long-running tight loops, avoid introducing local variables that cache the reference value from some position in the array; instead, cache the offset value and keep using something like "myArrayOfObjects[offset]" construct across all levels of your function calls; it helped me a lot with processing pre-populated, Gen2 large data structures, my personal theory here is that this helps GC manage temporary dependencies on your local thread's data structures, thus improving concurrency
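Putting the LatencyMode, pre-allocation and deferred-cross-referencing points together, a minimal C# sketch (the `Record` type and the ring-shaped links are made-up illustrations, not the poster's actual code):

```
using System;
using System.Runtime;

class Record
{
    public int NextIndex = -1; // integer cross-reference used during the load phase
    public Record Next;        // real managed reference, wired up only after promotion
}

static class BulkLoader
{
    public static Record[] Load(int count)
    {
        GCLatencyMode old = GCSettings.LatencyMode;
        try
        {
            // Discourage blocking collections during the allocation burst.
            // Note: LowLatency applies to workstation GC; under server GC,
            // SustainedLowLatency (.NET 4.5+) is the closest equivalent.
            GCSettings.LatencyMode = GCLatencyMode.LowLatency;

            var records = new Record[count]; // preallocate the big root array early
            for (int i = 0; i < count; i++)
                records[i] = new Record { NextIndex = (i + 1) % count };

            // Force a couple of full collections so the array and its objects
            // land in Gen2 *before* any object references are written.
            GC.Collect();
            GC.Collect();

            // Only now populate the managed cross-references.
            for (int i = 0; i < count; i++)
                records[i].Next = records[records[i].NextIndex];

            return records;
        }
        finally
        {
            GCSettings.LatencyMode = old;
        }
    }
}
```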
Here are the reasons for this behavior, as far as I learned from populating up to ~100 GB of RAM during app startup, with multiple threads:
- when GC moves data from one generation to another, it actually copies it and thus modifies all references; therefore, the fewer cross-references you have during active load phase - the better
- GC maintains a lot of internal data structures that manage references; if you do massive modifications to references themselves - or if you have a lot of references that have to be modified during GC - it causes significant CPU and memory bandwidth overhead during both blocking and concurrent GC; sometimes I observed GC constantly consuming 30-80% of CPU without any collections going on - simply by doing some processing, which looks weird until you realize that any time you put a reference to some array or some temporary variable in a tight loop, GC has to modify and sometimes reorganize dependency tracking data structures
- server GC uses thread-specific Gen0 segments and is capable of pushing an entire segment to the next Gen (without actually copying the data - not sure about this one though); keep this in mind when designing a multi-threaded data-load process
- ConcurrentDictionary, while being a great API, does not scale well in extreme scenarios with multiple cores, when number of objects goes above a few millions (consider using unmanaged hashtable optimized for concurrent insertion, such as one coming with Intel's TBB)
- if possible or applicable, consider using native pooled allocator (Intel TBB, again)
BTW, the latest update to .NET 4.5 (4.5.1) adds compaction ("defragmentation") support for the large object heap. One more great reason to upgrade to it.
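If you do upgrade, requesting that compaction is a two-liner via `GCSettings` in `System.Runtime` (a sketch):

```
// Ask the next blocking full collection to compact the large object heap.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // performs the compaction; the setting then resets itself
```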
.NET 4.6 also has an API to ask for no GC whatsoever (GC.TryStartNoGCRegion), if certain conditions are met: <https://msdn.microsoft.com/en-us/library/dn906202(v=vs.110).aspx>
Also see a related post by Maoni Stephens: <https://blogs.msdn.microsoft.com/maoni/2017/04/02/no-gcs-for-your-allocations/>
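Usage of the no-GC region looks roughly like this (a sketch; the 256 MB budget and DoCriticalWork() are illustrative placeholders, not real APIs):

```
using System;

static class NoGcDemo
{
    static void DoCriticalWork() { /* hypothetical latency-critical burst */ }

    static void Run()
    {
        if (GC.TryStartNoGCRegion(256L * 1024 * 1024)) // allocation budget in bytes
        {
            try
            {
                DoCriticalWork(); // must allocate within the requested budget
            }
            finally
            {
                GC.EndNoGCRegion();
            }
        }
    }
}
```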
|
How to embed hex values in a lua string literal (i.e. \x equivalent)
In various languages, you can embed hex values in a string literal by using the \x escape sequence:
`"hello \x77\x6f\x72\x6c\x64"`
How can I do the same thing in Lua 5.1?
| Since Lua 3.1, you can use *decimal* escapes in string literals.
Starting with Lua 5.2, you can use hex escapes in string literals.
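For example, the two literals below produce the same bytes; the first works in Lua 5.1, the second only in 5.2 and later:

```
print("hello \119\111\114\108\100")  -- decimal escapes, Lua 3.1 and later
print("hello \x77\x6f\x72\x6c\x64")  -- hex escapes, Lua 5.2 and later
```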
In Lua 5.1, you can convert hex escapes a posteriori:
```
s=[[hello \x77\x6f\x72\x6c\x64]]
s=s:gsub("\\x(%x%x)",function (x) return string.char(tonumber(x,16)) end) -- replace each \xHH with its byte
print(s)
```
Note the use of *long strings*, which do not interpret escape sequences. If you use *short strings* (in quotes) as in your original code, then `\x` will be silently converted to `x`, because Lua 5.1 does not understand `\x`. Lua 5.2 and later complains about escape sequences that it does not understand.
|
Can't execute external process with PHP
I have the following code
```
function generate_pdf() {
$fdf_data_strings = $this->get_hash_for_pdf();
#$fdf_data_names = array('49a' => "yes");
$fdf_data_names = array();
$fields_hidden = array();
$fields_readonly = array();
$hud_pdf = ABSPATH.'../pdf/HUD3.pdf';
$fdf= forge_fdf( '',
$fdf_data_strings,
$fdf_data_names,
$fields_hidden,
$fields_readonly );
/* echo "<pre>";
print_r($fdf);
echo "</pre>";
die('');
*/
$fdf_fn= tempnam( '.', 'fdf' );
$fp= fopen( $fdf_fn, 'w' );
if( $fp ) {
fwrite( $fp, $fdf );
//$data=fread( $fp, $fdf );
// echo $data;
fclose( $fp );
header( 'Content-type: application/pdf' );
header( 'Content-disposition: attachment; filename=settlement.pdf' ); // prompt to save to disk
passthru( 'pdftk HUD3.pdf fill_form '. $fdf_fn.' output - flatten');
unlink( $fdf_fn ); // delete temp file
}
else { // error
echo 'Error: unable to open temp file for writing fdf data: '. $fdf_fn;
}
}
}
```
Is there anything wrong with it?
The problem: I have pdftk installed.
Running `whereis pdftk` gives me `'/usr/local/bin/pdftk'`.
I physically checked that location; pdftk is indeed there.
From the terminal, if I run `pdftk --version` or any other pdftk command, it works.
If I use PHP like `passthru('/usr/local/bin/pdftk --version')`, nothing is displayed.
If I use PHP like `system("PATH=/usr/local/bin && pdftk --version");`, it says `'/usr/local/bin /pdftk :there is no directory of file '`.
When I run this function, the file-download prompt pops up, but when I save the file, nothing is saved.
I have checked the permissions on this folder and tried `0755`, `0766`, `0777`, and `0666`; nothing works.
For 3 days I have been struggling with this, and I have asked questions about it before, but I can't figure out what is going on.
Can somebody help me before I bang my head against the wall?
| The passthru() function runs the command under the web server's environment, whose PATH is usually much more limited than your interactive shell's, so a bare `pdftk` may not be found.
Pass the exact path into the passthru command.
E.g.
```
passthru( '/usr/local/bin/pdftk HUD3.pdf fill_form '. $fdf_fn.' output - flatten');
```
or
`passthru( '/usr/local/bin/pdftk ' . $hud_pdf . ' fill_form ' . $fdf_fn . ' output - flatten');` (note the spaces inside the quoted fragments, so the concatenated command keeps its arguments separated)
If this still doesn't work test using
`<?php passthru("/path/to/pdftk --help"); ?>` where /path/to/pdftk is the path returned by `which` or `whereis`, to ensure the path is correct.
If the path is correct, the issue may be related to permissions, either on the temporary directory you tell pdftk to use or on the pdftk binary itself, with regard to the Apache user.
If these permissions are fine and you can verify that pdftk starts up from PHP but hangs when running your actual command, you might try the workaround listed [here](http://farbfinal.wordpress.com/2009/08/16/pdftk-php-problem-hang-without-errors/).
Further documentation on passthru is available in the [passthru PHP Manual](http://php.net/manual/en/function.passthru.php).
As a side note, the [putenv](http://php.net/manual/en/function.putenv.php) php function is used to set environment variables.
E.g. `putenv('PATH='.getenv('PATH').':.');`
The PHP functions exec(), shell_exec(), system() and passthru() all execute an external command, but they differ (a short sketch follows the list):
- exec(): returns the last line of output from the command and flushes nothing.
- shell_exec(): returns the entire output from the command and flushes nothing.
- system(): returns the last line of output from the command and tries to flush the output buffer after each line of the output as it goes.
- passthru(): returns nothing and passes the resulting output without interference to the browser, especially useful when the output is in binary format.
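A quick side-by-side sketch of the four calls (the `/bin/ls` target and the pdftk command line are placeholders, not your exact setup):

```
<?php
$lines = array();
$last = exec('/bin/ls -l /tmp', $lines, $status); // fills $lines, returns the final line
$all = shell_exec('/bin/ls -l /tmp');             // entire output as one string
$lastLine = system('/bin/ls -l /tmp', $status);   // echoes as it runs, returns the final line

// passthru() is the right choice for binary output such as a generated PDF
// (shown commented out here, since headers must be sent before any echoed output):
// header('Content-type: application/pdf');
// passthru('/usr/local/bin/pdftk form.pdf fill_form data.fdf output - flatten');
```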
Also see [PHP exec vs-system vs passthru SO Question](https://stackoverflow.com/questions/732832/php-exec-vs-system-vs-passthru).
The implementation of these functions is located at [exec.c](https://github.com/php/php-src/blob/master/ext/standard/exec.c#L60) and uses popen.
|