Iterating through a golang map
I have a map of type: `map[string]interface{}`
And finally, I end up with something like this (after deserializing from a yml file using goyaml):
```
mymap = map[foo:map[first: 1] boo: map[second: 2]]
```
How can I iterate through this map? I tried the following:
```
for k, v := range mymap{
...
}
```
But I get an error:
```
cannot range over mymap
typechecking loop involving for loop
```
Please help.
| For example,
```
package main

import "fmt"

func main() {
    type Map1 map[string]interface{}
    type Map2 map[string]int
    m := Map1{"foo": Map2{"first": 1}, "boo": Map2{"second": 2}}
    // m = map[foo:map[first: 1] boo: map[second: 2]]
    fmt.Println("m:", m)
    for k, v := range m {
        fmt.Println("k:", k, "v:", v)
    }
}
```
Output:
```
m: map[boo:map[second:2] foo:map[first:1]]
k: boo v: map[second:2]
k: foo v: map[first:1]
```
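If the values are themselves maps, as in the YAML example from the question, a type assertion inside the loop lets you descend into them. Below is a minimal sketch reusing the `Map1`/`Map2` types from above; with goyaml the concrete nested type may differ, so the asserted type is an assumption:
```
for k, v := range m {
    if inner, ok := v.(Map2); ok { // assumes the nested value really is a Map2
        for ik, iv := range inner {
            fmt.Println("outer:", k, "inner:", ik, "value:", iv)
        }
    } else {
        fmt.Println("k:", k, "v:", v)
    }
}
```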
|
Ingress Nginx - how to serve assets to application
I have an issue: I am deploying an application on [hostname]/product/console, but the .css and .js files are being requested from [hostname]/product/static, hence they are not being loaded and I get a 404.
I have tried `nginx.ingress.kubernetes.io/rewrite-target:` to no avail.
I also tried using: `nginx.ingress.kubernetes.io/location-snippet: |
location = /product/console/ {
proxy_pass http://[hostname]/product/static/;
}`
But the latter does not seem to be picked up by the nginx controller at all. This is my ingress.yaml
```
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/location-snippet: |
      location = /product/console/ {
        proxy_pass http://[hostname]/product/static/;
      }
spec:
  rules:
  - host: {{.Values.HOSTNAME}}
    http:
      paths:
      - path: /product/console
        backend:
          serviceName: product-svc
          servicePort: prod ##25022
      - path: /product/
        backend:
          serviceName: product-svc
          servicePort: prod #25022
```
--
Can I ask for some pointers? I have been trying to google this out and tried some different variations, but I seem to be doing something wrong. Thanks!
| **TL;DR**
To diagnose why you get the 404 error, you can check the `nginx-ingress` controller pod logs. You can do it with the command below:
`kubectl logs -n ingress-nginx INGRESS_NGINX_CONTROLLER_POD_NAME`
You should get output similar to this (depending on your use case):
```
CLIENT_IP - - [12/May/2020:11:06:56 +0000] "GET / HTTP/1.1" 200 238 "-" "REDACTED" 430 0.003 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 276 0.003 200
CLIENT_IP - - [12/May/2020:11:06:56 +0000] "GET /assets/styles/style.css HTTP/1.1" 200 22 "http://SERVER_IP/" "REDACTED" 348 0.002 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 22 0.002 200
```
With the above logs you can check whether the requests are handled properly by the `nginx-ingress` controller and where they are sent.
Also you can check the [Kubernetes.github.io: ingress-nginx: Ingress-path-matching](https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/). It's a document describing how `Ingress` matches paths with regular expressions.
---
You can experiment with `Ingress`, by following below example:
- Deploy `nginx-ingress` controller
- Create a `pod` and a `service`
- Run example application
- Create an `Ingress` resource
- Test
- Rewrite example
### Deploy `nginx-ingress` controller
You can deploy your `nginx-ingress` controller by following official documentation:
[Kubernetes.github.io: Ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/)
### Create a `pod` and a `service`
Below is an example definition of a pod and a service attached to it which will be used for testing purposes:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
---
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-service
spec:
  selector:
    app: ubuntu
  ports:
  - name: ubuntu-port
    port: 8080
    targetPort: 8080
    nodePort: 30080
  type: NodePort
```
### Example page
I created a basic `index.html` with one `css` file to simulate the request process. You need to create these files inside the pod (manually, or copy them to the pod).
The file tree looks like this:
- **index.html**
- assets/styles/**style.css**
**index.html**:
```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="assets/styles/style.css">
    <title>Document</title>
</head>
<body>
    <h1>Hi</h1>
</body>
</html>
```
Please take a specific look at this line:
```
<link rel="stylesheet" href="assets/styles/style.css">
```
**style.css**:
```
h1 {
color: red;
}
```
You can run above page with `python`:
- `$ apt update && apt install -y python3`
- `$ python3 -m http.server 8080` (run in the directory where `index.html` and the `assets` folder are stored).
## Create an `Ingress` resource
Below is an example `Ingress` resource configured to use `nginx-ingress` controller:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: ubuntu-service
          servicePort: ubuntu-port
```
After applying above resource you can start to test.
### Test
You can go to your browser and enter the external IP address associated with your `Ingress` resource.
**As I said above you can check the logs of `nginx-ingress` controller pod to check how your controller is handling request.**
If you run command mentioned earlier `python3 -m http.server 8080` you will get logs too:
```
$ python3 -m http.server 8080
Serving HTTP on 0.0.0.0 port 8080 (http://0.0.0.0:8080/) ...
10.48.0.16 - - [12/May/2020 11:06:56] "GET / HTTP/1.1" 200 -
10.48.0.16 - - [12/May/2020 11:06:56] "GET /assets/styles/style.css HTTP/1.1" 200 -
```
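If you prefer the terminal over the browser, the same requests can be made with `curl` (EXTERNAL_IP is a placeholder for the address of your `Ingress`):
```
curl -v http://EXTERNAL_IP/
curl -v http://EXTERNAL_IP/assets/styles/style.css
```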
### Rewrite example
I've edited the `Ingress` resource to show you an example of a path rewrite:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-example
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host:
    http:
      paths:
      - path: /product/(.*)
        backend:
          serviceName: ubuntu-service
          servicePort: ubuntu-port
```
Changes were made to lines:
```
nginx.ingress.kubernetes.io/rewrite-target: /$1
```
and:
```
- path: /product/(.*)
```
Steps:
- The browser sent: `/product/`
- Controller got `/product/` and had it rewritten to `/`
- Pod got `/` from a controller.
Logs from the `nginx-ingress` controller:
```
CLIENT_IP - - [12/May/2020:11:33:23 +0000] "GET /product/ HTTP/1.1" 200 228 "-" "REDACTED" 438 0.002 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 276 0.001 200 fb0d95e7253335fc82cc84f70348683a
CLIENT_IP - - [12/May/2020:11:33:23 +0000] "GET /product/assets/styles/style.css HTTP/1.1" 200 22 "http://SERVER_IP/product/" "REDACTED" 364 0.002 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 22 0.002 200
```
Logs from the pod:
```
10.48.0.16 - - [12/May/2020 11:33:23] "GET / HTTP/1.1" 200 -
10.48.0.16 - - [12/May/2020 11:33:23] "GET /assets/styles/style.css HTTP/1.1" 200 -
```
Please let me know if you have any questions about that.
|
why ({}+{})="[object Object][object Object]"?
I have tested the code:
```
{}+{} = NaN;
({}+{}) = "[object Object][object Object]";
```
Why does adding the `()` change the result?
| `{}+{}` is a *block* followed by an expression. The first `{}` is the block (like the kind you attach to an `if` statement), the `+{}` is the expression. The first `{}` is a block because when the parser is looking for a statement and sees `{`, it interprets it as the opening of a block. That block, being empty, does nothing. Having processed the block, the parser sees the `+` and reads it as a unary `+`. That shifts the parser into handling an expression. In an expression, a `{` starts an object initializer instead of a block, so the `{}` is an object initializer. The object initializer creates an object, which `+` then tries to coerce to a number, getting `NaN`.
In `({}+{})`, the opening `(` shifts the parser into the mode where it's expecting an expression, not a statement. So the `()` contains *two* object initializers with a *binary* `+` (e.g., the "addition" operator, which can be arithmetic or string concatenation) between them. The binary `+` operator will attempt to add or concatenate depending on its operands. It coerces its operands to primitives, and in the case of `{}`, they each become the string `"[object Object]"`. So you end up with `"[object Object][object Object]"`, the result of concatenating them.
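A short script makes the two parses visible side by side (expected output shown in the comments):
```
// Inside parentheses both {} are object initializers, so binary + concatenates:
console.log(({} + {}));               // "[object Object][object Object]"
console.log(String({}) + String({})); // same result with the coercion spelled out

// Evaluating the text as a statement reproduces the block-plus-unary-+ reading:
console.log(eval("{} + {}"));         // NaN
```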
|
ZFS storage on Docker
I would like to try out ZFS with Docker on Ubuntu (16.04). I followed this guide: <https://docs.docker.com/engine/userguide/storagedriver/zfs-driver/>
```
> lsmod | grep zfs
zfs 2813952 5
zunicode 331776 1 zfs
zcommon 57344 1 zfs
znvpair 90112 2 zfs,zcommon
spl 102400 3 zfs,zcommon,znvpair
zavl 16384 1 zfs
```
Listing the ZFS mounts
```
>sudo zfs list
NAME USED AVAIL REFER MOUNTPOINT
zpool-docker 261K 976M 53.5K /zpool-docker
zpool-docker/docker 120K 976M 120K /var/lib/docker
```
After starting docker
```
> sudo docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: zfs
Dirs: 0
...
```
Wonder why I still get "Storage Driver: aufs" and "Root Dir: /var/lib/docker/aufs" in place of zfs?
Also how can I map "/zpool-docker" into the Ubuntu container image?
| Assuming you have:
- a ZFS pool (let's call it `data`)
- a ZFS dataset mounted on `/var/lib/docker` (created with a command along the line of: `zfs create -o mountpoint=/var/lib/docker data/docker`)
Then:
Stop your docker daemon (eg. `systemctl stop docker.service`)
Create a file `/etc/docker/daemon.json` or amend it to contain a line with `"storage-driver"` set to `zfs`:
```
{
...
"storage-driver": "zfs"
...
}
```
Restart your docker daemon.
`docker info` should now reveal:
```
Storage Driver: zfs
Zpool: data
Zpool Health: ONLINE
Parent Dataset: data/docker
```
|
How to return the result from Task?
I have the following methods:
```
public int getData() { return 2; } // suppose it is slow and takes 20 sec
// pseudocode
public int GetPreviousData()
{
Task<int> t = new Task<int>(() => getData());
return _cachedData; // some previous value
_cachedData = t.Result; // _cachedData == 2
}
```
I don't want to wait for the result of an already running operation.
I want to return `_cachedData` and update it after the `Task` will finish.
How to do this? I'm using `.net framework 4.5.2`
| You might want to use an `out` parameter here:
```
public Task<int> GetPreviousDataAsync(out int cachedData)
{
Task<int> t = Task.Run(() => getData());
cachedData = _cachedData; // some previous value
return t; // _cachedData == 2
}
int cachedData;
// (inside an async method)
int newData = await GetPreviousDataAsync(out cachedData);
```
Pay attention to the `Task.Run` thing: this starts a task using the thread pool and returns a `Task<int>` to let the caller decide if it should be awaited, continued or *fire and forget* it.
See the following sample. I've re-arranged everything into a class:
```
class A
{
private int _cachedData;
private readonly static AutoResetEvent _getDataResetEvent = new AutoResetEvent(true);
private int GetData()
{
return 1;
}
public Task<int> GetPreviousDataAsync(out int cachedData)
{
// This will force calls to this method to be executed one by one, avoiding
// N calls to his method update _cachedData class field in an unpredictable way
// It will try to get a lock in 6 seconds. If it goes beyong 6 seconds it means that
// the task is taking too much time. This will prevent a deadlock
if (!_getDataResetEvent.WaitOne(TimeSpan.FromSeconds(6)))
{
throw new InvalidOperationException("Some previous operation is taking too much time");
}
// It has acquired an exclusive lock since WaitOne returned true
Task<int> getDataTask = Task.Run(() => GetData());
cachedData = _cachedData; // some previous value
// Once the getDataTask has finished, this will set the
// _cachedData class field. Since it's an asynchronous
// continuation, the return statement will be hit before the
// task ends, letting the caller await for the asynchronous
// operation, while the method was able to output
// previous _cachedData using the "out" parameter.
getDataTask.ContinueWith
(
t =>
{
if (t.IsCompleted)
_cachedData = t.Result;
// Open the door again to let other calls proceed
_getDataResetEvent.Set();
}
);
return getDataTask;
}
public void DoStuff()
{
int previousCachedData;
// Don't await it, when the underlying task ends, sets
// _cachedData already. This is like saying "fire and forget it"
GetPreviousDataAsync(out previousCachedData);
}
}
```
|
Most efficient way to find an entry in a C++ vector
I'm trying to construct an output table containing 80 rows of table status, that could be `EMPTY` or `USED` as below
```
+-------+-------+
| TABLE | STATE |
+-------+-------+
| 00 | USED |
| 01 | EMPTY |
| 02 | EMPTY |
..
..
| 79 | EMPTY |
+-------+-------+
```
I have a vector `m_availTableNums` which contains a list of available table numbers. In the below example, I'm putting 20 random table numbers into the above vector such that all the remaining 60 would be empty. My logic below works fine.
Is there a scope for improvement here on the find logic?
```
#include <iostream>
#include <iomanip>
#include <cmath>
#include <string.h>
#include <cstdlib>
#include <vector>
#include <algorithm>
#include <ctime>
using namespace std;
int main(int argc, const char * argv[]) {
std::vector<uint8_t> m_availTableNums;
char tmpStr[50];
uint8_t tableNum;
uint8_t randomCount;
srand(time(NULL));
for( tableNum=0; tableNum < 20; tableNum++ )
{
randomCount = ( rand() % 80 );
m_availTableNums.push_back( randomCount );
}
sprintf(tmpStr, "+-------+-------+");
printf("%s\n", tmpStr);
tmpStr[0]='\0';
sprintf(tmpStr, "| TABLE | STATE |");
printf("%s\n", tmpStr);
tmpStr[0]='\0';
sprintf(tmpStr, "+-------+-------+");
printf("%s\n", tmpStr);
tableNum = 0;
for( tableNum=0; tableNum < 80 ; tableNum++ )
{
tmpStr[0]='\0';
if ( std::find(m_availTableNums.begin(), m_availTableNums.end(), tableNum) != m_availTableNums.end() )
{
sprintf(tmpStr, "| %02d | EMPTY |", tableNum );
} else {
sprintf(tmpStr, "| %02d | USED |", tableNum );
}
printf("%s\n", tmpStr);
}
tmpStr[0]='\0';
sprintf(tmpStr, "+-------+-------+");
printf("%s\n", tmpStr);
return 0;
}
```
| # Header files
It's strange that this code uses the C header `<string.h>` but the C++ versions of `<cmath>`, `<ctime>` and `<cstdlib>`. I recommend sticking to the C++ headers except on the rare occasions that you need to compile the same code with a C compiler. In this case, I don't see anything using `<cstring>`, so we can probably just drop that, along with `<iomanip>`, `<iostream>` and `<cmath>`. And we need to add some missing includes: `<cstdint>` and `<cstdio>`.
# Avoid `using namespace std`
The standard namespace is not one that's designed for wholesale import like this. Unexpected name collisions when you add another header or move to a newer C++ could even cause changes to the meaning of your program.
# Use the appropriate signature for `main()`
Since we're ignoring the command-line arguments, we can use a `main()` that takes no parameters:
```
int main()
```
# Remove pointless temporary string
Instead of formatting into `tmpStr` and immediately printing its contents to standard output, we can eliminate that variable by formatting directly to standard output (using the same format string). For example, instead of:
>
>
> ```
> std::sprintf(tmpStr, "+-------+-------+");
> std::printf("%s\n", tmpStr);
> tmpStr[0]='\0';
>
> std::sprintf(tmpStr, "| TABLE | STATE |");
> std::printf("%s\n", tmpStr);
> tmpStr[0]='\0';
>
> std::sprintf(tmpStr, "+-------+-------+");
> std::printf("%s\n", tmpStr);
>
> ```
>
>
we could simply write:
```
std::puts("+-------+-------+\n"
"| TABLE | STATE |\n"
"+-------+-------+");
```
And instead of
>
>
> ```
> tmpStr[0]='\0';
> if ( std::find(m_availTableNums.begin(), m_availTableNums.end(), tableNum) != m_availTableNums.end() )
> {
> std::sprintf(tmpStr, "| %02d | EMPTY |", tableNum );
> } else {
> std::sprintf(tmpStr, "| %02d | USED |", tableNum );
> }
>
> printf("%s\n", tmpStr);
>
> ```
>
>
we would have:
```
if (std::find(m_availTableNums.begin(), m_availTableNums.end(), tableNum) != m_availTableNums.end()) {
std::printf("| %02d | EMPTY |\n", tableNum);
} else {
std::printf("| %02d | USED |\n", tableNum);
}
```
# Reduce duplication
Most of these statements are common:
>
>
> ```
> std::printf("| %02d | EMPTY |\n", tableNum);
> } else {
> std::printf("| %02d | USED |\n", tableNum);
> }
>
> ```
>
>
The only bit that's different is the `EMPTY` or `USED` string. So let's decide that first:
```
const char *status =
std::find(m_availTableNums.begin(), m_availTableNums.end(), tableNum) != m_availTableNums.end()
? "EMPTY" : "USED";
std::printf("| %02d | %-5s |\n", tableNum, status);
```
# Prefer `nullptr` value to `NULL` macro
The C++ null pointer has a distinct type, whereas `NULL` or `0` can be interpreted as an integer.
# Reduce scope of variables
`randomCount` doesn't need to be valid outside the first `for` loop, and we don't need to use the same `tableNum` for both loops. Also, we could follow convention and use a short name for a short-lived loop index; `i` is the usual choice:
```
for (std::uint8_t i = 0; i < 20; ++i) {
std::uint8_t randomCount = rand() % 80;
m_availTableNums.push_back(randomCount);
}
```
```
for (std::uint8_t i = 0; i < 80; ++i) {
```
# Avoid magic numbers
What's special about `80`? Could we need a different range? Let's give the constant a name, and then we can be sure that the loop matches this range:
```
constexpr std::uint8_t LIMIT = 80;
...
std::uint8_t randomCount = rand() % LIMIT;
...
for (std::uint8_t i = 0; i < LIMIT; ++i) {
```
# A departure from specification
The description says
>
> I'm putting 20 random table numbers into the above vector such that, all the remaining 60 would be empty.
>
>
>
That's not exactly what's happening, as we're sampling *with replacement* from the values 0..79. There's nothing to prevent duplicates being added (it's actually quite unlikely that there will be exactly 60 empty values).
# Reduce the algorithmic complexity
Each time through the loop, we use `std::find()` to see whether we have any matching elements. This is a *linear* search, so it examines elements in turn until it finds a match. Since it only finds a match one-quarter of the time, the other three-quarters will examine *every element in the list*, and the time it takes will be proportional to the list length - we say it scales as O(*n*), where *n* is the size of the vector. The complete loop therefore scales as O(*mn*), where *m* is the value of `LIMIT`.
We can reduce the complexity to O(*m* + *n*) if we use some extra storage to store the values in a way that makes them easy to test. For example, we could populate a vector that's indexed by the values from `m_availTableNums`:
```
auto by_val = std::vector<bool>(LIMIT, false);
for (auto value: m_availTableNums)
by_val[value] = true;
for (std::uint8_t i = 0; i < LIMIT; ++i) {
const char *status = by_val[i] ? "EMPTY" : "USED";
std::printf("| %02d | %-5s |\n", i, status);
}
```
If the range were much larger, we might use an (unordered) set instead of `vector<bool>`. We might also choose `vector<char>` instead of `vector<bool>` for better speed at a cost of more space.
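For example, here is a sketch of the same lookup using `std::unordered_set` (it assumes an extra `#include <unordered_set>`):
```
std::unordered_set<std::uint8_t> avail(m_availTableNums.begin(), m_availTableNums.end());

for (std::uint8_t i = 0; i < LIMIT; ++i) {
    const char *status = avail.count(i) ? "EMPTY" : "USED";
    std::printf("| %02d | %-5s |\n", i, status);
}
```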
---
# Simplified code
Here's my version, keeping to the spirit of the original (creating a list of indices, rather than changing to storing in the form we want to use them):
```
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>
int main()
{
constexpr std::uint8_t LIMIT = 80;
std::vector<std::uint8_t> m_availTableNums;
std::srand(std::time(nullptr));
for (std::uint8_t i = 0; i < 20; ++i) {
std::uint8_t randomCount = rand() % LIMIT;
m_availTableNums.push_back(randomCount);
}
std::puts("+-------+-------+\n"
"| TABLE | STATE |\n"
"+-------+-------+");
auto by_val = std::vector<bool>(LIMIT, false);
for (auto value: m_availTableNums)
by_val[value] = true;
for (std::uint8_t i = 0; i < LIMIT; ++i) {
const char *status = by_val[i] ? "EMPTY" : "USED";
std::printf("| %02d | %-5s |\n", i, status);
}
std::puts("+-------+-------+");
}
```
|
When searchController is active, status bar style changes
Throughout my app I have set the status bar style to light content.[![enter image description here](https://i.stack.imgur.com/st8JB.png)](https://i.stack.imgur.com/st8JB.png)
However, when the search controller is active, it resets to the default style: [![enter image description here](https://i.stack.imgur.com/sCfIM.png)](https://i.stack.imgur.com/sCfIM.png)
I have tried everything to fix this, including checking if the search controller is active in an if statement, and then changing the tint color of the navigation bar to white, and setting the status bar style to light content. How do I fix this?
| A couple of options; this could well be a bug, but in the meantime, have you tried the following:
**Option 1:**
In your info.plist, set the option "Status bar style"; this is a string value with the value "UIStatusBarStyleLightContent".
Also in your info.plist, add the key "View controller-based status bar appearance" and set its value to "NO".
Then, in each view controller in your app, explicitly set the following in your initializers, your viewWillAppear, and your viewDidLoad:
```
UIApplication.sharedApplication().statusBarStyle = UIStatusBarStyle.LightContent
```
**Option 2:**
In your info.plist, set the option "Status bar style" to "UIStatusBarStyleLightContent". Also in your info.plist, add the key "View controller-based status bar appearance" and set its value to "YES".
Then, in each view controller place the following methods
```
override func preferredStatusBarStyle() -> UIStatusBarStyle {
return UIStatusBarStyle.LightContent;
}
override func prefersStatusBarHidden() -> Bool {
return false
}
```
Also, you may need to do something like this:
```
self.extendedLayoutIncludesOpaqueBars = true
```
Also, I translated it to Swift code for you
|
Remote Desktop Forgets Multi Monitor Configuration
By default, I RDP from my personal PC to my work laptop, in order to make use of all my monitors without needing to resort to a KVM.
On my old laptop, it would remember the position of windows between RDP sessions, provided that I had not logged into the machine physically between sessions. This new laptop, however, forgets the window positions on each connection and forces my windows all onto the main monitor.
It *does* span multiple monitors, and I'm able to use them fine during the session, but once I end it and reconnect, it resets every time.
I'm using the same shortcut I used with the old laptop (I use a USB-C ethernet dongle on the laptop with a static IP assigned on the router), so the settings should all be the same.
How can I stop the laptop from resetting the screens on every reconnection?
| I think this is a bug courtesy of the latest [two] Windows updates (1903, as listed in the tags of the original post, but also the more recently released 1909), because I had the exact same issue and it was resolved by rolling back to 1809.
I have a 12-monitor local computer and RDP into a host computer with only 1 monitor using the "use all local monitors" option. Prior to updating from 1809, it would remember the window locations between sessions. Now it is like starting from scratch each time I connect via RDP, even if there isn't a local session between RDP sessions. Annoying AF. The first solution I came up with was to do a rollback to 1809.
I also had a second host with the same issue, also upgraded from 1809 to 1909, but it was past the 10-day rollback window. Since the rollback to 1809 resolved the issue with the first host, I investigated further what had changed from 1809 to 1909 to find the root cause, and the issue seems to be the deprecation of the XDDM display driver and the forced use of WDDM.
Using GPO or registry modification to eliminate the forced use of WDDM also resolved the intersession windowing issue (amongst other things), without needing a rollback. See here for instructions on how to implement: [Remote Desktop black screen](https://answers.microsoft.com/en-us/windows/forum/all/remote-desktop-black-screen/63c665b5-883d-45e7-9356-f6135f89837a?page=2)
>
> In [Local Group Policy Editor->Local Computer Policy->Administrative
> Templates->Windows Components->Remote Desktop Services->Remote Desktop
> Session Host->Remote Session Enviroment], set the Policy **[Use WDDM
> graphics display driver for Remote Desktop Connections]** to
> **Disabled**.
>
>
> [![enter image description here](https://i.stack.imgur.com/znCHG.png)](https://i.stack.imgur.com/znCHG.png)
>
>
>
Alternatively, to use the registry modification method, open the command prompt with administrator privileges and type:
`reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v "fEnableWddmDriver" /t REG_DWORD /d 0 /f`
See: <https://answers.microsoft.com/en-us/windows/forum/all/windows-10-1903-may-update-black-screen-with/23c8a740-0c79-4042-851e-9d98d0efb539?page=1>
Note that the machine must be restarted for this change to take effect.
---
Note: Some articles also claim KB452941 will solve various display issues related to the XDDM vs WDDM driver as well but it did not in my case.
|
Does NSPasteboard retain owner objects?
You can call `NSPasteboard` like this:
```
[pboard declareTypes:types owner:self];
```
Which means that the pasteboard will later ask the owner to supply data for a type as needed. However, what I can't find from the docs (and maybe I've missed something bleeding obvious), is whether or not `owner` is retained.
In practice what's worrying me is if the owner is a **weak** reference, it could be deallocated, causing a crash if the pasteboard then tries to request data from it.
**Note:** I should probably clarify that I'm interested in this more as an aid to tracking down a bug, than making my app rely on it. But I do also want the docs clarified.
| The docs:
>
> *newOwner*
>
>
> The object responsible for writing
> data to the pasteboard, or nil if you
> provide data for all types
> immediately. If you specify a newOwner
> object, it must support all of the
> types declared in the newTypes
> parameter and must remain valid for as
> long as the data is promised on the
> pasteboard.
>
>
>
Translation: The pasteboard may or may not retain the owner. Whether it does is an implementation detail that you should not rely upon. It is your responsibility to retain the owner for as long as it acts as an owner.
What the docs are saying about "remain valid" actually refers to the *proxied contents* that you might lazily provide. I.e. if the user were to copy something, you wouldn't want the owner's representation of what was copied to change as the user makes further edits with an intention of pasting sometime later.
The documentation says nothing about the retain/release policy of the owner (nor is there any kind of blanket rule statement). It should be clarified (rdar://8966209 filed). As it is, making an assumption about the retain/release behavior is dangerous.
|
I can't start a new project on Netbeans
## The issue:
When I open the "add new project" dialog (screenshot below), I can't create a new project. The loading message (hourglass icon) stays on forever. Except for "cancel", the other buttons are disabled.
It was working fine a few days ago, I haven't changed any setting prior to the issue appearing. I ran the internal update feature, but the issue persists.
![enter image description here](https://i.stack.imgur.com/tc0ud.png)
## The info:
**My OS version**: Ubuntu 12.04.2 LTS 64 bits
**Netbeans version**:
Help -> about
```
Product Version: NetBeans IDE 7.2.1 (Build 201210100934)
Java: 1.6.0_27; OpenJDK 64-Bit Server VM 20.0-b12
System: Linux version 3.2.0-49-generic running on amd64; UTF-8; pt_BR (nb)
User directory: /home/user/.netbeans/7.2.1
Cache directory: /home/user/.cache/netbeans/7.2.1
```
## What I tried:
- Changing the Look and Feel with the `--laf` command-line option. The look-and-feel does change, but the issue persists.
- Using the internal update command, a plugin got updated but the issue persists.
- Downloading and installing the latest version (7.3.1), it imported the settings from the previous version and the issue persists.
- Removing the settings folder `~/.netbeans/7.3.1`, restarting netbeans, choosing not to import settings and rather have a new clean install
| Just posted the same question [here](https://askubuntu.com/questions/326933/netbeans-broken-after-openjdk-update) ... the solution for me was to downgrade OpenJDK from **6b27** to **6b24** (look at the link for details).
My NetBeans looked ***exactly*** like in your screenshot and also had some other strange problems.
I would suggest you do `java -version` if this shows that you have **6b27** installed and `cat /var/log/dpkg.log | grep openjdk` shows that you recently received the OpenJDK update you can try to fix the problem reverting to **6b24** using this command:
```
apt-get install openjdk-6-jre=6b24-1.11.1-4ubuntu2 openjdk-6-jre-headless=6b24-1.11.1-4ubuntu2 icedtea-6-jre-cacao=6b24-1.11.1-4ubuntu2
```
**edit**
As I discovered some other problems (SSH connection wouldn't establish within NetBeans) I finally took the step to upgrade to Oracle JDK7. To start NetBeans with another JDK you have got to edit `./netbeans-7.X/etc/netbeans.conf` and change the line `netbeans_jdkhome="/usr/lib/jvm/java-7-oracle/jre"` (I guess this gets generated in the NetBeans installation process). Otherwise it will still use your old JDK no matter what `java -version` says.
|
Recyclerview item click ripple effect
I am trying to add a `Ripple` effect to the `RecyclerView`'s items. I had a look online, but could not find what I need. I have tried setting the `android:background` attribute on the `RecyclerView` itself to `"?android:selectableItemBackground"`, but it did not work:
My Parent layout is like this
```
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:padding="10dp">
<android.support.v7.widget.RecyclerView
android:id="@+id/dailyTaskList"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:clickable="true"
android:focusable="true"
android:scrollbars="vertical" />
</LinearLayout>
```
and adapter template is shown in below
```
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:custom="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="5dp">
<TextView
android:id="@+id/txtTitle"
style="@style/kaho_panel_sub_heading_textview_style"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/txtDate"
style="@style/kaho_content_small_textview_style"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
</LinearLayout>
</LinearLayout>
```
Kindly give me solution
| Adding the `android:background="?attr/selectableItemBackground"` attribute to the top-most parent of your item layout should do the trick.
However, in some cases it still misses the animation; adding `android:clickable="true"` fixes it.
```
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="?attr/selectableItemBackground"
android:orientation="vertical">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="5dp">
<TextView
android:id="@+id/txtTitle"
style="@style/kaho_panel_sub_heading_textview_style"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/txtDate"
style="@style/kaho_content_small_textview_style"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
</LinearLayout>
</LinearLayout>
```
|
C++ name mangling and linker symbol resolution
The name mangling schemes of C++ compilers vary, but they are documented publicly. Why aren't linkers made to decode a mangled symbol from an object file and attempt to find a mangled version via any of the mangling conventions across the other object files/static libraries? If linkers could do this, wouldn't it help alleviate the problem with C++ libraries that they have to be re-compiled by each compiler to have symbols that can be resolved?
List of mangling documentations I found:
- [MSVC++ mangling conventions](http://en.wikipedia.org/wiki/Microsoft_Visual_C++_Name_Mangling)
- [GCC un-mangler](https://stackoverflow.com/questions/4468770/c-name-mangling-decoder-for-g?answertab=votes#tab-top)
- [MSVC++ un-mangling function documentation](http://msdn.microsoft.com/en-us/library/windows/desktop/ms681400.aspx)
- [LLVM mangle class](https://llvm.org/svn/llvm-project/cfe/tags/RELEASE_26/lib/CodeGen/Mangle.cpp)
| Name mangling is a very small part of the problem.
Object layout is only defined in the C++ standard for a very restricted set of classes (essentially only standard layout types - and then only as much as the C standard does; alignment and padding are still to be considered). For anything that has virtuals, any form of non-trivial inheritance, mixed public and private members, etc., the standard doesn't say how they should be laid out in memory.
Two compilers can (and this is not purely hypothetical, this does happen in practice) return different values for `sizeof(std::string)` for instance. There is nothing in the standard that says: *an `std::string` is represented like this in memory*. So interoperability at the object file level doesn't exist.
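As a small illustration (a sketch of my own, not taken from the question): nothing requires two compilers, or two standard libraries, to agree on the numbers this program prints, because neither `std::string` nor a class with virtuals and mixed access specifiers has a standard-mandated layout:
```
#include <iostream>
#include <string>

struct Widget {                  // not standard-layout: virtual + mixed access
    virtual ~Widget() = default;
private:
    int hidden = 0;
public:
    int visible = 0;
};

int main() {
    std::cout << sizeof(std::string) << ' ' << sizeof(Widget) << '\n';
}
```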
Binary compatibility between C++ compilers is a much larger problem than just name mangling. You'd need to standardize much more than what is currently specified.
|
apply function not changing original value
Related to question on this page: [Randomly associate elements of two vectors given conditions](https://stackoverflow.com/questions/25221199/randomly-associate-elements-of-two-vectors-given-conditions)
If I have following data:
```
loss=c(45,10,5,1)
capitals = structure(list(capital = c(100L, 50L, 4L, 25L, 5L), loss = c(5L,
10L, 10L, 1L, 45L)), .Names = c("capital", "loss"), class = "data.frame", row.names = c(NA,
-5L))
capitals
capital loss
1 100 5
2 50 10
3 4 10
4 25 1
5 5 45
>
```
I am trying to correct any row with loss>capital (assign another random value from vector loss so that loss<=capital) by following command:
```
apply(capitals, 1, function(x){while(x[2]>x[1]) {x[2] = sample(loss,1); print(x[2])} })
```
print function shows that the value is changing in the function but value is not changing in dataframe capitals:
```
apply(capitals, 1, function(x){while(x[2]>x[1]) {x[2] = sample(loss,1); print(x[2])} })
loss
5
loss
10
loss
10
loss
1
loss
5
NULL
> capitals
capital loss
1 100 5
2 50 10
3 4 10
4 25 1
5 5 45
>
```
Why is value in capitals dataframe not changing and how can this be corrected? Thanks for your help.
| `apply` is evaluating a function, and assignments within functions do not affect the enclosing environment. A copy is being modified, and that copy is destroyed when the function exits.
Instead, to make use of `apply`, you should build an object, letting `apply` return each element. Something like this perhaps:
```
capitals$loss <-
apply(capitals, 1,
function(x){
while(x[2]>x[1])
x[2] <- sample(loss,1)
x[2]
}
)
capitals
## capital loss
## 1 100 5
## 2 50 10
## 3 4 1
## 4 25 1
## 5 5 5
```
Here, the new value for `loss` (`x[2]`) is returned from the function, and collected into a vector by `apply`. This is then used to replace the column in the data frame.
This can be done without the `while` loop, by sampling the desired subset of `loss`. An `if` is required to determine if sampling is needed:
```
apply(capitals, 1,
function(x)
if (x[2] > x[1])
sample(loss[loss<=x[1]], 1)
else
x[2]
)
```
Better yet, instead of using `if`, you can replace only those rows where the condition holds:
```
r <- capitals$capital < capitals$loss
capitals[r, 'loss'] <-
sapply(capitals[r,'capital'],
function(x) sample(loss[loss<=x], 1)
)
```
Here, the rows where replacement is needed is represented by `r` and only those rows are modified (this is the same condition present for the `while` in the original, but the order of the elements has been swapped -- thus the change from greater-than to less-than).
The `sapply` expression loops through the values of `capital` for those rows, and returns a single sample from those entries of `loss` that do not exceed the `capital` value.
|
IIS\_IUSRS and IUSR permissions in IIS8
I've just moved away from IIS6 on Win2003 to IIS8 on Win2012 for hosting ASP.NET applications.
Within one particular folder in my application I need to Create & Delete files. After copying the files to the new server, I kept seeing the following errors when I tried to delete files:
>
> Access to the path 'D:\WebSites\myapp.co.uk\companydata\filename.pdf' is denied.
>
>
>
When I check IIS I see that the application is running under the DefaultAppPool account, however, I never set up Windows permissions on this folder to include **IIS AppPool\DefaultAppPool**
Instead, to stop screaming customers I granted the following permissions on the folder:
**IUSR**
- Read & Execute
- List Folder Contents
- Read
- Write
**IIS\_IUSRS**
- Modify
- Read & Execute
- List Folder Contents
- Read
- Write
This seems to have worked, but I am concerned that too many privileges have been set. I've read conflicting information online about whether **IUSR** is actually needed at all here. Can anyone clarify which users/permissions would suffice to Create and Delete documents on this folder please? Also, is IUSR part of the IIS\_IUSRS group?
## Update & Solution
Please see [my answer below](https://stackoverflow.com/a/36597241/792888). I've had to do this sadly as some recent suggestions were not well thought out, or even safe (IMO).
| I hate to post my own answer, but some answers recently have ignored the solution I posted in my own question, suggesting approaches that are nothing short of foolhardy.
In short - **you do not need to edit any Windows user account privileges at all**. Doing so only introduces risk. The process is entirely managed in IIS using inherited privileges.
## Applying Modify/Write Permissions to the *Correct* User Account
1. Right-click the domain when it appears under the Sites list, and choose *Edit Permissions*
[![](https://i.stack.imgur.com/b237H.png)](https://i.stack.imgur.com/b237H.png)
Under the *Security* tab, you will see `MACHINE_NAME\IIS_IUSRS` is listed. This means that IIS automatically has read-only permission on the directory (e.g. to run ASP.Net in the site). **You do not need to edit this entry**.
[![](https://i.stack.imgur.com/1Jnl7.png)](https://i.stack.imgur.com/1Jnl7.png)
2. Click the *Edit* button, then *Add...*
3. In the text box, type `IIS AppPool\MyApplicationPoolName`, substituting `MyApplicationPoolName` with your domain name or whatever application pool is accessing your site, e.g. `IIS AppPool\mydomain.com`
[![](https://i.stack.imgur.com/P4BQe.png)](https://i.stack.imgur.com/P4BQe.png)
4. Press the *Check Names* button. The text you typed will transform (notice the underline):
[![](https://i.stack.imgur.com/cZDJK.png)](https://i.stack.imgur.com/cZDJK.png)
5. Press *OK* to add the user
6. With the new user (your domain) selected, now you can safely provide any *Modify* or *Write* permissions
[![](https://i.stack.imgur.com/x3TXY.png)](https://i.stack.imgur.com/x3TXY.png)
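If you need to script the same grant (for automated deployments, say), the equivalent from an elevated command prompt would be along these lines; the path and application pool name are just the examples used above, so substitute your own:
```
icacls "D:\WebSites\myapp.co.uk\companydata" /grant "IIS AppPool\mydomain.com:(OI)(CI)M"
```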
|
How much space does BigInteger use?
How many bytes of memory does a BigInteger object use in general ?
| BigInteger internally uses an `int[]` to represent the huge numbers you use.
Thus it really **depends on the size of the number you store in it**. The `int[]` grows dynamically if the current number doesn't fit.
To get the number of bytes your `BigInteger` instance *currently* uses, you can make use of the `Instrumentation` interface, especially [`getObjectSize(Object)`](http://docs.oracle.com/javase/7/docs/api/java/lang/instrument/Instrumentation.html#getObjectSize%28java.lang.Object%29).
```
import java.lang.instrument.Instrumentation;

public class ObjectSizeFetcher {
    private static Instrumentation instrumentation;

    public static void premain(String args, Instrumentation inst) {
        instrumentation = inst;
    }

    public static long getObjectSize(Object o) {
        return instrumentation.getObjectSize(o);
    }
}
```
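If attaching an agent is overkill, a rough estimate (ignoring the object header and the other fields) follows from the fact that the magnitude needs about one `int` per 32 bits of the number; a small sketch:
```
import java.math.BigInteger;

public class RoughSize {
    public static void main(String[] args) {
        BigInteger n = BigInteger.TEN.pow(1000); // a 1001-digit number
        int magnitudeBytes = (n.bitLength() + 7) / 8;
        System.out.println("bits: " + n.bitLength() + ", magnitude: ~" + magnitudeBytes + " bytes");
    }
}
```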
To convince yourself, take a look at the [source code](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/math/BigInteger.java), where it says:
```
/**
* The magnitude of this BigInteger, in <i>big-endian</i> order: the
* zeroth element of this array is the most-significant int of the
* magnitude. The magnitude must be "minimal" in that the most-significant
* int ({@code mag[0]}) must be non-zero. This is necessary to
* ensure that there is exactly one representation for each BigInteger
* value. Note that this implies that the BigInteger zero has a
* zero-length mag array.
*/
final int[] mag;
```
|
Single quote Issue when executing Linux command in Java
I need to execute Linux command like this using Runtime.getRuntime().exec() :
```
/opt/ie/bin/targets --a '10.1.1.219 10.1.1.36 10.1.1.37'
```
Basically, this command connects each target to the server one by one (10.1.1.219, 10.1.1.36, 10.1.1.37). It works well in the terminal; the result should be:
```
['10.1.1.219', '10.1.1.36', '10.1.1.37']
```
But if I execute the command using Runtime.getRuntime().exec(execute), like this :
```
execute = "/opt/ie/bin/targets" + " " + "--a" + " " + "'" + sb
+ "'";
```
Java will treat the single quotes as part of the string to execute; the result will be:
```
callProcessWithInput executeThis=/opt/ie/bin/targets --a '10.1.1.219 10.1.1.36 10.1.1.37'
The output for removing undesired targets :["'10.1.1.219"]
```
Anyone knows how to solve it? Thanks!
| Quote characters are interpreted by the shell, to control how it splits up the command line into a list of arguments. But when you call `exec` from Java, you're not using a shell; you're invoking the program directly. When you pass a single `String` to `exec`, it's split up into command arguments using a `StringTokenizer`, which just splits on whitespace and doesn't give any special meaning to quotes.
If you want more control over the arguments passed to the program, call one of the versions of `exec` that takes a `String[]` parameter. This skips the `StringTokenizer` step and lets you specify the exact argument list that the called program should receive. For example:
```
String[] cmdarray = { "/opt/ie/bin/targets", "--a", "10.1.1.219 10.1.1.36 10.1.1.37" };
Runtime.getRuntime().exec(cmdarray);
```
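On Java 7 and later, `ProcessBuilder` is another way to pass the exact argument list; a minimal sketch with the same arguments as above (error handling omitted):
```
ProcessBuilder pb = new ProcessBuilder(
        "/opt/ie/bin/targets", "--a", "10.1.1.219 10.1.1.36 10.1.1.37");
pb.redirectErrorStream(true); // merge stderr into stdout for easier reading
Process p = pb.start();       // throws IOException, so handle or declare it
```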
|
Python csv.DictReader: parse string?
I am downloading a CSV file directly from a URL using `requests`.
How can I parse the resulting string with `csv.DictReader`?
Right now I have this:
```
r = requests.get(url)
reader_list = csv.DictReader(r.text)
print reader_list.fieldnames
for row in reader_list:
print row
```
But I just get `['r']` as the result of `fieldnames`, and then all kinds of weird things from `print row`.
| From the documentation of [`csv`](https://docs.python.org/3/library/csv.html#csv.reader), the first argument to [`csv.reader`](https://docs.python.org/3/library/csv.html#csv.reader) or [`csv.DictReader`](https://docs.python.org/3/library/csv.html#csv.DictReader) is `csvfile` -
>
> *csvfile* can be any object which supports the [iterator](https://docs.python.org/3/glossary.html#term-iterator) protocol and returns a string each time its `__next__()` method is called — [file objects](https://docs.python.org/3/glossary.html#term-file-object) and list objects are both suitable.
>
>
>
In your case when you give the string as the direct input for the `csv.DictReader()` , the `__next__()` call on that string only provides a single character, and hence that becomes the header, and then `__next__()` is continuously called to get each row.
Hence, you need to either provide an in-memory stream of strings using `io.StringIO`:
```
>>> import csv
>>> s = """a,b,c
... 1,2,3
... 4,5,6
... 7,8,9"""
>>> import io
>>> reader_list = csv.DictReader(io.StringIO(s))
>>> print(reader_list.fieldnames)
['a', 'b', 'c']
>>> for row in reader_list:
... print(row)
...
{'a': '1', 'b': '2', 'c': '3'}
{'a': '4', 'b': '5', 'c': '6'}
{'a': '7', 'b': '8', 'c': '9'}
```
or a list of lines using [`str.splitlines`](https://docs.python.org/3/library/stdtypes.html#str.splitlines):
```
>>> reader_list = csv.DictReader(s.splitlines())
>>> print(reader_list.fieldnames)
['a', 'b', 'c']
>>> for row in reader_list:
... print(row)
...
{'a': '1', 'b': '2', 'c': '3'}
{'a': '4', 'b': '5', 'c': '6'}
{'a': '7', 'b': '8', 'c': '9'}
```
|
Regular expression to check if a String is a positive natural number
I want to check if a string is a positive natural number but I don't want to use `Integer.parseInt()` because the user may enter a number larger than an int. Instead I would prefer to use a regex to return false if a numeric String contains all "0" characters.
```
if(val.matches("[0-9]+")){
// We know that it will be a number, but what if it is "000"?
// what should I change to make sure
// "At Least 1 character in the String is from 1-9"
}
```
Note: the string must contain only `0`-`9` and it must not contain all `0`s; in other words it must have at least 1 character in `[1-9]`.
| You'd be better off using [`BigInteger`](http://docs.oracle.com/javase/7/docs/api/java/math/BigInteger.html) if you're trying to work with an arbitrarily large integer, however the following pattern should match a series of digits containing at least one non-zero character.
```
\d*[1-9]\d*
```
![Regular expression visualization](https://www.debuggex.com/i/rg9jF3AETqaCMAj8.png)
[Debuggex Demo](https://www.debuggex.com/r/rg9jF3AETqaCMAj8)
Debuggex's unit tests seem a little buggy, but you can play with the pattern there. It's simple enough that it should be reasonably cross-language compatible, but in Java you'd need to escape it.
```
Pattern positiveNumber = Pattern.compile("\\d*[1-9]\\d*");
```
---
Note the above (intentionally) matches strings we wouldn't normally consider "positive natural numbers", as a valid string can start with one or more `0`s, e.g. `000123`. If you don't want to match such strings, you can simplify the pattern further.
```
[1-9]\d*
```
![Regular expression visualization](https://www.debuggex.com/i/ZEmlyzwjNlq1mnX-.png)
[Debuggex Demo](https://www.debuggex.com/r/ZEmlyzwjNlq1mnX-)
```
Pattern exactPositiveNumber = Pattern.compile("[1-9]\\d*");
```
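As a quick usage sketch (full-string matching; the test values are chosen only for illustration):
```
Pattern exactPositiveNumber = Pattern.compile("[1-9]\\d*");

System.out.println(exactPositiveNumber.matcher("12345678901234567890").matches()); // true
System.out.println(exactPositiveNumber.matcher("000").matches());  // false - all zeros
System.out.println(exactPositiveNumber.matcher("0123").matches()); // false - leading zero
```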
|
What does idl attribute mean in the WHATWG html5 standard document?
While reading over the WHATWG's [HTML5 - A technical specification for Web developers](http://developers.whatwg.org) I see many references such as:
>
> # Reflecting content attributes in IDL attributes
>
>
> Some IDL attributes are defined to reflect a particular content
> attribute. This means that on getting, the IDL attribute returns the
> current value of the content attribute, and on setting, the IDL
> attribute changes the value of the content attribute to the given
> value.
>
>
>
and:
>
> In conforming documents, there is only one body element. The
> document.body IDL attribute provides scripts with easy access to a
> document's body element.
>
>
> The body element exposes as event handler content attributes a number
> of the event handlers of the Window object. It also mirrors their
> event handler IDL attributes.
>
>
>
My (admittedly fuzzy) understanding comes from the Windows world. I think an .idl file is used to map remote procedure calls in an n-tier distributed app. I would assume a content attribute refers to html element attributes.
There is no place in the standard *that I can see* that explains this usage of the terms "content attribute" and "IDL attribute". Could anyone explain what these terms mean and how the two kinds of attributes relate?
| The IDL ([Interface Definition Language](https://en.wikipedia.org/wiki/Interface_description_language)) comes from the [Web IDL](http://dev.w3.org/2006/webapi/WebIDL/) spec:
>
> This document defines an interface definition language, Web IDL, that
> can be used to describe interfaces that are intended to be implemented
> in web browsers. Web IDL is an IDL variant with a number of features
> that allow the behavior of common script objects in the web platform
> to be specified more readily. How interfaces described with Web IDL
> correspond to constructs within ECMAScript execution environments is
> also detailed in this document.
>
>
>
Content attributes are the ones that appear in the markup:
```
<div id="mydiv" class="example"></div>
```
In the above code `id` and `class` are attributes. Usually a content attribute will have a corresponding IDL attribute.
For example, the following JavaScript:
```
document.getElementById('mydiv').className = 'example'
```
Is equivalent to setting the `class` content attribute.
In JavaScript texts, the IDL attributes are often referred to as properties because they are exposed as properties of DOM objects to JavaScript.
While there's usually a corresponding pair of a content attribute and an IDL attribute/property, they are not necessarily interchangeable. For example, for an `<option>` element:
- the content attribute `selected` indicates the *initial* state of the option (and does not change when the user changes the option),
- the property `selected` reflects the *current* state of the control
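To see the same divergence concretely, here is a small sketch with a text input (hypothetical markup and id) comparing the content attribute with the IDL attribute after the user has typed into the field:
```
// <input id="name" value="initial">
const input = document.getElementById('name');

// ...after the user types "hello" into the field:
input.getAttribute('value'); // "initial" - the content attribute keeps the markup value
input.value;                 // "hello"   - the IDL attribute reflects the current state
```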
|
What is the difference between "workflow engine" and "business process management engine"?
I have come across these two concepts again after a long time.
For example, "Windows Workflow Foundation" is described as a "workflow engine", while projects such as Activiti and jBPM are described as "business process management engines".
Are these two nouns ("workflow engine" and "business process management engine") the same thing?
| In Wikipedia, a ["Workflow Engine"](https://en.wikipedia.org/wiki/Workflow_engine) is defined as:
>
> A software application that manages and executes modeled computer processes.
>
>
>
and from the same source, ["Business Process Management"](https://en.wikipedia.org/wiki/Business_process_management) is defined as:
>
> A "holistic management" approach to aligning an organization's business processes
> with the wants and needs of clients
>
>
>
Hence these nouns should ***not*** be used interchangeably; a "Workflow engine" is a generic engine used for modelling *any* workflow, whereas a "Business Process Management Engine" is a specific implementation of a "Workflow Engine" which models business process management.
So you could make the analogy, "Workflow Engine" is to a block of wood as "Business Process Management Engine" is to a carved wooden walking stick.
|
Why does map return an additional element when using ranges in Haskell?
I've just started learning Haskell and found a strange thing.
Let we have a list:
```
ghci> [0,2..5]
[0,2,4]
```
It has 3 elements. When I use `map` with this list I get 3 element as output, for example:
```
ghci> map (+ 1) [0,2..5]
[1,3,5]
ghci> map (* 2) [0,2..5]
[0,4,8]
ghci> map (`div` 2) [0,2..5]
[0,1,2]
```
But when I use fractional division I get 4 elements in output list:
```
ghci> map (/ 2) [0,2..5]
[0.0,1.0,2.0,3.0]
ghci> length (map (/ 2) [0,2..5])
4
```
Could you please explain why `map` may return more elements then it was?
Thank you!
| It's due to the implementation of `Enum` for `Float` and `Double`:
```
> [0,2..5] :: [Float]
[0.0,2.0,4.0,6.0]
```
It's not `map` doing it, but `Float`. Specifically, if you call `enumFromThenTo 0 2 5 :: [Float]`, you'll get the same list. You'll see the same results for `Double`.
This is hinted at in [the haskell report](https://www.haskell.org/onlinereport/haskell2010/haskellch9.html#verbatim-224), but the behavior is definitely non-obvious. Essentially, it comes down to the implementation of `numericEnumFromThenTo` (we're getting into some Haskell internals here), which is used by the `Enum Float` instance:
```
numericEnumFromThenTo n n' m = takeWhile p (numericEnumFromThen n n')
  where
    p | n' >= n   = (<= m + (n' - n) / 2)
      | otherwise = (>= m + (n' - n) / 2)

numericEnumFromThen n m = iterate (+ (m - n)) n
```
So you have `numericEnumFromThen 0.0 2.0` generating the list `[0.0,2.0,4.0,6.0,8.0,...]`, then you do `takeWhile p` on that, which in this case is equivalent to the function `\x -> x <= 5.0 + (2.0 - 0.0) / 2`, or more simply `\x -> x <= 6.0`, which is why `6.0` is included in the output list of `[0.0,2.0..5.0]`.
I can't explain *why* it's implemented this way, that's pretty baffling to me too, but hopefully I've answered the *how* for its implementation.
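If the goal is to divide exactly the integers the range was meant to produce, one workaround (a sketch, not the only option) is to enumerate over an integral type first and convert afterwards:
```
halves :: [Double]
halves = map ((/ 2) . fromIntegral) ([0,2..5] :: [Int])
-- halves == [0.0,1.0,2.0]
```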
|
How to check the uniqueness inside a for-loop?
Is there a way to check slices/maps for the presence of a value?
I would like to add a value to a slice ***only*** if it does ***not*** exist in the slice.
This works, but it seems verbose. Is there a better way to do this?
```
orgSlice := []int{1, 2, 3}
newSlice := []int{}
newInt := 2
newSlice = append(newSlice, newInt)
for _, v := range orgSlice {
if v != newInt {
newSlice = append(newSlice, v)
}
}
newSlice == [2 1 3]
```
| Your approach would take linear time for each insertion. A better way would be to use a `map[int]struct{}`. Alternatively, you could also use a `map[int]bool` or something similar, but the empty `struct{}` has the advantage that it doesn't occupy any additional space. Therefore `map[int]struct{}` is a popular choice for a set of integers.
**Example:**
```
set := make(map[int]struct{})

set[1] = struct{}{}
set[2] = struct{}{}
set[1] = struct{}{}
// ...

// each value will be printed only once, in no particular order
for key := range set {
    fmt.Println(key)
}

// you can use the comma-ok idiom to check for existing keys
if _, ok := set[1]; ok {
    fmt.Println("element found")
} else {
    fmt.Println("element not found")
}
```
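Applied to the original question, a minimal sketch that appends `newInt` and then the original elements, skipping anything already seen (the map gives O(1) membership checks, the slice preserves order):
```
orgSlice := []int{1, 2, 3}
newInt := 2

seen := make(map[int]struct{}, len(orgSlice)+1)
newSlice := make([]int, 0, len(orgSlice)+1)

for _, v := range append([]int{newInt}, orgSlice...) {
    if _, ok := seen[v]; !ok {
        seen[v] = struct{}{}
        newSlice = append(newSlice, v)
    }
}
// newSlice == [2 1 3]
```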
|
Does using a 'foreign' domain as email sender reduce email reputation?
We want to send emails through our webapp.
Users of the app provide their email adresses.
In some cases, we want to send transactional email from the webapp, using the current user as a sender.
Does using the User's name and email address in the email `from` header affect email deliverability reputation?
Are there any other (bad) consquences, we should be aware of?
**adding details about the use case:**
- Say PersonA uses our app on myapp.com
- PersonA verified his email address personA@example.com with a confirmation email we sent him (he clicked a unique url in the email he got).
- Using the app, PersonA can invite other people to do something (attend an event for example)
- If PersonA invites PersonB we want to send an email, to let PersonB know that he has been invited. To do so, we would like to send an email from personA@example.com to PersonB.
- Having a sender header with myapp.com is totally fine. But PersonB should see "PersonA <personA@example.com>" as the sender.
We are not going to send hundreds of emails like that. But we would like to create some trust when PersonB sees that his good old friend PersonA invited him, not a stupid "notification@myapp.com" he never heard about.
| You would be opening a whole can of worms if you do not authenticate the email address first.
This would allow users to send emails with any from address. So get each user to authenticate the email address they want to use, i.e. send an email to the address they specify and have them provide information from that email (which should be unique) or click on a unique link.
After an email has been authenticated, you know that they have (or at least had) access to that email account. It is now safer to send emails as that user.
However, this will still cause issues under specific circumstances. If the user's domain has SPF enabled (SPF checks that only certain IPs send emails for that domain), it is likely that emails will be tagged as spam (at least for users with domains that use SPF).
This may increase the overall spam "rating" of your server with specific servers under specific circumstances. It is possible to alleviate this in various ways but that is a fair bit of work.
Unless there is a really good reason to have the emails show up as from a user, it would be better to not do that.
There is an option to use the "Sender: " header which may resolve this issue for you. <https://stackoverflow.com/questions/4367358/whats-the-difference-between-sender-from-and-return-path> provides a good example.
I, however, have no experience with this or its impact on messages or servers being tagged as spam.
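For illustration, the header combination described there would look roughly like this in the raw message (hypothetical addresses):

```
From: Person A <personA@example.com>
Sender: notifications@myapp.com
Reply-To: personA@example.com
To: personB@example.net
```

The `Sender:` line identifies your app as the party that actually submitted the message on PersonA's behalf, while most clients still display the `From:` name.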
|
Guice don't inject to Jersey's resources
I've searched all over the internet, but can't figure out why this happens. I've got the simplest possible project (over the jersey-quickstart-grizzly2 archetype) with one Jersey resource. I'm using Guice as DI because CDI doesn't want to work with Jersey either. The problem is that Guice can't resolve the class to use when injecting into Jersey's resources. It works great outside of Jersey, but not with it.
Here is the Jersey resource:
```
import com.google.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
@Path("api")
public class MyResource {
private Transport transport;
@Inject
public void setTransport(Transport transport) {
this.transport = transport;
}
@GET
@Produces(MediaType.TEXT_PLAIN)
public String getIt() {
return transport.encode("Got it!");
}
}
```
Transport interface:
```
public interface Transport {
String encode(String input);
}
```
And its implementation:
```
public class TransportImpl implements Transport {
@Override
public String encode(String input) {
return "before:".concat(input).concat(":after");
}
}
```
Following Google's Getting Started manual, I've extended `AbstractModule` and bound my classes like this:
```
public class TransportModule extends AbstractModule {
@Override
protected void configure() {
bind(Transport.class).to(TransportImpl.class);
}
}
```
I get the injector in `main()` with this, but don't really need it here:
```
Injector injector = Guice.createInjector(new TransportModule());
```
Btw, there's no problem when I try to do something like this:
```
Transport transport = injector.getInstance(Transport.class);
```
| Jersey 2 already has a DI framework, [HK2](https://hk2.java.net/2.4.0-b07/). You can either use it, or if you want, you can use the HK2/Guice bridge to bride your Guice module with HK2.
If you want to work with HK2, at the most basic level, it's not much different from the Guice module. For example, in your current code, you could do this
```
public class Binder extends AbstractBinder {
@Override
public void configure() {
bind(TransportImpl.class).to(Transport.class);
}
}
```
Then just register the binder with Jersey
```
new ResourceConfig().register(new Binder());
```
One difference is the binding declarations. With Guice, it's "bind contract to implementation", while with HK2, it's "bind implementation to contract". You can see it's reversed from the Guice module.
If you want to bridge Guice and HK2, it's a little more complicated. You need to understand a little more about HK2. Here's an example of how you can get it to work
```
@Priority(1)
public class GuiceFeature implements Feature {
@Override
public boolean configure(FeatureContext context) {
ServiceLocator locator = ServiceLocatorProvider.getServiceLocator(context);
GuiceBridge.getGuiceBridge().initializeGuiceBridge(locator);
Injector injector = Guice.createInjector(new TransportModule());
GuiceIntoHK2Bridge guiceBridge = locator.getService(GuiceIntoHK2Bridge.class);
guiceBridge.bridgeGuiceInjector(injector);
return true;
}
}
```
Then register the feature
```
new ResourceConfig().register(new GuiceFeature());
```
Personally, I would recommend getting familiar with HK2, if you're going to use Jersey.
**See Also:**
- [HK2 Documentation](https://hk2.java.net/2.4.0-b07/)
- [Custom Injection and Lifecycle Management](https://jersey.java.net/documentation/latest/ioc.html)
---
### UPDATE
Sorry, I forgot to add that to use the Guice Bridge, you need this dependency.
```
<dependency>
<groupId>org.glassfish.hk2</groupId>
<artifactId>guice-bridge</artifactId>
<version>2.4.0-b31</version>
</dependency>
```
Note that this is the guice-bridge version that goes with Jersey 2.22.1. If you are using a different Jersey version, make sure to use the same HK2 version that your Jersey version is using.
|
Text based game in Java
To help with learning code in my class, I've been working on this text based game to keep myself coding (almost) every day. I have a class called `BasicUnit`, and in it I have methods to create a custom class. I use 2 methods for this, allowing the user to enter the information for the class. I'm just wondering if I can do this in a more simplified manner?
```
public void buildCustomClass(int maxHP, int maxMP, int maxSP, int baseMeleeDmg, int baseSpellDmg, int baseAC, int baseSpeed) {
this.maxHP = maxHP;
this.maxMP = maxMP;
this.maxSP = maxSP;
this.baseMeleeDmg = baseMeleeDmg;
this.baseSpellDmg = baseSpellDmg;
this.baseAC = baseAC;
this.baseSpeed = baseSpeed;
lvl = 1;
xp = 0;
curHP = maxHP;
curMP = maxMP;
curSP = maxSP;
}
public void createCustomClass() {
kb = new Scanner(System.in);
System.out.println("Enter the information for your class: ");
System.out.println("Enter HP: ");
maxHP = kb.nextInt();
System.out.println("Enter MP: ");
maxMP = kb.nextInt();
System.out.println("Enter SP: ");
maxSP = kb.nextInt();
System.out.println("Enter Base Melee Damage: ");
baseMeleeDmg = kb.nextInt();
System.out.println("Enter Base Spell Damage: ");
baseSpellDmg = kb.nextInt();
System.out.println("Enter AC: ");
baseAC = kb.nextInt();
System.out.println("Enter Speed: ");
baseSpeed = kb.nextInt();
buildCustomClass(maxHP, maxMP, maxSP, baseMeleeDmg, baseSpellDmg, baseAC, baseSpeed);
}
```
| Welcome to Code Review and thanks for sharing your code!
# General issues
## Naming
Finding good names is the hardest part in programming. So always take your time to think carefully of your identifier names.
### Naming Conventions
It looks like you already know the
[Java Naming Conventions](http://www.oracle.com/technetwork/java/codeconventions-135099.html).
### Avoid abbreviations
In your code you use some abbreviations such as `maxSP` and `baseMeleeDmg`.
Although this abbreviation makes sense to you (now), anyone reading your code who is not familiar with the problem (like me) has a hard time finding out what it means.
If you do this to save typing work: remember that you read your code far more often than you type something. Also, for Java you have good IDE support with code completion, so you most likely type a long identifier only once and later on select it from the IDE's code completion proposals.
---
>
> The other identifiers make perfect sense to me. [because of context] – AJD
>
>
>
Context looks like your friend but in fact it is your enemy.
There are two reasons:
1. Context depends on *knowledge* and *experience*. But different persons have different knowledge and experience, so for one it might be easy to remember the context and for another it might be hard. Also, your own knowledge and experience change over time, so you might find it hard to remember the context of any given code snippet you wrote when you come back to it in 3 years or even 3 months.
The point is that in any case you need (more or less) *time* to bring the context back to your brain.
But the only thing that you usually don't have when you do programming as a business is time. So anything that makes you faster at understanding the code is a benefit worth money. And not needing to remember any context is such a time saver.
2. You may argue that we have a very simple problem with an easy and common context. That's true.
But:
- Real life projects usually have higher complexity and contexts that are less easy to remember. The point here is:
At which point is your context so complex that you switch from "acronym naming" to "verbose naming"?
Again this point changes with your knowledge and your experience, which may lead to code that others have a hard time understanding.
The much better way to deal with it is to *always* write your code in a way that the dumbest person you know is able to understand. And this includes not using acronyms in your identifiers that *might* need context to understand.
- This is a *training* project. When you train a physical skill like high jumping you start with a very low bar that you can easily pass even without using the *flop* technique, just to have a safe environment.
Same here: the problem may be simple enough to be understood with the acronymed identifiers, but for the sake of training you should avoid acronyms.
### Avoid misleading naming
Both of your method names are misleading: They claim to *create* and *build* something but in reality neither one is creating or building anything.
One method is doing *user interaction* and the other is *configuring* the object.
The names of the methods should reflect that.
### Add units to identifiers for physical values
Physical values mean nothing without a *unit*. This is a special case of the *context problem* mentioned above. The most famous example is the failure of the two space missions *Mars Climate Orbiter* and *Mars Polar Lander*: the flight control software was built by NASA and worked with *metric* measurements (i.e. `meters`, `meters per second` or `newton seconds`) while the engine (and its driver software) was built by *Lockheed Martin*, which used *imperial* units (i.e. `feet`, `feet per minute` or `pound-force seconds`).
The point is: not having the *units* of physical values in your identifiers forces you to *think* about whether there is a problem or not:
```
double acceleration = flightManagement.calculateAcceleration();
engine.accelerate(acceleration);
```
But usually you don't question it unless you have a reason...
With the units in the identifier names, the problem becomes obvious:
```
double meterPerSquareSeconds =
flightManagement.calculateAccelerationInMeterPerSquareSecond();
engine.accelerateByFeetPerSquareMinute(
meterPerSquareSeconds); // oops
```
And we have the same argument again: your particular code is so easy and so small that we don't need the overhead.
But then: how do you decide at which point the overhead is needed? And again it depends on knowledge and experience, which still differ among people...
# Flawed implementation
Both methods use the *same* member variables.
E.g. your variable `maxSP`: in `createCustomClass()` you assign it the result of `kb.nextInt()`. Then you pass this value as a parameter to `buildCustomClass()`, where you assign the parameter value again to the *same* member variable.
Besides being useless, this may lead to confusing bugs later.
# Keep the same level of abstraction
Methods should either do "primitive" operations or call other methods, not both at the same time.
At the end of your method `createCustomClass()` you call the other method (`buildCustomClass()`). The better way is to extract the code before the call to `buildCustomClass()` into a separate (private) method:
```
public void createCustomClass() {
acquireDataFromUser();
buildCustomClass(
maxHP,
maxMP,
maxSP,
baseMeleeDmg,
baseSpellDmg,
baseAC,
baseSpeed);
}
private void acquireDataFromUser() {
kb = new Scanner(System.in);
System.out.println("Enter the information for your class: ");
System.out.println("Enter HP: ");
maxHP = kb.nextInt();
System.out.println("Enter MP: ");
maxMP = kb.nextInt();
System.out.println("Enter SP: ");
maxSP = kb.nextInt();
System.out.println("Enter Base Melee Damage: ");
baseMeleeDmg = kb.nextInt();
System.out.println("Enter Base Spell Damage: ");
baseSpellDmg = kb.nextInt();
System.out.println("Enter AC: ");
baseAC = kb.nextInt();
System.out.println("Enter Speed: ");
baseSpeed = kb.nextInt();
}
```
Besides making `createCustomClass()` shorter, it makes the useless reassignment of the member variables obvious.
|
Disable Python requests SSL validation for an imported module
I'm running a Python script that uses the `requests` package for making web requests. However, the web requests go through a proxy with a self-signed cert. As such, requests raise the following Exception:
`requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)`
I know that SSL validation can be disabled in my own code by passing `verify=False`, e.g.: `requests.get("https://www.google.com", verify=False)`. I also know that if I had the certificate bundle, I could set the `REQUESTS_CA_BUNDLE` or `CURL_CA_BUNDLE` environment variables to point to those files. However, I do not have the certificate bundle available.
How can I disable SSL validation for external modules without editing their code?
| **Note**: This solution is a complete hack.
**Short answer**: Set the `CURL_CA_BUNDLE` environment variable to an empty string.
Before:
```
$ python
import requests
requests.get('http://www.google.com')
<Response [200]>
requests.get('https://www.google.com')
...
File "/usr/local/lib/python2.7/site-packages/requests-2.17.3-py2.7.egg/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
```
After:
```
$ CURL_CA_BUNDLE="" python
import requests
requests.get('http://www.google.com')
<Response [200]>
requests.get('https://www.google.com')
/usr/local/lib/python2.7/site-packages/urllib3-1.21.1-py2.7.egg/urllib3/connectionpool.py:852: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)
<Response [200]>
```
**How it works**
This solution works because Python `requests` overwrites the default value for `verify` from the environment variables `CURL_CA_BUNDLE` and `REQUESTS_CA_BUNDLE`, as can be seen [here](https://github.com/psf/requests/blob/8c211a96cdbe9fe320d63d9e1ae15c5c07e179f8/requests/sessions.py#L718):
```
if verify is True or verify is None:
verify = (os.environ.get('REQUESTS_CA_BUNDLE') or
os.environ.get('CURL_CA_BUNDLE'))
```
The environment variables are meant to specify the path to the certificate file or CA\_BUNDLE and are copied into `verify`. However, by setting `CURL_CA_BUNDLE` to an empty string, the empty string is copied into `verify` and in Python, an empty string evaluates to `False`.
Note that this hack only works with the `CURL_CA_BUNDLE` environment variable - it does not work with the `REQUESTS_CA_BUNDLE`. This is because `verify` [is set with the following statement](https://github.com/psf/requests/blob/8c211a96cdbe9fe320d63d9e1ae15c5c07e179f8/requests/sessions.py#L718):
`verify = (os.environ.get('REQUESTS_CA_BUNDLE') or os.environ.get('CURL_CA_BUNDLE'))`
It only works with `CURL_CA_BUNDLE` because `'' or None` is not the same as `None or ''`, as can be seen below:
```
print repr(None or "")
# Prints: ''
print repr("" or None )
# Prints: None
```
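If you cannot control how the interpreter is launched, the same hack can be applied from inside your own script, as long as it happens before the external module issues its requests (the module name below is a placeholder):

```
import os

# Empty string -> requests copies it into `verify`, which evaluates to False.
# Must run before the first HTTPS request, not necessarily before the import.
os.environ['CURL_CA_BUNDLE'] = ""

import some_external_module  # the third-party code you cannot edit
```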
|
Preserve default arguments of wrapped/decorated Python function in Sphinx documentation
How can I replace `*args` and `**kwargs` with the real signature in the documentation of decorated functions?
Let's say I have the following decorator and decorated function:
```
import functools
def mywrapper(func):
@functools.wraps(func)
def new_func(*args, **kwargs):
print('Wrapping Ho!')
return func(*args, **kwargs)
return new_func
@mywrapper
def myfunc(foo=42, bar=43):
"""Obscure Addition
:param foo: bar!
:param bar: bla bla
:return: foo + bar
"""
return foo + bar
```
Accordingly, calling `print(myfunc(3, 4))` gives us:
```
Wrapping Ho!
7
```
So far so good. I also want my library containing `myfunc` properly documented with Sphinx.
However, if I include my function in my sphinx html page via:
```
.. automodule:: mymodule
:members: myfunc
```
It will actually show up as:
## myfunc(\*args, \*\*kwargs)
Obscure Addition
- **Parameters:**
- **foo**: bar!
- **bar**: bla bla
- **Returns:**
foo + bar
How can I get rid of the generic `myfunc(*args, **kwargs)` in the title? This should be replaced by **myfunc(foo=42, bar=43)**. How can I change sphinx or my decorator `mywrapper` such that the default keyword arguments are preserved in the documentation?
**EDIT**:
As pointed out this question has been asked before, but the answers are not so helpful.
However, I had an idea and wonder if this is possible. Does Sphinx set some environment variable that tells my module that it is actually imported by Sphinx? If so, I could simply monkey-patch my own wrappers. If my module is imported by Sphinx my wrappers return the original functions instead of wrapping them. Thus, the signature is preserved.
| I came up with a monkey-patch for `functools.wraps`.
Accordingly, I simply added this to the `conf.py` script in my project documentation's sphinx `source` folder:
```
# Monkey-patch functools.wraps
import functools
def no_op_wraps(func):
"""Replaces functools.wraps in order to undo wrapping.
Can be used to preserve the decorated function's signature
in the documentation generated by Sphinx.
"""
def wrapper(decorator):
return func
return wrapper
functools.wraps = no_op_wraps
```
Hence, when building the html page via `make html`, `functools.wraps` is replaced with this decorator `no_op_wraps` that does absolutely nothing but simply return the original function.
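A variation on the idea from the question's edit: rather than patching `functools` globally, `conf.py` can set a marker that your own decorator checks (the variable name here is made up):

```
# In conf.py, before autodoc imports your modules:
import os
os.environ['SPHINX_BUILD'] = '1'

# In your library, skip the wrapping while the docs are being built:
import functools
import os

def mywrapper(func):
    if os.environ.get('SPHINX_BUILD'):
        return func  # keep the original signature for autodoc
    @functools.wraps(func)
    def new_func(*args, **kwargs):
        print('Wrapping Ho!')
        return func(*args, **kwargs)
    return new_func
```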
|
difference between "address in use" with bind() in Windows and on Linux - errno=98
I have a small TCP server that listens on a port. While debugging it's common for me to CTRL-C the server in order to kill the process.
On Windows I'm able to restart the service quickly and the socket can be rebound. On Linux I have to wait a few minutes before bind() returns with success.
When bind() fails, it returns errno=98, address in use.
I'd like to better understand the differences in implementations. Windows sure is more friendly to the developer, but I kind of doubt Linux is doing the 'wrong thing'.
My best guess is that Linux is waiting until all possible clients have detected that the old socket is broken before allowing new sockets to be created. The only way it could do this is to wait for them to time out.
Is there a way to change this behavior during development in Linux? I'm hoping to duplicate the way Windows does this.
| You want to use the `SO_REUSEADDR` option on the socket on Linux. The relevant manpage is [`socket(7)`](http://linux.die.net/man/7/socket). Here's an [example](http://beej.us/guide/bgnet/output/html/multipage/setsockoptman.html) of its usage. [This question](https://stackoverflow.com/questions/775638/using-so-reuseaddr-what-happens-to-previously-open-socket) explains what happens.
[Here's](https://stackoverflow.com/questions/3855890/closing-a-listening-tcp-socket-in-c/3855943#3855943) a duplicate of this answer.
On Linux, `SO_REUSEADDR` allows you to bind to an address unless an active connection is present. On Windows this is the default behaviour. On Windows, SO\_REUSEADDR allows you to additionally bind multiple sockets to the same addresses. See [here](http://itamarst.org/writings/win32sockets.html) and [here](http://bugs.python.org/issue2550) for more.
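A minimal sketch of what that looks like in C, with error handling trimmed:

```
#include <stdio.h>
#include <sys/socket.h>

int make_listener(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;

    /* Must be set before bind(); lets you re-bind while old
       connections are still sitting in TIME_WAIT. */
    if (setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) == -1) {
        perror("setsockopt");
    }

    /* ... bind() and listen() as before ... */
    return sock;
}
```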
|
How to implement blurred background for Modal Bottome Sheet in Flutter?
I am working with the Modal Bottom Sheet and want to give it a blurred background, but the type of the parameter *barrierColor* is Color, so I cannot use BackdropFilter().
Does anyone know how to implement a blurred background for a Modal Bottom Sheet?
| Update:
Sorry for my carelessness.
You can set `backgroundColor:Colors.transparent` and `expand:true` and make your own `barrier` in `builder`.
It may look like this:
```
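// Assumes the modal_bottom_sheet package for showMaterialModalBottomSheet
// and `import 'dart:ui';` for ImageFilter.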
showMaterialModalBottomSheet(
context: context,
backgroundColor: Colors.transparent,
expand: true,
builder: (context) => BackdropFilter(
filter: ImageFilter.blur(sigmaX: 20, sigmaY: 20),
child: Column(
mainAxisAlignment: MainAxisAlignment.end,
children: [
Container(
height: 200,
width: MediaQuery.of(context).size.width,
color: Colors.white,
child: Text('Im child'),
)
],
),
),
);
```
|
Can I associate a CODE reference with a HASH reference that contains it in Perl?
I want to create a hash reference with code references mapped to scalars (strings) as its members.
So far I have a map reference that looks something like this:
```
my $object;
$object = {
'code1' => sub {
print $_[0];
},
'code2' => sub {
return 'Hello, World!';
},
'code3' => sub {
$object->{code1}->($object->{code2}->());
}
};
$object->{code3}->();
```
I would like to be able to "bless" the 'code3' reference in $object with $object, so I can do something like:
```
my $object;
$object = {
'code1' => sub {
print $_[0];
},
'code2' => sub {
return 'Hello, World!';
},
'code3' => sub {
$self = shift;
$self->{code1}->($self->{code2}->());
}
};
$object->{code3}->();
```
However, bless only works with packages, rather than hash tables.
Is there a way to do this in Perl 5 version 22?
Note: now that I think of it, it's better to pass $object to the method explicitly, as it solves JavaScript's ["this"](http://www.i-programmer.info/programmer-puzzles/137-javascript/1922-the-this-problem.html) problem. I am just too used to Java's "this", which makes sense in Java where everything is a class and therefore all methods have a "this", but in scripting it really helps to know whether the "this" is actually passed or whether it is just called as a function (and you end up accidentally polluting the global scope or triggering strict warnings). Passing $self explicitly makes it clear that you are not calling it as a function, but as a method.
| You are doing sub calls (not method calls), so you simply forgot to pass `$self` as a parameter.
```
my $object = {
code1 => sub {
print $_[0];
},
code2 => sub {
return 'Hello, World!';
},
code3 => sub {
my $self = shift;
$self->{code1}->( $self, $self->{code2}->($self) );
}
};
$object->{code3}->($object);
```
---
But I think you're trying to create JavaScript-like objects. You can start with the following:
```
package PrototypeObject;
sub new {
my $class = shift;
my $self = bless({}, $class);
%$self = @_;
return $self;
}
sub AUTOLOAD {
my $self = shift;
( my $method = our $AUTOLOAD ) =~ s/^.*:://s;
return $self->{$method}->($self, @_);
}
1;
```
```
use PrototypeObject qw( );
my $object = PrototypeObject->new(
code1 => sub {
print $_[1];
},
code2 => sub {
return 'Hello, World!';
},
code3 => sub {
my $self = shift;
$self->code1( $self->code2() );
}
);
$object->code3();
```
Note that this will slow down your method calls as it must call AUTOLOAD before calling your method. This could be addressed by overloading the method call operator.
Check on CPAN. Someone might already have a more complete implementation.
|
TinyMCE editor with React Cannot access local files
I'm using the TinyMCE editor plugin with React JS. I'm trying to upload files from my local machine to the editor and then to S3. I can drag and drop photos into the editor; however, when I click the insert photo button I cannot gain access to my file system. Any suggestions?
```
class Editor extends React.Component{
handleEditorChange = (e) => {
console.log('e',e);
console.log('Content was updated:', e.target.getContent());
}
render(){
return(
<TinyMCE
content="<p>This is the initial content of the editor</p>"
config={{
height:600,
paste_data_images: true,
plugins: [
'advlist autolink lists link image charmap print preview anchor',
'searchreplace wordcount visualblocks code fullscreen',
'insertdatetime media table contextmenu paste code'
],
toolbar: 'insertfile undo redo | styleselect | bold italic | alignleft aligncenter alignright alignjustify | bullist numlist outdent indent | link image', file_picker_types: 'file image media',
paste_data_images:true,
file_browser_callback_types: 'image',
images_upload_handler: function (blobInfo, success, failure) {
console.log('blobInfo',blobInfo);
},
selector: 'textarea', // change this value according to your HTML
file_picker_callback: function(callback, value, meta) {
if (meta.filetype == 'file') {
//callback('mypage.html', {text: 'My text'});
}
if (meta.filetype == 'image') {
}
if (meta.filetype == 'media') {
//callback('movie.mp4', {source2: 'alt.ogg', poster: 'image.jpg'});
}
}
}}
onChange={this.handleEditorChange}
/>
)
}
}
export default Editor
```
| I wrote a hack as a workaround: put an input in the HTML and then grab it with an onclick handler.
```
import React from 'react';
import TinyMCE from 'react-tinymce';
class Editor extends React.Component{
handleEditorChange = (e) => {
console.log('e',e);
console.log('Content was updated:', e.target.getContent());
}
render(){
return(
<div>
<input id="my-file" type="file" name="my-file" style={{display:"none"}} onChange="" />
<TinyMCE
content="<p>This is the initial content of the editor</p>"
config={{
// selector: '.post-article #' + editorId,
height: 400,
plugins: [
'advlist autolink lists link image charmap print preview anchor',
'searchreplace wordcount visualblocks code fullscreen',
'insertdatetime media table contextmenu paste code'
],
toolbar: 'insertfile undo redo | styleselect | bold italic | alignleft aligncenter alignright alignjustify | bullist numlist outdent indent | link image',
content_css: '//www.tinymce.com/css/codepen.min.css',
file_browser_callback_types: 'image',
file_picker_callback: function (callback, value, meta) {
if (meta.filetype == 'image') {
var input = document.getElementById('my-file');
input.click();
input.onchange = function () {
var file = input.files[0];
var reader = new FileReader();
reader.onload = function (e) {
console.log('name',e.target.result);
callback(e.target.result, {
alt: file.name
});
};
reader.readAsDataURL(file);
};
}
},
paste_data_images: true,
}}
onChange={this.handleEditorChange}
/>
</div>
)
}
}
export default Editor
```
|
Android SDK having trouble with ADB
So, I installed the Android SDK, Eclipse, and the ADT. Upon firing up Eclipse the first time after setting up the ADT, this error popped up:
```
[2012-05-29 12:11:06 - adb] /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] 'adb version' failed!
/home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] Failed to parse the output of 'adb version':
Standard Output was:
Error Output was:
/home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] 'adb version' failed!
/home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] Failed to parse the output of 'adb version':
Standard Output was:
Error Output was:
/home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
```
I'm not quite sure why this is. It feels weird that there's a missing library there. I'm using Ubuntu 12.04. No adb is a pretty big blow for an Android developer. How do I fix this?
| Android SDK platform tools requires `ia32-libs`, which itself is a big package of libraries:
```
sudo apt-get install ia32-libs
```
---
**UPDATE:**
Below are the [latest instructions from Google](https://developer.android.com/sdk/installing/index.html?pkg=tools) on how to install Android SDK library dependencies:
>
> If you are running a 64-bit distribution on your development machine, you need to install additional packages first. For Ubuntu 13.10 (Saucy Salamander) and above, install the `libncurses5:i386`, `libstdc++6:i386`, and `zlib1g:i386` packages using `apt-get`:
>
>
>
> ```
> sudo dpkg --add-architecture i386
> sudo apt-get update
> sudo apt-get install libncurses5:i386 libstdc++6:i386 zlib1g:i386
>
> ```
>
> For earlier versions of Ubuntu, install the `ia32-libs` package using `apt-get`:
>
>
>
> ```
> apt-get install ia32-libs
>
> ```
>
>
|
Tail Recursion in Dataweave
Is there a way to take a recursive function (like the following) and make it tail recursive? I have an input like this:
```
{
"message": "Test ",
"read": [
{
"test": " t "
}
]
}
```
and this Dataweave function
```
fun trimWS(item) = item match {
case is Array -> $ map trimWS($)
case is Object -> $ mapObject {
($$): $ match {
case is String -> trim($)
case is Object -> trimWS($)
case is Array -> $ map trimWS($)
else -> $
}
}
case is String -> trim($)
else -> $
}
```
| I reworked your existing function a little to simplify it, and I also ran a few tests under Mule 4.2.1.
By building a data structure over 840 levels deep, I was able to navigate and trim the fields. My guess is that because of the structure of the data and lazy evaluation I am able to get past a depth of 256, which is the default value at which DW 2.0 throws a StackOverflow.
You can also increase the default value by passing a runtime parameter; its name is `com.mulesoft.dw.stacksize` (e.g. `com.mulesoft.dw.stacksize=500`), or any other number provided your system can handle it.
As I said, creating a tail-recursive version is not easy; it will complicate the code and it will be far less maintainable than the existing version.
I hope it helps even if I am not directly answering your question.
```
%dw 2.0
output application/json
var ds = {
"message": "Test ",
"read": [
{
"test": " t "
}
]
}
var deepData = (0 to 840) as Array reduce (e, acc=ds) -> {value: " TO_TRIM ",next: acc}
fun trimWS(item) = item match {
case is Array -> $ map trimWS($)
case is Object -> $ mapObject {($$): trimWS($)}
case is String -> trim($)
else -> $
}
---
trimWS(deepData)
```
|
Redirecting from getInitialProps in \_error.js in nextjs?
Any way to redirect to another url from getInitialProps in **\_error.js** in nextjs?
Already tried **res.redirect('/');** inside getInitialProps.
It's giving
*TypeError: res.redirect is not a function*
| Although this redirect from `_error.js` doesn't feel right to me, you can try something like the below:
```
import Router from 'next/router'
// in your getInitialProps
if (res) { // server
res.writeHead(302, {
Location: '/'
});
res.end();
} else { // client
Router.push('/');
}
```
Since `getInitialProps` might be executed on the client when navigating to a different route, you should also consider adding the else case.
Also, I would suggest you to rethink your approach. `_error.js` is used to handle 404 and 500 errors and you shouldn't need a redirect at this level.
---
In case you are importing the [Error component](https://nextjs.org/docs/advanced-features/custom-error-page#reusing-the-built-in-error-page), `getInitialProps` of `_error.js` will not be triggered.
|
deprecation warning when compiling: eta expansion of zero argument method
When compiling this snippet, the scala compiler issues the following warning:
>
> Eta-expansion of zero-argument method values is deprecated. Did you
> intend to write Main.this.porFiles5()? [warn] timerFunc(porFiles5)
>
>
>
It occurs when I pass a function to another one for a simple timing. The timer function takes a parameterless function returning unit, at this line: `timerFunc(porFiles5)`. Is this warning necessary? What would be the idiomatic way to avoid it?
```
package example
import java.nio.file._
import scala.collection.JavaConverters._
import java.time._
import scala.collection.immutable._
object Main extends App {
val dir = FileSystems.getDefault.getPath("C:\\tmp\\testExtract")
def timerFunc (func:()=>Unit ) = {
val start = System.currentTimeMillis()
timeNow()
func()
val finish = System.currentTimeMillis()
timeNow()
println((finish - start) / 1000.0 + " secs.")
println("==================")
}
def porFiles5(): Unit = {
val porFiles5 = Files.walk(dir).count()
println(s"You have $porFiles5 por5 files.")
}
def timeNow(): Unit = {
println(LocalTime.now)
}
timeNow()
timerFunc(porFiles5)
timeNow()
}
```
| `porFiles5` is *not* a function. It is a *method*, which is something completely different in Scala.
If you have a method, but you need a function, you can use η-expansion to lift the method into a function, like this:
```
someList.foreach(println _)
```
Scala will, in some cases, also perform η-expansion automatically, if it is absolutely clear from context what you mean, e.g.:
```
someList.foreach(println)
```
However, there is an ambiguity for parameterless methods, because Scala allows you to call parameterless methods without an argument list, i.e. a method defined with an *empty* parameter list can be called without any argument list at all:
```
def foo() = ???
foo // normally, you would have to say foo()
```
Now, in your case, there is an ambiguity: do you mean to *call* `porFiles5` or do you mean to η-expand it? At the moment, Scala arbitrarily disambiguates this situation and performs η-expansion, but in future versions, this will be an error, and you will have to explicitly perform η-expansion.
So, to get rid of the warning, simply use explicit η-expansion instead of implicit η-expansion:
```
timerFunc(porFiles5 _)
```
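Alternatively, you can sidestep the expansion question entirely by passing a function literal that calls the method:

```
timerFunc(() => porFiles5())
```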
|
Primefaces dataExporter to xls Float number becomes text in spreadsheet cell
Environment:
- jsf 2.2
- primefaces 6.1
- WildFly 10
I'm trying to export a dataTable to an Excel file with dataExporter from PrimeFaces. First, here is what I have:
```
<p:commandButton id="btnExpExcel"
alt="#{msgs.inv_exportinvoices}"
ajax="false">
<p:dataExporter type="xls" target="lstFactures"
fileName="invoices"/>
</p:commandButton>
<p:dataTable id="lstFactures" var="inv"
...
```
**Option 1**: I get, e.g., 83.2 in the xls, but we use "," as the decimal separator instead of ".".
```
...
<p:column headerText="#{msgs.total}">
<h:outputText value="#{inv.total}">
<f:convertNumber locale="#{localeBean.locale}"/>
</h:outputText>
</p:column>
...
```
**Option 2**: I get, e.g., 83,2 in the xls, but Excel treats it as text instead of a number.
```
...
<p:column headerText="#{msgs.total}">
<h:outputText value="#{inv.total}" />
</p:column>
...
```
**Option 3**: with a post-processor method:
```
public void postProcessXLS(Object document) {
    HSSFWorkbook wb = (HSSFWorkbook) document;
    HSSFSheet sheet = wb.getSheetAt(0);
    HSSFRow header;

    HSSFCellStyle cellStyle = wb.createCellStyle();
    cellStyle.setFillForegroundColor(HSSFColor.GREEN.index);
    cellStyle.setFillPattern(HSSFCellStyle.SOLID_FOREGROUND);

    int ind = 0;
    for (int row = 0; row < invoices.size() + 1; row++) {
        header = sheet.getRow(row);
        for (int col = 0; col < header.getPhysicalNumberOfCells(); col++) {
            ...
            if (col == 5) {
                HSSFCell cell = header.getCell(col);
                // Total is a float
                cell.setCellValue(invoices.get(ind).getTotal());
                ind++;
            }
        }
    }
}
```
I also tried `exportFunction="#{inv.total}"` but I got some kind of error: `exportFunction="#{inv.total}": Method not found...`
What I'm getting in xls is the following
[![enter image description here](https://i.stack.imgur.com/prUX4.png)](https://i.stack.imgur.com/prUX4.png)
| All the fields in p:dataTable are exported as text.
If you want to convert a value to a different format, you have to implement a postProcessor method.
Example:
page.xhtml
```
<p:dataExporter type="xls" target="lstFactures" fileName="invoices" postProcessor="#{bean.ppMethod}" />
```
Class Bean
```
public void ppMethod(Object document) {
Workbook workbook = (Workbook) document;
...
CellStyle totalCellStyle = workbook.createCellStyle();
totalCellStyle.setDataFormat(workbook.createDataFormat().getFormat("#,##0.00"));
Cell currentCell = workbook.getSheetAt(0).getRow(0).getCell(0);
currentCell.setCellValue(Double.parseDouble(currentCell.getStringCellValue()));
currentCell.setCellStyle(totalCellStyle);
...
}
```
|
Foreign language characters in Regular expression in C#
In C# code, I am trying to pass Chinese characters: `" 中文ABC123"`.
When I use the general alphanumeric pattern `"^[a-zA-Z0-9\s]+$"`,
it doesn't pass for `"中文ABC123"` and regex validation fails.
What other expressions do I need to add for C#?
| To match any letter character from any language use:
```
\p{L}
```
If you also want to match numbers:
```
[\p{L}\p{Nd}]+
```
`\p{L}` ... matches a character of the unicode category letter.
It is the short form for `[\p{Ll}\p{Lu}\p{Lt}\p{Lm}\p{Lo}]`.
`\p{Ll}` ... matches lowercase letters. (abc)
`\p{Lu}` ... matches uppercase letters. (ABC)
`\p{Lt}` ... matches titlecase letters.
`\p{Lm}` ... matches modifier letters.
`\p{Lo}` ... matches letters without case. (中文)
`\p{Nd}` ... matches a character of the unicode category decimal digit.
Just replace: `^[a-zA-Z0-9\s]+$` with `^[\p{L}0-9\s]+$`
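A quick sketch of that in C#:

```
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        // \p{L} matches letters from any script, so the Chinese characters pass too.
        var pattern = @"^[\p{L}0-9\s]+$";
        Console.WriteLine(Regex.IsMatch("中文ABC123", pattern));  // True
        Console.WriteLine(Regex.IsMatch("中文ABC123!", pattern)); // False
    }
}
```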
|
How to define empty character array in matlab?
```
for i=1:POPULATION_SIZE
for j=1:NO_PARAMETERS
c=dec2bin(parameters(j),NO_BITS_PARAMETER);
chromosomes(i) = [chromosomes(i) c];
end
end
```
above code is giving the following error:
>
> ??? Undefined function or method 'chromosomes' for input arguments of type 'double'.
>
>
>
I need an empty array of characters named `chromosomes`.
I tried adding following line before the above loops.
```
chromosomes(1:POPULATION_SIZE)='';
```
but it's not working. It gives the error
`??? Index of element to remove exceeds matrix dimensions.`
| Do you want chromosomes to be a character array (where all rows have the same size) or a cell array (with variable sizes for the ith elements)?
In the first case you define the variable as:
```
chromosomes = char(zeros(POPULATION_SIZE,NO_PARAMETERS*NO_BITS_PARAMETER));
```
or
```
chromosomes = repmat(' ',POPULATION_SIZE,NO_PARAMETERS*NO_BITS_PARAMETER);
```
Then in for loop:
```
chromosomes(i,(j-1)*NO_BITS_PARAMETER+1:j*NO_BITS_PARAMETER) = c;
```
In the case of cell array:
```
chromosomes = cell(POPULATION_SIZE, NO_PARAMETERS); % each paramater in separate cell
for i=1:POPULATION_SIZE
for j=1:NO_PARAMETERS
c=dec2bin(parameters(j),NO_BITS_PARAMETER);
chromosomes{i,j} = c;
end
end
```
or
```
chromosomes = cell(POPULATION_SIZE,1); % all parameters in a single cell per i
for i=1:POPULATION_SIZE
for j=1:NO_PARAMETERS
c=dec2bin(parameters(j),NO_BITS_PARAMETER);
chromosomes{i} = [chromosomes{i} c];
end
end
```
**EDIT**:
Actually, you can apply DEC2BIN to the whole array of numbers at once. It also looks like the variable `parameters` is the same for every ith row. Then you can do:
```
c = dec2bin(parameters,NO_BITS_PARAMETER);
chromosomes = reshape(c',1,[]);
chromosomes = repmat(chromosomes,POPULATION_SIZE,1);
```
|
MariaDB Cluster vs Percona Cluster for MySQL
What are the advantages and disadvantages between the two? I've only been able to find information on these two implementations without any specifics on clusters.
I'm currently implementing a Percona Cluster, but my only concern currently is with MyISAM databases for replication. I run several WordPress databases in InnoDB on these servers, but when I need to migrate databases from other systems, they are sometimes fully or partially MyISAM, which has caused some problems with my setup lately.
Is moving from a Percona Cluster to a MariaDB Cluster a better choice?
| Both platforms use the same mechanism for replication: [Galera](http://codership.com/content/using-galera-cluster). On the page at that link, you'll notice there are images featuring both PXC and MariaDB Cluster.
Galera library provides *transactional* replication. MyISAM doesn't do transactions, so the problems you may be having now are very likely related and would not be any different on the alternate platform.
>
> Currently replication works only with InnoDB storage engine. Any writes to tables of other types, including system (mysql.\*) tables, are not replicated. However, DDL statements are replicated in statement level, and changes to mysql.\* tables will get replicated that way. So, you can safely issue: CREATE USER..., but issuing: INSERT INTO mysql.user..., will not be replicated.
>
>
> — <http://www.percona.com/doc/percona-xtradb-cluster/limitation.html>
>
>
> Currently MariaDB Galera Cluster only supports the InnoDB/XtraDB storage engine.
>
>
> — <https://mariadb.com/kb/en/getting-started-with-mariadb-galera-cluster/>
>
>
>
And, of course, PXC uses XtraDB, Percona's compatible replacement for InnoDB (it has "XtraDB" right in the name), and [MariaDB also uses Percona's XtraDB](https://mariadb.com/kb/en/about-xtradb/) instead of Oracle's InnoDB, although, for compatibility on both systems, the storage engine still calls itself InnoDB.
Since the two systems share a significant amount of code and are all intended to be essentially drop-in replacements for one another, the decision of which platform to use is largely a matter of opinion. I personally prefer vendor "x", which might mean I'd recommend you use MariaDB and also might mean I'd recommend sticking with PXC, but I need not actually tell you my preference, since it doesn't matter -- it's based largely on opinions and impressions and documentation and personalities and not on any kind of valuable evidence.
Your best solution for migrating MyISAM is probably going to be to modify the dumpfiles to `ENGINE=InnoDB` (and any other changes that necessitates) or staging them on a standalone server, then converting and exporting them as fully-InnoDB before trying to import them to your cluster.
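For the dump-file route, a rough sketch of that substitution (it is naive and can also hit matching text inside your data, so review the result; on a staging server you could instead run `ALTER TABLE some_table ENGINE=InnoDB;` per table):

```
# rewrite the storage engine in the dump before importing it into the cluster
sed -i 's/ENGINE=MyISAM/ENGINE=InnoDB/g' dump.sql
```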
If the MyISAM issue is your only concern, or even just your primary concern, then MariaDB Cluster and PXC are essentially identical in that regard -- they don't support it, for a very sensible reason -- it's not feasible to do so.
|
Nginx Site Config Templates and Variables
Hi, I am looking to set up a simple nginx config. I read you can set variables using `set $variable content;` but so far I've had no luck...
Below is what I have come up with so far/what I am trying to achieve:
```
server {
##################################################
# Set variables
set $port 80; # e.g. 80 or 443;
set $domain domain.com *.domain.com; # e.g. www.domain.com or just domain.com
set $subdomain false; # e.g. subdomain in subdomain.domain.com or false
set type live; # live, dev or stage
##################################################
# Include /web/sites configuration template
include /etc/nginx/site-config-template;
}
```
Here are the contents of /etc/nginx/site-config-template:
```
##################################################
# If $subdomain is not false at slash
if ( $subdomain ) {
set $subdomain "{$subdomain}/";
}
##################################################
# Define server main variables
listen $port;
server_name $domain;
root /web/sites/$domain/$subdomain$type/public_html/current/;
index index.php index.html;
##################################################
# Logs
access_log /web/sites/$domain/$subdomain$type/logs/access.log;
error_log /web/sites/$domain/$subdomain$type/logs/error.log;
##################################################
# push all to index.php
location / {
try_files $uri $uri/ /index.php$args;
}
##################################################
# pass php files to fastcgi
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_param SITE_TYPE $type;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
}
```
Am I not able to set variables in this manner in config files or am I just doing something horribly wrong?
| The [nginx FAQ](http://nginx.org/en/docs/faq/variables_in_config.html) is pretty clear on this topic:
>
> Q: Is there a proper way to use nginx variables to make sections of the configuration shorter, using them as macros for making parts of configuration work as templates?
>
>
> A: **Variables should not be used as template macros.** Variables are evaluated in the run-time during the processing of each request, so they are rather costly compared to plain static configuration. Using variables to store static strings is also a bad idea. Instead, a macro expansion and "include" directives should be used to generate configs more easily and it can be done with the external tools, e.g. sed + make or any other common template mechanism.
>
>
>
Building a configuration using external tools is the way to go. I've used [`m4`](https://www.gnu.org/software/m4/m4.html) + `make`. I use a custom Python script in one of my projects. The choices are plenty.
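As a rough sketch of that approach (placeholder and file names are made up), keep a template with markers and substitute them when generating the real config:

```
# site.template -- __PORT__ and __DOMAIN__ are plain text placeholders
server {
    listen __PORT__;
    server_name __DOMAIN__;
    root /web/sites/__DOMAIN__/live/public_html/current/;
}
```

Then something like `sed 's/__PORT__/80/g; s/__DOMAIN__/example.com/g' site.template > /etc/nginx/sites-available/example.com` produces the final file, and a Makefile target can re-run it whenever the template or the values change.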
|
C++ pure virtual function have body
Pure virtual functions (when we set `= 0`) can also have a function body.
What is the use of providing a function body for pure virtual functions if they are not going to be called at all?
| Your assumption that a pure virtual function cannot be called is absolutely incorrect. When a function is declared pure virtual, it simply means that this function cannot get called *dynamically*, through a virtual dispatch mechanism. Yet, this very same function can easily be called *statically*, *non-virtually*, *directly* (without virtual dispatch).
In C++ language a non-virtual call to a virtual function is performed when a qualified name of the function is used in the call, i.e. when the function name specified in the call has the `<class name>::<function name>` form.
For example
```
struct S
{
virtual void foo() = 0;
};
void S::foo()
{
// body for pure virtual function `S::foo`
}
struct D : S
{
void foo()
{
S::foo();
// Non-virtual call to `S::foo` from derived class
this->S::foo();
// Alternative syntax to perform the same non-virtual call
// to `S::foo` from derived class
}
};
int main()
{
D d;
d.S::foo();
// Another non-virtual call to `S::foo`
}
```
|
Erase all characters in string between the first parenthesis "(" and the last parenthesis "(" including these parentheses C++
I am having trouble removing all the characters between the first parenthesis "(" and the last parenthesis "(", including them. Here is the test program I use to make it work, but without success...
```
#include <iostream>
#include <string>
using namespace std;
int main()
{
string str = "( 1221 ( 0 0 0 ) (1224.478541112155452 (1.32544e-7 0 0 ) ) )";
int count = 0;
for (std::string::size_type i = 0; i < str.size(); ++i)
{
if (str[i] == '(')
{
count += 1;
}
cout << "str[i]: " << str[i] << endl;
if (count <= 4)
{
str.erase(0, 1);
//str.replace(0, 1, "");
}
cout << "String: " << str << endl;
if (count == 4)
{
break;
}
cout << "Counter: " << count << endl;
}
cout << "Final string: " << str << endl;
system("PAUSE");
return 0;
}
```
In the example I have shown above, my target is (at least) to get the string:
```
"1.32544e-7 0 0 ) ) )"
```
which is extracted from original string
```
"( 1221 ( 0 0 0 ) (1224.478541112155452 (1.32544e-7 0 0 ) ) )"
```
To be more precise, I want to extract the value
```
"1.32544e-7"
```
and convert to double in order to use in calculations.
I have managed successfully to remove the
```
" 0 0 ) ) )"
```
since it is a kind of constant value.
Thank you!
| Rephrasing the problem as "I want to extract the double immediately following the last '('", a C++ translation is pretty straightforward:
```
int main()
{
string str = "( 1221 ( 0 0 0 ) (1224.478541112155452 (1.32544e-7 0 0 ) ) )";
// Locate the last '('.
string::size_type pos = str.find_last_of("(");
// Get everything that follows the last '(' into a stream.
istringstream stream(str.substr(pos + 1));
// Extract a double from the stream.
double d = 0;
stream >> d;
// Done.
cout << "The number is " << d << endl;
}
```
(Format validation and other bookkeeping left out for clarity.)
|
What is the right way to typecheck dependent lambda abstraction using 'bound'?
I am implementing a simple dependently-typed language, similar to the one [described by Lennart Augustsson](http://augustss.blogspot.dk/2007/10/simpler-easier-in-recent-paper-simply.html), while also using [bound](https://hackage.haskell.org/package/bound) to manage bindings.
When typechecking a dependent lambda term, such as `λt:* . λx:t . x`, I need to:
1. "Enter" the outer lambda binder, by instantiating `t` to *something*
2. Typecheck `λx:t . x`, yielding `∀x:t . t`
3. Pi-abstract the `t`, yielding `∀t:* . ∀x:t . t`
If lambda was non-dependent, I could get away with instantiating `t` with its *type* on step 1, since the type is all I need to know about the variable while typechecking on step 2.
But on step 3 I lack the information to decide which variables to abstract over.
I could introduce a fresh name supply and instantiate `t` with a `Bound.Name.Name` containing both the type and a unique name. But I thought that with `bound` I shouldn't need to generate fresh names.
Is there an alternative solution I'm missing?
| We need some kind of context to keep track of the lambda arguments. However, we don't necessarily need to instantiate them, since `bound` gives us de Bruijn indices, and we can use those indices to index into the context.
Actually using the indices is a bit involved, though, because of the type-level machinery that reflects the size of the current scope (or in other words, the current depth in the expression) through the nesting of `Var`-s. It necessitates the use of polymorphic recursion or GADTs. It also prevents us from storing the context in a State monad (because the size and thus the type of the context changes as we recurse). I wonder though if we could use an indexed state monad; it'd be a fun experiment. But I digress.
The simplest solution is to represent the context as a function:
```
type TC a = Either String a -- our checker monad
type Cxt a = a -> TC (Type a) -- the context
```
The `a` input is essentially a de Bruijn index, and we look up a type by applying the function to the index. We can define the empty context the following way:
```
emptyCxt :: Cxt a
emptyCxt = const $ Left "variable not in scope"
```
And we can extend the context:
```
consCxt :: Type a -> Cxt a -> Cxt (Var () a)
consCxt ty cxt (B ()) = pure (F <$> ty)
consCxt ty cxt (F a) = (F <$>) <$> cxt a
```
The size of the context is encoded in the `Var` nesting. The increase in the size is apparent here in the return type.
Now we can write the type checker. The main point here is that we use `fromScope` and `toScope` to get under binders, and we carry along an appropriately extended `Cxt` (whose type lines up just perfectly).
```
data Term a
= Var a
| Star -- or alternatively, "Type", or "*"
| Lam (Type a) (Scope () Term a)
| Pi (Type a) (Scope () Term a)
| App (Type a) (Term a)
deriving (Show, Eq, Functor)
-- boilerplate omitted (Monad, Applicative, Eq1, Show1 instances)
-- reduce to normal form
rnf :: Term a -> Term a
rnf = ...
-- Note: IIRC "Simply easy" and Augustsson's post reduces to whnf
-- when type checking. I use here plain normal form, because it
-- simplifies the presentation a bit and it also works fine.
-- We rely on Bound's alpha equality here, and also on the fact
-- that we keep types in normal form, so there's no need for
-- additional reduction.
check :: Eq a => Cxt a -> Type a -> Term a -> TC ()
check cxt want t = do
have <- infer cxt t
when (want /= have) $ Left "type mismatch"
infer :: Eq a => Cxt a -> Term a -> TC (Type a)
infer cxt = \case
Var a -> cxt a
Star -> pure Star -- "Type : Type" system for simplicity
Lam ty t -> do
check cxt Star ty
let ty' = rnf ty
Pi ty' . toScope <$> infer (consCxt ty' cxt) (fromScope t)
Pi ty t -> do
check cxt Star ty
check (consCxt (rnf ty) cxt) Star (fromScope t)
pure Star
App f x ->
infer cxt f >>= \case
Pi ty t -> do
check cxt ty x
pure $ rnf (instantiate1 x t)
_ -> Left "can't apply non-function"
```
Here's [the working code containing](https://gist.github.com/AndrasKovacs/2b0fce538ca5e91b85a3) the above definitions. I hope I didn't mess it up too badly.
|
Can re ignore a lazy quantifier?
Given this code (Python 3.6):
```
>>> import re
>>> a = re.search(r'\(.+?\)$', '(canary) (wharf)')
>>> a
<_sre.SRE_Match object; span=(0, 16), match='(canary) (wharf)'>
>>>
```
Why doesn't re stop searching at the first parethesis closure?
The expected output is `None`. The search should detect that **there is not an end of line** after `(canary)`, but it doesn't.
Edit: **If there is only ONE word between parens, it should match; if there is more than one, it shouldn't match at all.**
Any help would be hugely appreciated.
| The lazy flag isn't being ignored.
You get a match on the entire string because `.+?` means match *anything* one or more times until you find a match, *expanding as needed*. If the [regex was `\([^)]+?\)$`](https://regex101.com/r/kicH9t/1) it would have matched only the last `(wharf)` because we excluded the `+?` from matching `)`
Or if the regex was `\(.+?\)`, it would have matched the `(canary)` *and* the `(wharf)`, which shows that it's being lazy.
`\(.+?\)$` matches everything because you *make it* match everything until the end of the line.
If you want to ensure that there is only one group in parentheses in the entire string, we can do that with our "no-parentheses-regex" from above and force the start of the string to match the start of your regex.
`^\([^)]+?\)$`
Try it: <https://regex101.com/r/Ts9JeF/1>
Explanation:
- `^\(`: Match a literal `(` at the start of the string
- `[^)]+?`: Match anything but `)`, as many times as needed
- `\)$`: Match a literal `)$` at the end of the line.
Or, if you want to allow other words before and after the one in parentheses, but nothing in parentheses, do this:
`^[^()]*?\([^)]+?\)[^()]*$`
Try it: <https://regex101.com/r/Ts9JeF/3>
Explanation:
- `^[^()]*?`: At the start of the string, match anything but parentheses zero or more times.
- `\([^)]+?\)`: *Very* similar to our previous regex
- `[^()]*$`: Match zero or more non-parentheses characters until the end of the string.
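Translating that back to the question's code, a quick check:

```
>>> import re
>>> re.search(r'^\([^)]+?\)$', '(canary) (wharf)') is None
True
>>> re.search(r'^\([^)]+?\)$', '(canary)')
<_sre.SRE_Match object; span=(0, 8), match='(canary)'>
```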
|
Programmatically setting instance name with the OpenStack Nova API
I have resigned myself to the fact that many of the features that EC2 users are accustomed to (in particular, tagging) do not exist in OpenStack. There is, however, one piece of functionality whose absence is driving me crazy.
Although OpenStack doesn't have full support for instance tags (like EC2 does), it **does** have the notion of an instance name. This name is exposed by the Web UI, which even allows you to set it:
![Instance Name in the instance list](https://i.stack.imgur.com/wcVoW.png)
![Editing the Instance Name](https://i.stack.imgur.com/wwXwX.png)
This name is also exposed through the `nova list` command line utility.
However (and this is my problem) this field is *not* exposed through the `nova-ec2` API layer. The cleanest way for them to integrate this with existing EC2 platform tools would be to simulate an instance Tag with name "Name", but they don't do this. What's more, I can't figure out which Nova API endpoint I can use to read and write the name (it doesn't seem to be documented on the [API reference](http://api.openstack.org/)); but of course it must be somehow possible since the web client and `nova-client` can both somehow do it.
At the moment, I'm forced to change it manually from the website every time I launch a new instance. (I can't do it during instance creation because I use the `nova-ec2` API, not the `nova` command line client).
My question is:
1. Is there a way to read/write the instance name through the EC2 API layer?
2. Failing that, what is the REST endpoint to set it programmatically?
3. **(BONUS)**: What is Nova's progress on supporting general instance tagging?
| The Python `novaclient.v1_1` package has a method on the `server` object:
```
def update(self, server, name=None):
"""
Update the name or the password for a server.
:param server: The :class:`Server` (or its ID) to update.
:param name: Update the server's name.
"""
if name is None:
return
body = {
"server": {
"name": name,
},
}
self._update("/servers/%s" % base.getid(server), body)
```
This indicates that you can update the name of a server by PUT-ting
the following JSON to `http://nova-api:port/v2.0/servers/{server-id}`:
```
{
"server": {
"name": "new_name"
}
}
```
Of course, all of the usual authentication headers (namely `X-Auth-Token`
from your Keystone server) are still required, so it is probably easier to
use a client library for whatever language you prefer to manage all that.
|
Understanding Recursive Algebraic Types in Functional Programming
Hey I'm having some trouble understanding how Recursive Algebraic Types work and how to use them exactly. For example, take the below RAT definition for the natural numbers:
```
data Nat = Zero | Succ Nat
```
We're using a RAT here because the set of values needs to be infinite and I know the principle is to express each new value in terms of a previous one, but I don't understand how this forms the natural numbers. Would someone mind clearing this up? Thanks
| This states that:
- `Nat` is a type.
- `Zero` has type `Nat`. This represents the natural number 0.
- If `n` has type `Nat`, then `Succ n` has type `Nat`. This represents the natural number *n*+1.
So, for example, `Succ (Succ Zero)` represents 2, `Succ (Succ (Succ Zero))` represents 3, `Succ (Succ (Succ (Succ Zero)))` represents 4, and so on. (This system of defining the natural numbers from 0 and successors is called the [Peano axioms](http://en.wikipedia.org/wiki/Peano_axioms).)
In fact, `Zero` and `Succ` are just special kinds of functions *(constructors)* that are declared to create `Nat` values:
```
Zero :: Nat
Succ :: Nat -> Nat
```
The difference from regular functions is that you can take them apart with pattern-matching:
```
predecessor :: Nat -> Nat
predecessor Zero = Zero
predecessor (Succ n) = n
```
Nothing about this is special to recursive algebraic data types, of course, just algebraic data types; but the simple fact that an algebraic data type can have a value of the same type as one of its fields is what creates the recursion here.
|
Rails 3 & jQuery - How the two work together to create a web app?
I need help understanding the end-to-end flow in Rails.
I could use help with the following example. Let's take Facebook.
When you're on Facebook.com and click MESSAGES, the URL changes to (facebook.com/?sk=messages) and then AJAX is used to download the HTML/JS content, which is injected with JavaScript into the content panel... No browser refresh, which is what I'm after.
My specific questions are:
1. For the content that is downloaded via AJAX, is that content coming from a Rails partial?... like (app > views > messages > \_messagestable.html.erb)?
2. Where should the JavaScript reside that knows to fetch the messages content and then inject the content into the content panel? (Is that the application.js?)
3. Once the messages content (\_messagestable.html.erb) is injected into the content panel, it will require new JavaScript functions specific to that content... Where should that live?
| Presumably Rails works just like any other major framework out there. Typically, you want your AJAX and GET requests to work nicely together. So imagine you have this url:
>
> <http://www.example.com/messages>
>
>
>
Going here will load up the messages section of your site without having to make an AJAX call. However, if you go to:
>
> <http://www.example.com/#/messages>
>
>
>
You will land on www.example.com and you can use AJAX to send the "/messages" part off to your server where it can interpret it as a request to that controller to get that messages view (or however you've got it set up). This is a technique that I've seen referred to as hijax because you can pepper your page with actual anchor elements that link correctly to
>
> <http://www.example.com/messages>
>
>
> or
>
>
> <http://www.example.com/maps>
>
>
>
But in your javascript, you can cancel the operation of the link and redirect the browser to:
>
> <http://www.example.com/#/messages>
>
>
> or
>
>
> <http://www.example.com/#maps>
>
>
>
Now since you have that other javascript function listening for changes in the hash, it does the same thing as if you had simply gone to that link directly... that is, sending "/maps" off to the server, having the server interpret it, and kicking back the HTML content you want to fill the page with. This way search engines can crawl your site and you can have a user experience that works great for people with and without javascript.
As for loading additional Javascript functionality after a page has already loaded, you could, I believe, simply use Javascript to insert a new script tag and point its source to wherever your new script resides. You can do this easily using jQuery's .getScript() method:
<http://api.jquery.com/jQuery.getScript/>
As to where all this script resides...I guess that's up to you or Rails. The ability to parse the URL hash is something you definitely need to make it all come together. For this, you have the option of a number of jQuery libraries:
[Detecting Back Button/Hash Change in URL](https://stackoverflow.com/questions/172957/detecting-back-button-hash-change-in-url)
Hope this helps..
|
Extracting text OpenCV
I am trying to find the bounding boxes of text in an image and am currently using this approach:
```
// calculate the local variances of the grayscale image
Mat t_mean, t_mean_2;
Mat grayF;
outImg_gray.convertTo(grayF, CV_32F);
int winSize = 35;
blur(grayF, t_mean, cv::Size(winSize,winSize));
blur(grayF.mul(grayF), t_mean_2, cv::Size(winSize,winSize));
Mat varMat = t_mean_2 - t_mean.mul(t_mean);
varMat.convertTo(varMat, CV_8U);
// threshold the high variance regions
Mat varMatRegions = varMat > 100;
```
When given an image like this:
![enter image description here](https://i.stack.imgur.com/VfDfJ.png)
Then when I show `varMatRegions` I get this image:
![enter image description here](https://i.stack.imgur.com/64wu4.png)
As you can see, it somewhat combines the left block of text with the header of the card. For most cards this method works great, but on busier cards it can cause problems.
The reason it is bad for those contours to connect is that it makes the bounding box of the contour nearly take up the entire card.
Can anyone suggest a different way I can find the text to ensure proper detection of text?
**200 points to whoever can find the text in the card above and in these two.**
![enter image description here](https://i.stack.imgur.com/7h4SJ.jpg)
![enter image description here](https://i.stack.imgur.com/8ipeJ.jpg)
| You can detect text by finding close edge elements (inspired by an LPD, a license plate detector):
```
#include "opencv2/opencv.hpp"
std::vector<cv::Rect> detectLetters(cv::Mat img)
{
std::vector<cv::Rect> boundRect;
cv::Mat img_gray, img_sobel, img_threshold, element;
cvtColor(img, img_gray, CV_BGR2GRAY);
cv::Sobel(img_gray, img_sobel, CV_8U, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
cv::threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU+CV_THRESH_BINARY);
element = getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3) );
cv::morphologyEx(img_threshold, img_threshold, CV_MOP_CLOSE, element); //Does the trick
std::vector< std::vector< cv::Point> > contours;
cv::findContours(img_threshold, contours, 0, 1);
std::vector<std::vector<cv::Point> > contours_poly( contours.size() );
for( int i = 0; i < contours.size(); i++ )
if (contours[i].size()>100)
{
cv::approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 3, true );
cv::Rect appRect( boundingRect( cv::Mat(contours_poly[i]) ));
if (appRect.width>appRect.height)
boundRect.push_back(appRect);
}
return boundRect;
}
```
Usage:
```
int main(int argc,char** argv)
{
//Read
cv::Mat img1=cv::imread("side_1.jpg");
cv::Mat img2=cv::imread("side_2.jpg");
//Detect
std::vector<cv::Rect> letterBBoxes1=detectLetters(img1);
std::vector<cv::Rect> letterBBoxes2=detectLetters(img2);
//Display
for(int i=0; i< letterBBoxes1.size(); i++)
cv::rectangle(img1,letterBBoxes1[i],cv::Scalar(0,255,0),3,8,0);
cv::imwrite( "imgOut1.jpg", img1);
for(int i=0; i< letterBBoxes2.size(); i++)
cv::rectangle(img2,letterBBoxes2[i],cv::Scalar(0,255,0),3,8,0);
cv::imwrite( "imgOut2.jpg", img2);
return 0;
}
```
Results:
a. element = getStructuringElement(cv::MORPH\_RECT, cv::Size(17, 3) );
![imgOut1](https://i.stack.imgur.com/VmMJZ.jpg)
![imgOut2](https://i.stack.imgur.com/0s0U2.jpg)
b. element = getStructuringElement(cv::MORPH\_RECT, cv::Size(30, 30) );
![imgOut1](https://i.stack.imgur.com/4oBM3.jpg)
![imgOut2](https://i.stack.imgur.com/DRW5p.jpg)
Results are similar for the other image mentioned.
|
Recursive iteration over type lists and concatenation into a result type list
Consider a scenario with various classes/structs, some having complex data members, which can in turn contain more of them. In order to set up / initialize, a list of all dependencies is required before instantiation.
Because the types are known before instantiation, my approach is to define a type list containing involved/relevant types in each class/struct like this:
```
template<typename...> struct type_list {};
struct a {
using dependencies = type_list<>;
};
struct b {
using dependencies = type_list<>;
};
struct c {
using dependencies = type_list<b>;
b b_;
};
struct d {
using dependencies = type_list<a>;
a a_;
};
struct e {
using dependencies = type_list<c, a>;
c c_;
a a_;
x x_; // excluded
};
struct f {
using dependencies = type_list<a,b>;
a a_;
b b_;
y y_; // excluded
};
```
For example I want to pre-initialize `d, e, f`.
The next steps are:
- iterate through the `dependencies` of `d,e,f` and concat each item to a result list
- recursively iterate through each element of every `dependencies[n]::dependencies` and concat each item to a result list and do the same for each type until type list is empty
The result may contain duplicates. These get reduced and sorted in a later step. I intend to do this using a constexpr hash map using hashes of `__FUNCSIG__ / __PRETTY_FUNCTION__` (not part of this).
How can this (iterating, accessing type list elements, concat into result list) be achieved using C++20 metaprogramming?
| I'll just look at the metaprogramming part. As always, the solution is to use Boost.Mp11. In this case, it's one of the more involved algorithms: [`mp_iterate`](https://www.boost.org/doc/libs/develop/libs/mp11/doc/html/mp11.html#mp_iteratev_f_r).
This applies a function to a value until failure - that's how we can achieve recursion. We need several steps.
First, a metafunction to get the dependencies for a single type
```
template <typename T> using dependencies_of = typename T::dependencies;
```
Then, we need a way to get all the dependencies for a list of types. Importantly, this needs to fail at some point (for `mp_iterate`'s stopping condition), so we force a failure on an empty list:
```
template <typename L>
using list_dependencies_of = std::enable_if_t<
not mp_empty<L>::value,
mp_flatten<mp_transform<dependencies_of, L>>>;
```
And then we can iterate using those pieces:
```
template <typename L>
using recursive_dependencies_of = mp_unique<mp_apply<mp_append,
mp_iterate<
L,
mp_identity_t,
list_dependencies_of
>>>;
```
The `mp_append` concatenates the list of lists that `mp_iterate` gives you, and then `mp_unique` on top of that since you don't want to have duplicates.
This takes a list, so can be used like `recursive_dependencies_of<mp_list<a>>` (which is just `mp_list<a>`) or `recursive_dependencies_of<mp_list<d, e, f>>` (which is `mp_list<d, e, f, a, c, b>`).
[Demo](https://godbolt.org/z/8ob79soEW).
|
requestAnimationFrame loop not correct FPS
I have a javascript function that my game loops through (hopefully) 60 times a second that controls input, drawing, etc.
The way it is currently coded, it seems to always be around 52, noticeably lower than 60 fps, and it even dips to 25-30 fps even when nothing else is happening.
```
function loop() {
setTimeout(function () {
requestAnimationFrame(loop);
time += (1000 / 60);
if (time % 600 == 0) {
oldtick = tick;
tick += 1;
time = 0;
aiMovement();
combat();
}
context.clearRect(0, 0, c.width, c.height);
drawMap();
playerInput();
movePlayer();
drawEntities();
drawPopups();
var thisLoop = new Date;
var fps = 1000 / (thisLoop - lastLoop);
lastLoop = thisLoop;
context.drawImage(cursor, mouse.x, mouse.y, 16, 16);
context.fillStyle = "#ffff00";
context.fillText("FPS: " + Math.floor(fps) + " Time: " + Math.floor(time) + " tick: " + tick, 10, 450);
context.fillText("Gold: " + gold, 10, 460);
//requestAnimationFrame(loop);
}, 1000 / 60);
}
```
If I remove the setTimeout and the first requestAnimationFrame from the top and uncomment the requestAnimationFrame at the bottom and remove the other setTimeout things, the FPS improves to 58 but rapidly changes between 58 and 62, so again not a steady 60. Does it have something to do with 1000/60 not being a whole number? How would people using requestAnimationFrame achieve 60 fps if this were true?
| ## Don't use setTimeout or setInterval for animation.
The problem is that you are calling a timer event from within the request animation event. Remove the timeout and just use requestAnimationFrame.
```
function loop(time){ // microsecond timer 1/1,000,000 accuracy in ms 1/1000th
// render code here
requestAnimationFrame(loop);
// or render code here makes no diff
}
requestAnimationFrame(loop); // to start
```
RequestAnimationFrame (rAF) is always in sync (unless the browser has vertical sync turned off). The next frame will be presented in 1/60th, 2/60th, 3/60th etc... of a second. You will not get 52 frames per second using rAF, rather 60fps, 30fps, 15fps, etc...
The demo below shows the difference in use.
Because requestAnimationFrame uses some smarts to time the animation, the demos cannot all run at the same time, so click on a canvas to start that test.
You can also add a load to simulate rendering. There is a 14ms load and a 28ms load. The 28ms load is designed to mess up rAF, as on many machines it will flick between 30 and 60 frames per second. The point is to show that rAF can only give 60, 30, 20,.. etc frames per second.
```
var ctx1 = can1.getContext("2d");
var ctx2 = can2.getContext("2d");
var ctx3 = can3.getContext("2d");
var lastTime1 = 0;
var lastTime2 = 0;
var lastTime3 = 0;
var frameFunction = frame1;
var frameText = "";
var drag = false;
var loadAmount = 14;
var stats = [{
data : [],
pos : 0,
add(val){
this.data[(this.pos ++) % 150] = val;
}
},{
data : [],
pos : 0,
add(val){
this.data[(this.pos ++) % 150] = val;
}
},{
data : [],
pos : 0,
add(val){
this.data[(this.pos ++) % 150] = val;
}
}
];
for(let i = 0; i < 150; i += 1){
stats[0].add(0);
stats[1].add(0);
stats[2].add(0);
}
setupContext(ctx1);
setupContext(ctx2);
setupContext(ctx3);
drawFrameTime(ctx1,0);
drawFrameTime(ctx2,0);
drawFrameTime(ctx3,0);
can1.addEventListener("click",()=>frameFunction = frame1);
can2.addEventListener("click",()=>frameFunction = frame2);
can3.addEventListener("click",()=>frameFunction = frame3);
load.addEventListener("click",()=>{
if(drag){
drag = false;
load.value = "Add load.";
}else{
drag = true;
load.value = "Remove load.";
}
});
loadPlus.addEventListener("click",()=>{
if(loadAmount === 14){
loadAmount = 28;
loadPlus.value = "28ms";
}else{
loadAmount = 14;
loadPlus.value = "14ms";
}
});
function CPULoad(){
if(drag){
var stopAt = performance.now() + loadAmount;
while(performance.now() < stopAt);
}
}
function setupContext(ctx){
ctx.font = "64px arial";
ctx.textAlign = "center";
ctx.textBaseline = "middle";
}
function drawStats(ctx,stat){
ctx.setTransform(1,0,0,1,0,64);
ctx.strokeStyle = "red";
ctx.strokeRect(-1,16.666,152,0);
ctx.strokeStyle = "black";
ctx.beginPath();
var i = stat.pos + 149;
var x = 0;
ctx.moveTo(x,stat.data[(i++) % 150]);
while(x ++ < 150 && stat.data[i % 150] !== undefined) {
ctx.lineTo(x,stat.data[(i++) % 150]);
}
ctx.stroke();
}
function drawFrameTime(ctx,time){
ctx.fillStyle = "black";
ctx.setTransform(1,0,0,1,0,0);
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
if(time > 0){
ctx.fillStyle = drag ? "red" : "black";
ctx.setTransform(1,0,0,1,ctx.canvas.width / 2,ctx.canvas.height *0.25);
ctx.fillText(time,0,0);
ctx.setTransform(0.4,0,0,0.4,ctx.canvas.width / 2,ctx.canvas.height * 0.75);
ctx.fillText(Math.round(1000 / Number(time)) + "fps",0,0);
}else{
ctx.setTransform(0.4,0,0,0.4,ctx.canvas.width / 2,ctx.canvas.height * 0.75);
ctx.fillText("Click to Start.",0,0);
}
ctx.fillStyle = "black";
ctx.setTransform(0.2,0,0,0.2,ctx.canvas.width / 2,ctx.canvas.height * 0.9);
ctx.fillText(frameText,0,0);
if(drag){
ctx.fillStyle = "red";
ctx.setTransform(0.2,0,0,0.2,ctx.canvas.width / 2,ctx.canvas.height * 0.5);
ctx.fillText("Load " + loadAmount + "ms",0,0);
}
}
function frame1(time){
requestAnimationFrame(frameFunction);
frameText = "Using rAF.";
var frameTime = time - lastTime1;
lastTime1 = time;
stats[0].add(frameTime);
drawFrameTime(ctx1,frameTime.toFixed(2));
drawStats(ctx1,stats[0]);
CPULoad()
}
function frame2() {
setTimeout(function () {
frameText = "Using rAF & setTimeout.";
var time = performance.now();
var frameTime = time - lastTime2;
stats[1].add(frameTime);
lastTime2 = time;
drawFrameTime(ctx2, frameTime.toFixed(2));
drawStats(ctx2,stats[1]);
CPULoad();
requestAnimationFrame(frameFunction);
}, 1000 / 60);
}
function frame3() {
setTimeout(frameFunction,1000/60);
frameText = "SetTimeout by itself.";
var time = performance.now();
var frameTime = time - lastTime3;
stats[2].add(frameTime);
lastTime3 = time;
drawFrameTime(ctx3, frameTime.toFixed(2));
drawStats(ctx3,stats[2]);
CPULoad();
}
requestAnimationFrame(frameFunction);
```
```
body {
font-family : arial ;
}
canvas {
border : 1px solid black;
}
div {
text-align : center;
}
```
```
<div><h2>RequestAnimationFrame (rAF)</h2>
rAF V rAF & setTimeout V setTimeout<br>
<canvas id = can1 width = 150></canvas>
<canvas id = can2 width = 150></canvas>
<canvas id = can3 width = 150></canvas><br>
Click the frame to set the current test.<br>
The left frame is using rAF alone, the middle uses setTimeout and rAF, and the right frame uses setTimeout alone.<br>
Click <input type="button" id=load value="add Load"></input> to simulate a rendering load of around <input type="button" id=loadPlus value="14ms" title="click to change CPU load between 14 and 28ms"></input> <br>
Try dragging and selecting this text and see how it affects the different methods.<br>
rAF is by far the most stable of the 3.<br>
</div>
```
|
Receiving 32-bit registers from 64-bit nasm code
I am learning 64-bit nasm. I assemble the .nasm file, which ONLY contains 64-bit registers, by doing the following
```
nasm -f elf64 HelloWorld.nasm -o HelloWorld.o
```
and link it by doing the following
```
ld HelloWorld.o -o HelloWorld
```
the program runs correctly and even says it is a 64-bit ELF when I run the `file` command, but when I use `objdump` or `gdb` to disassemble the executable, the registers I put as 64-bit registers in the code show up as 32-bit registers when disassembled. (example: `rax` in source showing up as `eax` when disassembled)
Why is this?
This does not happen on just one computer, and it is a new problem; it wasn't doing this before.
**HelloWorld.nasm:**
```
global _start
section .text
_start:
mov rax, 1
mov rdi, 1
mov rsi, hello_world
mov rdx, length
syscall
mov rax, 60
mov rdi, 11
syscall
section .data
hello_world: db 'Hello World',0xa
length: equ $-hello_world
```
**Disassembled HelloWorld:**
```
...
00000000004000b0 <_start>:
4000b0: b8 01 00 00 00 mov eax,0x1
4000b5: bf 01 00 00 00 mov edi,0x1
4000ba: 48 be d8 00 60 00 00 movabs rsi,0x6000d8
4000c1: 00 00 00
4000c4: ba 0c 00 00 00 mov edx,0xc
4000c9: 0f 05 syscall
4000cb: b8 3c 00 00 00 mov eax,0x3c
4000d0: bf 0b 00 00 00 mov edi,0xb
4000d5: 0f 05 syscall
...
```
| Why does
```
...
mov rax, 1
mov rdi, 1
mov rsi, hello_world
...
```
get disassembled as
```
...
4000b0: b8 01 00 00 00 mov eax,0x1
4000b5: bf 01 00 00 00 mov edi,0x1
4000ba: 48 be d8 00 60 00 00 movabs rsi,0x6000d8
4000c1: 00 00 00
...
```
Because the literal `0x1` fits into 32 bits, and [the upper 32 bits of a 64 bit register are set to `0` when loading the lower 32 bits](https://stackoverflow.com/questions/11177137/why-do-most-x64-instructions-zero-the-upper-part-of-a-32-bit-register) through the corresponding `E`-register. Hence the assembler can optimize the `mov` to a 32 bit operation.
Note that the address loaded into `rsi` might not fit into 32 bits, hence `rsi` remains as such.
If you add the following instructions, you can see the effect very clearly:
```
mov rbx, 0x0ffffffff ; still fits into 32 bit
mov rbx, 0x100000000 ; does not fit into 32 bits anymore
```
gets disassembled as
```
a: bb ff ff ff ff mov $0xffffffff,%ebx
f: 48 bb 00 00 00 00 01 movabs $0x100000000,%rbx
16: 00 00 00
```
You can disable nasm optimization with `-O0`, in which case the instructions keep their long format:
```
nasm -O0 -f elf64 HelloWorld.asm
```
Result:
```
14: 48 bb ff ff ff ff 00 movabs $0xffffffff,%rbx
1b: 00 00 00
1e: 48 bb 00 00 00 00 01 movabs $0x100000000,%rbx
25: 00 00 00
```
|
jQuery - match element that has a class that starts with a certain string
I have a few links that look like this:
```
<a href="#" class="somelink rotate-90"> ... </a>
```
How can I bind a function to all elements that have a class that starts with "`rotate-`" ?
| You can use [starts with](http://api.jquery.com/attribute-starts-with-selector/) selector like this:
```
$('a[class^="rotate-"]')
```
---
>
> Description: Selects elements that
> have the specified attribute with a
> value beginning exactly with a given
> string.
>
>
>
So your code should be:
```
$('a[class^="rotate-"]').click(function(){
// do stuff
});
```
---
**Note:** If you want to find elements whose attribute value contains the given text anywhere (not just at the start), you should do this instead:
```
$('a[class*="rotate-"]').click(function(){
// do stuff
});
```
|
Multiple-Target Assignments
I am reading a book about Python and there is a special part in the book about Multiple-Target Assignments. Now the book explains it like this:
![enter image description here](https://i.stack.imgur.com/QhyMP.png)
but I don't see the use of this. This makes no sense to me. Why would you use more variables?
Is there a reason to do this? What makes this so different from using `a='spam'` and then printing out `a` 3 times?
I can only think of using it for emptying variables in one line.
| A very good use for multiple assignment is setting a bunch of variables to the same number.
Below is a demonstration:
```
>>> vowels = consonants = total = 0
>>> mystr = "abcdefghi"
>>> for char in mystr:
... if char in "aeiou":
... vowels += 1
... elif char in "bcdfghjklmnpqrstvwxyz":
... consonants += 1
... total += 1
...
>>> print "Vowels: {}\nConsonants: {}\nTotal: {}".format(vowels, consonants, total)
Vowels: 3
Consonants: 6
Total: 9
>>>
```
Without multiple assignment, I'd have to do this:
```
>>> vowels = 0
>>> consonants = 0
>>> total = 0
```
As you can see, this is a lot more long-winded.
Summed up, multiple assignment is just Python syntax sugar to make things easier/cleaner.
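To make the semantics concrete, here is a small sketch (with made-up values) showing that the right-hand side is evaluated once and the resulting object is bound to every name, and that rebinding one name afterwards leaves the others untouched:
```
a = b = c = 0        # one object, three names
print(a is b is c)   # True
b += 1               # ints are immutable, so this rebinds only b
print(a, b, c)       # 0 1 0
```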
|
knitr: how to use child .Rnw docs with (relative) figure paths?
I have a parent and a child `Rnw` document. The child doc is located in the subfolder `children`, i.e.
```
+-- parent.Rnw
+-- children
+-- child.Rnw
+-- figure
+-- test.pdf
```
Now I want to create the (margin) figure `test.pdf` from inside the child doc using the `pdf` function and place it in the folder `figure` inside the `children` folder (i.e. the local `figure` folder for `child.Rnw`).
**parent.Rnw**
```
\documentclass{article}
\begin{document}
I am the parent
<<child, child='children/child.Rnw'>>=
@
\end{document}
```
**child.Rnw**
```
<<parent, echo=FALSE, cache=FALSE>>=
knitr::set_parent("../parent.Rnw")
@
I am the child doc.
<<>>=
pdf("figure/test.pdf")
plot(1:10)
dev.off()
@
\marginpar{ \includegraphics[width=\marginparwidth]{figure/test.pdf} }
```
When compiling the `child.Rnw` everything works fine. The path to `figure/test.pdf` is correct for the child doc but not when compiling the parent doc. Then it would have to be `children/figure/test.pdf`.
**Question:** How can I have a correct path for the compilation of the child **AND** the parent doc?
| For me the following solution is suitable:
At the top of the child doc, I define a function that adjusts a relative path depending on whether the doc is run as a child or not:
```
# rp: a relative path
adjust_path <- function(path_to_child_folder, rp)
{
is.child <- knitr:::child_mode()
function(rp)
{
if (is.child)
rp <- file.path(path_to_child_folder, rp)
rp
}
}
```
Now, we supply the *from-the-parent-to-the-child-doc* path to the function `adjust_path`.
```
ap <- adjust_path("children")
```
The function returns a new function which can be used to adjust a relative path in a child doc. Now we can write
```
\includegraphics[width=\textwidth]{\Sexpr{ap("figure/test.pdf")}}
```
and the path will be correct if run as a child or standalone document.
|
Confusion in array operation in numpy
I generally use `MATLAB` and `Octave`, and I have recently been switching to `python` `numpy`.
In numpy when I define an array like this
```
>>> a = np.array([[2,3],[4,5]])
```
it works great and the size of the array is
```
>>> a.shape
(2, 2)
```
which is also same as MATLAB
But when i extract the first entire column and see the size
```
>>> b = a[:,0]
>>> b.shape
(2,)
```
I get size `(2,)`. What is this? I expected the size to be `(2,1)`. Perhaps I have misunderstood a basic concept. Can anyone clear this up for me?
| A 1D numpy array\* is literally 1D - it has no size in any second dimension, whereas in MATLAB, a '1D' array is actually 2D, with a size of 1 in its second dimension.
If you want your array to have size 1 in its second dimension you can use its `.reshape()` method:
```
a = np.zeros(5,)
print(a.shape)
# (5,)
# explicitly reshape to (5, 1)
print(a.reshape(5, 1).shape)
# (5, 1)
# or use -1 in the first dimension, so that its size in that dimension is
# inferred from its total length
print(a.reshape(-1, 1).shape)
# (5, 1)
```
## Edit
As Akavall pointed out, I should also mention `np.newaxis` as another method for adding a new axis to an array. Although I personally find it a bit less intuitive, one advantage of `np.newaxis` over `.reshape()` is that it allows you to add multiple new axes in an arbitrary order without explicitly specifying the shape of the output array, which is not possible with the `.reshape(-1, ...)` trick:
```
a = np.zeros((3, 4, 5))
print(a[np.newaxis, :, np.newaxis, ..., np.newaxis].shape)
# (1, 3, 1, 4, 5, 1)
```
`np.newaxis` is just an alias of `None`, so you could do the same thing a bit more compactly using `a[None, :, None, ..., None]`.
---
\* An `np.matrix`, on the other hand, is always 2D, and will give you the indexing behavior you are familiar with from MATLAB:
```
a = np.matrix([[2, 3], [4, 5]])
print(a[:, 0].shape)
# (2, 1)
```
For more info on the differences between arrays and matrices, see [here](https://web.archive.org/web/20150818181140/http://wiki.scipy.org/NumPy_for_Matlab_Users#head-e9a492daa18afcd86e84e07cd2824a9b1b651935).
|
Why doesn't nodelist have forEach?
I was working on a short script to change `<abbr>` elements' inner text, but found that `nodelist` does not have a `forEach` method. I know that `nodelist` doesn't inherit from `Array`, but doesn't it seem like `forEach` would be a useful method to have? Is there a particular implementation issue I am not aware of that prevents adding `forEach` to `nodelist`?
Note: I am aware that Dojo and jQuery both have `forEach` in some form for their nodelists. I cannot use either due to limitations.
| ## NodeList now has forEach() in all major browsers
See [nodeList forEach() on MDN](https://developer.mozilla.org/en-US/docs/Web/API/NodeList/forEach).
## Original answer
None of these answers explain *why* NodeList doesn't inherit from Array, thus allowing it to have `forEach` and all the rest.
The answer is found [on this es-discuss thread](https://esdiscuss.org/topic/why-does-legacy-content-break-when-making-array-likes-real-arrays). In short, it breaks the web:
>
> The problem was code that incorrectly assumed instanceof to mean that the instance was an Array in combination with Array.prototype.concat.
>
>
> There was a bug in Google's Closure Library which caused almost all Google's apps to fail due to this. The library was updated as soon as this was found but there might still be code out there that makes the same incorrect assumption in combination with concat.
>
>
>
That is, some code did something like
```
if (x instanceof Array) {
otherArray.concat(x);
} else {
doSomethingElseWith(x);
}
```
However, `concat` will treat "real" arrays (not instanceof Array) differently from other objects:
```
[1, 2, 3].concat([4, 5, 6]) // [1, 2, 3, 4, 5, 6]
[1, 2, 3].concat(4) // [1, 2, 3, 4]
```
so that means that the above code broke when `x` was a NodeList, because before it went down the `doSomethingElseWith(x)` path, whereas afterward it went down the `otherArray.concat(x)` path, which did something weird since `x` wasn't a real array.
For some time there was a proposal for an `Elements` class that was a real subclass of Array, and would be used as "the new NodeList". However, that was [removed from the DOM Standard](https://github.com/whatwg/dom/commit/10b6cf1ba02806220d5461a3bdb7939728b73635), at least for now, since it wasn't feasible to implement yet for a variety of technical and specification-related reasons.
|
Why doesn't trunc() turn a float into an integer?
When doing `trunc(3.5)`, it returns a float, `3.0`, why?
I know that you can do `trunc(Int64, 3.5)`, but isn't the purpose of `trunc` to convert a float into an integer? Why does it work this way?
| Let us focus on the case when you pass a `Float64` to `trunc` (the analysis can be similarly extended to other types). Take that the value you want to truncate is `x`.
First note that `trunc` can always perform the truncation of `x` to the nearest integral value less than or equal to it. So in short: this operation is always well defined, possible to perform, fast, and type stable.
If we wanted to return an integer, we would face a choice: do we want the function to be type stable?
In Julia the answer in Base is yes. But this would mean that you would have to return a `BigInt` value for the operation to always be well defined. But probably when you do `trunc` you do not expect to get a `BigInt`, as it would be expensive.
The alternative would be to return some other integer type, but then you would have to throw an error if the float is too large; again, this is something that you most likely do not want.
Here is an example showing the issue:
```
julia> x = 1e300
1.0e300
julia> trunc(x)
1.0e300
julia> trunc(Int, x)
ERROR: InexactError: trunc(Int64, 1.0e300)
Stacktrace:
[1] trunc(::Type{Int64}, ::Float64) at ./float.jl:703
[2] top-level scope at REPL[35]:1
julia> trunc(BigInt, x)
1000000000000000052504760255204420248704468581108159154915854115511802457988908195786371375080447864043704443832883878176942523235360430575644792184786706982848387200926575803737830233794788090059368953234970799945081119038967640880074652742780142494579258788820056842838115669472196386865459400540160
```
So in summary: because floats can span a much wider range of values than *normal* integers, the only safe option is to return a float by default.
|
Why can swapping standard library containers be problematic in C++11 (involving allocators)?
**Note:** Originally asked by [GreenScape](https://stackoverflow.com/users/966376/greenscape) as a [comment](https://stackoverflow.com/questions/23754223/why-are-the-swap-member-functions-in-stl-containers-not-declared-noexcept/23755126?noredirect=1#comment36524813_23755126).
---
After reading [Why are the swap member functions in STL containers not declared noexcept?](https://stackoverflow.com/q/23754223/1090079) it seems that the reason for potential *undefined behavior* when doing `a.swap(b)` for standard containers boils down to also swapping, or not swapping, the underlying allocators.
- Why is swapping allocators along with data problematic?
| Let's start of by digging into the Standard ([N3797](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3797.pdf)):
>
> `23.2.1p9` **General Container Requirements** `[container.requirements.general]`
>
>
>
> >
> > If
> > `allocator_traits<allocator_type>::propagate_on_container_swap::value`
> > is `true`, then the allocators of `a` and `b` shall also be exchanged
> > using an unqalified call to non-member `swap`. **Otherwise, they shall
> > not be swapped, and the behavior is undefined unless `a.get_allocator() == b.get_allocator()`**.
> >
> >
> >
>
>
>
---
**What is the purpose of `propagate_on_container_swap`?**
If an Allocator has a *typedef* named `propagate_on_container_swap` that refers to `std::true_type`, the underlying Allocators of two containers being swapped will also be swapped.[1]
If `propagate_on_container_swap` is `std::false_type` only the data of the two containers will swap, but the allocators will remain in their place.
[1] This means that after `a.swap(b)`, `a.get_allocator()` will be what was previously `b.get_allocator()`; the allocators have *swapped*.
---
**What are the implications of stateful Allocators?**
The Allocator is not only responsible for allocating memory for the elements within a Standard container, it is also responsible for the deallocation of said elements.
C++03 didn't allow *stateful allocators* within standard containers, but C++11 mandates that support for such must be present. This means that we could define an allocator that, depending on how it's constructed, acts in a certain way.
If the allocator has `propagate_on_container_swap::value` equal to `false` the difference in state between the two allocators involved might lead to *undefined behavior*, since one instance of the Allocator might not be compatible with the data handled by the other.
---
**What might be the problem with stateful allocators if they are not swapped properly?**
Let's say we have a `MagicAllocator` which either uses `malloc` or `operator new` to allocate memory, depending on how it's constructed.
If it uses `malloc` to allocate memory it must use `free` to deallocate it, and in case of `operator new`, `delete` is required; because of this it must maintain some information saying which of the two it should use.
If we have two `std::vector`s which both use `MagicAllocator` but with different states (meaning that one uses `malloc` and the other `operator new`), and we don't swap the allocators upon `a.swap(b)`, the allocators won't match the memory allocated for the elements in the two vectors after the swap - which in turn means that the wrong `free`/`delete` might be called upon deallocation.
|
Media Information Extractor for Java
I need a media information extraction library (pure Java or JNI wrapper) that can handle common media formats. I primarily use it for video files and I need at least these information:
1. Video length (Runtime)
2. Video bitrate
3. Video framerate
4. Video format and codec
5. Video size (width X height)
6. Audio channels
7. Audio format
8. Audio bitrate and sampling rate
There are several libraries and tools around, but I couldn't find one for Java.
| A few days after asking this question, I found [MediaInfo](http://mediainfo.sourceforge.net), which supplies dozens of pieces of technical and tag information about a video or audio file.
There is a JNI wrapper for MediaInfo in [subs4me](http://code.google.com/p/subs4me/)'s [source tree](http://code.google.com/p/subs4me/source/browse/trunk/Subs4me/src/net/sourceforge/filebot/mediainfo/?r=93)
that I find very useful. Here are some code snippets that show how to extract some information from a media file:
```
String fileName = "path/to/my/file";
File file = new File(fileName);
MediaInfo info = new MediaInfo();
info.open(file);
String format = info.get(MediaInfo.StreamKind.Video, i, "Format",
MediaInfo.InfoKind.Text, MediaInfo.InfoKind.Name);
int bitRate = info.get(MediaInfo.StreamKind.Video, i, "BitRate",
MediaInfo.InfoKind.Text, MediaInfo.InfoKind.Name);
float frameRate = info.get(MediaInfo.StreamKind.Video, i, "FrameRate",
MediaInfo.InfoKind.Text, MediaInfo.InfoKind.Name);
short width = info.get(MediaInfo.StreamKind.Video, i, "Width",
MediaInfo.InfoKind.Text, MediaInfo.InfoKind.Name);
int audioBitrate = info.get(MediaInfo.StreamKind.Audio, i, "BitRate",
MediaInfo.InfoKind.Text, MediaInfo.InfoKind.Name);
int audioChannels = info.get(MediaInfo.StreamKind.Audio, i, "Channels",
MediaInfo.InfoKind.Text, MediaInfo.InfoKind.Name);
```
Please note that the above code is a basic example and does not contain any error checking (which is a bad habit in a real scenario). Also note that the information you can extract with MediaInfo is not limited to the fields above. See MediaInfo's raw output to learn which kinds of media information you can extract, or read the [MediaInfo C++ SDK](http://mediainfo.sourceforge.net/tr/Support/SDK).
|
LoadMask image - extjs
I have used a load mask on loading in my view.
Right now it is showing me the default loading image (x-mask-loading).
I want to add my own image instead of that default image.
Does anybody have an idea?
Please let me know.
| You can change the loading icon by applying CSS rules to the loadMask element (see the [loadingCls](http://docs.sencha.com/ext-js/4-1/#!/api/Ext.view.AbstractView-cfg-loadingCls) config):
```
Ext.create('Ext.view.View', {
loadingCls: 'custom-loader'
// ...
});
```
```
/* CSS */
.x-mask-msg .custom-loader {
background-image: url(http://example.com/custom-loading.gif);
}
```
Check out a [demo](http://jsfiddle.net/molecule/frzk8/).
### UPDATE
>
> This doesn't work anymore with extjs 4.2. Checkout the new [version](http://jsfiddle.net/eC39Q/). Any idea how to fix this?
>
>
>
This is strange, but Sencha doesn't provide an API (at least I didn't find one) for changing the loadMask icon. So the only option is to fiddle with CSS again:
```
/* CSS */
.x-mask-msg .custom-loader .x-mask-msg-text {
background-image: url(http://example.com/custom-loading.gif);
}
```
[ExtJs4.2 demo](http://jsfiddle.net/eC39Q/1).
|
What is the difference between qDebug() used as a stream and as a function
I have seen bits of Qt code that uses qDebug as if it were `printf()`
```
qDebug( format, ... );
```
Mostly I see it used like `std::cout`
```
qDebug() << "one " << var_one;
```
What is the difference in the usages and when is it correct/better to use one or the other? The Qt help online somehow seems to reference the function version but never explains it.
| `qDebug(pattern, object1, object2)` is basically the old fashioned `fprintf(stderr, pattern, object1, object2)`; as such, you depend on compiler support to avoid, for instance, crashing your program with wrong patterns, like `int x; qDebug("%s\n", x);`. Well, GCC catches this one, but the compiler *cannot always* know if the pattern is appropriate, I think.
I always use `qDebug() << object << ...;`, as the documentation states
>
> If you include QtDebug, a more convenient syntax is also available:
>
>
>
```
qDebug() << "Brush:" << myQBrush << "Other value:" << i;
```
>
> With this syntax, the function returns a QDebug object that is configured to use the QtDebugMsg message type. It automatically puts a single space between each item, and outputs a newline at the end. It supports many C++ and Qt types.
>
>
>
you can pass most Qt objects to `qDebug() << ...` and get them rendered in a readable way.
Try for instance `qDebug() << QTime::currentTime();`
|
Element-wise Matrix Replication in MATLAB
I have a 3-dimensional matrix. I want to replicate each slice of the 8x2x9 matrix along the third dimension a number of times given by a vector, say `[3, 2, 1, 1, 5, 4, 2, 2, 1]`, so that the resultant matrix is of size 8x2x21. Is there any built-in MATLAB function (I'm running version 2014a) to do this, similar to the newer `repelem` function, for matrices?
A simple example of what I need:
```
% Input:
A(:,:,1) = [1 2; 1 2];
A(:,:,2) = [2 3; 2 3];
% Function call:
A = callingfunction(A, 1, 1, [1 2]);
% Output:
A(:,:,1) = [1 2; 1 2];
A(:,:,2) = [2 3; 2 3];
A(:,:,3) = [2 3; 2 3];
```
| ## For R2015a and newer...
According to the documentation for [`repelem`](http://www.mathworks.com/help/matlab/ref/repelem.html) (first introduced in version R2015a), it can operate on matrices as well. I believe the following code should accomplish what you want (I can't test it because I have an older version):
```
newMat = repelem(mat, 1, 1, [3 2 1 1 5 4 2 2 1]);
```
## For pre-R2015a versions...
You can use one of the approaches from [this question](https://stackoverflow.com/q/1975772/52738) to replicate an index into the third dimension, then simply index your matrix with that. For example (adapting [Divakar's solution](https://stackoverflow.com/a/29079288/52738)):
```
vals = 1:size(mat, 3);
clens = cumsum([3 2 1 1 5 4 2 2 1]);
index = zeros(1, clens(end));
index([1 clens(1:end-1)+1]) = diff([0 vals]);
newMat = mat(:, :, cumsum(index));
```
You can then generalize this into a function to operate on multiple dimensions like `repelem` does:
```
function A = my_repelem(A, varargin)
index = cell(1, nargin-1);
for iDim = 1:nargin-1
lens = varargin{iDim};
if isscalar(lens)
if (lens == 1)
index{iDim} = ':';
continue
else
lens = repmat(lens, 1, size(A, iDim));
end
end
vals = 1:size(A, iDim);
clens = cumsum(lens);
index{iDim} = zeros(1, clens(end));
index{iDim}([1 clens(1:end-1)+1]) = diff([0 vals]);
index{iDim} = cumsum(index{iDim});
end
A = A(index{:});
end
```
And for your sample data, you would use it like so:
```
>> A(:,:,1) = [1 2; 1 2];
>> A(:,:,2) = [2 3; 2 3];
>> A = my_repelem(A, 1, 1, [1 2])
A(:,:,1) =
1 2
1 2
A(:,:,2) =
2 3
2 3
A(:,:,3) =
2 3
2 3
```
|
JDBC driver for Oracle 10G XE
I have installed Oracle 10G XE. I want to connect to it using JDBC . Which driver should i use for it and from where can i download it ?
Thank You.
| On the machine you have installed the server, Oracle JDBC drivers are in `ORACLE_HOME/jdbc/lib`. Just put `ojdbc14.jar` on your classpath (`ojdbc14_g.jar` is the same as `ojdbc14.jar`, except that classes were compiled with "javac -g" and contain some tracing information).
**EDIT:** According to [Oracle Database 10g Release 2 (10.2.0.4) JDBC Drivers](http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html), `ojdbc14.jar` contains *classes for use with JDK 1.4 and 1.5* (and I don't see why it **wouldn't** work with a JDK 6, some features of JDBC 4.0 won't just be available).
Some newer drivers are available at [Oracle Database 11g Release 2 JDBC Drivers](http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_112010.html) but I don't really see the difference between `ojdbc14.jar` and `ojdbc15.jar` (except that `ojdbc15.jar` requires a JDK 5+): they are both [JDBC 3.0 driver](http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.html#02_04) so I think it's just a matter of end of life support for `ojbdc14.jar`. If you want JDBC 4.0 support, you'll need `ojdbc16.jar` though. Have a look at the [Oracle JDBC FAQ](http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.html) if you want more details.
|
combine(Flow...) is not working with 3 Flows
I'm trying to use `combine()` to combine 3 flows but nothing happens. This is my code:
```
combine(flowX, flowY, flowZ) { x, y, z ->
println("$x, $y, $z") // this line is never reached
}
```
I know my flows are good because this works:
```
val x = flowX.first()
val y = flowY.first()
val z = flowZ.first()
println("$x, $y, $z") // this line is executed and prints the right values
```
Why would my flows emit with `first()` but not `combine()`?
| `combine()` is not a terminal flow operator. It only creates a new flow and returns it, but it doesn't start collecting the flow. You still need to collect the flow to actually start executing it:
```
combine(flowX, flowY, flowZ) { x, y, z ->
println("$x, $y, $z")
}.collect {}
```
This solution seems a little strange to me, as it prints in the lambda that was supposed to transform the value, then it returns a flow of `Unit` and collects doing nothing. Alternatively, you can do it this way:
```
combine(flowX, flowY, flowZ) { x, y, z -> Triple(x, y, z) }
.collect { (x, y, z) -> println("$x, $y, $z") }
```
|
Insert screenshots in SpecRun/SpecFlow test execution reports
I'm using **SpecFlow** with **Selenium WebDriver** and **SpecRun** as test runner to create and execute automated test cases and I'm looking for a solution to insert screenshots in the test execution report.
I wrote a method to create screenshots after every `Assert` call. The images are saved to a specific location, but when I analyze the results I have to follow the report and the images separately.
It would be nice to have them in the same place (ideally embedded in the report HTML).
| (reposting from <https://groups.google.com/forum/#!topic/specrun/8-G0TgOBUbY>)
Yes, this is possible. You have to do the following steps:
1. save the screenshot to the output folder (this is the current working folder where the tests are running).
2. Write out a file URL to the console from the test: Console.WriteLine("file:///C:\fullpath-of-the-file.png");
3. Make sure that the generated images are also saved on the build server as artifacts
During the report generation, SpecRun scans the test output for such file URLs and converts them to anchor tags with relative paths, so that these will work wherever you distribute your report file (as long as the images are next to it). You can of course put the images into a subfolder as well.
Here is a code snippet that works with Selenium WebDriver. This also saves the HTML source together with the screenshot.
```
[AfterScenario]
public void AfterWebTest()
{
if (ScenarioContext.Current.TestError != null)
{
TakeScreenshot(cachedWebDriver);
}
}
private void TakeScreenshot(IWebDriver driver)
{
try
{
string fileNameBase = string.Format("error_{0}_{1}_{2}",
FeatureContext.Current.FeatureInfo.Title.ToIdentifier(),
ScenarioContext.Current.ScenarioInfo.Title.ToIdentifier(),
DateTime.Now.ToString("yyyyMMdd_HHmmss"));
var artifactDirectory = Path.Combine(Directory.GetCurrentDirectory(), "testresults");
if (!Directory.Exists(artifactDirectory))
Directory.CreateDirectory(artifactDirectory);
string pageSource = driver.PageSource;
string sourceFilePath = Path.Combine(artifactDirectory, fileNameBase + "_source.html");
File.WriteAllText(sourceFilePath, pageSource, Encoding.UTF8);
Console.WriteLine("Page source: {0}", new Uri(sourceFilePath));
ITakesScreenshot takesScreenshot = driver as ITakesScreenshot;
if (takesScreenshot != null)
{
var screenshot = takesScreenshot.GetScreenshot();
string screenshotFilePath = Path.Combine(artifactDirectory, fileNameBase + "_screenshot.png");
screenshot.SaveAsFile(screenshotFilePath, ImageFormat.Png);
Console.WriteLine("Screenshot: {0}", new Uri(screenshotFilePath));
}
}
catch(Exception ex)
{
Console.WriteLine("Error while taking screenshot: {0}", ex);
}
}
```
|
Switching between JPanels in a JFrame
Now I know there are many, many questions on this and I've read a dozen. But I've just hit a wall, I can't make heads or tails of it.
Here's my question.
I have 3 Panel classes.
```
ConfigurePanel.java
ConnectServerPanel.java
RunServerPanel.java
```
and my JFrame class
```
StartUPGUI.java
```
This is what is initialised at startup
```
private void initComponents() {
jPanel1 = new javax.swing.JPanel();
startUp = new sjdproject.GUI.ConfigurePanel();
runServer = new sjdproject.GUI.RunServerPanel();
serverConnect = new sjdproject.GUI.ConnectServerPanel();
setDefaultCloseOperation(javax.swing.WindowConstants.DO_NOTHING_ON_CLOSE);
jPanel1.setLayout(new java.awt.CardLayout());
jPanel1.add(startUp, "card2");
jPanel1.add(runServer, "card4");
jPanel1.add(serverConnect, "card3");
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, layout.createSequentialGroup()
.addContainerGap(27, Short.MAX_VALUE)
.addComponent(jPanel1, javax.swing.GroupLayout.PREFERRED_SIZE, 419, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(38, 38, 38))
);
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addGap(27, 27, 27)
.addComponent(jPanel1, javax.swing.GroupLayout.PREFERRED_SIZE, 281, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(30, Short.MAX_VALUE))
);
```
My StartUPGUI calls the StartUpPanel first. In my StartUpPanel.java I have a button which calls the setPanel method in StartUPGUI
```
StartUpGUI s = new StartUpGUI();
String str = "";
if(runserverbtn.isSelected()){
str = "runserver";
}
else if(connectserverbtn.isSelected()){
str = "connectserver";
}
else{
str = "";
}
s.setPanel(str);
```
Here is my setPanel method
```
void setPanel(String str){
if(str == "runserver"){
}
else if(str == "connectserver"){
}
else{
}
}
```
What do I need to put inside the if blocks to change panel views? I would have assumed jPanel1.something() but I don't know what that something is.
|
>
> "What do I need to put inside the if blocks to change panel views? I would have assumed jPanel1.something() but I don't know what that something is."
>
>
>
1. Don't compare strings with `==`, it will not work. Use `.equals`: `if("runserver".equals(str)){`
2. You need to use the method `.show` from the `CardLayout`
```
CardLayout.show(jPanel1, "whichPanel");
```
- `public void show(Container parent, String name)` - Flips to the component that was added to this layout with the specified name, using addLayoutComponent. If no such component exists, then nothing happens.
```
void setPanel(String str){
CardLayout layout = (CardLayout)jPanel1.getLayout();
if("runserver".equals(str)){
layout.show(jPanel1, "card4");
}else if("connectserver".equals(str)){
layout.show(jPanel1, "card3");
} else{
layout.show(jPanel1, "card2");
}
}
```
---
See [**How to Use CardLayout**](http://docs.oracle.com/javase/tutorial/uiswing/layout/card.html) for more details and see the [**API**](http://docs.oracle.com/javase/7/docs/api/java/awt/CardLayout.html) for more methods.
---
**UPDATE**
Try running this example and examine it with your code to see if you notice anything that will help
```
import java.awt.BorderLayout;
import java.awt.CardLayout;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;
public class TestCardLayout {
PanelOne p1 = new PanelOne();
PanelTwo p2 = new PanelTwo();
PanelThree p3 = new PanelThree();
CardLayout layout = new CardLayout();
JPanel cardPanel = new JPanel(layout);
public TestCardLayout() {
JButton showOne = new JButton("Show One");
JButton showTwo = new JButton("Show Two");
JButton showThree = new JButton("Show Trree");
JPanel buttonsPanel = new JPanel();
buttonsPanel.add(showOne);
buttonsPanel.add(showTwo);
buttonsPanel.add(showThree);
showOne.addActionListener(new ButtonListener());
showTwo.addActionListener(new ButtonListener());
showThree.addActionListener(new ButtonListener());
cardPanel.add(p1, "panel 1");
cardPanel.add(p2, "panel 2");
cardPanel.add(p3, "panel 3");
JFrame frame = new JFrame("Test Card");
frame.add(cardPanel);
frame.add(buttonsPanel, BorderLayout.SOUTH);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.pack();
frame.setVisible(true);
}
private class ButtonListener implements ActionListener {
public void actionPerformed(ActionEvent e) {
String command = e.getActionCommand();
if ("Show One".equals(command)) {
layout.show(cardPanel, "panel 1");
} else if ("Show Two".equals(command)) {
layout.show(cardPanel, "panel 2");
} else {
layout.show(cardPanel, "panel 3");
}
}
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
TestCardLayout testCardLayout = new TestCardLayout();
}
});
}
}
class PanelOne extends JPanel {
public PanelOne() {
setBackground(Color.GREEN);
add(new JLabel("Panel one"));
}
@Override
public Dimension getPreferredSize() {
return new Dimension(300, 300);
}
}
class PanelTwo extends JPanel {
public PanelTwo() {
setBackground(Color.BLUE);
add(new JLabel("Panel two"));
}
@Override
public Dimension getPreferredSize() {
return new Dimension(300, 300);
}
}
class PanelThree extends JPanel {
public PanelThree() {
setBackground(Color.YELLOW);
add(new JLabel("Panel three"));
}
@Override
public Dimension getPreferredSize() {
return new Dimension(300, 300);
}
}
```
---
**UPDATE 2**
The problem is, the button is in the `ConfigurePanel` class. Trying to create a new `StartUPGUI` in that class won't reference the same components. What you need to do is pass a reference of the `StartUPGUI` to the `ConfigurePanel`. Something like this
```
public class ConfigurePanel extends JPanel {
StartUPGUI startup;
public ConfigurePanel(StartUPGUI startup) {
this.startup = startup;
}
....
public void actionPerformed(ActionEvent e) {
startup.setPanel("runserver");
}
}
```
And instantiate `ConfigurePanel` from the `StartUPGUI` like this
```
new ConfigurePanel(StartUPGUI.this);
```
|
How to check if a string already exists and if it does add +1 at the end?
Let's say I check if
```
$strig = "red-hot-chili-peppers-californication";
```
already exists in my database:
```
$query = dbquery("SELECT * FROM `videos` WHERE `slug` = '".$strig."';");
$checkvideo = dbrows($query);
if($checkvideo == 1){
// the code to be executed to rename $strig
// to "red-hot-chili-peppers-californication-2"
// it would be great to work even if $string is defined as
// "red-hot-chili-peppers-californication-2" and
// rename $string to "red-hot-chili-peppers-californication-3" and so on...
}
```
I want to do this to create unique slugs for a more friendly url structure.
Thanks.
| I can offer you the source of [Codeigniter's](http://codeigniter.com/user_guide/helpers/string_helper.html) `increment_string()` function:
```
/**
* CodeIgniter String Helpers
*
* @package CodeIgniter
* @subpackage Helpers
* @category Helpers
* @author ExpressionEngine Dev Team
* @link http://codeigniter.com/user_guide/helpers/string_helper.html
*/
/**
* Add's _1 to a string or increment the ending number to allow _2, _3, etc
*
* @param string $str required
* @param string $separator What should the duplicate number be appended with
* @param string $first Which number should be used for the first dupe increment
* @return string
*/
function increment_string($str, $separator = '_', $first = 1)
{
preg_match('/(.+)'.$separator.'([0-9]+)$/', $str, $match);
return isset($match[2]) ? $match[1].$separator.($match[2] + 1) : $str.$separator.$first;
}
```
>
> Increments a string by appending a number to it or increasing the
> number. Useful for creating "copies" or a file or duplicating database
> content which has unique titles or slugs.
>
>
> **Usage example:**
>
>
>
> ```
> echo increment_string('file', '_'); // "file_1"
> echo increment_string('file', '-', 2); // "file-2"
> echo increment_string('file-4'); // "file-5"
>
> ```
>
>
|
What is the idea behind using nn.Identity for residual learning?
So, I've read about half the original ResNet paper, and am trying to figure out how to make my version for tabular data.
I've read a few blog posts on how it works in PyTorch, and I see heavy use of `nn.Identity()`. Now, the paper also frequently uses the term *identity mapping*. However, it just refers to adding the input of a stack of layers to the output of that same stack in an element-wise fashion. If the in and out dimensions are different, then the paper talks about padding the input with zeros or using a matrix `W_s` to project the input to a different dimension.
Here is an abstraction of a residual block I found in a blog post:
```
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, activation='relu'):
super().__init__()
self.in_channels, self.out_channels, self.activation = in_channels, out_channels, activation
self.blocks = nn.Identity()
self.shortcut = nn.Identity()
def forward(self, x):
residual = x
if self.should_apply_shortcut: residual = self.shortcut(x)
x = self.blocks(x)
x += residual
return x
@property
def should_apply_shortcut(self):
return self.in_channels != self.out_channels
block1 = ResidualBlock(4, 4)
```
And my own application to a dummy tensor:
```
x = tensor([1, 1, 2, 2])
block1 = ResidualBlock(4, 4)
block2 = ResidualBlock(4, 6)
x = block1(x)
print(x)
x = block2(x)
print(x)
>>> tensor([2, 2, 4, 4])
>>> tensor([4, 4, 8, 8])
```
So at the end of it, `x = nn.Identity(x)` and I'm not sure the point of its use except to mimic math lingo found in the original paper. I'm sure that's not the case though, and that it has some hidden use that I'm just not seeing yet. What could it be?
**EDIT** Here is another example of implementing residual learning, this time in Keras. It does just what I suggested above and just keeps a copy of the input for adding to the output:
```
def residual_block(x: Tensor, downsample: bool, filters: int, kernel_size: int = 3) -> Tensor:
y = Conv2D(kernel_size=kernel_size,
strides= (1 if not downsample else 2),
filters=filters,
padding="same")(x)
y = relu_bn(y)
y = Conv2D(kernel_size=kernel_size,
strides=1,
filters=filters,
padding="same")(y)
if downsample:
x = Conv2D(kernel_size=1,
strides=2,
filters=filters,
padding="same")(x)
out = Add()([x, y])
out = relu_bn(out)
return out
```
|
>
> What is the idea behind using nn.Identity for residual learning?
>
>
>
There is none (almost, see the end of the post), all [`nn.Identity`](https://pytorch.org/docs/stable/generated/torch.nn.Identity.html) does is forwarding the input given to it (basically `no-op`).
As shown in the [PyTorch repo issue](https://github.com/pytorch/pytorch/issues/9160) you linked in a comment, this idea was first rejected and later merged into PyTorch due to other uses (see the rationale [in this PR](https://github.com/pytorch/pytorch/pull/19249)). This rationale **is not** connected to the ResNet block itself; see the end of the answer.
## ResNet implementation
The easiest generic version with projection that I can think of would be something along these lines:
```
class Residual(torch.nn.Module):
def __init__(self, module: torch.nn.Module, projection: torch.nn.Module = None):
super().__init__()
self.module = module
self.projection = projection
def forward(self, inputs):
output = self.module(inputs)
if self.projection is not None:
inputs = self.projection(inputs)
return output + inputs
```
You can pass as `module` things like two stacked convolutions and add a `1x1` convolution (with padding or strides as needed) as the projection module.
For `tabular` data you could use this as `module` (assuming your input has `50` features):
```
torch.nn.Sequential(
torch.nn.Linear(50, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, 50),
)
```
Basically, all you have to do is add the `input` of some module to its output, and that is it.
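A minimal usage sketch (my own illustration, assuming the `Residual` class and the `Sequential` block defined above):

```
import torch

# Wrap the tabular block from above in the Residual module; no projection
# is needed because input and output both have 50 features.
block = Residual(
    torch.nn.Sequential(
        torch.nn.Linear(50, 50),
        torch.nn.ReLU(),
        torch.nn.Linear(50, 50),
    )
)

x = torch.randn(32, 50)   # batch of 32 samples, 50 features each
out = block(x)            # module(x) + x
print(out.shape)          # torch.Size([32, 50])
```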
## Rationale behind `nn.Identity`
It might make it easier to construct neural networks (and read them afterwards); here is an example for batch norm (taken from the aforementioned PR):
```
batch_norm = nn.BatchNorm2d
if dont_use_batch_norm:
batch_norm = Identity
```
Now you can use it with `nn.Sequential` easily:
```
nn.Sequential(
...
batch_norm(N, momentum=0.05),
...
)
```
And when printing the network it always has the same number of submodules (with either `BatchNorm` or `Identity`) which also makes the whole thing a little smoother IMO.
Another use case, mentioned [here](https://github.com/pytorch/pytorch/issues/9160#issuecomment-456766137) might be removing parts of existing neural networks:
```
net = tv.models.alexnet(pretrained=True)
# Assume net has two parts
# features and classifier
net.classifier = Identity()
```
Now, instead of running `net.features(input)` you can run `net(input)` which might be easier for others to read as well.
|
Why does ForEach behave differently than % even though they are both aliases of ForEach-Object in Powershell?
For some reason the `%` alias for ForEach-Object throws an exception when using the `($Thing in $Things)` syntax, while the `ForEach` alias works fine.
Here are two examples:
**Using the % alias:**
```
$ints = @(1, 2, 3, 4, 5)
% ($i in $ints)
{Write-Host $i}
```
This fails with the error `Unexpected token 'in' in expression or statement.`
**Using the ForEach alias:**
```
$ints = @(1, 2, 3, 4, 5)
foreach ($i in $ints)
{Write-Host $i}
```
This succeeds without issue.
Why is there a difference if they are both aliases of ForEach-Object?
| These are two different things:
`%` is an alias for the cmdlet `ForEach-Object`, and `foreach` is *also* an alias for the cmdlet `ForEach-Object`... *and* `foreach` is additionally a looping statement, which *does not* work with pipelining.
As written, your first command expands to:
```
ForEach-Object ($i in $ints) {
Write-Host $i
}
```
...which isn't valid syntax for the `ForEach-Object` cmdlet.
When `foreach` appears, as with your second command, it is interpreted as a `foreach` *statement*, not the cmdlet alias. In this case the syntax *is* valid and loops as expected.
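As an illustration (my own example, not part of the original answer), the cmdlet form expects a script block and pipeline input rather than the `($i in $ints)` syntax:

```
$ints = @(1, 2, 3, 4, 5)

# ForEach-Object (and therefore %) works on pipeline input, using $_ for the current item
$ints | % { Write-Host $_ }
$ints | ForEach-Object { Write-Host $_ }
```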
You can compare the differences with `get-help about_Foreach` and `get-help ForEach-Object`. This blog post [Essential PowerShell: Understanding foreach](http://poshoholic.com/2007/08/21/essential-powershell-understanding-foreach/) also does a nice job explaining.
|
Can 2 result sets be viewed from bigquery.client.query()?
I would like to run 2 Select parameters using the bigquery API.
for example, if I run the below query
```
SELECT 1;
SELECT 2;
```
When I run this using the following Python script, I only obtain the result of the 2nd query.
```
from google.cloud import bigquery

def runquery():
    bqclient = bigquery.Client()
    query = """ SELECT 1;
                SELECT 2;"""
    query_job = bqclient.query(query)
    data = query_job.result()
    rows = list(data)
    print(rows)
```
Result:
```
[Row((2,), {'f0_': 0})]
```
But if I run the same query in bigquery query composer I would be able to view both the result.
[![Image from BQ Query Editor](https://i.stack.imgur.com/dejcN.png)](https://i.stack.imgur.com/dejcN.png)
How would I be able to get both the query results in Bigquery API resultset? Would I need to add a jobconfig to client.query() statement?
| Here's a quick example of walking a script. In this example your parent job is of type script, which is composed of two child jobs that are both select statements. Once the parent is complete, you can invoke `list_jobs` with a parent filter to find the child jobs and interrogate them for results. Child jobs don't nest, so you only have to worry about one level of children below the parent job.
```
def multi_statement_script():
from google.cloud import bigquery
bqclient = bigquery.Client()
query = """ SELECT 1;
SELECT 2;
"""
parent_query = bqclient.query(query)
# wait for parent job to finish (which completes when all children are done)
parent_query.result()
print("parent job {}".format(parent_query.job_id))
children = bqclient.list_jobs(parent_job=parent_query.job_id)
# note the jobs are enumerated newest->oldest, so the reverse
# ordering specified in the script
for child in children:
print("job {}".format(child.job_id))
rows = list(child.result())
print(rows)
```
|
Make an animation for collapsing | with Bootstrap
So, I have a bootstrap table:
```
<table class="table">
<tr class="content-row" id="content_1">
<td><a id="more_1" class="more" role="button" data-toggle="collapse" href="#additional_row1" aria-expanded="false" aria-controls="collapseExample">More</a>
</td>
</tr>
<tr class="collapse additional-row" id="additional_row1">
<td class="additional-row-td">Content</td>
</tr>
</table>
```
When I collapse the row, it just appears below the row where the 'More' link is, but I need an animation. I also tried adding a transition to the CSS rule, but it seems to have no effect. Is there any way to make the collapse animate?
| The answer that was given wasn't actually the correct solution. It was *a* solution, but when you need to keep the `tr` and `td` elements, this solution is more complete.
The fact is that there is no way in bootstrap to animate `tr` `td` elements.
What you can do instead of toggling the `tr` or `td` is create a `div` or `span` element that holds all of your content, and then animate that element.
As you can see it looks the same as if you had an actual table
```
<link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" />
<script src="https://code.jquery.com/jquery-1.11.1.min.js" type="text/javascript" ></script>
<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" type="text/javascript" ></script>
<table class="table">
<tr class="content-row" id="content_1">
<td><a id="more_1" class="more" role="button" data-toggle="collapse" href="#additional_row1" aria-expanded="false" aria-controls="collapseExample">More</a>
</td>
</tr>
<tr>
<td class="additional-row-td">
<div class="collapse additional-row" id="additional_row1">Content</div>
</td>
</tr>
</table>
```
|
Add highlight/background to only text using Swift
I want to highlight or add a background only on a text on a label that is not center-aligned.
I already tried Attributed Strings (<https://stackoverflow.com/a/38069772/676822>) and using regex but didn't get near a good solution.
NSAttributedString won't work because my label is not centered and it doesn't contain line breaks. It's just a long text that takes multiple lines.
This is what I'm trying to accomplish:
[![enter image description here](https://i.stack.imgur.com/1l3zw.png)](https://i.stack.imgur.com/1l3zw.png)
Note: It's not "Evangelizing\nDesign\nThinking" it's "Evangelizing Design Thinking"
Thanks!
| As far as I have tried, it's not possible to get what you want simply with attributed text, because using:
```
let attributedText = NSMutableAttributedString(string: "Evangelizing Design Thinking",
attributes: [
.font: UIFont.systemFont(ofSize: 14),
.backgroundColor: UIColor.gray
]
)
```
will add extra gray background at the end of each line. My previous answer was not good either, because it only added a gray background to each word, not to the spaces, and as @Alladinian noticed, the ranges can be wrong in some cases.
Here is a hack you can use to achieve what you want. It uses multiple labels, but it can easily be improved by putting the labels in a custom view. In your `viewDidLoad` / `CustomView` function, add:
```
// Maximum desired width for your text
let maxLabelWidth: CGFloat = 80
// Font you used
let font = UIFont.systemFont(ofSize: 14)
// Your text
let text = "Eva ngel izing Des ing a Thin king"
// Width of a space character
let spaceWidth = NSString(string: " ").size(withAttributes: [NSAttributedString.Key.font: font]).width
// Width of a row
var currentRowWidth: CGFloat = 0
// Content of a row
var currentRow = ""
// Previous label added (we keep it to add constraint betweeen labels)
var prevLabel: UILabel?
let subStrings = text.split(separator: " ")
for subString in subStrings {
let currentWord = String(subString)
let nsCurrentWord = NSString(string: currentWord)
// Width of the new word
let currentWordWidth = nsCurrentWord.size(withAttributes: [NSAttributedString.Key.font: font]).width
// Width of the row if you add a new word
let currentWidth = currentRow.count == 0 ? currentWordWidth : currentWordWidth + spaceWidth + currentRowWidth
if currentWidth <= maxLabelWidth { // The word can be added in the current row
currentRowWidth = currentWidth
currentRow += currentRow.count == 0 ? currentWord : " " + currentWord
} else { // Its not possible to add a new word in the current row, we create a label with the current row content
prevLabel = generateLabel(with: currentRow,
font: font,
prevLabel: prevLabel)
currentRowWidth = currentWordWidth
currentRow = currentWord
}
}
// Dont forget to add the last row
generateLabel(with: currentRow,
font: font,
prevLabel: prevLabel)
```
Then you have to create the `generateLabel` function:
```
@discardableResult func generateLabel(with text: String,
font: UIFont,
prevLabel: UILabel?) -> UILabel {
let leftPadding: CGFloat = 50 // Left padding of the label
let topPadding: CGFloat = 100 // Top padding of (first) label
let label = UILabel()
label.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(label)
label.leftAnchor.constraint(equalTo: self.view.leftAnchor, constant: leftPadding).isActive = true
if let prevLabel = prevLabel {
label.topAnchor.constraint(equalTo: prevLabel.bottomAnchor).isActive = true
} else {
label.topAnchor.constraint(equalTo: self.view.topAnchor, constant: topPadding).isActive = true
}
label.font = font
label.text = text
label.backgroundColor = .gray
return label
}
```
---
**Previous answer:**
As Yogesh suggested, you can use attributed string:
```
// Init label
let label = UILabel(frame: CGRect(x: 50, y: 50, width: 90, height: 120))
self.view.addSubview(label)
label.lineBreakMode = .byTruncatingTail
label.numberOfLines = 0
label.backgroundColor = .white
// Create attributed text
let text = "Evangelizing Design Thinking"
let attributedText = NSMutableAttributedString(string: text,
attributes: [
.font: UIFont.systemFont(ofSize: 14)
]
)
// Find ranges of each word
let subStrings = text.split(separator: " ")
let ranges = subStrings.map { (subString) -> Range<String.Index> in
guard let range = text.range(of: subString) else {
fatalError("something wrong with substring") // This case should not happen
}
return range
}
// Apply background color for each word
ranges.forEach { (range) in
let nsRange = NSRange(range, in: text)
attributedText.addAttribute(.backgroundColor, value: UIColor.gray, range: nsRange)
}
// Finally set attributed text
label.attributedText = attributedText
```
|
UILabel with mutiple lines to truncate one long word
I have a `UILabel` that allows showing two lines of text. But in my case, there is only one long word. Whether I set the line break mode to `UILineBreakModeTailTruncation` or `UILineBreakModeWordWrap`, it always breaks the word into two lines, like this:
"xxxxxx
xx"
I would like to truncate it in the first line, like this:
"xxxx..."
Is there any way to implement that.
In most cases, it should still allow showing two lines of words.
Additional edit:
Take the following image as an example. The top two labels are what I expect: one long word can be truncated on one line; multiple short words can be shown on two lines.
The bottom label shows what currently happens.
![enter image description here](https://i.stack.imgur.com/a0sjh.png)
| In order to do what you're asking you need to find out if there is only one word. If there is, set the number of lines to 1 and the auto-shrink should fix things. If there is not, then set the number of lines to 0.
e.g.
```
UILabel *label = [[UILabel alloc] init];
label.font = [UIFont systemFontOfSize:12.0];
label.adjustsFontSizeToFitWidth = YES;
label.minimumScaleFactor = 0.8;
label.text = NSLocalizedString(@"Information", @"E: 'Information'");
NSUInteger words = [label.text componentsSeparatedByCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]].count;
label.numberOfLines = (words == 1) ? 1 : 0;
```
Swift 3.0
```
let label = UILabel()
label.font = UIFont.systemFont(ofSize: 12)
label.adjustsFontSizeToFitWidth = true
label.minimumScaleFactor = 0.8
label.text = NSLocalizedString("Information", comment: "E: 'Information'")
let words = label.text?.components(separatedBy: NSCharacterSet.whitespacesAndNewlines).count
label.numberOfLines = words == 1 ? 1 : 0
```
|
What's the default /etc/network/interfaces?
I messed up my network configuration. Can anyone help me and copy the default `/etc/network/interfaces`?
| **For eth0 with dhcp:**
```
# The loopback network interface
auto lo eth0
iface lo inet loopback
# The primary network interface
iface eth0 inet dhcp
```
**For eth0 static:**
```
# The loopback network interface
auto lo eth0
iface lo inet loopback
# The primary network interface
iface eth0 inet static
address 192.168.10.33
netmask 255.255.255.0
broadcast 192.168.10.255
network 192.168.10.0
gateway 192.168.10.254
dns-nameservers 192.168.10.254
```
link:
<http://www.iasptk.com/ubuntu/20202-ubuntu-server-networking-configuration-dhcp-address-static-ip-address-second-ip-address-or-virtual-ip-address>
|
richtext control doesn't store the input content
My XPage has a rich text control that the user can fill with text snippets and then complete with more text.
The eventHandler of the "filling button":
```
<xp:eventHandler event="onclick" submit="true" refreshMode="partial" refreshId="Body1">
<xp:this.action><![CDATA[#{javascript:
var mykey = getComponent("Aufgabe1").getValue();
var bodytxt:string = @DbLookup(@DbName(), "lookupOrdertypes",mykey,4,"[FAILSILENT]");
if (checkContent(bodytxt)) getComponent("Body1").setValue(bodytxt);
}]]></xp:this.action>
</xp:eventHandler>
```
The text is filled in, the user sees it and writes some more. Finally the user submits the form, but only the filled-in text snippet is saved in the rich text field! If the user doesn't use that button and only types in their own text, the text is saved correctly.
When I change the richtext control into a multiline edit box, everything works fine.
thanks for any help
Uwe
| The problem is that you are not refreshing the rich text completely, only the textarea with the id of the rich text component. There are two other components which have to be refreshed: inputRichText1\_mod and inputRichText1\_h, two automatically generated fields from the *XspInputRichText* component.
If you refresh a surrounding element instead your code should work:
```
<xp:div id="refreshMe">
<xp:inputRichText id="Body1" value="#{document1.Body}"></xp:inputRichText>
</xp:div>
```
Now refresh the *div* instead:
```
<xp:eventHandler event="onclick" submit="true" refreshMode="partial" refreshId="refreshMe">
<xp:this.action><![CDATA[#{javascript:
var mykey = getComponent("Aufgabe1").getValue();
var bodytxt:string = @DbLookup(@DbName(), "lookupOrdertypes",mykey,4,"[FAILSILENT]");
if (checkContent(bodytxt)) getComponent("Body1").setValue(bodytxt);
}]]></xp:this.action>
</xp:eventHandler>
```
|
Executing a function at specific intervals
The task is to execute a function (say `Processfunction()`) every x (say x=10) seconds.
With the code below, I'm able to call `Processfunction()` every x seconds.
**Question:** How to handle the case where the function takes more than 10 seconds to finish execution?
One way would be to have a flag to indicate the end of `Processfunction()` execution and check for it before calling `Processfunction()`.
Is there a better way to do this ?
---
```
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <pthread.h>
#include <unistd.h>       // for sleep() and usleep()
#include <sys/timerfd.h>  // for timerfd_create() and timerfd_settime()
void *timerthread(void *timer_parms) {
struct itimerspec new_value;
int max_exp, fd;
struct timespec now;
uint64_t exp;
ssize_t s;
struct timer_params *p =(struct timer_params*)timer_parms;
printf("starttimer Start\n");
/* Create a CLOCK_REALTIME absolute timer with initial
expiration and interval as specified in command line */
if (clock_gettime(CLOCK_REALTIME, &now) == -1)
handle_error("clock_gettime");
new_value.it_value.tv_sec = now.tv_sec;
new_value.it_value.tv_nsec = now.tv_nsec + p->tv_nsec;
new_value.it_interval.tv_sec = p->tv_sec;
new_value.it_interval.tv_nsec = p->tv_nsec;
//max_exp = 5; //No of times
fd = timerfd_create( CLOCK_REALTIME , 0);
if (fd == -1)
handle_error("timerfd_create");
if (timerfd_settime(fd, TFD_TIMER_ABSTIME, &new_value, NULL) == -1)
handle_error("timerfd_settime");
printf("timer started\n");
while(1) // keep checking
{
s = read(fd, &exp, sizeof(uint64_t));
if (s != sizeof(uint64_t))
handle_error("read");
Processfunction(); // Say after X seconds call this function
}
return NULL;
}
int main() {
struct timer_params timer_params_obj;
int res;void *thread_result;
timer_params_obj.tv_sec = 10;
//timer_params_obj.tv_nsec = 10000000 ; //10ms
timer_params_obj.tv_nsec = 0 ;
pthread_t pt;
  pthread_create(&pt, NULL, timerthread, &timer_params_obj);
  // thread is running and will call Processfunction() every 10 sec
  pthread_join(pt, &thread_result); // keep main() alive while the timer thread runs
}
```
| Why do you need a timer for this?
You could just measure the execution time and take a sleep according to the relation of elapsed time to desired interval duration.
Example:
```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
int main() {
srand(1);
for (;;) {
double interval = 10; /* seconds */
/* start time */
time_t start = time(NULL);
/* do something */
int duration = rand() % 13;
printf("%2d seconds of work started at %s", duration, ctime(&start));
sleep(duration);
/* end time */
time_t end = time(NULL);
/* compute remaining time to sleep and sleep */
double elapsed = difftime(end, start);
int seconds_to_sleep = (int)(interval - elapsed);
if (seconds_to_sleep > 0) { /* don't sleep if we're already late */
sleep(seconds_to_sleep);
}
}
return 0;
}
```
Output:
```
$ gcc test.c && ./a.out
0 seconds of work started at Sun Mar 17 21:20:28 2013
9 seconds of work started at Sun Mar 17 21:20:38 2013
11 seconds of work started at Sun Mar 17 21:20:48 2013
4 seconds of work started at Sun Mar 17 21:20:59 2013
1 seconds of work started at Sun Mar 17 21:21:09 2013
^C
```
|
Translating VBA to VBScript for removing duplicates in a column
I have a sheet in an Excel file in which I want to remove duplicate values from column 1. In Excel this feature is available under Data, then Remove Duplicates, applied to the column named "Code", which is the first column. I am trying to translate this to VBScript but am unsure how. I tried recording a macro to grab the VBA syntax, but it isn't the same for VBScript. The code I get from the recorded macro is
```
Sub Macro1()
Columns("A:A").Select
ActiveSheet.Range("$A$1:$K$523").RemoveDuplicates Columns:=1, Header:=xlYes
End Sub
```
| You can't use named parameters in VBScript. You just need to provide parameters in the proper order, as they appear in the function declaration. Also you won't be able to use Excel's constants (`xlNo`, `xlYes`, etc) without first defining them yourself.
For your `RemoveDuplicates()` function, the VBScript equivalent would look like (assuming `objExcel` is your application object):
```
objExcel.ActiveSheet.Range("$A$1:$K$523").RemoveDuplicates 1
```
Since `Columns` is the first parameter to `RemoveDuplicates()`. If you wanted to specify a header row, it would look like this:
```
Const xlYes = 1
objExcel.ActiveSheet.Range("$A$1:$K$523").RemoveDuplicates 1, xlYes
```
|
How to download a file without knowing it's type or filename?
I've got a download link like this:
```
https://someURL.com/PiPki.aspx?ident=594907&jezik=de
```
The download result could be a file with any file type. For example `Picture.jpg` or `something.pdf`.
How is it possible to download whatever file is behind this link with its original name and extension?
| Via HTTP you can transmit not only payload data, but also headers that carry meta-data. On the receiver side you can use that meta-data, e.g., to determine the name to store the file under.
In order to determine the file type, the HTTP response must have the correct `Content-Type` header (see [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type)). If the transmitted file is a PDF, the HTTP response would have the header field
```
Content-Type: application/pdf
```
Furthermore there is the possibility to pass a file name in the `Content-Disposition` header (see [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition)) *if* the disposition is set to `attachment` (i.e. a downloadable file instead of inline content)
```
Content-Disposition: attachment; filename="something.pdf"
```
If there is a known `Content-Type` but no file name, your option would be to use a default file name plus the extension matching the `Content-Type`, for example `download.pdf`. If the `Content-Type` is missing or generic, you're out of luck. You could try to inspect the contents of the file, but this may or may not be successful and can be unreliable for certain file types.
Since this is a C# question, here is a sketch of how that can be done:
```
var client = new HttpClient();
using (var response = await client.GetAsync("https://someURL.com/PiPki.aspx?ident=594907&jezik=de"))
{
string fileName = null;
if (response.Headers.Contains("Content-Disposition"))
{
fileName = GetFileNameFromContentDisposition(response.Headers);
}
if (fileName == null && response.Headers.Contains("Content-Type"))
{
var extension = GetExtensionFromContentType(response.Headers);
fileName = $"download.{extension}";
}
using (var fileStream = File.OpenWrite(fileName))
{
await response.Content.CopyToAsync(fileStream);
}
}
```
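The two helper functions used above are not defined in the original answer; here is a rough, hypothetical sketch of what they might look like (the names, signatures and parsing details are assumptions, and a real implementation would want more robust handling):

```
// Requires: using System.Linq; using System.Net.Http.Headers;
static string GetFileNameFromContentDisposition(HttpResponseHeaders headers)
{
    // e.g. Content-Disposition: attachment; filename="something.pdf"
    var raw = headers.GetValues("Content-Disposition").First();
    return new System.Net.Mime.ContentDisposition(raw).FileName?.Trim('"');
}

static string GetExtensionFromContentType(HttpResponseHeaders headers)
{
    // naive mapping: "application/pdf" -> "pdf"; a lookup table would be more reliable
    var contentType = headers.GetValues("Content-Type").First();
    return contentType.Split('/').Last().Split(';').First().Trim();
}
```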
|
How to style text in action overflow menu on device with Android>4.0 and hardware button?
I would like a custom background & text color for my overflow menu. It works fine on devices without a hardware menu button, but I'm not able to style it on devices with a hardware menu button. See the screenshots:
![Screnshot from Galaxy Nexus](https://i.stack.imgur.com/s0mHR.png)
![Screenshot from Galaxy S3](https://i.stack.imgur.com/jDABX.png)
I'm using these styles:
```
<style name="Theme.settleup" parent="@style/Theme.Sherlock.Light.DarkActionBar">
<item name="android:popupMenuStyle">@style/settleup_PopupMenu</item>
<item name="android:actionMenuTextColor">@android:color/white</item>
</style>
<style name="settleup_PopupMenu" parent="@style/Widget.Sherlock.ListPopupWindow">
<item name="android:popupBackground">@drawable/menu_dropdown_panel_settleup</item>
</style>
```
| I've looked into the problem, and there seems to be no way of changing `textColor` for the options menu on Android >= 4.0 devices with a HW menu key. Not even changing the primary, secondary or tertiary colors affected the text color.
The only "solution" that comes to my mind now is some pretty nasty use of the Java reflection API.
However, you can set the background of the bottom options menu on Android >= 4.0 HW key devices separately from the background of the non-HW key dropdown menu.
You already have the styling for the non-HW key dropdown menu. It is defined by `android:popupMenuStyle` (and `popupMenuStyle`):
```
<style name="Theme.settleup" parent="@style/Theme.Sherlock.Light.DarkActionBar">
<item name="android:popupMenuStyle">@style/settleup_PopupMenu</item>
</style>
```
But the background of the Android >= 4.0 HW key bottom options menu is defined with `android:panelBackground` item in the theme:
```
<style name="Theme.settleup" parent="@style/Theme.Sherlock.Light.DarkActionBar">
<item name="android:panelBackground">@drawable/mybackground</item>
</style>
```
Since it seems to be a problem to change the text color of the bottom options menu on Android >= 4.0 HW key devices, I would suggest leaving that part with its default styling.
|
Kotlin general setter function
I am new to Kotlin. I wonder if the following is possible.
I wish to create a function that will change the value of the properties of the object and return the object itself. The main benefit is that I can chain this setter.
```
class Person {
var name:String? = null
var age:Int? = null
fun setter(propName:String, value:Any): Person{
return this.apply {
try {
// the line below caused error
this[propName] = value
} catch(e:Exception){
println(e.printStackTrace())
}
}
}
}
//usage
var person = Person(null,null)
person
.setter(name, "Baby")
.setter(age, 20)
```
But I get error "unknown references"
This question is marked as a duplicate; however, the possible duplicate specifically wants to change the "name" property, while I wish to change any property that is passed to the function on the object. I can't seem to connect the dots between the two questions. @Moira, kindly provide an answer that explains it. Thank you.
| Why not just simplify your answer to
```
// needs the kotlin-reflect dependency plus:
//   import kotlin.reflect.KMutableProperty
//   import kotlin.reflect.full.memberProperties
fun setter(propName: String, value: Any): Person {
    val property = this::class.memberProperties.find { it.name == propName }
    when (property) {
        is KMutableProperty<*> ->
            property.setter.call(this, value)
        null ->
            println("No such property: $propName")      // no such property
        else ->
            println("Property $propName is read-only")  // immutable property
    }
    return this
}
```
Java reflection isn't needed, its only effect is to stop non-trivial properties from being supported.
Also, if you call it [`operator fun set`](https://kotlinlang.org/docs/reference/operator-overloading.html#indexed) instead of `fun setter`, the
```
this[propName] = value
```
syntax can be used to call it.
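For illustration (my own sketch, not part of the original answer), declaring it as an operator could look like this (it also needs the kotlin-reflect dependency):

```
import kotlin.reflect.KMutableProperty
import kotlin.reflect.full.memberProperties

class Person {
    var name: String? = null
    var age: Int? = null

    // indexed-assignment operator, enables: person["name"] = "Baby"
    operator fun set(propName: String, value: Any) {
        val property = this::class.memberProperties.find { it.name == propName }
        if (property is KMutableProperty<*>) property.setter.call(this, value)
    }
}

fun main() {
    val person = Person()
    person["name"] = "Baby"
    person["age"] = 20
    println("${person.name} ${person.age}")  // prints: Baby 20
}
```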
|
Remove signing from an assembly
I have a project open in Visual Studio (it happens to be Enyim.Caching). This assembly wants to be delay-signed. In fact, it desires so strongly to be delay-signed that I am unable to force Visual Studio to compile it *without* delay signing.
1. I have unchecked "Delay sign only" and "Sign the assembly" on the Visual Studio project properties box, and rebuilt. The assembly is still marked for delay sign (as shown by `sn.exe -v`).
2. I have unloaded the project and verified that the signing is set to false. When reloading the project, the check boxes for "Sign" and "Delay Sign" are checked.
3. I have verified that no attributes are present in the AssemblyInfo (or elsewhere) that would cause this.
4. I have searched the Internet for a solution, but have found none.
How can I do this?
| **In this case, the problem is a project "common properties" reference.**
Inside of the project .csproj file is this innocuous little line:
>
> <Import Project="..\build\CommonProperties.targets" />
>
>
>
Unfortunately, the file (`CommonProperties.targets`) instructs VS to re-write the properties, but it does not provide any clear indication in the user interface that this is taking place.
The solution is to go into the `CommonProperties.targets` file and delete the following lines:
```
<!-- delay sign the assembly if the PrivateKeyPath property is not specified -->
<PropertyGroup Condition=" '$(PrivateKeyPath)' == '' And '$(PrivateKeyName)' == ''">
<AssemblyOriginatorKeyFile>..\public_key.snk</AssemblyOriginatorKeyFile>
<DelaySign>true</DelaySign>
</PropertyGroup>
<!-- sign the assembly using the specified key file containing both the private and public keys -->
<PropertyGroup Condition=" '$(PrivateKeyPath)' != '' ">
<AssemblyOriginatorKeyFile>$(PrivateKeyPath)</AssemblyOriginatorKeyFile>
<DelaySign>false</DelaySign>
</PropertyGroup>
<!-- sign the assembly using the specified key container containing both the private and public keys -->
<PropertyGroup Condition=" '$(PrivateKeyName)' != '' ">
<AssemblyOriginatorKeyFile></AssemblyOriginatorKeyFile>
<DelaySign>false</DelaySign>
</PropertyGroup>
```
Replace those lines with the following:
```
<PropertyGroup>
<DelaySign>false</DelaySign>
<SignAssembly>false</SignAssembly>
</PropertyGroup>
```
|
"TypeError: 'type' object is not subscriptable" in a function signature
Why am I receiving this error when I run this code?
```
Traceback (most recent call last):
File "main.py", line 13, in <module>
def twoSum(self, nums: list[int], target: int) -> list[int]:
TypeError: 'type' object is not subscriptable
```
```
nums = [4,5,6,7,8,9]
target = 13
def twoSum(self, nums: list[int], target: int) -> list[int]:
dictionary = {}
answer = []
for i in range(len(nums)):
secondNumber = target-nums[i]
if(secondNumber in dictionary.keys()):
secondIndex = nums.index(secondNumber)
if(i != secondIndex):
return sorted([i, secondIndex])
dictionary.update({nums[i]: i})
print(twoSum(nums, target))
```
| **The following answer only applies to Python < 3.9**
The expression `list[int]` is attempting to subscript the object [`list`](https://docs.python.org/3/library/functions.html#func-list), which is a class. Class objects are of the type of their metaclass, which is [`type`](https://docs.python.org/3/library/functions.html#type) in this case. Since `type` does not define a [`__getitem__`](https://docs.python.org/3/reference/datamodel.html#object.__getitem__) method, you can't do `list[...]`.
To do this correctly, you need to import [`typing.List`](https://docs.python.org/3/library/typing.html#typing.List) and use that instead of the built-in `list` in your type hints:
```
from typing import List
...
def twoSum(self, nums: List[int], target: int) -> List[int]:
```
If you want to avoid the extra import, you can simplify the type hints to exclude generics:
```
def twoSum(self, nums: list, target: int) -> list:
```
Alternatively, you can get rid of type hinting completely:
```
def twoSum(self, nums, target):
```
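Note (an addition for newer Python versions, consistent with the restriction stated at the top of this answer): on Python 3.9 and later, the built-in `list` supports subscription directly, so the original signature works as-is:

```
# Python >= 3.9 only
def twoSum(self, nums: list[int], target: int) -> list[int]:
    ...
```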
|
Android Studio publish local library
I've recently switched to Android Studio from Eclipse, and its better for the most part.
But now I'm at the point of wanting to create libraries to be reused later. I know about modules, but don't want to use them, as it seems to copy a duplicate in each project (I'd rather have a reference to a lib as in Eclipse).
So I've turned my attention to a Maven/Gradle solution. Ideally I'd like to be able to export my lib to a local repo and/or maven repo, and reference it through that via gradle. I've been looking for quite a while now, and each answer is different, and none of them worked for me. [This](http://blog.blundell-apps.com/locally-release-an-android-library-for-jcenter-or-maven-central-inclusion/) is the closest I've found to what I'm looking for, but there is an error in javadoc creation.
These sites ([link](https://www.virag.si/2015/01/publishing-gradle-android-library-to-jcenter/) and [link](http://zserge.com/blog/gradle-maven-publish.html)) require a Bintray or Sonatype account to publish the libraries globally. Publishing the library is an end goal for me so I'd like to keep that option open if possible, but for now I just want my libs to be private.
It would be pretty great if there was a plugin that I could just specify a maven repo to export to, but I haven't found anything that looks promising yet.
So my question is: Is there a recommended "simple" way to export a library in Android Studio, which I can reference then through Gradle?
Bonus Marks: Would I be able to obfuscate libraries with proguard once published? Currently setting `minifyEnabled=true` on my lib results in the `.aar` file not being generated when building.
| You can use the [maven-publish](https://docs.gradle.org/current/userguide/publishing_maven.html) plugin to publish your artifacts to any repository you have access to. [I am using it combined with Amazon S3](https://github.com/sm4/s3repo-demo). The repository can be your local Maven repo (the .m2 directory), a local Artifactory, Archiva or any other Maven repository server, or a hosted solution. S3 works well, and I am sure there are hosted Artifactory offerings etc. out there.
If you are using an external repository, add it to the repositories section; just using the plugin should already give you the option of publishing to local Maven. The Gradle task is called *publishToMavenLocal*. You need to define your *publication*. A publication is simply a set of artifacts created by your build; in your case it will probably be a jar, but it can be anything. Here is a short example of how I use it with S3.
```
apply plugin: 'maven'
apply plugin: 'java'
apply plugin: 'maven-publish'
repositories {
maven {
name "s3Repo"
url "your S3 URL"
credentials(AwsCredentials) {
accessKey awsAccessKey
secretKey awsSecretKey
}
}
maven {
... any other repo here
}
}
publishing {
publications {
maven(MavenPublication) {
artifactId 'artifact that you want'
from components.java
artifact distZip {
... example artifact created from zip distribution
}
}
}
repositories {
add "s3Repo"
}
}
```
The *publishing -> repositories -> add* section is where you specify all the repos to which you want to upload your archives.
If you want to use the library from your local Maven repo, simply add:
```
repositories {
...
mavenLocal()
...
}
```
Then refer to your lib like any other dependency, with group id, artifact id and version.
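For example (a sketch with hypothetical coordinates; use whatever group, artifact and version your publication actually defines):

```
dependencies {
    compile 'com.example:mylibrary:1.0.0'
}
```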
---
As far as I know, obfuscated libraries are the same as non-obfuscated ones, just with changed identifier names. If the requirement is to publish an obfuscated library, that should be fine; just don't obfuscate the API interfaces and classes (the entry points). If you publish a non-obfuscated library, I am not sure whether it gets obfuscated in the final archive.
|
How to point AWS API gateway stage to specific lambda function alias?
So, as per the AWS documentation:
>
> Instead of using Amazon Resource Names (ARNs) for Lambda function in
> event source mappings, you can use an alias ARN. This approach means
> that you don't need to update your event source mappings when you
> promote a new version or roll back to a previous version.
>
>
>
I have an AWS Lambda function `pets` and I have created 2 aliases, `dev` and `prod`, pointing to different versions of the Lambda function.
Then in API Gateway I am using this Lambda function in the `Integration Request`. I have two API stages, `development` and `production`. I want the `development` API stage to point to the `dev` Lambda alias ARN and `production` to point to the `prod` alias.
When I select Lambda Function as the `Integration Type`, the drop-down list shows whatever display name I gave earlier while creating the Lambda function.[![enter image description here](https://i.stack.imgur.com/mW5MU.png)](https://i.stack.imgur.com/mW5MU.png)
I cannot find any stage-specific configuration for the Lambda function. Based on my research on SO, I have to follow these steps to deploy the `development` stage pointing to the `dev` alias:
1> Go to `Integration Request`
2> Select Lambda function and change it to `pets:dev`
3> Deploy to `development` stage
Follow the same steps for `production` by changing Lambda function to `pets:prod` before deployment.
This is going to be a maintenance nightmare as our Lambda functions grow. Is there any easier way to do it?
| I found it
<https://aws.amazon.com/blogs/compute/using-api-gateway-stage-variables-to-manage-lambda-functions/>
Here are the steps I followed:
1. After creating the Lambda function, create 2 aliases for it: `dev` pointing to the `$LATEST` version and `prod` pointing to the particular version you want to use in prod
2. Then go to the API Gateway console -> Integration Request -> Lambda function, and type
`pets:${stageVariables.lambdaAlias}` where `pets` is my function name and `lambdaAlias` is the stage variable that we will have to add in each API stage
3. Deploy your API to new API stages `development` and `production`
4. In each API stage, add the stage variable `lambdaAlias` with value `dev` and `prod` respectively. The stage variable value must match the Lambda function's alias
[![enter image description here](https://i.stack.imgur.com/81HKJ.png)](https://i.stack.imgur.com/81HKJ.png)
Now we don't have to keep changing the Lambda alias name for each API deployment.
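As a side note (my own addition, not from the original answer), the aliases from step 1 can also be created with the AWS CLI, roughly like this (adjust the function versions to your setup):

```
aws lambda create-alias --function-name pets --name dev --function-version '$LATEST'
aws lambda create-alias --function-name pets --name prod --function-version 3
```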
|
Why is REMOTE\_ADDR only sometimes available as an Apache environment variable?
To avoid having to parse `X-Forwarded-For` in Varnish, I'm trying to set a header on the SSL terminator (currently Apache) that stores the direct client IP.
On our development machine, this works:
```
RequestHeader set X-Foo %{REMOTE_ADDR}e
```
However, in staging it doesn't. Specifically, the header is empty, as illustrated by both `varnishlog`:
```
13 TxHeader b X-Foo: (null)
```
(On the development machine, this shows the IP address as expected.)
Similarly, logging `REMOTE_ADDR` shows that it only appears to be populated on the dev machine:
```
# Config
LogFormat "%{X-Forwarded-For}i %{REMOTE_ADDR}e" combined
CustomLog "/var/log/httpd/access_log" combined
# Log file, staging
<my ip> -
# Log file, development
<my ip> <my ip>
```
Since the dev machine is, well, a dev machine, it is different in a number of ways; however, I can't track down which difference is causing this. The versions of Apache are the same (2.2.22), and I don't see anything relevant in any of the standard config files or `/etc/sysconfig/httpd`. And the rest of the system is reasonably similar, since they're built off the same CentOS 5 base image.
I can't even tell from the Apache documentation whether `REMOTE_ADDR` is expected to exist or not as [an environment variable](https://httpd.apache.org/docs/2.2/env.html), but it clearly works on one machine, whether by fluke or design, and the inconsistency is driving me mad.
| Looking through the sources, `REMOTE_ADDR` is set only by these handlers that come with the server itself: mod\_proxy\_scgi, mod\_ext\_filter, mod\_include, mod\_isapi, mod\_cgid, and mod\_cgi (only they call `ap_add_common_vars`), so somehow one of these handlers is getting called before mod\_headers or mod\_log\_config on your dev box but not on the staging box. (You may have other handlers.)
In less technical speak, from the doc you reference it says the same thing:
>
> In addition to all environment variables set within the Apache
> configuration and passed from the shell, **CGI scripts and SSI pages** (emphasis mine) are
> provided with a set of environment variables containing
> meta-information about the request as required by the CGI
> specification.
>
>
>
|
Scanf allow space between words
I am trying to read in some data (an id followed by a first name and last name) from a text file, and I cannot seem to get scanf to allow the space.
input may look like:
123456 FirstName LastName
`scanf("%d%s", &id, fullName)` doesn't work because it cuts off at the space between the first and last name. I would like to have 'first last' in one string (preferably without concatenation, because there are instances where the last name is not included).
| You can use the set notation to specify a set of allowable characters in your name, and explicitly allow spaces:
```
scanf("%d %[ a-zA-Z]", &id, fullName);
```
Here, `[]` specifies a set of characters to allow (see [man 3 scanf](http://linux.die.net/man/3/scanf)). Within it there is a space, `a-z`, which means "all lower-case characters", and `A-Z`, which means "all upper-case characters".
All that said, you should specify a maximum width, to avoid overflowing your buffer. For example:
```
char fullName[64] = {0}; // allocate 64-byte array and zero it
scanf("%d %63[ a-zA-Z]", &id, fullName);
```
Note the use of 63 instead of 64; the man page notes that it will read in the number of characters specified, and *then* a NULL-terminator:
>
> String input conversions store a null terminator ('\0') to mark the end of the input; the maximum field width does not include this terminator.
>
>
>
|
How to Running celeryd as a daemon in ubuntu?
I am trying to install an init.d script to run celery for scheduling tasks. When I try to start it with `sudo /etc/init.d/celeryd start`, it throws the error `"User does not exist: 'celery'"`.
my celery configuration file (`/etc/default/celeryd`) contains these:
```
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
```
I know that these are wrong; that is why it throws the error.
The [documentation](http://ask.github.io/celery/cookbook/daemonizing.html) just says this:
```
CELERYD_USER
User to run celeryd as. Default is current user.
```
nothing more about it.
Any help will be appreciated.
| I am adding a proper answer in order to be clearly visible:
Workers are Unix processes that will run the various celery tasks. As you can see in the documentation, CELERYD\_USER and CELERYD\_GROUP determine the user and group these workers will run as in your Unix environment.
So, what happened initially in your case is that celery tried to start the worker with a user named "celery" which did not exist in your system. When you commented out these two options, then celery started the workers with the user that issued the command `sudo /etc/init.d/celeryd start` which in this case is the root (administrator) user (default is the current user).
However, it is recommended to run the workers as unprivileged users and not as root, for obvious reasons. So I recommend actually adding the celery user and group using the small tutorial found here <http://www.cyberciti.biz/faq/unix-create-user-account/> and uncommenting again the
```
CELERYD_USER="celery"
CELERYD_GROUP="celery"
```
options.
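For example, on most Linux distributions something along these lines should create the unprivileged user and group (a sketch; adjust the shell, home directory and options to your system):

```
sudo groupadd celery
sudo useradd -g celery -s /bin/false -M celery
```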
|
Very generic argmax function in C++ wanted
I'm a spoiled Python programmer who is used to calculating the [argmax](https://en.wikipedia.org/wiki/Argmax) of a `collection` with respect to some `function` with
```
max(collection, key=function)
```
For example:
```
l = [1,43,10,17]
a = max(l, key=lambda x: -1 * abs(42 - x))
```
`a` then contains 43, the number closest to 42.
Is it possible to write a C++ function which takes any "iterable" and any function and returns the argmax like above? I guess this would involve template parameters, the `auto` keyword, and [range-based iteration](http://www.cprogramming.com/c++11/c++11-ranged-for-loop.html), but I was not able to piece it together.
| @leemes posted many solutions. All are correct, except that none attempts to imitate the Python version in your example. Here is my attempt to imitate that:
A convenient generic argmax function, just like the Python version:
```
#include <algorithm>   // std::max_element
#include <iterator>    // std::begin / std::end
#include <stdexcept>   // std::invalid_argument

template<typename Container, typename Fn>
auto max(Container const & c, Fn && key) -> decltype(*std::begin(c))
{
if ( std::begin(c) == std::end(c) )
throw std::invalid_argument("empty container is not allowed.");
typedef decltype(*std::begin(c)) V;
auto cmp = [&](V a, V b){ return key(a) < key(b); };
return *std::max_element(std::begin(c), std::end(c), cmp);
}
```
And use it as:
```
std::vector<int> l = {1,43,10,17};
auto a = max(l, [](int x) { return -1 * std::abs(42-x); });

int l2[] = {1,43,10,17}; // works with a raw array also!
auto a2 = max(l2, [](int x) { return -1 * std::abs(42-x); });
```
**Note:** Unlike the other solution, this `max()` returns the element itself, not the iterator to the element!
Also note that this solution works for user-defined containers as well:
```
namespace test
{
template<size_t N>
struct intcollection
{
int _data[N];
int const * begin() const { return _data; }
int const * end() const { return _data + N; }
};
}
test::intcollection<4> c{{1,43,10,17}};
auto r = max(c, [](int x) { return -1 * std::abs(42-x); });
```
[See the live demo](http://stacked-crooked.com/view?id=8c5b827574be4247f3eb28e944462812).
|
Make LazyColumn items be as large as the largest item in the list
```
@Composable
fun PreviewLayout() {
fun getRandomString(length: Int): String {
val allowedChars = ('A'..'Z') + ('a'..'z') + ('0'..'9')
return (1..length)
.map { allowedChars.random() }
.joinToString("")
}
val horizontalScrollState = rememberScrollState()
LazyColumn(
modifier = Modifier
.background(Color.Blue)
.fillMaxHeight()
.wrapContentWidth()
.horizontalScroll(horizontalScrollState)
) {
items(5) { index ->
Text(
text = getRandomString((index + 1) * 4).uppercase(),
color = Color.Black,
fontSize = 16.sp,
modifier = Modifier
.padding(8.dp)
.background(Color.Yellow)
)
}
}
}
```
**Preview of the layout:**
[![Compose Layout Preview](https://i.stack.imgur.com/aipOd.png)](https://i.stack.imgur.com/aipOd.png)
I'd like the items' width to be the same as the largest item in the list.
Notice the `.horizontalScroll(horizontalScrollState)`, this is to allow horizontal scrolling.
**What I'd like:**
[![Desirable Layout Preview](https://i.stack.imgur.com/BdURO.png)](https://i.stack.imgur.com/BdURO.png)
I need to use a `LazyColumn` but if I could use a `Column` I'd write it this way:
```
Column(
modifier = Modifier
.background(Color.Blue)
.horizontalScroll(horizontalScrollState)
.fillMaxHeight()
.width(IntrinsicSize.Min)
) {
repeat(5) { index ->
Text(
text = getRandomString((index + 1) * 4).uppercase(),
color = Color.Black,
fontSize = 16.sp,
modifier = Modifier
.padding(8.dp)
.fillMaxWidth()
.background(Color.Yellow)
)
}
}
```
| 1. You need to calculate the width of the widest element separately. You can do it by placing an invisible copy of your cell with the widest content in a `Box` along with the `LazyColumn`.
In your sample it's easy - just take the longest string. If in a real project you can't tell which content is going to be the widest, you have two options:
1.1. Place all of them one on top of each other. You can do this only if you have a limited number of cells.
1.2. Otherwise you have to make some approximation and filter down to a short list of the ones you expect to be the widest.
2. Because of `horizontalScroll`, the `maxWidth` constraint is infinity, so you have to pass the calculated width manually. You can get it with `onSizeChanged`:
```
@Composable
fun TestScreen(
) {
fun getRandomString(length: Int): String {
val allowedChars = ('A'..'Z') + ('a'..'z') + ('0'..'9')
return (1..length)
.map { allowedChars.random() }
.joinToString("")
}
val items = remember {
List(30) { index ->
getRandomString((index + 1) * 4).uppercase()
}
}
val maxLengthItem = remember(items) {
items.maxByOrNull { it.length }
}
val (maxLengthItemWidthDp, setMaxLengthItemWidthDp) = remember {
mutableStateOf<Dp?>(null)
}
val horizontalScrollState = rememberScrollState()
Box(
Modifier
.background(Color.Blue)
.horizontalScroll(horizontalScrollState)
) {
LazyColumn(
Modifier.fillMaxWidth()
) {
items(items) { item ->
Cell(
item,
modifier = if (maxLengthItemWidthDp != null) {
Modifier.width(maxLengthItemWidthDp)
} else {
Modifier
}
)
}
}
if (maxLengthItem != null) {
val density = LocalDensity.current
Cell(
maxLengthItem,
modifier = Modifier
.requiredWidthIn(max = Dp.Infinity)
.onSizeChanged {
setMaxLengthItemWidthDp(with(density) { it.width.toDp() })
}
.alpha(0f)
)
}
}
}
@Composable
fun Cell(
item: String,
modifier: Modifier,
) {
Text(
text = item,
color = Color.Black,
fontSize = 16.sp,
modifier = modifier
.padding(8.dp)
.background(Color.Yellow)
)
}
```
Result:
![](https://i.stack.imgur.com/Yp7QB.gif)
|
How to program a Yes/No answer as buttons in an HTML form, without using radio buttons or checkboxes?
How do you program the following in a webform:
Are you male or female?
[![enter image description here](https://i.stack.imgur.com/fptpW.png)](https://i.stack.imgur.com/fptpW.png)
| You can simulate this using the labels of radio buttons.
```
input[type="radio"] {
display: none;
}
label {
padding: 10px 20px;
background-color: orange;
border: thin solid darkorange;
border-radius: 10px;
margin:5px;
display: inline-block;
}
input[type="radio"]:checked + label {
background-color: darkblue;
cursor: default;
color: #E6E6E6;
}
```
```
<input id="toggle-on" name="toggle" type="radio">
<label for="toggle-on">On</label>
<input id="toggle-off" name="toggle" type="radio">
<label for="toggle-off">Off</label>
```
|
How to translate Symfony2 Exception
I'm using the YAML format to translate my web app, but I have one problem.
What I would like to do:
```
#exception.en.yml
exception.bad: 'Bad credentials'
```
What I know is possible to do:
```
#exception.en.yml
'Bad credentials': 'Bad credentials'
```
Is this the only method to translate the exception?
| Simply put the message in the translator and remember to add the `trans` filter on the error message dump in the Twig template.
Here is an XLIFF example:
```
messages.en.xlf
<trans-unit id="1">
<source>User account is disabled.</source>
<target>Account disabled or waiting for confirm</target>
</trans-unit>
<trans-unit id="2">
<source>Bad credentials</source>
<target>Wrong username or password</target>
</trans-unit>
```
and in the template
```
{# src/Acme/SecurityBundle/Resources/views/Security/login.html.twig #}
{% if error %}
<div>{{ error.message|trans }}</div>
{% endif %}
<form action="{{ path('login_check') }}" method="post">
<label for="username">Username:</label>
<input type="text" id="username" name="_username" value="{{ last_username }}" />
<label for="password">Password:</label>
<input type="password" id="password" name="_password" />
<button type="submit">login</button>
</form>
```
Check [this doc](http://symfony.com/doc/2.0/cookbook/security/entity_provider.html#forbid-non-active-users) for excluding non-active users.
Hope this helps.
|
Debug into Maven Dependency Source w/IntelliJ
I'm debugging a Maven project in IntelliJ and I'm trying to figure out how to step into the source of one of my dependencies that's specified in my pom.xml. Specifically, my project has a dependency on Crawler4J. I'm seeing some weird behaviour from Parser.parse(), and I want to step through that method. I tried setting up a local cloned Git repo with the source and attaching it via the Sources option under Project Structure, but I still can't step into the compiled Crawler4J methods. As a long-time C# developer (and relative Java noob) what I would have ideally liked is something like .NET Reflector's functionality for decompiling on the fly while debugging, but a way to attach the source would suffice.
| I just set up the same dependency and I had no problem downloading the source code.
![enter image description here](https://i.stack.imgur.com/lFxBs.png)
Now I created a simple Main class with a Parser. I do `Ctrl` + Left-click and it will bring me to the Parser class.
![enter image description here](https://i.stack.imgur.com/Yl9jJ.png)
As you can see it has a link in the upper right corner saying `Download Sources`.
![enter image description here](https://i.stack.imgur.com/YJSwI.png)
After pressing that link the sources are downloaded and immediately available.
![enter image description here](https://i.stack.imgur.com/tDS1Z.png)
|
Edit in place vs. separate edit page / modal?
I have some data that is broken up into sections, much like the Resume feature of StackOverflow Careers (it's not resume data, though), that is editable/create-able via a jQuery web app. It's a bit more hierarchical (jobs can have sub-jobs, etc.) so depending on what method of CRUD I take, it means differing amounts of work. I don't mind spending the time to do it right, but I don't want to spend a lot of time making something fancy that isn't an optimal user experience.
Has there been any research done into the different styles of "editing" this kind of segmented, hierarchical text data:
1. Edit in place (e.g. you click on a form element such as job title, it turns editable, then you click "ok" and it saves)
2. Edit button that takes you to a new screen (like StackOverflow currently)
3. Edit button that pops up a modal form
4. All fields are open and editable, single save button (like StackOverflow Careers)
Is there general consensus on when those different forms should be used to provide the best user experience?
| It depends. If your user base is web savvy, I would recommend an edit in place approach because of the natural editing flow it provides.
---
**Edit in place**
When you edit a section of a heirarchy, you edit inline with the rest of the information. This allows you to check how your edits apply to the other information *as you're making them* (rather than having to move back and forth between screens).
In terms of usability, a scenario where grouped items are editable all at once is nice as it saves multiple clicks. For example, if a job has the following data items:
```
Title
Description
Positions
```
It's good to provide a mechanism to edit all at once along with the edit each item in place behavior.
Inline editing also protects the other sections of the hierarchy from being accidentally updated.
---
**Modal Edit**
This editing method introduces a barrier between the hierarchy as a whole and the section you're editing (i.e. the relationship between the information you're entering and its place in the hierarchy isn't immediately apparent from looking at the UI).
---
**New Screen**
As with the modal edit, the relationship of the edited information to the entire hierarchy is lost. However it is a very basic setup that most of your user base will understand immediately. It also protects the entire document from accidental updates.
---
**All Fields Open**
This provides the benefit of keeping the edited information in context (as with edit-in-place) and is very simple. There's no learning curve that requires a user to learn they have to click an element to edit it.
However, as someone that has more than one form ruined by my curious kids, I don't like how it exposes the entire hierarchy to unintended updates.
|
How do I stop eclipse from removing whitespace when formatting
I am using Eclipse to do some LWJGL programming, and when I create arrays for holding the vertices, I tend to use a lot of whitespace to show which group of floats holds a vertex.
```
//Array for holding the vertices of a hexagon
float [ ] Vertices = {
// Vertex 0
-0.5f , 1.0f , 0.0f ,
// Vertex 1
-1.0f , 0.0f , 0.0f ,
// Vertex 2
-0.5f , -1.0f , 0.0f ,
// Vertex 3
0.5f , -1.0f , 0.0f ,
// Vertex 4
1.0f , 0.0f , 0.0f ,
// Vertex 5
0.5f , 1.0f , 0.0f ,
// Center Vertex ( 6 )
0.0f , 0.0f , 0.0f
};
```
I should also note that I am using element array buffers so that is why there are only 7 vertices.
When I use the Format option in the Source tab of the menu bar, it strips all of this whitespace. How do I stop it from doing that? I cannot find an option in the preferences editor.
I am using Windows 8.1 and Eclipse Java Neon. And no, I won't use a different version of Eclipse as it took me about 5 hours customizing eclipse initially so... If there is a newer version that allows this to be done, I will not be changing the version. It would also take a while to install and stuff so... yeah.
| Disable the Eclipse formatter for the part of the code where you want it ignored, and re-enable it afterwards, in this way:
```
//Array for holding the vertices of a hexagon
// @formatter:off
float [ ] Vertices = {
// Vertex 0
-0.5f , 1.0f , 0.0f ,
// Vertex 1
-1.0f , 0.0f , 0.0f ,
// Vertex 2
-0.5f , -1.0f , 0.0f ,
// Vertex 3
0.5f , -1.0f , 0.0f ,
// Vertex 4
1.0f , 0.0f , 0.0f ,
// Vertex 5
0.5f , 1.0f , 0.0f ,
// Center Vertex ( 6 )
0.0f , 0.0f , 0.0f
};
// @formatter:on
```
You must enable this feature if it is not already enabled:
Preferences -> Java -> Code Style -> Formatter -> Edit -> Off/On Tags
|
lambda parameter with optional return value
I'm trying to write a function like `std::for_each`, that in addition to the normal usage, can also take a `std::function<bool (param)>`. A false return value means that I want to break out of the loop. The code below is what I've gotten so far.
The second call to `a.visit([&](int) -> void)` does not compile when evaluating !visitor(i). Is it possible to make this work or am I barking up the wrong tree?
I'm using MSVC 2010 but want the code to be generally C++11 compatible.
```
#include <list>
#include <string>
#include <iostream>
struct A
{
std::list<int> _lst;
template<typename _F>
void visit(_F visitor)
{
for(std::list<int>::const_iterator it = _lst.begin(), end = _lst.end() ; it != end ; it++) {
int i = *it;
if (std::is_void<decltype(visitor(i))>::value) {
visitor(i);
} else {
if (!visitor(i)) { // <----- error C2171: '!' : illegal on operands of type 'void'
break;
}
}
}
}
};
int main(int argc, char* argv[])
{
A a;
// populate a
for (int i = 0 ; i < 10 ; i++) {
a._lst.push_back(i);
}
a.visit([](int i) -> bool {
std::cout << i << std::endl;
return i < 5;
});
a.visit([](int i) {
std::cout << i << std::endl;
});
}
```
| Here's how I would implement `for_almost_each`; I'm `using namespace std` plus type aliases for readability purposes. (The reason your version doesn't compile is that both branches of an ordinary `if` are fully type-checked even when the condition is a compile-time constant, so `!visitor(i)` is ill-formed whenever the visitor returns `void`; tag dispatch, used below, only ever instantiates the branch that applies.)
```
#include <algorithm>
#include <iterator>
#include <functional>
using namespace std;
template<class Iter, class Func>
Iter
for_almost_each_impl(Iter begin, Iter end, Func func, std::true_type)
{
for (auto i = begin; i!=end; ++i)
if (!func(*i))
return i;
return end;
}
template<class Iter, class Func>
Iter
for_almost_each_impl(Iter begin, Iter end, Func func, std::false_type)
{
for_each(begin, end, func);
return end;
}
template<class Iter, class Func>
Iter for_almost_each(Iter begin, Iter end, Func func)
{
using Val = typename iterator_traits<Iter>::value_type;
using Res = typename result_of<Func(Val)>::type;
return for_almost_each_impl(begin, end,
func,
is_convertible<Res, bool>{} );
}
```
I used `is_convertible`, as it seems to make more sense than `is_same`.
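For completeness, here's a quick usage sketch against the question's setup; `A::visit` itself could simply forward to `for_almost_each(_lst.begin(), _lst.end(), visitor)`. This assumes the includes and the `struct A` definition from the question, plus the `for_almost_each` code above:
```
int main()
{
    A a;
    for (int i = 0; i < 10; i++) a._lst.push_back(i);

    // bool-returning visitor: returning false stops the iteration early
    for_almost_each(a._lst.begin(), a._lst.end(), [](int i) -> bool {
        std::cout << i << std::endl;
        return i < 5;
    });

    // void-returning visitor: dispatches to the std::for_each overload
    for_almost_each(a._lst.begin(), a._lst.end(), [](int i) {
        std::cout << i << std::endl;
    });
}
```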
|
Typescript [number, number] vs number[]
Can someone help me understand why I get a type error with the following code:
```
function sumOfTwoNumbersInArray(a: [number, number]) {
return a[0] + a[1];
}
sumOfTwoNumbersInArray([1, 2]); // Works
let foo = [1, 2];
sumOfTwoNumbersInArray(foo); // Error
```
The error is:
>
> Argument of type 'number[]' is not assignable to parameter of type '[number, number]'.
>
>
> Type 'number[]' is missing the following properties from type '[number, number]': 0, 1
>
>
>
| The parameter `a` in `sumOfTwoNumbersInArray` is a tuple, which is not the same type as `number[]`.
When you write `let foo = [1, 2]`, TypeScript infers the general type `number[]` (an array of any length), not the tuple type `[number, number]`, which is why the call is rejected.
The following works because everything involved is a plain array:
```
function sumOfTwoNumbersInArray(a: number[]) { // parameter declared as array
return a[0] + a[1];
}
let foo = [1, 2]; // initialization defaults to array
sumOfTwoNumbersInArray(foo); // no error.
```
As Rafael mentioned, explicitly typing `foo` as a tuple works fine as well.
```
function sumOfTwoNumbersInArray(a: [number, number]) { // parameter declared as a tuple
return a[0] + a[1];
}
let foo: [number, number] = [1, 2]; // variable explicitly typed as a tuple
sumOfTwoNumbersInArray(foo); // no error.
```
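Another option (a sketch of the same idea) is a type assertion at the call site, telling the compiler to treat the literal as a tuple:
```
sumOfTwoNumbersInArray([1, 2] as [number, number]); // no error
let bar = [1, 2] as [number, number]; // bar is typed as a tuple
sumOfTwoNumbersInArray(bar); // no error
```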
|
Purpose of "import std;" in C++
I have seen following small piece of code on [cppdepend](http://cppdepend.com/blog/?p=228) site.
```
import std; // Module import directive.
int main()
{
std::cout<<"Hello World\n";
}
```
So, what is the purpose of `import std;` in C++? How do I use `import std;` instead of `using namespace std;` in C++?
I tried to compile the program with the **G++** compiler, but I got an error.
|
>
> So, what is the purpose of `import std;` in C++?
>
>
>
Its purpose is to make names from the `std` module available. Modules are a language feature that has been proposed for inclusion in a future C++ standard.
>
> How do I use `import std;` instead of `using namespace std;` in C++?
>
>
>
They are not exclusive, so you cannot use one *instead* of the other. Namespaces are a separate language feature from modules. You can use both, either, or neither.
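Purely as an illustration of how the two features are orthogonal (this is a sketch that assumes a hypothetical compiler with working module support, which is not the case today), they could be combined like this:
```
import std;           // module import: makes the standard library's declarations visible
using namespace std;  // using-directive: allows unqualified use of names from namespace std
int main()
{
    cout << "Hello World\n";  // unqualified thanks to the using-directive
}
```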
>
> I tried to compile program in G++ compiler, but I got an error.
>
>
>
Considering that no standard version including modules has been released yet, and it isn't even decided that modules will definitely be part of a future standard, it's hardly surprising that a compiler hasn't implemented them.
You can find the state of modules in GCC here: <https://gcc.gnu.org/wiki/cxx-modules>
At the moment of writing, work has begun and is underway on a development branch.
|
ASP.NET MVC Html.DropDownList populated by Ajax call to controller?
I wanted to create an editor template for a field type that is represented as a dropdownlist. In the definition of the editor template I would like to populate the DropDownList using a call to a controller action that returns the results as JSON. Any ideas how to do this?
E.g something like:
```
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<TheFieldType>" %>
<%= Html.DropDownList(.....
```
| In the editor template, provide an empty dropdown:
```
<%= Html.DropDownListFor(
x => x.PropertyToHoldSelectedValue,
Enumerable.Empty<SelectListItem>(),
"-- Loading Values --",
new { id = "foo" })
%>
```
Then setup a controller action that will return the values:
```
public class FooController: Controller
{
public ActionResult Index()
{
return Json(new[] {
new { Id = 1, Value = "value 1" },
new { Id = 2, Value = "value 2" },
new { Id = 3, Value = "value 3" },
}, JsonRequestBehavior.AllowGet);
}
}
```
And then populate the values using AJAX (note that the `#foo` selector matches the `id` assigned in the editor template):
```
$(function() {
$.getJSON('/foo/index', function(result) {
var ddl = $('#foo');
ddl.empty();
$(result).each(function() {
$(document.createElement('option'))
.attr('value', this.Id)
.text(this.Value)
.appendTo(ddl);
});
});
});
```
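Finally, assuming the editor template file is named after the field type (e.g. `Views/Shared/EditorTemplates/TheFieldType.ascx`), it should be picked up automatically when the property is rendered from a view, along these lines (`PropertyOfThatType` is a placeholder name):
```
<%= Html.EditorFor(model => model.PropertyOfThatType) %>
```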
|
Determine a normal distribution given its quantile information
I was wondering how I could have R tell me the **SD** (used as an argument to the built-in **qnorm()** function) for a normal distribution whose 95% limit values are already known?
As an example, I know the two 95% limit values for my normal are 158 and 168, respectively. So, in **the R code below**, the **SD** is shown as "x". **If** "y" (the result of this simple **qnorm()** call) needs to be (158, 168), **then** can R tell me what **x** should be?
```
y <- qnorm(c(.025,.975), 163, x)
```
| # A general procedure for Normal distribution
Suppose we have a Normal distribution `X ~ N(mu, sigma)`, with unknown mean `mu` and unknown standard deviation `sigma`. And we aim to solve for `mu` and `sigma`, given two quantile equations:
```
Pr(X < q1) = alpha1
Pr(X < q2) = alpha2
```
We consider standardization: `Z = (X - mu) / sigma`, so that
```
Pr(Z < (q1 - mu) / sigma) = alpha1
Pr(Z < (q2 - mu) / sigma) = alpha2
```
In other words,
```
(q1 - mu) / sigma = qnorm(alpha1)
(q2 - mu) / sigma = qnorm(alpha2)
```
The RHS is explicitly known, and we define `beta1 = qnorm(alpha1)`, `beta2 = qnorm(alpha2)`. Now, the above simplifies to a system of 2 linear equations:
```
mu + beta1 * sigma = q1
mu + beta2 * sigma = q2
```
This system has coefficient matrix:
```
1 beta1
1 beta2
```
with determinant `beta2 - beta1`. The only situation for singularity is `beta2 = beta1`. As long as the system is non-singular, we can use `solve` to solve for `mu` and `sigma`.
Think about what the singularity situation means. `qnorm` is strictly monotone for the Normal distribution, so `beta1 = beta2` is the same as `alpha1 = alpha2`. But this is easily avoided since the quantile levels are under your control, so in the following I will not check for singularity.
Wrap up above into an estimation function:
```
est <- function(q, alpha) {
beta <- qnorm(alpha)
setNames(solve(cbind(1, beta), q), c("mu", "sigma"))
}
```
Let's have a test:
```
x <- est(c(158, 168), c(0.025, 0.975))
# mu sigma
#163.000000 2.551067
## verification
qnorm(c(0.025, 0.975), x[1], x[2])
# [1] 158 168
```
---
We can also do something arbitrary:
```
x <- est(c(1, 5), c(0.1, 0.4))
# mu sigma
#5.985590 3.890277
## verification
qnorm(c(0.1, 0.4), x[1], x[2])
# [1] 1 5
```
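As a side note, this particular 2-by-2 system also has a simple closed-form solution; the following sketch is equivalent to calling `solve` (still assuming `alpha1 != alpha2`):
```
est2 <- function(q, alpha) {
  beta <- qnorm(alpha)
  sigma <- (q[2] - q[1]) / (beta[2] - beta[1])  ## subtract the two equations
  mu <- q[1] - beta[1] * sigma                  ## back-substitute into the first equation
  setNames(c(mu, sigma), c("mu", "sigma"))
}
est2(c(158, 168), c(0.025, 0.975))
#        mu     sigma
#163.000000  2.551067
```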
|
Step by step instruction to install Rust and Cargo for mingw with Msys2?
I tried to install Rust on Cygwin but failed to be able to link with mingw. Now I am trying to install it with Msys2. I already installed Msys2 and Mingw. I tried to follow [this wiki page](https://github.com/rust-lang/rust-wiki-backup/blob/master/Using-Rust-on-Windows.md) but I got lost at number 2:
>
> Download and install Rust+Cargo using the installer but be sure to disable the Linker and platform libraries option.
>
>
>
Is it referring to the "rustup-init.exe" on [the install page](https://www.rust-lang.org/en-US/install.html)? Should I double-click to run this file, or run it from Msys2? I tried running it from Msys2 and got these options:
>
>
> ```
> 1) Proceed with installation (default)
> 2) Customize installation
> 3) Cancel installation
>
> ```
>
>
I don't know what to do next.
| The *Using Rust on Windows* page you linked to dates from before rustup replaced the installer as the default option to install Rust. [Installers](https://www.rust-lang.org/en-US/other-installers.html#standalone-installers) are still available, but you should use rustup if possible, because it makes it easy to update and to use multiple toolchains at once (e.g. stable, beta and nightly). If you must use the installer, just select the `x86_64-pc-windows-gnu` installer and follow the step from the *Using Rust on Windows* page. If you're using rustup, read on.
By default, rustup on Windows installs the compiler and tools targeting the MSVC toolchain, rather than the GNU/MinGW-w64 toolchain. At the initial menu, select *2) Customize installation*. When asked for a host triple, enter `x86_64-pc-windows-gnu`. Then make a choice for the other questions, then proceed with the installation.
**Note:** If rustup is already installed, then rerunning rustup-init won't actually install the requested toolchain. Instead, run `rustup toolchain install stable-x86_64-pc-windows-gnu` if you already have the MSVC-based toolchain. Then run `rustup default stable-x86_64-pc-windows-gnu` to set the GNU-based toolchain as the default.
Rustup will install the MinGW linker and platform libraries automatically (as part of the `rust-mingw` component) and refuses to let you remove them. If you prefer to use the MinGW linker and libraries you installed with MSYS2, you'll need to create a [`.cargo/config`](http://doc.crates.io/config.html) file (either in your profile directory, i.e. `C:\Users\you\.cargo\config`, or in your project's directory if this configuration is specific to a project). The contents of that file might look like this:
```
[target.x86_64-pc-windows-gnu]
linker = "C:\\msys2\\mingw64\\bin\\gcc.exe"
ar = "C:\\msys2\\mingw64\\bin\\ar.exe"
```
Rustup will modify the `PATH` environment variable unless you told it not to. However, MSYS2 resets `PATH` by default when you launch it, so when you try to invoke `cargo` or `rustc` from your MSYS2 shell, the shell might not find them. You'll need to edit your `.profile`/`.bash_profile` script to set the `PATH` correctly (you need to prepend `/c/Users/yourname/.cargo/bin:` to `PATH`).
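For example, adding a line like the following near the end of `~/.bash_profile` should do it (a sketch only; adjust the user name for your machine):
```
# make the rustup-installed cargo and rustc visible from the MSYS2 shell
export PATH="/c/Users/yourname/.cargo/bin:$PATH"
```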
|