Where is the documentation on URL helpers in Rails?
How do I know what arguments URL helpers take in Rails? For example, how do I know the URL helper below takes just one parameter? I know these methods are metaprogrammed, but where is their documentation?
```
link_to "New Ticket", new_project_ticket_path(@project)
```
| You can determine how many parameters a route helper requires by looking at the route definition.
For example, you might have this routes file:
```
resources :users
```
If you ran `rake routes` at the command line you would see something like this:
```
users GET /users(.:format) users#index
POST /users(.:format) users#create
new_user GET /users/new(.:format) users#new
edit_user GET /users/:id/edit(.:format) users#edit
user GET /users/:id(.:format) users#show
PUT /users/:id(.:format) users#update
DELETE /users/:id(.:format) users#destroy
```
The first column gives you the name of the route. You can append `_path` or `_url` to get the name of a route helper.
The third column shows the pattern. This is where you can figure out what the arguments are. Arguments are the parts prefixed with a colon, and optional arguments are shown in parentheses. For example the `edit_user` route has the pattern `/users/:id/edit(.:format)` which contains one required argument (`id`) and one optional argument (`format`), which tells me I need to pass at least one argument to the `edit_user_path` or `edit_user_url` helper:
```
edit_user_path(1) # => "/users/1/edit"
edit_user_path(2, :html) # => "/users/2/edit.html"
```
You can also use the argument names from the pattern as keys in a hash:
```
edit_user_path(id: 3, format: 'js') # => "/users/3/edit.js"
```
Finally, you can add extra arguments which will become part of the query string:
```
edit_user_path(id: 4, format: 'json', foo: 1) # => "/users/4/edit.json?foo=1"
edit_user_path(5, bar: 2) # => "/users/5/edit?bar=2"
```
See the [Rails Routing Guide's section on Listing Existing Routes](http://guides.rubyonrails.org/routing.html#listing-existing-routes) for more information about `rake routes`.
|
Scrapy hxs.select() not selecting all results
I am trying to use scrapy to scrape odds from [here](http://www.paddypower.com/football/football-matches/premier-league).
Currently I am just trying to log the results with the following spider:
```
def parse(self, response):
log.start("LogFile.txt", log.DEBUG);
hxs = HtmlXPathSelector(response)
sites = hxs.select('//div[@class="fb_day_type_wrapper"]')
items = []
for site in sites:
siteAddress = urlparse.urljoin(response.url, site.extract())
self.log('Found category url: %s' % siteAddress)
```
This only logs the entry "This market is currently unavailable....", and not the other elements which contain the odds.
I have tried a few different selectors with no luck. It looks like once I try and get inside of the element `div[@class="fb_day_type_wrapper"]` I get nothing returned. I have the same results using the scrapy shell.
| The site uses javascript to generate the data table. There are some alternatives, like [scrapyjs](https://github.com/scrapinghub/scrapyjs) or [splash](https://github.com/scrapinghub/splash), that allow you to get the js-rendered html page. If you only need to scrape one page, you might be better off using Selenium.
Otherwise, you might need to go into hardcore mode and reverse engineer what is going on in the site with the data. I will show you how to do that.
First, start the `scrapy shell` so we can explore the web page:
```
scrapy shell http://www.paddypower.com/football/football-matches/premier-league
```
**Note:** I'm using python 2.7.4, ipython 0.13.2 and scrapy 0.18.0.
If you look in the source for "Crystal Palace v Fulham" in your browser, you will see there is javascript code that has that reference. The `<script>` block looks like:
```
document.bodyOnLoad.push(function() {
lb_fb_cpn_init(
"",
"html",
"MR_224",
{category: 'SOCCER',
```
We look up this element in the shell:
```
In [1]: hxs.select('//script[contains(., "lb_fb_cpn_init")]')
Out[1]: [<HtmlXPathSelector xpath='//script[contains(., "lb_fb_cpn_init")]' data=u'<script type="text/javascript">\n/* $Id: '>]
```
If you look at the `lb_fb_cpn_init` arguments, you will see the data we are looking for is passed as an argument in this form:
```
[{names: {en: 'Newcastle v Liverpool'}, ...
```
In fact there are three arguments like that:
```
In [2]: hxs.select('//script[contains(., "lb_fb_cpn_init")]').re('\[{names:')
Out[2]: [u'[{names:', u'[{names:', u'[{names:']
```
So we extract all of them; notice that we lean heavily on regular expressions:
```
In [3]: js_args = hxs.select('//script[contains(., "lb_fb_cpn_init")]').re(r'(\[{names:(?:.+?)\]),')
In [4]: len(js_args)
Out[4]: 3
```
The idea here is that we want to parse the javascript code (which is an object literal) into python code (a dict). We could use `json.loads`, but to do so the js code must be a valid json object, that is, it must have field names and strings enclosed in `""`.
We proceed to do so. First I join the arguments into a single string as a javascript list:
```
In [5]: args_raw = '[{}]'.format(', '.join(js_args))
```
Then we enclose the field names in `""` **and** replace single quotes with double quotes:
```
In [6]: import re
In [7]: args_json = re.sub(r'(,\s?|{)(\w+):', r'\1"\2":', args_raw).replace("'", '"')
```
This might not always work in all cases as the javascript code might have patterns that are not so easy to replace with a single `re.sub` and/or `.replace`.
We are ready to parse the javascript code as a json object:
```
In [8]: import json
In [9]: data = json.loads(args_json)
In [10]: len(data)
Out[10]: 3
```
Here, I'm just looking for the event name and odds. You can take a look at the `data` content to see what it looks like.
Luckily, the data seems to have a correlation:
```
In [11]: map(len, data)
Out[11]: [20, 20, 60]
```
You could as well build a single `dict` from the three of them by using the `ev_id` field. I will just assume that `data[0]` and `data[1]` have a direct correlation and that `data[2]` contains 3 items per event. This can be easily verified with:
```
In [12]: map(lambda v: v['ev_id'], data[2])
Out [12]:
[5889932,
5889932,
5889932,
5889933,
5889933,
5889933,
...
```
With some python-fu, we can merge the records:
```
In [13]: odds = iter(data[2])
In [14]: odds_merged = zip(odds, odds, odds)
In [15]: data_merged = zip(data[0], data[1], odds_merged)
In [16]: len(data_merged)
Out[16]: 20
```
Finally, we collect the data:
```
In [17]: get_odd = lambda obj: (obj['names']['en'], '/'.join([obj['lp_num'], obj['lp_den']]))
In [18]: event_odds = []
In [19]: for event, _, odds in data_merged:
....: event_odds.append({'name': event['names']['en'], 'odds': dict(map(get_odd, odds)), 'url': event['url']})
....:
In [20]: event_odds
Out[20]:
[{'name': u'Newcastle v Liverpool',
'odds': {u'Draw': u'14/5', u'Liverpool': u'17/20', u'Newcastle': u'3/1'},
'url': u'http://www.paddypower.com/football/football-matches/premier-league-matches/Newcastle%2dv%2dLiverpool-5889932.html'},
{'name': u'Arsenal v Norwich',
'odds': {u'Arsenal': u'3/10', u'Draw': u'9/2', u'Norwich': u'9/1'},
'url': u'http://www.paddypower.com/football/football-matches/premier-league-matches/Arsenal%2dv%2dNorwich-5889933.html'},
{'name': u'Chelsea v Cardiff',
'odds': {u'Cardiff': u'10/1', u'Chelsea': u'1/4', u'Draw': u'5/1'},
'url': u'http://www.paddypower.com/football/football-matches/premier-league-matches/Chelsea%2dv%2dCardiff-5889934.html'},
{'name': u'Everton v Hull',
'odds': {u'Draw': u'10/3', u'Everton': u'4/9', u'Hull': u'13/2'},
'url': u'http://www.paddypower.com/football/football-matches/premier-league-matches/Everton%2dv%2dHull-5889935.html'},
{'name': u'Man Utd v Southampton',
'odds': {u'Draw': u'3/1', u'Man Utd': u'8/15', u'Southampton': u'11/2'},
'url': u'http://www.paddypower.com/football/football-matches/premier-league-matches/Man%2dUtd%2dv%2dSouthampton-5889939.html'},
...
```
As you can see, web scraping can be very challenging (and fun!). It all depends on how the website displays the data. Here you could save time by just using Selenium, but if you are looking to scrape a large website, Selenium will be very slow compared to Scrapy.
Also, you have to consider whether the site will get code updates very often; in that case you will spend more time reverse engineering the js code, and a solution like [scrapyjs](https://github.com/scrapinghub/scrapyjs) or [splash](https://github.com/scrapinghub/splash) can be a better option.
Final remarks:
- Now you have all the code required to extract the data. You need to integrate this into your spider callback and build your item.
- Don't use `log.start`. Use the setting `LOG_FILE` (command line argument: `--set LOG_FILE=mylog.txt`).
- Remember that `.extract()` always returns a list.
|
Git ssh connection refused with the following format
Whenever I use the following
```
url = ssh://user@dev.example.com:imageInfo.git
```
in .git/config for a remote repo I get the following error
```
ssh: connect to host port 22: Connection refused
fatal: The remote end hung up unexpectedly
```
but if use the following
```
ssh user@dev.example.com
```
outside of git I connect without a problem.
Git also connects to same server with a gitosis user using
```
url = gitosis@dev.example.com:imageInfo.git
```
in .git/config and it has no problems.
so to sum things up in short my .git/config file looks like this
```
[remote "production"]
url = ssh://user@dev.example.com:imageInfo.git
url = gitosis@dev.example.com:imageInfo.git
```
any ideas?
| You have the format of the SSH URL wrong - you can either use the `scp`-style syntax, like:
```
user@dev.example.com:imageInfo.git
```
... or the true URL form, where you need a `/` after the host rather than a `:`, and an absolute path, which I can only guess at, e.g.:
```
ssh://user@dev.example.com/srv/git/imageInfo.git
```
The documentation for git's URLs [is here](http://git-scm.com/docs/git-clone#URLS), but `kernel.org` is down at the moment, so you may want to look at the cached version [here](http://webcache.googleusercontent.com/search?q=cache%3aTgm6QVLCfx0J%3awww.kernel.org/pub/software/scm/git/docs/git-clone.html&cd=1&hl=en&ct=clnk&gl=uk&client=ubuntu#URLS).
|
TFlowPanel alike in D5
I am looking for an implementation of `TFlowPanel` (or similar) that will work with D5.
Basically I need the `TFlowPanel` to work inside a `TScrollBox` (with Vertical scroll bar), so the controls will wrap based on the Width of that `TScrollBox`.
The images basically show what I need:
![enter image description here](https://i.stack.imgur.com/EA3se.png)
After resizing the controls are automatically repositioned:
![enter image description here](https://i.stack.imgur.com/gBUJx.png)
With Vertical scroll bar:
![enter image description here](https://i.stack.imgur.com/0IYUs.png)
| Just a concept. No various FlowTypes, and no possibility to change the order of the controls. You could still move them around by changing the order in the DFM, I think, or by resetting the parent.
The panel sizes itself vertically to fit all controls. This means that when you put it inside a scrollbox, it will automatically work.
```
unit uAlignPanel;
interface
uses
Windows, SysUtils, Classes, Controls, ExtCtrls;
type
TAlignPanel = class(TPanel)
protected
procedure SetChildOrder(Child: TComponent; Order: Integer); overload; override;
procedure SetZOrder(TopMost: Boolean); override;
public
procedure AlignControls(AControl: TControl; var Rect: TRect); override;
procedure Insert(AControl: TControl);
procedure Append(AControl: TControl);
function GetChildOrder(Child: TControl): Integer;
procedure SetChildOrder(Child: TControl; Order: Integer); reintroduce; overload; virtual;
procedure MoveChildBefore(Child: TControl; Sibling: TControl); virtual;
end;
procedure Register;
implementation
procedure Register;
begin
RegisterComponents('StackOverflow', [TAlignPanel]);
end;
{ TAlignPanel }
procedure TAlignPanel.AlignControls(AControl: TControl; var Rect: TRect);
var
i: Integer;
x, y: Integer;
LineHeight: Integer;
begin
x := 0; y := 0;
LineHeight := 0;
for i := 0 to ControlCount - 1 do
begin
if x + Controls[i].Width > ClientWidth then
begin
x := 0;
y := y + LineHeight;
LineHeight := 0;
end;
Controls[i].Top := y;
Controls[i].Left := x;
x := x + Controls[i].Width;
if Controls[i].Height > LineHeight then
LineHeight := Controls[i].Height;
end;
// Height + 1. Not only looks nicer, but also prevents a small redrawing
// problem of the bottom line of the panel when adding controls.
ClientHeight := y + LineHeight + 1;
end;
procedure TAlignPanel.Append(AControl: TControl);
begin
AControl.Parent := Self;
AControl.BringToFront;
Realign;
end;
function TAlignPanel.GetChildOrder(Child: TControl): Integer;
begin
for Result := 0 to ControlCount - 1 do
if Controls[Result] = Child then
Exit;
Result := -1;
end;
procedure TAlignPanel.Insert(AControl: TControl);
begin
AControl.Parent := Self;
AControl.SendToBack;
Realign;
end;
procedure TAlignPanel.MoveChildBefore(Child, Sibling: TControl);
var
CurrentIndex: Integer;
NewIndex: Integer;
begin
if Child = Sibling then
raise Exception.Create('Child and sibling cannot be the same');
CurrentIndex := GetChildOrder(Child);
if CurrentIndex = -1 then
raise Exception.CreateFmt( 'Control ''%s'' is not a child of panel ''%s''',
[Child.Name, Name]);
if Sibling <> nil then
begin
NewIndex := GetChildOrder(Sibling);
if NewIndex = -1 then
raise Exception.CreateFmt( 'Sibling ''%s'' is not a child of panel ''%s''',
[Sibling.Name, Name]);
if CurrentIndex < NewIndex then
Dec(NewIndex);
end
else
NewIndex := ControlCount;
SetChildOrder(Child, NewIndex);
end;
procedure TAlignPanel.SetChildOrder(Child: TComponent; Order: Integer);
begin
inherited;
Realign;
end;
procedure TAlignPanel.SetChildOrder(Child: TControl; Order: Integer);
begin
SetChildOrder(TComponent(Child), Order);
end;
procedure TAlignPanel.SetZOrder(TopMost: Boolean);
begin
inherited;
Realign;
end;
end.
```
|
Circle pack layout using nest() and rollup
I'm trying to create a circle pack graph using `nest()` and `.rollup`. I'm getting the following errors:
```
Error: Invalid value for <g> attribute transform="translate(undefined,undefined)"
Error: Invalid value for <circle> attribute r="NaN"
```
I want the circles to be sized according to the number of companies in each country. I'm attempting to adapt Mike Bostock's [Flare](http://bl.ocks.org/mbostock/4063530) circle-pack example.
If anyone could point me in the direction of any information, I'd be very grateful.
JS code:
```
var diameter = 960,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select("body").append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
//Get data
d3.json("data/countriesNested.php", function(error, data){
var submissionsByCountry = d3.nest()
.key(function(d) { return d.Country; })
.key(function(d) { return d.Organisation; })
.rollup(function(leaves) { return leaves.length; })
.entries(data);
var node = svg.datum(data).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.cx + "," + d.cy + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
});
d3.select(self.frameElement).style("height", diameter + "px");
</script>
```
Data file (from MySQL using PHP script):
```
[
{
"Country":"USA",
"ID":4,
"Organisation":"Company 1"
},
{
"Country":"USA",
"ID":5,
"Organisation":"Company 2"
},
{
"Country":"USA",
"ID":6,
"Organisation":"Company 3"
},
{
"Country":"FRANCE",
"ID":19,
"Organisation":"Company 4"
},
{
"Country":"FRANCE",
"ID":24,
"Organisation":"Company 5"
},
{
"Country":"GERMANY",
"ID":10,
"Organisation":"Company 6"
},
{
"Country":"ITALY",
"ID":7,
"Organisation":"Company 7"
},
.....
```
Thanks for reading.
| There are a few errors in your code that need to be fixed:
1. You need to set the accessor functions for children and values on your pack layout:
```
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.children(function(d) {
return d.values; // accessor for children
})
.value(function(d) {
return d.values; // accessor for values
});
```
2. Your `d3.nest()` returns an array, but the pack layout requires you to supply a root *object* containing the hierarchy. You have to create a root object and put your nested array inside:
```
var countryRoot = {
key: "root",
values: submissionsByCountry
};
```
3. In your code you nest your data into `submissionsByCountry`, but you are not using this variable anywhere else. So you obviously have to refer to it when binding data to your `svg`. This is accomplished by the above-mentioned root object, which is later bound to the `svg`.
```
var node = svg.datum(countryRoot).selectAll(".node")
```
4. The attributes the pack layout adds to your data nodes include the values `x` and `y`, whereas you referred to them as `cx` and `cy`, which are attributes of `<svg:circle>` but are not present in your data. Hence, you got your `transform="translate(undefined,undefined)"` error messages. You should use these attributes as such:
```
.attr("transform", function(d) {
return "translate(" + d.x + "," + d.y + ")";
});
```
I put together a working [plunk](http://plnkr.co/edit/2ycPSdLwVnbu62JwO9d0?p=preview).
|
Awk, tail, sed or others - which one faster for big files?
I have scripts for big log files. I can check all line and do something with `tail` and `awk`.
Tail:
```
tail -n +$startline $LOG
```
Awk:
```
awk 'NR>='"$startline"' {print}' $LOG
```
Checking the time, tail takes 6 minutes 39 seconds and awk takes 6 minutes 42 seconds. So the two commands do the same thing in roughly the same time.
I don't know how to do this with sed. Can sed be faster than tail and awk? Or maybe some other command?
---
Second question: I use `$startline` so that each run continues from the last processed line. For example:
I use the script like this:
```
10:00AM -> ./script -> $startline=1 and do something -> write line number to save file(for ex. 25),
10:05AM -> ./script -> $startline=26(read save file +1) and do something -> write line number save file(55),
10:10AM -> ./script -> $startline=56(read save file +1) and do something ....
```
But when the script runs, it reads through all lines and only starts doing something when it reaches `$startline`. And it's a little slow because of the huge files.
Any suggestions to make it faster?
Script example:
```
lastline=$(tail -1 "line.save")
startline=$(($lastline + 1))
tail -n +$startline $LOG | while read -r
do
....
done
linecount=$(wc -l "$LOG" | awk '{print $1}')
echo $linecount >> line.save
```
| `tail` and `head` are tools especially created for these purposes, so the intuitive idea is that they are quite optimized for them. On the other hand, `awk` and `sed` can perfectly do it because they are like a [Swiss Army knife](http://en.wikipedia.org/wiki/Swiss_Army_knife), but this is not supposed to be their best "skill" among the multiple others that they have.
In [Efficient way to print lines from a massive file using awk, sed, or something else?](https://unix.stackexchange.com/q/94318/40596) there is a nice comparison of methods, and `head` / `tail` is seen as the best approach.
Hence, I would go for `tail` + `head`.
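For example, a quick sketch (file name and line numbers are illustrative) of printing lines 10 through 20 by combining both tools:
```
# start output at line 10, then keep 11 lines (lines 10..20)
tail -n +10 "$LOG" | head -n 11
```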
---
Note also that if it is not only the last lines, but a set of them within the text, in `awk` (or in `sed`) you have the option to `exit` after the last line you wanted. This way, you avoid the script to run the file until the last line.
So this:
```
awk '{if (NR>=10 && NR<20) print} NR==20 {print; exit}'
```
is faster than
```
awk 'NR>=10 && NR<=20'
```
if your input happens to contain more than 20 lines.
---
Regarding your expression:
```
awk 'NR>='"$startline"' {print}' $LOG
```
note that it is more straightforward to write:
```
awk -v start="$startline" 'NR>=start' $LOG
```
There is no need to say `print` because it is implicit.
|
How to run " ps cax | grep something " in Python?
How do I run a command with a pipe `|` in it?
The subprocess module seems complex...
Is there something like
```
output,error = `ps cax | grep something`
```
as in shell script?
| See [Replacing shell pipeline](http://docs.python.org/library/subprocess.html#replacing-shell-pipeline):
```
import subprocess
proc1 = subprocess.Popen(['ps', 'cax'], stdout=subprocess.PIPE)
proc2 = subprocess.Popen(['grep', 'python'], stdin=proc1.stdout,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc1.stdout.close() # Allow proc1 to receive a SIGPIPE if proc2 exits.
out, err = proc2.communicate()
print('out: {0}'.format(out))
print('err: {0}'.format(err))
```
PS. Using `shell=True` can be dangerous. See for example [the warning](http://docs.python.org/library/subprocess.html#using-the-subprocess-module) in the docs.
---
There is also the [sh module](http://pypi.python.org/pypi/sh) which can make subprocess scripting in Python a lot more pleasant:
```
import sh
print(sh.grep(sh.ps("cax"), 'something'))
```
|
How to run Apache Airflow DAG as Unix user
I installed Apache Airflow on my cluster using the `root` account. I know it is bad practice, but it is only a test environment. I created a simple DAG:
```
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta
dag = DAG('create_directory', description='simple create directory workflow', start_date=datetime(2017, 6, 1))
t1 = BashOperator(task_id='create_dir', bash_command='mkdir /tmp/airflow_dir_test', dag=dag)
t2 = BashOperator(task_id='create_file', bash_command='echo airflow_works > /tmp/airflow_dir_test/airflow.txt')
t2.set_upstream(t1)
```
The problem is that when I run this job, the `root` user executes it. I tried to add the `owner` parameter, but it doesn't work. Airflow says:
```
Broken DAG: [/opt/airflow/dags/create_directory.py] name 'user1' is not defined
```
My question is, how can I run an Apache Airflow DAG as a user other than root?
| You can use the `run_as_user` parameter to [impersonate](https://airflow.incubator.apache.org/security.html?highlight=owner#impersonation) a unix user for any task:
```
t1 = BashOperator(task_id='create_dir', bash_command='mkdir /tmp/airflow_dir_test', dag=dag, run_as_user='user1')
```
You can use `default_args` if you want to apply it to every task in the DAG:
```
dag = DAG('create_directory', description='simple create directory workflow', start_date=datetime(2017, 6, 1), default_args={'run_as_user': 'user1'})
t1 = BashOperator(task_id='create_dir', bash_command='mkdir /tmp/airflow_dir_test', dag=dag)
t2 = BashOperator(task_id='create_file', bash_command='echo airflow_works > /tmp/airflow_dir_test/airflow.txt')
```
Note that the `owner` parameter is for something else, [multi-tenancy](https://airflow.incubator.apache.org/security.html?highlight=owner#multi-tenancy).
|
Why is there a "==" inside String.equals?
Why does Java compare (this == anotherString) inside the equalsIgnoreCase method when checking strings case-insensitively?
Also, why does String's equals use (this == anObject) to compare two objects?
Java 6: String Class equalsIgnoreCase implementation given below.
```
public boolean equalsIgnoreCase(String anotherString) {
return (this == anotherString) ? true :
(anotherString != null) && (anotherString.count == count) &&
regionMatches(true, 0, anotherString, 0, count);
}
```
Java 6: String Class equals implementation given below.
```
public boolean equals(Object anObject) {
if (this == anObject) {
return true;
}
```
|
>
> Why does Java compare (this == anotherString) inside the equalsIgnoreCase method when checking strings case-insensitively?
>
>
>
It's an optimization. If the reference passed in is exactly the same as `this`, then `equals` *must* return `true`, but we don't need to look at any fields etc. Everything is the same as itself. From the documentation for `Object.equals(Object)`:
>
> The equals method implements an equivalence relation on non-null object references:
>
>
> - It is reflexive: for any non-null reference value x, x.equals(x) should return true.
> - ...
>
>
>
It's very common for an equality check to start with:
- Is the other reference equal to `this`? If so, return true.
- Is the other reference null? If so, return false.
- Does the other reference refer to an object of the wrong type? If so, return false.
*Then* you go on to type-specific checks.
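As an illustrative sketch (a hypothetical class, not the JDK's actual source), an `equals` implementation following that pattern might look like:
```
// Hypothetical class illustrating the usual equals() pattern (not JDK source)
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;                    // same reference: trivially equal
        if (obj == null) return false;                   // nothing equals null
        if (getClass() != obj.getClass()) return false;  // wrong type
        Point other = (Point) obj;
        return x == other.x && y == other.y;             // type-specific field checks
    }

    @Override
    public int hashCode() { return 31 * x + y; }
}
```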
|
How do I use the HotSpot DTrace probes on SmartOS?
On Mac OS X, I can find the HotSpot probes of running Java programs by running:
```
cody.mello@ashur ~ (1) % sudo dtrace -ln 'hotspot*:::'
Password:
Invalid connection: com.apple.coresymbolicationd
ID PROVIDER MODULE FUNCTION NAME
165084 hotspot46 libjvm.dylib _ZN13instanceKlass15initialize_implE19instanceKlassHandleP6Thread [instanceKlass::initialize_impl(instanceKlassHandle, Thread*)] class-initialization-clinit
165085 hotspot46 libjvm.dylib _ZN13instanceKlass15initialize_implE19instanceKlassHandleP6Thread [instanceKlass::initialize_impl(instanceKlassHandle, Thread*)] class-initialization-concurrent
165086 hotspot46 libjvm.dylib _ZN13instanceKlass15initialize_implE19instanceKlassHandleP6Thread [instanceKlass::initialize_impl(instanceKlassHandle, Thread*)] class-initialization-end
165087 hotspot46 libjvm.dylib _ZN13instanceKlass15initialize_implE19instanceKlassHandleP6Thread [instanceKlass::initialize_impl(instanceKlassHandle, Thread*)] class-initialization-erroneous
165088 hotspot46 libjvm.dylib _ZN13instanceKlass15initialize_implE19instanceKlassHandleP6Thread [instanceKlass::initialize_impl(instanceKlassHandle, Thread*)] class-initialization-error
165089 hotspot46 libjvm.dylib _ZN13instanceKlass15initialize_implE19instanceKlassHandleP6Thread [instanceKlass::initialize_impl(instanceKlassHandle, Thread*)] class-initialization-recursive
...
```
But if I create a simple Java program and run it on SmartOS:
```
cody@101901c9-6d66-ea32-fe42-f1fbebd4bf99 ~ % cat Loop.java
class Loop {
public static void main(String[] args) throws InterruptedException {
while (true) {
Thread.sleep(5000);
}
}
}
cody@101901c9-6d66-ea32-fe42-f1fbebd4bf99 ~ % javac Loop.java
cody@101901c9-6d66-ea32-fe42-f1fbebd4bf99 ~ % java Loop
```
I can't find any probes:
```
cody@101901c9-6d66-ea32-fe42-f1fbebd4bf99 ~ (255) % pfexec dtrace -ln 'hotspot*:::'
ID PROVIDER MODULE FUNCTION NAME
dtrace: failed to match hotspot*:::: No probe matches description
```
Is there anything special that I need to do to see them?
| The problem here is that on SmartOS (and other illumos variants -- as well as their proprietary Solaris cousins) the DTrace module in the JVM is *lazily loaded* (that is, the DOF was compiled with `-x lazyload`). As a result, the DTrace probes are not loaded until explicitly enabled. There are two ways to deal with this. The first is that you can tell DTrace itself to enable the specific probes in question, forcing the target process to load its probes. This requires (at least) the ID of the target process; to couch this in the example provided in the question, it would be something like:
```
% pfexec dtrace -ln 'hotspot*$target:::' -p `pgrep -fn "java Loop"`
```
This will pick up the `hotspot` (and `hotspot_jni`) USDT probes, but it still leaves using the `jstack()` action difficult on a machine filled with unsuspecting Java processes. (That is, this works when you want to use the USDT probes on a known process, not when you want to use the ustack helper profile all Java processes.) If this is a problem that you care about, on illumos variants (SmartOS, OmniOS, etc.) you can effectively undo the lazy loading of the DTrace probes (and stack helper) by using an *audit library* designed for the task. This library -- `/usr/lib/dtrace/libdtrace_forceload.so` and its 64-bit variant, `/usr/lib/dtrace/64/libdtrace_forceload.so` -- will effectively force the DTrace probes to be loaded when the process starts, giving you USDT probes and the `jstack()` action for all such processes. To do this for 32-bit JVMs, launch `java` with the `LD_AUDIT_32` environment variable set:
```
export LD_AUDIT_32=/usr/lib/dtrace/libdtrace_forceload.so
```
For 64-bit JVMs:
```
export LD_AUDIT_64=/usr/lib/dtrace/64/libdtrace_forceload.so
```
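For example, a hypothetical session (reusing the `Loop` program from the question) might then look like this, with the probe listing succeeding without naming a specific process:
```
% export LD_AUDIT_64=/usr/lib/dtrace/64/libdtrace_forceload.so
% java Loop &
% pfexec dtrace -ln 'hotspot*:::'
```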
|
How to set Authorization headers in an HTML form or an A href
I have this code:
```
$.ajax({
url: "http://localhost:15797/api/values",
type: 'get',
contentType: 'application/json',
headers: {
"Authorization": "Bearer " + token
}
})
```
works fine, but I want to do that without using Ajax. I want something like this:
```
<form action="http://localhost:15797/api/values" method="get">
<input type="hidden" name="headers[Authorization]" value="Bearer token" />
<input type="submit" />
</form>
```
Is it possible? Or can I do something like that without XMLHttpRequest? How?
| You need to send the `Authorization` argument in the HTTP request header; that's imposed by OAuth flows. Of course you could change this by making changes to the OAuth server's code, but if you have no control over the OAuth server's code, it's not possible.
So answering your question: no, you can't send it with the form's posted data. However, you can put the token in a hidden field and write JS code that reads it from the field and puts it in the request header.
e.g.
HTML:
```
<input id="tokenField" type="hidden" />
<input id="submitButton" type="button" />
```
Javascript:
```
$('#submitButton').on('click',function(){
$.ajax({
url: "http://localhost:15797/api/values",
type: 'GET',
contentType: 'application/json',
headers: {
"Authorization": "Bearer " + $('#tokenField').val()
},
async: false
  });
});
```
Notice that `async: false` makes your call synchronous, just like a submit. And if you need to post other data to the server, you can change `type: 'GET'` to `type: 'POST'`, add another field named `data`, and pass your form data through its value:
```
<input id="firstName" type="text" />
<input id="lastName" type="text" />
<input id="tokenField" type="hidden" />
<input id="submitButton" type="button" />
$('#submitButton').on('click',function(){
$.ajax({
url: "http://localhost:15797/api/values",
type: 'POST',
data: {
firstName: $('#firstName').val(),
lastName: $('#lastName').val()
},
contentType: 'application/json',
headers: {
"Authorization": "Bearer " + $('#tokenField').val()
},
async: false
})
});
```
|
How can I change a form field value before saving to db?
I have form:
```
<%= form_for(@event) do |f| %>
<div class="field">
<%= f.label :title %><br />
<%= f.text_field :title %>
</div>
<div class="field">
<%= f.label :date %><br />
<%= f.text_field :date %>
</div>
<div class="field">
<%= f.label :repeat %><br />
<%= repeat_types = ['none', 'daily', 'monthly', 'yearly']
f.select :repeat, repeat_types %>
</div>
<div class="actions">
<%= f.submit %>
</div>
<% end %>
```
I need to save changed data into the 'repeat' field, like:
```
:repeat = Event.rule(:date,:repeat)
```
Where and how can I modify the repeat field before saving it to the database?
| In general, if you need to change data slightly from what the user entered in a form before saving it to the database, you can do so in Rails by using [ActiveRecord callbacks](http://guides.rubyonrails.org/active_record_validations_callbacks.html#available-callbacks) such as `before_save`. For example, you might have the following:
```
class Event < ActiveRecord::Base
before_save :set_repeat
private
def set_repeat
self.repeat = Event.rule(date, repeat) if ['none', 'daily', 'monthly', 'yearly'].include? repeat
end
end
```
This would always run the `set_repeat` private callback method on an `Event` instance before saving it to the DB, and changes the `repeat` attribute if it is currently one of the strings in `['none', 'daily', 'monthly', 'yearly']` (but you should adjust this logic as needed -- I just guessed at what you might want).
So I would look into [ActiveRecord callbacks](http://guides.rubyonrails.org/active_record_validations_callbacks.html#available-callbacks) as a general way to modify model attributes before saving them.
|
What is the maximum length of an Azure Active Directory (AAD) username?
I'm trying to find the maximum length of an Azure Active Directory (AAD) username. The username is UPN-formatted (username@domain), so presumably there are two string length limits for the username and domain fields.
All I can find online is the maximum length of either a standard Windows user or a Windows domain user, which is 20 characters. Is Azure the same? And presumably that's just for the username, so what is the limit on the domain string?
| The maximum length for an AAD username (without domain) is 64 characters.
The maximum length for an AAD custom domain is 48 characters.
For a non-custom (\*.onmicrosoft.com) domain, the string length limit is 27 characters. As ".onmicrosoft.com" is 16 characters, this adds up to a 43-character limit in total, slightly less than the custom domain limit.
So overall, a username in the UPN format (username@domain) has a total string length limit of 113 characters.
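That total follows from the parts: 64 (username) + 1 (the "@" separator) + 48 (custom domain) = 113 characters.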
These figures can be found in the official Microsoft documentation [here](https://learn.microsoft.com/en-us/azure/active-directory/authentication/concept-sspr-policy#userprincipalname-policies-that-apply-to-all-user-accounts).
|
Modify value by key
```
Dim dFeat As Collection
Set dFeat = New Collection
Dim cObj As Collection
Set cObj = New Collection
cObj.Add 3, "PASSED"
cObj.Add 4, "TOTAL"
dFeat.Add cObj, "M1"
Set cObj = New Collection
cObj.Add 5, "PASSED"
cObj.Add 6, "TOTAL"
dFeat.Add cObj, "M2"
dFeat("M1")("TOTAL") = 88 ' Error here
Debug.Print dFeat("M1")("TOTAL")
```
How do I modify the value of the inner collection using the key?
| Alex K.'s advice about using a `Dictionary` is correct, but I think the issue here is more general than his answer lets on. A `Collection` key (or index position for that matter) is only good for reading, not writing.
So in this line:
```
dFeat("M1")("TOTAL") = 88 ' Error here
```
`dFeat("M1")` is fine. It returns the `Collection` you added with key "M1". The error is happening because you try to directly assign to an element of *that* collection. In general, if `c` is a `Collection`, `c("TOTAL")` (or `c(2)`) can't be an lvalue.
As Alex K. says, the best way around this is to use a `Dictionary` for the inner "collections", or for both the inner and outer. Here is how using one for the inner would look:
```
Dim d As Dictionary
Set d = New Dictionary
d("PASSED") = 3
d("TOTAL") = 4
dFeat.Add d, "M1"
```
Then the line:
```
dFeat("M1")("TOTAL") = 88
```
will work because `dFeat("M1")("TOTAL")` *is* a valid lvalue.
If for some reason you can't or don't want to include the MS Scripting Runtime, you'll have to replace the failing line with something like:
```
Dim c As Collection
Set c = dFeat("M1")
Call c.Remove("TOTAL")
Call c.Add(88, "TOTAL")
```
or more concisely:
```
Call dFeat("M1").Remove("TOTAL")
Call dFeat("M1").Add(88, "TOTAL")
```
Then, you can *read* the value of `dFeat("M1")("TOTAL")`, but you still can't assign to it.
|
Magento - getting data from an order or invoice
I'm trying to write a Magento (CE 1.4) extension to export order data once an order has been paid for. I've set up an observer that hooks in to the `sales_order_invoice_save_after` event, and that is working properly - my function gets executed when an invoice is generated. But I'm having trouble getting information about the order, such as the shipping address, billing address, items ordered, order total, etc.
This is my attempt:
```
class Lightbulb_Blastramp_Model_Observer {
public function sendOrderToBlastramp(Varien_Event_Observer $observer) {
$invoice = $observer->getEvent()->getInvoice();
$order = $invoice->getOrder();
$shipping_address = $order->getShippingAddress();
$billing_address = $order->getBillingAddress();
$items = $invoice->getAllItems();
$total = $invoice->getGrandTotal();
return $this;
}
}
```
I tried doing a `print_r` on all those variables, and ended up getting a lot of data back. Could someone point me in the right direction of getting the shipping address of an order?
Thanks!
| Many Magento objects are based on `Varien_Object`, which has a method called `getData()` to get just the usually interesting data of the object (excluding the tons of other, but mostly useless data).
With your code you could either go for all the data at once:
```
$shipping_address = $order->getShippingAddress();
var_dump($shipping_address->getData());
```
or directly for specific single properties like this:
```
$shipping_address = $order->getShippingAddress();
var_dump(
$shipping_address->getFirstname(),
$shipping_address->getLastname(),
$shipping_address->getCity()
);
```
To understand how this works, I'd recommend to make yourself more familiar with the `Varien_Object` and read a bit about PHPs [magic methods](http://php.net/manual/en/language.oop5.magic.php), like `__call()`, `__get()` and `__set()`.
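To illustrate the idea, here is a simplified sketch (not Magento's actual source) of how a `Varien_Object`-style `__call()` can resolve `getFirstname()` to `getData('firstname')`:
```
class SimpleVarienObject
{
    protected $_data = array();

    public function setData($key, $value) { $this->_data[$key] = $value; }

    public function getData($key)
    {
        return isset($this->_data[$key]) ? $this->_data[$key] : null;
    }

    public function __call($method, $args)
    {
        if (substr($method, 0, 3) == 'get') {
            // getFirstname -> firstname, getSomeValue -> some_value
            $key = strtolower(preg_replace('/(.)([A-Z])/', '$1_$2', substr($method, 3)));
            return $this->getData($key);
        }
        return null;
    }
}
```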
|
Input box in react to show suggestions in reactjs
How do I make an input box that shows suggestions with a small delay, rather than on every character typed? I don't want to hit the suggestion API on every character of input.
```
class Input extends React.Component {
constructor (props){
super(props);
this.state = {
inputVal:'',
suggestions : []
}
}
handleChange = ({target:{value}})=>{
this.setState({inputVal: value});
fetch('https://api.github.com/search/users?q='+value)
.then(res => res.json())
.then(res => this.setState({suggestions: res.items}));
}
render(){
return (
<div>
<input onChange={this.handleChange} value={this.state.inputVal} />
<ul id="suggestions">
{this.state.suggestions.map(sugg=>{
return (
<li key={sugg.login}>{sugg.login}</li>
)
})}
</ul>
</div>
)
}
}
ReactDOM.render(<Input />, document.getElementById('container'));
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<div id='container'></div>
```
| You can use a delayed API call using setTimeout that is cleared after each change of the input value. Here's a small working example:
```
const INPUT_TIMEOUT = 250; //ms - It's our input delay
class TodoApp extends React.Component {
constructor(props) {
super(props);
this.state = {
value: '',
predictions: [],
};
this.onChange = this.onChange.bind(this);
}
getPredictions(value) {
// let's say that it's an API call
return [
'Boston',
'Los Angeles',
'San Diego',
'San Franciso',
'Sacramento',
'New York',
'New Jersey',
'Chicago',
].filter(item => item.toLowerCase().indexOf(value.toLowerCase()) !== -1);
}
onChange(e) {
// clear timeout when input changes value
clearTimeout(this.timeout);
const value = e.target.value;
this.setState({
value
});
if (value.length > 0) {
// make delayed api call
this.timeout = setTimeout(() => {
const predictions = this.getPredictions(value);
this.setState({
predictions
});
}, INPUT_TIMEOUT);
} else {
this.setState({
predictions: []
});
}
}
render() {
return (
<div >
<input type = "text" value={this.state.value} onChange = {this.onChange}/>
<div>
{
this.state.predictions.map((item, index) => (
<div key={index + item}>{item}</div>
))
}
</div>
</div>
)
}
}
ReactDOM.render( <TodoApp />, document.querySelector("#app"))
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<div id="app"></div>
```
|
Nokogiri: Running into error "undefined method 'text' for nil:NilClass"
I'm a newbie programmer, so excuse my noviceness. I'm using Nokogiri to scrape a police crime log. Here is the code below:
```
require 'rubygems'
require 'nokogiri'
require 'open-uri'
url = "http://www.sfsu.edu/~upd/crimelog/index.html"
doc = Nokogiri::HTML(open(url))
puts doc.at_css("title").text
doc.css(".brief").each do |brief|
puts brief.at_css("h3").text
end
```
I used the selector gadget bookmarklet to find the CSS selector for the log (.brief). When I pass "h3" through `brief.at_css` I get all of the h3 tags with the content inside.
However, if I add the .text method to remove the tags, I get a NoMethodError.
Is there any reason why this is happening? What am I missing? Thanks!
| To clarify: if you look at the structure of the HTML source, you will see that the very first occurrence of `<div class="brief">` does not have a child `h3` tag (it actually only has a child `<p>` tag).
The [Nokogiri Docs](http://nokogiri.org/Nokogiri/XML/Node.html#method-i-at_css) say that
>
> at_css(*rules)
>
>
> Search this node for the first occurrence of CSS rules. Equivalent to css(rules).first. See Node#css for more information.
>
>
>
If you call `at_css(*rules)`, the docs state it is equivalent to `css(rules).first`. When there are items (your `.brief` class contains an `h3`), then a `Nokogiri::XML::Element` object is returned, which responds to `text`; whereas if your `.brief` does not contain an `h3`, then a `NilClass` object is returned, which of course does not respond to `text`.
So if we call `css(rules)` (not `at_css` as you have), we get a `Nokogiri::XML::NodeSet` object returned, which has the `text()` method defined as follows (notice the `alias`):
```
# Get the inner text of all contained Node objects
def inner_text
collect{|j| j.inner_text}.join('')
end
alias :text :inner_text
```
Because the class is `Enumerable`, it iterates over its children, calling their `inner_text` methods, and joins the results together.
Therefore you can either perform a `nil?` check or as @floatless correctly stated just use the `css` method
|
Kubectl - How to Read Ingress Hosts from Config Variables?
I have a `ConfigMap` with a variable for my domain:
```
apiVersion: v1
kind: ConfigMap
metadata:
name: config
data:
MY_DOMAIN: mydomain.com
```
and my goal is to use the `MY_DOMAIN` variable inside my Ingress config
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
spec:
tls:
- hosts:
- config.MY_DOMAIN
secretName: mytls
rules:
- host: config.MY_DOMAIN
http:
paths:
- backend:
serviceName: myservice
servicePort: 3000
```
But obviously the config above is not valid. So how can this be achieved?
| The `configMapRef` and `secretMapRef` for the [envFrom](https://gist.github.com/troyharvey/4506472732157221e04c6b15e3b3f094#file-deployment-yml-L25) and [valueFrom](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data) functions are only available for environment variables, which means they cannot be used in this context. The desired functionality is not available in vanilla Kubernetes as of 1.18.0.
However, it can be done. [Helm](https://helm.sh/) and [Kustomize](https://kustomize.io/) are probably the two best ways to accomplish this, but it could also be done with `sed` or `awk`. Helm is a templating engine for Kubernetes manifests. Meaning, you create generic manifests, express the deltas between your desired manifests and the generic ones as variables, and then provide a variables file. At runtime, the variables from your variables file are automatically injected into the template for you.
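For illustration, a hypothetical Helm template fragment and values file for the ingress in question might look like this (file and variable names are assumptions):
```
# templates/myingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
    - hosts:
        - {{ .Values.myDomain }}
      secretName: mytls
  rules:
    - host: {{ .Values.myDomain }}

# values.yaml
myDomain: mydomain.com
```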
Another way to accomplish this is with Kustomize, which is what I would personally recommend. Kustomize is like Helm in that it deals with producing customized manifests from generic ones, but it doesn't do so through templating. Kustomize is unique in that it performs merge patches between [YAML](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#patchstrategicmerge) or [JSON](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#patchjson6902) files at runtime. These patches are referred to as [Overlays](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#overlay), so it is often called an overlay engine to differentiate it from traditional templating engines. The reason is that Kustomize can be used with recursive directory trees of [bases](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#base) and [overlays](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#overlay), which makes it much more scalable for environments where dozens, hundreds, or thousands of manifests might need to be generated from boilerplate generic examples.
So how do we do this? Well, with Kustomize you would first define a [kustomization.yml](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#kustomization) file. Within you would define your Resources. In this case, `myingress`:
```
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- myingress.yml
```
So create an `example` directory and make a subdirectory called `base` inside it. Create `./example/base/kustomization.yml` and populate it with the kustomization above. Now create a `./example/base/myingress.yml` file and populate it with the example `myingress` file you gave above.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
spec:
tls:
- hosts:
- config.MY_DOMAIN
secretName: mytls
rules:
- host: config.MY_DOMAIN
http:
paths:
- backend:
serviceName: myservice
servicePort: 3000
```
Now we need to define our first overlay. We'll create two different domain configurations to provide an example of how overlays work. First create a `./example/overlays/domain_a` directory and create a `kustomization.yml` file within it with the following contents:
```
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base/
patchesStrategicMerge:
- ing_patch.yml
configMapGenerator:
- name: config-a
literals:
- MY_DOMAIN='domain_a'
```
At this point we have defined `ing_patch.yml` and `config-a` in this file. `ing_patch.yml` will serve as our ingress Patch and `config-a` will serve as our `configMap`. (Note that ConfigMap names must be valid DNS subdomain names, so underscores are not allowed in the name itself.) However, in this case we'll be taking advantage of a Kustomize feature known as a [configMapGenerator](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md) rather than manually creating configMap files for single literal `key:value` pairs.
Now that we have done this, we have to actually make our first patch! Since the deltas in your ingress are pretty small, it's not that hard. Create `./example/overlays/domain_a/ing_patch.yml` and populate it with:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
spec:
tls:
- hosts:
- domain.a.com
rules:
- host: domain.a.com
```
Perfect, you have created your first overlay. Now you can use `kubectl` or `kustomize` to generate your resultant manifest to apply to the Kubernetes API Server.
- **Kubectl Build:** `kubectl kustomize ./example/overlays/domain_a`
- **Kustomize Build:** `kustomize build ./example/overlays/domain_a`
Run one of the above Build commands and review the STDOUT produced in your terminal. Notice how it contains two resources, `myingress` and the generated `config-a` ConfigMap? And `myingress` contains the domain configuration present in your overlay's patch?
So, at this point you're probably asking: why does Kustomize exist if Kubectl supports these features by default? Well, Kustomize started as an external project, and the standalone Kustomize binary is often a newer release than the version available in Kubectl.
The next step is to create a second overlay. So go ahead and `cp` your first overlay over: `cp -r ./example/overlays/domain_a ./example/overlays/domain_b`.
Now that you have done that, open `./example/overlays/domain_b/ing_patch.yml` in a text editor and change the contents to look like so:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
spec:
tls:
- hosts:
- domain.b.com
rules:
- host: domain.b.com
```
Save the file and then build your two separate overlays:
```
kustomize build ./example/overlays/domain_a
kustomize build ./example/overlays/domain_b
```
Notice how each generated stream of STDOUT varies based on the patch present in the Overlay directory? You can continue to abstract this pattern by making your Bases the Overlays for other bases. Or by making your Overlays the Bases for other Overlays. Doing so can allow you to scale this project in extremely powerful and efficient ways. Apply them to your API Server if you wish:
```
kubectl apply -k ./example/overlays/domain_a
kubectl apply -k ./example/overlays/domain_b
```
This is only the beginning of Kustomize really. As you might have guessed after seeing the `configMapGenerator` field in the `kustomization.yml` file for each overlay, Kustomize has a LOT of features baked in. It can add [labels](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/plugins/builtins.md#field-name-commonLabels) to all of your resources, it can override their [namespaces](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/plugins/builtins.md#namespacetransformer) or container [image](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/plugins/builtins.md#imagetagtransformer) information, etc.
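As a small illustrative sketch (values are hypothetical), a kustomization.yml combining a few of those built-in transformers might look like:
```
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - myingress.yml
commonLabels:
  app: myapp          # added to every resource
namespace: staging    # overrides metadata.namespace everywhere
images:
  - name: myservice   # retag a container image across all manifests
    newTag: v2.0.1
```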
I hope this helps. Let me know if you have any other questions.
|
Is it possible to get identical SHA1 hash?
Given two different strings S1 and S2 (S1 != S2) is it possible that:
```
SHA1(S1) == SHA1(S2)
```
is True?
1. If yes - with what probability?
2. If not - why not?
3. Is there an upper bound on the length of an input string for which the probability of getting duplicates is 0? OR is the calculation of SHA1 (and hence the probability of duplicates) independent of the length of the string?
The goal I am trying to achieve is to hash some sensitive ID string (possibly joined together with some other fields like parent ID), so that I can use the hash value as an ID instead (for example in the database).
Example:
```
Resource ID: X123
Parent ID: P123
```
I don't want to expose the nature of my resource identifiers by allowing the client to see "X123-P123".
Instead I want to create a new column hash("X123-P123"), let's say it's AAAZZZ. Then the client can request resource with id AAAZZZ and not know about my internal id's etc.
| What you describe is called a *collision*. Collisions necessarily exist, since SHA-1 accepts many more distinct messages as input than it can produce distinct outputs (SHA-1 may eat any string of bits up to 2^64 bits, but outputs only 160 bits; thus, at least one output value must pop up several times). This observation is valid for any function with an output smaller than its input, regardless of whether the function is a "good" hash function or not.
Assuming that SHA-1 behaves like a "random oracle" (a conceptual object which basically returns random values, with the sole restriction that once it has returned output *v* on input *m*, it must always thereafter return *v* on input *m*), then the probability of collision, for any two distinct strings S1 and S2, should be 2^(-160). Still under the assumption of SHA-1 behaving like a random oracle, if you collect many input strings, then you shall begin to observe collisions after having collected about 2^80 such strings.
(That's 2^80 and not 2^160 because, with 2^80 strings you can make about 2^159 pairs of strings. This is often called the "birthday paradox" because it comes as a surprise to most people when applied to collisions on birthdays. See [the Wikipedia page](http://en.wikipedia.org/wiki/Birthday_paradox) on the subject.)
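To make the numbers concrete, here is a small illustrative Python sketch of the birthday-bound approximation p ≈ 1 - e^(-k(k-1)/2^(n+1)) for k random n-bit values:
```
import math

def collision_probability(k, n_bits):
    # Approximate probability of at least one collision among k random n-bit values
    return 1 - math.exp(-k * (k - 1) / 2.0 ** (n_bits + 1))

print(collision_probability(2 ** 80, 160))  # ~0.39: collisions become likely around 2^80 inputs
```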
Now we strongly suspect that SHA-1 does *not* really behave like a random oracle, because the birthday-paradox approach is the optimal collision searching algorithm for a random oracle. Yet there is a published attack which should find a collision in about 2^63 steps, hence 2^17 = 131072 times faster than the birthday-paradox algorithm. Such an attack should not be doable on a true random oracle. Mind you, this attack has not been actually completed, it remains theoretical (some people [tried but apparently could not find enough CPU power](http://boinc.iaik.tugraz.at/))(**Update:** as of early 2017, somebody *did* compute a [SHA-1 collision](https://shattered.io/) with the above-mentioned method, and it worked exactly as predicted). Yet, the theory looks sound and it really seems that SHA-1 is not a random oracle. Correspondingly, as for the probability of collision, well, all bets are off.
As for your third question: for a function with an *n*-bit output, there necessarily are collisions if you can input more than 2^*n* distinct messages, i.e. if the maximum input message length is greater than *n*. With a bound *m* lower than *n*, the answer is not as easy. If the function behaves as a random oracle, then the probability of the existence of a collision lowers with *m*, and not linearly, but rather with a steep cutoff around *m=n/2*. This is the same analysis as the birthday paradox. With SHA-1, this means that if *m < 80* then chances are that there is no collision, while *m > 80* makes the existence of at least one collision very probable (with *m > 160* this becomes a certainty).
Note that there is a difference between "there exists a collision" and "you find a collision". Even when a collision *must* exist, you still have your 2^(-160) probability every time you try. What the previous paragraph means is that such a probability is rather meaningless if you cannot (conceptually) try 2^160 pairs of strings, e.g. because you restrict yourself to strings of less than 80 bits.
|
Search for Entity that has multiple tags in doctrine
I have a many-to-many relation between Document and Tag. So a `Document` can have several `Tag`s, and one `Tag` can be assigned to different `Document`s.
This is `Tag`
```
AppBundle\Entity\Tag:
type: entity
table: tags
repositoryClass: AppBundle\Repository\TagRepository
manyToMany:
documents:
targetEntity: Document
mappedBy: tags
id:
id:
type: integer
id: true
generator:
strategy: AUTO
fields:
label:
type: string
length: 255
unique: true
```
And `Document`
```
AppBundle\Entity\Document:
type: entity
table: documents
repositoryClass: AppBundle\Repository\DocumentRepository
manyToMany:
tags:
targetEntity: Tag
inversedBy: documents
joinTable:
name: documents_tags
id:
id:
type: integer
id: true
generator:
strategy: AUTO
fields:
title:
type: string
length: 255
```
Now I want to search for all Documents that have the tags `animal` and `fiction`. How can I achieve that with doctrine?
Something like
```
$repository = $this->getDoctrine()->getRepository('AppBundle:Document');
$query = $repository->createQueryBuilder('d');
$query ->join('d.tags', 't')
->where($query->expr()->orX(
$query->expr()->eq('t.label', ':tag'),
$query->expr()->eq('t.label', ':tag2')
))
->setParameter('tag', $tag)
->setParameter('tag2', $tag2)
```
won't do the job, because it returns all Documents that have either `tag1` or `tag2`. But `andX` won't work either, because there is no single tag that has both labels.
| You can achieve this with additional inner joins for each tag:
Example:
```
use Doctrine\ORM\Query\Expr\Join;

$em = $this->getDoctrine()->getManager();
$repository = $this->getDoctrine()->getRepository('AppBundle:Document');
$query = $repository->createQueryBuilder('d');
$query->innerJoin('d.tags', 't1', Join::WITH, 't1.label = :tag1');
$query->innerJoin('d.tags', 't2', Join::WITH, 't2.label = :tag2');
$dql = $query->getDql();
$result = $em->createQuery($dql)
->setParameter('tag1', 'LabelForTag1')
->setParameter('tag2', 'LabelForTag2')
->getResult();
```
Maybe this little image helps in understanding what this query does. The whole circle represents all your documents. If you are using only a single join, the query will return either the green+red or the blue+red part.
Using an additional inner join, you will only get the intersection of the joins seen individually (which is only the red part).
[![enter image description here](https://i.stack.imgur.com/8wdU5.png)](https://i.stack.imgur.com/8wdU5.png)
If you have even more tags to search for, you can simply add another join for that.
|
REST-based desktop application
We have a highly efficient library written in a low-level programming language. We would like to allow third parties to implement a GUI for it.
The approach we would like to take is to write a REST server. The GUI (written in whatever language) needs to start the server and is then able to use the library.
As said, the goal is to create a local desktop application, so the server should only listen on localhost and only accept the GUI as a client (the latter may be solved via auth).
Is there a reason such an approach is not used more often (I could hardly find anything)? The only place it is mentioned seems to be [The Modern Application Stack – Part 3: Building a REST API Using Express.js](https://www.mongodb.com/blog/post/the-modern-application-stack-part-3-building-a-rest-api-using-expressjs) as "*... MERN (MongoDB, Express, React, Node.js) Stacks, why you might want to use them, and how to combine them to build your web application (or your native mobile or desktop app).*"
Are there tutorials or special architectural patterns?
I found the following resources:
- <https://www.fyears.org/2017/02/electron-as-gui-of-python-apps-updated.html>
- <https://www.reddit.com/r/rust/comments/49c95n/nativelooking_ui_with_rust/d0quiia/>
| Splitting a desktop application into server and client is not that common. But it is also not unheard of. Linux's X Server might be a good example of that.
The reason it is not used more often is that the API between the client and server is quite rigid and strict. The question is whether the advantages of that approach (running in separate processes; the ability to use different languages, frameworks, or development approaches on either side; improved security; etc.) outweigh the inflexibility stemming from the hard separation between client and server. In the majority of cases, they do not. But in some specific cases they might.
In your case, it could make sense, as it would allow the UI to be developed on a completely separate stack from the computation library. And keeping them in separate processes would shield each side from possible stability issues in the other.
Also, I would stop focusing so much on "REST". The core issue here is separation of UI and background logic into separate processes. How those processes communicate is secondary.
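Just to illustrate the localhost-only part (this is not tied to any particular stack; the port and payload here are made up): binding the server to the loopback interface keeps it reachable from local processes but not from the network. A minimal Python sketch:
```
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stand-in for a call into the native library.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

# 127.0.0.1 (not 0.0.0.0) means only local clients, such as the GUI, can connect.
HTTPServer(("127.0.0.1", 8642), ApiHandler).serve_forever()
```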
|
JavaScript: Object destructuring dynamically via computed keys
I'm trying to write a curried function dissoc, that takes in a prop and an object and removes the key and its value from the object.
Here is the code that works:
```
const dissoc = prop => obj => {
const result = {};
for (let p in obj) {
result[p] = obj[p];
}
delete result[prop];
return result;
};
```
Now, I thought this could be written more concise by using [computed properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#New_notations_in_ECMAScript_2015), destructuring and the rest operator:
```
const dissoc = prop => ({ [prop], ...obj}) => obj
```
But this code throws with:
```
Uncaught SyntaxError: Unexpected token ,
```
So I tried:
```
const dissoc = prop => ({ prop, ...obj}) => obj
```
which computes, but just returns the object without the prop.
Is there a way to dynamically destructure and / or simplify this code?
| You need a target variable, for example an underscore as a sign for an unused variable/placeholder.
```
const dissoc = prop => ({ [prop]: _, ...obj}) => obj
```
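For example (a quick sketch of how it behaves):
```
const dissoc = prop => ({ [prop]: _, ...obj }) => obj;

console.log(dissoc('b')({ a: 1, b: 2, c: 3 })); // { a: 1, c: 3 }
```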
Javascript does not have the concept of variable variables, like PHP with `$$variable`, where the value of `$variable` is taken as a variable name. In this case, you need a target variable for the [computed property name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#Computed_property_names); without a real variable, it does not work.
Maybe worth a read: [Object property assignment pattern [YDKJS: ES6 & Beyond]](https://github.com/getify/You-Dont-Know-JS/blob/master/es6%20%26%20beyond/ch2.md#object-property-assignment-pattern)
|
Change how the volume scales? (The loudness of the sound at 100% volume)
Bit of an odd question here. Everything I have found is about making 100% volume louder, but on my speakers, 100% volume is *deafening*, so I have to keep my volume bar at about 20-30%. I want to allow for finer control at low volume: if the new "100%" were the same as the current 20-30%, the gradations would be finer. Is this possible?
Ubuntu 16.04 LTS running unity.
| Run the `paprefs` command. In the "PulseAudio Preferences" window, go to "Simultaneous Output" and check the checkbox "Add virtual output device...". Close the window.
Run `pavucontrol`. In "Output Devices" select "Show: All Output Devices" (if not already selected). Make the virtual device "Simultaneous output to..." your default device (click on the green button on the right). Now your volume controls should work on this device.
Set the output level of your physical output device to 30% or whatever fits your needs.
If everything is correct, now the output volume control should be finer, as 100% of the default "simultaneous output" should be the 30% of your physical device.
If your PA database has remembered the (physical) output device for some applications, you might have to change that in the "Playback" section of `pavucontrol` when your application is running.
Remark: This is working for 16.04 with xfce4 volume control. Maybe you have to manually modify something in unity to have your volume controls modifying the PA default. You could also play around a bit with `pavucontrol` to see what can be done, especially with virtual devices.
**Edit:**
Usually, after having checked the item in `paprefs` and rebooting, the virtual output device should be displayed. If you cannot (or don't want to) reboot, or
if the output device "Simultaneous output..." is not listed, maybe the corresponding PA module "[module-combine-sink](https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules/#index11h3)" is not loaded. You can check with `pacmd list-modules`; there it should be shown like
```
$ pacmd list-modules
26 module(s) loaded.
.
.
index: 26
name: <module-combine-sink>
argument: <>
used: -1
load once: no
properties:
module.author = "Lennart Poettering"
module.description = "Combine multiple sinks to one"
module.version = "8.0"
```
(your configuration and therefore the numbers may vary).
If it is not listed, you can load the module with
```
$ pacmd load-module module-combine-sink
```
It should then be visible in the output devices list immediately.
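If you want the module loaded automatically on every start without going through `paprefs`, you can (assuming a standard PulseAudio setup) add the corresponding line to your PulseAudio configuration, e.g. `~/.config/pulse/default.pa` or `/etc/pulse/default.pa`:
```
load-module module-combine-sink
```
Note that a user-level `default.pa` replaces the system one, so it usually starts with `.include /etc/pulse/default.pa`.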
|
Defining a Theory of Sets with Z3/SMT-LIB2
I'm trying to define a theory of sets (union, intersection etc.)
for Z3 using the SMTLIB interface. Unfortunately, my current
definition hangs z3 for a trivial query, so I guess I'm missing
some simple option/flags.
here's the permalink: <http://rise4fun.com/Z3/JomY>
```
(declare-sort Set)
(declare-fun emp () Set)
(declare-fun add (Set Int) Set)
(declare-fun cup (Set Set) Set)
(declare-fun cap (Set Set) Set)
(declare-fun dif (Set Set) Set)
(declare-fun sub (Set Set) Bool)
(declare-fun mem (Int Set) Bool)
(assert (forall ((x Int)) (not (mem x emp))))
(assert (forall ((x Int) (s1 Set) (s2 Set))
(= (mem x (cup s1 s2)) (or (mem x s1) (mem x s2)))))
(assert (forall ((x Int) (s1 Set) (s2 Set))
(= (mem x (cap s1 s2)) (and (mem x s1) (mem x s2)))))
(assert (forall ((x Int) (s1 Set) (s2 Set))
(= (mem x (dif s1 s2)) (and (mem x s1) (not (mem x s2))))))
(assert (forall ((x Int) (s Set) (y Int))
(= (mem x (add s y)) (or (mem x s) (= x y)))))
(declare-fun z3v8 () Bool)
(assert (not z3v8))
(check-sat)
```
Any hint as to what I'm missing?
Also, from what I can tell there is no standard SMT-LIB2
encoding of the set operations e.g. `Z3.mk_set_{add,del,empty,...}`
(which is why I'm trying to get that functionality via quantifiers.)
Is that correct? Or is there another route?
Thanks!
Ranjit.
| Your formula is satisfiable, and Z3 is not capable of producing a model for this kind of formula. Note that it would have to generate an interpretation for the uninterpreted sort `Set`. There are a couple of alternatives you may consider.
1- Disable the model-based quantifier instantiation (MBQI) module. BTW, all Boogie-based tools (VCC, Dafny, Coral, etc) do that. To disable the MBQI module, we have to use
```
(set-option :auto-config false)
(set-option :mbqi false)
```
Remark: in the work-in-progress branch, the option `:mbqi` has been renamed to `:smt.mbqi`.
Cons: when the MBQI module is disabled, Z3 will usually return `unknown` for satisfiable formulas that contain quantifiers.
2- Encode sets of T as Arrays from T to Boolean. Z3 supports extended array theory. The extended theory has two new operators: `((_ const T) a)` constant arrays, `((_ map f) a b)` map operator. [This paper](http://scholar.google.com/citations?view_op=view_citation&hl=en&user=CwazDKgAAAAJ&cstart=20&citation_for_view=CwazDKgAAAAJ%3aSe3iqnhoufwC) describes the extended array theory, and how to encode set operations such as union and intersection using it. The [rise4fun](http://rise4fun.com/z3) website has examples.
This is a good alternative if these are the only quantifiers in your problem because the problem is now in a decidable fragment. On the other hand, if you have additional quantified formulas that contain sets, then this will probably perform poorly. The problem is that the model built by the array theory is unaware of the existence of the additional quantifiers.
For example of how to encode the above operators using const and map see: <http://rise4fun.com/Z3/DWYC>
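A small sketch of what that encoding looks like in SMT-LIB2 (just illustrating the operators; the function names here are mine):
```
(define-sort Set () (Array Int Bool))
(define-fun set-empty () Set ((as const (Array Int Bool)) false))
(define-fun set-union ((a Set) (b Set)) Set ((_ map or) a b))
(define-fun set-inter ((a Set) (b Set)) Set ((_ map and) a b))
(define-fun set-mem ((x Int) (s Set)) Bool (select s x))
```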
3- Represent sets of T as functions from T to Bool. This approach usually works well if we don't have sets of sets, or uninterpreted functions that take sets as arguments. the Z3 online tutorial has an example (Quantifiers section).
|
Count number of nodes within range inside Binary Search Tree in O(LogN)
Given a BST and two integers 'a' and 'b' (a < b), how can we find the number of nodes such that a < node value < b, in O(log n)?
I know one can easily find the positions of a and b in O(log n) time, but how do you count the nodes in between without doing a traversal, which is O(n)?
| In each node of your Binary Search Tree, also keep a count of the number of values in the tree that are less than its value (or, for a different tree design mentioned in the footnote below, the number of nodes in its left subtree).
Now, first find the node containing the value `a`. Get the count of values lesser than `a` which has been stored in this node. This step is Log(n).
Now find the node containing the value `b`. Get the count of values lesser than `b` which are stored in this node. This step is also Log(n).
Subtract the two counts and you have the number of nodes between `a` and `b`. Total complexity of this search is 2\*Log(n) = O(Log(n)).
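A sketch of the counting step, assuming integer keys and that each node stores `size` (the node count of its subtree, maintained on insert/delete):
```
def rank(node, v):
    """Number of keys strictly less than v under node; O(log n) in a balanced tree."""
    if node is None:
        return 0
    if v <= node.value:
        return rank(node.left, v)          # node and its right subtree are all >= v
    left_size = node.left.size if node.left else 0
    return left_size + 1 + rank(node.right, v)

def count_between(root, a, b):
    # Keys v with a < v < b; for integers, that is (# keys < b) - (# keys < a + 1).
    return rank(root, b) - rank(root, a + 1)
```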
---
*See [this video](https://youtu.be/zksIj9O8_jc?t=22m15s). The professor explains your question here by using Splay Trees.*
|
Regex named capture groups in Delphi XE
I have built a match pattern in RegexBuddy which behaves exactly as I expect. But I cannot transfer this to Delphi XE, at least when using the latest built in TRegEx or TPerlRegEx.
My real world code have 6 capture group but I can illustrate the problem in an easier example. This code gives "3" in first dialog and then raises an exception (-7 index out of bounds) when executing the second dialog.
```
var
Regex: TRegEx;
M: TMatch;
begin
Regex := TRegEx.Create('(?P<time>\d{1,2}:\d{1,2})(?P<judge>.{1,3})');
M := Regex.Match('00:00 X1 90 55KENNY BENNY');
ShowMessage(IntToStr(M.Groups.Count));
ShowMessage(M.Groups['time'].Value);
end;
```
But if I use only one capture group
```
Regex := TRegEx.Create('(?P<time>\d{1,2}:\d{1,2})');
```
The first dialog shows "2" and the second dialog will show the time "00:00" as expected.
However this would be a bit limiting if only one named capture group was allowed, but thats not the case... If I change the capture group name to for example "atime".
```
var
Regex: TRegEx;
M: TMatch;
begin
Regex := TRegEx.Create('(?P<atime>\d{1,2}:\d{1,2})(?P<judge>.{1,3})');
M := Regex.Match('00:00 X1 90 55KENNY BENNY');
ShowMessage(IntToStr(M.Groups.Count));
ShowMessage(M.Groups['atime'].Value);
end;
```
I'll get "3" and "00:00", just as expected. Is there reserved words I cannot use? I don't think so because in my real example I've tried completely random names. I just cannot figure out what causes this behaviour.
| When [pcre\_get\_stringnumber](http://regexkit.sourceforge.net/Documentation/pcre/pcre_get_stringnumber.html) does not find the name, `PCRE_ERROR_NOSUBSTRING` is returned.
`PCRE_ERROR_NOSUBSTRING` is defined in RegularExpressionsAPI as `PCRE_ERROR_NOSUBSTRING = -7`.
Some testing shows that `pcre_get_stringnumber` returns `PCRE_ERROR_NOSUBSTRING` for every name whose first letter is in the range `k` to `z`, and that range is dependent on the first letter in `judge`. Changing `judge` to something else changes the range.
As I see it, there are at least two bugs involved here: one in `pcre_get_stringnumber`, and one in `TGroupCollection.GetItem`, which needs to raise a proper exception instead of `SRegExIndexOutOfBounds`.
|
Vue trimming white spaces
I'm using vue.js in my app, and when displaying some content, vue is removing spaces when there's more than one space between words. Unfortunately I can't reproduce this on a fiddle (not sure why). I'm not familiar with vue (I'm more of a back-end dev), so I'm sorry for the lack of details. The HTML code to display is this: `<div slot="body" v-html="viewingEmail.message"></div>`. A sample content would be any phrase that has two spaces, for example: `Hello,  how are you?`. In that case, the app will display `Hello, how are you?` with a single space.
Our vue dependencies are:
```
"vue": "^2.4.2",
"vue-cookie": "^1.1.4",
"vue-flatpickr": "^2.3.0",
"vue-js-toggle-button": "^1.1.2",
"vue-loader": "^11.3.4",
"vue-resource": "^1.0.3",
"vue-select": "^2.2.0",
"vue-slider-component": "^2.3.6",
"vue-star-rating": "^1.4.0",
"vue-template-compiler": "^2.4.2",
"vue2-dropzone": "^2.2.7",
"vuedraggable": "^2.15.0",
"vuejs-paginate": "^1.1.0",
"vuex": "^2.2.1",
```
| Vue is not trimming spaces. That's just how HTML works.
The space is there, see demo below.
```
new Vue({
el: '#app',
data: {
    message: 'Hello,    Vue.js!'
},
mounted() {
console.log('Notice how the spaces exist in HTML, even though they are not displayed.');
console.log('divHTML', this.$refs.divHTML.outerHTML);
console.log('divTEXT', this.$refs.divTEXT.outerHTML);
}
})
```
```
<script src="https://unpkg.com/vue@2.5.16/dist/vue.min.js"></script>
<div id="app">
v-html: <div v-html="message" ref="divHTML"></div>
v-text: <div v-text="message" ref="divTEXT"></div>
</div>
```
You could just replace space chars with the `&nbsp;` HTML entity, but that would mess up nested elements' attributes.
**My suggestion:** [use `white-space: pre-wrap;` style](https://developer.mozilla.org/en-US/docs/Web/CSS/white-space).
See demo below.
```
new Vue({
el: '#app',
data: {
    message: 'Hello,    Vue.js!'
}
})
```
```
.keep-spaces { white-space: pre-wrap; }
```
```
<script src="https://unpkg.com/vue@2.5.16/dist/vue.js"></script>
<div id="app">
<h3>With "white-space:pre-wrap;" spaces are preserved visually.</h3>
v-html: <div v-html="message" class="keep-spaces"></div>
v-text: <div v-text="message" class="keep-spaces"></div>
</div>
```
|
Rails and exporting a console query to csv
I have the following query I am running...
```
require 'csv'
require File.dirname(__FILE__) + '/config/environment.rb'
file = "#{Rails.root}/public/data.csv"
registrations = OnlineCourseRegistration.where(course_class_id: 681).where(status: "Completed").where("score >= ?", "80").where("exam_completed_at BETWEEN ? AND ?", 3.years.ago, Date.today)
CSV.open( file, 'w' ) do |writer|
writer << registrations.first.attributes.map { |a,v| a }
registrations.each do |s|
if s.user_id
writer << User.find(s.user_id).email
end
writer << s.attributes.map { |a,v| v }
end
end
```
This is failing on the line `writer << User.find(s.user_id).email` with the error:
```
`<<': undefined method `map' for "my_user@yahoo.com":String (NoMethodError)`
```
Basically, I just want to add a column with the user's email address in it.
# Update
Here is the current output w/out the email field
```
id cart_id user_id course_class_id created_at updated_at exam_attempts exam_completed_at evaluation_completed_at status score add_extension retest_cart_id retest_purchased_at
11990 10278 6073 681 2014-10-30 20:34:18 UTC 2014-12-17 14:48:39 UTC 2 2014-12-17 03:16:44 UTC 2014-12-17 14:48:39 UTC Completed 90 FALSE
11931 10178 6023 681 2014-09-02 22:35:08 UTC 2015-02-24 03:58:03 UTC 1 2015-02-24 03:56:12 UTC 2015-02-24 03:58:03 UTC Completed 80 FALSE
12015 10316 6089 681 2014-11-15 14:31:05 UTC 2014-11-18 20:14:13 UTC 1 2014-11-18 20:11:46 UTC 2014-11-18 20:14:13 UTC Completed 82 FALSE
12044 10358 6103 681 2014-12-03 15:56:39 UTC 2014-12-06 23:05:18 UTC 2 2014-12-06 23:02:13 UTC 2014-12-06 23:05:18 UTC Completed 94 FALSE
```
So I would like to append an email field to each of the colums above.
| This is pretty ghetto. But full disclosure, it's an ad-hoc report a customer requested and it does what they need it to. I just concatenated `[User.find(s.user_id).email]` onto the `writer` statement. It looks like simply wrapping brackets around it is enough to do the trick. Here is the full code...
```
require 'csv'
require File.dirname(__FILE__) + '/config/environment.rb'
file = "#{Rails.root}/public/data.csv"
registrations = OnlineCourseRegistration.where(course_class_id: 681).where(status: "Completed").where("score >= ?", "80").where("exam_completed_at BETWEEN ? AND ?", 3.years.ago, Date.today)
CSV.open( file, 'w' ) do |writer|
writer << registrations.first.attributes.map { |a,v| a }
registrations.each do |s|
writer << s.attributes.map { |a,v| v } + [User.find(s.user_id).email]
end
end
```
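One caveat with this: the header row no longer lines up with the data rows, which now carry one extra column. A small tweak keeps them aligned (the "email" header name is just a suggestion):
```
writer << registrations.first.attributes.map { |a,v| a } + ["email"]
```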
|
Why does virtualbox only have 32-bit option, no 64-bit option on Windows 7?
I have downloaded and installed the newest [virtualbox 4.3.20](https://www.virtualbox.org/wiki/Downloads) for my `Windows 7 (64-bit OS)`, but when I want to install `64-bit Linux`, the *New* -> *Create Virtual Machine* dialog only displays `32-bit` options, no `64-bit` options. Could anyone give any clues about this issue? Thanks very much in advance!
| Take a look: <http://www.fixedbyvonnie.com/2014/11/virtualbox-showing-32-bit-guest-versions-64-bit-host-os/>
If VirtualBox is only showing 32-bit versions in the Version list make sure:
- You have an x64 CPU installed. (Optimally, [a 64-bit OS should also be installed to receive acceptable virtualization performance](http://www.virtualbox.org/manual/ch03.html#intro-64bitguests).)
- Hardware virtualization is enabled in the BIOS. (Your CPU must support it.)
- For Intel x64: VT-x (Intel Virtualization Technology) and VT-d are both enabled
- For AMD x64: AMD SVM (Secure Virtual Machine) is enabled
- Hyper-V (or any other form of bare-metal hypervisor) is not installed
|
Writing a Makefile.am to invoke googletest unit tests
I am trying to add my first unit test to an existing Open Source project. Specifically, I added a new class, called audio\_manager:
```
src/audio/audio_manager.h
src/audio/audio_manager.cc
```
I created a src/test directory structure that mirrors the structure of the implementation files, and wrote my [googletest](http://code.google.com/p/googletest/) unit tests:
```
src/test/audio/audio_manager.cc
```
Now, I am trying to set up my Makefile.am to compile and run the unit test:
```
src/test/audio/Makefile.am
```
I copied Makefile.am from:
```
src/audio/Makefile.am
```
Does anyone have a simple recipe for me, or is it off to the cryptic automake documentation for me? :)
| William's answer got me where I needed to go. Just for the sake of the community, here's what I ended up doing:
1. I moved my tests back into the main directory structure and prepended test\_, as per William's suggestions.
2. I added a few lines to src/audio/Makefile.am to enable unit tests:
```
# Unit tests
noinst_PROGRAMS = test_audio_manager
test_audio_manager_SOURCES = $(libadonthell_audio_la_SOURCES) test_audio_manager.cc
test_audio_manager_CXXFLAGS = $(libadonthell_audio_la_CXXFLAGS)
test_audio_manager_LDADD = $(libadonthell_audio_la_LIBADD) -lgtest
TESTS = test_audio_manager
```
3. Now, running "make check" fires the unit tests!
All of this can be seen here: <http://github.com/ksterker/adonthell/commit/aacdb0fe22f59e61ef0f5986827af180c56ae9f3>
|
Get browser height from within a frame
I'm using Moodle: ![moddle](https://i.stack.imgur.com/veqG0.png)
I am trying to run some JavaScript from within the frame to get the browser window height, so I can alert the user if they are using a less-than-optimal browser size.
I've tried things like:
```
window.top.document.body.offsetHeight
document.documentElement.clientHeight
$(document).height()
$(window).height()
```
But they all give static heights which never change when I resize the browser.
Oh and it has to work in IE8.
| You have to use the `parent` property of the iFrame window:
Run this code from within the iFrame, it will return the height of the parent window:
```
$(window.parent).height();
```
However, if you're using jQuery 1.8.0 this may not work if your browser is in [quirks mode](http://en.wikipedia.org/wiki/Quirks_mode). As of jQuery 1.8.0, the command `$(window).height()` stopped working for Internet Explorer in quirks mode and they [don't plan to fix it](http://bugs.jquery.com/ticket/12310).
If you're using jQuery 1.8.0, use this slight variation instead:
```
$(window.parent.document).height();
```
This will ensure cross-browser support, even in IE quirks mode ;)
<http://www.w3schools.com/jsref/prop_win_parent.asp>
|
C# GetHashCode with two Int16, also returns only up to Int32?
Sorry to combine two questions into one, they are related.
`HashCode`s are for `HashSet`s and the like. As I understand it, they must be unique, not change, and represent any configuration of an object as a single number.
My first question is that for my object, containing the two Int16s `a` and `b`, is it safe for my `GetHashCode` to return something like `a * n + b` where n is a large number, I think perhaps `Math.Pow(2, 16)`?
Also `GetHashCode` appears to inflexibly return specifically the type Int32.
32bits can just about store, for example, two Int16s, a single unicode character or 16 N, S, E, W compass directions, it's not much, even something like a small few node graph would probably be too much for it. Does this represent a limit of C# Hash collections?
|
>
> As I understand it, they must be unique
>
>
>
Nope. They can't possibly be unique for most types, which can have more than 2³² possible values. Ideally, if two objects have the same hash code then they're *unlikely* to be equal - but you should never assume that they *are* equal. The important point is that if they have *different* hash codes, they should definitely be *unequal*.
>
> My first question is that for my object, containing the two Int16s a and b, is it safe for my GetHashCode to return something like a \* n + b where n is a large number, I think perhaps Math.Pow(2, 16).
>
>
>
If it *only* contains two `Int16` values, it would be simplest to use:
```
return (a << 16) | (ushort) b;
```
Then the value *will* be unique. Hoorah!
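As a minimal sketch of how that could look for a type with two `Int16` fields (the type and member names here are made up):
```
public struct Pair
{
    public readonly short A;
    public readonly short B;

    public Pair(short a, short b) { A = a; B = b; }

    // Packs both 16-bit values into the 32-bit hash, so it is collision-free.
    public override int GetHashCode() => (A << 16) | (ushort) B;

    public override bool Equals(object obj) =>
        obj is Pair other && A == other.A && B == other.B;
}
```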
>
> Also `GetHashCode` appears to inflexibly return specifically the type `Int32`.
>
>
>
Yes. Types such as `Dictionary` and `HashSet` need to be able to use the fixed size so they can work with it to put values into buckets.
>
> 32 bits can just about store, for example, two Int16s, a single Unicode character, or 16 N, S, E, W compass directions. It's not much; even something like a small few-node graph would probably be too much for it. Does this represent a limit of C# hash collections?
>
>
>
If it *were* a limitation, it would be a .NET limitation rather than a C# limitation - but no, it's just a misunderstanding of what hash codes are meant to represent.
Eric Lippert has an excellent (obviously) [blog post about `GetHashCode`](http://blogs.msdn.com/b/ericlippert/archive/2011/02/28/guidelines-and-rules-for-gethashcode.aspx) which you should read for more information.
|
"User-friendly" but secure algorithm for anonymising log files
I have a set of IIS log files that I'd like to publish for a research study.
However, these contain some sensitive information that I would like to anonymise, eg:
```
UserName=XXXX65
```
I'd like to use an algorithm that retains some "user friendly"-ness for visual inspection of the log files, but which is also secure enough it is impossible / impractical to derive the original UserNames.
I can't just `***` out all the UserNames, since it is important to be able to correlate requests from the same username across the logs.
Using SHA1 hashing gives me something like
```
UserName=AD5CBF0BA0A8646EBDBA6BE1B5DA4FCB1F385D39
```
which is just about usable,
SHA256 gives:
```
UserName=C9B84EE0DD2EFA53645D5268602E23A9E788903B31BBEB99C03982D9B50AF70C
```
Which is starting to get too long to be usable,
and [PBKDF2-SHA1 hashing](http://crackstation.net/hashing-security.htm#properhashing) gives
```
UserName=1000:153JkeeGAqtG2UsHX57RBqm3O0DIkXhF:31BBDlQrUqqeyaMo/ikCJAXRC4fFXf82
```
which in my opinion is much too long to be usable.
Is there an algorithm that gives a relatively short one way hash but remains secure / non--reversible?
I'm looking for something where you can scan the log files with your eye, and still notice UserName correlations.
| **One way hashes aren't really anonymous.** Why? One can easily verify which user corresponds to which hash:
1. Say `"root"` is a user.
2. You apply `hash("root")` and it turns out the result is `foo`. You publish logs containing several references to `foo`.
3. I make a smart guess that `root` is a user on your machine. I then apply `hash("root")` myself and obtain `foo`. Now I know which logs correspond to `"root"`.
So in essence: Hashes are useful when you later want to be able to verify from the published logs that a certain user was the cause of a certain log. Not when the goal is anonymity.
**Plus, hashes are difficult to read.**
**I'd generate random pronounceable strings, and map one to each user name.** Then publish the logs using the random strings. Truly anonymous and truly readable.
How to produce random pronounceable strings? **Alternate consonants and vowels. Here's how to do it with C** (of course, this only produces a random 6 character string. You need more logic to go with it when processing your logs, like: mapping each user name to a string, making sure strings are unique):
```
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#define NAME_LENGTH 6
#define RAND_CHAR(string) \
( (string)[rand () % strlen (string)])
int main (void)
{
char vowel[] = "aeiou";
char consonant[] = "bcdfghjklmnpqrstvwxyz";
int i;
char rand_name[NAME_LENGTH + 1];
srand (time (NULL));
for (i = 0; i < NAME_LENGTH; i++)
rand_name[i] = (i % 2) ? RAND_CHAR (vowel) : RAND_CHAR (consonant);
rand_name[NAME_LENGTH] = '\0';
printf ("%s\n", rand_name);
return 0;
}
```
**Here's some examples it produced for me:**
*cemala
gogipa
topeqe
lixate
fasota
rironu*
If the number of users you serve is comparable to *125 \* 21³* (the number of distinct six-letter names this scheme can produce), you need to generate longer strings, and maybe use separators to make them easy to pronounce:
*cemala-gogipa*
|
CMake add\_definitions does not seem to work
I'm trying to define a preprocessor variable via CMake, but unfortunately it does not seem to work.
The following is my c++ code:
```
#ifdef hehe_test
#define it_exists "it_exists"
#endif
#ifndef hehe_test
#define it_exists "it_doesnt_exist"
#endif
int main(int argc, char** argv)
{
printf("%s\n",it_exists);
return 0;
}
```
And this is my CMakeLists.txt:
```
project(LibraryTester)
add_definitions(hehe_test)
file(GLOB src "*.h" "*.cpp")
include_directories(${CMAKE_SOURCE_DIR}/include)
add_executable( ${PROJECT_NAME} ${src})
```
When i run my program it outputs: it\_doesnt\_exist
indicating "hehe\_test" has not been defined.
What am i doing wrong here?
| The following form won't do what you expect in any case:
```
add_definitions(hehe_test)
```
According to the [documentation](https://cmake.org/cmake/help/latest/command/add_definitions.html) the form is:
```
add_definitions(-DFOO -DBAR ...)
```
In your case it means:
```
add_definitions(-Dhehe_test)
```
You can also give them a value if required:
```
# statically defined
add_definitions(-Dfoo=bar)
# use a cmake or user-defined variable
add_definitions(-Dbar=${FOO})
# use "" if you intend to pass strings with spaces
add_definitions(-Dxxx="${YYY}")
```
Note that you can put all of them together in the same `add_definitions` call.
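If your CMake is 2.8.11 or newer, you can also scope the definition to a single target with `target_compile_definitions`, so it doesn't leak into every target in the directory (a sketch; note it must come after `add_executable`, and no `-D` prefix is needed):
```
target_compile_definitions(${PROJECT_NAME} PRIVATE hehe_test)
```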
|
Better way to load vectors from memory. (clang)
I'm writing a test program to get used to Clang's language extensions for OpenCL style vectors. I can get the code to work but I'm having issues getting one aspect of it down. I can't seem to figure out how to get clang to just load in a vector from a scalar array nicely.
At the moment I have to do something like:
```
byte16 va = (byte16){ argv[1][start], argv[1][start + 1], argv[1][start + 2],
argv[1][start + 3], argv[1][start + 4], argv[1][start + 5],
argv[1][start + 6], argv[1][start + 7], argv[1][start + 8],
argv[1][start + 9], argv[1][start + 10], argv[1][start + 11],
argv[1][start + 12], argv[1][start + 13], argv[1][start + 14],
argv[1][start + 15]};
```
I would ideally like something like this:
```
byte16 va = *(byte16 *)(&(argv[1][start]));
```
Which I can easily do using the proper intrinsics for ARM or x86. But that code causes the program to crash although it compiles.
| One of the reasons the crash might occur on x86 is due to alignment issues. I do not have clang on my system to reproduce the problem, but I can demonstrate it using GCC as an example.
If you do something like:
```
/* Define a vector type of 16 characters. */
typedef char __attribute__ ((vector_size (16))) byte16;
/* Global pointer. */
char * foo;
byte16 test ()
{
return *(byte16 *)&foo[1];
}
```
Now if you compile it on a vector-capable x86 with:
```
$ gcc -O3 -march=native -mtune=native a.c
```
You will get the following assembly for test:
```
test:
movq foo(%rip), %rax
vmovdqa 1(%rax), %xmm0
ret
```
Please note that the move is aligned, which is of course wrong. Now, if you inline this function into main, so that you have something like:
int main ()
{
foo = __builtin_malloc (22);
byte16 x = *(byte16 *)&foo[1];
return x[0];
}
```
You will be fine, and you will get an unaligned instruction. This is kind of a bug which doesn't have a very good fix in the compiler, as it would require interprocedural optimisations with the addition of new data structures, etc.
The origin of the problem is that the compiler assumes that vector types are aligned, so when you dereference an array of aligned vector types you can use an aligned move. As a workaround for the problem in GCC one can define an unaligned vector type like:
```
typedef char __attribute__ ((vector_size (16),aligned (1))) unaligned_byte16;
```
And use it to dereference unaligned memory.
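Another portable way to express an unaligned load, which modern compilers lower to a single unaligned vector move, is a `memcpy` into the vector (a sketch reusing the `byte16` typedef from above):
```
#include <string.h>

byte16 load_byte16(const char *p)
{
    byte16 v;
    memcpy(&v, p, sizeof v);  /* no alignment assumption; compiles to one unaligned move */
    return v;
}
```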
I am not sure that you are hitting exactly this problem within your setup, but this is something that I would recommend checking by inspecting the assembly output from your compiler.
|
Python : Convert 2 ints to float
I have been trying to convert a set of two 16-bit integers into a single 32-bit floating point number. This is for my work on the MODBUS communication protocol.
My query is the same as mentioned [here](https://stackoverflow.com/questions/3063078/python-convert-2-ints-to-32-float)
However, when I follow the pack/unpack approach provided there, I do not get similar results.
Here is my test program:
```
from struct import *
tup=(16256, 0)
print hex(tup[0])
print hex(tup[1])
mypack = pack('>HH',tup[0],tup[1])
print mypack
f = unpack('f', mypack)[0]
print f
```
And the Output is as below:
```
0x3f80
0x0
?
4.60060298822e-41
```
I expect the final output to be actually just '1.0' as the floating point interpretation of '0x3F800000' is just that!
I am running this code on a Beagle Bone Black board, powered by Debian OS.
Where am I going wrong?
| You need to specify the endianness in your unpack call.
```
from struct import *
tup=(16256, 0)
print hex(tup[0])
print hex(tup[1])
mypack = pack('>HH',tup[0],tup[1])
print `mypack`
f = unpack('>f', mypack)
print f
```
**output**
```
0x3f80
0x0
'?\x80\x00\x00'
(1.0,)
```
But please bear in mind the warnings / caveats mentioned in the question you linked to, especially those in regard to NaN.
Also note that it's generally considered bad practice to do `from modulename import *` because it dumps the names from `modulename` into your script's namespace. This is messy, and it can lead to bugs. So it's better to do
```
import struct
#...
mypack = struct.pack('>HH',tup[0],tup[1])
f = struct.unpack('>f', mypack)
```
Sure, it's a little bit more to type, but it makes your code a lot easier to read and maintain.
|
What does '&' mean in C++?
For example, what does the `&` do in the following function?
```
void Read_wav::read_wav(const string &filename)
{
}
```
And what is its equivalent in C?
If I want to transform the above C++ function into a C function, how would I do it?
| In that context, the & makes the variable a reference.
Usually, when you pass a variable to a function, the variable is copied and the function works on the copy. When the function returns, your original variable is unchanged. When you pass a reference, no copy is made, and changes made by the function show up even after the function returns.
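For example (a minimal sketch):
```
#include <iostream>

void byValue(int x)      { x = 42; }  // modifies a copy
void byReference(int& x) { x = 42; }  // modifies the caller's variable

int main()
{
    int a = 0;
    byValue(a);
    std::cout << a << '\n';  // prints 0
    byReference(a);
    std::cout << a << '\n';  // prints 42
}
```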
C doesn't have references, but a C++ reference is functionally the same as a pointer in C. Really the only difference is that pointers have to be dereferenced when you use them:
```
*filename = "file.wav";
```
But references can be used as though they were the original variable:
```
filename = "file.wav";
```
Ostensibly, references are supposed to never be null, although it's not impossible for that to happen.
The equivalent C function would be:
```
void read_wav(const char* filename)
{
}
```
This is because C doesn't have `string`. Usual practice in C is to send a pointer to an array of characters when you need a string. As in C++, if you type a string constant
```
read_wav("file.wav");
```
The type is `const char*`.
|
Set openCV basic structures to zero in a unique way
For test cases I need a unique way to set any structure to zero. For openCV I wrote this:
```
// Fill every basic structure of openCV with zero.
template < typename Tp >
void zero( cv::Point_<Tp>& p ) {
p.x = p.y = 0;
}
template < typename Tp >
void zero( cv::Point3_<Tp>& p ) {
p.x = p.y = p.z = 0;
}
template < typename Tp >
void zero( cv::Size_<Tp>& s ) {
s.width = s.height = 0;
}
template < typename Tp >
void zero( cv::Rect_<Tp>& r ) {
r.x = r.y = r.width = r.height = 0;
}
inline void zero( cv::RotatedRect& r ) {
zero( r.center );
zero( r.size );
r.angle = 0;
}
inline void zero( cv::TermCriteria& t ) {
t.type = t.maxCount = 0;
t.epsilon = 0.0;
}
template < typename Tp, int M, int N>
void zero( cv::Matx<Tp, M, N>& m ) {
m = 0;
}
// Vec<> and Scalar<> are derived from Matx<>
inline void zero( cv::Range& r ) {
r.start = r.end = 0;
}
inline void zero( cv::Mat& m ) {
m = 0;
}
inline void zero( cv::SparseMat& sm ) {
sm.clear();
}
template < typename Tp >
void zero( cv::SparseMat_<Tp>& sm ) {
sm.clear();
}
```
Setting `cv::Mat` to zero by assigning a new zero-filled Mat would reallocate. But looping over all elements would be slow (currently done via `*= 0`). Using `memset( m.data, ... )` is only possible if `m.isContinuous()`. Any idea how to be both safe and fast? Is it possible to merge some of the derived cases?
The template parameter `_Tp` has been replaced by `Tp` because names starting with `_` followed by an uppercase letter are reserved. That is something the OpenCV authors violate too (I copied it from the OpenCV core code).
`Mat`, `Matx` and the derived `Vec` and `Scalar` can be set to zero by multiplying with zero. Internally this is a `for()` over all elements, but without the realloc of the `zeros()` assignment.
| If you want to zero a `Matx`, you can take advantage of its `val` public data member, which is a good old C array. Then you can rely on the standard library algorithm [`std::fill`](http://en.cppreference.com/w/cpp/algorithm/fill) to set every element of the matrix to `0`; the algorithm generally does static dispatch at compile time to call `std::memset` whenever possible. Using it should allow you to always be safe but also to be safe when possible:
```
template < typename Tp, int M, int N>
void zero( cv::Matx<Tp, M, N>& m ) {
std::fill(std::begin(m.val), std::end(m.val), 0);
}
```
Since `Matx::val` is a fixed-size C array, [`std::begin`](http://en.cppreference.com/w/cpp/iterator/begin) and `std::end` should work. If you don't use a C++11 standard library, you can implement it as follows instead:
```
template < typename Tp, int M, int N>
void zero( cv::Matx<Tp, M, N>& m ) {
std::fill(m.val, m.val + M*N, 0);
}
```
For `cv::Mat`, the documentation says that:
>
> To set all the matrix elements to the particular value after the construction, use the assignment operator `Mat::operator=(const Scalar& value)`.
>
>
>
It means that you can zero the matrix like this (I couldn't find equivalent overloads for `Matx` or `SparseMat`):
```
inline void zero( cv::Mat& m ) {
m = 0;
}
```
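If you prefer an explicit call over the assignment operator, `cv::Mat::setTo` does the same in-place fill (a matter of taste; the behavior should be identical):
```
inline void zero( cv::Mat& m ) {
    m.setTo( cv::Scalar::all(0) ); // in-place, no reallocation
}
```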
Also, as a side note, most modern compilers just don't care about `inline`, at least they don't take it as a hint for inlining since they already do a great job when it comes to inlining stuff. Nowadays, `inline` is just a way to solve ODR problems, which don't happen with templates. All of your functions are function templates, so you can safely remove `inline`.
|
How to read information from core bluetooth device
I am working on an iOS core Bluetooth application, I am able to connect with the BLE device using iPad3. I am able to reach to the block `didDiscoverServices`, but unable to proceed from here.
My questions are ;
1. How can I read characteristic from Bluetooth device?
2. How can I read other information of Bluetooth device?
Help me on this or provide any suggestion.
Thanks Wilhelmsen for reply.
I got the following from the mentioned block :
```
[0] - Service : <CBConcreteService: 0x1769a0> UUID: Generic Attribute Profile
[1] - Service : <CBConcreteService: 0x174470> UUID: Generic Access Profile
[2] - Service : <CBConcreteService: 0x1744e0> UUID: Unknown (<00005301 00000041 4c505749 53450000>)
Characteristic
[0] - Characteristic : <CBConcreteCharacteristic: 0x15d410> UUID: Service Changed
[0] - Characteristic : <CBConcreteCharacteristic: 0x1805b0> UUID: Device Name
[1] - Characteristic : <CBConcreteCharacteristic: 0x1806a0> UUID: Appearence
[0] - Characteristic : <CBConcreteCharacteristic: 0x183810> UUID: Unknown (<00004301 00000041 4c505749 53450000>)
[1] - Characteristic : <CBConcreteCharacteristic: 0x1838a0> UUID: Unknown (<00004302 00000041 4c505749 53450000>)
```
Now how to get the exact values from this Characteristic in didUpdateValueForCharacteristic block?
| Take a good read through the framework. If you have come this far you shouldn't have any problem finding 'discoverCharacteristics' and the peripheral delegate callback 'didDiscoverCharacteristic'. You need to know the UUIDs of the services and characteristics you want to discover and apply them to those methods.
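A minimal sketch of that discovery step (assuming your class is already the peripheral's delegate) could look like:
```
- (void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error {
    for (CBService *service in peripheral.services) {
        // Pass nil to discover all characteristics, or an array of CBUUIDs to filter.
        [peripheral discoverCharacteristics:nil forService:service];
    }
}
```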
Then you can read with 'readValueForCharacteristic' and the delegate callback 'didUpdateValueForCharacteristic'.
This is sent from my phone, so I will maybe edit a bit when I get to a computer. Hope it helps
New question:
```
[connectedPeripheral readValueForCharacteristic:wantedCharacteristic]
```
and at peripheral delegate
```
- (void) peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error{
NSLog(@"Characteristic value : %@ with ID %@", characteristic.value, characteristic.UUID);
[delegate characteristicValueRead:characteristic.value];
}
```
works for me
|
What's the recommended way to extend AngularJS controllers?
I have three controllers that are quite similar. I want to have a controller which these three extend and share its functions.
| Perhaps *you* don't extend a controller, but it is possible to extend a controller, or to make a single controller a mixin of multiple controllers.
```
module.controller('CtrlImplAdvanced', ['$scope', '$controller', function ($scope, $controller) {
// Initialize the super class and extend it.
angular.extend(this, $controller('CtrlImpl', {$scope: $scope}));
    // ... Additional extensions to create a mixin.
}]);
```
When the parent controller is created the logic contained within it is also executed.
See $controller() for more information; only the `$scope` value needs to be passed. All other values will be injected normally.
*@mwarren*, your concern is taken care of auto-magically by Angular dependency injection. All you need is to inject $scope, although you could override the other injected values if desired.
Take the following example:
```
(function(angular) {
var module = angular.module('stackoverflow.example',[]);
module.controller('simpleController', function($scope, $document) {
this.getOrigin = function() {
return $document[0].location.origin;
};
});
module.controller('complexController', function($scope, $controller) {
angular.extend(this, $controller('simpleController', {$scope: $scope}));
});
})(angular);
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.3.15/angular.js"></script>
<div ng-app="stackoverflow.example">
<div ng-controller="complexController as C">
<span><b>Origin from Controller:</b> {{C.getOrigin()}}</span>
</div>
</div>
```
Although $document is not passed into 'simpleController' when it is created by 'complexController', $document is injected for us.
|
How can I give the compiler a hint for picking the right instance function from a typeclass with several type variables?
I am trying to define a simple text cursor according to the custom `At` typeclass as follows:
```
{-# LANGUAGE MultiParamTypeClasses #-}
class At a b where
valueAt :: a -> b
moveOn :: a -> Maybe a
newtype AtText = AsAtText [Char] deriving (Show)
instance At AtText Char where
valueAt (AsAtText []) = error "Unable to get a value from an empty list."
valueAt (AsAtText (x:_)) = x
moveOn (AsAtText []) = Nothing
moveOn (AsAtText [_]) = Nothing
moveOn (AsAtText (_:xt)) = Just $ AsAtText xt
```
So far so good. The problem I bumped into is that when I tried to use the `moveOn` method later in the code, the compiler said it cannot infer which exact instance it should use, asking me to provide a signature.
```
No instance for (At AtText b0) arising from a use of `moveOn'
The type variable `b0' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there is a potential instance available:
instance At AtText Char
```
It is a valid point, but I have no idea how I can specify what the `b0` type variable is in a signature of `moveOn`
```
test :: AtText -> whatever
test at =
  let nextAtOpt = moveOn at in -- <--- a problem
case nextAtOpt of
Just nextAt -> undefined
Nothing -> undefined
```
| What you need are functional dependencies. You'll need to add the pragma `{-# LANGUAGE FunctionalDependencies #-}` and modify your class declaration to be
```
class At a b | a -> b where
...
```
What this does is instruct the compiler that for a given `a`, there is a unique `b` to match it with. This means that you can't have another instance for `At AtText b` for `b` other than `Char`, though.
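With the dependency in place, use sites like the one from the question type-check, because the compiler can now derive `b` from `a` (hypothetical helper names, just to illustrate):
```
firstChar :: AtText -> Char
firstChar = valueAt

next :: AtText -> Maybe AtText
next = moveOn
```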
---
**Edit**: An alternative, as suggested by @shang and @MathematicalOrchid is to use Associated Types (if I've got this correct)
```
class At a where
type B a :: *
valueAt :: a -> B a
moveOn :: a -> Maybe a
instance At AtText where
type B AtText = Char
valueAt (AsAtText []) = error "Unable to get a value from an empty list."
valueAt (AsAtText (x:_)) = x
moveOn (AsAtText []) = Nothing
moveOn (AsAtText [_]) = Nothing
moveOn (AsAtText (_:xt)) = Just $ AsAtText xt
```
and this requires the `TypeFamilies` extension. There are a few variations on this pattern, but I'm not entirely sure on the differences. For example, you can also define it as
```
class At a where
type B :: *
valueAt :: a -> B
...
instance At AtText where
type B = Char
...
```
If someone else can elaborate, feel free to add the explanation as an edit, comment, or answer.
|
Is it acceptable to access the file system in EJB?
Is it acceptable to use types from the [`java.io`](https://docs.oracle.com/en/java/javase/13/docs/api/java.base/java/io/package-summary.html) and [`java.nio`](https://docs.oracle.com/en/java/javase/13/docs/api/java.base/java/nio/package-summary.html) packages to access the file system and manipulate files in EJB?
| No, you are not allowed to do that, because if the application is clustered at some point, you never know to what location your files have been saved. So file manipulation in a Java EE environment is a rather unsafe operation.
The way to do this safely would probably be to have a service in JNDI which takes care of your object serialization; see this example:
```
ByteArrayOutputStream baos= new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos);
oos.writeObject(myObj);
new InitialContext().bind("path/to/FileManipulator", baos.toByteArray());
```
See also [this](https://stackoverflow.com/questions/1357667/how-to-access-the-file-system-from-an-ejb-3/1357718#1357718) answer for further explanation. Also [here](http://cwe.mitre.org/data/definitions/576.html) is an article which describes what the specification says about using `java.io` in EJB.
|
How to retrieve all data using relation?
I have three tables:
```
users
posts
comments
```
My question is how to retrieve a user's posts and each post's comments in a single query.
Thank you in advance.
| Your question's answer has already been given by @Nicklas Kevin Frank.
If you are getting the error `FatalErrorException in Model.php line 867: Class 'Post' not found`, modify the relation definitions in your model classes to use fully-qualified class names such as `App\User`:
```
<?php
class User extends Model
{
public function posts()
{
return $this->hasMany('App\Post');
}
}
```
in posts model
```
<?php
class Post extends Model
{
public function comments()
{
return $this->hasMany('App\Comment');
}
public function user()
{
return $this->belongsTo('App\User');
}
}
```
in comments model
```
<?php
class Comment extends Model
{
public function posts()
{
return $this->belongsTo('App\Post');
}
}
```
and in your controller
```
$data = User::findOrFail(1)->posts()->get();
```
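If you want the posts together with their comments loaded up front, nested eager loading gets closest to the "single query" spirit of the question (note that Eloquent actually issues one query per relation here, not literally one SQL statement):
```
$user = User::with('posts.comments')->findOrFail(1);

foreach ($user->posts as $post) {
    foreach ($post->comments as $comment) {
        // ...
    }
}
```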
|
How to fix "Illegal datatype context" (use -XDatatypeContexts)?
I am a new learner of Haskell, my code is as follows:
```
data Num a=>Units a = Units a (SymbolicManip a )
deriving (Eq)
```
I am not sure how to fix it. Can anyone help me?
| Typeclass contexts in datatypes are now regarded as a not so useful feature. The problem is that the following does not compile:
```
foo :: Units a -> a
foo (Units x _) = x+x
```
This intuitively should compile, since the `Units a` argument can only be constructed for a type `a` satisfying `Num a`. So, on destruction (pattern matching) one should be able to access the `Num a` instance. However this is not the case, and a `Num a` must be counterintuitively *provided* on destruction as well:
```
foo :: Num a => Units a -> a
foo (Units x _) = x+x
```
The standard suggestion is therefore to remove the constraint `Num a` from the `Units a` datatype declaration, and add it instead to every function involving `Units a`.
Another option is to enable GADTs and change the datatype to:
```
data Units a where
Units :: Num a => a -> SymbolicManip a -> Units a
```
This does the "right" thing: a `Num a` instance is required to *construct* a value, and is instead provided on *destruction*. In this way, the first `foo` declaration above will be well-typed.
---
I almost forgot the "quick & dirty" option, which is to enable the obsolescent datatype context feature: this is done by adding at the beginning of your file the line
```
{-# LANGUAGE DatatypeContexts #-}
```
Still, I would rather modify the code than to enable this language extension.
|
How do you use the Bounded typeclass in Haskell to define a type with a floating point range?
I expected the following code to fail with a type error due to violation of the minBound and maxBound. But, as you can see, it goes through without flagging an error.
```
{-# OPTIONS_GHC -XTypeSynonymInstances #-}
module Main where
type Probability = Float
instance Bounded Probability where
minBound = 0.0
maxBound = 1.0
testout :: Float -> Probability
testout xx = xx + 1.0
main = do
putStrLn $ show $ testout 0.5
putStrLn $ show $ testout (-1.5)
putStrLn $ show $ testout 1.5
```
In the Prelude I get this
```
*Main> :type (testout 0.5)
(testout 0.5) :: Probability
```
And at the prompt I get this:
```
[~/test]$runhaskell demo.hs
1.5
-0.5
2.5
```
Clearly I'm not declaring Bounded properly, and I'm sure I'm doing something wrong syntactically. There isn't much simple stuff on Google regarding Bounded typeclasses, so any help would be much appreciated.
| That's not what `Bounded` is for. `Bounded a` just defines the functions `minBound :: a` and `maxBound :: a`. It does not induce any special checking or anything.
You can define a bounded type using a so-called *smart constructor*. That is:
```
module Probability (Probability) where
newtype Probability = P { getP :: Float }
deriving (Eq,Ord,Show)
mkP :: Float -> Probability
mkP x | 0 <= x && x <= 1 = P x
| otherwise = error $ show x ++ " is not in [0,1]"
-- after this point, the Probability data constructor is not to be used
instance Num Probability where
P x + P y = mkP (x + y)
P x * P y = mkP (x * y)
  fromInteger = mkP . fromInteger
...
```
So the only way to make a `Probability` is to use the `mkP` function eventually (this is done for you when you use numeric operations given our `Num` instance), which checks that the argument is in range. Because of the module's export list, outside of this module is it not possible to construct an invalid probability.
Probably not the two-liner you were looking for, but oh well.
For extra composability, you could factor out this functionality by making a `BoundCheck` module instead of `Probability`. Just like above, except:
```
newtype BoundCheck a = BC { getBC :: a }
deriving (Bounded,Eq,Ord,Show)
mkBC :: (Bounded a, Ord a) => a -> BoundCheck a
mkBC x | minBound <= x && x <= maxBound = BC x
| otherwise = error "..."
instance (Bounded a, Ord a, Num a) => Num (BoundCheck a) where
BC x + BC y = mkBC (x + y)
...
```
Thus you can get the functionality you were wishing was built in for you when you asked the question.ย
To do this deriving stuff you may need the language extension `{-# LANGUAGE GeneralizedNewtypeDeriving #-}`.
|
How to write literal Javascript in Typescript
I need to add a member to an `HTMLElement`, in other words, I need to store data into an element. This is what I would like to achieve as if I am coding in [ScriptSharp](https://github.com/nikhilk/scriptsharp).
```
/** My method */
public DoIt(element: Element, obj: Object) {
Literal("{0}.extended = {1}", element, obj); // This is not standard Typescript!
}
```
**In my example** ScriptSharp (a project to convert C# code into Javascript) provides a `Script.Literal` object that allows developers to write plain Javascript when a C# abstraction is not possible.
So that the Javascript output is:
```
// Probably Typescript will render it a bit differently, but basically
// this is what we get in the end...
var _doit = function(element, obj) {
element.extended = obj;
};
```
How can I achieve this in Typescript? Or maybe I should handle this problem in a different way?
| Any valid JavaScript is also valid TypeScript.
This means that you can write literal JS in any place in your code.
```
var _doit = function(element, obj) {
element.extended = obj;
};
```
This is valid JS **and** TS.
However, since you use TypeScript, you may also want to use static typing with your code.
If you just add types to your code, it will compile correctly, but you'll get a semantic error:
```
var _doit = function(element:HTMLElement, obj) {
element.extended = obj; // error: HTMLElement doesn't have property 'extended'
};
```
To prevent this error, you can notify the compiler that you intend to create a new property on `HTMLElement`:
```
interface HTMLElement {
extended?: any;
}
```
Now the compiler knows that you have an (optional) property `extended` on `HTMLElement` and will compile without errors. You will also get code autocompletion on this property (and JSDoc if provided).
|
C++11 Exception's destructor allows to throw now?
Any idea why `virtual ~exception() throw()` is in C++98, but `virtual ~exception()` is in C++11?
What's the design decision that allows C++11 to throw in the destructor of the class `exception`?
From [here](http://www.cplusplus.com/reference/exception/exception/):
c++98:
```
class exception {
public:
exception () throw();
exception (const exception&) throw();
exception& operator= (const exception&) throw();
virtual ~exception() throw();
virtual const char* what() const throw();
}
```
c++11:
```
class exception {
public:
exception () noexcept;
exception (const exception&) noexcept;
exception& operator= (const exception&) noexcept;
virtual ~exception();
virtual const char* what() const noexcept;
}
```
|
>
> What's the desing decision makes C++11 allow to throw in the destructor of the class `exception`?
>
>
>
There was no such design decision (fortunately!). In C++11, even explicitly declared destructors are qualified as `noexcept` by default. This can be evinced from paragraph 12.4/3 of the C++11 Standard:
>
> A declaration of a destructor that does not have an *exception-specification* is implicitly considered to have
> the same *exception-specification* as an implicit declaration (15.4).
>
>
>
And from paragraph 15.4/14, which specifies what exception specification an implicit declaration has:
>
> An inheriting constructor (12.9) and an implicitly declared special member function (Clause 12) have an
> exception-specification. If `f` is an inheriting constructor or an **implicitly declared** default constructor, copy
> constructor, move constructor, **destructor**, copy assignment operator, or move assignment operator, its implicit
> exception-specification specifies the type-id `T` if and only if `T` is allowed by the exception-specification
> of a function directly invoked by fโs implicit definition; `f` allows all exceptions if any function it directly
> invokes allows all exceptions, and **`f` has the exception-specification `noexcept(true)` if every function it directly
> invokes allows no exceptions**.
>
>
>
Together, the above two paragraphs guarantee (given the declaration you quoted of `exception`'s destructor) that the destructor of `exception` won't throw.
This is also explicitly stated in paragraphs 18.8.1/7-8 of the C++11 Standard:
>
> `virtual ~exception();`
>
>
> 7 *Effects*: Destroys an object of class exception.
>
>
> 8 *Remarks*: **Does not throw any exceptions**.
>
>
>
Notice that dynamic exception specifications (such as `throw()`) are deprecated in C++11. Per § D.4/1 of Annex D:
>
> The use of *dynamic-exception-specifications* is deprecated.
>
>
>
|
Excel VBA add hyperlink to shape to link to another sheet
I have a macro that creates a summary sheet at the front of a workbook. Shapes are created and labeled after the sheets in the workbook, and then hyperlinks are added to the shapes to redirect to those sheets. However, when I recorded the macro to do this, the code generated was:
```
ActiveSheet.Shapes.Range(Array("Rounded Rectangle 1")).Select
ActiveSheet.Hyperlinks.Add Anchor:=Selection.ShapeRange.Item(1), Address:=""
```
The hyperlinks that were manually created in Excel while recording the macro work just fine and, when hovering over them, display the file path and " - Sheet!A1", but the recorded code doesn't seem to actually put the link location into the address portion. Does anyone know the code that should go in that address section to link to the sheet?
| The macro recorder doesn't record what is actually happening in this case. The property you are looking for is `SubAddress`. `Address` is correctly set in your code.
**Create a hyperlink from a shape without selecting it**
You want to avoid selecting things in your code if possible, and in this case it definitely is. Create a shape variable and set it to the shape you want to modify, then add the hyperlink to the sheet the shape is on. Note that you can also set the text for the screen tip.
In the example below, the shape I want to modify is on Sheet 6, and hyperlinks to a range on Sheet 4.
```
Sub SetHyperlinkOnShape()
Dim ws As Worksheet
Set ws = ThisWorkbook.Sheets("Sheet6")
Dim hyperLinkedShape As Shape
Set hyperLinkedShape = ws.Shapes("Rectangle 1")
ws.Hyperlinks.Add Anchor:=hyperLinkedShape, Address:="", _
SubAddress:="Sheet4!C4:C8", ScreenTip:="yadda yadda"
End Sub
```
|
A signal m2m\_changed and bug with post\_remove
I need to detect a post\_remove signal, so I have written :
```
def handler1(sender, instance, action, reverse, model, pk_set, **kwargs):
if (action == 'post_remove'):
        test1() # not declared, so it raises an error if reached (just to detect the call :)
m2m_changed.connect(handler1, sender=Course.subscribed.through)
```
If I change 'post\_remove' by 'post\_add' it is ok.. Is it a django's bug about post\_remove ??
I use that model and I switch beetween two values of 'subscribed' (so one added and one deleted)
```
class Course(models.Model):
name = models.CharField(max_length=30)
subscribed = models.ManyToManyField(User, related_name='course_list', blank=True, null=True, limit_choices_to={'userprofile__status': 'student'})
```
I have seen a post about a Django bug; maybe it hasn't been fixed... (or it's me ^^)
| As I understand it, it's not a bug, it's just that Django does not update m2m relations in the way you expect. It does not remove the relations to be deleted then add the new ones. Instead, it clears all of the m2m relations, then adds them again.
There's a related question [Django signal m2m\_changed not triggered](https://stackoverflow.com/questions/6463863/django-signal-m2m-changed-not-triggered) which links to ticket [13087](https://code.djangoproject.com/ticket/13087).
So you can check for the `pre_clear` or `post_clear` action with the `m2m_changed` signal, but since those actions do not provide `pk_set`, it doesn't help you find the related entries before the save, as you wanted to do in [your other question](https://stackoverflow.com/questions/11678502/compare-field-before-and-after-save).
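Given that clear-then-re-add behavior, a common workaround is to capture the related pks on `pre_clear` (while the relation still exists) and diff them against `pk_set` on `post_add`. Here is a minimal sketch against your `Course.subscribed` field; the attribute name `_old_subscriber_pks` is invented:

```
from django.db.models.signals import m2m_changed

def handler1(sender, instance, action, reverse, model, pk_set, **kwargs):
    if action == 'pre_clear':
        # the relation has not been wiped yet, so remember the current pks
        instance._old_subscriber_pks = set(
            instance.subscribed.values_list('pk', flat=True))
    elif action == 'post_add':
        # anything present before the clear but not re-added was removed
        removed = getattr(instance, '_old_subscriber_pks', set()) - pk_set
        if removed:
            pass  # handle the removed users here

m2m_changed.connect(handler1, sender=Course.subscribed.through)
```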
|
The jquery code works on console but not in script tags or in attached js file
```
$(".filter-close").click(function(){
$(this).parent().remove();
});
```
This snippet works in the console but neither in script tags nor in an attached js file.
| Wait for the DOM to be ready when calling your event handler:
```
jQuery(function($) { // this does the trick and also makes sure jQuery is not conflicting with another library
$(".filter-close").click(function(){
$(this).parent().remove();
});
});
```
>
> When using another JavaScript library, we may wish to call $.noConflict() to avoid namespace difficulties. When this function is called, the $ shortcut is no longer available, forcing us to write jQuery each time we would normally write $. **However, the handler passed to the .ready() method can take an argument, which is passed the global jQuery object. This means we can rename the object within the context of our .ready() handler without affecting other code**
>
>
>
[Documentation for `.ready()` method](https://api.jquery.com/ready/)
|
Group values by unique elements
I have a vector that looks like this:
```
a <- c("A110","A110","A110","B220","B220","C330","D440","D440","D440","D440","D440","D440","E550")
```
I would like to create another vector, based on `a`, that should look like:
```
b <- c(1,1,1,2,2,3,4,4,4,4,4,4,5)
```
In other words, b should assign a value (starting from 1) to each different element of a.
| First of all, (I assume) this is your vector
```
a <- c("A110","A110","A110","B220","B220","C330","D440","D440","D440","D440","D440","D440","E550")
```
As for possible solutions, here are a few (can't find a good dupe right now)
```
as.integer(factor(a))
# [1] 1 1 1 2 2 3 4 4 4 4 4 4 5
```
Or
```
cumsum(!duplicated(a))
# [1] 1 1 1 2 2 3 4 4 4 4 4 4 5
```
Or
```
match(a, unique(a))
# [1] 1 1 1 2 2 3 4 4 4 4 4 4 5
```
Also `rle` will work similarly in your *specific* scenario
```
with(rle(a), rep(seq_along(values), lengths))
# [1] 1 1 1 2 2 3 4 4 4 4 4 4 5
```
Or (which is practically the same)
```
data.table::rleid(a)
# [1] 1 1 1 2 2 3 4 4 4 4 4 4 5
```
---
Though be advised that all 4 solutions have their own unique behavior in different scenarios; consider the following vector
```
a <- c("B110","B110","B110","A220","A220","C330","D440","D440","B110","B110","E550")
```
And the results of the 4 different solutions:
1.
```
as.integer(factor(a))
# [1] 2 2 2 1 1 3 4 4 2 2 5
```
The `factor` solution begins with `2` because `a` is unsorted, hence the first values get a higher `integer` representation within the `factor` function. This solution is therefore only valid *if your vector is sorted*, so don't use it otherwise.
2.
```
cumsum(!duplicated(a))
# [1] 1 1 1 2 2 3 4 4 4 4 5
```
This `cumsum/duplicated` solution got confused because `"B110"` was already present at the beginning, and hence grouped `"D440","D440","B110","B110"` into the same group.
3.
```
match(a, unique(a))
# [1] 1 1 1 2 2 3 4 4 1 1 5
```
This `match/unique` solution put ones at the end, because it is sensitive to `"B110"` showing up in more than one sequence (because of `unique`) and hence groups all its occurrences into the same group regardless of where they appear
4.
```
with(rle(a), rep(seq_along(values), lengths))
# [1] 1 1 1 2 2 3 4 4 5 5 6
```
This solution only cares about sequences, hence different sequences of `"B110"` were grouped into different groups
|
Android device not seen by ADB but accessible from Windows XP
I just bought a new Nexus 7 tablet and I am trying to put my first Java application on it. However, I am stuck at a very basic point: ADB does not see my device. When I check on my workstation, Windows detects the tablet perfectly. I switched USB ports and the device appears on every one of them, but ADB still cannot see it. I rebooted and it is still not working. Any idea what could cause this?
***Update***
There were actually two problems. First, I had not activated the **USB debugging** mode. This was the reason why I could use the tablet from the workstation (as a simple multimedia player) even though the correct USB driver was not installed.
Second, the driver was not detected by Windows XP (even when I specified the correct repository to search for it). The problem was solved by following the procedure described by [adamp](https://stackoverflow.com/users/342605/adamp)
| As Thomas K points out, you need to install the ADB driver for the device from the SDK manager. Also confirm that USB debugging is enabled on the device as mihail noted.
The basic ADB driver provided with the SDK is generic and can work with any Android device. Simply add the appropriate lines to the `android_winusb.inf` file under `extras/google/usb_driver` to make Windows recognize the device hardware IDs during driver installation. Add the lines under the x86 section for 32 bit Windows or amd64 section for 64 bit Windows.
For the Nexus 7, the configuration you'll need is:
```
;Nexus7
%SingleAdbInterface% = USB_Install, USB\VID_18D1&PID_4E42
%CompositeAdbInterface% = USB_Install, USB\VID_18D1&PID_4E42&MI_01
```
The significant sections there are VID\_XXXX and PID\_YYYY on both lines. If you have another Android device you would like to add, start by duplicating the lines above in the correct section of the file. You'll need to replace the hardware IDs with the correct IDs for your device.
Open Device Manager, locate the Android device without an ADB driver installed, right click it, and choose Properties. Under the Details tab, select Hardware IDs from the dropdown list. You'll see a line that looks something like `USB\VID_18D1&PID_4E42&MI_01`. Copy the VID\_XXXX section and PID\_YYYY section into the two lines you added in the .inf file above. Save the file, then update the driver for the device and use the driver from the directory where the .inf you just saved is.
|
Indexing VARCHAR in MySQL
Is the performance of an index on a VARCHAR field the same as or worse than one on an INT field?
I have a table with two fields (both VARCHAR). I want to create an index on the 1st column. I have two approaches in mind.
1. Directly create an index on the first field.
2. Assign an integer to each VARCHAR in the first field and store the mapping in a table. Then replace the VARCHARs in the first column with integers and create another table and an index on the first column (in this case it will be an integer column). I will end up with two different tables here.
Also, if I have 4 or 5 tables with the same 1st field and have to create an index on the 1st field in every table, will the second method give better performance?
Thanks in advance.
| Indexes on `VARCHAR()` columns are indeed slightly less efficient than indexes on fixed-length fields like `INT` or `BIGINT`. But not significantly so.
The *only conceivable situation* where you would want to use a second table containing a numbered list of text strings is this: the number of distinct text strings in your application is much smaller than the number of rows in your tables. Why might that be true? For example, the text strings might be words in a so-called ["controlled vocabulary."](https://en.wikipedia.org/wiki/Controlled_vocabulary) For example, music tracks have a genre like "rock", "classical", or "hiphop". It's pointless to allow arbitrary genres like "southern california alt-surf-rock" in such an application.
Don't overthink this. Keep in mind that database server developers have spent a great deal of time optimizing the performance of their indexes. It's almost impossible that you can do better than they have done, especially if you have to introduce extra tables and constraints to your system.
Put indexes on your `VARCHAR()` columns as needed.
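As a concrete point of reference, here is a minimal sketch of both approaches (table and column names are invented); the first is almost always the right choice:

```
-- straightforward: index the VARCHAR column directly
CREATE TABLE track (
    id    INT PRIMARY KEY AUTO_INCREMENT,
    genre VARCHAR(32) NOT NULL,
    title VARCHAR(255) NOT NULL,
    KEY idx_genre (genre)
);

-- controlled-vocabulary alternative: only worthwhile when the number
-- of distinct strings is much smaller than the number of rows
CREATE TABLE genre (
    id   SMALLINT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(32) NOT NULL UNIQUE
);
```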
(Another factor: collations get baked into indexes on `VARCHAR()` columns. If you build a custom indexing scheme like the one you propose, you have to deal with that complexity in your code. It's a notorious pain in the neck.)
Fun fact to know and tell: Systems in the olden days of computing (when all the cool kids had T1 lines) offered objects called "atoms." These were text strings referred to with id numbers. Atoms showed up in the X Window System (for example) in the [xlib function call `XInternAtom()` and related functions](https://linux.die.net/man/3/xinternatom). Why? partly to save memory and network bandwidth, which were scarcer then than now. Partly for the "controlled vocabulary" purpose mentioned earlier in this post.
|
What range of dates are permitted in Javascript?
What is the maximum and the minimum date that I can use with the `Date` object in Javascript?
Is it possible to represent ancient historical dates (like `January 1, 2,500 B.C.`) or dates that are far into the future (like `October 7, 10,000`)?
If these far-from-present dates can't be represented with the `Date` object, how should I represent them?
| According to [ยง15.9.1.1 of the ECMA-262 specification](http://es5.github.com/#x15.9.1.1),
>
> Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC.
>
> ...
>
> The actual range of times supported by ECMAScript Date objects is ... exactly **โ100,000,000 days to 100,000,000 days measured relative to midnight at the beginning of 01 January, 1970 UTC.** This gives a range of 8,640,000,000,000,000 milliseconds to either side of 01 January, 1970 UTC.
>
>
>
So the earliest date representable with the `Date` object is fairly far beyond known human history:
```
new Date(-8640000000000000).toUTCString()
// Tue, 20 Apr 271,822 B.C. 00:00:00 UTC
```
The latest date is sufficient to last beyond [Y10K](http://en.wikipedia.org/wiki/Year_10,000_problem) and even beyond Y100K, but will need to be reworked a few hundred years before Y276K.
```
new Date(8640000000000000).toUTCString()
// Sat, 13 Sep 275,760 00:00:00 UTC
```
Any date outside of this range will return `Invalid Date`.
```
new Date(8640000000000001) // Invalid Date
new Date(-8640000000000001) // Invalid Date
```
---
In short, the JavaScript `Date` type will be **sufficient for measuring time to millisecond precision within approximately 285,616 years before or after January 1, 1970**. The dates posted in the question are very comfortably inside of this range.
|
Creating origin-destination matrices with R
My data frame consists of individuals and the city they live in at each point in time. I would like to generate one origin-destination matrix for each year, which records the number of moves from one city to another. I would like to know:
1. How can I generate the origin-destination tables for each year in my dataset automatically?
2. How can I generate all tables in the same 5x5 format, 5 being the number of cities in my example?
3. Is there a more efficient code than what I propose below? I intend to run it on a very large dataset.
Consider the following example:
```
#An example dataframe
id=sample(1:5,50,T)
year=sample(2005:2010,50,T)
city=sample(paste(rep("City",5),1:5,sep=""),50,T)
df=as.data.frame(cbind(id,year,city),stringsAsFactors=F)
df$year=as.numeric(df$year)
df=df[order(df$id,df$year),]
rm(id,year,city)
```
My best try
```
#Creating variables
for(i in 1:length(df$id)){
df$origin[i]=df$city[i]
df$destination[i]=df$city[i+1]
df$move[i]=ifelse(df$orig[i]!=df$dest[i] & df$id[i]==df$id[i+1],1,0) #Checking whether a move has taken place and whether it's the same person
df$year_move[i]=ceiling((df$year[i]+df$year[i+1])/2) #I assume the person moved exactly between the two dates at which their location was recorded
}
df=df[df$move!=0,c("origin","destination","year_move")]
```
Creating an origin-destination table for 2007
```
yr07=df[df$year_move==2007,]
table(yr07$origin,yr07$destination)
```
Result
```
City1 City2 City3 City5
City1 0 0 1 2
City2 2 0 0 0
City5 1 1 0 0
```
| You can split your data frame by id, perform the necessary computations on the id-specific data frame to grab all the moves from that person, and then re-combine:
```
spl <- split(df, df$id)
move.spl <- lapply(spl, function(x) {
ret <- data.frame(from=head(x$city, -1), to=tail(x$city, -1),
year=ceiling((head(x$year, -1)+tail(x$year, -1))/2),
stringsAsFactors=FALSE)
ret[ret$from != ret$to,]
})
(moves <- do.call(rbind, move.spl))
# from to year
# 1.1 City4 City2 2007
# 1.2 City2 City1 2008
# 1.3 City1 City5 2009
# 1.4 City5 City4 2009
# 1.5 City4 City2 2009
# ...
```
Because this code uses vectorized computations for each id, it should be a good deal quicker than looping through each row of your data frame as you did in the provided code.
Now you could grab the year-specific 5x5 move matrices using `split` and `table`:
```
moves$from <- factor(moves$from)
moves$to <- factor(moves$to)
lapply(split(moves, moves$year), function(x) table(x$from, x$to))
# $`2005`
#
# City1 City2 City3 City4 City5
# City1 0 0 0 0 1
# City2 0 0 0 0 0
# City3 0 0 0 0 0
# City4 0 0 0 0 0
# City5 0 0 1 0 0
#
# $`2006`
#
# City1 City2 City3 City4 City5
# City1 0 0 0 1 0
# City2 0 0 0 0 0
# City3 1 0 0 1 0
# City4 0 0 0 0 0
# City5 2 0 0 0 0
# ...
```
|
How to use formula in R to exclude main effect but retain interaction
I do not want the main effect because it is collinear with a finer fixed-effect factor, so it is annoying to have these `NA`s.
In this example:
```
lm(y ~ x * z)
```
I want the interaction of `x` (numeric) and `z` (factor), but not the main effect of `z`.
| ### Introduction
R documentation of `?formula` says:
>
> The '\*' operator denotes factor crossing: 'a \* b' interpreted as 'a + b + a : b'
>
>
>
So it sounds like that dropping main effect is straightforward, by just doing one of the following:
```
a + a:b ## main effect on `b` is dropped
b + a:b ## main effect on `a` is dropped
a:b ## main effects on both `a` and `b` are dropped
```
Oh, really? No no no (*too simple, too naive*). In reality it depends on the variable class of `a` and `b`.
- If none of them is a factor, or only one of them is a factor, this is true;
- If both of them are factors, no. You can never drop main effect (low-order effect) when interaction (high-order effect) is present.
This kind of behavior is due to a magic function called `model.matrix.default`, which constructs a design matrix from a formula. A numerical variable is just included as-is in a column, but a factor variable is automatically coded as many dummy columns. It is exactly this dummy recoding that is the magic. It is commonly believed that we can enable or disable contrasts to control it, but not really. We lose control of contrasts even in [this simplest example](https://stackoverflow.com/q/38150773/4891738). The problem is that `model.matrix.default` has its own rules for dummy encoding, and it is very sensitive to how you specify the model formula. It is exactly for this reason that we can't drop the main effect when an interaction between two factors exists.
---
### Interaction between a numeric and a factor
From your question, `x` is numeric and `z` is a factor. You can specify a model with interaction but not with main effect of `z` by
```
y ~ x + x:z
```
Since `x` is numeric, it is equivalent to do
```
y ~ x:z
```
The only difference here is parametrization (or how `model.matrix.default` does dummy encoding). Consider a small example:
```
set.seed(0)
y <- rnorm(10)
x <- rnorm(10)
z <- gl(2, 5, labels = letters[1:2])
fit1 <- lm(y ~ x + x:z)
#Coefficients:
#(Intercept) x x:zb
# 0.1989 -0.1627 -0.5456
fit2 <- lm(y ~ x:z)
#Coefficients:
#(Intercept) x:za x:zb
# 0.1989 -0.1627 -0.7082
```
From the names of the coefficients we see that in the 1st specification, `z` is contrasted so its 1st level "a" is not dummy encoded, while in the 2nd specification, `z` is not contrasted and both levels "a" and "b" are dummy encoded. Given that both specifications end up with three coefficients, they are really equivalent (mathematically speaking, the design matrices in the two cases have the same column space) and you can verify this by comparing their fitted values:
```
all.equal(fit1$fitted, fit2$fitted)
# [1] TRUE
```
So why is `z` contrasted in the first case? Because otherwise we have two dummy columns for `x:z`, and the sum of these two columns are just `x`, aliased with the existing model term `x` in the formula. In fact, in this case even if you require that you don't want contrasts, `model.matrix.default` will not obey:
```
model.matrix.default(y ~ x + x:z,
contrast.arg = list(z = contr.treatment(nlevels(z), contrasts = FALSE)))
# (Intercept) x x:zb
#1 1 0.7635935 0.0000000
#2 1 -0.7990092 0.0000000
#3 1 -1.1476570 0.0000000
#4 1 -0.2894616 0.0000000
#5 1 -0.2992151 0.0000000
#6 1 -0.4115108 -0.4115108
#7 1 0.2522234 0.2522234
#8 1 -0.8919211 -0.8919211
#9 1 0.4356833 0.4356833
#10 1 -1.2375384 -1.2375384
```
So why in the 2nd case is `z` not contrasted? Because if it were, we would lose the effect of level "a" when constructing the interaction. And even if you request a contrast, `model.matrix.default` will just ignore you:
```
model.matrix.default(y ~ x:z,
contrast.arg = list(z = contr.treatment(nlevels(z), contrasts = TRUE)))
# (Intercept) x:za x:zb
#1 1 0.7635935 0.0000000
#2 1 -0.7990092 0.0000000
#3 1 -1.1476570 0.0000000
#4 1 -0.2894616 0.0000000
#5 1 -0.2992151 0.0000000
#6 1 0.0000000 -0.4115108
#7 1 0.0000000 0.2522234
#8 1 0.0000000 -0.8919211
#9 1 0.0000000 0.4356833
#10 1 0.0000000 -1.2375384
```
Oh, amazing `model.matrix.default`. It is able to make the right decision!
---
### Interaction between two factors
Let me reiterate it: **There is no way to drop main effect when interaction is present.**
I will not provide an extra example here, as I have one in [Why do I get NA coefficients and how does `lm` drop reference level for interaction](https://stackoverflow.com/q/40723196/4891738). See the "Contrasts for interaction" section over there. In short, all the following specifications give the same model (they have the same fitted values):
```
~ year:treatment
~ year:treatment + 0
~ year + year:treatment
~ treatment + year:treatment
~ year + treatment + year:treatment
~ year * treatment
```
And in particular, the 1st specification leads to an `NA` coefficient.
So once the RHS of `~` contains an `year:treatment`, you can never ask `model.matrix.default` to drop main effects.
[People inexperienced with this behavior are often surprised when producing ANOVA tables](https://stackoverflow.com/questions/43146368/checking-type-iii-anova-results?noredirect=1&lq=1#comment73370246_43146368).
---
### Bypassing `model.matrix.default`
Some people consider `model.matrix.default` annoying as it does not appear to have a consistent manner of dummy encoding. A "consistent manner" in their view is to always drop the 1st factor level. Well, no problem, you can bypass `model.matrix.default` by doing the dummy encoding manually and feeding the resulting dummy matrix as a variable to `lm`, etc.
However, you still need `model.matrix.default`'s help to easily do dummy encoding for **a** (yes, only one) factor variable. For example, for the variable `z` in our previous example, its full dummy encoding is the following, and you can retain all or some of its columns for regression.
```
Z <- model.matrix.default(~ z + 0) ## no contrasts (as there is no intercept)
# za zb
#1 1 0
#2 1 0
#3 1 0
#4 1 0
#5 1 0
#6 0 1
#7 0 1
#8 0 1
#9 0 1
#10 0 1
#attr(,"assign")
#[1] 1 1
#attr(,"contrasts")
#attr(,"contrasts")$z
#[1] "contr.treatment"
```
Back to our simple example, if we don't want contrasts for `z` in `y ~ x + x:z`, we can do
```
Z2 <- Z[, 1:2] ## use "[" to remove attributes of `Z`
lm(y ~ x + x:Z2)
#Coefficients:
#(Intercept) x x:Z2za x:Z2zb
# 0.1989 -0.7082 0.5456 NA
```
Not surprisingly we see an `NA` (because `colSums(Z2)` is aliased with `x`). And if we want to enforce contrasts in `y ~ x:z`, we can do either of the following:
```
Z1 <- Z[, 1]
lm(y ~ x:Z1)
#Coefficients:
#(Intercept) x:Z1
# 0.34728 -0.06571
Z1 <- Z[, 2]
lm(y ~ x:Z1)
#Coefficients:
#(Intercept) x:Z1
# 0.2318 -0.6860
```
[And the latter case is probably what *contefranz* is trying to do](https://stackoverflow.com/a/42028278/4891738).
However, I do not really recommend this kind of hacking. When you pass a model formula to `lm`, etc, `model.matrix.default` is trying to give you the most sensible construction. Also, in reality we want to do prediction with a fitted model. If you have done dummy encoding yourself, you would have a hard time when providing `newdata` to `predict`.
|
CRT Doesn't print line number of memory leak
I've got the code below, which I think, based on [Finding Memory Leaks Using the CRT Library](https://msdn.microsoft.com/en-us/library/x98tx3cf.aspx), should print out the line number of a memory leak.
```
#include "stdafx.h"
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#include <iostream>
void derp()
{
int* q = new int;
}
int main()
{
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
derp();
return 0;
}
```
When I run it, I get the following:
```
Detected memory leaks!
Dumping objects ->
{75} normal block at 0x0067E930, 4 bytes long.
Data: < > CD CD CD CD
Object dump complete.
```
Based on Microsoft's documentation, I'd expect to see a print-out of the line where the leaky memory was allocated, but I don't.
What have I done wrong? I'm using VS2015.
| From the [MSDN topic](https://msdn.microsoft.com/en-us/library/x98tx3cf.aspx):
>
> These techniques work for memory allocated using the standard CRT
> malloc function. If your program allocates memory using the C++ new
> operator, however, you may only see the file and line number where the
> implementation of global operator new calls \_malloc\_dbg in the
> memory-leak report. Because that behavior is not very useful, you can
> change it to report the line that made the allocation by using a macro
> that looks like this:
>
>
>
```
#ifdef _DEBUG
#define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
// Replace _NORMAL_BLOCK with _CLIENT_BLOCK if you want the
// allocations to be of _CLIENT_BLOCK type
#else
#define DBG_NEW new
#endif
```
And then replace the `new` in your code with `DBG_NEW`. I tested it and it works correctly with your code.
---
Actually, replacing `new` with `DBG_NEW` everywhere in the code is too tedious a task, so you could possibly use this macro instead:
```
#ifdef _DEBUG
#define new new( _NORMAL_BLOCK , __FILE__ , __LINE__ )
#else
#define new new
#endif
```
I tested this method and it works too.
|
Samba, Apache and SVN. Getting the permissions right
I have two machines I work on:
1. Windows Client (Development Machine)
2. Linux Web Server (Ubuntu)
On the Linux server I have installed Apache, Samba and SVN.
I've created a samba share that maps to the htdocs/ directory so that I can access the web files from Windows.
The following illustrates my workflow:
1. From command line on Linux server I checkout working copies of web projects from remote server into my local Linux server's htdocs directory.
2. On the Windows machine I access these files (using samba) and edit them in my editor and test them in the web browsers
3. Back on the Linux machine I checkin my work to the remote server.
The problem I have is that currently for me to be able to edit the files on the Windows box via Samba I have to change the owner of the files to nobody (apache user) and set the Samba share to use SHARE permissions.
When I try to use SVN to commit, update, etc., I can't, because my Linux user is not 'nobody' and does not have permission to do so. So I have to become root, run an SVN [command], then change all the files back to 'nobody' so that I can edit on Windows.
What I would like to be able to do is have the web files be owned by my local Linux user which would enable SVN commands to work and for Windows (over samba) to also use this same user.
How can I get this to work, is there a way to get Windows and Linux users to match?
| Have the files and directories owned by a group, which [Apache runs as](http://httpd.apache.org/docs/2.1/mod/mpm_common.html#group). Make the directories [SGID](http://en.wikipedia.org/wiki/Setuid#setuid_and_setgid_on_directories).
```
chgrp -R group1 /path/to/htdocs
find /path/to/htdocs -type d -exec chmod 2775 {} \;
```
Specify this group under the SAMBA share, which can be done with `force group=group1`.
Make the user you authenticate to SAMBA with a member of this group. Specify this user within `valid users` and `write list` within your share in SAMBA. This should also allow you commit to SVN.
Also, specify `create mask=0664` and `directory mask=2775` within smb.conf, which will set the appropriate permissions upon creation. If the files are owned by the same user as Apache, you can set the final bit to 0 on both settings.
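Putting those settings together, the share section of smb.conf might look like this minimal sketch (the share name, path, and user are placeholders; `group1` echoes the example above):

```
[htdocs]
   path = /path/to/htdocs
   valid users = youruser
   write list = youruser
   force group = group1
   create mask = 0664
   directory mask = 2775
```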
If everything is applied consistently, you should be able to perform all desired actions without negatively impacting any functionality.
|
How to create a radial menu in CSS?
How do I create a menu which looks like so:
![Tooltip Image](https://i.stack.imgur.com/hb5aQ.png)
[Link to PSD](http://dribbble.com/shots/732391-Tooltip-PSD)
I don't want to use the PSD images. I would prefer using icons from some package like [FontAwesome](http://fortawesome.github.com/Font-Awesome/) and have the backgrounds/css generated in CSS.
A version of the menu that uses the PSD to generate images of the tooltip and then uses them can be found [here](http://theearlcarlson.com/experiments/amTooltip/).
| *Almost 3 years later, I finally made the time to revisit this and post an improved version. You can still view the original answer at the end for reference.*
*While SVG may be the better choice, especially today, my goal with this was to keep it just HTML and CSS, no JS, no SVG, no images (other than the background on the root element).*
# ***[2015 demo](http://codepen.io/thebabydino/pen/aOWeLa?editors=010)***
### Screenshots
Chrome 43:
![Chrome screenshot](https://i.stack.imgur.com/m6TlH.png)
Firefox 38:
![Firefox screenshot](https://i.stack.imgur.com/GEjbG.png)
IE 11:
![IE screenshot](https://i.stack.imgur.com/PfRtB.png)
### Code
The HTML is pretty simple. I'm using the checkbox hack to reveal/ hide the menu.
```
<input type='checkbox' id='t'/>
<label for='t'>☰</label>
<ul>
<li><a href='#'>โ</a></li>
<li><a href='#'>โ</a></li>
<li><a href='#'>โ</a></li>
</ul>
```
I'm using Sass to keep this logical and make it easier to change things if needed. Heavily commented.
```
$d: 2em; // diameter of central round button
$r: 16em; // radius of menu
$n: 3; // must match number of list items in DOM
$exp: 3em; // menu item height
$tip: .75em; // dimension of tip on middle menu item
$w: .5em; // width of ends
$cover-dim: 2*($r - $exp); // dimension of the link cover
$angle: 15deg; // angle for a menu item
$skew-angle: 90deg - $angle; // how much to skew a menu item to $angle
$scale-factor: cos($skew-angle); // correction factor - see vimeo.com/98137613 from min 15
$off-angle: .125deg; // offset angle so we have a little space between menu items
// don't show the actual checkbox
input {
transform: translate(-100vw); // move offscreen
visibility: hidden; // avoid paint
}
// change state of menu to revealed on checking the checkbox
input:checked ~ ul {
transform: scale(1);
opacity: .999;
// ease out back from easings.net/#easeOutBack
transition: .5s cubic-bezier(0.175, 0.885, 0.32, 1.275);
}
// position everything absolutely such that their left bottom corner
// is in the middle of the screen
label, ul, li {
position: absolute;
left: 50%; bottom: 50%;
}
// visual candy styles
label, a {
color: #858596;
font: 700 1em/ #{$d} sans-serif;
text-align: center;
text-shadow: 0 1px 1px #6c6f7e;
cursor: pointer;
}
label {
z-index: 2; // place it above the menu which has z-index: 1
margin: -$d/2; // position correction such that it's right in the middle
width: $d; height: $d;
border-radius: 50%;
box-shadow: 0 0 1px 1px white,
0 .125em .25em #876366,
0 .125em .5em #876366;
background: radial-gradient(#d4c7c5, #e5e1dd);
}
ul {
z-index: 1;
margin: -$r + $exp + 1.5*$d 0; // position correction
padding: 0;
list-style: none;
transform-origin: 50% (-$r + $exp);
transform: scale(.001); // initial state: scaled down to invisible
will-change: transform; // better perf on transitioning transform
opacity: .001; // initial state: transparent
filter: drop-shadow(0 .125em .25em #847c77)
drop-shadow(0 .125em .5em #847c77);
// ease in back, also from easings.net
transition: .5s cubic-bezier(0.6, -0.28, 0.735, 0.045);
// menu ends
&:before, &:after {
position: absolute;
margin: -$exp (-$w/2);
width: $w; height: $exp;
transform-origin: 50% 100%;
background: linear-gradient(#ddd, #c9c4bf);
content: '';
}
&:before {
border-radius: $w 0 0 $w;
transform: rotate(-.5*$n*$angle)
translate(-$w/2, -$r + $exp);
box-shadow: inset 1px 0 1px #eee;
}
&:after {
border-radius: 0 $w $w 0;
transform: rotate(.5*$n*$angle)
translate($w/2, -$r + $exp);
box-shadow: inset -1px 0 1px #eee;
}
}
li {
overflow: hidden;
width: $r; height: $r;
transform-origin: 0 100%;
@for $i from 0 to $n {
&:nth-child(#{$i + 1}) {
$curr-angle: $i*$angle +
($i + .5)*$off-angle -
.5*$n*($angle + $off-angle);
// make each list item a rhombus rotated around its bottom left corner
// see explanation from minute 33:10 youtube.com/watch?v=ehjoh_MmE9A
transform: rotate($curr-angle)
skewY(-$skew-angle)
scaleX($scale-factor);
// add tip for the item n the middle, just a rotated square
@if $i == ($n - 1)/2 {
a:after {
position: absolute;
top: $exp; left: 50%;
margin: -$tip/2;
width: $tip; height: $tip;
transform: rotate(45deg);
box-shadow:
inset -1px -1px 1px #eee;
background: linear-gradient(-45deg,
#bbb, #c9c4bf 50%);
content: '';
}
}
}
}
a, &:before {
margin: 0 (-$r);
width: 2*$r; height: 2*$r;
border-radius: 50%;
}
&:before, &:after {
position: absolute;
border-radius: 50%;
// undo distorting transforms from menu item (parent li)
transform: scaleX(1/$scale-factor)
skewY($skew-angle);
content: '';
}
// actual background of the arched menu items
&:before {
box-shadow:
inset 0 0 1px 1px #fff,
inset 0 0 $exp #ebe7e2,
inset 0 0 1px ($exp - .0625em) #c9c4bf,
inset 0 0 0 $exp #dcdcdc;
}
// cover to prevent click action in between the star and menu items
&:after {
top: 100%; left: 0;
margin: -$cover-dim/2;
width: $cover-dim; height: $cover-dim;
border-radius: 50%;
}
}
a {
display: block;
// undo distorting transforms from menu item and rotate into right position
transform: scaleX(1/$scale-factor)
skewY($skew-angle)
rotate($angle/2);
line-height: $exp;
text-align: center;
text-decoration: none;
}
```
```
html {
overflow: hidden;
background: url(http://i.imgur.com/AeFfmwL.jpg);
}
input {
/* move offscreen */
-webkit-transform: translate(-100vw);
-ms-transform: translate(-100vw);
transform: translate(-100vw);
/* avoid paint */
visibility: hidden;
}
input:checked ~ ul {
-webkit-transform: scale(1);
-ms-transform: scale(1);
transform: scale(1);
opacity: .999;
/* ease out back from easings.net */
-webkit-transition: 0.5s cubic-bezier(0.175, 0.885, 0.32, 1.275);
transition: 0.5s cubic-bezier(0.175, 0.885, 0.32, 1.275);
}
label, ul, li {
position: absolute;
left: 50%;
bottom: 50%;
}
label, a {
color: #858596;
font: 700 1em/ 2em sans-serif;
text-align: center;
text-shadow: 0 1px 1px #6c6f7e;
cursor: pointer;
}
label {
z-index: 2;
margin: -1em;
width: 2em;
height: 2em;
border-radius: 50%;
box-shadow: 0 0 1px 1px white, 0 .125em .25em #876366, 0 .125em .5em #876366;
background: #d3d3d3;
background: -webkit-radial-gradient(#d4c7c5, #e5e1dd);
background: radial-gradient(#d4c7c5, #e5e1dd);
}
ul {
z-index: 1;
margin: -10em 0;
padding: 0;
list-style: none;
-webkit-transform-origin: 50% -13em;
-ms-transform-origin: 50% -13em;
transform-origin: 50% -13em;
-webkit-transform: scale(0.001);
-ms-transform: scale(0.001);
transform: scale(0.001);
/* for improved perf on transitioning transform
* https://twitter.com/paul_irish/status/608492121734193152
*/
will-change: transform;
opacity: .001;
-webkit-filter: drop-shadow(0 0.125em 0.25em #847c77);
filter: drop-shadow(0 0.125em 0.25em #847c77);
-webkit-transition: 0.5s cubic-bezier(0.6, -0.28, 0.735, 0.045);
transition: 0.5s cubic-bezier(0.6, -0.28, 0.735, 0.045);
}
ul:before, ul:after {
position: absolute;
margin: -3em -0.25em;
width: 0.5em;
height: 3em;
-webkit-transform-origin: 50% 100%;
-ms-transform-origin: 50% 100%;
transform-origin: 50% 100%;
background: #d3d3d3;
background: -webkit-linear-gradient(#ddd, #c9c4bf);
background: linear-gradient(#ddd, #c9c4bf);
content: '';
}
ul:before {
border-radius: 0.5em 0 0 0.5em;
-webkit-transform: rotate(-22.5deg) translate(-0.25em, -13em);
-ms-transform: rotate(-22.5deg) translate(-0.25em, -13em);
transform: rotate(-22.5deg) translate(-0.25em, -13em);
box-shadow: inset 1px 0 1px #eee;
}
ul:after {
border-radius: 0 0.5em 0.5em 0;
-webkit-transform: rotate(22.5deg) translate(0.25em, -13em);
-ms-transform: rotate(22.5deg) translate(0.25em, -13em);
transform: rotate(22.5deg) translate(0.25em, -13em);
box-shadow: inset -1px 0 1px #eee;
}
li {
overflow: hidden;
width: 16em;
height: 16em;
-webkit-transform-origin: 0 100%;
-ms-transform-origin: 0 100%;
transform-origin: 0 100%;
}
li:nth-child(1) {
-webkit-transform: rotate(-22.625deg) skewY(-75deg) scaleX(0.25882);
-ms-transform: rotate(-22.625deg) skewY(-75deg) scaleX(0.25882);
transform: rotate(-22.625deg) skewY(-75deg) scaleX(0.25882);
}
li:nth-child(2) {
-webkit-transform: rotate(-7.5deg) skewY(-75deg) scaleX(0.25882);
-ms-transform: rotate(-7.5deg) skewY(-75deg) scaleX(0.25882);
transform: rotate(-7.5deg) skewY(-75deg) scaleX(0.25882);
}
li:nth-child(2) a:after {
position: absolute;
top: 3em;
left: 50%;
margin: -0.375em;
width: 0.75em;
height: 0.75em;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
box-shadow: inset -1px -1px 1px #eee;
background: -webkit-linear-gradient(135deg, #bbb, #c9c4bf 50%);
background: linear-gradient(-45deg, #bbb, #c9c4bf 50%);
content: '';
}
li:nth-child(3) {
-webkit-transform: rotate(7.625deg) skewY(-75deg) scaleX(0.25882);
-ms-transform: rotate(7.625deg) skewY(-75deg) scaleX(0.25882);
transform: rotate(7.625deg) skewY(-75deg) scaleX(0.25882);
}
li a, li:before {
margin: 0 -16em;
width: 32em;
height: 32em;
border-radius: 50%;
}
li:before, li:after {
position: absolute;
border-radius: 50%;
-webkit-transform: scaleX(3.8637) skewY(75deg);
-ms-transform: scaleX(3.8637) skewY(75deg);
transform: scaleX(3.8637) skewY(75deg);
content: '';
}
li:before {
box-shadow: inset 0 0 1px 1px #fff, inset 0 0 3em #ebe7e2, inset 0 0 1px 2.9375em #c9c4bf, inset 0 0 0 3em #dcdcdc;
}
li:after {
top: 100%;
left: 0;
margin: -13em;
width: 26em;
height: 26em;
border-radius: 50%;
}
a {
display: block;
-webkit-transform: scaleX(3.8637) skewY(75deg) rotate(7.5deg);
-ms-transform: scaleX(3.8637) skewY(75deg) rotate(7.5deg);
transform: scaleX(3.8637) skewY(75deg) rotate(7.5deg);
line-height: 3em;
text-align: center;
text-decoration: none;
}
```
```
<input type='checkbox' id='t'/>
<label for='t'>☰</label>
<ul>
<li><a href='#'>โ</a></li>
<li><a href='#'>โ</a></li>
<li><a href='#'>โ</a></li>
</ul>
```
---
## Original answer
My attempt at doing something of the kind with pure CSS:
# [*demo*](http://dabblet.com/gist/3979221/cb69c5e8ccf0745fff2c94b47b27b108931f1a15)
(click the star)
Works in Chrome, Firefox (a bit weirdish blur effect on hover), Opera (ends look smaller) & Safari (ends look smaller).
```
* { margin: 0; padding: 0; }
body {
overflow: hidden;
}
/* generic styles for button & circular menu */
.ctrl {
position: absolute;
top: 70%; left: 50%;
font: 1.5em/1.13 Verdana, sans-serif;
transition: .5s;
}
/* generic link styles */
a.ctrl, .ctrl a {
display: block;
opacity: .56;
background: #c9c9c9;
color: #7a8092;
text-align: center;
text-decoration: none;
text-shadow: 0 -1px dimgrey;
}
a.ctrl:hover, .ctrl a:hover, a.ctrl:focus, .ctrl a:focus { opacity: 1; }
a.ctrl:focus, .ctrl a:focus { outline: none; }
.button {
z-index: 2;
margin: -.625em;
width: 1.25em; height: 1.25em;
border-radius: 50%;
box-shadow: 0 0 3px 1px white;
}
/* circular menu */
.tip {
z-index: 1;
/**outline: dotted 1px white;/**/
margin: -5em;
width: 10em; height: 10em;
transform: scale(.001);
list-style: none;
opacity: 0;
}
/* the ends of the menu */
.tip:before, .tip:after {
position: absolute;
top: 34.3%;
width: .5em; height: 14%;
opacity: .56;
background: #c9c9c9;
content: '';
}
.tip:before {
left: 5.4%;
border-radius: .25em 0 0 .25em;
box-shadow: -1px 0 1px dimgrey, inset 1px 0 1px white, inset -1px 0 1px grey,
inset 0 1px 1px white, inset 0 -1px 1px white;
transform: rotate(-75deg);
}
.tip:after {
right: 5.4%;
border-radius: 0 .25em .25em 0;
box-shadow: 1px 0 1px dimgrey, inset -1px 0 1px white, inset 1px 0 1px grey,
inset 0 1px 1px white, inset 0 -1px 1px white;
transform: rotate(75deg);
}
/* make the menu appear on click */
.button:focus + .tip {
transform: scale(1);
opacity: 1;
}
/* slices of the circular menu */
.slice {
overflow: hidden;
position: absolute;
/**outline: dotted 1px yellow;/**/
width: 50%; height: 50%;
transform-origin: 100% 100%;
}
/*
* rotate each slice at the right angle = (A/2)° + (k - (n+1)/2)*A°
* where A is the angle of 1 slice (30° in this case)
* k is the number of the slice (in {1,2,3,4,5} here)
* and n is the number of slices (5 in this case)
* formula works for odd number of slices (n odd)
* for even number of slices (n even) the rotation angle is (k - n/2)*A°
*
* after rotating, skew on Y by 90°-A°; here A° = the angle for 1 slice = 30°
*/
.slice:first-child { transform: rotate(-45deg) skewY(60deg); }
.slice:nth-child(2) { transform: rotate(-15deg) skewY(60deg); }
.slice:nth-child(3) { transform: rotate(15deg) skewY(60deg); }
.slice:nth-child(4) { transform: rotate(45deg) skewY(60deg); }
.slice:last-child { transform: rotate(75deg) skewY(60deg); }
/* covers for the inner part of the links so there's no hover trigger between
star button & menu links; give them a red background to see them */
.slice:after {
position: absolute;
top: 32%; left: 32%;
width: 136%; height: 136%;
border-radius: 50%;
/* "unskew" = skew by minus the same angle by which parent was skewed */
transform: skewY(-60deg);
content: '';
}
/* menu links */
.slice a {
width: 200%; height: 200%;
border-radius: 50%;
box-shadow: 0 0 3px dimgrey, inset 0 0 4px white;
/* "unskew" & rotate by -Aยฐ/2 */
transform: skewY(-60deg) rotate(-15deg);
background: /* lateral separators */
linear-gradient(75deg,
transparent 50%, grey 50%, transparent 54%) no-repeat 36.5% 0,
linear-gradient(-75deg,
transparent 50%, grey 50%, transparent 54%) no-repeat 63.5% 0,
/* make sure inner part is transparent */
radial-gradient(rgba(127,127,127,0) 49%,
rgba(255,255,255,.7) 51%, #c9c9c9 52%);
background-size: 15% 15%, 15% 15%, cover;
line-height: 1.4;
}
/* arrow for middle link */
.slice:nth-child(3) a:after {
position: absolute;
top: 13%; left: 50%;
margin: -.25em;
width: .5em; height: .5em;
box-shadow: 2px 2px 2px white;
transform: rotate(45deg);
background: linear-gradient(-45deg, #c9c9c9 50%, transparent 50%);
content: '';
}
```
```
<a class='button ctrl' href='#' tabindex='1'>โ
</a>
<ul class='tip ctrl'>
<li class='slice'><a href='#'>โฆ</a></li>
<li class='slice'><a href='#'>โฟ</a></li>
<li class='slice'><a href='#'>โต</a></li>
<li class='slice'><a href='#'>โช</a></li>
<li class='slice'><a href='#'>โ</a></li>
</ul>
```
|
Why does my linux server resolve all domains (even non-existant ones)
Currently every domain name resolves to my primary server, `primary.example.com`. So for example, if I `ping randomdomain123.blah` I get:
```
PING primary.example.com (1.2.3.4) 56(84) bytes of data.
```
but am expecting a 'host not found' error.
Initially I thought it was because I had `search example.com` in my `/etc/resolv.conf`. However, after removing that, pinging `randomdomain123.blah` still resolves to my primary domain. Restarting the server had no effect either.
I have nothing specified in `/etc/hosts`.
Running `hostname` from another server in the cluster gives `secondary.example.com`.
I use Route 53 as the DNS provider, and relevant DNS seems to be:
```
example.com. A 1.2.3.4
primary.example.com. A 1.2.3.4
*.primary.example.com. CNAME primary.example.com
*.example.com. CNAME www.example.com
www.example.com. CNAME primary.example.com
```
So is this a local networking misconfiguration or some DNS problem? (or both?)
Update: The reason I want/need a wildcard is that I run a webapp off this domain, so customer1.example.com etc. need to resolve to this machine, and it needs to be automatic - I wanted to avoid having to change the DNS after each new customer signs up.
Update 2: My `/etc/resolv.conf` is currently as follows (since I commented out the search line):
```
### Hetzner Online AG installimage
# nameserver config
nameserver 213.133.99.99
nameserver 213.133.100.100
nameserver 213.133.98.98
nameserver 2a01:4f8:0:a102::add:9999
nameserver 2a01:4f8:0:a0a1::add:1010
nameserver 2a01:4f8:0:a111::add:9898
# search example.com
```
Update 3: Running `dig randomdomain123.blah +trace` gives:
```
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6 <<>> randomdomain123.blah +trace
;; global options: +cmd
;; Received 12 bytes from 213.133.99.99#53(213.133.99.99) in 0 ms
```
Update 4: I can confirm that `ping randomdomain123.blah.` with the final dot gives:
```
ping: unknown host randomdomain123.blah.
```
So does that mean that from a Java app on this machine, I need to append dots and use a URL like `http://randomdomain123.blah./somepage.html` to ever generate a HostNotFoundException?
| Your problem is the `search` field in `/etc/resolv.conf` combined with your `*` record. You mentioned that you already tried to remove that setting. But it turns out that omitting it from `/etc/resolv.conf` does not mean that the search feature will be turned off.
If absent from `/etc/resolv.conf` the `search` setting will default to the domain from your hostname.
I don't know if there is an official way to completely disable the `search` feature, but this appeared to work:
```
search .
```
Alternatively, you can point your `search` to a domain without a `*` record, which could contain a few other records for your convenience. For example:
```
search search.example.com
```
Then you can create records such as `server1.search.example.com` but not `*.search.example.com`.
|
Jsoup set accept-header request doesn't work
I'm trying to parse data from tempobet.com in English format. The thing is, when I use the Google REST client it returns the HTML just as I want; however, when I try to parse it via Jsoup it returns dates in my locale's format. This is the test code:
```
import java.io.IOException;
import java.util.Date;
import java.util.ListIterator;
import java.util.Locale;
import org.apache.commons.lang3.time.DateUtils;
import org.jsoup.Connection.Response;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import org.junit.Test;
public class ParseHtmlTest {
@Test
public void testName() throws IOException {
Response response = Jsoup.connect("https://www.tempobet.com/league191_5_0.html")
.userAgent("Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1667.0 Safari/537.36")
.execute();
Document doc = Jsoup.connect("https://www.tempobet.com/league191_5_0.html")
.userAgent("Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1667.0 Safari/537.36")
.header("Accept-Language", "en-US")
.header("Accept-Encoding", "gzip,deflate,sdch")
.cookies(response.cookies())
.get();
Elements tableElement = doc.select("table[class=table-a]");
ListIterator<Element> trElementIterator = tableElement.select("tr:gt(2)").listIterator();
while (trElementIterator.hasNext()) {
ListIterator<Element> tdElementIterator = trElementIterator.next().select("td").listIterator();
while (tdElementIterator.hasNext()) {
System.out.println(tdElementIterator.next());
}
}
}
}
```
here is an example line of response
```
<td width="40" class="grey">21 Nis 20:00</td>
```
The date should be `"21 Apr 20:00"`. I would appreciate any help. Thanks anyway.
| It could be so easy if tempobet would just look at the `Accept-Language` header...
They serve tr (tempobet22.com) and en (tempobet.com) on different domains. The first call to the en domain is redirected to the tr domain. If you choose another language they do two redirects plus their magic session-sharing. For the first redirect you need a `GAMBLINGSESS` cookie from the first domain, for the second one from the second domain. Jsoup does not know this when it's following a redirect...
```
String userAgent = "Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1667.0 Safari/537.36";
// get a session for tr and en domain
String tempobetSession = Jsoup.connect("https://www.tempobet.com/").userAgent(userAgent).execute().cookie("GAMBLINGSESS");
String tempobet22Session = Jsoup.connect("https://www.tempobet22.com/").userAgent(userAgent).execute().cookie("GAMBLINGSESS");
// tell tr domain that we wont to go to en without following the redirect
String redirect = Jsoup.connect("https://www.tempobet22.com/?change_lang=https://www.tempobet.com/")
.userAgent(userAgent).cookie("GAMBLINGSESS", tempobet22Session).followRedirects(false).execute().header("Location");
// Redirect goes to en domain including our hashed tr-cookie as parameter - but this redirect needs a en-cookie
Response response = Jsoup.connect(redirect).userAgent(userAgent).cookie("GAMBLINGSESS", tempobetSession).execute();
// finally...
Document doc = Jsoup.connect("https://www.tempobet.com/league191_5_0.html").userAgent(userAgent).cookies(response.cookies()).get();
```
|
Cannot start a docker container
This is my Dockerfile, which attempts to set up Nginx with Phusion Passenger and then install Ruby.
```
# Build command: docker build --force-rm --tag="rails" .
# Run command: docker run -P -d rails
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
# Install nginx and passenger
RUN sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
RUN sudo sudo apt-get install -yf apt-transport-https ca-certificates
RUN sudo echo 'deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main' > /etc/apt/sources.list.d/passenger.list
RUN sudo chown root: /etc/apt/sources.list.d/passenger.list
RUN sudo chmod 600 /etc/apt/sources.list.d/passenger.list
RUN sudo apt-get update
RUN sudo apt-get install -yf nginx-extras passenger
RUN sudo service nginx restart
# Install rbenv and ruby
RUN sudo apt-get install -yf git autoconf bison build-essential libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm3 libgdbm-dev curl
RUN git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
RUN git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
RUN echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
RUN echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN /root/.rbenv/plugins/ruby-build/install.sh
ENV PATH /root/.rbenv/bin:$PATH
RUN rbenv install 2.1.5
RUN rbenv global 2.1.5
EXPOSE 80
```
Basically, I can build the image just fine, but I cannot start a container from this image.
This is my first time using docker, so I suspect that I need to use the CMD directive, but I have no idea what command should go there.
I appreciate any help to point out what is wrong with my Dockerfile.
| The container is running successfully, it's just exiting immediately as you don't specify any process to run. A container will only run as long as its main process. As you've run it in the background (the `-d` flag) it won't provide any output, which is a bit confusing.
For example:
```
$ docker run ubuntu echo "Hello World"
Hello World
```
The container ran the command and exited as expected.
```
$ docker run -d ubuntu echo "Hello World"
efd8f9980c1c9489f72a576575cf57ec3c2961e312b981ad13a2118914732036
```
The same thing has happened, but as we ran with `-d`, we got the id of the container back rather than the output. We can get the output using the `logs` command:
```
$ docker logs efd8f9980c1c9489f72a576575cf57ec3c2961e312b981ad13a2118914732036
Hello World
```
What you need to do is start your rails app, or whatever process you want the container to run when you launch the container. You can either do this from the `docker run` command or using `CMD` statement in the Dockerfile. Note that the main process must stay in the foreground - if it forks to the background the container will exit.
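For example, assuming you want nginx/Passenger from this Dockerfile to serve the app, one option (a sketch, not the only way) is to keep nginx in the foreground as the main process:

```
# nginx must not daemonize, or the container will exit immediately
CMD ["nginx", "-g", "daemon off;"]
```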
If you want to get a shell in a container, start it with `-it` e.g:
```
$ docker run -it ubuntu /bin/bash
```
To be honest, I think you'd be better served by using an official image e.g. <https://registry.hub.docker.com/_/rails/>.
|
nginx not serving admin static files?
***Clarification***: *The following error is only for the **admin static files**, i.e. it is specific to the static files corresponding to the Django admin. The rest of the static files are working perfectly.*
### Problem
Basically, I cannot access the **admin static files** using the nginx server.
It does work with Django's development micro-server, and `collectstatic` is doing its job, meaning it is putting the files in the expected place in the static folder.
The urls are correct but I cannot access the **admin static files** directly, but the others I can. So, for example:
1. I am able to access this url (copying it in the browser):
`myserver.com:8080/static/css/base/base.css`
2. but I am not able to access this other url (copying it in the browser):
`myserver.com:8080/static/admin/css/admin.css`
---
## What have I tried?
It **does work** if I copy the `admin/` directory structure into `__other_admin_directory_name/__`, and then I access
`myserver.com:8080/static/__other_admin_directory_name__/css/admin.css`
Moreover,
1. I checked permissions and everything is fine.
2. I tried to change ADMIN\_MEDIA\_PREFIX = '/static/admin/' to ADMIN\_MEDIA\_PREFIX = '/static/**other\_admin\_directory\_name**/', but it doesn't work.
Finally, and it seems to be an important clue:
I tried to copy the `admin/` directory structure into `__admin_and_then_any_suffix/__`. Then I cannot access
`myserver.com:8080/static/__admin_and_then_any_suffix/__/css/admin.css`. So, if the name of the directory starts with `admin` (for example administration or `admin2`) then **it doesn't work**.
---
**EDIT - added thanks to @sarnold observation:**
The problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite
```
location /static/admin {
alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
}
```
| My suggestions:
1. Use Django 1.3+ ([ADMIN\_MEDIA\_PREFIX is deprecated now](https://docs.djangoproject.com/en/1.4/releases/1.4/#django-contrib-admin))
2. Set both [`STATIC_URL` and `STATIC_ROOT`](https://docs.djangoproject.com/en/dev/howto/static-files/#deploying-static-files-in-a-nutshell) in your settings.py (see the sketch after this list)
3. Define just a single static entry in your nginx conf (with trailing slashes). ***No need*** for a second one that addresses `static/admin/`:
```
location /static/ {
alias /path/to/static/;
}
```
4. Use `collectstatic` which should collect admin -> static/admin. It will live under the same location as all the rest of your collected static media.
`python manage.py collectstatic`
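For step 2, the relevant settings.py lines might look like this minimal sketch (the path is a placeholder and must match the nginx `alias` above):

```
STATIC_URL = '/static/'
STATIC_ROOT = '/path/to/static/'
```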
|
How to groupby ndarray?
I have the DataFrame (just an example)
```
D = pd.DataFrame({i: {"name": str(i),
"vector": np.arange(i + i % 4, i + i % 4 + 10),
"sq": i ** 2,
"gp": i % 2} for i in range(10)}).T
gp name sq vector
0 0 0 0 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
1 1 1 1 [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
2 0 2 4 [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
3 1 3 9 [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
4 0 4 16 [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
5 1 5 25 [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
6 0 6 36 [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
7 1 7 49 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
8 0 8 64 [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
9 1 9 81 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
```
and I want to group by the column vector and then column gp. How can I do this?
```
from dfply import *
D >>\
groupby(X.vector, X.gp) >>\
summarize(b=X.sq.sum())
```
results in
>
> TypeError: unhashable type: 'numpy.ndarray'
>
>
>
| I think you need to convert column `vector` to tuples first in `pandas`:
```
print(D['sq'].groupby([D['vector'].apply(tuple), D['gp']]).sum().reset_index())
vector gp sq
0 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) 0 0
1 (2, 3, 4, 5, 6, 7, 8, 9, 10, 11) 1 1
2 (4, 5, 6, 7, 8, 9, 10, 11, 12, 13) 0 20
3 (6, 7, 8, 9, 10, 11, 12, 13, 14, 15) 1 34
4 (8, 9, 10, 11, 12, 13, 14, 15, 16, 17) 0 100
5 (10, 11, 12, 13, 14, 15, 16, 17, 18, 19) 1 130
```
Another solution is convert column first:
```
D['vector'] = D['vector'].apply(tuple)
print(D.groupby(['vector','gp'])['sq'].sum().reset_index())
vector gp sq
0 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) 0 0
1 (2, 3, 4, 5, 6, 7, 8, 9, 10, 11) 1 1
2 (4, 5, 6, 7, 8, 9, 10, 11, 12, 13) 0 20
3 (6, 7, 8, 9, 10, 11, 12, 13, 14, 15) 1 34
4 (8, 9, 10, 11, 12, 13, 14, 15, 16, 17) 0 100
5 (10, 11, 12, 13, 14, 15, 16, 17, 18, 19) 1 130
```
And if necessary, convert back to `array` at the end:
```
D['vector'] = D['vector'].apply(tuple)
df = D.groupby(['vector','gp'])['sq'].sum().reset_index()
df['vector'] = df['vector'].apply(np.array)
print (df)
vector gp sq
0 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 0 0
1 [2, 3, 4, 5, 6, 7, 8, 9, 10, 11] 1 1
2 [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] 0 20
3 [6, 7, 8, 9, 10, 11, 12, 13, 14, 15] 1 34
4 [8, 9, 10, 11, 12, 13, 14, 15, 16, 17] 0 100
5 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] 1 130
print (type(df['vector'].iat[0]))
<class 'numpy.ndarray'>
```
I tried your code and it works for me:
```
from dfply import *
D['vector'] = D['vector'].apply(tuple)
a = D >> groupby(X.vector, X.gp) >> summarize(b=X.sq.sum())
a['vector'] = a['vector'].apply(np.array)
print (a)
gp vector b
0 0 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 0
1 1 [2, 3, 4, 5, 6, 7, 8, 9, 10, 11] 1
2 0 [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] 20
3 1 [6, 7, 8, 9, 10, 11, 12, 13, 14, 15] 34
4 0 [8, 9, 10, 11, 12, 13, 14, 15, 16, 17] 100
5 1 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] 130
```
|
Set layout from module controller in yii2
I have three layouts in my layouts folder in main views folder. I added a module called subDomain. In my subDomain module I have a Controller called HomeController. In HomeController there is an action called `getDomain()`.
In the `getDomain()` action I want to change the main layout to `getDomainLayout`. But there is an error when I use this code:
`$this->layout = "getDomainLayout";`
Yii2 throws:
```
Invalid Parameter โ yii\base\InvalidParamException
The view file does not exist: \myyii2\modules\subDomain\views\layouts\bersih.php
```
| There are several options to address this issue.
## Create a layout file in the appropriate module directory
An example below shows a canonical directory structure of some `subDomain` module, including it's layouts (domain.php):
```
subDomain/
Module.php the module class file
controllers/ containing controller class files
HomeController.php the home controller class file
models/ containing model class files
views/ containing controller view and layout files
layouts/ containing layout view files
domain.php the domain layout file
home/ containing view files for HomeController
index.php the index view file
```
Following this simple structure, you can set any layout by its name within the module's controller:
```
namespace myApp\modules\subDomain\controllers;
class HomeController extends Controller {
public function actionGetDomain() {
$this->layout = 'domain'; // equals 'myApp/modules/subDomain/views/layouts/domain'
}
}
```
This is the preferred way, because modules are self-contained software units that consist of their own models, layouts, controllers, etc.
## Specify the complete path to the directory that contains your layout file
In some cases you might want to use a layout file that is located outside the module directory:
```
class HomeController extends Controller {
public function actionGetDomain() {
$this->layout = '@app/views/layouts/main';
}
}
```
Where *[@app](http://www.yiiframework.com/wiki/667/yii-2-list-of-path-aliases-available-with-default-basic-and-advanced-app/#hh0)* is the base path of the currently running application, for example:
```
myApp/frontend
```
In this situation, be sure that the **main.php** layout file exists in the following directory:
```
myApp/frontend/views/layouts/main.php
```
|
Python Loop Behavior with Generators
I am confused about why different looping constructs behave so differently when used with a simple generator. Consider the following code:
```
example_list = [1, 2, 3, 4, 5]
for_count = 0
next_count = 0
while_count = 0
def consumer(iterable):
for item in iterable:
yield
return
for item in consumer(example_list):
print("For Count: {0}".format(for_count))
for_count += 1
# First while loop
while consumer(example_list).next():
print("Next Count: {0}".format(next_count))
next_count += 1
# Second while loop
while consumer(example_list):
print("While Count: {0}".format(while_count))
while_count += 1
if while_count > 10: # add contrived limit for infinite loop
break
```
The output for this code is:
>
> For Count: 0
>
> For Count: 1
>
> For Count: 2
>
> For Count: 3
>
> For Count: 4
>
> While Count: 0
>
> While Count: 1
>
> While Count: 2
>
> While Count: 3
>
> While Count: 4
>
> While Count: 5
>
> While Count: 6
>
> While Count: 7
>
> While Count: 8
>
> While Count: 9
>
> While Count: 10
>
>
>
I would appreciate help understanding the following:
1. How does the `for` loop know when to end but not the second `while` loop? I expected both `while` loops to exit immediately since `None` is yielded.
2. Why doesn't using `.next()` raise an exception? The `consumer` routine isn't a class with a `__next__()` method defined, so does it magically appear when you use the `yield` keyword?
3. Why is it that if I change `consumer` to `yield item`, the first `while` loop becomes infinite like the second one?
4. If I change `consumer` to simply return instead of yielding anything, the second `while` loop exits immediately instead of becoming infinite. I've been under the impression that a `yield` is essentially a `return` that you can resume from. Why are they treated differently by a `while` loop?
| ## The `for` loop
Your first for loop works as expected.
**Update**: [Mark Ransom](https://stackoverflow.com/users/5987/mark-ransom) noted that your `yield` is not accompanied by the expected `item`, so it just returns `[None, None, None, None, None]` rather than `[1, 2, 3, 4, 5]` - but it still iterates over the list.
## The first `while` loop
[The very same commentator](https://stackoverflow.com/users/5987/mark-ransom) also noticed that the first while loop never starts: the bare `yield` produces `None`, which is a `False`-equivalent in Python, so the very first `.next()` call makes the loop condition false.
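A quick interactive check makes this visible (my own illustration, using the question's Python 2 style `.next()`; in Python 3 it would be `next(gen)`):
```
>>> gen = consumer(example_list)
>>> print(gen.next())  # the bare `yield` produces None
None
>>> bool(None)
False
```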
## The second `while` loop
In the second while loop, you are testing the value of `consumer(example_list)`. This is the generator object itself, not the values returned by its `next()` method. The object itself never equals `None`, or any other `False` equivalent - so your loop never ends.
This can be seen by printing the value of `consumer(example_list)`, your while condition, within the loop:
```
>>> while_count=0
>>> while consumer(example_list):
... print while_count, consumer(example_list)
... while_count += 1
... if while_count > 10:
... break
```
Giving:
```
0 <generator object consumer at 0x1044a1b90>
1 <generator object consumer at 0x1044a1b90>
2 <generator object consumer at 0x1044a1b90>
3 <generator object consumer at 0x1044a1b90>
4 <generator object consumer at 0x1044a1b90>
5 <generator object consumer at 0x1044a1b90>
6 <generator object consumer at 0x1044a1b90>
7 <generator object consumer at 0x1044a1b90>
8 <generator object consumer at 0x1044a1b90>
9 <generator object consumer at 0x1044a1b90>
10 <generator object consumer at 0x1044a1b90>
```
The second item is the object, which never equals `None`.
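For completeness, a `while` loop that does terminate (my own sketch, reusing one generator object and stopping on `StopIteration`, in the question's Python 2 style):
```
gen = consumer(example_list)
while True:
    try:
        gen.next()  # next(gen) in Python 3
    except StopIteration:
        break
    while_count += 1
```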
|
scala syntax question :/ and ~
Could anybody help me and explain what `:/` and `~` are in Scala? For example:
```
json = http(:/("api.twitter.com") / "1/users/show.json" <<? Map("screen_name" -> "aloiscochard") >~ { _.getLines.mkString })
```
from: <http://aloiscochard.blogspot.com/2011/05/simple-rest-web-service-client-in-scala.html>
| In the code that you link to, note the `import dispatch._`. This imports the [dispatch library](http://databinder.net/dispatch-doc/).
In that library we find an object `:/` that has an apply method, so that's what `:/("api.twitter.com")` means.
Also, there is no `~` in the code--only a `>~`. In Scala, almost any group of symbols can be a method name, and such methods can be used as operators. So `>~` must be a method on something.
Looking around, we find that `HandlerVerbs` defines a `>~` method that will "Handle response as a scala.io.Source, in a block."
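To illustrate the two mechanisms in isolation, here is a sketch of the general pattern with hypothetical names (not dispatch's actual source):
```
// an object's `apply` method lets it be called like a function,
// and a symbolic method name can be used as an infix operator
object :/ {
  def apply(host: String): String = "http://" + host
}

class Handler(url: String) {
  def >~(f: String => String): String = f(url) // symbolic method name
}

val base = :/("api.twitter.com")                    // calls :/.apply(...)
val result = new Handler(base) >~ { _.toUpperCase } // infix call with a block
```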
To understand what the code does in detail, you need to understand the `dispatch` library, which I don't.
This library seems to be very DSL-heavy. As such, it may be an excellent choice if you are doing lots and lots of dispatch work (because dispatching hopefully can be done in an intuitive and clean way). But it may be a horrible choice for one-off usage, since you have to be quite familiar with the library to understand what it might be doing (due to the choice of very short method names like `>~`).
|
Concat MP3/media audio files on amazon S3 server
I want to concatenate the files uploaded to an Amazon S3 server.
How can I do this?
On a local machine, I can do the concatenation using the following code.
```
var fs = require('fs'),
files = fs.readdirSync('./files'),
clips = [],
stream,
currentfile,
dhh = fs.createWriteStream('./concatfile.mp3');
files.forEach(function (file) {
clips.push(file.substring(0, 6));
});
function main() {
if (!clips.length) {
dhh.end("Done");
return;
}
currentfile = './files/' + clips.shift() + '.mp3';
stream = fs.createReadStream(currentfile);
stream.pipe(dhh, {end: false});
stream.on("end", function() {
main();
});
}
main();
```
| You can achieve what you want by breaking it into two steps:
# Manipulating files on s3
Since S3 is remote file storage, you can't run code on the S3 server to do the operation there (as @Andrey mentioned).
What you will need to do in your code is fetch each input file, process them locally, and upload the results back to S3. Check out the code examples from [amazon](http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-examples.html):
```
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'mp3-input1.mp3'};
var file = require('fs').createWriteStream('/path/to/input.mp3');
s3.getObject(params).createReadStream().pipe(file);
```
At this stage you'll run your concatenation code, and upload the results back:
```
var fs = require('fs');
var zlib = require('zlib');
var body = fs.createReadStream('bigfile.mp3').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}).
on('httpUploadProgress', function(evt) { console.log(evt); }).
send(function(err, data) { console.log(err, data) });
```
# Merging two (or more) mp3 files
Since MP3 files include a header that specifies information like the bitrate, simply concatenating them together might introduce playback issues.
See: <https://stackoverflow.com/a/5364985/1265980>
You'll want to use a tool for that. One approach is to save your input mp3 files in a tmp folder and execute an external program (such as ffmpeg) to change the bitrate, concatenate the files, and fix the header.
Alternatively, you can use a library that allows you to use [ffmpeg within node.js.](https://github.com/fluent-ffmpeg/node-fluent-ffmpeg)
In the code example shown below, you can see how to merge files together within the node API.
```
ffmpeg('/path/to/part1.avi')
.input('/path/to/part2.avi')
.input('/path/to/part3.avi')
.on('error', function(err) {
console.log('An error occurred: ' + err.message);
})
.on('end', function() {
console.log('Merging finished !');
})
.mergeToFile('/path/to/merged.avi', '/path/to/tempDir');
```
|
Why does Julia fail to solve linear systems systematically?
The problem Ax=b for square A is solved by the \ function. With that in mind, I've tried to do the following:
```
A = rand(1:4,3,3)
x = fill(1.0, 3)
b = A * x
A\b
```
For some reason, the code seems to work at times. But sometimes it returns the following error:
```
LinearAlgebra.SingularException(3)
Stacktrace:
[1] checknonsingular
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:19 [inlined]
[2] checknonsingular
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:21 [inlined]
[3] #lu!#136
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:85 [inlined]
[4] #lu#140
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:273 [inlined]
[5] lu (repeats 2 times)
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:272 [inlined]
[6] \(A::Matrix{Int64}, B::Vector{Float64})
@ LinearAlgebra /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/generic.jl:1136
[7] top-level scope
@ In[208]:4
[8] eval
@ ./boot.jl:360 [inlined]
[9] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1116
```
So, I tried to understand what is happening: I executed the code 10000000 times and found out that it failed 10% of the time.
```
using Printf
i = 0
test = 10000000
for x in 1:test
try
A = rand(1:4,3,3)
x = fill(1.0, 3)
b = A * x
A\b
catch
i = i+1
end
end
fail_percentage = (i/test)*100
@printf "this code has failed in %.2f%%" fail_percentage
```
Can someone explain to me what is happening here?
| The error is explicit: `LinearAlgebra.SingularException`. This is not a failure of Julia, but a property of the particular system of equations.
There is no single solution if the matrix `A` is singular: either infinitely many solutions (when `b` lies in the column space of `A`, as in a homogeneous system) or none at all. It seems you have empirically estimated the probability of generating a singular system with the properties you tested (dimensions of `A` and `x`, `x` filled with 1s, entries of `A` between 1 and 4).
If, like the OP, you are looking to skip singular matrices, you need to ensure the determinant of `A` is not 0. You can either use the built-in `det` function to check and skip such matrices, or construct them directly: the determinant is itself an equation, so for your 3x3 example with no 0 entries you can choose 8 numbers freely and calculate which value the 9th cannot take for the determinant to stay non-zero. If you allow for 0s you need to check all possibilities.
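For instance, a minimal sketch of the check-and-skip approach (my own; `det` comes from the standard `LinearAlgebra` library, and the tolerance guards against floating-point round-off):
```
using LinearAlgebra

function nonsingular_rand()
    A = rand(1:4, 3, 3)
    while abs(det(A)) < 1e-9  # regenerate until A is nonsingular
        A = rand(1:4, 3, 3)
    end
    return A
end

A = nonsingular_rand()
x = fill(1.0, 3)
b = A * x
A \ b  # now succeeds without a SingularException
```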
|
F# code quotation invocation, performance, and run-time requirements
Here are 4 deeply related questions about F# code quotations -
How do I invoke an F# code quotation?
Will it be invoked in a manner less efficient than if it were just a plain old F# lambda? To what degree?
Will it require run-time support for advanced reflection or code-emitting functionality (which is often absent or prohibited from embedded platforms I am targeting)?
| Quotations are just data, so you can potentially "invoke" them in whatever clever way you come up with. For instance, you can simply walk the tree and interpret each node as you go, though that wouldn't perform particularly well if you're trying to use the value many times and it's not a simple value (e.g. if you've quoted a lambda that you want to invoke repeatedly).
If you want something more performant (and also simpler), then you can just use `Linq.RuntimeHelpers.LeafExpressionConverter.EvaluateQuotation`. This doesn't support all possible quotations (just roughly the set equivalent to C# LINQ expressions), and it's got to do a bit more work to actually generate IL, etc., but this should be more efficient if you're reusing the result. This does its work by first converting the quotation to a C# expression tree and then using the standard `Compile` function defined there, so it will only work on platforms that support that.
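A minimal sketch of the `EvaluateQuotation` route (it returns `obj`, so you cast back to the expected function type):
```
open Microsoft.FSharp.Quotations
open Microsoft.FSharp.Linq.RuntimeHelpers

// a quoted lambda - just data until we evaluate it
let q = <@ fun x -> x + 1 @>

// convert/compile it; EvaluateQuotation returns obj, so cast back
let f = LeafExpressionConverter.EvaluateQuotation (q :> Expr) :?> (int -> int)

printfn "%d" (f 41) // 42
```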
|
How to return getVisibleValues() with Google Apps Script?
Do you know why the `getVisibleValues()` function doesn't return any values? I would like to get a filtering criterion; for example, in this case (screenshot) I'd like to get the 'A' value. I tried `getCriteriaValues()` and `getValues()` and they don't work either - they don't return any values. Only the `getHiddenValues()` function works and returns "B, C, D" correctly.
![screenshot](https://i.stack.imgur.com/SM2NI.png)
```
function myFunction() {
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Test1');
var filter = sheet.getRange('A1:C').getFilter();
var values = filter.getColumnFilterCriteria(3).getVisibleValues();
Logger.log(values);
}
```
| This seems to be a bug. You can report in the [Issue Tracker](https://issuetracker.google.com/issues?q=componentid:191640%2B).
### This method is not currently supported:
See issue [159051708](https://issuetracker.google.com/issues/159051708):
>
> Status: Won't Fix (Intended Behavior) Hello,
>
>
> Unfortunately those methods are not currently supported. As you try to
> use them in your script you will see an Exception message:
>
>
> Visible values are not currently supported. As an alternative specify
> a list of hidden values that excludes the values that should be
> visible.
>
>
> Hidden values are supported.
>
>
>
You could create a separate function to simulate this:
```
function test_getVisibleValues() {
const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Test1");
const columnRange = sheet.getRange("C2:C");
const filter = columnRange.getFilter();
const visibleValues = getVisibleValues(filter, columnRange);
console.log(visibleValues);
}
/**
* Returns the values to show.
* Simulates https://developers.google.com/apps-script/reference/spreadsheet/filter-criteria#getvisiblevalues
* @param {Filter} filter
* @param {Range} range
* @returns {string[]}
*/
function getVisibleValues(filter, range) {
const filterCriteria = filter.getColumnFilterCriteria(range.getColumn());
const hiddenValues = filterCriteria.getHiddenValues();
const allNonBlankValues = range.getValues().filter(String);
const uniqueValues = Array.from(new Set(allNonBlankValues));
// filter unique values that aren't hidden
return uniqueValues.flat().filter(value => !hiddenValues.includes(value));
}
```
|
Summarize array of objects and calculate average value for each unique object name
I have an array like so:
```
var array = [
{
name: "a",
value: 1
},
{
name: "a",
value: 2
},
{
name: "a",
value: 3
},
{
name: "b",
value: 0
},
{
name: "b",
value: 1
}
];
```
And I need an array like this:
```
var newarray = [
{
name: "a",
value: 2
},
{
name: "b",
value: 0.5
}
]
```
Where the new array has each unique name as an object with the average value.
Is there an easy way to accomplish this?
| You'll have to loop through the array, computing the sum and counts for each object. Here's a quick implementation:
```
function average(arr) {
var sums = {}, counts = {}, results = [], name;
for (var i = 0; i < arr.length; i++) {
name = arr[i].name;
if (!(name in sums)) {
sums[name] = 0;
counts[name] = 0;
}
sums[name] += arr[i].value;
counts[name]++;
}
for(name in sums) {
results.push({ name: name, value: sums[name] / counts[name] });
}
return results;
}
```
[**Demonstration**](http://jsfiddle.net/ELMSq/)
Note, this kind of thing can be made much easier if you use a library like [Underscore.js](http://underscorejs.org/):
```
var averages = _.chain(array)
.groupBy('name')
.map(function(g, k) {
return {
name: k,
value: _.chain(g)
.pluck('value')
.reduce(function(x, y) { return x + y })
.value() / g.length
};
})
.value();
```
[**Demonstration**](http://jsfiddle.net/pEzY3/)
|
MySql procedure IF number is odd or even
I'm learning MySQL and my question is about checking what I'm doing wrong here:
I'm trying to make an SQL procedure to check whether the **first digit** of a number the user enters is **odd** or **even**, but I'm getting a syntax error in line 9 near **';//'**.
Here's the sql:
```
MYSQL>
DELIMITER //
CREATE PROCEDURE num()
BEGIN
IF (SELECT LEFT(num,1))=1 OR (SELECT LEFT(num,1))=3 OR (SELECT LEFT(num,1))=5 OR (SELECT LEFT(num,1))=7 THEN
SELECT 'number is odd';
ELSEIF (SELECT LEFT(num,1))=2 OR (SELECT LEFT(num,1))=4 OR (SELECT LEFT(num,1))=6 OR (SELECT LEFT(num,1))=8 THEN
SELECT 'number is even';
END IF;
END;//
```
And here is the CALL with the number for testing:
```
MYSQL> CALL num(3123123123)
```
Any ideas?
| What you want to do is a calculation. `% 2` will give the remainder of a division by 2. If that remainder is not 0, then the number is odd.
```
SELECT IF(LEFT(num, 1) % 2 <> 0, "number is odd", "number is even")
```
You also want to fix your procedure to something like this:
```
DELIMITER //
CREATE PROCEDURE `num`( IN input_num int, OUT output varchar(200))
BEGIN
SELECT IF(LEFT(input_num, 1) % 2 <> 0, "number is odd", "number is even") INTO output;
END//
```
You would call it like this:
```
set @a = '';
call num(333, @a);
select @a;
```
[**demo fiddle**](http://sqlfiddle.com/#!2/cd3f0/1)
|
Error: Uncaught (in promise): Failed to load login.component.html
I tried to access a custom built html using templateUrl in Angular2.
Here is my login.component.ts
```
import {Component} from '@angular/core';
@Component({
selector: 'login' ,
templateUrl : './login.component.html'
})
export class loginComponent{
}
```
Here is my login.component.html
```
<div class="container">
<input type="text" placeholder="username">
<input type="text" placeholder="password">
<button>Login</button>
</div>
```
My directory structure has both login.component.ts and login.component.html in the same location.
When I compile this code I am getting an error stating
>
> localhost:8081/login.component.html not found 404
>
>
> Unhandled Promise rejection: Failed to load login.component.html ;
> Zone: ; Task: Promise.then ; Value: Failed to load
> login.component.html undefined
>
>
>
| You need to configure your app to use relative URLs.
**tsconfig.json**
```
{
"compilerOptions": {
"module": "commonjs",
"target": "es5",
//... other options
}
}
```
**login.component.ts**
```
import {Component} from '@angular/core';

@Component({
moduleId: module.id, // fully resolved filename; defined at module load time
selector: 'login' ,
templateUrl : './login.component.html'
})
export class loginComponent{
}
```
>
> The key lesson is to set the moduleId : module.id in the @Component
> decorator! Without the moduleId setting, Angular will look for our
> files in paths relative to the application root.
>
>
> And don't forget the "module": "commonjs" in your tsconfig.json.
>
>
> The beauty of this
> component-relative-path solution is that we can (1) easily repackage
> our components and (2) easily reuse componentsโฆ all without changing
> the @Component metadata.
>
>
>
<https://blog.thoughtram.io/angular/2016/06/08/component-relative-paths-in-angular-2.html>
|
Why does the Web Deploy Agent Service listen on port 80 and 8172
On four Win Server 2008 R2 boxes, we have MS Deploy installed. It listens on ports 80 and 8172, which throws into doubt all the other material out there on the web about how there's no need to change the port 80 default when running on IIS 7.
I can't understand it. Why is it using BOTH ports? I can't follow the instructions to move it to 8172 since it has something already registered there.
There are two problems it causes: 1) an extra attack surface I want to close 2) it keeps our load-balancer from detecting when IIS is stopped and customers get 404s from MsDepSvc!
It's so stupid.
| I **think** I have worked this out.
There are two Web Deploys in the world: one is installed with the Web Management Service (WMSvc) - people call it Web Deploy anyway and use it from Visual Studio via :8172/msdeploy.axd - and then there's Web Deploy proper, the extra thing you install to allow publishing from the public internet.
All this time, me and my colleagues and people I've worked with at different companies, have all been needlessly installing Web Deploy and then, not even using it.
That's my *theory*. And now I will go and disable MsDepSvc and see if it holds.
**Update 1 - This is incorrect. Sort of.**
On a new server, thinking that Web Deploy is built-in to WMSvc, I kept getting 404.7 errors from msdeploy.exe until I installed Web Deploy - because a nice fella named Richard said "Web Deploy registers a handler with WMSvc".
Ha! So deployment is not natively a part of WMSvc. And after installing Web Deploy, you end up with two deployment handlers: one in WMSvc, and the other a dedicated Windows Service, MsDepSvc - and you can disable MsDepSvc to prevent it sucking on port 80 and fooling your load-balancer into thinking the server is up when it's down!
[Getting a 404 from WMSvc via MSDeploy.exe](https://stackoverflow.com/questions/13870561/getting-a-404-from-wmsvc-via-msdeploy-exe)
|
Azure Service Bus - Two Way Communication Performance Challenge
I need to establish a two-way communication between a Publisher and a Subscriber. This is to facilitate a front-end MVC3 application defining a Subscription with a Correlation Filter, and then placing a message onto a Topic. Finally, the MVC3 controller calls BeginReceive() on the SubscriptionClient, and awaits the response.
The issue seems to be the creation and deletion of these Subscription objects. The overhead is enormous, and it slows the application to a crawl. This is not to mention the various limitations to work around, such as no more than 2000 Subscriptions on a Topic.
What is the best practice for establishing this kind of two-way communication between a Publisher and Subscriber? We want the MVC3 app to publish a message and then wait for a response to that exact message (via the CorrelationId property and a CorrelationFilter). We already cache the NamespaceManager and MessagingFactory, as those are also prohibitively expensive, resource-wise, and also because we were advised that Service Bus uses an explicit provisioning model, where we are expected to pre-create most of these things during role startup.
So, this leaves us with the challenge of correlating request to response, and having this tremendous overhead of the creation and deletion of Subscriptions. What better practice exists? Should we keep a cache of SubscriptionClients, and swap the Filter each time? What does everyone else do? I need to have a request throughput on the order of 5 to 10 thousand MVC3 requests per second through the Web Role cluster. We are already using AsyncController and employing the asynchronous BeginReceive() on SubscriptionClient. It appears to be the creation and deletion of the Subscriptions by the thousands that is choking the system at this point.
**UPDATE1:**
Based on the great advice provided here, we have updated this solution to keep a cache of SubscriptionClient objects on each web role instance. Additionally, we have migrated to a MessageSession oriented approach.
However, this is still not scaling. It seems that AcceptMessageSession() is a very expensive operation. Should MessageSession objects also be cached and re-used? Does each open MessageSession object consume a connection to the Service Bus? If so, is this counted against the Subscription's concurrent connection quota?
Many thanks. I think we are getting there. Most of the example code on the web shows: CreateTopic(), then CreateSubscription(), then CreateSubscriptionClient(), then BeginReceive() on the client, then teardown of all of the objects. All I can say is if you did this in real life, your server would be crushed, and you would max out on connections in no time.
We need to put thousands of requests per second through this thing, and it is very apparent that these objects must be cached and reused heavily. So, is MessageSession yet another item to cache? I will have fun caching that, because we will have to implement a reference counting mechanism, where only one reference to the MessageSession can be given out at a time, since this is for http request-specific request/response, and we cannot have other subscribers using the MessageSession objects concurrently.
**UPDATE2:**
OK, it is not feasible to cache MessageSession for re-use, because they only live as long as the LockDuration on the Subscription. This is a bummer, because the maximum LockDuration is 5 minutes. These appear to be for pub/sub of short duration, not for long-running distributed processes. It looks like we need to back to polling Azure Tables.
**SUMMARY/COMMENTARY**
We tried to build on Service Bus because of the scale potential and its durability and delivery semantics. However, it seems that there are situations, high-volume request/response among them, that are not suited to it. The publishing part works great, and having competing consumers on the back-end is great, but having a front-end request wait on a defined, single-consumer response, does not scale well at all, because the MessageSessions take way too long to create via AcceptMessageSession() or BeginAcceptMessageSession(), and because they are not suited to caching.
If someone has an alternative view, I would love to hear it.
| This scenario is a classic request/response and a good candidate for sessions, which are another correlation mechanism. Make a simple request queue and response queue. Each web role thread creates a unique session id for a request and puts that value in the "ReplyToSessionId" property of the brokered message. This thread also calls [AcceptMessageSession](http://msdn.microsoft.com/en-us/library/windowsazure/hh293162.aspx) on the response queue with the session id value, which locks it. The brokered message is sent to the request queue, and all worker roles compete for messages. When a worker role gets a request, it processes it, creates a response message, and sets the session id property on the response message to the ReplyToSessionId of the request. This is then sent to the response queue and will only be delivered to the thread that has locked that session id. A detailed sample [using sessions is here](http://code.msdn.microsoft.com/windowsazure/Brokered-Messaging-Request-2b4ff5d8). There are 2 additional samples here using [Queues](http://code.msdn.microsoft.com/windowsazure/Brokered-Messaging-Request-0ce8fcaf) and [Topics](http://code.msdn.microsoft.com/windowsazure/Brokered-Messaging-Request-6759a36e) to achieve the request/response correlation.
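A minimal sketch of that flow with the classic `Microsoft.ServiceBus.Messaging` API (illustrative only - setup, error handling, and timeouts are omitted, and the two queue clients are assumed to exist):
```
// Requestor (web role thread): lock a session on the response queue first.
var sessionId = Guid.NewGuid().ToString();
var request = new BrokeredMessage("do work") { ReplyToSessionId = sessionId };
var responseSession = responseQueueClient.AcceptMessageSession(sessionId);
requestQueueClient.Send(request);
var response = responseSession.Receive(TimeSpan.FromSeconds(30));

// Worker role: copy the request's ReplyToSessionId into the reply's SessionId.
var req = requestQueueClient.Receive();
var reply = new BrokeredMessage("result") { SessionId = req.ReplyToSessionId };
responseQueueClient.Send(reply);
req.Complete();
```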
|
Activity Indicator not appearing
I have some heavy code that runs for around 0.2 seconds.
I set up the activity indicator like this; however, it doesn't show up, but rather the whole screen freezes for around 0.2seconds until the code finishes.
```
func heavyWork() {
self.actvityIndicator.startAnimating()
...
// heavy loop codes here
...
self.activityIndicator.stopAnimating()
}
```
Is this the correct way of using the activity indicator?
When I comment out
```
// self.activityIndicator.stopAnimating()
```
the activity indicator shows up and stays there - the codes are set up right.
But the UI doesn't seem to be updated at the right time.
As I said, the screen just freezes without showing the activity indicator until the heavy code is done.
| Maybe you want to carry on with such a pattern instead:
```
func heavyWork() {
self.actvityIndicator.startAnimating()
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), { () -> Void in
// ...
// heavy loop codes here
// ...
dispatch_async(dispatch_get_main_queue(), { () -> Void in
self.activityIndicator.stopAnimating()
})
});
}
```
as the heavy work should happen in a *background* thread and you need to update the UI on a *main* thread after.
---
*NOTE: obviously it is assumed you call the* `func heavyWork()` *on a main thread; if not, you might need to dispatch the initial UI updates to the main thread as well.*
|
How to remove elevation shadow from one side without removing elevation itself in flutter from Card or Material widget?
How can I remove the top-side elevation shadow on a Card or Material widget?
I used a Material widget on a container and gave it a value for elevation. It applies the shadow to my container on all sides, but I want the elevation shadow only on the left, bottom and right sides. How can I achieve that, or remove the top-side elevation shadow?
An example with the Material or Card widget would be useful.
```
Material(
elevation: 3,
child: Container(
height: 100,
width: 300,
),
)
```
[![Example picture](https://i.stack.imgur.com/TYBDV.jpg)](https://i.stack.imgur.com/TYBDV.jpg)
| For that, you just have to bring the `shadow` a little bit down by increasing the *y-axis* of the `offset` property just like this:
```
Container(
height: 100.0,
width: 300.0,
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(20.0),
color: Colors.white,
boxShadow: [
BoxShadow(
spreadRadius: 2,
blurRadius: 3,
offset: Offset(0, 6),
color: Colors.black38
)
]
),
),
```
Here's the output:
[![enter image description here](https://i.stack.imgur.com/urepL.png)](https://i.stack.imgur.com/urepL.png)
|
WebSockets and text encoding
I read:
>
> The WebSocket API accepts a DOMString object, which is encoded as
> UTF-8 on the wire, or one of ArrayBuffer, ArrayBufferView, or Blob
> objects for binary transfers.
>
>
>
A `DOMString` is a UTF-16 encoded string. So is it correct that UTF-8 encoding is used over the wire?
| # Yes, it is correct.
UTF-16 may or may not be used in memory; that is just an implementation detail of whatever framework you are using. In the case of JavaScript, strings are UTF-16.
For WebSocket communications, UTF-8 must be used over the wire for textual data (most Internet protocols use UTF-8 nowadays). That is dictated by the [WebSocket protocol specification](https://www.rfc-editor.org/rfc/rfc6455):
>
> After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to in this specification as "messages". On the wire, a message is composed of one or more frames. The WebSocket message does not necessarily correspond to a particular network layer framing, as a fragmented message may be coalesced or split by an intermediary.
>
>
> A frame has an associated type. Each frame belonging to the same message contains the same type of data. Broadly speaking, **there are types for textual data (which is interpreted as UTF-8 [RFC3629] text)**, binary data (whose interpretation is left up to the application), and control frames (which are not intended to carry data for the application but instead for protocol-level signaling, such as to signal that the connection should be closed). This version of the protocol defines six frame types and leaves ten reserved for future use.
>
>
> ...
>
>
> Data frames (e.g., non-control frames) are identified by opcodes where the most significant bit of the opcode is 0. Currently defined opcodes for data frames include **0x1 (Text)**, 0x2 (Binary). Opcodes 0x3-0x7 are reserved for further non-control frames yet to be defined.
>
>
> Data frames carry application-layer and/or extension-layer data. The opcode determines the interpretation of the data:
>
>
> Text
>
>
> **The "Payload data" is text data encoded as UTF-8**. Note that a particular text frame might include a partial UTF-8 sequence; however, **the whole message MUST contain valid UTF-8**. Invalid UTF-8 in reassembled messages is handled as described in Section 8.1.
>
>
> Binary
>
>
> The "Payload data" is arbitrary binary data whose interpretation is solely up to the application layer.
>
>
>
You will incur a small amount of overhead converting from UTF-16 to UTF-8 and back to UTF-16, but the overhead is minimal on modern machines, and conversions between UTFs are lossless.
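A quick way to see the difference in sizes (my own illustration, using the standard `TextEncoder` API rather than WebSocket internals):
```
// "café" is 4 UTF-16 code units in a JavaScript string,
// but its UTF-8 wire form takes 5 bytes ("é" encodes as 0xC3 0xA9).
const s = "café";
const utf8 = new TextEncoder().encode(s);
console.log(s.length, utf8.length); // 4 5
```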
|
jQuery validation of multiple not equal inputs
I've managed to set up the jQuery validate plugin on my form and I gave rules for two fields whose values should not match. Specifically, the email and email_2 inputs cannot be the same now, and that works. But my real need is to validate multiple inputs in the same way (in this case 4 of them). I have email, email_2, email_3, email_4 and no two of them should be equal. You can see it in my [jsfiddle here](http://jsfiddle.net/zPetY/), and if you have a solution, you can update it and put it back in your answer:
html:
```
<form id="signupform" method="post">
<input id="email" name="username" />
<br/ >
<input id="email_2" name="email_2" />
<br />
<input id="email_3" name="email_3" />
<br />
<input id="email_4" name="email_4" />
<br />
<input id="submit" type="submit" value="submit" />
</form>
```
jquery:
```
$.validator.addMethod("notequal", function(value, element) {
return $('#email').val() != $('#email_2').val()},
"* You've entered this one already");
// validate form
$("#signupform").validate({
rules: {
email: {
required: true,
notequal: true
},
email_2: {
required: true,
notequal: true
},
},
});
```
What is the best solution for validating all the inputs?
| Yes, it still works with 1.10.1
[***DEMO***](http://jsfiddle.net/j952T/)
source: <http://www.anujgakhar.com/2010/05/24/jquery-validator-plugin-and-notequalto-rule-with-multiple-fields/>
HTML
```
<form id="signupform" method="post">
<p>
<input class="distinctemails" id="email" name="username" /></p>
<p>
<input class="distinctemails" id="email_2" name="email_2" /></p>
<p>
<input class="distinctemails" id="email_3" name="email_3" /></p>
<p>
<input class="distinctemails" id="email_4" name="email_4" /></p>
<p>
<input id="submit" type="submit" value="submit" />
</p>
</form>
```
jQuery
```
jQuery.validator.addMethod("notEqualToGroup", function (value, element, options) {
// get all the elements passed here with the same class
var elems = $(element).parents('form').find(options[0]);
// the value of the current element
var valueToCompare = value;
// count
var matchesFound = 0;
// loop each element and compare its value with the current value
// and increase the count every time we find one
jQuery.each(elems, function () {
thisVal = $(this).val();
if (thisVal == valueToCompare) {
matchesFound++;
}
});
// count should be either 0 or 1 max
if (this.optional(element) || matchesFound <= 1) {
//elems.removeClass('error');
return true;
} else {
//elems.addClass('error');
}
}, jQuery.format("Please enter a Unique Value."))
// validate form
$("#signupform").validate({
rules: {
email: {
required: true,
notEqualToGroup: ['.distinctemails']
},
email_2: {
required: true,
notEqualToGroup: ['.distinctemails']
},
email_3: {
required: true,
notEqualToGroup: ['.distinctemails']
},
email_4: {
required: true,
notEqualToGroup: ['.distinctemails']
},
},
});
```
|
Understanding how dynamic linking works on UNIX
Consider we have the following situation:
- a program named `program` which depends dynamically on `libfoo.so`
- `libfoo.so` that depends on nothing (well, it depends on `libstdc++` and stuff but I guess we can omit that)
`program` runs perfectly.
Suddenly, `libfoo`'s code changes, and some function now internally uses `func_bar()`, a function provided by another library, `libbar.so`.
`libfoo.so` is recompiled and now depends on `libbar.so`. `program` remains unchanged, it still depends only on `libfoo.so`.
Now when I execute `program` it complains that it can't find `func_bar()`.
Here are my questions:
- `libfoo.so`'s interface didn't change, only its implementation. Why does `program` have to **explicitly** link with `libbar.so`?
- Isn't the dependency tree recursive? I would have thought that since `libfoo.so` depends on `libbar.so`, `libbar.so` would have been automatically added to the dependency list of `program`, **without** recompilation. However, `ldd program` shows that this is not the case.
It seems weird that one has to **recompile (relink) every binary** that depends on some library every time that library's dependencies change. What solutions do I have here to prevent this?
| The problem arises when you have not linked `libfoo.so` against `libbar`. When you are compiling an executable, by default the linker will not let you leave undefined references. However, when you're compiling a shared library, it *will* - and it will expect them to be satisfied at load time. This is so that `libfoo` can use functions exported by `program` itself - when you try to run it, the dynamic linker is expecting `func_bar()` to be supplied by `program`. The problem is illustrated like so:
(`foo.c` is self-contained)
```
export LD_RUN_PATH=`pwd`
gcc -Wall -shared foo.c -o libfoo.so
gcc -Wall -L. p.c -lfoo -o p
```
At this point, `./p` runs correctly, as you would expect. We then create `libbar.so` and modify `foo.c` to use it:
```
gcc -Wall -shared bar.c -o libbar.so
gcc -Wall -shared foo.c -o libfoo.so
```
At this point, `./p` gives the error you describe. If we check `ldd libfoo.so`, we notice that it does *not* have a dependency on `libbar.so` - this is the error. To correct the error, we must link `libfoo.so` correctly:
```
gcc -Wall -L. -lbar -shared foo.c -o libfoo.so
```
At this point, `./p` again runs correctly, and `ldd libfoo.so` shows a dependency on `libbar.so`.
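For completeness, minimal file contents consistent with this walkthrough might look like the following (my own sketch; the original answer doesn't list them):
```
/* p.c */
extern int foo(void);
int main(void) { return foo(); }

/* foo.c, first version (self-contained) */
int foo(void) { return 42; }

/* bar.c */
int func_bar(void) { return 7; }

/* foo.c, second version (now calls into libbar) */
extern int func_bar(void);
int foo(void) { return func_bar(); }
```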
|
bootstrap-vue tabs - open a tab content given an anchor in the url
I'm using `bootstrap-vue` for a SPA and I am working on a page where we need to nest some content within `b-tabs`.
By given a url with an anchor (eg: `www.mydomain.com/page123#tab-3`) I would like to show the content under `Tab 3`.
**Question:** How do I do it within bootstrap-vue?
Is there a native function I can use for that?
**reference:** (I couldn't find it in the docs: <https://bootstrap-vue.js.org/docs/components/tabs/>)
---
Here is **my** code:
```
<b-tabs>
<b-tab title="Tab 1" active>
Tab 1 content
</b-tab>
<b-tab title="Tab 2">
Tab 2 content
</b-tab>
<b-tab title="Tab 3">
Tab 3 content
</b-tab>
</b-tabs>
```
And this is the **rendered** html code:
```
<div class="tabs">
<div class="">
<ul role="tablist" tabindex="0" class="nav nav-tabs">
<li class="nav-item">
<a role="tab" tabindex="-1" href="#" class="nav-link active">Tab 1</a>
</li>
<li class="nav-item">
<a role="tab" tabindex="-1" href="#" class="nav-link">Tab 2</a>
</li>
<li class="nav-item">
<a role="tab" tabindex="-1" href="#" class="nav-link">Tab 3</a>
</li>
</ul>
</div>
<div class="tab-content">
<div class="tab-pane show fade active">
Tab 1 content
</div>
<div class="tab-pane fade" style="display: none;">
Tab 2 content
</div>
<div class="tab-pane fade" style="display: none;">
Tab 3 content
</div>
</div>
</div>
```
| `b-tabs` is supposed to be used for local (non-url) based content (the `href` prop on `b-tab` is deprecated)
You can, however, easily use `b-nav` and `div`s to generate URL navigation based tabs:
```
<div class="tabs">
<b-nav tabs>
<b-nav-item to="#" :active="$route.hash === '#' || $route.hash === ''">
One
</b-nav-item>
<b-nav-item to="#two" :active="$route.hash === '#two'">
Two
</b-nav-item>
<b-nav-item to="#three" :active="$route.hash === '#three'">
Three
</b-nav-item>
</b-nav>
<div class="tab-content">
<div :class="['tab-pane', { 'active': $route.hash === '#' || $route.hash === '' }]" class="p-2">
<p>Tab One (default) with no hash</p>
</div>
<div :class="['tab-pane', { 'active': $route.hash === '#two' }]" class="p-2">
<p>Tab One with hash #two</p>
</div>
<div :class="['tab-pane', { 'active': $route.hash === '#three' }]" class="p-2">
<p>Tab One with hash #three</p>
</div>
</div>
</div>
```
This assumes you are using Vue router.
|
Distribution of arrival times to server for an M/M/1 queue (what the server experiences)
In an M/M/1 queue, we know that inter-arrival times are exponentially distributed, and that service times are exponentially distributed as well. What is the distribution of to-server inter-arrival times (aka service start times)? Put another way, what is the distribution of the times between successive service starts?
Some intuition:
if $\lambda \ll \mu$, then there is rarely a wait, and the server just experiences arrivals at the same rate as arrivals to the queue.
if $\lambda \approx \mu$, then the server is almost always busy and experiences arrivals at the service rate.
However, when $\lambda < \mu$, there will be cycles where customers arrive to both empty and occupied lines. So, the server will at times see the true arrival rate $\lambda$, and at other times arrivals to it will occur at rate $\mu$.
[Burke's Theorem](http://en.wikipedia.org/wiki/Burke%27s_theorem) shows that the distribution of departure times from a server is the same as that of arrival times. One proof of this theorem uses weighted sums, and the probability that the server is busy. I think there may be a similar approach to solve this problem.
| It's a mixture of three distributions, and can be found pretty easily by brute force, if one allows oneself to handwave over some important details (e.g., assuming $\lambda < \mu$).
Let $n$ be the number of customers in the system; $n=0$ means no-one is being served or waiting, $n=1$ means one customer is being served but no-one is waiting, etc. Let $p\_n$ be the steady-state (assumed from here on) probability that $n$ customers are in the system.
Clearly, if $n=0$, the time $t$ to the next service start is distributed $f(t|n=0) = \text{Exp}\{\lambda\}$, and if $n \geq 2$, the time to the next service start is distributed $f(t|n \geq 2) = \text{Exp}\{\mu\}$, since the next service will start immediately upon completion of the current one. If $n=1$, the time to the next service start is the maximum of the time to the next arrival and the time to the next service completion. This latter distribution has the following easily-derived form:
$f(t|n=1) = \lambda e^{-\lambda t} + \mu e^{-\mu t} - (\lambda + \mu)e^{-(\lambda+\mu)t}$
The mixture probabilities correspond to $p\_0$, $1-p\_0-p\_1$, and $p\_1$ respectively. The probabilities $p\_0$, $p\_1$, and $1-p\_0-p\_1$ can be written as:
$$p\_0 = \left[\sum\_{k=0}^{\infty}\left({\lambda \over{\mu}}\right)^k\right]^{-1} = 1 - {\lambda\over{\mu}}$$
$$p\_1 = \left({\lambda \over{\mu}}\right)p\_0 = {\lambda\over{\mu}} - \left({\lambda\over{\mu}}\right)^2$$
$$1-p\_0-p\_1 = \left({\lambda\over{\mu}}\right)^2$$
Writing the whole thing out, with some rearranging of terms, gives:
$$f(t) = {\lambda(\mu^2-\lambda^2)\over{\mu^2}}\left(e^{-\lambda t} - e^{-(\lambda+\mu) t}\right) + \lambda e^{-\mu t}$$
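As a quick sanity check (my own verification, not part of the original answer), this density integrates to 1:
$$\int\_0^\infty f(t)\,dt = {\lambda(\mu^2-\lambda^2)\over{\mu^2}}\left({1\over{\lambda}} - {1\over{\lambda+\mu}}\right) + {\lambda\over{\mu}} = {\mu-\lambda\over{\mu}} + {\lambda\over{\mu}} = 1$$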
My source for probability formulae was [Kleinrock, Queuing Systems](http://rads.stackoverflow.com/amzn/click/0471491101).
Edit: The derivation of $f(t|n=1)$ is below, written as the derivation of the maximum of two independent exponential variates $x \sim \text{Exp}\{\lambda\}$ and $y \sim \text{Exp}\{\mu\}$. The corresponding CDFs are $F\_X(x) = 1-\exp{\{-\lambda x\}}$ and $F\_Y(y) = 1-\exp{\{-\mu y\}}$.
We'll approach this using the "cumulative distribution function technique". Note first that the statement "$\max(x,y) \leq t$" is equivalent to "$x \leq t$ and $y \leq t$". The probability that $\max(x,y) \leq t$ is just the product of the probabilities that $x \leq t$ and $y \leq t$ (as $x$ and $y$ are independent.) Writing this out gives:
$$F\_{\max(x,y)}(t) = \left(1-e^{-\lambda t}\right)\left(1-e^{-\mu t}\right) = 1 - e^{-\lambda t} - e^{-\mu t} + e^{-(\lambda+\mu) t}$$
and taking the derivative with respect to $t$ gets you to the density function.
|
Groupby, transpose and append in Pandas?
I have a dataframe which looks like this:
>
> [![enter image description here](https://i.stack.imgur.com/PuLyM.png)](https://i.stack.imgur.com/PuLyM.png)
>
>
>
Each user has 10 records. Now, I want to create a dataframe which looks like this:
```
userid name1 name2 ... name10
```
which means I need to transpose every 10 records of the column `name` and append them to a new dataframe.
So, how do I do it? Is there any way I can do it in Pandas?
| `groupby('userid')` then `reset_index` within each group to enumerate consistently across groups. Then `unstack` to get columns.
```
df.groupby('userid')['name'].apply(lambda df: df.reset_index(drop=True)).unstack()
```
### Demonstration
```
df = pd.DataFrame([
[123, 'abc'],
[123, 'abc'],
[456, 'def'],
[123, 'abc'],
[123, 'abc'],
[456, 'def'],
[456, 'def'],
[456, 'def'],
], columns=['userid', 'name'])
df.sort_values('userid').groupby('userid')['name'].apply(lambda df: df.reset_index(drop=True)).unstack()
```
[![enter image description here](https://i.stack.imgur.com/GnDX2.png)](https://i.stack.imgur.com/GnDX2.png)
If you don't want the `userid` as the index, add `reset_index` to the end.
```
df.sort_values('userid').groupby('userid')['name'].apply(lambda df: df.reset_index(drop=True)).unstack().reset_index()
```
[![enter image description here](https://i.stack.imgur.com/HrxmA.png)](https://i.stack.imgur.com/HrxmA.png)
|
Multiple user models with Ruby On Rails and devise to have separate registration routes but one common login route
First, I've searched intensely with Google and Yahoo and I've found several replies on topics like mine, but none of them really covers what I need to know.
I've got several user models in my app; for now it's Customers, Designers, Retailers, and it seems there are yet more to come. They all have different data stored in their tables and there are several areas on the site they're allowed into or not. So I figured I'd go the devise+CanCan way and try my luck with polymorphic associations, so I got the following models setup:
```
class User < AR
belongs_to :loginable, :polymorphic => true
end
class Customer < AR
has_one :user, :as => :loginable
end
class Designer < AR
has_one :user, :as => :loginable
end
class Retailer < AR
has_one :user, :as => :loginable
end
```
For the registration I've got customized views for each different User type and my routes are setup like this:
```
devise_for :customers, :class_name => 'User'
devise_for :designers, :class_name => 'User'
devise_for :retailers, :class_name => 'User'
```
For now the registrations controller is left as standard (which is "devise/registrations"), but I figured, since I got different data to store in different models I'd have to customize this behaviour as well!?
But with this setup I got helpers like `customer_signed_in?` and `designer_signed_in?`, but what I'd really need is a general helper like `user_signed_in?` for the areas on the site that are accessible to all users, no matter which user type.
I'd also like a routes helper like `new_user_session_path` instead of the several `new_*type*_session_path` and so on. In fact all I need to be different is the registration process...
So I was wondering IF THIS IS THE WAY TO GO for this problem? Or is there a better/easier/less must-customize solution for this?
| Okay, so I worked it through and came to the following solution.
I needed to costumize devise a little bit, but it's not that complicated.
The User model
```
# user.rb
class User < ActiveRecord::Base
devise :database_authenticatable, :registerable,
:recoverable, :rememberable, :trackable, :validatable
attr_accessible :email, :password, :password_confirmation, :remember_me
belongs_to :rolable, :polymorphic => true
end
```
The Customer model
```
# customer.rb
class Customer < ActiveRecord::Base
has_one :user, :as => :rolable
end
```
The Designer model
```
# designer.rb
class Designer < ActiveRecord::Base
has_one :user, :as => :rolable
end
```
So the User model has a simple polymorphic association, defining if it's a Customer or a Designer.
The next thing I had to do was to generate the devise views with `rails g devise:views` to be part of my application. Since I only needed the registration to be customized I kept the `app/views/devise/registrations` folder only and removed the rest.
Then I customized the registrations view for new registrations, which can be found in `app/views/devise/registrations/new.html.erb` after you generated them.
```
<h2>Sign up</h2>
<%
# customized code begin
params[:user][:user_type] ||= 'customer'
if ["customer", "designer"].include? params[:user][:user_type].downcase
child_class_name = params[:user][:user_type].downcase.camelize
user_type = params[:user][:user_type].downcase
else
child_class_name = "Customer"
user_type = "customer"
end
resource.rolable = child_class_name.constantize.new if resource.rolable.nil?
# customized code end
%>
<%= form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %>
<%= my_devise_error_messages! # customized code %>
<div><%= f.label :email %><br />
<%= f.email_field :email %></div>
<div><%= f.label :password %><br />
<%= f.password_field :password %></div>
<div><%= f.label :password_confirmation %><br />
<%= f.password_field :password_confirmation %></div>
<% # customized code begin %>
<%= fields_for resource.rolable do |rf| %>
<% render :partial => "#{child_class_name.underscore}_fields", :locals => { :f => rf } %>
<% end %>
<%= hidden_field :user, :user_type, :value => user_type %>
<% # customized code end %>
<div><%= f.submit "Sign up" %></div>
<% end %>
<%= render :partial => "devise/shared/links" %>
```
For each User type I created a separate partial with the custom fields for that specific User type, e.g. Designer --> `_designer_fields.html`
```
<div><%= f.label :label_name %><br />
<%= f.text_field :label_name %></div>
```
Then I setup the routes for devise to use the custom controller on registrations
```
devise_for :users, :controllers => { :registrations => 'UserRegistrations' }
```
Then I generated a controller to handle the customized registration process, copied the original source code from the `create` method in the `Devise::RegistrationsController` and modified it to work my way (don't forget to move your view files to the appropriate folder, in my case `app/views/user_registrations`).
```
class UserRegistrationsController < Devise::RegistrationsController
def create
build_resource
# customized code begin
# create a new child instance depending on the given user type
child_class = params[:user][:user_type].camelize.constantize
resource.rolable = child_class.new(params[child_class.to_s.underscore.to_sym])
# first check if child instance is valid
# because if so, and the parent instance is valid as well,
# it's all being saved at once
valid = resource.valid?
valid = resource.rolable.valid? && valid
# customized code end
if valid && resource.save # customized code
if resource.active_for_authentication?
set_flash_message :notice, :signed_up if is_navigational_format?
sign_in(resource_name, resource)
respond_with resource, :location => redirect_location(resource_name, resource)
else
set_flash_message :notice, :inactive_signed_up, :reason => inactive_reason(resource) if is_navigational_format?
expire_session_data_after_sign_in!
respond_with resource, :location => after_inactive_sign_up_path_for(resource)
end
else
clean_up_passwords(resource)
respond_with_navigational(resource) { render_with_scope :new }
end
end
end
```
What this all basically does is that the controller determines which user type must be created according to the `user_type` parameter that's delivered to the controller's `create` method by the hidden field in the view, which in turn gets its value from a simple GET param in the URL.
For example:
If you go to `/users/sign_up?user[user_type]=designer` you can create a Designer.
If you go to `/users/sign_up?user[user_type]=customer` you can create a Customer.
The `my_devise_error_messages!` method is a helper method which also handles validation errors in the associated model, based on the original `devise_error_messages!` method:
```
module ApplicationHelper
def my_devise_error_messages!
return "" if resource.errors.empty? && resource.rolable.errors.empty?
messages = rolable_messages = ""
if !resource.errors.empty?
messages = resource.errors.full_messages.map { |msg| content_tag(:li, msg) }.join
end
if !resource.rolable.errors.empty?
rolable_messages = resource.rolable.errors.full_messages.map { |msg| content_tag(:li, msg) }.join
end
messages = messages + rolable_messages
sentence = I18n.t("errors.messages.not_saved",
:count => resource.errors.count + resource.rolable.errors.count,
:resource => resource.class.model_name.human.downcase)
html = <<-HTML
<div id="error_explanation">
<h2>#{sentence}</h2>
<ul>#{messages}</ul>
</div>
HTML
html.html_safe
end
end
```
UPDATE:
To be able to support routes like `/designer/sign_up` and `/customer/sign_up` you can do the following in your routes file:
```
# routes.rb
match 'designer/sign_up' => 'user_registrations#new', :user => { :user_type => 'designer' }
match 'customer/sign_up' => 'user_registrations#new', :user => { :user_type => 'customer' }
```
Any parameter that's not used in the routes syntax internally gets passed to the params hash. So `:user` gets passed to the params hash.
So... that's it. With a little tweaking here and there I got it working in a quite general way that's easily extensible with many other User models sharing a common User table.
Hope someone finds it useful.
|
Waiting for multiple deferred objects to complete
When waiting for multiple deferred objects to complete, why does:
```
$.when(tasks).then(function() {
document.write("Completed all requests." + "<br/>");
});
```
execute immediately, yet
```
$.when.apply(null, tasks).then(function () {
document.write("Completed all requests." + "<br/>");
});
```
waits until the tasks have completed.
| The [`when`](http://api.jquery.com/jQuery.when/) function does not take an array of deferreds. Rather, you pass each deferred as a separate argument. That's exactly what [`apply`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Function/apply) is doing for you.
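A plain-JavaScript sketch of that difference (independent of jQuery):
```
function demo() {
  console.log(arguments.length); // how many separate arguments arrived
}

var tasks = [1, 2, 3];
demo(tasks);             // logs 1 -> the whole array is one argument
demo.apply(null, tasks); // logs 3 -> the array is spread into three arguments
```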
The `null` being passed to `apply` is just because that's what `apply` expects: the first argument is what the context of the function should be set to when its called, and the second argument is always an array, which will be expanded so that the function will be called as if all the items in the array have been passed in as separate arguments.
Since for the purpose of `when` it makes no difference what context it's being called with, `null` works just as well as anything else. I prefer to pass it jQuery itself:
```
$.when.apply($, tasks).then(function () {
// Whatever
});
```
since I think it looks cleaner, but that's just me. It makes no difference whatsoever.
---
If your browser supports native promises (or you're using [a polyfill](https://github.com/getify/native-promise-only)) you can use its `all` method instead, which takes an array of promises directly:
```
Promise.all(tasks).then(function (values) {
// "values" is an array, with the results of each of the "tasks"
});
```
|
Is there a non-messy way to chain the results of functions that return Option values?
I have some code that looks like this:
```
f(a).and_then(|b| {
g(b).and_then(|c| {
h(c).map(|d| {
do_something_with(a, b, c, d)
})
})
})
```
Where `f`, `g`, and `h` return `Option` values. I need to use all the intermediate values (`a`, `b`, `c`, and `d`) in the `do_something_with` calculation. The indentation is very deep. Is there a better way to do this? Ideally it would look something like this (which of course doesn't work):
```
try {
let b = f(a);
let c = g(b);
let d = h(c);
do_something_with(a, b, c, d)
} rescue NonexistentValueException {
None
}
```
| # [Rust 1.22](https://github.com/rust-lang/rust/blob/master/RELEASES.md#version-1220-2017-11-22)
The [question mark operator](https://stackoverflow.com/q/42917566/155423) now supports `Option`, so you can write your function as
```
fn do_something(a: i32) -> Option<i32> {
let b = f(a)?;
let c = g(b)?;
let d = h(c)?;
do_something_with(a, b, c, d) // wrap in Some(...) if this doesn't return an Option
}
```
# Rust 1.0
The Rust standard library defines a `try!` macro (and, equivalently, the `?` operator, as of [Rust 1.13](https://github.com/rust-lang/rust/blob/master/RELEASES.md#version-1130-2016-11-10)) that solves this problem for `Result`. The macro looks like this:
```
macro_rules! try {
($expr:expr) => (match $expr {
$crate::result::Result::Ok(val) => val,
$crate::result::Result::Err(err) => {
return $crate::result::Result::Err($crate::convert::From::from(err))
}
})
}
```
If the argument is `Err`, it returns from the function with that `Err` value. Otherwise, it evaluates to the value wrapped in `Ok`. The macro can only be used in a function that returns `Result`, because it returns the error it meets.
We can make a similar macro for `Option`:
```
macro_rules! try_opt {
($expr:expr) => (match $expr {
::std::option::Option::Some(val) => val,
::std::option::Option::None => return None
})
}
```
You can then use this macro like this:
```
fn do_something(a: i32) -> Option<i32> {
let b = try_opt!(f(a));
let c = try_opt!(g(b));
let d = try_opt!(h(c));
do_something_with(a, b, c, d) // wrap in Some(...) if this doesn't return an Option
}
```
|
Why is render() being called twice in ReactNative?
I am currently attempting to make an app with one screen where the background of the screen is taken up by a map centered on the user's current coordinates. In the below code, I have saved the longitude and longitude as null within the state of the App class component. Then, I use the inherited method 'componentDidMount()' to update the state with the user's current location, and finally in render() I use this.state.latitude and this.state.longitude to inform the values for latitude and longitude for MapView.
The code does not compile. Using console.log, I have isolated the problem to be that render() is being called twice, and my console.log statement first outputs the null value to the console, and then outputs the user's current location.
So two questions.
1) Why does console.log output values to the console two different times, one of which is the state value passed in and one of which is the state value updated by componentDidMount()?
2) How can I run the function navigator.geolocation.getCurrentPosition() to save the user's current location and pass it to MapView so that the code compiles in the first place?
```
export default class App extends React.Component {
constructor(props) {
super(props);
this.state = { latitude: null, longitude: null };
}
componentDidMount() {
navigator.geolocation.getCurrentPosition(position => {
this.setState({
latitude: position.coords.latitude,
longitude: position.coords.longitude,
});
});
}
render() {
return (
<View style={styles.container}>
<MapView
style={styles2.map}
initialRegion={{
latitude: this.state.latitude,
longitude: this.state.longitude,
latitudeDelta: 0.0922,
longitudeDelta: 0.0421,
}}
/>
{console.log(this.state.latitude)}
{console.log(this.state.longitude)}
</View>
);
}
}
```
| React will re-render whenever your component updates via `state` or props. In your `componentDidMount()` you are calling `setState()`, so at some point after that (since it is asynchronous), your component will need to update for the new `state` so it renders again.
The first `render()` occurs when your component mounts. This is where you will see whatever the initial values of `latitude` and `longitude` are on your state.
After your component has mounted, your call to `setState()` will update `state` with the new values of `latitude` and `longitude` so your component will `render()` a second time where you will see the new values of `latitude` and `longitude`.
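For what it's worth, here's a small sketch of how you could observe this asynchrony directly: `setState()` accepts an optional second argument, a callback that React invokes once the state update has been applied. The logging here is illustrative only:
```
navigator.geolocation.getCurrentPosition(position => {
  this.setState(
    {
      latitude: position.coords.latitude,
      longitude: position.coords.longitude,
    },
    () => {
      // Runs after the state update has been applied, i.e. around the second render
      console.log('updated:', this.state.latitude, this.state.longitude);
    }
  );
});
```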
EDIT:
If you want to avoid the first display of `latitude` and `longitude` (NB: it will still render twice), you could render conditionally, i.e.
```
render() {
if(!this.state.longitude) {
    return <View><Text>Loading...</Text></View>;
}
return (
<View style={styles.container}>
<MapView
style={styles2.map}
initialRegion={{
latitude: this.state.latitude,
longitude: this.state.longitude,
latitudeDelta: 0.0922,
longitudeDelta: 0.0421,
}}
/>
</View>
);
}
```
|
Choosing const vs. non-const pointer for user data
Consider a simple, re-usable library. It has an object for the current state,
and a callback function to feed it input.
```
typedef struct Context_S Context_T;
typedef size_t (*GetBytes_T) (Context_T * ctx, uint8_t * bytes, size_t max);
struct Context_S {
GetBytes_T byteFunc;
void * extra;
// more elements
};
void Init(Context_T * ctx, GetBytes_T func);
int GetNext(Context_T * ctx); // Calls callback when needs more bytes
```
The user might need some extra data for the callback (like a file pointer). The library
provides functions for one extra pointer:
```
void SetExtra(Context_T * ctx, void * ext); // May be called after init
void * GetExtra(Context_T const * ctx); // May be called in callback
```
However, if the user's extra data is constant, it would require him to cast the constness
away before setting the data. I could change the functions to take/return const,
but this would require an extra cast in the callback if the data should not be constant.
```
void SetExtra(Context_T * ctx, void const * ext);
void const * GetExtra(Context_T const * ctx);
```
A third alternative would be to hide the cast inside the function calls:
```
void SetExtra(Context_T * ctx, void const * ext);
void * GetExtra(Context_T const * ctx);
```
Is it good idea to hide cast in this case?
I'm trying to find a balance between usability and type safety. But since we are
using `void*` pointers, a lot of safety is gone already.
Or am I overlooking something worthy of consideration?
| The C standard library has similar problems. Notoriously, the `strchr` function accepts a `const char *` parameter and returns a `char *` value that points into the given string.
This is a deficiency in the C language: Its provisions for `const` do not support all the ways in which `const` might be reasonably used.
It is not unreasonable to follow the example of the C standard library: Accept a pointer to `const` and, when giving it back to the calling software, provide a pointer to non-`const`, as in your third example.
Another alternative is to define two sets of routines, `SetExtra` and `GetExtra` that use non-`const`, and `SetExtraConst` and `GetExtraConst` that use `const`. These could be enforced at run-time with an extra bit that records whether the set context was `const` or non-`const`. However, even without enforcement, they could be helpful because they could make errors more visible in the calling code: Somebody reading the code could see that `SetExtraConst` is used to set the data and `GetExtra` (non-`const`) is used to get the data. (This might not help if the calling code is somewhat convoluted and uses `const` data in some cases and non-`const` data in others, but it is better to catch more errors than fewer.)
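For illustration, here is a minimal sketch of that dual-routine alternative with the run-time bit — the `extraIsConst` field and the exact enforcement shown are assumptions, not part of the library above:
```
#include <assert.h>
#include <stdbool.h>

struct Context_S {
    GetBytes_T byteFunc;
    void * extra;
    bool extraIsConst; /* records which setter was used */
    // more elements
};

void SetExtra(Context_T * ctx, void * ext)
{
    ctx->extra = ext;
    ctx->extraIsConst = false;
}

void SetExtraConst(Context_T * ctx, void const * ext)
{
    ctx->extra = (void *) ext; /* the cast is hidden inside the library */
    ctx->extraIsConst = true;
}

void * GetExtra(Context_T const * ctx)
{
    assert(!ctx->extraIsConst); /* const data must be fetched via GetExtraConst */
    return ctx->extra;
}

void const * GetExtraConst(Context_T const * ctx)
{
    return ctx->extra;
}
```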
|
How do I get Apache 2.2.17 started?
I am running Apache 2.2.17 on Fedora v14. I cannot access the running web server from a browser, and I also have difficulties setting it up to listen on port 443.
The server `httpd` is running:
```
[me@host ~]$ sudo ps -U root -u root u | grep httpd
root 6592 0.0 3.4 404620 17552 ? Ss 10:50 0:00 /usr/sbin/httpd -k graceful
```
I then stop the service:
```
[me@host ~]$ sudo apachectl stop
```
I then set up the `Listen` directive in `/etc/httpd/conf/httpd.conf` to listen on ports 80 and 443:
```
Listen 1.2.3.4:80
Listen 1.2.3.4:443 https
```
(The IP is not `1.2.3.4` but the actual IP of this host.)
I checked the other Apache conf directives to make sure I have a document folder, that it is accessible by all users, and that it will show a directory listing of the folder, if I do not specify `index.html` etc.
When attempting to start `httpd`, I get a configuration error:
```
[me@host conf]$ sudo apachectl graceful
(98)Address already in use: make_sock: could not bind to address [::]:443
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
no listening sockets available, shutting down
Unable to open logs
```
No other services appear to be running on TCP port 443 (or port 80):
```
[me@host conf]$ sudo netstat -tulpn | grep 443
[me@host conf]$ sudo netstat -tulpn | grep 80
```
If I comment out the 443 `Listen` directive and `graceful`-restart the `httpd` service, it starts and is visible in the process list (via `ps`, as above) but I am still unable to access the host via web browser.
Is there anything I have overlooked in setting up Apache? Thanks for your advice.
| Check iptables rules:
```
iptables -nv -L
```
To add new rules, edit /etc/sysconfig/iptables:
```
...
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
...
```
To restart iptables:
```
#service iptables restart
```
To stop iptables:
```
#service iptables stop
```
For debugging, use tcpdump:
```
# tcpdump -pn host YOUR_IP and not port 22
or
# tcpdump -pn port 443 and icmp
```
|
Onload fit input size to length of text
I'm trying to have jQuery test the length of the text in the input box onLoad and change the size of the input box to fit. Here is my code attempt so far:
```
$("#emailSubject").attr('size', this.val().length);
```
I'm getting the following error:
>
> this.val is not a function
>
>
>
What am I doing wrong?
**Update:** Now I'm no longer getting the first error, but the length is showing up as 0 even though it's not supposed to be. (I'm using an alert to check what the length is). Why would that be happening?
**Update:** Here is the code context:
```
$(
function()
{
//works correctly
alert($("#emailSubject").val().length);
//throws error
$("#emailSubject").attr('size', ($(this).val().length));
}
)
```
New error- the length is showing up correctly in the alert, but I'm getting the error:
>
> Index or size is negative or greater than the allowed amount.
>
>
>
| As [Alien Webguy said](https://stackoverflow.com/questions/6819548/onload-fit-input-size-to-length-of-text/6819648#6819648), you're trying to call a jQuery function (`val`) on ~~what's probably a raw DOM element or the `window` object (you haven't shown enough context for us to know what `this` is, but the error tells us it's not a jQuery instance)~~ the `document` object (because that's what jQuery sets `this` to when calling your `ready` handler). *(Your update clarified it.)* So the first thing is to get the correct reference for the field and wrap it in a jQuery instance.
But separately, if you set `size` to the number of characters, the field will almost certainly be much larger than you want. That's because `size` works in uniform character widths.
Instead, the usual thing is to measure the actual string using an off-page element with the same font family, style, size, text decoration, etc., etc. as the input element. Something like this ([live copy](http://jsbin.com/ifuzop)):
CSS:
```
#theField, #measure {
font-family: serif;
font-size: 12pt;
font-style: normal;
}
#measure {
position: absolute;
left: -10000px;
top: 0px;
}
```
HTML:
```
<input type='text' id='theField' value=''>
<span id="measure"></span>
```
JavaScript:
```
jQuery(function($) {
var field;
// Hook up some events for resizing
field = $("#theField");
field.bind("change keypress click keydown", function() {
resizeIt(field);
});
// Resize on load
resizeIt(field);
// Function to do the work
function resizeIt(field) {
var measure = $("#measure");
measure.text(field.val());
field.css("width", (measure.width() + 16) + "px");
}
});
```
Note that there I'm resizing on various events as well; I doubt the list there is comprehensive, but it gives you an idea.
|
Why is `to_ary` called from a double-splatted parameter in a code block?
It seems that a double-splatted block parameter calls `to_ary` on an object that is passed, which does not happen with lambda parameters and method parameters. This was confirmed as follows.
First, I prepared an object `obj` on which a method `to_ary` is defined, which returns something other than an array (i.e., a string).
```
obj = Object.new
def obj.to_ary; "baz" end
```
Then, I passed this `obj` to various constructions that have a double splatted parameter:
```
instance_exec(obj){|**foo|}
# >> TypeError: can't convert Object to Array (Object#to_ary gives String)
```
```
->(**foo){}.call(obj)
# >> ArgumentError: wrong number of arguments (given 1, expected 0)
```
```
def bar(**foo); end; bar(obj)
# >> ArgumentError: wrong number of arguments (given 1, expected 0)
```
As can be observed above, only code block tries to convert `obj` to an array by calling a (potential) `to_ary` method.
Why does a double-splatted parameter for a code block behave differently from those for a lambda expression or a method definition?
| I don't have full answers to your questions, but I'll share what I've found out.
## Short version
Procs can be called with a number of arguments different from what their signature defines. If the argument list doesn't match the definition, `#to_ary` is called to make an implicit conversion. Lambdas and methods require a number of args matching their signature. No conversions are performed, and that's why `#to_ary` is not called.
## Long version
What you describe is a difference between handling params by lambdas (and methods) and procs (and blocks). Take a look at this example:
```
obj = Object.new
def obj.to_ary; "baz" end
lambda{|**foo| print foo}.call(obj)
# >> ArgumentError: wrong number of arguments (given 1, expected 0)
proc{|**foo| print foo}.call(obj)
# >> TypeError: can't convert Object to Array (Object#to_ary gives String)
```
`Proc` doesn't require the same number of args as its signature defines, and `#to_ary` is called (as you probably know):
>
> For procs created using `lambda` or `->()`, an error is generated if wrong number of parameters are passed to the proc. For procs created using `Proc.new` or `Kernel.proc`, extra parameters are silently discarded and missing parameters are set to `nil`. ([Docs](https://ruby-doc.org/core-2.6/Proc.html))
>
>
>
What is more, `Proc` adjusts passed arguments to fit the signature:
```
proc{|head, *tail| print head; print tail}.call([1,2,3])
# >> 1[2, 3]=> nil
```
Sources: [makandra](https://makandracards.com/makandra/20641-careful-when-calling-a-ruby-block-with-an-array), [SO question](https://stackoverflow.com/questions/23945533/why-do-ruby-procs-blocks-with-splat-arguments-behave-differently-than-methods-an).
`#to_ary` is used for this adjustment (and it's reasonable, as `#to_ary` is for implicit conversions):
```
obj2 = Class.new{def to_ary; [1,2,3]; end}.new
proc{|head, *tail| print head; print tail}.call(obj2)
# >> 1[2, 3]=> nil
```
It's described in detail in [a ruby tracker](https://bugs.ruby-lang.org/issues/6039).
You can see that `[1,2,3]` was split into `head=1` and `tail=[2,3]`. It's the same behaviour as in multiple assignment:
```
head, *tail = [1, 2, 3]
# => [1, 2, 3]
tail
# => [2, 3]
```
As you have noticed, `#to_ary` is also called when a proc has double-splatted keyword args:
```
proc{|head, **tail| print head; print tail}.call(obj2)
# >> 1{}=> nil
proc{|**tail| print tail}.call(obj2)
# >> {}=> nil
```
In the first case, the array `[1, 2, 3]` returned by `obj2.to_ary` was split into `head=1` and an empty tail, as `**tail` wasn't able to match the array `[2, 3]`.
Lambdas and methods don't have this behaviour. They require a strict number of params. There is no implicit conversion, so `#to_ary` is not called.
I think that this difference is implemented in [these two lines](https://github.com/ruby/ruby/blob/547f574b639cd8586568ebb8570c51faf102c313/vm.c#L1096-L1097) of the Ruby source:
```
opt_pc = vm_yield_setup_args(ec, iseq, argc, sp, passed_block_handler,
(is_lambda ? arg_setup_method : arg_setup_block));
```
and in [this function](https://github.com/ruby/ruby/blob/d7fdf45a4ae1bcb6fac30a24b025d4f20149ba0a/vm_insnhelper.c#L2895). I guess `#to_ary` is called somewhere in [vm_callee_setup_block_arg_arg0_splat](https://github.com/ruby/ruby/blob/d7fdf45a4ae1bcb6fac30a24b025d4f20149ba0a/vm_insnhelper.c#L2866-L2879), most probably in `RARRAY_AREF`. I would love to read a commentary of this code to understand what happens inside.
|
In JavaScript, how does the below code work?
In JavaScript, how does the below code work?
```
var a = {
prop1: "a",
prop2: "b",
fun: function() {
return this.prop1 + " " + this.prop2;
}
}
var a2 = a;
a.fn = "v";
a = {};
if (a === a2) {
console.log(true);
} else {
console.log(false);
}
```
The above code prints false.
But if I comment out the line `a={}`, the value printed to the console is true.
```
var a = {
prop1: "a",
prop2: "b",
fun: function() {
return this.prop1 + " " + this.prop2;
}
}
var a2 = a;
a.fn = "v";
//a={};
if (a === a2) {
console.log(true);
} else {
console.log(false);
}
```
How does the above code work? Both variables (`a` and `a2`) point to the same object, but when I initialized `a` with `{}` it gave false.
|
>
> ...as both variables (`a` and `a2`) point to the same object ...
>
>
>
They don't anymore, as of this line:
```
a={};
```
At that point, `a2` refers to the old object, and `a` refers to a new, different object.
`a2 = a` doesn't create any kind of ongoing link between the *variable* `a2` and the *variable* `a`.
Let's throw some Unicode-art at it:
After this code runs:
```
var a = {
prop1: "a",
prop2: "b",
fun: function() {
return this.prop1 + " " + this.prop2;
}
}
var a2 = a;
a.fn = "v";
```
At this point, you have something like this in memory (with various details omitted):
```
a:Ref44512───+
             |
             |
             |    +─────────────+
             +───>|  (object)   |
             |    +─────────────+
             |    | prop1: "a"  |
             |    | prop2: "b"  |    +────────────+
a2:Ref44512──+    | fun:Ref7846 |───>| (function) |
                  | fn: "v"     |    +────────────+
                  +─────────────+
```
Those "Ref" values are object references. (We never actually see their values, those values are just made up nonsense.) Notice that the value in `a` and the value in `a2` is the same, however.
If you do `a === a2` at this point, it will be `true`: Both variables refer to the same object.
But when you do this:
```
a={};
```
```
                  +─────────────+
a:Ref84521───────>|  (object)   |
                  +─────────────+

                  +─────────────+
a2:Ref44512──────>|  (object)   |
                  +─────────────+
                  | prop1: "a"  |
                  | prop2: "b"  |    +────────────+
                  | fun:Ref7846 |───>| (function) |
                  | fn: "v"     |    +────────────+
                  +─────────────+
```
At this point, `a === a2` is `false`: The variables refer to different objects.
|
jQuery Mobile popup is not opening on .popup('open')
I am trying to use jQuery Mobile 1.3.1's popup to warn the user when login credentials are invalid. I started with a basic template from jQuery Mobile's documentation, but I couldn't make it work with `$('#popupBasic').popup('open');`. If I use it this way:
```
<div data-role="page">
<div data-role="header" data-tap-toggle="false">
</div><!-- /header -->
<div data-role="content">
<a href="#popupBasic" data-rel="popup">Tooltip</a>
<div data-role="popup" id="popupBasic">I will change this text dynamically if this popup works</div>
</div><!-- /content -->
</div><!-- /page -->
```
It works well when I click on the Tooltip link. But in my case there isn't any click, so I am trying this:
```
if(retVal){
$.mobile.changePage('index');
}
else{
$('#popupBasic').popup();
$('#popupBasic').popup("open");
}
```
This is after my ajax login function makes a callback, so retVal is true if there aren't any errors and false if there are (and at that point I am trying to show the popup). By the way, all my js code is in
```
$(document).on('pageinit', function(){});
```
so I wait till jQuery Mobile is ready for the page.
What happens when I do this: on desktop browsers the link changes to
```
http://localhost/login#&ui-state=dialog
```
but it doesn't show the popup. After some refreshes and cache clears it starts to show. On iOS the same thing happens, but on Android sometimes the link changes and sometimes it doesn't.
I would be really happy if someone can help me to solve this problem.
Thanks in advance.
| That's because when `pageinit` is fired, the popup isn't ready for manipulation just yet. You need to use `pageshow` to get the popup to open. Here's what you do:
- Make the ajax call in `pageinit`. Store the data in the `data` attribute of the page.
- Then, in the `pageshow` event, take it from `data` and manipulate it the way you want, then open the popup.
Here's the code:
```
$(document).on({
"pageinit": function () {
alert("pageinit");
//$("#popupBasic").popup('open'); will throw error here because the page is not ready yet
//simulate ajax call here
//data recieved from ajax - might be an array, anything
var a = Math.random();
//use this to transfer data betwene events
$(this).data("fromAjax", a);
},
//open popup here
"pageshow": function () {
alert("pageshow");
//using stored data in popup
$("#popupBasic p").html("Random : " + $(this).data("fromAjax"));
//open popup
$("#popupBasic").popup('open');
}
}, "#page1");
```
And here's a demo : <http://jsfiddle.net/hungerpain/MvwBU/>
|
Increase caption size from kableExtra table in RMarkdown
I'm using the `posterdown` package in R to generate an HTML poster and render it as a PDF.
I have a table in my Rmd file; however, the caption is really small. Is there a way to increase the size of the caption?
Secondly, I would also like to move the title and affiliation in the header slightly down (so that it's more in the center of the header). Is there a way to do that?
Here is a snippet of my Rmd file
`````
---
title: Here is my title
author:
- name: Name and Surname
affiliation:
address: Department, University
column_numbers: 3
logoright_name: https://raw.githubusercontent.com/brentthorne/posterdown/master/images/betterhexlogo.png
logoleft_name: https://raw.githubusercontent.com/brentthorne/posterdown/master/images/betterhexlogo.png
output:
posterdown::posterdown_html:
self_contained: false
knit: pagedown::chrome_print
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE)
```
# Intro
```{r table1, echo = FALSE, warning = FALSE, message = FALSE, results = 'asis', fig.pos='!h'}
library(tidyverse)
library(kableExtra)
col_1 <- c(1,2,3)
col_2 <- c(4,5,6)
col_3 <- c(7,8,9)
data.frame(col_1, col_2, col_3) %>%
kable(format='html',booktabs=TRUE, caption = "This is the caption", escape=F) %>%
kable_styling(font_size = 45)
```
`````
| For the title, you have several options. The easiest is certainly to insert a `<br>` before the title in the YAML at the top of your document. This will insert a line return before your title.
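For instance, a sketch of the change — only the `title` field differs from the YAML header in the question:
```
---
title: <br>Here is my title
---
```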
Alternatively, you could insert a CSS block to alter the style of the `h1` tag:
```
```{css, echo=FALSE}
h1 {
padding-top: 40px
}
```
```
In principle, you should be able to include this kind of CSS block to change the style of any HTML element. However, `kableExtra` seems to hard code the font size and to ignore CSS, so this solution only works for *some* style elements. One hacky solution is to manually substitute the font size in the raw HTML using `gsub` or some other similar mechanism:
```
```{css, echo=FALSE}
.table caption {
color: red;
font-weight: bold;
}
```
```{r table1, echo = FALSE, warning = FALSE, message = FALSE, results = 'asis', fig.pos='!h'}
library(kableExtra)
col_1 <- c(1,2,3)
col_2 <- c(4,5,6)
col_3 <- c(7,8,9)
data.frame(col_1, col_2, col_3) %>%
kbl(format = 'html',
escape = FALSE,
caption = "This is the caption") %>%
kable_styling(font_size = 45) %>%
gsub("font-size: initial !important;",
"font-size: 45pt !important;",
.)
```
```
[![enter image description here](https://i.stack.imgur.com/dPtCZ.png)](https://i.stack.imgur.com/dPtCZ.png)
|
Filter on multiple columns in a table using one pipe in Angular
>
> Hi everyone.
> I want to make a custom filter for my table which takes more than one argument,
> so I can search multiple columns. In my case right now only one argument can be passed.
> Thanks in advance.
>
>
>
component.html
```
<tr *ngFor = "let builder of builderDetailsArray[0] | filter :'groupName': searchString; let i = index" >
<td style="text-align: center;"><mat-checkbox></mat-checkbox></td>
<td>{{builder.builderId}}</td>
<td>{{builder.viewDateAdded}}</td>
<td>{{builder.viewLastEdit}}</td>
<td>{{builder.groupName}}</td>
<td>{{builder.companyPersonName}}</td>
<td style="text-align: center;"><button mat-button color="primary">UPDATE</button></td>
</tr>
```
pipe.ts
```
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: "filter",
  pure: false
})
export class FilterPipe implements PipeTransform {
  transform(items: any[], field: string, value: string): any[] {
    if (!items) {
      return [];
    }
    if (!field || !value) {
      return items;
    }
    return items.filter(singleItem =>
      singleItem[field].toLowerCase().includes(value.toLowerCase()));
  }
}
```
| Created a multiple-argument pipe in Angular 4
>
> The code lets you search through multiple columns in your table.
>
>
> ### Passed 2 arguments in the transform function
>
>
> 1. value: Which involves all the data inside the table, all columns
> 2. searchString: What you want to search inside the columns (inside the table).
>
>
>
Hence, you can define which columns are to be searched inside the transform function.
In this case, the columns to be searched are builderId, groupName and companyPersonName
Pipe file
```
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: "arrayFilter"
})
export class BuilderFilterPipe implements PipeTransform {
transform(value:any[],searchString:string ){
if(!searchString){
console.log('no search')
return value
}
return value.filter(it=>{
const builderId = it.builderId.toString().includes(searchString)
const groupName = it.groupName.toLowerCase().includes(searchString.toLowerCase())
const companyPersonName = it.companyPersonName.toLowerCase().includes(searchString.toLowerCase())
      console.log(builderId || groupName || companyPersonName);
      return (builderId || groupName || companyPersonName);
})
}
}
```
>
> ### What does the transform function do?
>
>
> 1. builderId, groupName and companyPersonName are the three fields I searched
> 2. builderId is converted to a string because our searchString is in string format.
> 3. toLowerCase() is used to make the search case-insensitive, whether the user types in lowercase or uppercase
>
>
>
Html:
```
<tr *ngFor = "let builder of newBuilderDetailsArray | arrayFilter:search" >
<td>{{builder.builderId}}</td>
<td>{{builder.groupName}}</td>
<td>{{builder.companyPersonName}}</td>
</tr>
```
[![enter image description here](https://i.stack.imgur.com/D533l.png)](https://i.stack.imgur.com/D533l.png)
[![enter image description here](https://i.stack.imgur.com/egqB9.png)](https://i.stack.imgur.com/egqB9.png)
>
> Make sure your filter.ts file is added to your module.ts file
>
>
>
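For reference, a minimal sketch of that registration — the file paths and module layout here are assumptions, not from the original post:
```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { BuilderFilterPipe } from './builder-filter.pipe';

@NgModule({
  // Declaring the pipe makes 'arrayFilter' usable in component templates
  declarations: [AppComponent, BuilderFilterPipe],
  imports: [BrowserModule],
  bootstrap: [AppComponent]
})
export class AppModule { }
```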
|
How to use the state of one observable to skip values of another observable?
This is best explained with a short example.
let's say this is my source observable that I want to filter
```
Observable.interval(1, TimeUnit.SECONDS)
```
I use a checkbox to handle the filter state. When the box is not checked, I want to skip all values.
I use RxAndroid to get an observable for the checkbox like this:
```
RxCompoundButton.checkedChanges(checkBox)
```
here is my code:
```
Observable.combineLatest(
RxCompoundButton.checkedChanges(checkBox)
, Observable.interval(1, TimeUnit.SECONDS)
, (isChecked, intervalCounter) -> {
if (!isChecked) {
return null;
} else {
return intervalCounter;
}
}
).filter(Objects::nonNull)
```
and I get the desired output
```
interval 1--2--3--4--5--6--7--8--9
checkbox 0-----1--------0-----1---
combineLatest
result ------3--4--5--------8--9
```
I am just getting started with RxJava and my "solution" does not feel right, because:
You can see that the combine function returns a magic value `null` and then the filter will skip these `nulls`.
That means that the filter function is called even though I already know that I want to skip this data.
- maybe I should use another operator
- is there a way to use the checkbox-observable in a filter function
- maybe there is some way in the combine function to signal that I want to skip this data
| Your picture of the desired output is not correct. In your implementation `combineLatest` is combining a `null` into the stream:
```
interval 1--2--3--4--5--6--7--8--9
checkbox 0-----1--------0-----1---
combineLatest N--N--3--4--5--N--N--8--9
filter(NonNull)------3--4--5--------8--9
```
IMHO, using `null` as a signal is not good in an Rx stream; developers can easily run into a `NullPointerException`.
To avoid the use of `null`, there are two approaches. The first one is to transform the result into a Pair, and apply filter & map later.
A very simple `Pair` class:
```
public class Pair<T, U> {
T first;
U second;
public Pair(T first, U second) {
this.first = first;
this.second = second;
}
}
```
Then the whole implementation would look like this:
```
Observable.combineLatest(
RxCompoundButton.checkedChanges(checkBox)
, Observable.interval(1, TimeUnit.SECONDS)
, (isChecked, intervalCounter) -> new Pair<>(isChecked,intervalCounter)
).filter(pair -> pair.first)
.map(pair -> pair.second) // optional, do this if you only need the second part
```
The data flow:
```
interval 1 2 3 4
| | | |
checkbox F | T | F | T F
| | | | | | |
combineLatest F1 T1 T2 F2 F3 T3 F4
| | |
filter(first=T) T1 T2 T3
| | |
map(second) 1 2 3
```
Or if you can use Java 8 in your project, use `Optional` to avoid `null`, which is very similar to your solution but makes other developers aware that the stream's signal is `Optional`.
```
Observable<Optional<Long>> o1 = Observable.combineLatest(
RxCompoundButton.checkedChanges(checkBox)
, Observable.interval(1, TimeUnit.SECONDS)
, (isChecked, intervalCounter) -> {
if (!isChecked) {
return Optional.empty();
} else {
return Optional.of(intervalCounter);
}
}
);
Observable<Long> o2 = o1.filter(Optional::isPresent).map(Optional::get);
```
Some easier approaches, which are not the same as `combineLatest` but match your desired result picture:
```
// Approach 1
Observable.interval(1, TimeUnit.SECONDS).filter(i -> checkBox.isChecked())
// Approach 2
Observable.interval(1, TimeUnit.SECONDS)
.withLatestFrom(
RxCompoundButton.checkedChanges(checkBox),
        (intervalCounter, isChecked) -> {
if (!isChecked) {
return Optional.empty();
} else {
return Optional.of(intervalCounter);
}
}).filter(Optional::isPresent).map(Optional::get)
```
|
Java: setCellValueFactory; Lambda vs. PropertyValueFactory; advantages/disadvantages
Today I encountered another thing I don't really understand while trying to learn more about JavaFX and Java in general.
Reference is the following tutorial (I'm trying to apply the principle to an organizer):
[JavaFX 8 Tutorial](http://code.makery.ch/library/javafx-8-tutorial/part3/)
I will give a short outline of the particular part on which I've got a question:
My main window contains a TableView which shows some appointment data.
So I have some lines in this style (same as in the tutorial):
```
aColumn.setCellValueFactory(cellData -> cellData.getValue().getAColumnsProperty());
```
The data can be manipulated via an additional EditDialog.
That works just fine. If I edit things the changes are displayed immediately, but I did some additional research to better understand the lambda (not too successful). Now... in the online Java documentation [Java Doc PropertyValueFactory](https://docs.oracle.com/javafx/2/api/javafx/scene/control/cell/PropertyValueFactory.html) it says:
"A convenience implementation of the Callback-Interface,[...]"
So I refactored my code into this style:
```
aColumn.setCellValueFactory(new PropertyValueFactory<Appointment,LocalDate>("date"));
```
Which I find much more readable than the lambda.
But I noticed that when I make changes I need to do some sorting on the TableView before the changes are displayed.
Is it possible to achieve an immediate display of changes in the second approach?
If yes: are there major disadvantages which would discourage such a modification? I.e. would the lambda be the best practice in this situation?
I appreciate any help.
| `PropertyValueFactory` expects correctly named property getters. `getAColumnsProperty` is probably not one.
In case of `new PropertyValueFactory<Appointment, LocalDate>("date")` the `Appointment` class needs to contain a `dateProperty()` method; the returned values need to extend `ReadOnlyProperty` for this to work, and any edits will only lead to an automatic update in the model if the returned object also implements `WritableValue`.
Example `Appointment` class that should work with `PropertyValueFactory<>("date")`:
```
public class Appointment {
private final ObjectProperty<LocalDate> date = new SimpleObjectProperty<>();
public final LocalDate getDate() {
return this.date.get();
}
public final void setDate(LocalDate value) {
this.date.set(value);
}
public final ObjectProperty<LocalDate> dateProperty() {
return this.date;
}
}
```
If no such method exists, `PropertyValueFactory` will use a getter to retrieve the value, i.e. `getDate()`, but in this case updates in the model will not be visible in the UI until the `Cell` is updated, since the `PropertyValueFactory` "does not know" where to add a listener.
### Disadvantages of `PropertyValueFactory`
- Can only find `public` methods in a `public` class
- `PropertyValueFactory` uses reflection
- Not typesafe. In `new PropertyValueFactory<Appointment, LocalDate>("date")` the compiler does not check if there is an appropriate method, if that method even returns a suitable class, or if e.g. the property getter returns a `String` instead of a `ReadOnlyProperty<LocalDate>`, which can lead to `ClassCastException`s.
- No compile-time checking. In the lambda expression the compiler can do some checking whether the method exists and returns an appropriate type; with `PropertyValueFactory` this is not done.
- Does not work with [records](https://blogs.oracle.com/javamagazine/post/records-come-to-java). [(*JEP 395: Records*)](https://openjdk.org/jeps/395)
If you are sure to implement the appropriate methods in the item class correctly, there is nothing wrong with using `PropertyValueFactory`, but as mentioned above it has its disadvantages. Moreover, implementing the `Callback` is much more flexible. You could e.g. do some additional modifications:
```
TableColumn<Appointment, String> column = ...
column.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Appointment, String>, ObservableValue<String>>() {
@Override
public ObservableValue<String> call(TableColumn.CellDataFeatures<Appointment, String> cd) {
Appointment a = cd.getValue();
return Bindings.createStringBinding(() -> "the year: " + a.getDate().getYear(), a.dateProperty());
}
});
```
|
How to have per-thread but reusable objects (PubNub) in a Spring app?
I'm connecting to PubNub in a Spring Boot application. [From the documentation, it's ok to re-use PubNub objects](https://support.pubnub.com/support/solutions/articles/14000043755-can-i-publish-in-parallel-) but it's better to have one per thread. What's the appropriate method to store and retrieve one object per thread in Spring Boot?
| This is how you'd store and retrieve an object per thread in Spring using `ThreadLocal`. This example is based on Spring's own [ThreadLocalSecurityContextHolderStrategy](https://github.com/spring-projects/spring-security/blob/master/core/src/main/java/org/springframework/security/core/context/ThreadLocalSecurityContextHolderStrategy.java), which is used to store a `SecurityContext` per thread.
Also, take a look at [InheritableThreadLocal](https://docs.oracle.com/javase/7/docs/api/java/lang/InheritableThreadLocal.html), especially if your code spins up new threads (e.g. via Spring's `@Async` annotation); it has mechanisms to propagate existing or create new thread-local values when creating child threads.
```
import org.springframework.util.Assert;
final class ThreadLocalPubNubHolder {
private static final ThreadLocal<PubNub> contextHolder = new ThreadLocal<PubNub>();
public void clearContext() {
contextHolder.remove();
}
public PubNub getContext() {
PubNub ctx = contextHolder.get();
if (ctx == null) {
ctx = createEmptyContext();
contextHolder.set(ctx);
}
return ctx;
}
public void setContext(PubNub context) {
Assert.notNull(context, "Only non-null PubNub instances are permitted");
contextHolder.set(context);
}
public PubNub createEmptyContext() {
// TODO - insert code for creating a new PubNub object here
return new PubNubImpl();
}
}
```
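A hypothetical usage sketch — the service class and method names are illustrative only:
```
public class PubNubService {
    private final ThreadLocalPubNubHolder holder = new ThreadLocalPubNubHolder();

    public void doWork() {
        // Each calling thread lazily gets (or creates) its own PubNub instance
        PubNub pubnub = holder.getContext();
        // ... use pubnub for publish/subscribe calls here ...
    }
}
```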
|
Why parameter deduction doesn't work in this template template parameter
I have the following template function, which has a template template parameter as its argument.
```
template<typename T,
template <typename... ELEM> class CONTAINER = std::vector>
void merge(typename CONTAINER<T>::iterator it )
{
std::cout << *it << std::endl;
}
```
And the following code uses this code.
```
std::vector<int> vector1{1,2,3};
merge<int>(begin(vector1));
```
It works as expected, but when I use
```
merge(begin(vector1));
```
It cannot deduce the type of `T`.
I thought that it could deduce the type from `std::vector<int>::iterator it;` as `int`.
Why can't the compiler deduce the type?
|
>
> I thought that it could deduce the type from `std::vector<int>::iterator it;` as `int`.
>
>
> Why can't the compiler deduce the type?
>
>
>
No.
The compiler can't: look up "non-deduced context" for more information.
And it isn't reasonable to expect a deduction.
Suppose a class as follows
```
template <typename T>
struct foo
{ using type = int; };
```
where the type `type` is **always** `int`, whatever the `T` type is.
And suppose a function as follows
```
template <typename T>
void bar (typename foo<T>::type i)
{ }
```
that receives an `int` value (`typename foo<T>::type` is always `int`).
Which `T` type should be deduced from the following call?
```
bar(0);
```
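Since `T` appears only in a non-deduced context, the caller has to supply it explicitly — which is exactly why the `merge<int>(begin(vector1))` call in the question works:
```
bar<int>(0); // OK: T is given explicitly, nothing needs to be deduced
```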
|