Log transformation in CART analysis
I'm working as a liaison between a researcher and a stats team at a university. I'm a database admin who is using business intelligence tools to offer (as of right now) CART analyses in .PDF form over the internet. I'm using Microsoft Business Intelligence Development Studio to do this. Recently I received an email saying that the measure (or independent variable, I guess it should be called here) was highly skewed and the original CART analysis didn't make sense. Fair enough, I know that sometimes these things don't work out like you had hoped. Then another analysis was done with a log transformation. Can someone point me to a good quick-and-dirty resource so I can read up on this? I'm not a complete stats novice, but ANOVAs and logistic regressions are about as far as my knowledge extends.
| Taking the log of a predictor (independent) variable should have no effect, as CART is invariant to monotonic transformations of the predictors.
See this example where I fit a CART with Age (named `fit`) and log(Age) (named `fit2`). The split points are the same for both trees (except that the split points in `fit2` are on the log scale, e.g., $4.706 \approx \ln(111)$), and the predictions and nodes are exactly the same.
```
R> library(rpart)
R> fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis)
R> fit2 <- rpart(Kyphosis ~ log(Age) + Number + Start, data=kyphosis)
R> fit
n= 81
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 81 17 absent (0.79012 0.20988)
2) Start>=8.5 62 6 absent (0.90323 0.09677)
4) Start>=14.5 29 0 absent (1.00000 0.00000) *
5) Start< 14.5 33 6 absent (0.81818 0.18182)
10) Age< 55 12 0 absent (1.00000 0.00000) *
11) Age>=55 21 6 absent (0.71429 0.28571)
22) Age>=111 14 2 absent (0.85714 0.14286) *
23) Age< 111 7 3 present (0.42857 0.57143) *
3) Start< 8.5 19 8 present (0.42105 0.57895) *
R> fit2
n= 81
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 81 17 absent (0.79012 0.20988)
2) Start>=8.5 62 6 absent (0.90323 0.09677)
4) Start>=14.5 29 0 absent (1.00000 0.00000) *
5) Start< 14.5 33 6 absent (0.81818 0.18182)
10) log(Age)< 4.005 12 0 absent (1.00000 0.00000) *
11) log(Age)>=4.005 21 6 absent (0.71429 0.28571)
22) log(Age)>=4.706 14 2 absent (0.85714 0.14286) *
23) log(Age)< 4.706 7 3 present (0.42857 0.57143) *
3) Start< 8.5 19 8 present (0.42105 0.57895) *
```
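The invariance is easy to verify outside rpart as well. Below is a toy sketch in Python (made-up data, a plain Gini-impurity stump finder, not rpart) showing that the best split induces the same partition of the samples whether we split on the raw predictor or on its log:

```python
import math

def gini(labels):
    # Gini impurity of a set of 0/1 labels.
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(x, y):
    """Return the set of sample indices sent left by the best threshold."""
    best, best_left = float('inf'), None
    for t in sorted(set(x))[:-1]:  # candidate thresholds between sorted values
        left = [i for i, v in enumerate(x) if v <= t]
        right = [i for i, v in enumerate(x) if v > t]
        score = (len(left) * gini([y[i] for i in left]) +
                 len(right) * gini([y[i] for i in right])) / len(x)
        if score < best:
            best, best_left = score, set(left)
    return best_left

age = [12, 35, 55, 81, 105, 111, 130, 206]
label = [0, 0, 1, 1, 1, 0, 0, 0]

# log() is strictly monotonic, so the candidate partitions (and their
# impurities) are identical; the chosen split groups the same samples.
print(best_split(age, label) == best_split([math.log(a) for a in age], label))
# True
```

Only the numeric threshold changes (it moves to the log scale, just like 111 vs. 4.706 above); which observations go left and right does not.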
|
Assembly - x86 call instruction and memory address?
I've been reading some assembly code and I've started seeing that call instructions are actually program counter relative.
However, whenever I'm using visual studio or windbg to debug, it always says call 0xFFFFFF ... which to me means it's saying I'm going to jump to that address.
Who is right? Is Visual Studio hiding the complexity of the instruction encoding and just showing what the program means? That is, does the debugger know it's a PC-relative instruction and, since it knows the PC, just do the math for you?
Highly confused.
| If you're disassembling `.o` object files that haven't been linked yet, the call address will just be a placeholder to be filled in by the linker.
**You can use `objdump -drwc -Mintel` to *show* the relocation types + symbol names from a `.o`** (The `-r` option is the key. Or `-R` for an already-linked shared library.)
---
It's more useful to the user to show the actual address of the jump target, rather than disassemble it as `jcc eip-1234H` or something. Object files have a default load address, so the disassembler has a value for `eip` at every instruction, and this is usually present in disassembly output.
e.g. in some asm code I wrote (where I use symbol names that made it into the object file, so the loop branch target is actually visible to the disassembler):
```
objdump -M intel -d rs-asmbench:
...
00000000004020a0 <.loop>:
4020a0: 0f b6 c2 movzx eax,dl
4020a3: 0f b6 de movzx ebx,dh
...
402166: 49 83 c3 10 add r11,0x10
40216a: 0f 85 30 ff ff ff jne 4020a0 <.loop>
0000000000402170 <.last8>:
402170: 0f b6 c2 movzx eax,dl
```
Note that the encoding of the `jne` instruction is a signed little-endian 32bit displacement, of `-0xD0` bytes. (jumps add their displacement to the value of `e/rip` after the jump. The jump instruction itself is 6 bytes long, so the displacement has to be `-0xD0`, not just `-0xCA`.) `0x100 - 0xD0 = 0x30`, which is the value of the least-significant byte of the 2's complement displacement.
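That arithmetic can be checked mechanically. A small Python sketch, using the addresses from the listing above:

```python
import struct

jne_addr = 0x40216A  # address of the jne instruction
jne_len = 6          # 0f 85 + 4-byte rel32
target = 0x4020A0    # .loop

# Relative branches add the displacement to RIP *after* the instruction.
disp = target - (jne_addr + jne_len)
print(hex(disp))  # -0xd0

# Encoded as a signed little-endian 32-bit value: 30 ff ff ff,
# matching the bytes in the disassembly.
print(struct.pack('<i', disp).hex())  # 30ffffff
```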
In your question, you're talking about the call addresses being `0xFFFF...`, which makes little sense unless that's just a placeholder, or you thought the non-`0xFF` bytes in the displacement were part of the opcode.
Before linking, references to external symbols look like this:
```
objdump -M intel -d main.o
...
a5: 31 f6 xor esi,esi
a7: e8 00 00 00 00 call ac <main+0xac>
ac: 4c 63 e0 movsxd r12,eax
af: ba 00 00 00 00 mov edx,0x0
b4: 48 89 de mov rsi,rbx
b7: 44 89 f7 mov edi,r14d
ba: e8 00 00 00 00 call bf <main+0xbf>
bf: 83 f8 ff cmp eax,0xffffffff
c2: 75 cc jne 90 <main+0x90>
...
```
Notice how the `call` instructions have their relative displacement = 0. So before the linker has slotted in the actual relative value, they encode a `call` with a target of the instruction right after the call. (i.e. `RIP = RIP+0`). The `call bf` is immediately followed by an instruction that starts at `0xbf` from the start of the section. The other `call` has a different target address because it's at a different place in the file. (gcc puts `main` in its own section: `.text.startup`).
So, if you want to make sense of what's actually being called, look at a linked executable, or get a disassembler that looks at the object file symbols to slot in symbolic names for call targets instead of showing them as calls with zero displacement.
Relative jumps to local symbols already get resolved before linking:
```
objdump -Mintel -d asm-pinsrw.o:
0000000000000040 <.loop>:
40: 0f b6 c2 movzx eax,dl
43: 0f b6 de movzx ebx,dh
...
106: 49 83 c3 10 add r11,0x10
10a: 0f 85 30 ff ff ff jne 40 <.loop>
0000000000000110 <.last8>:
110: 0f b6 c2 movzx eax,dl
```
Note, the exact same instruction encoding on the relative jump to a symbol in the same file, even though the file has no base address, so the disassembler just treats it as zero.
See Intel's reference manual for instruction encoding. Links at <https://stackoverflow.com/tags/x86/info>. Even in 64bit mode, `call` only supports 32bit sign-extended relative offsets; 64bit absolute targets need an indirect `call` through a register or memory. (In 32bit mode, 16bit relative offsets are supported with an operand-size prefix, I guess saving one instruction byte.)
|
Why does the number of vt and v elements in a blender .obj file differ?
Having followed the instructions of the tutorial
<https://www.youtube.com/watch?v=yc0b5GcYl3U>
(How To Unwrap A UV Sphere In Blender) I succeeded in generating a textured sphere within the blender program.
Now I want it in my openGL C++ program. To this end I followed the tutorial <http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Load_OBJexported> in order to save the sphere as an .obj file (using the triangulation export option as stated in said tutorial) and joyfully found a lot of 'v', 'vt', and 'f' lines within the result.
However, parsing the file I found 642 vertices (v), 561 texture vertices (vt), and 1216 face lines (f) of the expected structure 'f a/at b/bt c/ct'.
What baffles me is this: my naive understanding of OpenGL tells me that each point on a textured object has a site in space (the vertex) and a site on the texture (the UV point). Hence I really would expect the numbers of v's and vt's to match. But they do not: 642 != 561. How can that be?
| Because OBJ and OpenGL use a different definition of "vertex", and handle indices differently.
In the following explanation, I'll call the coordinates of a vertex, which are the values in the `v` records of the OBJ format, "positions".
## OBJ
The main characteristic of the OBJ vertex/index model is that it uses separate indices for different vertex attributes (positions, normals, texture coordinates).
This means that you can have independent lists of positions and texture coordinates, with different sizes. The file only needs to list each unique position once, and each unique texture coordinate pair once.
A vertex is then defined by specifying **3** indices: One each for the position, the texture coordinates, and the normal.
## OpenGL
OpenGL on the other hand uses a single set of indices, which reference complete vertices.
A vertex is defined by its position, texture coordinates, and normal. So a vertex is needed for each unique **combination** of position, texture coordinates, and normal.
## Conversion
When you read an OBJ file for OpenGL rendering, you need to create a vertex for each **unique combination** of position, texture coordinates, and normal. Since they are referenced by indices in the `f` records, you need to create an OpenGL vertex for each unique index triplet you find in those `f` records. For each of these vertices, you use the position, texture coordinates, and normals at the given index, as read from the OBJ file.
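A minimal sketch of this de-indexing step in Python (hypothetical hand-made data rather than a real OBJ parser; normals omitted for brevity):

```python
# 'v' records: unique positions; 'vt' records: unique texture coordinates.
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
texcoords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# Each face corner is a (position_index, texcoord_index) pair,
# i.e. the 'a/at' part of an 'f a/at b/bt c/ct' record.
faces = [[(0, 0), (1, 1), (2, 2)],
         [(0, 3), (2, 2), (1, 1)]]  # position 0 reused with texcoord 3

vertex_for_pair = {}  # (pos_idx, tc_idx) -> OpenGL vertex index
vertices = []         # interleaved vertex data for the GL vertex buffer
indices = []          # single GL index list

for face in faces:
    for pair in face:
        if pair not in vertex_for_pair:
            # One OpenGL vertex per unique combination of attributes.
            vertex_for_pair[pair] = len(vertices)
            pos, tc = positions[pair[0]], texcoords[pair[1]]
            vertices.append(pos + tc)
        indices.append(vertex_for_pair[pair])

# Position 0 appears with two different texcoords, so it yields two
# OpenGL vertices: 3 positions + 4 texcoords -> 4 combined vertices.
print(len(vertices))  # 4
print(indices)        # [0, 1, 2, 3, 2, 1]
```

This also shows why the counts generally differ in both directions: the OBJ lists can each be shorter than the final OpenGL vertex list.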
My older answer here contains pseudo-code to illustrate this process: [OpenGL - Index buffers difficulties](https://stackoverflow.com/questions/23349080/opengl-index-buffers-difficulties/23356738#23356738).
|
Why MongoDB shell new ISODate(0001-01-01) returns date 1901-01-01
In MongoDB Shell on windows if you run a query with a value of
```
new ISODate('0001-01-01T00:00:00Z')
```
it actually seems to search for
```
new ISODate('1901-01-01T00:00:00Z')
```
If you enter "new ISODate('0001-01-01T00:00:00Z')" directly in the Mongo Shell you can see this conversion taking place as it returns ISODate("1901-01-01T00:00:00Z").
Oddly, when you use "new Date" instead of "new ISODate" by entering:
```
new Date('0001-01-01T00:00:00Z')
```
it returns ISODate("0001-01-01T00:00:00Z") which is correct.
Both are supposed to return an ISODate according to [the docs](http://docs.mongodb.org/manual/core/shell-types/) and to my mind should act identically. Does anyone know why they don't and whether it's a bug or a feature?
| Internally, `new ISODate` really means:
```
Date.UTC(year, month, date, hour, min, sec, ms);
```
IE, MongoDB splits up the string into elements with a regular expression (Line 60 at <https://github.com/mongodb/mongo/blob/master/src/mongo/shell/types.js#L56>)
The JavaScript Date object has a few different initialisers (<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date#Syntax>). If you use:
```
new Date("0001-01-01T00:00:00");
```
Then the four-digit year `0001` is taken literally, with no remapping. But when you use it like MongoDB does:
```
Date.UTC( parseInt("0001") )
```
Then the special rules for two-digit years apply: values 0-99 are interpreted as 1900-1999, which turns year 1 into 1901. The docs at <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date#Date_instances> slightly hint at this.
There is a MongoDB server ticket already at <https://jira.mongodb.org/browse/SERVER-8164>, please vote for it.
|
Java - return 2 values (String array)
I have this Java code and I want to return 2 values and then use them in main() or in other functions. Some help please. Thanks:
```
import java.net.*;
import java.io.*;
import java.io.File;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang.StringUtils;
public class URLReader {
public String[] functie(String x) throws Exception
{
URL oracle = new URL(x);
BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));
String inputLine=null;
StringBuffer theText = new StringBuffer();
while ((inputLine = in.readLine()) != null)
theText.append(inputLine+"\n");
String html = theText.toString();
in.close();
String[] tds = StringUtils.substringsBetween(html, "<tr>", "</tr>");
String[] tds2 = StringUtils.substringsBetween(tds[1], "href=\"/module/gallery", "\"><img");
String[] tds3 = StringUtils.substringsBetween(tds[1], "src='/redx_tools/mb_image.php", "' border='1'");
return ???
}
public static void main(String[] args) throws Exception {
String x = new String("http://www.wippro.at/module/gallery/index.php?limitstart=0&picno=0&gallery_key=59");
URLReader s = new URLReader();
for (String st : s.functie(x))
{
System.out.println(st);
}
}
}
```
Did you build your strings? If `a` and `b` are the String objects you want to return, you can build a String array to return like this:
```
return new String[] {a, b};
```
You've built three String arrays in your code: `tds`, `tds2`, and `tds3`. All of them could be returned in a big array like this:
```
String[] retArray = new String[tds.length+tds2.length+tds3.length];
System.arraycopy(tds, 0, retArray, 0, tds.length);
System.arraycopy(tds2, 0, retArray, tds.length, tds2.length);
System.arraycopy(tds3, 0, retArray, tds.length+tds2.length, tds3.length);
return retArray;
```
|
Fill a specific index in tensor with a value
I'm beginner with tensorflow.
I created this tensor
```
z = tf.zeros([20,2], tf.float32)
```
and I want to change the value of index `z[2,1]` and `z[2,2]` to `1.0` instead of zeros.
How can I do that?
| What you *exactly* ask is not possible for two reasons:
- `z` is a constant tensor, it can't be changed.
- There is no `z[2,2]`, only `z[2,0]` and `z[2,1]`.
But assuming you want to change `z` to a variable and fix the indices, it can be done this way:
```
z = tf.Variable(tf.zeros([20,2], tf.float32)) # a variable, not a const
assign21 = tf.assign(z[2, 0], 1.0) # an op to update z
assign22 = tf.assign(z[2, 1], 1.0) # an op to update z
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(z)) # prints all zeros
sess.run([assign21, assign22])
print(sess.run(z)) # prints 1.0 in the 3rd row
```
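As a cross-check of the indexing (valid column indices are only 0 and 1), here is the same update on a mutable NumPy array; this is plain NumPy for illustration, not TensorFlow:

```python
import numpy as np

# Same shape and dtype as the tensor in the question.
z = np.zeros((20, 2), dtype=np.float32)

# The third row is z[2, 0] and z[2, 1]; there is no z[2, 2].
z[2, 0] = 1.0
z[2, 1] = 1.0

print(z[2])     # [1. 1.]
print(z.sum())  # 2.0
```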
|
Print out Javascript array in table
I have this array:
```
var employees = [
{ "firstName":"John" , "lastName":"Doe" },
{ "firstName":"Anna" , "lastName":"Smith" },
{ "firstName":"Peter" , "lastName": "Jones" }
];
```
and I would like to print the entire array out as a html table. How would I accomplish this?
I tried this but could only get the final name the print:
```
<!DOCTYPE html>
<html>
<body>
<h2>Create Object from JSON String</h2>
<p>
First Name: <span id="fname"></span><br />
Last Name: <span id="lname"></span><br />
</p>
<script type="text/javascript">
var txt = '{"employees":[' +
'{"firstName":"John","lastName":"Doe" },' +
'{"firstName":"Anna","lastName":"Smith" },' +
'{"firstName":"Peter","lastName":"Jones" }]}';
var obj = eval ("(" + txt + ")");
for (i=0; i<txt.length; i++){
document.getElementById("fname").innerHTML=obj.employees[i].firstName
document.getElementById("lname").innerHTML=obj.employees[i].lastName
}
</script>
</body>
</html>
```
| Using jQuery you can do:
```
var txt = '{"employees":[' +
'{"firstName":"John","lastName":"Doe" },' +
'{"firstName":"Anna","lastName":"Smith" },' +
'{"firstName":"Peter","lastName":"Jones" }]}';
// $.parseJSON will parse the txt (JSON) and convert it to an
// JavaScript object. After its call, it gets the employees property
// and sets it to the employees variable
var employees = $.parseJSON( txt ).employees;
var $table = $( "<table></table>" );
for ( var i = 0; i < employees.length; i++ ) {
var emp = employees[i];
var $line = $( "<tr></tr>" );
$line.append( $( "<td></td>" ).html( emp.firstName ) );
$line.append( $( "<td></td>" ).html( emp.lastName ) );
$table.append( $line );
}
$table.appendTo( document.body );
// or, to insert the table in a div with id attribute
// set as "myDiv" instead, you could do:
// $table.appendTo( $( "#myDiv" ) );
```
jsFiddle: <http://jsfiddle.net/davidbuzatto/aDX7E/>
|
HTML div elements not taking the height of their parent, even though the parent has nonzero height
I have a fairly simple problem. I have a container div with three children - two divs and a table. The following is the CSS:
```
#container {
overflow: auto;
}
#child1 {
float: left;
width: 50px;
height: 100%;
}
#table1 {
float: left;
}
#child2 {
float: left;
width: 50px;
height: 100%;
}
```
The HTML is very simple as well
```
<div id='container'>
<div id='child1'></div>
<table id='table1'>
<tbody></tbody>
</table>
<div id='child2'></div>
</div>
```
The table has some content that sets its height. When the page is rendered, the height of the parent container div is set to the height of the table, as it should. The other children, however, are being collapsed for some reason. Here's the example, with some table elements and styling for clarity: <http://jsfiddle.net/GUEc6/>. As you see, the height of the container div is being properly set to the height of the table, but the child1 and child2 divs fail to properly set their heights to 100% of that. I know that if a floated element has its height set to 100%, the parent element needs to have some definition of its own height so the child can be 100% of something concrete. But it doesn't look like that's what's happening here.
| It's a common misconception about `height: 100%`.
From [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/height):
>
> The percentage is calculated with respect to the height of the
> generated box's containing block. If the height of the containing
> block **is not specified explicitly** (i.e., it depends on content
> height), and this element is not absolutely positioned, the value
> computes to auto. A percentage height on the root element is relative
> to the initial containing block.
>
>
>
One solution to your problem could be absolute positioning. Set `position: relative` on your container and position the children absolutely. Setting `top: 0; bottom: 0;` on them will stretch them to the container's height.
[Quick Demo](http://jsfiddle.net/GUEc6/3/) (shows the concept, you might need to tweak it)
|
angular ng-show / ng-hide not working correctly with ng-bind-html
I want to set ng-show or ng-hide for my elements in html string and pass it to view with ng-bind-html but ng-show / ng-hide not working and my element always visible.
This is my controller code:
```
$scope.my = {
messageTrue: true,
messageFalse: false
};
$scope.HtmlContent = "<div ng-show='{{my.messageFalse}}'>This is incorrect (ng-show & my.messageFalse={{my.messageFalse}})</div> ";
$scope.trustedHtml = $interpolate($scope.HtmlContent)($scope);
```
And this is my view code:
```
<div ng-show="my.messageTrue">This is correct (ng-show & my.messageTrue={{my.messageTrue}})</div>
<div ng-hide="my.messageFalse">This is correct (ng-hide & my.messageFalse={{my.messageFalse}})</div>
<div ng-bind-html="trustedHtml"></div>
```
[This is a Plnkr](http://plnkr.co/edit/tbmULwHdAimLoXJGLnEb?p=preview) for my question. (Thanks for [Xaero](https://stackoverflow.com/users/3798176/xaero))
Sorry for my bad English. Thanks
| This is because the html you are injecting has not yet been compiled and linked by angular, so it is just being displayed "as is". It's being treated the same way your markup would be treated if you didn't include angular.js at all.
The solution is to create a directive that works similar to ng-bind-html, but that also includes a step for compiling and linking the html fragment.
[This link](http://ngmodules.org/modules/ng-html-compile) is an example of such a directive.
Here is the code:
```
angular.module('ngHtmlCompile', []).
directive('ngHtmlCompile', function($compile) {
return {
restrict: 'A',
link: function(scope, element, attrs) {
scope.$watch(attrs.ngHtmlCompile, function(newValue, oldValue) {
element.html(newValue);
$compile(element.contents())(scope);
});
}
}
});
```
and the usage.
```
<div ng-html-compile="trustedHtml"></div>
```
And here is the working [Plunk](http://plnkr.co/edit/C0tl3dluTbSyOQudzyOV?p=preview)
|
How important is sprint test plan in agile?
I have been preparing a test plan for every sprint even if I have a master test plan just to plan the test for the current sprint. I am including topics like:
1. Introduction
2. Purpose
3. Feature Overview
4. Inscope
5. Outscope
6. Assumptions and Risks
7. Approach
8. Test Deliverables
9. Testing Task and Proposed Schedule
Recently I discussed this with some colleagues, and they suggested not to make a sprint test plan since it is covered by the master test plan. Is it OK to skip the sprint test plan and go on with the master test plan alone?
| The biggest problem I see with this is that you're looking at a mini-waterfall rather than a sprint. The description you're giving above forces the team to deliver all or nothing. What if they have 12 items in the sprint and deliver 9? Does that mean you won't test or release what has been done? If so then I dread your release schedule, every time any team member misses any work item you'll fail to deploy - that weekly deploy will quickly become months long.
My suggestion would be that all the information you've put above should already be available.
- Introduction - Sprint Name, Dates, and goal - job done!
- Purpose - See above
- Feature Overview - a list of stories in the sprint
- Inscope - As above
- Outscope - The backlog minus what's above
- Assumptions and Risks - this depends on what your company is expecting but there's no reason you can't capture risks to stories during backlog refinement. Again, the key is that it's a risk to the story not a risk to the release. Product/Team risk management should be handled outside the sprint framework.
- Approach - Part of the stories themselves
- Test Deliverables - Can you deliver a story without testing it?
- Testing Task and Proposed Schedule - as above
The point I'm trying to make is that all of this information should be available to anyone who wants it either through a link to the Sprint or a link to the story.
My biggest piece of advice - testing is not a phase, it's one of the tasks required to complete a story. Identify the tests you'll do on each story at the same time as identifying the development tasks.
|
Exclude component that breaks Angular Universal
I used [ng-toolkit](https://github.com/maciejtreder/ng-toolkit) with `ng add @ng-toolkit/universal` to add Angular Universal support to my project.
I am able to create the prod build with no errors, and I am able to run the server, again without any errors. It just gets "stuck" when a request comes to it (nodeJS does not render any output).
I found out that one of my components is breaking server-side rendering: the issue is with the Mat-Carousel:
component:
```
export class BannerComponent {
slides: any[] = [
// tslint:disable-next-line:max-line-length
{ image: 'assets/banner/banner-one.png' },
// tslint:disable-next-line:max-line-length
{ image: 'assets/banner/banner-two.png' },
// tslint:disable-next-line:max-line-length
{ image: 'assets/banner/banner-three.png' }
];
}
```
template:
```
<section class="sec-space-b" id="banner">
<mat-carousel
timings="250ms ease-in"
[autoplay]="true"
interval="5000"
color="accent"
maxWidth="auto"
proportion="25"
slides="5"
[loop]="true"
[hideArrows]="false"
[hideIndicators]="false"
[useKeyboard]="true"
[useMouseWheel]="false"
orientation="ltr"
>
<mat-carousel-slide
#matCarouselSlide
*ngFor="let slide of slides; let i = index"
overlayColor="#00000000"
[image]="slide.image"
[hideOverlay]="false"
></mat-carousel-slide>
</mat-carousel>
</section>
```
How can I solve this problem? Can I somehow exclude particular component from the Server-Side build?
| The fix is simple, you should use a [PLATFORM\_ID](https://angular.io/api/core/PLATFORM_ID) token together with the `isPlatformBrowser` or `isPlatformServer` method.
Inside your template use the `*ngIf` directive:
```
<section class="sec-space-b" id="banner" *ngIf="isBrowser">
```
And inside the component code initialize the `isBrowser` field as:
```
import { isPlatformBrowser } from '@angular/common';
import { Component, OnInit, Inject, PLATFORM_ID } from '@angular/core';
@Component({
selector: 'app-home-banner',
templateUrl: './banner.component.html',
styleUrls: ['./banner.component.scss']
})
export class BannerComponent implements OnInit {
public isBrowser = isPlatformBrowser(this.platformId);
constructor(@Inject(PLATFORM_ID) private platformId: any) { }
}
```
You can read more about `isPlatformServer` and `isPlatformBrowser` in this article (they are used there):
<https://www.twilio.com/blog/create-search-engine-friendly-internationalized-web-apps-angular-universal-ngx-translate>
You can also check out my talk about Angular Universal (13:26 - about running different code on browser and server):
<https://www.youtube.com/watch?v=J42mqpVsg0k>
|
Move a vector to vector
Is it possible to move a `vector<T*>` to a `vector<const T*>` without copying it and without relying on `reinterpret_cast<>`? I.e.
```
vector<int*> get() {
return ...;
}
vector<const int*> getConst() {
return whatgoeshere(get());
}
```
| I'm going to attack this from another angle. And address a possible design issue. You didn't specify what comes in the `...`, but assuming `get` populates a vector and then returns it, the solution in my view is to lift the code that does the populating outside of *both* functions.
```
template<typename Int>
void do_get(std::vector<Int*>& v) {
// Populate v
}
auto get() {
std::vector<int*> ret;
do_get(ret);
return ret;
}
auto getConst() {
std::vector<const int*> ret;
do_get(ret);
return ret;
}
```
One source of truth for the populating logic. And while the two original functions are identical, it's negligible. Furthermore on a sane implementation it won't do any superfluous copies, because RVO is amazing.
|
Speed Up Pandas Iterations
I have DataFrame which consist of 3 columns: CustomerId, Amount and Status(success or failed).
The DataFrame is not sorted in any way. A CustomerId can repeat multiple times in DataFrame.
I want to introduce new columns into this DataFrame with below logic:
df[totalamount]= sum of amount for each customer where status was success.
I already have running code using df.iterrows, but it takes too much time. Could you suggest alternate methods like pandas vectorization or numpy vectorization?
For Example, I want to create the 'totalamount' column from the first three columns:
```
CustomerID Amount Status totalamount
0 1 5 Success 105 # since both transactions were successful
1 2 10 Failed 80 # since one transaction was successful
2 3 50 Success 50
3 1 100 Success 105
4 2 80 Success 80
5 4 60 Failed 0
```
| Use `where` to mask the 'Failed' rows with `NaN` while preserving the length of the DataFrame. Then `groupby` the CustomerID and `transform` the sum of 'Amount' column to bring the result back to every row.
```
df['totalamount'] = (df.where(df['Status'].eq('Success'))
.groupby(df['CustomerID'])['Amount']
.transform('sum'))
CustomerID Amount Status totalamount
0 1 5 Success 105.0
1 2 10 Failed 80.0
2 3 50 Success 50.0
3 1 100 Success 105.0
4 2 80 Success 80.0
5 4 60 Failed 0.0
```
---
The reason for using `where` (as opposed to subsetting the DataFrame) is because groupby + sum defaults to sum an entirely `NaN` group to 0, so we don't need anything extra to deal with CustomerID 4, for instance.
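That subsetting alternative can be sketched like this (same example data; note the explicit `fillna` needed for customers like CustomerID 4 that have no successful rows):

```python
import pandas as pd

df = pd.DataFrame({'CustomerID': [1, 2, 3, 1, 2, 4],
                   'Amount': [5, 10, 50, 100, 80, 60],
                   'Status': ['Success', 'Failed', 'Success',
                              'Success', 'Success', 'Failed']})

# Sum successful amounts per customer on the subset, then map the
# per-customer sums back onto every row of the full frame.
sums = df[df['Status'].eq('Success')].groupby('CustomerID')['Amount'].sum()
df['totalamount'] = df['CustomerID'].map(sums).fillna(0)

print(df['totalamount'].tolist())  # [105.0, 80.0, 50.0, 105.0, 80.0, 0.0]
```

Both approaches are vectorized and avoid `iterrows`; `where` + `transform` just handles the all-failed customers without the extra `fillna` step.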
|
Further understanding setRetainInstance(true)
What *exactly* happens when you call `setRetainInstance(true)` on a `Fragment`? The documentation is virtually non-existent and this seems like a very important function. Specifically I want to know how much of this sequence (that I made up) is true:
>
> 1. The user rotates the device.
> 2. The fragment is detached from the `Activity` and `Fragment.onDetach()` is called.
> 3. The activity is destroyed; `Activity.onDestroy()` is called.
> 4. The `Activity` java object is deleted (when possible, by the GC).
> 5. A new `Activity` java object is created; its constructor, and `onCreate()` are called.
> 6. In `Activity.onCreate()` we either have `setContentView(...)` which sets a layout containing a fragment, or we use `FragmentTransaction`
> to add a fragment.
> 7. I'm really not sure about this, but I assume that android is smart enough to find the old fragment, and call `Fragment.onAttach()` to
> reattach it to the new `Activity`
> 8. Next (or before? who knows?) `Activity.onResume()` is called.
>
>
>
So is that correct? Is Android smart enough to find the old fragment, even if I explicitly use `FragmentTransaction.add(new MyFragment(), ...)` the first time? And if so, how do I avoid adding *another* fragment in `onCreate()`? Do I need to do something like this?:
```
if (getSupportFragmentManager().findFragmentByTag("foo") == null)
{
FragmentTransaction ft = getSupportFragmentManager().beginTransaction();
ft.add(new FooFragment(), "foo").commit();
}
```
| Ok, perhaps I was slightly too harsh on the Android documentation, because it does have some useful information, but sadly none of it is linked from `setRetainInstance()`. From [the page about fragments](http://developer.android.com/guide/components/fragments.html)
>
> Note: Each fragment requires a unique identifier that the system can
> use to restore the fragment if the activity is restarted (and which
> you can use to capture the fragment to perform transactions, such as
> remove it). There are three ways to provide an ID for a fragment:
>
>
> - Supply the android:id attribute with a unique ID.
> - Supply the android:tag attribute with a unique string.
> - If you provide neither of the previous two, the system uses the ID of the container view.
>
>
>
This strongly implies that if you do `setContentView(R.layout.whatever)` in `Activity.onCreated()` and that layout contains a fragment with `setRetainInstance(true)`, then when the activity is recreated it will be searched for again using its id or tag.
Secondly, for UI-less fragments, it states
>
> To add a fragment without a UI, add the fragment from the activity
> using add(Fragment, String) (supplying a unique string "tag" for the
> fragment, rather than a view ID). This adds the fragment, but, because
> it's not associated with a view in the activity layout, it does not
> receive a call to onCreateView(). So you don't need to implement that
> method.
>
>
>
And the docs link to a very good example - `FragmentRetainInstance.java` which I have reproduced below for your convenience. It does exactly what I speculated was the answer in my question (`if (...findFragmentByTag() == null) { ...`).
Finally, I created my own test activity to see exactly what functions are called. It outputs this, when you start in portrait and rotate to landscape. The code is below.
(This is edited a bit to make it easier to read.)
```
TestActivity@415a4a30: this()
TestActivity@415a4a30: onCreate()
TestActivity@415a4a30: Existing fragment not found.
TestFragment{41583008}: this() TestFragment{41583008}
TestFragment{41583008}: onAttach(TestActivity@415a4a30)
TestFragment{41583008}: onCreate()
TestFragment{41583008}: onCreateView()
TestFragment{41583008}: onActivityCreated()
TestActivity@415a4a30: onStart()
TestFragment{41583008}: onStart()
TestActivity@415a4a30: onResume()
TestFragment{41583008}: onResume()
<rotate device>
TestFragment{41583008}: onPause()
TestActivity@415a4a30: onPause()
TestFragment{41583008}: onStop()
TestActivity@415a4a30: onStop()
TestFragment{41583008}: onDestroyView()
TestFragment{41583008}: onDetach()
TestActivity@415a4a30: onDestroy()
TestActivity@415a3380: this()
TestFragment{41583008}: onAttach(TestActivity@415a3380)
TestActivity@415a3380: onCreate()
TestActivity@415a3380: Existing fragment found.
TestFragment{41583008}: onCreateView()
TestFragment{41583008}: onActivityCreated()
TestActivity@415a3380: onStart()
TestFragment{41583008}: onStart()
TestActivity@415a3380: onResume()
TestFragment{41583008}: onResume()
```
**Note that the Android documentation is wrong: the UI-less fragment *does* receive a call to `onCreateView()` but it is free to return `null`.**
# Source code for `TestActivity`/`TestFragment`
```
import android.app.Activity;
import android.app.Fragment;
import android.app.FragmentTransaction;
import android.os.Bundle;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;
import com.concentriclivers.ss.R;
// An activity for understanding Android lifecycle events.
public class TestActivity extends Activity
{
private static final String TAG = TestActivity.class.getSimpleName();
public TestActivity()
{
super();
Log.d(TAG, this + ": this()");
}
protected void finalize() throws Throwable
{
super.finalize();
Log.d(TAG, this + ": finalize()");
}
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
Log.d(TAG, this + ": onCreate()");
TextView tv = new TextView(this);
tv.setText("Hello world");
setContentView(tv);
if (getFragmentManager().findFragmentByTag("test_fragment") == null)
{
Log.d(TAG, this + ": Existing fragment not found.");
FragmentTransaction ft = getFragmentManager().beginTransaction();
ft.add(new TestFragment(), "test_fragment").commit();
}
else
{
Log.d(TAG, this + ": Existing fragment found.");
}
}
@Override
public void onStart()
{
super.onStart();
Log.d(TAG, this + ": onStart()");
}
@Override
public void onResume()
{
super.onResume();
Log.d(TAG, this + ": onResume()");
}
@Override
public void onPause()
{
super.onPause();
Log.d(TAG, this + ": onPause()");
}
@Override
public void onStop()
{
super.onStop();
Log.d(TAG, this + ": onStop()");
}
@Override
public void onDestroy()
{
super.onDestroy();
Log.d(TAG, this + ": onDestroy()");
}
public static class TestFragment extends Fragment
{
private static final String TAG = TestFragment.class.getSimpleName();
public TestFragment()
{
super();
Log.d(TAG, this + ": this() " + this);
}
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
Log.d(TAG, this + ": onCreate()");
setRetainInstance(true);
}
@Override
public void onAttach(final Activity activity)
{
super.onAttach(activity);
Log.d(TAG, this + ": onAttach(" + activity + ")");
}
@Override
public void onActivityCreated(Bundle savedInstanceState)
{
super.onActivityCreated(savedInstanceState);
Log.d(TAG, this + ": onActivityCreated()");
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState)
{
Log.d(TAG, this + ": onCreateView()");
return null;
}
@Override
public void onViewCreated(View view, Bundle savedInstanceState)
{
super.onViewCreated(view, savedInstanceState);
Log.d(TAG, this + ": onViewCreated()");
}
@Override
public void onDestroyView()
{
super.onDestroyView();
Log.d(TAG, this + ": onDestroyView()");
}
@Override
public void onDetach()
{
super.onDetach();
Log.d(TAG, this + ": onDetach()");
}
@Override
public void onStart()
{
super.onStart();
Log.d(TAG, this + ": onStart()");
}
@Override
public void onResume()
{
super.onResume();
Log.d(TAG, this + ": onResume()");
}
@Override
public void onPause()
{
super.onPause();
Log.d(TAG, this + ": onPause()");
}
@Override
public void onStop()
{
super.onStop();
Log.d(TAG, this + ": onStop()");
}
@Override
public void onDestroy()
{
super.onDestroy();
Log.d(TAG, this + ": onDestroy()");
}
}
}
```
# Source code for `FragmentRetainInstance.java` (as of API 16):
```
/*
* Copyright (C) 2010 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.android.apis.app;
import com.example.android.apis.R;
import android.app.Activity;
import android.app.Fragment;
import android.app.FragmentManager;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ProgressBar;
/**
* This example shows how you can use a Fragment to easily propagate state
* (such as threads) across activity instances when an activity needs to be
* restarted due to, for example, a configuration change. This is a lot
* easier than using the raw Activity.onRetainNonConfiguratinInstance() API.
*/
public class FragmentRetainInstance extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// First time init, create the UI.
if (savedInstanceState == null) {
getFragmentManager().beginTransaction().add(android.R.id.content,
new UiFragment()).commit();
}
}
/**
* This is a fragment showing UI that will be updated from work done
* in the retained fragment.
*/
public static class UiFragment extends Fragment {
RetainedFragment mWorkFragment;
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View v = inflater.inflate(R.layout.fragment_retain_instance, container, false);
// Watch for button clicks.
Button button = (Button)v.findViewById(R.id.restart);
button.setOnClickListener(new OnClickListener() {
public void onClick(View v) {
mWorkFragment.restart();
}
});
return v;
}
@Override
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
FragmentManager fm = getFragmentManager();
// Check to see if we have retained the worker fragment.
mWorkFragment = (RetainedFragment)fm.findFragmentByTag("work");
// If not retained (or first time running), we need to create it.
if (mWorkFragment == null) {
mWorkFragment = new RetainedFragment();
// Tell it who it is working with.
mWorkFragment.setTargetFragment(this, 0);
fm.beginTransaction().add(mWorkFragment, "work").commit();
}
}
}
/**
* This is the Fragment implementation that will be retained across
* activity instances. It represents some ongoing work, here a thread
* we have that sits around incrementing a progress indicator.
*/
public static class RetainedFragment extends Fragment {
ProgressBar mProgressBar;
int mPosition;
boolean mReady = false;
boolean mQuiting = false;
/**
* This is the thread that will do our work. It sits in a loop running
* the progress up until it has reached the top, then stops and waits.
*/
final Thread mThread = new Thread() {
@Override
public void run() {
// We'll figure the real value out later.
int max = 10000;
// This thread runs almost forever.
while (true) {
// Update our shared state with the UI.
synchronized (this) {
// Our thread is stopped if the UI is not ready
// or it has completed its work.
while (!mReady || mPosition >= max) {
if (mQuiting) {
return;
}
try {
wait();
} catch (InterruptedException e) {
}
}
// Now update the progress. Note it is important that
// we touch the progress bar with the lock held, so it
// doesn't disappear on us.
mPosition++;
max = mProgressBar.getMax();
mProgressBar.setProgress(mPosition);
}
// Normally we would be doing some work, but put a kludge
// here to pretend like we are.
synchronized (this) {
try {
wait(50);
} catch (InterruptedException e) {
}
}
}
}
};
/**
* Fragment initialization. We way we want to be retained and
* start our thread.
*/
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// Tell the framework to try to keep this fragment around
// during a configuration change.
setRetainInstance(true);
// Start up the worker thread.
mThread.start();
}
/**
* This is called when the Fragment's Activity is ready to go, after
* its content view has been installed; it is called both after
* the initial fragment creation and after the fragment is re-attached
* to a new activity.
*/
@Override
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
// Retrieve the progress bar from the target's view hierarchy.
mProgressBar = (ProgressBar)getTargetFragment().getView().findViewById(
R.id.progress_horizontal);
// We are ready for our thread to go.
synchronized (mThread) {
mReady = true;
mThread.notify();
}
}
/**
* This is called when the fragment is going away. It is NOT called
* when the fragment is being propagated between activity instances.
*/
@Override
public void onDestroy() {
// Make the thread go away.
synchronized (mThread) {
mReady = false;
mQuiting = true;
mThread.notify();
}
super.onDestroy();
}
/**
* This is called right before the fragment is detached from its
* current activity instance.
*/
@Override
public void onDetach() {
// This fragment is being detached from its activity. We need
// to make sure its thread is not going to touch any activity
// state after returning from this function.
synchronized (mThread) {
mProgressBar = null;
mReady = false;
mThread.notify();
}
super.onDetach();
}
/**
* API for our UI to restart the progress thread.
*/
public void restart() {
synchronized (mThread) {
mPosition = 0;
mThread.notify();
}
}
}
}
```
|
Have \*remind\* interpret the output of \*$shell()\*
*remind(1)* provides a function `shell()` documented as follows:
```
shell(s_cmd [,i_maxlen])
Executes cmd as a system command, and returns the first 511
characters of output resulting from cmd. Any whitespace
character in the output is converted to a space. Note that if
RUN OFF has been executed, or the -r command-line option has
been used, shell() will result in an error, and cmd will not be
executed.
…
```
I would like whatever `s_cmd` writes to stdout to be interpreted
by *remind* itself. E. g.:
```
$ echo REM Sep 13 2018 MSG test >/tmp/test.rem
$ tail -2 ~/.reminders
SET tmp shell("cat /tmp/test.rem", -1)
$tmp
```
Where `$tmp` is my unsuccessful attempt at inserting the output of the
command in the line above. When executing `rem(1)`, it does not return an
error but it does not interpolate `$tmp` either:
```
$ rem
Reminders for Thursday, 13th September, 2018 (today):
…
$tmp
```
I assume that `$tmp` is interpreted as an implicit `REM …` statement.
(The `INCLUDE` directive does not work in this context because I need
the output of the inclusion to be generated in situ.)
| Your problem is not with the shell() function, but
a) with the way you try to interpolate expressions/variables -- you should use `[tmp]` instead of `$tmp`
b) with the fact that `remind` doesn't allow `MSG` in expressions:
```
$ cat /tmp/foo.rem
SET var "REM Sep 13 2018 MSG test"
[var]
$ remind /tmp/foo.rem
/tmp/foo.rem(2): Can't nest MSG, MSF, RUN, etc. in expression
No reminders.
```
This is what the documentation says:
>
>
> ```
> o You cannot use expression-pasting to determine the type (MSG,
> CAL, etc.) of a REM command. You can paste expressions before
> and after the MSG, etc keywords, but cannot do something like
> this:
> REM ["12 Nov 1993 AT 13:05 " + "MSG" + " BOO!"]
>
> ```
>
>
I'm not a remind user, but this is my first crack at fixing your problem:
```
SET tmp shell("cat /tmp/test.rem", -1)
REM [substr(tmp, 4, index(tmp, "MSG")-1)] MSG [substr(tmp, index(tmp, "MSG")+4)]
```
provided that `/tmp/test.rem` is of the form `REM ... MSG ...`.
Please notice that in remind, indexes start from 1, not from 0.
**Note**
If your problem is actually 'how to include dynamically generated content in a remind file', you may do that by redirecting the output of the shell command to a temporary file, then INCLUDE-ing that file:
```
INCLUDE [shell("echo REM " + today() + " MSG hello > /tmp/foo.rem; echo /tmp/foo.rem")]
```
Or you could use the `INCLUDE` command with a fifo instead of a regular file, and have a script that writes to the fifo each time it is opened.
Before starting `reminder`:
```
$ mkfifo /tmp/remind-fifo
$ while echo 'REM Sep 18 2018 MSG test' > /tmp/remind-fifo; do sleep 1; done &
```
Replace the `echo` with whatever script you need to generate the remind commands (eg. `sh my_script > /tmp/remind-fifo`).
Then, in the remind file, you can simply include the fifo:
```
INCLUDE /tmp/remind-fifo
```
The fifo approach could be used with other programs that have an include mechanism (e.g., the C preprocessor).
|
Does std::set.find(element) use the == operator from the class to compare the elements?
Say I have
```
std::set<classtype> set;
class classtype {
bool operator==(const classtype& ct) {
//..
}
};
//..
std::set<classtype>::iterator it = set.find(element);
```
`find` does use the `==` operator from the class, correct?
Also my reference says it has log(n) worst-case runtime, where n is the number of elements in the set. How is this realized internally? I understand that the key is that the elements in the set have an order (so insertion takes time to maintain that order); for integer sets it is clear what that order means, but for arbitrary classes not so much.
| From the C++ Standard (23.2.4 Associative containers)
>
> 3 The phrase “equivalence of keys” means the equivalence relation
> imposed by the comparison and **not the operator== on keys**. That is,
> two keys k1 and k2 are considered to be equivalent if for the
> comparison object comp, comp(k1, k2) == false && comp(k2, k1) ==
> false. For any two keys k1 and k2 in the same container, calling
> comp(k1, k2) shall always return the same value.
>
>
>
The member function `find` looks up a key according to the comparison object `comp`.
If you did not explicitly specify a comparison object, then the class by default uses the standard function object `std::less`, which applies `operator <` inside its call operator. So your class has to have `operator <` defined.
If you want to use `operator ==` for comparison values in the set then you can use standard algorithm `std::find` instead of the method `find`.
|
Pandas : remove SOME duplicate values based on conditions
I have a dataset :
```
id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
4 B.com No
5 C.com No
```
I want to remove duplicates, i.e. keep first occurence of "url" field, **BUT** keep duplicates if the field "keep\_if\_dup" is YES.
Expected output :
```
id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
5 C.com No
```
What I tried :
```
Dataframe=Dataframe.drop_duplicates(subset='url', keep='first')
```
which of course does not take into account "keep\_if\_dup" field. Output is :
```
id url keep_if_dup
1 A.com Yes
3 B.com No
5 C.com No
```
| You can pass multiple boolean conditions to `loc`, the first keeps all rows where col 'keep\_if\_dup' == 'Yes', this is `or`ed (using `|`) with the inverted boolean mask of whether col 'url' column is duplicated or not:
```
In [79]:
df.loc[(df['keep_if_dup'] =='Yes') | ~df['url'].duplicated()]
Out[79]:
id url keep_if_dup
0 1 A.com Yes
1 2 A.com Yes
2 3 B.com No
4 5 C.com No
```
to overwrite your df self-assign back:
```
df = df.loc[(df['keep_if_dup'] =='Yes') | ~df['url'].duplicated()]
```
breaking down the above shows the 2 boolean masks:
```
In [80]:
~df['url'].duplicated()
Out[80]:
0 True
1 False
2 True
3 False
4 True
Name: url, dtype: bool
In [81]:
df['keep_if_dup'] =='Yes'
Out[81]:
0 True
1 True
2 False
3 False
4 False
Name: keep_if_dup, dtype: bool
```
|
Poisson Distribution: Estimating rate parameter and the interval length
Here is the motivation for my question. I have a sensor that reports data to me. The occurrence of the reports from the sensor follows a Poisson process (so, obviously, the inter-event times are exponential). I assume a constant event rate $\lambda$.
The device, however, can fail. Let $T\_F$ be the failure time. After failure, the event occurrences are not reported. So what I observe are event times $t\_1,t\_2,\ldots,t\_n$ that have occurred on some interval $(0,T\_F)$. I do not have prior information.
So this is just a standard Poisson "set-up" except that I don't know the length of the interval over which the events can be observed. I want to estimate both the rate $\lambda$ and the interval length $T\_F$.
I have tried writing down the equations for the maximum likelihood estimates for $\lambda$ and $T\_F$, but I am finding that they have no solution. (Maybe I have made a mistake.)
It seems that this should be a simple enough standard problem. I have not been able to find an answer (in part because searches that involve the term "interval" return large numbers of pages/answers about confidence intervals). Any help or pointers to references would be greatly appreciated.
Let $t=T\_F$. Conditional on the number of occurrences $N=n$, the arrival times $t\_1,t\_2,\dots,t\_N$ are known to have the same distribution as the order statistics of $n$ iid unif$(0,t)$ random variables. Hence, the likelihood becomes
\begin{align}
L(\lambda,t) &= P(N=n) f(t\_1,t\_2,\dots,t\_N|N=n) \\
&= \frac{e^{-\lambda t}(\lambda t)^n}{n!}\frac{n!}{t^n} \\
&= e^{-\lambda t}\lambda^n.
\end{align}
for $t\ge t\_n$ and zero elsewhere. This is maximised for $\hat t=t\_n$ and $\hat\lambda=n/t\_n$. These MLEs don't exist if there are no occurrences $N=0$, however. Conditional on $N=n$, again using the fact that $t\_n$ can be viewed as an order statistic (the maximum) of $n$ iid unif$(0,t)$ random variables, $E(t\_N|N=n)=\frac n{n+1} t$. Hence, the estimator $t^\*=\frac {n+1}n t\_n$ is unbiased for $t$ conditional on $N=n$ and hence also conditional on $N\ge 1$. A reasonable frequentist estimator of $\lambda$ might be $\lambda^\* = n/t^\* = \frac{n^2}{(n+1)t\_n}$ but this does not have finite expectation when $N=1$ so assessing its bias is even more troublesome.
Bayesian inference using independent, non-informative scale priors on $\lambda$ and $t$ on the other hand leads to a posterior
$$
f(\lambda,t|t\_1,\dots,t\_N) \propto e^{-\lambda t}\lambda^{n-1}t^{-1}.
$$
for $t>t\_n,\lambda>0$. Integrating out $\lambda$, the marginal posterior of $t$ becomes
$$
f(t|t\_1,\dots,t\_N) = \frac{n t\_n^n}{t^{n+1}}, t>t\_n,
$$
and the posterior mean $E(t|t\_1,\dots,t\_N)=\frac n{n-1} t\_n$. A $(1-\alpha)$-credible interval for $t$ is given by $\left(\frac{t\_n}{(1-\alpha/2)^{1/n}}, \frac{t\_n}{(\alpha/2)^{1/n}}\right)$.
The marginal posterior of $\lambda$ is
\begin{align}
f(\lambda|t\_1,\dots,t\_N) &\propto \int\_{t\_n}^\infty e^{-\lambda t}\lambda^{n-1}t^{-1} dt \\
&= \lambda^{n-1}\Gamma(0,\lambda t\_n)
\end{align}
where $\Gamma$ is the incomplete gamma function.
|
Stepping through all permutations one swap at a time
Given a list of n distinct items, how can I step through each permutation of the items swapping just one pair of values at a time? (I assume it is possible, it certainly feels like it should be.)
What I'm looking for is an iterator that yields the indices of the next pair of items to swap, such that if iterated n!-1 times it will step through the n! permutations of the list in some order. If iterating it once more would restore the list to its starting order that would be a bonus, but it isn't a requirement. If all pairs involve the first (resp. the last) element as one of the pair, so that the function only needs to return a single value, that would also be a bonus.
Example:- for 3 elements, you can swap the last element alternately with the first and second elements to loop through the permutations, viz: (a b c) swap 0-2 => (c b a) 1-2 (c a b) 0-2 (b a c) 1-2 (b c a) 0-2 (a c b).
I'll be implementing in C, but can probably puzzle out solutions in most languages.
| Ah, once I calculated a sequence for n=4 (with the "always swap the first item with another" constraint), I was able to find sequence [A123400](https://oeis.org/A123400) in the OEIS, which told me I need "Ehrlich's swap method".
Google found me [a C++ implementation](http://www.jjj.de/fxt/demo/comb/perm-star-swaps.h), which I assume from [this](http://www.jjj.de/fxt/) is under the GPL. I've also found Knuth's [fascicle 2b](http://www-cs-faculty.stanford.edu/~knuth/fasc2b.ps.gz) which describes various solutions to exactly my problem.
Once I have a tested C implementation I'll update this with code.
Here's some perl code that implements Ehrlich's method based on Knuth's description. For lists up to 10 items, I tested in each case that it correctly generated the complete list of permutations and then stopped.
```
#
# Given a count of items in a list, returns an iterator that yields the index
# of the item with which the zeroth item should be swapped to generate a new
# permutation. Returns undef when all permutations have been generated.
#
# Assumes all items are distinct; requires a positive integer for the count.
#
sub perm_iterator {
    my $n = shift;
    my @b = (0 .. $n - 1);
    my @c = (undef, (0) x $n);
    my $k;
    return sub {
        $k = 1;
        $c[$k++] = 0 while $c[$k] == $k;
        return undef if $k == $n;
        ++$c[$k];
        @b[1 .. $k - 1] = reverse @b[1 .. $k - 1];
        return $b[$k];
    };
}
```
Example use:
```
#!/usr/bin/perl -w
use strict;
my @items = @ARGV;
my $iterator = perm_iterator(scalar @items);
print "Starting permutation: @items\n";
while (my $swap = $iterator->()) {
    @items[0, $swap] = @items[$swap, 0];
    print "Next permutation: @items\n";
}
print "All permutations traversed.\n";
exit 0;
```
By request, python code. (Sorry, it probably isn't overly idiomatic. Suggestions for improvement welcomed.)
```
class ehrlich_iter:
    def __init__(self, n):
        self.n = n
        self.b = range(0, n)
        self.c = [0] * (n + 1)

    def __iter__(self):
        return self

    def next(self):
        k = 1
        while self.c[k] == k:
            self.c[k] = 0
            k += 1
        if k == self.n:
            raise StopIteration
        self.c[k] += 1
        # Reverse b[1..k-1] in place via slice assignment; calling .reverse
        # on a slice would only reverse a temporary copy.
        self.b[1:k] = self.b[1:k][::-1]
        return self.b[k]

mylist = [ 1, 2, 3, 4 ] # test it

print "Starting permutation: ", mylist
for v in ehrlich_iter(len(mylist)):
    mylist[0], mylist[v] = mylist[v], mylist[0]
    print "Next permutation: ", mylist
print "All permutations traversed."
```
|
What component to derive my 'TCard' from? (game)
I am trying to make a TCard component for a game. **What class should I derive it from?**
This is for a card game like MTG or yu gi oh. The card should have a blank image, and when created it will load either a front or back view.
If it loads the front view, it will then have to have a few labels (for properties like power/cost/def/text). Cards must be clickable.
```
type
TCard = class(zzzzzzzzz)
private
```
Now once that is done, do I have to add anything to the constructor/destructor? Currently I have:
```
constructor TCard.Create(AOwner: Tcomponent);
begin
inherited Create(AOwner);
end;
{******************************************************************************}
{ Free any resources allocated to component }
destructor TCard.Destroy;
begin
inherited Destroy;
end;
```
Also I think I added the onclick parts right but not sure. In the published area I have
```
{Inherited properties}
property OnMouseDown;
property OnMouseMove;
property OnMouseUp;
property OnClick;
property OnDblClick;
```
etc...
| It depends on what you want to do, but typically there are two ancestors for visible controls:
- TGraphicControl: This is a descendant of TControl that implements a Canvas for you. You can just override the Paint method to start drawing. Controls like this support mouse interactions, but cannot get keyboard focus.
- TCustomControl: This is a descendant of TWinControl. It also implements a Canvas and allows you to override the Paint method to draw any content. Because it descends from TWinControl, it actually has a handle and can gain keyboard focus and process keyboard messages.
Another good candidate is TPanel (or rather TCustomPanel). It inherits from TCustomControl, so it has the same graphical properties, but it also contains functionality to draw borders and align child controls. I doubt if you would need this, though.
Of course you can derive directly from TControl or TWinControl, but then you will have to take care of some of this stuff yourself.
Note that it is better to put the actual card game logic in a separate class and only create visual controls for drawing. If you do that, you can still choose whether you want to have separate controls for each card, or you can choose to draw your whole card game on a single control or even directly on the form. I doubt if Windows' card games like Free Cell and Solitaire have over 50 graphics controls.
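A minimal sketch of the `TGraphicControl` route (untested; the fields and drawing logic are illustrative, not from the question):

```
type
  TCard = class(TGraphicControl)
  private
    FFaceUp: Boolean;
    FFront: TBitmap;
    FBack: TBitmap;
  protected
    procedure Paint; override;   // draw front or back on the provided Canvas
  published
    property OnClick;            // re-publish the inherited mouse events
    property OnDblClick;
  end;

procedure TCard.Paint;
begin
  if FFaceUp and Assigned(FFront) then
    Canvas.StretchDraw(ClientRect, FFront)
  else if Assigned(FBack) then
    Canvas.StretchDraw(ClientRect, FBack);
end;
```

Labels for power/cost/def/text can then simply be drawn in `Paint` with `Canvas.TextOut`, which avoids creating child controls per card.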
|
Rabbitmq File Descriptor Limit
The RabbitMQ documentation says that we need to do some configuration before we use it in production. One of the configuration items is the maximum number of open files (which is an OS parameter).
The RabbitMQ server we use is running on Ubuntu 16.04, and according to resources I found on the web, I updated the maximum number of open files to 500k. When I check it from the command line, I get the following output:
```
root@madeleine:~# ulimit -n
500000
```
However, when I look at the RabbitMQ server status, I see another number.
```
root@madeleine:~# rabbitmqctl status | grep 'file_descriptors' -A 4
{file_descriptors,
[{total_limit,924},
{total_used,19},
{sockets_limit,829},
{sockets_used,10}]},
```
It seems like I managed to increase the limit on the OS side, but RabbitMQ still thinks the total limit of file descriptors is 924.
What might be causing this problem?
| You might want to look at this [page](https://www.rabbitmq.com/install-rpm.html#linux-max-open-files-limit-options-other-linux)
Apparently, this operation depends on the OS version. If you have **systemd**, you should put the following in the */etc/systemd/system/rabbitmq-server.service.d/limits.conf* file:
*Notice that this service configuration might be somewhere else according to the operating system you are using. You can use the following command to find where this service configuration is located and update that file.*
```
find / -name "*rabbitmq-server.service*"
```
```
[Service]
LimitNOFILE=300000
```
On the other hand, if you do not have the systemd folder, you should try this in your *rabbitmq-env.conf* file:
```
ulimit -S -n 4096
```
|
How do I auto-increment a column in my table?
I'm building a database with a product instance table in Visual Studio 2010 with SQL Server 2008, and I need to make the ProductId column auto-incremented, but I cannot find the attribute in the column properties menu. I'm using C# and ASP.NET, if that is relevant. I've seen the code to create the table and set the column to auto-increment, but as this is my first go-round with coding, I don't know where to put the code. The only way I know to create a new table is through the VS GUI, if that makes sense.
| Set the Identity specification to yes
![enter image description here](https://i.stack.imgur.com/Iih2X.png)
Sample SQL:
```
CREATE TABLE [dbo].[HomePageImages](
[RecordId] [int] IDENTITY(1,1) NOT NULL,
[AlternateText] [varchar](100) NOT NULL,
[ImageName] [varchar](50) NOT NULL,
[NavigateUrl] [varchar](200) NOT NULL,
[ImageUrl] AS ('/content/homepageimages/'+[ImageName]),
[DisplayFrom] [datetime] NULL,
[DisplayTo] [datetime] NULL,
CONSTRAINT [PK_HomePageImages] PRIMARY KEY CLUSTERED
(
[RecordId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
```
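Stripped down to the part the question is really about, the `IDENTITY(seed, increment)` clause is what makes the column auto-increment (table and column names below are illustrative):

```
CREATE TABLE dbo.Product (
    ProductId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL
);

-- ProductId is generated automatically: 1, 2, ...
INSERT INTO dbo.Product (Name) VALUES ('Widget'), ('Gadget');
```

You can run such a script from SQL Server Management Studio, or paste it into a new query window in Visual Studio's Server Explorer, instead of using the table designer.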
|
Polymer + form POST data
I have this
```
<form id="form_837299" class="appnitro" method="post" action="insert.php"> <paper-input label="Title" name="title" maxlength="255">
</paper-input>
<paper-input floatinglabel multiline label="text" name="text"></paper-input>
<li class="buttons">
<input type="hidden" name="form_id" value="837299" />
<input id="saveForm" class="button_text" type="submit" name="submit" value="Submit" />
</li>
</ul>
</form>
```
I have a problem with the POST data - nothing is sent for "text" and "title" (both are paper-input elements).
I modified the template so that the "name" attribute now sits on a div that Polymer created, but still no data is sent.
print\_r($\_POST); shows me only this:
Array ( [form\_id] => 837299 [submit] => Submit )
Does anybody know how to use Polymer and Material UI in a form?
| Only elements that extend native form elements automatically get submitted with forms. `paper-input` extends `core-input` which has an input inside it as opposed to extending it. See [this mailing list discussion](https://groups.google.com/forum/#!searchin/polymer-dev/form/polymer-dev/V0qah6T1Lzk/FjVfT44BAuIJ) for additional discussion and [this StackOverflow post](https://stackoverflow.com/questions/23151817/polymer-manually-submitting-a-form/23159102) for possible solutions.
Something like [this jsbin](http://jsbin.com/kurelaji/2/edit?html,output) maybe?
**Update**: Here's the same thing in [web component form](http://jsbin.com/muvatumi/1/edit?html,output).
**Update**: Looks like the creator of [`ajax-form`](http://ajax-form.raynicholus.com) has [added](https://github.com/garstasio/ajax-form/issues/19) this functionality.
**Update**: Also consider using [`iron-form`](https://elements.polymer-project.org/elements/iron-form).
|
Python: get default gateway for a local interface/ip address in linux
On Linux, how can I find the default gateway for a local ip address/interface using python?
I saw the question "How to get internal IP, external IP and default gateway for UPnP", but the accepted solution only shows how to get the local IP address for a network interface on windows.
Thanks.
| For those people who don't want an extra dependency and don't like calling subprocesses, here's how you do it yourself by reading `/proc/net/route` directly:
```
import socket, struct

def get_default_gateway_linux():
    """Read the default gateway directly from /proc."""
    with open("/proc/net/route") as fh:
        for line in fh:
            fields = line.strip().split()
            if fields[1] != '00000000' or not int(fields[3], 16) & 2:
                # If not default route or not RTF_GATEWAY, skip it
                continue
            return socket.inet_ntoa(struct.pack("<L", int(fields[2], 16)))
```
I don't have a big-endian machine to test on, so I'm not sure whether the endianness is dependent on your processor architecture, but if it is, replace the `<` in `struct.pack('<L', ...` with `=` so the code will use the machine's native endianness.
|
Dev always behind master
We use gitflow methodology, so master branch, dev branch, feature branches. For release we merge dev to master and release from there.
What is happening is that each time we go to PR dev to master, we are told dev is one commit behind master and we can't merge. So we PR master to dev. It shows no diff and merges fine. Then we can PR dev to master. But the cycle repeats for the next release, even though we haven't done anything to master.
What should I be looking for that could cause this?
| If you're using the standard [Git Flow](https://nvie.com/posts/a-successful-git-branching-model/#diagram), then you will have a merge commit on `master` when merging in a `release` branch. (Note, in your case it sounds like you're skipping `release` branches for now and instead you just have a `dev` branch which would be akin to `develop` in Git Flow.) So, every time you merge `dev` into `master` you will get one new merge commit on `master`.
From your comment:
>
> If we merge dev to master, they should both be pointing at the same commit (If I understand my git right).
>
>
>
Not necessarily. That would be true if you allowed a fast-forward merge, but that's not true if you force a merge commit. However, the *state* of `dev` and `master` should be the same after the merge.
Regarding his comment:
>
> "Why can't you merge?" Well, bitbucket won't let us. Probably could force it or something, but seems like that wouldn't solve the problem.
>
>
>
That is probably because you have a setting turned on in BitBucket that requires `dev` to be fully up to date with `master`. Note this is unrelated to Git merges in general and the requirement isn't necessary if you don't want it.
If you want to leave that setting on, I would recommend doing the back merge of `master` to `dev` immediately *after* merging into `master`, instead of immediately *before*. This way if you ever have a hotfix that gets merged into `master` your process will get that hotfix merged down into `dev` right away, so your testing against the `dev` branch can include it.
|
Element to take remaining height of viewport
Please consider this style:
```
.root {
display: flex;
flex-direction: column;
height: 100%;
}
.header {
width: 100%;
background-color: red;
display: flex;
height: 100px;
justify-content: space-between;
.logo-pane {
width: 100px;
height: 100px;
background-color: green;
}
.user-actions {
width: 100px;
height: 100px;
background-color: blue;
}
}
.content {
flex-grow: 1;
background-color: pink;
}
```
What I want to achieve is that the `content` element takes the remaining height of the viewport, but it only takes its content's height.
**HTML**:
```
<div class="root">
<div class="header">
<div class="logo-pane">Logo</div>
<div class="user-actions">User Actions</div>
</div>
<div class="content">
content
</div>
</div>
```
[Codepen](http://codepen.io/ronenl/pen/LZGZZX)
| The problem is the surrounding `.root`. Its `height: 100%` has no defined parent height to resolve against, so you have to give `.root` the full viewport height explicitly by setting `height: 100vh` on it. Try the following solution:
```
body, html {
margin:0;
padding:0;
}
.root {
display: flex;
flex-direction: column;
height: 100vh;
align-items:stretch;
align-content:stretch;
}
.header {
width: 100%;
background-color: red;
display: flex;
height: 100px;
justify-content: space-between;
}
.logo-pane {
width: 100px;
height: 100px;
background-color: green;
}
.user-actions {
width: 100px;
height: 100px;
background-color: blue;
}
.content {
flex-grow:1;
background-color: pink;
}
```
```
<div class="root">
<div class="header">
<div class="logo-pane">Logo</div>
<div class="user-actions">User Actions</div>
</div>
<div class="content">content</div>
</div>
```
|
What are the numbers in the square brackets in NSLog() output?
What is the stuff between the `[]` in the log message below? I get this in my iPhone app, and I have no idea where the message is coming from. My first guess would be a line number, but which file would it be in?
```
2010-10-19 08:56:12.006 Encore[376:6907]
```
| The first number is the process ID, the second is the logging thread's Mach port. A desktop example:
```
2010-10-19 17:37:13.189 nc_init[28617:a0f] nc <CFNotificationCenter 0x10010d170 [0x7fff70d96f20]> - default <CFNotificationCenter 0x10010d2a0 [0x7fff70d96f20]>
(gdb) i thread
Thread 1 has current state "WAITING"
Mach port #0xa0f (gdb port #0x4203)
frame 0: main () at nc_init.m:10
pthread ID: 0x7fff70ebfc20
system-wide unique thread id: 0x167b49
dispatch queue name: "com.apple.main-thread"
dispatch queue flags: 0x0
total user time: 13232000
total system time: 16099000
scaled cpu usage percentage: 0
scheduling policy in effect: 0x1
run state: 0x3 (WAITING)
flags: 0x0
number of seconds that thread has slept: 0
current priority: 31
max priority: 63
suspend count: 0.
(gdb) p/x (int)mach_thread_self()
$1 = 0xa0f
```
Notice how 0xa0f is reported as the thread's Mach port.
|
How to see logs of terminated pods
I am running selenium hubs and my pods are getting terminated frequently. I would like to look at the logs of the pods which are terminated. How to do it?
```
NAME READY STATUS RESTARTS AGE
chrome-75-0-0e5d3b3d-3580-49d1-bc25-3296fdb52666 0/2 Terminating 0 49s
chrome-75-0-29bea6df-1b1a-458c-ad10-701fe44bb478 0/2 Terminating 0 23s
chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 0/2 ContainerCreating 0 7s
kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
$ kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 --previous
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
```
| Running `kubectl logs -p` will fetch logs from existing resources at API level. This means that terminated pods' logs will be unavailable using this command.
As mentioned in other answers, the best way is to have your logs centralized via [logging agents or directly pushing these logs into an external service](https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures).
Alternatively and given the [logging architecture in Kubernetes](https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), you might be able to [fetch the logs directly from the log-rotate files](https://kubernetes.io/docs/concepts/cluster-administration/logging/#system-component-logs) in the node hosting the pods. However, this option might depend on the Kubernetes implementation as log files might be deleted when the pod eviction is triggered.
|
How do I change the name of the ASPNETUsers table to User?
I am using the default authentication system created by ASP.NET Core, and I'd like to know:
1. How do I change the name of the `ASPNETUsers` table to `User`?
2. How do I add the following property to the table: `public string DisplayName {get; set;}`?
3. How do I add the `RemoteAttribute` attribute to the Email property?
4. Is it a good idea to create another table, named Profile, with a one-to-one relationship with the ASPNETUsers table, if I have only a few properties?
thanks...
| You can do those as shown below.
**1.**
```
protected override void OnModelCreating(ModelBuilder builder)
{
base.OnModelCreating(builder);
builder.Entity<ApplicationUser>(entity =>
{
entity.ToTable(name:"User");
});
}
```
**2.**
```
public class ApplicationUser : IdentityUser
{
......................
......................
public string DisplayName {get; set;}
}
```
**3.** I would suggest putting that on your `ViewModel` instead of the core model (i.e. `ApplicationUser`), as shown below.
```
using Microsoft.AspNet.Mvc;
using System.ComponentModel.DataAnnotations;
public class LoginViewModel
{
[Required]
[EmailAddress]
[Remote("Foo", "Home", ErrorMessage = "Remote validation is working for you")]
public string Email { get; set; }
}
```
**4.** Since you have only a few properties, you can keep them inside the `ASPNETUsers` table itself, which is easy to maintain :)
|
LASSO with interaction terms - is it okay if main effects are shrunk to zero?
LASSO regression shrinks coefficients towards zero, thus providing effectively model selection. I believe that in my data there are meaningful interactions between nominal and continuous covariates. Not necessarily, however, are the 'main effects' of the true model meaningful (non-zero). Of course I do not know this since the true model is unknown. My objectives are to find the true model and predict the outcome as closely as possible.
I have learned that the classical approach to model building would always include a main effect *before* an interaction is included. Thus there cannot be a model without a main effect of two covariates $X$ and $Z$ if there is an interaction of the covariates $X\*Z$ in the same model. The `step` function in `R` consequently carefully selects model terms (e.g. based on backward or forward AIC) abiding by this rule.
LASSO seems to work differently. Since all parameters are penalized it may without doubt happen that a main effect is shrunk to zero whereas the interaction of the best (e.g. cross-validated) model is non-zero. This I find in particular for my data when using `R`'s `glmnet` package.
I received criticism based on the first rule quoted above, i.e. my final cross-validated Lasso model does not include the corresponding main effect terms of some non-zero interaction. However this rule seems somewhat strange in this context. What it comes down to is the question whether the parameter in the true model is zero. Let's assume it is but the interaction is non-zero, then LASSO will identify this perhaps, thus finding the correct model. In fact it seems predictions from this model will be more precise because the model does not contain the true-zero main effect, which is effectively a noise variable.
May I refute the criticism based on this ground or should I take pre-cautions somehow that LASSO does include the main effect before the interaction term?
| One difficulty in answering this question is that it's hard to reconcile LASSO with the idea of a "true" model in most real-world applications, which typically have non-negligible correlations among predictor variables. In that case, as with any variable selection technique, the particular predictors returned with non-zero coefficients by LASSO will depend on the vagaries of sampling from the underlying population. You can check this by performing LASSO on multiple bootstrap samples from the same data set and comparing the sets of predictor variables that are returned.
Furthermore, as @AndrewM noted in a comment, the bias of estimates provided by LASSO means that you will not be predicting outcomes "as closely as possible." Rather, you are predicting outcomes that are based on a particular choice of the unavoidable bias-variance tradeoff.
So given those difficulties, I would hope that you would want to know for yourself, not just to satisfy a critic, the magnitudes of main effects of the variables that contribute to the interaction. There is a package available in R, [glinternet](https://cran.r-project.org/web/packages/glinternet/), that seems to do precisely what you need (although I have no experience with it):
> Group-Lasso INTERaction-NET. Fits linear pairwise-interaction models that satisfy strong hierarchy: if an interaction coefficient is estimated to be nonzero, then its two associated main effects also have nonzero estimated coefficients. Accommodates categorical variables (factors) with arbitrary numbers of levels, continuous variables, and combinations thereof.
Alternatively, if you do not have too many predictors, you might consider ridge regression instead, which will return coefficients for all variables that may be much less dependent on the vagaries of your particular data sample.
|
How do I use VaryByParam with multiple parameters?
In ASP.NET MVC2 I use `OutputCache` and the `VaryByParam` attribute. I got it working fine with a single parameter, but what is the correct syntax when I have several parameters on the method?
```
[OutputCache(Duration=30, VaryByParam = "customerId")]
public ActionResult Index(int customerId)
{
//I've got this one under control, since it only has one parameter
}
[OutputCache(Duration=30, VaryByParam = "customerId")]
public ActionResult Index(int customerId, int languageId)
{
//What is the correct syntax for VaryByParam now that I have a second parameter?
}
```
How do I get it to cache the pages using both parameters? Do I add the attribute twice? Or write "customerId, languageId" as the value?
| Valid values for `VaryByParam` are one of the following:
- The literal string `*` (asterisk), which varies by *all* parameters of the action method.
- The literal string `none` (case-insensitive), which varies by *no* parameters of the action method.
- A string containing the semicolon-separated names of the parameters you wish to vary by.
In your case, you'd want the first option:
```
[OutputCache(Duration = 30, VaryByParam = "*")]
public ActionResult Index(int customerId, int languageId)
{
}
```
If, however, you had some params you want to vary by and some that you don't, then you'd use the third option:
```
[OutputCache(Duration = 30, VaryByParam = "customerId;languageId")] // foo is omitted
public ActionResult Index(int customerId, int languageId, int foo)
{
}
```
[Reference.](https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions-1/controllers-and-routing/improving-performance-with-output-caching-cs#varying-the-output-cache)
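As a conceptual illustration of what `VaryByParam` does, here is a rough sketch in Python (purely hypothetical — not how ASP.NET actually implements output caching) showing how the three kinds of values translate into cache keys:

```python
# Toy model of VaryByParam: the cache key is built from the action name
# plus whichever request parameters the setting tells us to vary by.
def cache_key(action, params, vary_by):
    if vary_by == "*":                      # vary by all parameters
        used = sorted(params.items())
    elif vary_by.lower() == "none":         # vary by no parameters
        used = []
    else:                                   # semicolon-separated parameter names
        names = {n.strip() for n in vary_by.split(";")}
        used = sorted((k, v) for k, v in params.items() if k in names)
    return (action, tuple(used))
```

With `"customerId;languageId"`, two requests that differ only in `foo` produce the same key and therefore hit the same cached entry.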
|
Game Networking between 2 players
I'm making a game and I want to establish a network connection between 2 players through a server.
So far all I have is:
```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
public class Server extends Thread {
public ServerSocket s;
public Socket p1, p2;
public PrintWriter bis1, bis2;
public BufferedReader bos1, bos2;
public Server() {
try {
s = new ServerSocket(5_000, 1714);
} catch (IOException e) {
e.printStackTrace();
}
}
public void run() {
try {
p1 = s.accept();
p2 = s.accept();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
try {
bis1 = new PrintWriter(p1.getOutputStream());
bis2 = new PrintWriter(p2.getOutputStream());
bos1 = new BufferedReader(new InputStreamReader(p1.getInputStream()));
bos2 = new BufferedReader(new InputStreamReader(p2.getInputStream()));
} catch (IOException e) {
e.printStackTrace();
}
while (p1.isClosed() || p2.isClosed()) { // if one of the players disconnect, the match will end.
try {
String p1 = bos1.readLine(); // what p1 says
String p2 = bos2.readLine(); // what p2 says
if (!p1.equalsIgnoreCase("")) { // if what p1 says is something
if(p1.startsWith("my position x=")) {
p1 = p1.substring(15);
float x = Float.parseFloat(p1);
bis2.write("his position x=" + x);
} else if(p1.startsWith("my position y=")) {
p1 = p1.substring(15);
float y = Float.parseFloat(p1);
bis2.write("his position y=" + y);
}
}
if (!p2.equalsIgnoreCase("")) { // if what p1 says is something
if(p2.startsWith("my position x=")) {
p2 = p2.substring(15);
float x = Float.parseFloat(p2);
bis1.write("his position x=" + x);
} else if(p2.startsWith("my position y=")) {
p2 = p2.substring(15);
float y = Float.parseFloat(p2);
bis1.write("his position y=" + y);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
```
This is meant to send update of the players x and y to each other.
I'll implement other data of the game later but I want to know if I'm going in the right direction.
| Your vertical separation is good, you've got clear regions in your code for defining the objects you need like sockets, readers and writers, etc. There's one region that needs cleaning, though:
```
while (p1.isClosed() || p2.isClosed()) { // if one of the players disconnect, the match will end.
try {
String p1 = bos1.readLine(); // what p1 says
String p2 = bos2.readLine(); // what p2 says
if (!p1.equalsIgnoreCase("")) { // if what p1 says is something
if(p1.startsWith("my position x=")) {
p1 = p1.substring(15);
float x = Float.parseFloat(p1);
bis2.write("his position x=" + x);
} else if(p1.startsWith("my position y=")) {
p1 = p1.substring(15);
float y = Float.parseFloat(p1);
bis2.write("his position y=" + y);
}
}
if (!p2.equalsIgnoreCase("")) { // if what p1 says is something
if(p2.startsWith("my position x=")) {
p2 = p2.substring(15);
float x = Float.parseFloat(p2);
bis1.write("his position x=" + x);
} else if(p2.startsWith("my position y=")) {
p2 = p2.substring(15);
float y = Float.parseFloat(p2);
bis1.write("his position y=" + y);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
```
It's filled with duplication. If this game turns into a four-player game, you're gonna have a mess on your hands.
If you could wrap all the processing in a `Player` object, however, you could extract the relevant parts to functions, drastically simplifying the main loop:
```
while (!gameEnded(players)) { // check victory conditions or disconnects
try {
for(Player p : players){
handleTickForPlayer(p);//How to handle update x y?
}
} catch (IOException e) {
e.printStackTrace();
}
}
```
Okay, that looks rather drastic, but that's because I'm leaving some parts out.
You've combined reading and writing in your own code, and it's not very future-proof. If you wanted to log matches, you'd have to insert something in between, and from the way you handle messages, you'd have to do it for every case.
What I think you should do is give your messages some sort of identifier to state the meaning of the message, like `id = 0` means it's an update of position x, `id = 1` is used for an update for position y, maybe `id = 2` is a chat message... you could put the type in the message by doing something like so: `"id;message body"`, so `"0;my position x=10.5"`. Or maybe you'd be better off with well known formats like JSON, but that's a separate point.
Via using these ids, you can then say "well, this is an 'update x' message", and send it off to the "update x" message handler.
I'd propose something like this:
```
for(Player p : players){
Message message = readMessageFromPlayer(p);
if (message == null) { continue; }
MessageHandler handler = getHandlerForMessage(message);
if (handler == null) {
//error?
//maybe ignore bad messages
continue;
}
handler.handleMessageFromPlayer(message, p, players);
}
```
In `readMessageFromPlayer`, you check if there is any incoming content, and if there is, you split it up into the body of the message and the identifier. Otherwise return null...
In `getHandlerForMessage`, you check if you have any handler that can handle said message. There's multiple ways to do that; one way is to have a `Map<MessageType, MessageHandler>` of sorts (this means max 1 handler per message type), the other is a `List<MessageHandler>` where each `MessageHandler` has a `boolean canHandle(Message message)` (multiple possible handlers, but first come first serve), or it could be a gigantic switch that just makes you a new `MessageHandler` of the correct subclass (nasty, but could work?). I'd go for the Map because it allows you to instantly see "is there a handler for this messagetype".
Then, once you have the handler, you give it the message, the player who sent the message, and the rest of the players in the game.
(Actually, you might want to wrap the game in a Game object, if games are played ON the server. What you have right now is a relay server; the server doesn't keep track of state. That's okay, if all you want is a relay server, but this does stop you from doing things like server-side anti-cheat, such as checking if players are moving into walls.)
Inside the MessageHandler, you'd go about the thing you're doing now; first, parse the message to get the relevant content, then, send out an update.
You might want to change the messages to a more easily parsable format; if you'd go with the idea that the first `number;` in a message is the id, why not put the rest in the same shape? Something like `0;10.5;` being "update x to 10.5", with the format being `id=0;x=10.5;`.
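To make the id-based dispatch concrete, here is a small illustrative sketch (written in Python for brevity, even though the game is Java; all names here are made up for the example):

```python
# Parse "id;body" messages and route them through a handler map,
# mirroring the Map<MessageType, MessageHandler> idea above.
def parse_message(raw):
    head, sep, body = raw.partition(";")
    if not sep or not head.isdigit():
        return None                      # malformed message
    return int(head), body

def handle_update_x(body, sender, players):
    return "his position x=" + body      # would be relayed to the other players

def handle_update_y(body, sender, players):
    return "his position y=" + body

HANDLERS = {0: handle_update_x, 1: handle_update_y}

def dispatch(raw, sender=None, players=()):
    parsed = parse_message(raw)
    if parsed is None:
        return None                      # ignore bad messages
    msg_id, body = parsed
    handler = HANDLERS.get(msg_id)       # first come, first serve lookup
    return handler(body, sender, players) if handler else None
```

Adding a new message type then means adding one handler function and one map entry, instead of another branch in the main loop.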
---
You can go really far with this, as you can see. You'll have to decide how far you want to take it, but I think that at the very least, you should come up with a way for handling players one at a time by wrapping them into objects. That way you centralize message handling and reduce reliance on the availability of several variables - directly writing to sockets is low level and you don't want to deal with it next to your game logic, because the two may get mixed up.
|
How is the header file connected to the corresponding .cpp file?
From what I have read and understood, the `#include` directive in a source file, like for ex: `main.cpp`, just copies the content of the included file to the `.cpp`. So, when I include a header file, for ex: `yum.h`, all the statements in `yum.h` are copied into `main.cpp`.
The header file will just have declarations and the actual definition will be in a corresponding `.cpp` file, like `yum.cpp`.
So, while compiling `main.cpp`, how will the compiler know to look for the definition of any function mentioned in `yum.h` in `yum.cpp`? How will the compiler know to compile the `yum.cpp` file too, as there is no reference to it in either `main.cpp` or `yum.h`?
Also, why should `yum.h` be included in `yum.cpp`?
This may sound like a stupid question. I am a beginner to OOP and C++, and am just trying to understand what is happening.
| Short answer: there is no relationship between the header and its implementation. One can exist without the other, or the two could be placed in files with unrelated names.
> while compiling the `main.cpp` how will the compiler know to look for the definition of any function mentioned in `yum.h` in `yum.cpp`?
The compiler has no idea. Each time it sees a reference to something declared in `yum.h`, or in any other header file for that matter, it stays on the lookout for the corresponding definition.
If the definition is not there by the time the compiler has reached the end of the translation unit, it writes the unsatisfied references into its `main.o` output, noting the places they come from. This is called a [*symbol table*](https://en.wikipedia.org/wiki/Symbol_table).
Then the compiler compiles `yum.cpp`, finds definitions from `yum.h` in it, and writes their positions into `yum.o`'s symbol table.
Once all `cpp` files have been processed, the linker grabs all `.o` files and builds a combined symbol table from them. If unsatisfied references remain, it issues an error. Otherwise, it links references from `main.o` with the corresponding symbols from `yum.o`, completing the process.
Consider an example: let's say `yum.h` declares a global variable `int yum = 0` defined in `yum.cpp`, and `main.cpp` prints that variable. The compiler produces `main.o` with a symbol table saying "I need `int yum`'s definition at address 1234", and `yum.o` file's symbol table saying "I have `int yum` at address 9876". Linker matches the "I need" with "I have" by placing 9876 at the address 1234.
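The matching step can be pictured with a toy model (Python, purely illustrative — real object files and linkers are far more involved): each object file carries a table of symbols it defines and a table of references it still needs.

```python
# Toy "linker": combine symbol tables and patch unresolved references.
main_o = {"defined": {}, "undefined": {"yum": [1234]}}   # "I need yum at 1234"
yum_o = {"defined": {"yum": 9876}, "undefined": {}}      # "I have yum at 9876"

def link(objects):
    symtab = {}
    for obj in objects:                       # build the combined symbol table
        symtab.update(obj["defined"])
    patches = {}
    for obj in objects:                       # satisfy every undefined reference
        for name, sites in obj["undefined"].items():
            if name not in symtab:
                raise RuntimeError(f"undefined reference to '{name}'")
            for site in sites:
                patches[site] = symtab[name]  # place 9876 at address 1234
    return patches

print(link([main_o, yum_o]))  # {1234: 9876}
```

Leaving `yum_o` out of the list reproduces the familiar "undefined reference" linker error.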
|
How to disable pre-commit code analysis for Git-backed projects using IntelliJ IDEA
I have a project in IntelliJ IDEA, and I'm using Git/GitHub as source control. Each time I try to commit changes, IntelliJ IDEA runs a lengthy code analysis and searches for TODOs. When it finds "problems," it prompts me whether or not I want to review or commit.
I don't want the pre-commit code analysis to run, and I don't want IntelliJ IDEA to ask me about the results. I can't seem to find any setting in the regular IntelliJ IDEA project/IDE settings to disable this. How can I disable this?
| **This answer is outdated**. Please see [Interlated's answer](https://stackoverflow.com/a/65839277/99717) for a more current answer.
---
Answer for IntelliJ IDEA 11.1.5:
There are persistent check-boxes in the "Commit Changes" dialog. The next time you go to commit a changelist, uncheck the "Perform code analysis" and "Check TODO" check-boxes.
If you want to just get it done now:
- Make a non-invasive, 'test change' to a file; for example, add a test comment to any file
- Right click on the changelist and select "Commit Changes..."
- In the "Commit Changes" dialog, uncheck the "Perform code analysis" and "Check TODO" check-boxes
- Click "Commit" to persist the settings. You can then undo the test comment and commit that.
I can't find any way to disable these checkboxes by default for new projects.
|
How to handle utf-8 text with Python 3?
I need to parse various text sources and then print / store it somewhere.
Every time a non-ASCII character is encountered, I can't print it correctly as it gets converted to bytes, and I have no idea how to view the correct characters.
(I'm quite new to Python, I come from PHP where I never had any utf-8 issues)
The following is a code example:
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import codecs
import feedparser
url = "http://feeds.bbci.co.uk/japanese/rss.xml"
feeds = feedparser.parse(url)
title = feeds['feed'].get('title').encode('utf-8')
print(title)
file = codecs.open("test.txt", "w", "utf-8")
file.write(str(title))
file.close()
```
I'd like to print and write in a file the RSS title (BBC Japanese - ホーム) but instead the result is this:
> b'BBC Japanese - \xe3\x83\x9b\xe3\x83\xbc\xe3\x83\xa0'
Both on screen and file. Is there a proper way to do this ?
| In Python 3, `bytes` and `str` are two different types, and `str` is used to represent any kind of text (including Unicode). When you `encode()` something, you convert it from its `str` representation to its `bytes` representation for a specific encoding.
In your case, in order to print the decoded string, you just need to remove the `encode('utf-8')` part:
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import codecs
import feedparser
url = "http://feeds.bbci.co.uk/japanese/rss.xml"
feeds = feedparser.parse(url)
title = feeds['feed'].get('title')
print(title)
file = codecs.open("test.txt", "w", encoding="utf-8")
file.write(title)
file.close()
```
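As a standalone illustration of the `str`/`bytes` distinction (independent of feedparser), note that the `b'...'` output in the question is exactly the UTF-8 `bytes` representation of the title:

```python
title = "BBC Japanese - ホーム"          # str: text
raw = title.encode("utf-8")              # bytes: what was being printed before
print(raw)                               # b'BBC Japanese - \xe3\x83\x9b\xe3\x83\xbc\xe3\x83\xa0'
print(raw.decode("utf-8"))               # BBC Japanese - ホーム

# When writing text, open the file in text mode with an explicit encoding
# and Python handles the conversion for you.
with open("test.txt", "w", encoding="utf-8") as f:
    f.write(title)
```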
|
How to retrieve a downloaded file programatically in Windows Phone 7?
I am downloading an epub file online. For this I first created a directory using `Directory.CreateDirectory`, then downloaded the file using the following code.
```
WebClient webClient = new WebClient();
webClient.DownloadStringAsync(new Uri(downloadedURL), directoryName);
webClient.DownloadProgressChanged +=
new DownloadProgressChangedEventHandler(ProgressChanged);
webClient.DownloadStringCompleted +=
new DownloadStringCompletedEventHandler(Completed);
```
Is this the correct way of downloading files? And how do I then view the downloaded file and display it in a grid?
| 1) You should not use `Directory.CreateDirectory` on Windows Phone. Instead, since you are operating on Isolated Storage, you need to use:
```
var file = IsolatedStorageFile.GetUserStoreForApplication();
file.CreateDirectory("myDirectory");
```
2) Downloading files can be done through WebClient this way:
```
WebClient client = new WebClient();
client.OpenReadCompleted += new OpenReadCompletedEventHandler(client_OpenReadCompleted);
client.OpenReadAsync(new Uri("your_URL"));
void client_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
var file = IsolatedStorageFile.GetUserStoreForApplication();
using (IsolatedStorageFileStream stream = new IsolatedStorageFileStream("file.epub", System.IO.FileMode.Create, file))
{
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = e.Result.Read(buffer, 0, buffer.Length)) > 0)
        {
            stream.Write(buffer, 0, bytesRead); // write only the bytes actually read
        }
}
}
```
Creating a directory directly in this case is optional. If you need to save the file in a nested folder structure, you might as well set the file path to something like **/Folder/NewFolder/file.epub**.
3) To enumerate files in the Isolated Storage, you could use:
```
var file = IsolatedStorageFile.GetUserStoreForApplication();
file.GetFileNames();
```
That's if the files are located in the root of the IsoStore. If those are located inside a directory, you will have to set a search pattern and pass it to `GetFileNames` - including the folder name and file type. For every single file, you could use this pattern:
```
DIRECTORY_NAME\*.*
```
|
What is the difference in angularJS between ui.router and ui.state?
I am working on getting an angularJS SPA setup with multiple views using [angular ui-router](https://github.com/angular-ui/ui-router).
As I look around the web at tutorials and how-tos, I see a mixed bag of dependencies. The ui-router GitHub page has examples that use `ui.router` as the module dependency, while other articles such as Ben Schwartz's [tutorial](http://txt.fliglio.com/2013/05/angularjs-state-management-with-ui-router/) use `ui.state`.
What is the difference? Is one deprecated? Is `ui.state` a subset of `ui.router`?
| In summary, `ui.state` was for v0.0.1, while `ui.router` is for v0.2.0 (the current version).
`ui.state` was the necessary module for users to inject as a dependency in v0.0.1 of ui-router. See the [README](https://github.com/angular-ui/ui-router/blob/9ea2bb4c2b40009be76edd12c073c45db3db322e/README.md) at that release, as well as the relevant snippet from [angular-ui-router.js](https://github.com/angular-ui/ui-router/blob/9ea2bb4c2b40009be76edd12c073c45db3db322e/release/angular-ui-router.js) (lines 45-48):
```
angular.module('ui.util', ['ng']);
angular.module('ui.router', ['ui.util']);
angular.module('ui.state', ['ui.router', 'ui.util']);
angular.module('ui.compat', ['ui.state']);
```
The [README](https://github.com/angular-ui/ui-router/blob/818b0d69d2063064ca6d2e3b05252200439862d3/README.md) at v0.2.0 states under Quick Start: `Set ui.router as a dependency in your module. Note: Use ui.state if using v0.0.1.`
This is of course corroborated by [angular-ui-router.js](https://github.com/angular-ui/ui-router/blob/master/release/angular-ui-router.js) at v0.2.0, lines 79-83, showing the corresponding module dependency structure at that point:
```
angular.module('ui.router.util', ['ng']);
angular.module('ui.router.router', ['ui.router.util']);
angular.module('ui.router.state', ['ui.router.router', 'ui.router.util']);
angular.module('ui.router', ['ui.router.state']);
angular.module('ui.router.compat', ['ui.router']);
```
|
how to get entire object using navParams - Ionic 3 Angular 4
Currently I am passing a data object from one page to another page (in a modal).
I have 8 parameters to pass to the modal view.
```
this.firstParam = navParams.get("firstPassed");
this.secondParam = navParams.get("secondPassed");
.
.
.
this.eightParam = navParams.get("eightPassed");
```
How can I get the entire data object with one call?
```
this.data = navParams.getAll(); //something like this
```
I am unable to find a method in the documentation to get the entire object.
| You don't need to do it like that. You can send everything at once, as shown below.
**Note:** Declare a data transfer object like `DtoMy`
*Dto-my.ts*
```
export class DtoMy {
firstPassed: string;
secondPassed: string;
//your other properties
}
```
**send**
```
let dtoMy = new DtoMy();
dtoMy.firstPassed= 'firstPassed';
dtoMy.secondPassed= 'secondPassed';
//your other values
const myModal = this.modalCtrl.create('MyModalPage', { data: dtoMy });
myModal.onDidDismiss(data => { });
myModal.present();
```
**Receive:**
*my-modal-page.ts*
```
data :DtoMy;
constructor(private navParams: NavParams, private modalCtrl: ModalController) {
this.data = this.navParams.get('data');
}
```
|
How can I use REPL with CPS function?
I've just encountered `withSession :: (Session -> IO a) -> IO a` from the `wreq` package. I want to evaluate the continuation line by line, but I can't find any way to do this.
```
import Network.Wreq.Session as S
withSession $ \sess -> do
res <- S.getWith opts sess "http://stackoverflow.com/questions"
-- print res
-- .. other things
```
In above snippet how can I evaluate `print res` in ghci? In other words, can I get `Session` type in ghci?
| Wonderful question.
I am aware of no method for re-entering the GHCi REPL so that we could use it inside CPS functions. Perhaps others can suggest some way.
However, I can suggest an hack. Basically, one can exploit concurrency to turn CPS inside out, if it is based on the IO monad as in this case.
Here's the hack: use this in a GHCi session
```
> sess <- newEmptyMVar :: IO (MVar Session)
> stop <- newEmptyMVar :: IO (MVar ())
> forkIO $ withSession $ \s -> putMVar sess s >> takeMVar stop
> s <- takeMVar sess
> -- use s here as if you were inside withSession
> let s = () -- recommended
> putMVar stop ()
> -- we are now "outside" withSession, don't try to access s here!
```
A small library to automatize the hack:
```
data CPSControl b = CPSControl (MVar ()) (MVar b)
startDebugCps :: ((a -> IO ()) -> IO b) -> IO (a, CPSControl b)
startDebugCps cps = do
cpsVal <- newEmptyMVar
retVal <- newEmptyMVar
stop <- newEmptyMVar
_ <- forkIO $ do
x <- cps $ \c -> putMVar cpsVal c >> takeMVar stop
putMVar retVal x
s <- takeMVar cpsVal
return (s, CPSControl stop retVal)
stopDebugCps :: CPSControl b -> IO b
stopDebugCps (CPSControl stop retVal) = do
putMVar stop ()
takeMVar retVal
testCps :: (String -> IO ()) -> IO String
testCps act = do
putStrLn "testCps: begin"
act "here's some string!"
putStrLn "testCps: end"
return "some return value"
```
A quick test:
```
> (x, ctrl) <- startDebugCps testCps
testCps: begin
> x
"here's some string!"
> stopDebugCps ctrl
testCps: end
"some return value"
```
|
How do enable internal Azure Services for SQL Azure in c#
How do I enable **allowed services: WINDOWS AZURE SERVICES** (as seen in the Management Portal) in C#?
```
_client = new SqlManagementClient(GetSubscriptionCredentials());
var result = _client.Servers.CreateAsync(new ServerCreateParameters
{
AdministratorUserName = _config.ServerUserName,
AdministratorPassword = _config.ServerPassword,
Location = _config.Location,
Version = "12.0"
}, CancellationToken.None).Result;
var sqlServerName = result.ServerName;
// This will go away once we can enable the Azure internal firewall settings == Yes
var ipAddress = _firewallManagement.GetPublicIP();
var firewall = _client.FirewallRules.Create(sqlServerName, new FirewallRuleCreateParameters("Server", ipAddress, ipAddress));
```
[![enter image description here](https://i.stack.imgur.com/5RoXd.png)](https://i.stack.imgur.com/5RoXd.png)
| Just add 0.0.0.0 as both start\_ip\_address and end\_ip\_address to sys.firewall\_rules, like the T-SQL below:
```
exec sp_set_firewall_rule N'MicrosoftServices','0.0.0.0','0.0.0.0'
```
Don't mind the 0.0.0.0 range; SQL Azure knows it is only for Azure IPs in your subscription.
```
select * from sys.firewall_rules
id name start_ip_address end_ip_address create_date modify_date
7 MicrosoftService 0.0.0.0 0.0.0.0 2015-07-29 13:34:55.790 2015-07-29 13:34:55.790
```
**Azure SQL Database Firewall**
>
> When an application from Azure attempts to connect to your database
> server, the firewall verifies that Azure connections are allowed. A
> firewall setting with starting and ending address equal to 0.0.0.0
> indicates these connections are allowed.
>
>
>
<https://msdn.microsoft.com/en-us/library/azure/ee621782.aspx#ConnectingFromAzure>
**Adding and Deleting SQL Azure firewall rules programmatically**
<http://www.c-sharpcorner.com/uploadfile/dhananjaycoder/adding-and-deleting-sql-azure-firewall-rules-programmatically/>
```
public void AddFirewallRule(FirewallRule rule)
{
using (SqlConnection conn = new SqlConnection(this.ConnectionString))
using (SqlCommand cmd = conn.CreateCommand())
{
conn.Open();
cmd.CommandText = "sp_set_firewall_rule";
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("@name", SqlDbType.NVarChar).Value = rule.Name;
cmd.Parameters.Add("@start_ip_address", SqlDbType.VarChar).Value = rule.startIPAddress.ToString();
cmd.Parameters.Add("@end_ip_address", SqlDbType.VarChar).Value = rule.endIPAdress.ToString();
cmd.ExecuteNonQuery();
}
}
```
|
Chaining promises with multiple arguments
I have just started using promises in JavaScript and I'm wondering if there is a way to reduce the boilerplate of wrapping every `then` callback in a function. I find myself doing this a lot and wonder if there is a better way.
For example in my `getSettings` function below it either returns a settings model from the database (`then`) or a default settings model (`catch`).
```
var getSettings = function(instance, id, mode) {
return findByInstance(instance)
.then(function(data) {
return useModelFromData(data, id, mode);
})
.catch(function() {
return useDefaultModel(id, mode)
});
};
var useModelFromData = function(data, id, mode) {
var settings = data.model.settings;
applyFeatures(settings, id, mode);
return data.model;
};
var useDefaultModel = function(id, mode) {
var model = myModel.getDefaultModel();
applyFeatures(model.settings, id, mode);
return model;
};
```
I know that if the then callback takes a single argument and returns a value you can do the following shortcut:
```
get('some_url').then(JSON.parse).then(function(response) {
console.log("response", response);
});
```
Is there any shortcut I can do for multiple arguments?
| Use [bind](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) and move the argument list around:
```
var getSettings = function(instance, id, mode) {
return findByInstance(instance)
.then(useModelFromData.bind(null, id, mode))
.catch(useDefaultModel.bind(null, id, mode));
};
var useModelFromData = function(id, mode, data) {
var settings = data.model.settings;
applyFeatures(settings, id, mode);
return data.model;
};
var useDefaultModel = function(id, mode) {
var model = myModel.getDefaultModel();
applyFeatures(model.settings, id, mode);
return model;
};
```
With bind you can set arguments to be prepended to the arguments list when the function is executed. So in this case `id` and `mode` will get prepended to the arguments that get passed to `useModelFromData` and `useDefaultModel`
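To see what `bind` is doing here in isolation, here is a minimal sketch with a hypothetical `useModel` helper (the names and return value are illustrative, not from the question):

```
// Hypothetical helper, not from the question: note the "real" data
// argument comes last, matching the reordered signatures above.
function useModel(id, mode, data) {
  return id + "-" + mode + "-" + data;
}

// bind(null, 7, "edit") fixes the first two arguments; whatever the
// promise machinery later passes in becomes the trailing `data` argument.
var useModelFor7 = useModel.bind(null, 7, "edit");

console.log(useModelFor7("payload")); // "7-edit-payload"
```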
|
How can I write human-language units as postfixes in Haskell, like `3 seconds`?
Ruby has a nice feature that allows you to convert numbers to other things, e.g. `3.times` for iteration or `3.to_s` for converting a number to a string.
People say Haskell is good for writing natural [DSL](http://en.wikipedia.org/wiki/Domain-specific_language)s.
Is it possible to write units as postfixes, e.g. `timeout = 3 seconds`?
| **Yes.**
You can do this with the following simple trick:
```
{-# LANGUAGE FlexibleInstances #-}
instance Num (Integer -> Integer) where
fromInteger n = \scale -> n * scale -- return a function that takes
-- a number and returns a number
```
Then you can write:
```
seconds, minutes, hours, days :: Integer
seconds = 1000000 -- base unit, e.g. microseconds
minutes = 60 seconds
hours = 60 minutes
days = 24 hours
soon :: Integer
soon = 2 hours + 4 seconds
```
---
**How does this work?**
Above we have given a `Num` instance for `Integer -> Integer`, that is for a *function that takes an integer and returns an integer*.
Every type that implements `Num` and has its function `fromInteger` defined is allowed to be represented by a numeric literal, e.g. `3`.
This means that we can write `3 :: Integer -> Integer` - here `3` is a function that takes an integer and returns an integer!
Therefore, we can apply it to an integer, for example `seconds`; we can write `3 seconds` and the expression will be of type `Integer`.
---
**A more type-safe version**
In fact, we could even write `3 (3 :: Integer)` now - this probably doesn't make much sense though. We can restrict this by making it more type-safe:
```
newtype TimeUnit = TimeUnit Integer
deriving (Eq, Show, Num)
instance Num (TimeUnit -> TimeUnit) where
fromInteger n = \(TimeUnit scale) -> TimeUnit (n * scale)
seconds, minutes, hours, days :: TimeUnit
seconds = TimeUnit 1000000
minutes = 60 seconds
hours = 60 minutes
days = 24 hours
```
Now we can only apply things of type `TimeUnit` to number literals.
You could do that for all kinds of other units, such as weights or distances or people.
|
Problem installing Python-Dev
I am having trouble installing `python-dev`. It all started when I tried to install another Python package and got the error:
```
SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev.
```
I tried `sudo apt-get install python-dev` but got the error:
```
The following packages have unmet dependencies:
python-dev : Depends: python2.7-dev (>= 2.7.3) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
```
So then I tried `sudo apt-get install python2.7-dev` and got the error:
```
The following packages have unmet dependencies:
python2.7-dev : Depends: python2.7 (= 2.7.3-0ubuntu3) but 2.7.3-0ubuntu3.1 is to be installed
Depends: libpython2.7 (= 2.7.3-0ubuntu3) but 2.7.3-0ubuntu3.1 is to be installed
```
I have tried most everything in the post [unmet dependencies](https://askubuntu.com/questions/140246/how-do-i-resolve-unmet-dependencies). I am running Ubuntu 12.04 and I have everything updated. I have done `apt-get clean` and `apt-get autoclean`. I have tried `apt-get -f install` and all variations on that theme. I have cleaned up my PPA. I even tried using Aptitude, and though it did a lot of clean up, the result was the same.
I really want to be able to install python-dev. How can I make this happen? At this point, I am willing to consider extreme options, whatever they may be.
| This bit:
```
python2.7-dev : Depends: python2.7 (= 2.7.3-0ubuntu3) but 2.7.3-0ubuntu3.1 is to be installed
```
suggests that you are using some mismatched repositories, or have some apt-pins in place keeping the version dependencies from lining up. I think, specifically, `python-2.7 2.7.3-0ubuntu3.1` is in the `Precise-proposed` repository and the `2.7.3-0ubuntu3` version is in Precise/main proper, so you may be preferring -proposed for some but not all packages.
Can you edit your question to include the output of:
```
apt-cache policy python2.7-dev
apt-cache policy python2.7
```
and maybe:
```
apt-cache show python2.7
```
...
Reading the apt-cache output from your pastebin, it looks like you have the python2.7 `2.7.3-0ubuntu3.1` from `precise-updates/main` installed, but `python2.7-dev` is from `precise/main`. I think your install media or an earlier "apt-get update" included the `precise-updates` repository, but it's not in your current sources.list.
I think you'll be able to get the install going after adding `precise-updates` and then `apt-get update`.
```
echo "deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted" | sudo tee -a /etc/apt/sources.list.d/precise-updates.list
sudo apt-get update
sudo apt-get install python2.7-dev
```
|
Is this a bug in NSolve in mathematica?
One would expect and hope that if you ask `Mathematica` to find the roots of a polynomial, it should give the same (approximate) answers whether you do this symbolically, then find numerical approximations to these exact answers, or whether you do it numerically. Here's an example (in `Mathematica 7`, running on OS X) where this fails badly:
```
poly = -112 + 1/q^28 + 1/q^26 - 1/q^24 - 6/q^22 - 14/q^20 - 25/q^18 -
38/q^16 - 52/q^14 - 67/q^12 - 81/q^10 - 93/q^8 - 102/q^6 - 108/
q^4 - 111/q^2 - 111 q^2 - 108 q^4 - 102 q^6 - 93 q^8 - 81 q^10 -
67 q^12 - 52 q^14 - 38 q^16 - 25 q^18 - 14 q^20 - 6 q^22 - q^24 +
q^26 + q^28;
Total[q^4 /. NSolve[poly == 0, q]] - Total[q^4 /. N[Solve[poly == 0, q]]]
```
(Note: this is actually a Laurent polynomial, and if you multiply through by a large power of `q` the problem goes away.)
The last line here is just a demonstration that the solutions found are very different; in fact it's the quantity we were trying to compute in the problem we were working on.
If you look closely at the output of `NSolve[poly == 0, q]` and of `N[Solve[poly == 0, q]]`, you'll see that NSolve only gives `54` roots instead of the expected `56`. It's not that it just missed a repeated root or anything; it's missing the two largest roots in magnitude (approximately `+/- 1.59`).
>
> Is this a bug in Mathematica? Does anyone have an explanation for why this is happening?
>
>
>
| Unfortunately, your expectations are unfounded.
As you say, `Solve[]` gives an exact solution and `N[]` introduces a small error, but only once, when you evaluate it. `NSolve[]` on the other hand uses numerical approximations from the get go and thus significant rounding error can accumulate.
You are also limited by the default precision for the calculation and this can lead to complete failure of the numerical method, e.g., missing roots (see [Wilkinson's polynomial](http://en.wikipedia.org/wiki/Wilkinson%27s_polynomial)). You can counteract this by telling `NSolve[]` to use greater precision like so:
```
In[1] := Total[q^4 /. NSolve[poly == 0, q, WorkingPrecision -> 50]] -
Total[q^4 /. N[Solve[poly == 0, q]]]
Out[1] := 0. - 3.66374*10^-15 I
```
When using numerical methods it is always important to keep the errors in mind. Since this is true for a large variety of [numerical analysis](http://en.wikipedia.org/wiki/Numerical_analysis) problems, from solving long polynomials to diagonalizing large matrices to integrating ill-behaved functions, there is no one correct approach, and Mathematica needs to be told to, e.g., raise the `WorkingPrecision`, or to apply a different numerical technique.
|
how to set a property of an object inside a NSArray
I have written a class which has a property. I want to add instances of this class to a mutable array and then set the `num` property of those instances, but I don't know the correct syntax. Please tell me the correct syntax or method to do this.
```
@interface ClassA : NSObject
@property int num;
-(void) method;
+(id) getClassAObj:(int)number;
@end
--------------------------------
#import "ClassA.h"
@implementation ClassA
@synthesize num;
-(void) method{
NSLog(@"ClassA method Called");
}
+(id) getClassAObj:(int)number {
ClassA *obj = [[ClassA alloc] init];
obj.num = number;
return obj;
}
@end
-----------------------
```
Now in `main` I want to set the `num` property manually, but I don't know the correct syntax:
```
int main(int argc, const char * argv[])
{
@autoreleasepool {
// insert code here...
NSLog(@"Hello, World!");
NSMutableArray *array = [NSMutableArray arrayWithObjects:
[ClassA getClassAObj:9],
[ClassA getClassAObj:4], nil];
NSLog(@"%i",[[array objectAtIndex:0] num]);
NSLog(@"%i",[array[1] num]);
//array[1].num = 3; <--- Help needed here
//[[array[1]]num 3]; <--- this is also not correct syntax
NSLog(@"%i",[array[1] num]);
}
return 0;
}
```
I don't know if I am wrong about the syntax, or maybe it is not possible to set a property of an object inside an array.
| Here you go:
```
ClassA *anObject = array[<index>];
[anObject setNum:<yourNumber>];
```
You can also use the `[<array> objectAtIndex:<index>]` method, but the above is the newer way to do it.
Good Luck!
EDIT:
If you want to validate the object's class first (so that a wrong class will not crash the app; it will simply do nothing), your code can look like this:
```
id anObject = array[<index>];
if ([anObject isKindOfClass:NSClassFromString(@"ClassA")]) {
[(ClassA *)anObject setNum:<number>];
}
```
If you want to validate the exact message (the setter in this case), you should check if the object responds to this selector. This is useful when you have objects of different classes, but they all respond to the message. It would look like this:
```
id anObject = array[<index>];
if ([anObject respondsToSelector:@selector(setNum:)]) {
[anObject setNum:<number>];
}
```
|
Selecting text in a div programmatically using position values belong to that text
I have a div with some text in it. I want part of that text to be selected programmatically, according to the position values of its characters.
```
<div id="textdiv">
Hello world. I am a friend.
</div>
```
I want the "llo world" part to be selected (*I mean highlighted/selected, like selecting content in an input/textarea*). In this case the position values are first (3) and last (10).
How can I do that programmatically using those position values?
| Here is a simple way to do this:
```
function selectTextRange(obj, start, stop) {
var endNode, startNode = endNode = obj.firstChild
startNode.nodeValue = startNode.nodeValue.trim();
var range = document.createRange();
range.setStart(startNode, start);
range.setEnd(endNode, stop + 1);
var sel = window.getSelection();
sel.removeAllRanges();
sel.addRange(range);
}
selectTextRange(document.getElementById('textdiv'), 3, 10);
```
```
<div id="textdiv">
Hello world. I am a friend.
</div>
```
---
Text highlight:
```
function highlightRange(el, start, end) {
var text = el.textContent.trim()
el.innerHTML = text.substring(0, start) +
'<span style="background:yellow">' +
text.substring(start, end) +
"</span>" + text.substring(end);
}
highlightRange(document.getElementById("textdiv"), 3, 10)
```
```
<div id="textdiv">
Hello world. I am a friend.
</div>
```
|
Exponentials in python: x\*\*y vs math.pow(x, y)
| Which is more efficient: `math.pow` or the `**` operator? When should I use one over the other?
So far I know that `x**y` can return an `int`, or a `float` if you use a decimal, while the function `math.pow` will always return a `float`:
```
import math
print( math.pow(10, 2) )
print( 10. ** 2 )
```
| Using the power operator `**` will be faster as it won’t have the overhead of a function call. You can see this if you disassemble the Python code:
```
>>> dis.dis('7. ** i')
1 0 LOAD_CONST 0 (7.0)
3 LOAD_NAME 0 (i)
6 BINARY_POWER
7 RETURN_VALUE
>>> dis.dis('pow(7., i)')
1 0 LOAD_NAME 0 (pow)
3 LOAD_CONST 0 (7.0)
6 LOAD_NAME 1 (i)
9 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
12 RETURN_VALUE
>>> dis.dis('math.pow(7, i)')
1 0 LOAD_NAME 0 (math)
3 LOAD_ATTR 1 (pow)
6 LOAD_CONST 0 (7)
9 LOAD_NAME 2 (i)
12 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
15 RETURN_VALUE
```
Note that I’m using a variable `i` as the exponent here because constant expressions like `7. ** 5` are actually evaluated at compile time.
Now, in practice, this difference does not matter that much, as you can see when timing it:
```
>>> from timeit import timeit
>>> timeit('7. ** i', setup='i = 5')
0.2894785532627111
>>> timeit('pow(7., i)', setup='i = 5')
0.41218495570683444
>>> timeit('math.pow(7, i)', setup='import math; i = 5')
0.5655053168791255
```
So, while `pow` and `math.pow` are about twice as slow, they are still fast enough to not care much. Unless you can actually identify the exponentiation as a bottleneck, there won’t be a reason to choose one method over the other if clarity decreases. This especially applies since [`pow`](http://docs.python.org/3/library/functions.html#pow) offers an integrated modulo operation for example.
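For instance, the three-argument form of the built-in `pow` computes a modular exponentiation in one call, which `**` alone cannot express in a single step (a quick sketch):

```
# The three-argument form of the built-in pow computes (base ** exp) % mod
# in one call, using modular exponentiation internally.
print(pow(2, 10, 1000))      # 24, i.e. 1024 % 1000

# The two-argument form agrees with the ** operator.
print(pow(7, 5) == 7 ** 5)   # True
```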
---
Alfe asked a good question in the comments above:
>
> `timeit` shows that `math.pow` is slower than `**` in all cases. What is `math.pow()` good for anyway? Has anybody an idea where it can be of any advantage then?
>
>
>
The big difference of `math.pow` to both the builtin `pow` and the power operator `**` is that it *always* uses float semantics. So if you, for some reason, want to make sure you get a float as a result back, then `math.pow` will ensure this property.
Let’s think of an example: We have two numbers, `i` and `j`, and have no idea if they are floats or integers. But we want to have a float result of `i^j`. So what options do we have?
- We can convert at least one of the arguments to a float and then do `i ** j`.
- We can do `i ** j` and convert the result to a float (float exponentation is automatically used when either `i` or `j` are floats, so the result is the same).
- We can use `math.pow`.
So, let’s test this:
```
>>> timeit('float(i) ** j', setup='i, j = 7, 5')
0.7610865891750791
>>> timeit('i ** float(j)', setup='i, j = 7, 5')
0.7930400942188385
>>> timeit('float(i ** j)', setup='i, j = 7, 5')
0.8946636625872202
>>> timeit('math.pow(i, j)', setup='import math; i, j = 7, 5')
0.5699394063529439
```
As you can see, `math.pow` is actually faster! And if you think about it, the overhead from the function call is also gone now, because in all the other alternatives we have to call `float()`.
---
In addition, it might be worth to note that the behavior of `**` and `pow` can be overridden by implementing the special `__pow__` (and `__rpow__`) method for custom types. So if you don’t want that (for whatever reason), using `math.pow` won’t do that.
|
Update textView from thread
In my `OnCreate` method I have created a thread that listens for incoming messages.
```
In OnCreate() {
//Some code
myThread = new Thread() {
@Override
public void run() {
receiveMyMessages();
}
};
myThread.start();
// Some code related to sending out by pressing button etc.
}
// Then, the receiveMyMessage() function:
public void receiveMyMessage()
{
//Receive the message and put it in String str;
str = receivedAllTheMessage();
// << here I want to be able to update this str to a textView. But, How?
}
```
I checked [this article](http://www.helloandroid.com/tutorials/using-threads-and-progressdialog) but it did not work for me, no luck!
| Any updates to the UI in an Android application must happen in the UI thread. If you spawn a thread to do work in the background you must marshal the results back to the UI thread before you touch a View. You can use the `Handler` class to perform the marshaling:
```
public class TestActivity extends Activity {
// Handler gets created on the UI-thread
private Handler mHandler = new Handler();
// This gets executed in a non-UI thread:
public void receiveMyMessage() {
final String str = receivedAllTheMessage();
mHandler.post(new Runnable() {
@Override
public void run() {
// This gets executed on the UI thread so it can safely modify Views
mTextView.setText(str);
}
});
    }
}
```
The `AsyncTask` class simplifies a lot of the details for you and is also something you could look into. For example, I believe it provides you with a thread pool to help mitigate some of the cost associated with spawning a new thread each time you want to do background work.
|
Multiple variable assignment in Swift
How do I assign multiple variables in one line using Swift?
```
var blah = 0
var blah2 = 2
blah = blah2 = 3 // Doesn't work???
```
| **You don't.**
This is a language feature to prevent the standard unwanted side-effect of assignment returning a value, as [described in the Swift book](https://developer.apple.com/library/ios/documentation/swift/conceptual/Swift_Programming_Language/BasicOperators.html):
>
> Unlike the assignment operator in C and Objective-C, the assignment operator in Swift does not itself return a value. The following statement is not valid:
>
>
>
> ```
> if x = y {
> // this is not valid, because x = y does not return a value
> }
>
> ```
>
> This feature prevents the assignment operator (`=`) from being used by accident when the equal to operator (`==`) is actually intended. By making if `x = y` invalid, Swift helps you to avoid these kinds of errors in your code.
>
>
>
So, this helps prevent this extremely common error. While this kind of mistake can be mitigated for in other languages—for example, by using [Yoda conditions](https://en.wikipedia.org/wiki/Yoda_conditions)—the Swift designers apparently decided that it was better to make certain at the language level that you couldn't shoot yourself in the foot. But it does mean that you can't use:
```
blah = blah2 = 3
```
If you're desperate to do the assignment on one line, you could use tuple syntax, but you'd still have to specifically assign each value:
```
(blah, blah2) = (3, 3)
```
...and I wouldn't recommend it. While it may feel inconvenient at first, just typing the whole thing out is the best way to go, in my opinion:
```
blah = 3
blah2 = 3
```
|
Legend and Bar Chart Colors do not match
I have the following implementation, and I used `colorField` to assign colors. However, even though I color the bar stack objects properly, the colors do not match the legend colors. I thought they should be bound somehow; how should I fix this?
I have used `colorField` because I want to assign the same color to paired objects in the bar stack column.
dataSample:
```
data[0] = {
"value": 29,
"series": 1,
"category": "Men",
"fname": "NY",
"valueColor": "black"
},
```
![enter image description here](https://i.stack.imgur.com/5itV6.png)
<http://jsfiddle.net/fm79hsms/13/>
| Here is a solution, although it feels kind of hacky.
[js fiddle](http://jsfiddle.net/1ost124j/)
I used `legend.item.visual` to redraw the legend and pulled in the `valueColor` from the data, which was nicely passed along to the `legend.item.visual` function.
```
legend: {
item: {
visual: function (e) {
var color = ""
for (var i=0;i<e.series.data.length;i++){
if (e.series.data[i].valueColor != "" && e.series.data[i].fname != "") {
color = e.series.data[i].valueColor
}
}
var rect = new kendo.geometry.Rect([0, 0], [100, 50]);
var layout = new kendo.drawing.Layout(rect, {
spacing: 5,
alignItems: "center"
});
var marker = new kendo.drawing.Path({
fill: {
color: color
}
}).moveTo(10, 0).lineTo(10, 10).lineTo(0, 10).lineTo(0,0).close();
var label = new kendo.drawing.Text(e.series.name, [0, 0], {
fill: {
color: "black"
}
});
layout.append(marker, label);
layout.reflow()
return layout;
}
}
},
```
|
what is dereference in the context of git show-ref -d
Here is the man page for [git show-ref -d](http://git-scm.com/docs/git-show-ref). They also have an example at the bottom. Still, I am not able to understand what dereferencing does.
| In Git, a "normal" (annotated, not lightweight) [tag is an object unto itself](https://git-scm.com/book/sv/v2/Git-Internals-Git-References#_tags), containing metadata and the SHA1 of the object it tags. Chapter [10.2 Git Internals - Git Objects](https://git-scm.com/book/en/v2/Git-Internals-Git-Objects) in the Git community book has an illustration of the object model:
[![enter image description here](https://i.stack.imgur.com/C8F2H.png)](https://i.stack.imgur.com/C8F2H.png)
Legend: yellow - commit object, blue/green - tree object, white - blob object
So, when you use `git show-ref` on a normal tag, it will normally give you information about the [tag object](https://git-scm.com/book/sv/v2/Git-Internals-Git-References#_tags). With the `-d/--dereference` option, it will dereference the tag into the object the tag refers to, and provide information about it instead.
And a note on lightweight vs. annotated tags, in case you aren't aware of that: a lightweight tag is created by using `git tag <tag name>` (i.e. without any of the metadata-providing options like `-a`, `-s`, or `-u`). It's not a tag object at all, just a [Git reference](https://git-scm.com/book/sv/v2/Git-Internals-Git-References) pointing straight [to the object you've tagged](https://stackoverflow.com/questions/68614449/what-is-tagging-an-object-which-isnt-a-commit-in-git). If you provide one of those options, you're attaching metadata to the tag, so Git creates a tag object to hold that.
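If you want to see the dereferencing concretely, here is a sketch you can run in a throwaway repository (it assumes `git` is on your `PATH`; the hashes will of course differ on your machine):

```
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git tag lightweight                              # plain ref, no tag object
git -c user.name=demo -c user.email=demo@example.com \
    tag -a -m "release" annotated                # creates a tag object
# With -d, the annotated tag appears twice: once as the tag object itself,
# and once more, suffixed with ^{}, dereferenced to the commit it tags.
git show-ref -d --tags
```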
|
What exactly is a category?
I am reading [Category Theory for Programmers](https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/), and I cannot figure out what exactly a category is.
Let's consider the `Bool` type. Is `Bool` a category and `True` or `False` are objects (inhabitants)?
| One reason you're getting a lot of potentially confusing answers is that your question is a little like asking: "Let's consider a soccer ball. Is the soccer ball a 'game' with the black and white polygons its 'pieces'?"
The answer might be @arrowd's answer: "No, you've confused the game of soccer (`Hask`) with its ball (`Bool`), and the polygons (`True` and `False`) don't matter." Or, it might be @freestyle's answer: "Yes, we could certainly create a game using the soccer ball and assign one player to the black polygons and the other player the white polygons, but what would the rules be?" Or, it might be @Yuval Itzchakov's answer: "Formally, a 'game' is a collection of one or more players, zero or more pieces, and a set of rules such that, etc., etc."
So, let me add to the confusion with one more (very long) answer, but maybe it will answer your question a little more directly.
## Yes, but it's a boring category
Instead of talking about the Haskell type `Bool`, let's just talk about the abstract concept of boolean logic and the boolean values true and false. Can we form a category with the abstract values "true" and "false" as the objects?
The answer is definitely yes. In fact, we can form (infinitely) many such categories. All we need to do is explain what the "objects" are, what the "arrows" (sometimes called morphisms) are, and make sure that they obey the formal mathematical rules for categories.
Here is one category: Let the "objects" be "true" and "false", and let there be two "arrows":
```
true -> true
false -> false
```
**Note:** Don't confuse this `->` notation with Haskell functions. These arrows don't "mean" anything yet, they are just abstract connections between objects.
Now, I know this is a category because it includes both identity arrows (from an object to itself), and it satisfies the composition property which basically says that if I can follow two arrows from `a -> b -> c` , then there must be a direct arrow `a -> c` representing their "composition". (**Again**, when I write `a -> b -> c`, I'm **not** talking about a function type here -- these are abstract arrows connecting `a` to `b` and then `b` to `c`.) Anyway, I don't have enough arrows to worry too much about composition for this category because I don't have any "paths" between different objects. I will call this the "Discrete Boolean" category. I agree that it is mostly useless, just like a game based on the polygons of a soccer ball would be pretty stupid.
## Yes, but it has nothing to do with boolean values
Here's a slightly more interesting category. Let the objects be "true" and "false", and let the arrows be the two identity arrows above plus:
```
false -> true
```
This is a category, too. It has all the identity arrows, and it satisfies composition because, ignoring the identity arrows, the only interesting "path" I can follow is from "false" to "true", and there's nowhere else to go, so I still don't really have enough arrows to worry about the composition rule.
There are a couple more categories you could write down here. See if you can find one.
Unfortunately, these last two categories don't really have anything to do with the properties of boolean logic. It's true that `false -> true` looks a little like a `not` operation, but then how could we explain `false -> false` or `true -> true`, and why isn't `true -> false` there, too?
Ultimately, we could just as easily have called these objects "foo" and "bar" or "A" and "B" or not even bothered to name them, and the categories would be just as valid. So, while *technically* these are categories with "true" and "false" as objects, they don't capture anything interesting about boolean logic.
## Quick aside: multiple arrows
Something I haven't mentioned yet is that categories can contain multiple, distinct arrows between two objects, so there could be two arrows from `a` to `b`. To differentiate them, I might give them names, like:
```
u : a -> b
v : a -> b
```
I could even have an arrow *separate* from the identity from `b` to itself:
```
w : b -> b -- some non-identity arrow
```
The composition rule would have to be satisfied by all the different paths. So, because there's a path `u : a -> b` and a path `w : b -> b` (even though it doesn't "go" anywhere else), there would have to be an arrow representing the composition of `u` followed by `w` from `a -> b`. Its value *might* be equal to "u" again, or it *might* be "v", or it *might* be some other arrow from `a -> b`. Part of describing a category is explaining how all the arrows compose and demonstrating that they obey the category laws (the unit law and the associative law, but let's not worry about those laws here).
Armed with this knowledge, you can create an infinite number of boolean categories just by adding more arrows wherever you want and inventing any rules you'd like about how they should compose, subject to the category laws.
## Sort of, if you use more complicated objects
Here's a more interesting category that captures some of the "meaning" of boolean logic. It's kind of complicated to explain, so bear with me.
Let the objects be boolean expressions with zero or more boolean variables:
```
true
false
not x
x and y
(not (y or false)) and x
```
We'll consider expressions that are "always the same" to be the same object, so `y or false` and `y` are the same object, since no matter what the value of `y` is, they have the same boolean value. That means that the last expression above could have been written `(not y) and x` instead.
Let the arrows represent the act of setting zero or more boolean variables to specific values. We'll label these arrows with little annotations, so that the arrow `{x=false,y=true}` represents the act of setting two variables as indicated. We'll assume that the settings are applied in order, so the arrow `{x=false,x=true}` would have the same effect on an expression as `{x=false}`, even though they're different arrows. That means that we have arrows like:
```
{x=false} : not x -> true
{x=true,y=true} : x and y -> true
```
We also have:
```
{x=false} : x and y -> false and y -- or just "false", the same thing
```
Technically, the two arrows labelled `{x=false}` are different arrows. (They can't be the same arrow because they're arrows between different objects.) It's very common in category theory to use the same name for different arrows like this if they have the same "meaning" or "interpretation", like these ones do.
We'll define composition of arrows to be the act of applying the sequence of settings in the first arrow and then applying the settings from the second arrow, so the composition of:
```
{x=false} : x or y -> y
{y=true}  : y -> true
```
is the arrow:
```
{x=false,y=true}: x or y -> true
```
This is a category. It has identity arrows for every expression, consisting of not setting any variables:
```
{} : true -> true
{} : not (x or y) and (u or v) -> not (x or y) and (u or v)
```
It defines composition for every pair of arrows, and the compositions obey the unit and associative laws (again, let's not worry about that detail here).
And, it represents a particular aspect of boolean logic, specifically the act of calculating the value of a boolean expression by substituting boolean values into variables.
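The "apply the settings in order" rule can be sketched informally in Python (the helper names here are my own invention), treating each arrow label as an ordered list of variable assignments:

```python
# A substitution (arrow label) is an ordered list of (variable, value) settings.
def compose(first, second):
    # Composition applies the first arrow's settings, then the second's.
    return first + second

def apply_subst(subst, env):
    # Later settings win, so {x=false,x=true} acts on expressions like {x=true}.
    result = dict(env)
    for var, value in subst:
        result[var] = value
    return result

# {x=false} : "x or y" -> "y", composed with {y=true} : "y" -> "true"
composed = compose([("x", False)], [("y", True)])
env = apply_subst(composed, {})
print(env["x"] or env["y"])  # True -- the composed arrow lands on "true"
```

The identity arrow `{}` is just the empty list, and `compose` is plainly associative, which is why the category laws hold.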
## Hey, look! A Functor!
It also has a somewhat interesting functor which we might call "Negate". I won't explain what a functor is here. I'll just say that Negate maps this category to itself by:
- taking each object (boolean expression) to its logical negation
- taking each arrow to a new arrow representing the same variable substitutions
So, the arrow:
```
{a=false} : (not a) and b -> b
```
gets mapped by the Negate functor to:
```
{a=false} : not ((not a) and b) -> not b
```
or more simply, using the rules of boolean logic:
```
{a=false} : a or (not b) -> not b
```
which is a valid arrow in the original category.
This functor captures the idea that "negating a boolean expression" is equivalent to "negating its final result", or maybe more generally that the process of substituting variables in a negated expression has the same structure as doing it to the original expression. Maybe that's not too exciting, but this is just a long Stack Overflow answer, not a 500-page textbook on Category Theory, right?
## `Bool` as part of the `Hask` category
Now, let's switch from talking about abstract boolean categories to your specific question, whether the `Bool` Haskell *type* is a category with objects `True` and `False`.
The answers above still apply, to the extent that this Haskell type can be used as a model of boolean logic.
However, when people talk about categories in Haskell, they are usually talking about a specific category `Hask` where:
- the objects are types (like `Bool`, `Int`, etc.)
- the arrows are Haskell functions (like `f :: Int -> Double`). **Finally**, the Haskell syntax and our abstract category syntax coincide -- the Haskell function `f` can be thought of as an arrow from the object `Int` to the object `Double`.
- composition is regular function composition
If we are talking about *this* category, then the answer to your question is: no, in the `Hask` category, `Bool` is one of the objects, and the arrows are Haskell functions like:
```
id :: Bool -> Bool
not :: Bool -> Bool
(==0) :: Int -> Bool
foo :: Bool -> Int
foo b = if b then 10 else 15
```
To make things more complicated, the objects *also* include types of functions, so `Bool -> Bool` is one of the objects. One example of an arrow that uses this object is:
```
and :: Bool -> (Bool -> Bool)
```
which is an arrow from the object `Bool` to the object `Bool -> Bool`.
In this scenario, `True` and `False` aren't part of the category. Haskell values of function types, like `sqrt` or `length`, are part of the category because they're arrows, but `True` and `False` are values of non-function types, so we just leave them out of the definition.
## Category Theory
Note that this last category, like the first categories we looked at, has absolutely nothing to do with boolean logic, even though `Bool` is one of the objects. In fact, in this category, `Bool` and `Int` look about the same -- they're just two types that can have arrows leaving or entering them, and you'd never know that `Bool` was about true and false or that `Int` represented integers, if you were just looking at the `Hask` category.
This is a fundamental aspect of category theory. You use a specific category to study a specific aspect of some system. Whether or not `Bool` is a category or part of a category is sort of a vague question. The better question would be, "is this particular aspect of `Bool` that I'm interested in something that can be represented as a useful category?"
The categories I gave above roughly correspond to these potentially interesting aspects of `Bool`:
- The "Discrete Boolean" category represents `Bool` as a plain mathematical set of two objects, "true" and "false", with no additional interesting features.
- The "false -> true" category represents an ordering of boolean values, `false < true`, where each arrow represents the operator '<='.
- The boolean expression category represents an evaluation model for simple boolean expressions.
- `Hask` represents the composition of functions whose input and output types may be a boolean type or a functional type involving boolean and other types.
|
Powershell partial string comparison
I'm currently stuck on a specific comparison problem. I have two CSV files which contain application names and I need to compare both csvs for matching names. Of course that would be easy if the applications were written the same ways in both csvs, but they're not.
Each csv has two columns but only the first column contains the application names. In csv01 an app is called "Adobe Acrobat Reader DC Continuous MUI" while the same app in csv02 is called "Adobe Acrobat Reader DC v2022.002.20191". By looking at the files, I know both contain "Adobe Acrobat Reader DC". But I'd like to automate the comparison as the csvs contain hundreds of apps.
I initially thought I'd run a nested foreach loop, taking the first product in csv01 and comparing every app in csv02 to that product to see if I have a match. I did that by splitting the application names at each space character and came up with the following code:
```
# Define the first string
$Products01 = Import-CSV 'C:\Temp\ProductsList01.csv' -Delimiter ";"
# Define the second string
$Products02 = Import-CSV 'C:\Temp\ProductList02.csv' -Delimiter ";"
# Flag to track if all parts of string2 are contained within string1
$allPartsMatch = $true
# Create Hashtable for results
$MatchingApps = @{}
# Loop through each part of string2
foreach ($Product in $Products01.Product) {
Write-Host "==============================="
Write-Host "Searching for product: $Product"
Write-Host "==============================="
# Split the product name into parts
$ProductSplit = $Product -split " "
Write-Host "Split $Product into $ProductSplit"
foreach ($Application in $Products02.Column1) {
Write-Host "Getting comparison app: $Application"
# Split the product name into parts
$ApplicationSplit = $Application -split " "
Write-Host "Split comparison App into: $ApplicationSplit"
# Check if the current part is contained within string1
if ($ProductSplit -notcontains $ApplicationSplit) {
# If the current part is not contained within string1, set the flag to false
$allPartsMatch = $false
}
}
# Display a message indicating the result of the comparison
if ($allPartsMatch) {
Write-Host "==============================="
Write-Host "$Application is contained within $Product"
Write-Host "==============================="
$MatchingApps += @{Product01 = $Product; Product02 = $Application}
} else {
#Write-Host "$Application is not contained within $Product"
}
}
```
However, I seem to have a logic error in my thought process as this returns 0 matches. So obviously, the script isn't properly splitting or comparing the split items.
My question is - how do I compare the parts of both app names to see if I have the apps in both csvs? Can I use a specific regex for that or do I need to approach the problem differently?
Cheers,
Fred
I tried comparing both csv files for similar product names. I expected a table of similar product names. I received nothing.
| The basis for "matching" one string to another is that they *share a prefix* - so start by writing a small function that extracts the common prefix of 2 strings, we'll need this later:
```
function Get-CommonPrefix {
param(
[string]$A,
[string]$B
)
# start by assuming the strings share no common prefix
$prefixLength = 0
# the maximum length of the shared prefix will at most be the length of the shortest string
$maxLength = [Math]::Min($A.Length, $B.Length)
for($i = 0; $i -lt $maxLength; $i++){
if($A[$i] -eq $B[$i]){
$prefixLength = $i + 1
}
else {
# we've reached an index with two different characters, common prefix stops here
break
}
}
# return the shared prefix
return $A.Substring(0, $prefixLength)
}
```
Now we can determine the shared prefix between two strings:
```
PS ~> $sharedPrefix = Get-CommonPrefix 'Adobe Acrobat Reader DC Continuous MUI' 'Adobe Acrobat Reader DC v2022.002.20191'
PS ~> Write-Host "The shared prefix is '$sharedPrefix'"
The shared prefix is 'Adobe Acrobat Reader DC '
```
Now we just need to put it to use in your nested loop:
```
# Import the first list
$Products01 = Import-CSV 'C:\Temp\ProductsList01.csv' -Delimiter ";"
# Import the second list
$Products02 = Import-CSV 'C:\Temp\ProductList02.csv' -Delimiter ";"
# now let's find the best match from list 2 for each item in list 1:
foreach($productRow in $Products01) {
# we'll use this to keep track of shared prefixes encountered
$matchDetails = [pscustomobject]@{
Index = -1
Prefix = ''
Product2 = $null
}
for($i = 0; $i -lt $Products02.Count; $i++) {
# for each pair, start by extracting the common prefix and see if we have a "better match" than previously
$commonPrefix = Get-CommonPrefix $productRow.Product $Products02[$i].Product
if($commonPrefix.Length -gt $matchDetails.Prefix.Length){
# we found a better match!
$matchDetails.Index = $i
$matchDetails.Prefix = $commonPrefix
$matchDetails.Product2 = $Products02[$i]
}
}
if($matchDetails.Index -ge 0){
Write-Host "Best match found for '$($productRow.Product)': '$($matchDetails.Product2.Product)' "
# put code that needs to work on both rows here ...
}
}
```
Note: in cases where multiple entries in the second list match a same-length prefix from the first list, the code simply picks the first match.
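For comparison, the same longest-common-prefix matching can be sketched in Python using the standard library's `os.path.commonprefix` (which, despite its name, compares character by character); the app names below are just sample data:

```python
from os.path import commonprefix  # compares character by character, despite the name

def best_match(product, candidates):
    # Pick the candidate sharing the longest prefix with `product`;
    # ties go to the first candidate, like the PowerShell loop above.
    return max(candidates, key=lambda c: len(commonprefix([product, c])))

apps = ["7-Zip 22.01 (x64)", "Adobe Acrobat Reader DC v2022.002.20191"]
print(best_match("Adobe Acrobat Reader DC Continuous MUI", apps))
# Adobe Acrobat Reader DC v2022.002.20191
```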
|
.NET Core JsonDocument.Parse(ReadOnlyMemory, JsonReaderOptions) failed to parse from WebSocket ReceiveAsync
I use .NET Core 3.0's `JsonDocument.Parse(ReadOnlyMemory<Byte>, JsonReaderOptions)` to parse WS message (`byte[]`) to JSON, but it throws an exception as below:
```
'0x00' is invalid after a single JSON value. Expected end of data. LineNumber: 0 | BytePositionInLine: 34.
```
This is my Middleware snippet code:
```
WebSocket ws = await context.WebSockets.AcceptWebSocketAsync();
byte[] bytes = new byte[1024 * 4];
ArraySegment<byte> buffer = new ArraySegment<byte>(bytes);
while (ws.State == WebSocketState.Open)
{
try
{
WebSocketReceiveResult request = await ws.ReceiveAsync(bytes, CancellationToken.None);
switch (request.MessageType)
{
case WebSocketMessageType.Text:
string msg = Encoding.UTF8.GetString(bytes, 0, bytes.Length);
json = new ReadOnlyMemory<byte>(bytes);
JsonDocument jsonDocument = JsonDocument.Parse(json);
break;
default:
break;
}
}
catch (Exception e)
{
Console.WriteLine($"{e.Message}\r\n{e.StackTrace}");
}
}
```
| As mentioned in the comments, you made a few mistakes. One of the biggest is that you allocate the memory yourself (this causes allocations and GC pressure in the long run, something the Memory/Span API is designed to avoid). The second is that you didn't slice your data, even though your payload is smaller than the buffer size.
Some fixes I made to the code:
```
WebSocket ws = await context.WebSockets.AcceptWebSocketAsync();
// Don't do that, it allocates. Beats the main idea of using [ReadOnly]Span/Memory
// byte[] bytes = new byte[1024 * 4];
// We don't need this either, its old API. Websockets support Memory<byte> in an overload
// ArraySegment<byte> buffer = new ArraySegment<byte>(bytes);
// We ask for a buffer from the pool with a size hint of 4kb. This way we avoid small allocations and releases
// P.S. "using" is new syntax for using(disposable) { } which will
// dispose at the end of the method. new in C# 8.0
using IMemoryOwner<byte> memory = MemoryPool<byte>.Shared.Rent(1024 * 4);
while (ws.State == WebSocketState.Open)
{
try
{
ValueWebSocketReceiveResult request = await ws.ReceiveAsync(memory.Memory, CancellationToken.None);
switch (request.MessageType)
{
case WebSocketMessageType.Text:
// we directly work on the rented buffer
string msg = Encoding.UTF8.GetString(memory.Memory.Span);
// here we slice the memory. Keep in mind that this **DOES NOT ALLOCATE** new
// memory, it just slices the existing one. It doesn't allocate because
// Memory<T> is a struct, so it lives on the stack and simply stores the start
// and length of the sliced region of the underlying array
JsonDocument jsonDocument = JsonDocument.Parse(memory.Memory.Slice(0, request.Count));
break;
default:
break;
}
}
catch (Exception e)
{
Console.WriteLine($"{e.Message}\r\n{e.StackTrace}");
}
}
```
You need to slice the buffer so the JSON parser won't read beyond the end of the JSON payload.
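This failure mode isn't specific to .NET: any JSON parser will choke on the padding bytes left behind in an oversized receive buffer. A quick Python illustration of the same situation:

```python
import json

buffer = bytearray(16)            # fixed-size receive buffer, like the 4 KB one above
payload = b'{"ok": true}'
buffer[:len(payload)] = payload   # simulate ReceiveAsync filling only part of it

try:
    json.loads(bytes(buffer))     # hands the NUL padding to the parser -> error
    parsed_ok = True
except json.JSONDecodeError as e:
    parsed_ok = False
    print("unsliced parse failed:", e.msg)

# Slicing to the received byte count (request.Count above) fixes it.
print(json.loads(bytes(buffer[:len(payload)])))  # {'ok': True}
```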
|
How to use node\_modules on a "traditional website"?
I have used node to manage dependencies on React apps and the like, in those you use package.json to keep track of libs and use them in your scripts using ES6 import module syntax.
But now I'm working on a legacy code base that uses a bunch of jQuery plugins (downloaded manually and placed in a "libs" folder) and links them directly in the markup using script tags.
I want to use npm to manage these dependencies. Is my only option:
- run npm init
- install all plugins through npm and have them in package.json
- link to the scripts in the node\_modules folder directly from the markup:
`<script src="./node_modules/lodash/lodash.js"></script>`
or is there a better way?
| Check out this [tutorial](https://medium.com/@andrejsabrickis/modern-approach-of-javascript-bundling-with-webpack-3b7b3e5f4e7) for going from using script tags to bundling with Webpack. You will want to do the following: **(Do steps 1 and 2 as you mentioned in your question, then your step 3 will change to the following 3 steps)**
1. Download webpack with npm: `npm install webpack --save-dev`
2. Create a `webpack.config.js` file specifying your entry file and output file. Your entry file will contain *any custom JS components your app is using.* You will also need to specify to include your node\_modules within your generated Javascript bundle. Your *output file will be the resulting Javascript bundle* that Webpack will create for you and it will contain all the necessary Javascript your app needs to run. A simple example `webpack.config.js` would be the following:
```
const path = require('path');
module.exports = {
entry: './path/to/my/entry/file.js',
output: {
path: path.resolve(__dirname, 'dist'),
filename: 'my-first-webpack.bundle.js'
},
resolve: {
alias: {
'node_modules': path.join(__dirname, 'node_modules'),
}
}
};
```
3. Lastly, add a `<script>` tag within your main HTML page pointing to your newly generated Javascript bundle:
```
<script src="dist/my-first-webpack.bundle.js"></script>
```
Now your web application should work the same as before your refactoring journey.
Cheers
|
How to set the 'where' clause on a field with comma-seperated values?
I have a db table with a field with values stored in the `value1,value2,value3,value4` format.
I want to find all the rows where this field contains a defined value, eg. `value3`.
How can I perform a query to search a value in a field like this one?
| use `FIND_IN_SET()`
```
SELECT *
FROM tableName
WHERE FIND_IN_SET('value3', fieldName) > 0
```
- [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/c3e1b/2)
SOURCE
- [MySQL FIND\_IN\_SET](http://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_find-in-set)
Description from MySQL Docs:
>
> Returns a value in the range of 1 to N if the string str is in the
> string list strlist consisting of N substrings. A string list is a
> string composed of substrings separated by “,” characters. If the
> first argument is a constant string and the second is a column of type
> SET, the FIND\_IN\_SET() function is optimized to use bit arithmetic.
> Returns 0 if str is not in strlist or if strlist is the empty string.
> Returns NULL if either argument is NULL.
>
>
>
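The quoted contract is small enough to mirror in a few lines of Python, which makes the `> 0` check in the query above easy to see (positions are 1-based, so any hit is truthy):

```python
def find_in_set(needle, strlist):
    # Mirrors MySQL's FIND_IN_SET(): 1-based position of `needle` in the
    # comma-separated `strlist`, 0 if absent or if `strlist` is empty.
    if strlist == "":
        return 0
    parts = strlist.split(",")
    return parts.index(needle) + 1 if needle in parts else 0

print(find_in_set("value3", "value1,value2,value3,value4"))  # 3
print(find_in_set("value9", "value1,value2,value3,value4"))  # 0
```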
|
Can I use PBKDF2 to generate an AES256 key to encrypt and implicitly authenticate?
I have 2 devices and I want to set up a secure communication channel between them. The only shared secret is a (7- to 20- character ASCII) passphrase. If I use PBKDF2 (from RFC 2898) with a common salt, iterations, and passphrase to generate an AES256-CBC key and IV on both sides, I think I can authenticate the user and provide an encrypted channel all in one step. Is that true, or is there some reason why I've only seen people use PBKDF2 to verify passwords?
My reasoning is that both sides need to know the passphrase to generate the same key and IV. So if device B can decrypt data from device A, they both have demonstrated that they have the same passphrase.
| PBKDF2 is a fine way to generate a common key from a shared secret (you should not be generating the IV in such a way though - the IV should be random, and sent alongside the ciphertext).
However, CBC is not an authenticating cipher mode. This is because an attacker can take an encrypted message and make predictable modifications to it, without needing to be able to read the message or know the key. Such attacks have broken real world systems in the past.
You can use an authenticating cipher mode, like Galois Counter Mode (GCM) instead of CBC.
An alternative is Encrypt-Then-MAC. Use PBKDF2 with two different salts to generate two different keys - first the data is encrypted using CBC with the first key, and then a HMAC is calculated over the ciphertext using the second key.
You will also need to use single-use-nonces to prevent replay attacks.
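A minimal Python sketch of the two-salt key derivation plus Encrypt-then-MAC layout, using only the standard library (the iteration count is my assumption, and since the stdlib has no AES, the ciphertext below is a stand-in for the real CBC output):

```python
import hashlib
import hmac
import os

passphrase = b"correct horse battery staple"        # the shared 7-20 char secret
enc_salt, mac_salt = os.urandom(16), os.urandom(16)  # both may be sent in the clear

# Two different salts yield two independent 256-bit keys from one passphrase.
enc_key = hashlib.pbkdf2_hmac("sha256", passphrase, enc_salt, 200_000)
mac_key = hashlib.pbkdf2_hmac("sha256", passphrase, mac_salt, 200_000)

iv = os.urandom(16)        # random per message, transmitted alongside the ciphertext
ciphertext = b"<AES-256-CBC output goes here>"  # placeholder; stdlib has no AES

# Encrypt-then-MAC: authenticate the IV and ciphertext with the second key.
tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()

# The receiver derives the same keys, recomputes the tag, and compares in
# constant time before attempting decryption.
ok = hmac.compare_digest(tag, hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest())
print(ok)  # True
```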
|
qDebug() stopped to work (no more printing to console) after upgrading to ubuntu 17.10 (and Fedora as well)
After upgrading from Ubuntu 17.04 to 17.10, the `qDebug()` macro stopped working and no longer displays the messages on the console.
How can the debug output be re-enabled in order to see the output of the macro on the console?
| After further investigation, the issue was traced back to an Ubuntu team decision to silence Qt's `qDebug` output by default.
See [missing qDebug output when creating QT applications](https://bugs.launchpad.net/ubuntu/+source/qtbase-opensource-src/+bug/1731646).
The bug report notes that Fedora has made the same change. If you want to re-enable the `qDebug` output, the solution is pretty easy.
The best way is to create this empty file
```
~/.config/QtProject/qtlogging.ini
```
Another solution is to export the following to your environment:
```
QT_LOGGING_RULES="*.debug=true"
```
This setting affects *all* of the Qt-based applications in the system, i.e. it's a system-wide configuration setting that will cause all of them to display their `qDebug` outputs.
|
Entity Framework POCO Serialization
I will start to code a new Web application soon. The application will be built using ASP.Net MVC 3 and Entity Framework 4.1 (Database First approach). Instead of using the default EntityObject classes, I will create POCO classes using the ADO.NET POCO Entity Generator.
When I create POCOs using this tool, it automatically adds the Virtual keyword to all properties for change tracking and navigation properties for lazy loading.
I have however read and seen from demonstrations, that Julie Lerman (EF Guru!) seems to turn off lazy loading and also modifies her POCO template so that the Virtual keyword is removed from her POCO classes. Julie states the reason why she does this is because she is writing applications for WCF services and using the Virtual keyword with this causes a Serialization issue. She says, as an object is getting serialized, the serializer is touching the navigation properties which then triggers lazy loading, and before you know it you are pulling the whole database across the wire.
I think Julie was perhaps exaggerating when she said this could pull the whole database across the wire, but even so, the thought scares me!
My question is (finally): should I also remove the Virtual keyword from my POCO classes for my MVC application, and use DetectChanges for my change tracking and eager loading to request navigation properties?
Your help with this would be greatly appreciated.
Thanks as ever.
| Serialization can indeed trigger lazy loading because the getter of the navigation property doesn't have a way to detect if the caller is the serializer or user code.
This is not the only issue: whether you make only the navigation properties virtual or all properties virtual, EF will create a proxy type at runtime for your entities, so the entity instances the serializer has to deal with at runtime will typically be of a type different from the one you defined.
Julie's recommendations are the simplest and most reasonable way to deal with the issues, but if you still want to work with the capabilities of proxies most of the time and only sometimes serialize them with WCF, there are other workarounds available:
- You can use a DataContractResolver to map the proxy types to be serialized as the original types
- You can also turn off lazy loading only when you are about to serialize a graph
More details are contained in this blog post: <http://blogs.msdn.com/b/adonet/archive/2010/01/05/poco-proxies-part-2-serializing-poco-proxies.aspx>
Besides this, my recommendation would be that you use the DbContext template and not the POCO template. DbContext is the new API we released as part of EF 4.1 with the goal of providing greater productivity. It has several advantages, like the fact that it will automatically perform DetectChanges so that in general you won't need to care about calling the method yourself. Also, the POCO entities we generate for DbContext are simpler than the ones that we generate with the POCO templates. You should be able to find lots of MVC examples using DbContext.
|
Safely escape period/dot (.) character in .htaccess mod\_rewrite regex
I have a `.htaccess` file that is used by an advanced SEO URL php system installed on my osCommerce site.
It has the following rules, which work just fine for most cases but remove periods from my GET parameters:
```
RewriteRule ^([a-z0-9/-]+)-c-([0-9_]+).html$ index.php [NC,L,QSA]
RewriteRule ^([a-z0-9/-]+)-m-([0-9]+).html$ index.php [NC,L,QSA]
```
So URL like this:
```
http://example.com//index.php?cPath=44_95&page=1&range=1.99_2.99
```
gets rewritten according to the rule and the `1.99_2.99` becomes `199_299`.
How can I escape the period safely? (ie. without causing some random side effects)
| The standard escape character for .htaccess regular expressions is the backslash ("`\`").
```
RewriteRule ^([a-z0-9/-]+)-c-([0-9_]+)\.html$ index.php [NC,L,QSA]
^^
RewriteRule ^([a-z0-9/-]+)-m-([0-9]+)\.html$ index.php [NC,L,QSA]
^^
```
The backslash removes the special meaning of the dot and escapes it so that the dot is taken verbatim as a character to match (period, ASCII code 46 / 0x2E).
The other suggestion given in the comment to create a character class consisting of the dot only ("`[.]`") does the job as well, but it's perhaps a bit over the top to create a character class while you only want to name a single character. But it's technically working (and has been suggested for example in [escaping dot in Apache mod\_rewrite](https://stackoverflow.com/a/9646792/367456)).
BTW: [Apache rewrite uses Perl Compatible Regular Expressions (PCRE)](https://httpd.apache.org/docs/current/rewrite/intro.html), which is the same flavour of regex that PHP uses in the `preg_*` family of functions, PHP's preferred regex dialect.
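Python's `re` module is not PCRE, but it treats the dot the same way, so the effect of the escape is easy to demonstrate:

```python
import re

print(bool(re.fullmatch(r"1.99", "1x99")))   # True  -- unescaped dot matches ANY char
print(bool(re.fullmatch(r"1\.99", "1x99")))  # False -- escaped dot is a literal period
print(bool(re.fullmatch(r"1\.99", "1.99")))  # True
```

This is exactly why the unescaped rule above happily matched `1.99_2.99` in ways the author didn't intend.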
|
Creating Demux in Verilog
Hello, I have a homework assignment about Verilog.
My task is:
''When interrupt is asserted, s1 register will give the counter number of interrupt in interrupt subroutine. When more
than one interrupt is received, s1 register will give output of priority encoders.''
The schema:
[![image](https://i.stack.imgur.com/azwfm.jpg)](https://i.stack.imgur.com/azwfm.jpg)
I designed the schema and saw in verilog RTL Schematic except demux part.
How can I see the demux part together with the other parts?
Here is my verilog top\_module code.
The counter, priority encoder, and picoblaze modules were given to me.
I tried to write the orgate part and the demux.
```
module top_module(
input clock,
input reset
);
///////priority_encoder///////
wire [3:0] encoder_in;
wire [2:0] encoder_out;
///////////////////////////
/////picoblaze//////
wire interrupt_ack;
//////////////////////////
//////coder/////////////
reg start1;
reg start2;
reg start3;
reg start4;
///////////////////////
always @ (encoder_out or interrupt_ack )
begin
case(encoder_out)
3'b001:
start1 <=1'b1;
3'b010:
start2 <=1'b1;
3'b011:
start3 <=1'b1;
3'b100:
start4 <=1'b1;
endcase
end
ascode instance_name (
.address(address),
.instruction(instruction),
.clk(clk)
);
kcpsm3 picoblaze (
.address(address),
.instruction(instruction),
.port_id(port_id),
.write_strobe(write_strobe),
.out_port(out_port),
.read_strobe(read_strobe),
.in_port(encoder_out),
.interrupt(interrupt),
.interrupt_ack(interrupt_ack),
.reset(reset),
.clk(clk)
);
priority_encoder p_encoder (
.encoder_in(encoder_in),
.encoder_out(encoder_out)
);
counter c100 (
.clk(clk),
.start(start1),
.count_up_to(100),
.ready(encoder_in[0])
);
counter c200 (
.clk(clk),
.start(start2),
.count_up_to(200),
.ready(encoder_in[1])
);
counter c300 (
.clk(clk),
.start(start3),
.count_up_to(300),
.ready(encoder_in[2])
);
counter c400 (
.clk(clk),
.start(start4),
.count_up_to(400),
.ready(encoder_in[3])
);
orgate orgate (
.r1(encoder_in[0]),
.r2(encoder_in[1]),
.r3(encoder_in[2]),
.r4(encoder_in[3]),
.y(interrupt)
);
endmodule
```
| You will not see the demux as it is optimized away. Your code always produces a 1.
You probably want this:
```
always @ ( * )
begin
{start1,start2,start3,start4} = 4'b0000;
case(encoder_out)
3'b001 : start1 =interrupt_ack ;
3'b010 : start2 =interrupt_ack ;
3'b011 : start3 =interrupt_ack ;
default: start4 =interrupt_ack ;
endcase
end
```
1. As you see I used always @(\*). It is safer.
2. Your start signals now default to zero.
3. You have a combinatorial block and thus must use blocking assignment.
4. For some reason you use 3 encoder out bits, thus you MUST decode all states otherwise you get latches. That is what the default is for.
5. By keeping repeated code compact you can better see regular patterns. In this case I see that you start with 3'b001 not with 3'b000 for some reason.
|
Avoiding
I am trying to plot a series of functions on the same graph. The code seems to run ok, but there is no picture coming out, just simply
```
<matplotlib.figure.Figure at 0xeafea58>
```
How can it be fixed?
| In the IPython console, the best way to make sure figures show up (without explicitly calling `plt.show()`) is to use `%matplotlib` mode. If matplotlib is installed correctly, it should automatically choose a suitable backend for your system.
For example:
```
In [1]: import matplotlib.pyplot as plt
In [2]: plt.plot([1, 2, 3]) # no plot shown!
Out[2]: [<matplotlib.lines.Line2D at 0x110eac898>]
In [3]: %matplotlib
Using matplotlib backend: MacOSX
In [4]: plt.plot([1, 2, 3]) # plot shown now
Out[4]: [<matplotlib.lines.Line2D at 0x112174400>]
```
[![enter image description here](https://i.stack.imgur.com/a00eo.png)](https://i.stack.imgur.com/a00eo.png)
The `%matplotlib` magic command only needs to be entered once per session.
|
How to make a draggable background image with background-position and percentage values?
I'm trying to make a simple draggable background using `background-position` and percentage values.
I managed to get the drag working so far but I can't seem to find the right calculation for the image to follow the cursor at the same speed (if it makes sense).
Here is a simple example (using only the `x` axis):
```
const container = document.querySelector('div');
const containerSize = container.getBoundingClientRect();
let imagePosition = { x: 50, y: 50 };
let cursorPosBefore = { x: 0, y: 0 };
let imagePosBefore = null;
let imagePosAfter = imagePosition;
container.addEventListener('mousedown', function(event) {
cursorPosBefore = { x: event.clientX, y: event.clientY };
imagePosBefore = imagePosAfter; // Get current image position
});
container.addEventListener('mousemove', function(event) {
if (event.buttons === 0) return;
let newXPos = imagePosBefore.x + ((cursorPosBefore.x - event.clientX) * 100 / containerSize.width);
newXPos = (newXPos < 0) ? 0 : (newXPos > 100) ? 100 : newXPos; // Stop at the end of the image
imagePosAfter = { x: newXPos, y: imagePosition.y }; // Save position
container.style.backgroundPosition = `${newXPos}% ${imagePosition.y}%`;
});
```
```
div {
width: 400px;
height: 400px;
background-position: 50% 50%;
background-size: cover;
background-repeat: no-repeat;
background-image: url('https://i.stack.imgur.com/5yqL8.png');
cursor: move;
border: 2px solid transparent;
}
div:active {
border-color: red;
}
```
```
<div></div>
```
If I click on one of the white cross on the background and move the mouse then the cross should always remains under the cursor until I reach either the end of the image or the end of the container.
It's probably just a math problem but I'm a bit confused because of how percentages work with `background-position`. Any idea?
| I don't know the exact formula, but you would have to include the size of the CSS background image too.
Your formula would work if the background size were 100% of the container div's size, but it is not. You would need to know the zoom level (which could be calculated from the image size in relation to the div's size).
Here is the formula for when you have your zoom level:
```
container.addEventListener('mousemove', function(event) {
event.preventDefault();
const zoomAffector = (currentZoomLevel - 100) / 100; // 100% zoom = image as big as container
if (zoomAffector <= 0) return; // Cant drag a image that is zoomed out
let newXPos = imagePosBefore.x + ((cursorPosBefore.x - event.pageX) / zoomAffector * 100);
newXPos = (newXPos < 0) ? 0 : (newXPos > 100) ? 100 : newXPos;
imagePosAfter = { x: newXPos, y: imagePosition.y };
container.style.backgroundPosition = `${newXPos}% ${imagePosition.y}%`;
});
```
To get the size of the CSS background image, maybe check out this question: [Get the Size of a CSS Background Image Using JavaScript?](https://stackoverflow.com/questions/3098404/get-the-size-of-a-css-background-image-using-javascript)
Here is my try on your problem using one of the answers from the linked question to get the image width (also did it for y-axis, to be complete):
```
const container = document.querySelector('div');
const containerSize = container.getBoundingClientRect();
let imagePosition = { x: 50, y: 50 };
let cursorPosBefore = { x: 0, y: 0 };
let imagePosBefore = null;
let imagePosAfter = imagePosition;
var actualImage = new Image();
actualImage.src = $('#img').css('background-image').replace(/"/g,"").replace(/url\(|\)$/ig, "");
actualImage.onload = function() {
const zoomX = this.width / containerSize.width - 1;
const zoomY = this.height / containerSize.height - 1;
container.addEventListener('mousedown', function(event) {
cursorPosBefore = { x: event.clientX, y: event.clientY };
imagePosBefore = imagePosAfter; // Get current image position
});
container.addEventListener('mousemove', function(event) {
event.preventDefault();
if (event.buttons === 0) return;
let newXPos = imagePosBefore.x + ((cursorPosBefore.x - event.clientX) / containerSize.width * 100 / zoomX);
newXPos = (newXPos < 0) ? 0 : (newXPos > 100) ? 100 : newXPos;
let newYPos = imagePosBefore.y + ((cursorPosBefore.y - event.clientY) / containerSize.height * 100 / zoomY);
newYPos = (newYPos < 0) ? 0 : (newYPos > 100) ? 100 : newYPos;
imagePosAfter = { x: newXPos, y: newYPos };
container.style.backgroundPosition = `${newXPos}% ${newYPos}%`;
});
}
```
```
#img {
width: 400px;
height: 200px;
background-position: 50% 50%;
background-image: url('https://i.stack.imgur.com/5yqL8.png');
cursor: move;
border: 2px solid transparent;
}
#img:active {
border-color: red;
}
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="img"></div>
```
Or a bit more cleaned up:
```
const container = document.querySelector('div');
const containerSize = container.getBoundingClientRect();
let imagePosition = { x: 50, y: 50 };
let cursorPosBefore = { x: 0, y: 0 };
let imagePosBefore = null;
let imagePosAfter = imagePosition;
// Helpers
const minMax = (pos) => (pos < 0) ? 0 : (pos > 100) ? 100 : pos;
const setNewCenter = (x, y) => {
imagePosAfter = { x: x, y: y };
container.style.backgroundPosition = `${x}% ${y}%`;
};
const getImageZoom = () => {
return new Promise((resolve, reject) => {
let actualImage = new Image();
actualImage.src = $('#img').css('background-image').replace(/"/g,"").replace(/url\(|\)$/ig, "");
actualImage.onload = function() {
resolve({
x: this.width / containerSize.width - 1,
y: this.height / containerSize.height - 1
});
}
});
}
const addEventListeners = (zoomLevels) => {
container.addEventListener('mousedown', function(event) {
cursorPosBefore = { x: event.clientX, y: event.clientY };
imagePosBefore = imagePosAfter; // Get current image position
});
container.addEventListener('mousemove', function(event) {
event.preventDefault();
if (event.buttons === 0) return;
let newXPos = imagePosBefore.x + ((cursorPosBefore.x - event.clientX) / containerSize.width * 100 / zoomLevels.x);
let newYPos = imagePosBefore.y + ((cursorPosBefore.y - event.clientY) / containerSize.height * 100 / zoomLevels.y);
setNewCenter(minMax(newXPos), minMax(newYPos));
});
};
getImageZoom().then(zoom => addEventListeners(zoom));
```
```
#img {
width: 400px;
height: 200px;
background-position: 50% 50%;
background-image: url('https://i.stack.imgur.com/5yqL8.png');
cursor: move;
border: 2px solid transparent;
}
#img:active {
border-color: red;
}
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="img"></div>
```
Or to answer your follow-up question:
```
const container = document.querySelector("div");
const containerSize = container.getBoundingClientRect();
let imagePosition = { x: 50, y: 50 };
let cursorPosBefore = { x: 0, y: 0 };
let imagePosBefore = null;
let imagePosAfter = imagePosition;
// Helpers
const minMax = (pos) => (pos < 0 ? 0 : pos > 100 ? 100 : pos);
const setNewCenter = (x, y) => {
imagePosAfter = { x: x, y: y };
container.style.backgroundPosition = `${x}% ${y}%`;
};
const getImageZoom = () => {
return new Promise((resolve, reject) => {
let actualImage = new Image();
actualImage.src = $("#img")
.css("background-image")
.replace(/"/g, "")
.replace(/url\(|\)$/gi, "");
actualImage.onload = function () {
const imgW = this.width,
imgH = this.height,
conW = containerSize.width,
conH = containerSize.height,
ratioW = imgW / conW,
ratioH = imgH / conH;
// Stretched to Height
if (ratioH < ratioW) {
resolve({
x: imgW / (conW * ratioH) - 1,
y: imgH / (conH * ratioH) - 1,
});
} else {
// Stretched to Width
resolve({
x: imgW / (conW * ratioW) - 1,
y: imgH / (conH * ratioW) - 1,
});
}
};
});
};
const addEventListeners = (zoomLevels) => {
container.addEventListener("mousedown", function (event) {
cursorPosBefore = { x: event.clientX, y: event.clientY };
imagePosBefore = imagePosAfter; // Get current image position
});
container.addEventListener("mousemove", function (event) {
event.preventDefault();
if (event.buttons === 0) return;
let newXPos =
imagePosBefore.x +
(((cursorPosBefore.x - event.clientX) / containerSize.width) * 100) /
zoomLevels.x;
let newYPos =
imagePosBefore.y +
(((cursorPosBefore.y - event.clientY) / containerSize.height) * 100) /
zoomLevels.y;
setNewCenter(minMax(newXPos), minMax(newYPos));
});
};
getImageZoom().then((zoom) => addEventListeners(zoom));
```
```
#img {
width: 400px;
height: 200px;
background-size: cover;
background-position: 50% 50%;
background-image: url('https://i.stack.imgur.com/5yqL8.png');
cursor: move;
}
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="img"></div>
```
|
How can I efficiently retrieve a large number of database settings as PHP variables?
Currently all of my script's settings are located in a PHP file which I 'include'. I'm in the process of moving these settings (about 100) to a database table called 'settings'. However I'm struggling to find an efficient way of retrieving all of them into the file.
The settings table has 3 columns:
- ID (autoincrements)
- name
- value
Two example rows might be:
```
admin_user john
admin_email_address john@example.com
```
The only way I can think of retrieving each setting is like this:
```
$result = mysql_query("SELECT value FROM settings WHERE name = 'admin_user'");
$row = mysql_fetch_array($result);
$admin_user = $row['value'];
$result = mysql_query("SELECT value FROM settings WHERE name = 'admin_email_address'");
$row = mysql_fetch_array($result);
$admin_email_address = $row['value'];
```
etc etc
Doing it this way will take up **a lot** of code and will likely be slow.
Is there a better way?
| 100 settings? Load them all at once. That will take no time at all. You *absolutely* do not want to load them one at a time.
```
$result = mysql_query('SELECT * FROM settings');
$settings = array();
while ($row = mysql_fetch_assoc($result)) {
$settings[$row['name']] = $row['value'];
}
```
If you need to compartmentalize these somehow, depending on how you need to do it, you could put a category or something on the table and then just load all the settings in a particular category.
What I would suggest is abstracting this behind an object of some kind:
```
class Settings {
private $settings;
public function __get($name) {
if (!$this->settings) {
$result = mysql_query('SELECT * FROM settings');
$this->settings = array();
while ($row = mysql_fetch_assoc($result)) {
$this->settings[$row['name']] = $row['value'];
}
}
return $this->settings[$name];
}
}
```
This way the settings aren't loaded until you try and access one:
```
$settings = new Settings;
echo $settings->admin_name; // now they're loaded
```
|
Is Javascript "caching" operations?
I was implementing the Levenshtein distance function in Javascript, and I was wondering how much time it takes to run it with Wikipedia's example ("sunday" & "saturday").
So I used `console.time()` and `console.timeEnd()` to determine the time spent for the function execution.
```
for (var i = 0; i < 15; i++) {
console.time("benchmark" + i);
var result = LevenshteinDistance("sunday", "saturday");
console.timeEnd("benchmark" + i);
}
```
Since it was fluctuating between 0.4ms and 0.15ms, I used a loop and I stumbled upon weird values:
- 0.187ms
- 0.028ms
- 0.022ms
- 0.022ms
- 0.052ms
- 0.026ms
- 0.028ms
- 0.245ms
- 0.030ms
- 0.024ms
- 0.020ms
- 0.019ms
- 0.059ms
- 0.039ms
- 0.040ms
The recurring thing is the high value for the first (and rarely second) execution, then smaller values.
(Same behavior between JS in Chrome console and NodeJS.)
So my question is : Is Javascript "caching" executions (since JS is compiled with the V8 engine) ?
And also, can I use this behavior to make the function run faster when using different parameters each time ?
| V8 is using a JIT compiler. It starts to compile everything as fast as it can with little optimizations because it wants to start quickly and then it optimizes the functions that are called multiple times to speed up the execution where it actually matters.
Why doesn't it optimize everything to begin with? To start faster. Some code is run only once and it would be a waste of time to optimize it because the time of running optimizations would be longer than the time saved by the optimizations. And JavaScript starts pretty quickly - compare running a Node.js hello world to compiling and running a Java hello world (yes, Node.js apps are compiled from scratch every time they start).
Consider this Node.js program, hello.js:
```
console.log('Hello from Node');
```
and this Java program, Hello.java:
```
class Hello {
public static void main(String[] argv) {
System.out.println("Hello from Java");
}
}
```
Run the Node program:
```
$ time (node hello.js)
Hello from Node
real 0m0.059s
user 0m0.047s
sys 0m0.012s
```
and compare it with Java program:
```
$ time (javac Hello.java && java Hello)
Hello from Java
real 0m0.554s
user 0m1.073s
sys 0m0.068s
```
For more info see:
- <http://thibaultlaurens.github.io/javascript/2013/04/29/how-the-v8-engine-works/>
- <https://v8project.blogspot.com/2015/07/digging-into-turbofan-jit.html>
- <http://jayconrod.com/posts/54/a-tour-of-v8-crankshaft-the-optimizing-compiler>
|
Setting up virtual bridge: Cannot find device "br0"
**Setting up a virtual bridge with Ubuntu with following config in** `/etc/network/interfaces`
```
auto brOffline
iface brOffline inet static
address 192.168.5.10
netmask 255.255.255.0
bridge_ports eth11
bridge_stp off
bridge_fd 0.0
pre-up ifdown eth11
pre-up ifup eth11
post-down ifdown eth11
```
*The config creates a bridge interface. Every bridge needs an adapter, in this case my physical network card eth11. To make sure it's working, the interface is brought down and up again.*
**causes the restarting of the networking service ...**
```
service networking restart
service networking status
```
**... to display an error similar to**
```
ifup[2304]: Cannot find device "brOnline"
dhclient[2330]: Error getting hardware address for "brOffline": No such device
```
If your interface has the standard name br0, the error would read:
```
default:
Error getting hardware address for "br0": No such device
```
| **Bridge util was not installed**
I moved from one system to another. The target OS was a newly installed 17.10. What was missing were the bridge utilities:
```
sudo apt-get install -y bridge-utils
```
The bridge simply could not be created because of missing tools...
Now `ifconfig` shows my shiny bridge
```
brOffline: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.5.10 netmask 255.255.255.0 broadcast 192.168.5.255
inet6 fe80::6a05:caff:fe51:8eff prefixlen 64 scopeid 0x20<link>
ether 68:05:ca:51:8e:ff txqueuelen 1000 (Ethernet)
RX packets 2 bytes 501 (501.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 107 bytes 10316 (10.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
[Hint which helped me to find the solution](https://ubuntuforums.org/showthread.php?t=1351120 "Hint which helped me to find the solution")
|
SKNode.removeFromParent() EXC\_BAD\_ACCESS
I noticed a weird behavior in my Swift project and reproduced it in an empty SpriteKit project this way:
```
class GameScene: SKScene {
override func didMoveToView(view: SKView) {
let sprite = SKSpriteNode(imageNamed:"Spaceship")
self.addChild(sprite)
//sprite.removeFromParent()
let sprite2 = SKSpriteNode(imageNamed:"Spaceship")
self.addChild(sprite2)
sprite2.removeFromParent()
}
}
```
It crashes before the app starts, and all I can see is this:
![Screenshot of xCode](https://i.stack.imgur.com/iKRzo.png)
My config is Xcode6-Beta6, iPad Mini Retina with iOS8-Beta5 and OSX 10.9.4.
I also reproduced the bug in the simulators, with Xcode6-Beta5, and with the code moved into the `touchesBegan` method.
Uncommenting the line `sprite.removeFromParent()` make the bug disappear.
| **IMPORTANT:** this bug has been corrected since iOS 8.1, be sure to update AND make your app unavailable for iOS 8.0 and prior.
I found out what happens... and it looks like an Apple employee made a mistake. Explanation:
```
let sprite = SKSpriteNode(imageNamed:"Spaceship")
self.addChild(sprite)
let sprite2 = SKSpriteNode(imageNamed:"Spaceship")
self.addChild(sprite2)
sprite2.removeFromParent()
println( sprite == sprite2 ) // Returns "true"
// Then crash
```
And if you do it :
```
let sprite = SKSpriteNode(imageNamed:"Spaceship")
sprite.name = "1"
self.addChild(sprite)
let sprite2 = SKSpriteNode(imageNamed:"Spaceship")
sprite2.name = "2"
self.addChild(sprite2)
sprite2.removeFromParent()
println( sprite == sprite2 ) // Returns "false"
// Then all is right, no crash
```
I think, very simply, that when you call `.removeFromParent()` Apple's code checks for equality with `==`, like they would do in *Objective-C*. But since it's *Swift*, they should use `===` to check for **object identity** rather than `==`, so a simple mistake.
Congratz, you found a bug in SpriteKit code, [go file a report with Apple](https://developer.apple.com/bug-reporting/) :D
|
How can I remove standard controls in an OpenLayers-Map?
I use OpenLayers and want to create another navigation-control in the upper-left side. I know how to add Controls, but this navigation is added at default while creating the OpenLayers-Map. So I want to remove that Control, to add an own. I know already, that the default-control is an OpenLayers.Control.PanZoom.
| The map object has a property called `controls` that is an array of `OpenLayers.Control` objects. If this property is not explicitly set then OpenLayers will assume that you want the default control set, including `OpenLayers.Control.Navigation()`, `OpenLayers.Control.PanZoom()`, `OpenLayers.Control.ArgParser()`, and `OpenLayers.Control.Attribution()`.
To remove `PanZoom` or any other default control, simply set the `controls` property array at the time you construct the `Map` object. Here is a code example:
```
var map = new OpenLayers.Map('map', {
controls: [
new OpenLayers.Control.Navigation(),
new OpenLayers.Control.ArgParser(),
new OpenLayers.Control.Attribution()
]
});
```
Here is a live [example](http://openlayers.org/dev/examples/controls.html).
**Please note** that by setting the `controls` property you will not get any `Control` objects by default. Any controls you need must be added manually.
Here is a link to the [source code of the `Map` object](http://trac.openlayers.org/browser/trunk/openlayers/lib/OpenLayers/Map.js) if you want to see how it works for yourself.
|
Joomla How to customize main menu
I am learning Joomla and ran into the following problem.
Here is the main menu in HTML
```
<ul>
<li class="active"><a href="#">home</a></li>
<li><a href="#">bio</a></li>
<li><a href="#">news</a></li>
<li><a href="#" class="first-lev">projects<span class="ico"></span></a>
<div class="sub-nav">
<ul>
<li><a href="#">yegor<br/>zabelov<br/>trio</a></li>
<li><a href="#">gurzuf</a></li>
<li><a href="#">soundtracks</a></li>
</ul>
</div><!-- .sub-nav -->
</li>
...
</ul>
```
How is it possible to customize the main menu in joomla to:
1. add the class .first-lev to some links
2. add the span inside the item with this class
3. add the wrapper div for sub navigation
Appreciate any help.
| 1. To add the class to some links, just go to the admin panel and select your menu item. Go to the link type options -> link CSS style, and add the class manually.
2. You need to edit default\_component.php or default\_url.php
3. You need to edit default.php
<http://docs.joomla.org/How_to_override_the_output_from_the_Joomla!_core>
To customize menu layout, just copy
```
/modules/mod_menu/tmpl
```
to:
```
/templates/your_template/html/mod_menu
```
And then customize the files you just copied. These files override the system mod\_menu files.
Don't forget to add:
```
<folder>html</folder>
```
to your templateDetails.xml.
|
Printing the view in iOS with Swift
I am developing an app which requires visitor passes to be generated and printed directly from an iPad over AirPrint.
I have looked everywhere to find out how to print a view but I can only find how to print text, webKit and mapKit.
Is there a way of printing an entire view? If not, what would be a good solution to print a visitor pass which will be plain text, boxes and a photograph. Thanks.
| I have found the answer to my question by modifying the code found here: [AirPrint contents of a UIView](https://stackoverflow.com/questions/32403634/airprint-contents-of-a-uiview)
```
//create an extension to covert the view to an image
extension UIView {
func toImage() -> UIImage {
UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.mainScreen().scale)
drawViewHierarchyInRect(self.bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
//In your view controller
@IBAction func printButton(sender: AnyObject) {
let printInfo = UIPrintInfo(dictionary:nil)
printInfo.outputType = UIPrintInfoOutputType.General
printInfo.jobName = "My Print Job"
// Set up print controller
let printController = UIPrintInteractionController.sharedPrintController()
printController.printInfo = printInfo
// Assign a UIImage version of my UIView as a printing item
printController.printingItem = self.view.toImage()
// If you want to specify a printer
guard let printerURL = URL(string: "Your printer URL here, e.g. ipps://HPDC4A3E0DE24A.local.:443/ipp/print") else { return }
guard let currentPrinter = UIPrinter(url: printerURL) else { return }
printController.print(to: currentPrinter, completionHandler: nil)
// Do it
printController.presentFromRect(self.view.frame, inView: self.view, animated: true, completionHandler: nil)
}
```
|
Using numpy to efficiently convert 16-bit image data to 8 bit for display, with intensity scaling
I frequently convert 16-bit grayscale image data to 8-bit image data for display. It's almost always useful to adjust the minimum and maximum display intensity to highlight the 'interesting' parts of the image.
The code below does roughly what I want, but it's ugly and inefficient, and makes many intermediate copies of the image data. **How can I achieve the same result with a minimum memory footprint and processing time?**
```
import numpy
image_data = numpy.random.randint( #Realistic images would be much larger
low=100, high=14000, size=(1, 5, 5)).astype(numpy.uint16)
display_min = 1000
display_max = 10000.0
print(image_data)
threshold_image = ((image_data.astype(float) - display_min) *
(image_data > display_min))
print(threshold_image)
scaled_image = (threshold_image * (255. / (display_max - display_min)))
scaled_image[scaled_image > 255] = 255
print(scaled_image)
display_this_image = scaled_image.astype(numpy.uint8)
print(display_this_image)
```
| What you are doing is [halftoning](http://en.wikipedia.org/wiki/Halftone) your image.
The methods proposed by others work great, but they are repeating a lot of expensive computations over and over again. Since in a `uint16` there are at most 65,536 different values, using a look-up table (LUT) can streamline things a lot. And since the LUT is small, you don't have to worry that much about doing things in place, or not creating boolean arrays. The following code reuses Bi Rico's function to create the LUT:
```
import numpy as np
import timeit
rows, cols = 768, 1024
image = np.random.randint(100, 14000,
size=(1, rows, cols)).astype(np.uint16)
display_min = 1000
display_max = 10000
def display(image, display_min, display_max): # copied from Bi Rico
# Here I set copy=True in order to ensure the original image is not
# modified. If you don't mind modifying the original image, you can
# set copy=False or skip this step.
image = np.array(image, copy=True)
image.clip(display_min, display_max, out=image)
image -= display_min
np.floor_divide(image, (display_max - display_min + 1) / 256,
out=image, casting='unsafe')
return image.astype(np.uint8)
def lut_display(image, display_min, display_max) :
lut = np.arange(2**16, dtype='uint16')
lut = display(lut, display_min, display_max)
return np.take(lut, image)
>>> np.all(display(image, display_min, display_max) ==
lut_display(image, display_min, display_max))
True
>>> timeit.timeit('display(image, display_min, display_max)',
'from __main__ import display, image, display_min, display_max',
number=10)
0.304813282062
>>> timeit.timeit('lut_display(image, display_min, display_max)',
'from __main__ import lut_display, image, display_min, display_max',
number=10)
0.0591987428298
```
So there is a 5x speed-up, which is not a bad thing, I guess...
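For reference, the clip-shift-scale arithmetic can also be collapsed into one vectorized expression when building the LUT. This is just a sketch of the same idea; note it divides the display window into 255 steps rather than the 256-bin `floor_divide` variant above, so the exact byte values differ slightly:

```python
import numpy as np

display_min, display_max = 1000, 10000

# Build the LUT once: clip every possible uint16 value into the display
# window, shift it to zero, and scale it onto 0..255.
lut = np.clip(np.arange(2**16, dtype=np.int64), display_min, display_max)
lut = ((lut - display_min) * 255 // (display_max - display_min)).astype(np.uint8)

# Applying the LUT is then a single fancy-indexing operation.
image = np.array([[999, 1000, 5500, 10000, 14000]], dtype=np.uint16)
print(lut[image])  # values below the window map to 0, above it to 255
```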
|
How to set limit to the number of concurrent request in servlet?
I got this servlet which return a pdf file to the client web browser.
We do not want to risk the server being paralyzed when the number of requests gets too high.
We would like an application-level (programmatic) way to set a limit on the number of concurrent requests, and to return an error message to the browser when the limit is reached. We need to do it at the application level because we have different servlet containers in development (Tomcat) and production (WebSphere).
I must emphasize that I want to control the maximum number of requests, not sessions. A user can send multiple requests to the server within the same session.
Any idea?
I've thought about using a static counter to keep track of the number of requests, but it would raise race condition problems.
| I'd suggest writing a simple servlet `Filter`. Configure it in your `web.xml` to apply to the path that you want to limit the number of concurrent requests. The code would look something like this:
```
public class LimitFilter implements Filter {
private int limit = 5;
private int count;
private Object lock = new Object();
public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
boolean ok;
synchronized (lock) {
ok = count++ < limit;
}
if (ok) {
// let the request through and process as usual
chain.doFilter(request, response);
} else {
// handle limit case, e.g. return status code 429 (Too Many Requests)
// see https://www.rfc-editor.org/rfc/rfc6585#page-3
}
} finally {
synchronized (lock) {
count--;
}
}
}
}
```
Or alternatively you could just put this logic into your `HttpServlet`. It's just a bit cleaner and more reusable as a `Filter`. You might want to make the limit configurable through the `web.xml` rather than hard coding it.
**Ref.:**
Check definition of [HTTP status code 429](https://www.rfc-editor.org/rfc/rfc6585#page-3).
|
How to get confidence on classification predictions with multi-class Vowpal Wabbit
I have a classification problem in which I'm using the `--ect` option for the multi-class algorithm.
The output of the classifier is something as follows:
```
1.000000 805848386108096
2.000000 133087140195133
2.000000 598100953597523
3.000000 629273927146079
2.000000 547637911979064
1.000000 733923413306849
```
Where the first part is the class (1 to 3) and the second part my tag/id.
Is there a way to get the 'confidence' level of each prediction? For instance, if the confidence is below a certain threshold, I want to leave the example as "un-classified".
| Unfortunately, because of the filter tree / elimination implementation in ECT, getting a measure of confidence is not straight-forward. If you can sacrifice some speed, using -oaa with logistic loss and the -r (--raw\_predictions) option gives you raw scores that you can convert to a normalized measure of relative "confidence". Say you have a file like this in "ect.dat":
```
1 ex1| a
2 ex2| a b
3 ex3| c d e
2 ex4| b a
1 ex5| f g
```
We run the one-against-all:
```
vw --oaa 3 ect.dat -f oaa.model --loss_function logistic
```
Then run prediction with raw scores output:
```
vw -t -i oaa.model ect.dat -p oaa.predict -r oaa.rawp
```
You get predictions in oaa.predict:
```
1.000000 ex1
2.000000 ex2
3.000000 ex3
2.000000 ex4
1.000000 ex5
```
and raw scores for each class in oaa.rawp:
```
1:0.0345831 2:-0.0888872 3:-0.533179 ex1
1:-0.241225 2:0.170322 3:-0.749773 ex2
1:-0.426383 2:-0.502638 3:0.154067 ex3
1:-0.241225 2:0.170322 3:-0.749773 ex4
1:0.307398 2:-0.387151 3:-0.502747 ex5
```
You can map these using `1/(1+exp(-score))` and then normalize in various ways to get something like these:
```
1:0.62144216 2:0.5328338 3:0.20096953 ex1
1:0.57251362 2:0.71125717 3:0.1433303 ex2
1:0.37941591 2:0.29294807 3:0.66095287 ex3
1:0.57251362 2:0.71125717 3:0.1433303 ex4
1:0.72177734 2:0.37525053 3:0.2704246 ex5
```
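As an illustration of that last step, here is a small Python sketch (my own, not part of VW): the raw scores are copied from `ex1` above, the logistic mapping is the `1/(1+exp(-score))` mentioned below, and sum-normalization is just one of the "various ways", not VW's own output:

```python
import math

raw_scores = {1: 0.0345831, 2: -0.0888872, 3: -0.533179}  # ex1 above

# Map each raw score through the logistic function into (0, 1)...
probs = {k: 1.0 / (1.0 + math.exp(-s)) for k, s in raw_scores.items()}

# ...then normalize so the values sum to 1 and act as relative confidences.
total = sum(probs.values())
confidence = {k: p / total for k, p in probs.items()}

# Leave the example "un-classified" when the winner is below a threshold.
best = max(confidence, key=confidence.get)
if confidence[best] < 0.5:  # example threshold, tune on held-out data
    best = None             # for ex1 the top class only reaches ~0.375
```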
Once you have a significantly large data set scored, you can plot threshold in steps of 0.1, for instance, against percent correct if using that threshold to score, to get an idea of what threshold will give you, say, 95% correct for class 1, and so on.
[This discussion](https://groups.yahoo.com/neo/groups/vowpal_wabbit/conversations/topics/3196) might be useful.
|
Can you use pattern matching to bind the last element of a list?
Since there is a way to bind the head and tail of a list via pattern matching, I'm wondering if you can use pattern matching to bind the last element of a list?
| Yes, you can, using the `ViewPatterns` extension.
```
Prelude> :set -XViewPatterns
Prelude> let f (last -> x) = x*2
Prelude> f [1, 2, 3]
6
```
Note that this pattern will always succeed, though, so you'll probably want to add a pattern for the case where the list is empty, else `last` will throw an exception.
```
Prelude> f []
*** Exception: Prelude.last: empty list
```
Also note that this is just syntactic sugar. Unlike normal pattern matching, this is *O(n)*, since you're still accessing the last element of a singly-linked list. If you need more efficient access, consider using a different data structure such as [`Data.Sequence`](http://www.haskell.org/ghc/docs/latest/html/libraries/containers/Data-Sequence.html), which offers *O(1)* access to both ends.
|
Firebase Phone Verification verifyPhoneNumber() deprecated + Application Crashed
I am getting an error after upgrading the **Firebase Auth (20.0.0)** dependency for Phone Authentication, when calling **PhoneAuthProvider.getInstance().verifyPhoneNumber()**
**Dependency:**
```
implementation 'com.google.firebase:firebase-auth:20.0.0'
```
**Error:**
```
java.lang.NoClassDefFoundError: Failed resolution of: Landroidx/browser/customtabs/CustomTabsIntent$Builder;
at com.google.firebase.auth.internal.RecaptchaActivity.zza(com.google.firebase:firebase-auth@@20.0.0:92)
at com.google.firebase.auth.api.internal.zzeq.zza(com.google.firebase:firebase-auth@@20.0.0:79)
at com.google.firebase.auth.api.internal.zzeq.onPostExecute(com.google.firebase:firebase-auth@@20.0.0:88)
at android.os.AsyncTask.finish(AsyncTask.java:755)
at android.os.AsyncTask.access$900(AsyncTask.java:192)
at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:772)
at android.os.Handler.dispatchMessage(Handler.java:107)
at android.os.Looper.loop(Looper.java:237)
at android.app.ActivityThread.main(ActivityThread.java:7948)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1075)
Caused by: java.lang.ClassNotFoundException: Didn't find class "androidx.browser.customtabs.CustomTabsIntent$Builder"
```
Can anyone explain what should I change for new dependency? What are the new steps?
| This is what I did to remove the error:
I referred to the [firebase phone auth documentation](https://firebase.google.com/docs/auth/android/phone-auth#send-a-verification-code-to-the-users-phone) and made the necessary changes:
Replace this:
```
PhoneAuthProvider.getInstance().verifyPhoneNumber(
phoneNumber, //phone number to be verified
60, // validity of the OTP
TimeUnit.SECONDS,
(Activity) TaskExecutors.MAIN_THREAD,
mCallBack // onVerificationStateChangedCallback
);
```
With this
```
PhoneAuthOptions options =
PhoneAuthOptions.newBuilder(mAuth)
.setPhoneNumber(phoneNumber) // Phone number to verify
.setTimeout(60L, TimeUnit.SECONDS) // Timeout and unit
.setActivity(this) // Activity (for callback binding)
.setCallbacks(mCallBack) // OnVerificationStateChangedCallbacks
.build();
PhoneAuthProvider.verifyPhoneNumber(options);
```
Also, add this to your app/gradle file dependencies:
```
implementation 'androidx.browser:browser:1.2.0'
```
This will help firebase to open the browser for reCAPTCHA verification.
Hope this works!
|
Converting integers into Unary Notation
I have written a Python script that uses tallies to represent the digits 1-9 (0 is just ":"). Unlike true unary languages, which are of the form **1^k**, I represent each value **n** by its *decimal digits*. The integer *121* is **1\_11\_1**. Overall, every digit of an "integer" **n** is in tally notation, and the number of tallies is the sum of all the digits of **n**, e.g. **121 = 4, 1\_11\_1 = 4**.
It's called a [many-one reduction](https://en.wikipedia.org/wiki/Many-one_reduction). If you can transform Instance A(9,33,4) into Instance B(1111\_1111 1, 111----111, 1111) in poly-time, then it's NP-complete.
The transformation rule is just to enter each integer **sequentially** when asked in the while loop. Also, when your integer is negative, do not give a "-" symbol as input; the script will ask whether the integer is negative.
```
Input
(9,33,4) "9, and then 33 and then 4. One at a time for each input."
Output
'1111_1111 1' = 9
111
33>>
111
11 11 =4
```
**Algorithm for the Reduction**
```
# This algorithm is very simple and inconsequential.
# It converts integers into a unary like language
# in poly time. All tallies are represented vertically.
print('If your integer is a negative, the script will ask.')
print("DO NOT GIVE -X integers for input!!!")
print('Script will output multiple - symbols for a negative integer transformation')
while 0 == 0:
ask = input('Enter an integer from a subset-sum instance sequentially.')
askStr = str(ask)
res = list(map(int, str(askStr)))
x = (res)
asktwo = input('Is your integer a negative integer? y or n: ')
if asktwo == str("y"):
twinkle = str('-')
else:
twinkle = str(" ")
for n in x:
if n == 0:
tallyone = ":"
print(twinkle, tallyone)
if n == 1:
print(twinkle, "1")
if n == 2:
print(twinkle, "11")
if n == 3:
print(twinkle, "111")
if n == 4:
print(twinkle, "11 11")
if n == 5:
print(twinkle, "111 11")
if n == 6:
print(twinkle, "111 111")
if n == 7:
print(twinkle, "111_111 1")
if n == 8:
print(twinkle, "1111 1111")
if n == 9:
print(twinkle, "1111_1111 1")
```
## Question
In what way is this code sloppy? Am I using while loops and variable names in an ugly way? Are my tallies in the print statements hard to read? What are the ugliest parts of my code and how would I improve them?
*The code works. But, I don't know enough about python so what mistakes are you seeing in my code?*
| This is a very interesting task. Good work on doing it. Here are some criticism.
---
>
>
> ```
> print('If your integer is a negative, the script will ask.')
> print("DO NOT GIVE -X integers for input!!!")
> print('Script will output multiple - symbols for a negative integer transformation')
>
> ```
>
>
- Do not use both double quote
(`""`) and single quote (`''`) strings. Pick one. I personally prefer `""`.
>
>
> ```
> while 0 == 0:
> ask = input('Enter an integer from a subset-sum instance sequentially.')
>
> ```
>
>
- It's considered a best practice to indent using 4 spaces instead of 2.
- Also it would be better to move actual transform functionality to a new function.
- Also it's better to use `while True` instead of `while 0 == 0` to indicate an endless loop.
- **Reason:** This is more readable.
>
>
> ```
> askStr = str(ask)
> res = list(map(int, str(askStr)))
>
> ```
>
>
- You are converting `ask` twice to a string. This is redundant.
- Since `input()` returns a string you don't need to convert this at all.
- It is also better to use Python conventions for names. Ex: `ask_str` or `value`
>
>
> ```
> x = (res)
>
> ```
>
>
- You don't need parenthesis here.
- There is also no need to assign to `x` you can directly use `res`.
>
>
> ```
> if asktwo == str("y"):
> twinkle = str('-')
> else:
> twinkle = str(" ")
>
> ```
>
>
- You don't need to convert a string literal to a string again.
- You can directly use `"y"` as a string.
- `twinkle` is not a good name. Use something like `sign`.
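Putting those points together, one possible restructuring (a sketch, not the only way) replaces the `if` chain with a lookup table and moves the per-integer work into a function that returns the lines instead of printing them:

```python
# Tally strings copied from the original print statements.
TALLIES = {
    0: ":", 1: "1", 2: "11", 3: "111", 4: "11 11",
    5: "111 11", 6: "111 111", 7: "111_111 1",
    8: "1111 1111", 9: "1111_1111 1",
}

def tally_lines(value, negative=False):
    """Return one tally line per decimal digit of value."""
    sign = "-" if negative else " "
    return ["{} {}".format(sign, TALLIES[int(digit)]) for digit in str(value)]

def main():  # the interactive loop, kept separate from the logic
    while True:
        value = input("Enter an integer from a subset-sum instance: ")
        negative = input("Is your integer negative? y or n: ") == "y"
        print("\n".join(tally_lines(value, negative)))
```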
|
Change the colour of ablines on ggplot
Using [this data](http://pastebin.com/06Zwabnq) I am fitting a plot:
```
p <- ggplot(dat, aes(x=log(Explan), y=Response)) +
geom_point(aes(group=Area, colour=Area))+
geom_abline(slope=-0.062712, intercept=0.165886)+
geom_abline(slope= -0.052300, intercept=-0.038691)+
scale_x_continuous("log(Mass) (g)")+
theme(axis.title.y=element_text(size=rel(1.2),vjust=0.2),
axis.title.x=element_text(size=rel(1.2),vjust=0.2),
axis.text.x=element_text(size=rel(1.3)),
axis.text.y=element_text(size=rel(1.3)),
text = element_text(size=13)) +
scale_colour_brewer(palette="Set1")
```
The two ablines represent the phylogenetically adjusted relationships for each Area trend. I am wondering, is it possible to get the ablines in the same colour palette as their appropriate area data? The first specified is for Area A, the second for Area B.
I used:
```
g <- ggplot_build(p)
```
to find out that the first colour is #E41A1C and the second is #377EB8, however when I try to use aes within the +geom\_abline command to specify these colours i.e.
```
p <- ggplot(dat, aes(x=log(Explan), y=Response)) +
geom_point(aes(group=Area, colour=Area))+
geom_abline(slope=-0.062712, intercept=0.165886,aes(colour='#E41A1C'))+
geom_abline(slope= -0.052300, intercept=-0.038691,aes(colour=#377EB8))+
scale_x_continuous("log(Mass) (g)")+
theme(axis.title.y=element_text(size=rel(1.2),vjust=0.2),
axis.title.x=element_text(size=rel(1.2),vjust=0.2),
axis.text.x=element_text(size=rel(1.3)),
axis.text.y=element_text(size=rel(1.3)),
text = element_text(size=13)) +
scale_colour_brewer(palette="Set1")
```
It changes the colour of the points and adds to the legend, which I don't want to do.
Any advice would be much appreciated!
| Given that you want the colour of the lines to correspond with the colours set for the points, which are mapped from Area, you can map the lines using the appropriate values of Area.
eg
```
geom_abline(slope=-0.062712, intercept=0.165886,aes(colour='A')) +
geom_abline(slope= -0.052300, intercept=-0.038691,aes(colour='B'))
```
This has the added bonus that it will be consistent if you change the colour scheme.
A second approach would be to pass a data.frame containing the slopes, intercepts and Area, eg
```
cc <- data.frame(sl = c(-0.062712,-0.052300),
int = c(0.165886,-0.038691),
Area = c('A','B'))
```
Then you could `map` the `slope`, `intercept` and `colour` within a single call to `geom_abline`
eg
```
p <- ggplot(dat, aes(x=log(Explan), y=Response)) +
geom_point(aes(group=Area, colour=Area))+
geom_abline(data = cc, aes(slope =sl, intercept = int,colour = Area)) +
scale_x_continuous("log(Mass) (g)")+
theme(axis.title.y=element_text(size=rel(1.2),vjust=0.2),
axis.title.x=element_text(size=rel(1.2),vjust=0.2),
axis.text.x=element_text(size=rel(1.3)),
axis.text.y=element_text(size=rel(1.3)),
text = element_text(size=13)) +
scale_colour_brewer(palette="Set1")
p
```
![enter image description here](https://i.stack.imgur.com/MuQYs.png)
|
"Don't run bundler as root" - what is the exact difference made by using root?
If you run ruby bundler from the command line while logged in as root, you get the following warning:
>
> Don't run Bundler as root. Bundler can ask for sudo if it is needed,
> and installing your bundle as root will break this application for all
> non-root users on this machine.
>
>
>
What is this exact difference that running bundler as root makes to the gems it installs?
Is it to do with the permissions of the actual files that it installs for each gem? Will Ruby try to access the gem files as a non-root user (and if so, what user / group would Ruby use and how would I find out)?
What would be the symptoms of an application that is broken due to bundler being used as root?
---
My specific reason for asking is because I'm trying to use bundler on a very basic Centos VPS where I have no need to set up any non-root users. I'm [having other problems with gems installed via bundler](https://stackoverflow.com/questions/25438186/what-could-cause-one-gem-in-a-gemset-to-be-unavailable-while-all-the-others-are) (`Error: file to import not found or unreadable: gemname` despite the gem in question being present in `gem list`), and I'm wondering if installing the gems via bundler as root might have made the files unreadable to Ruby.
I want to work out if I do need to set up a non-root user account purely for running bundler, and if I do, what groups and privileges this user will need to allow Ruby to run the gems bundler installs.
Or can I just `chown` or `chgrp` the gem folders? If so, does it depend on anything to do with how Ruby is installed? (I used RVM and my gems end up in `/usr/local/rvm/gems/` which is owned by root in group rvm) [This loosely related question's answer implies that unspecified aspects of how Ruby is installed influence bundler's permissions requirements](https://stackoverflow.com/questions/16376995/bundler-cannot-install-any-gems-without-sudo).
Researching the "Don't run bundler as root" message only comes up with [an unanswered question](https://stackoverflow.com/questions/25210957/dont-run-bundler-as-root-error) and [complaints that this warning is apparently "like it saying to go to sleep at 8PM" (link contains NSFW language)](http://www.coldplaysucks.com/blogs/news/13799949-dont-run-bundler-as-root).
| So I had to dig into the git log history of bundler's repo, because GitHub [doesn't allow search](https://stackoverflow.com/questions/18122628/how-to-search-for-a-commit-message-on-github) in git commit messages anymore.
The commit `c1b3fd165b2ec97fb254a76eaa3900bc4857a357` says:
>
> Print warning when bundler is run by root. When a user runs bundle install with sudo bundler will print a warning, letting
> them know of potential consequences.
>
>
> closes [#2936](https://github.com/bundler/bundler/issues/2936)
>
>
>
Reading this issue, you understand the real reason you should not use the `root` user:
>
> Running sudo bundle install can cause huge and cascading problems for
> users trying to install gems on OS X into the system gems. We should
> print a warning and explain that Bundler will prompt for sudo if it's
> needed. We should also warn people that sudo bundle will break git
> gems, because they have to be writable by the user that Bundler runs
> as.
>
>
>
|
How can I print a BigInt in JavaScript without losing precision?
For example I have a long integer like BigInt(714782523241122198), is it possible to convert it to a string without losing any digits? I want to do it natively.
| You have to either put an `n` after the number, or put it in quotes, since (as currently written) you have a Number, which is bigger than the max number representable in JavaScript, which is 2^53 or 9007199254740992.
```
console.log(BigInt(714782523241122198).toLocaleString())
console.log(BigInt("714782523241122198").toLocaleString())
console.log((714782523241122198n).toLocaleString())
```
To be clear, what you are currently doing is equivalent to:
```
const x = 714782523241122198
// x has already lost precision!
const y = BigInt(x);
// y was created with an imprecise number
```
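The precision loss happens before `BigInt()` is even called: the bare literal is parsed as an IEEE-754 double. The same rounding can be shown in Python, whose `float` is the same 64-bit type (while its `int` is arbitrary-precision, like `BigInt`):

```python
n = 714782523241122198        # a Python int is arbitrary-precision, like BigInt
assert n > 2 ** 53            # ...but n exceeds a double's exact-integer range

# Round-tripping through a 64-bit float (what a bare JS Number literal is)
# silently lands on a nearby representable value, not n itself:
assert int(float(n)) != n
```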
|
What does ":&" mean in an Ansible/jinja2 YAML file?
What does ":&" mean in an Ansible/Jinja2 YAML file?
For example, in this line:
```
hosts: test-instances:&{{ target_host | default('None') }}
```
| It is an intersection of two hosts groups in Ansible (it is not a Jinja2 syntax and is not used except for the `hosts` declaration).
In your example, the play will run only on the host (or host group) specified in the `target_host` variable as long as it is listed in the `test-instances` inventory group.
If `target_host` is not specified or `target_host` is not listed in the `test-instances`, the play will be skipped (assuming there is no host named `None`).
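Conceptually, `:&` is nothing more than a set intersection over inventory groups; a toy Python sketch of the selection logic (the group contents here are made up):

```python
test_instances = {"web1", "web2", "db1"}   # hypothetical inventory group
target_host = {"web2"}                     # e.g. passed via --extra-vars

selected = test_instances & target_host    # hosts the play actually runs on
assert selected == {"web2"}

# An unset target falls back to the nonexistent host 'None',
# so the intersection is empty and the play is skipped:
assert test_instances & {"None"} == set()
```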
Per [Working with Patterns](https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html):
>
> You can also specify the intersection of two groups. This would mean the hosts must be in the group webservers and the host must also be in the group staging:
>
>
>
> ```
> webservers:&staging
>
> ```
>
>
|
Input data length must be a multiple of cipher's block size in AES CTR
I encrypt a string using Dart's encrypt package. The code I encrypted is below.
```
String encrypt(String kelime) {
final key = Key.fromUtf8('H4WtkvK4qyehIe2kjQfH7we1xIHFK67e'); //32 length
final iv = IV.fromUtf8('HgNRbGHbDSz9T0CC');
final encrypter = Encrypter(AES(key, mode: AESMode.cbc));
final encrypted = encrypter.encrypt(kelime, iv: iv);
return encrypted.base64;
}
```
Then I decode the encrypted data with the same package and I get this error Input data length must be a multiple of cipher's block size. After some research, I learned that the encrypt package had trouble deciphering the AES encryption algorithm. I have learned that the encrypted word can be decrypted with the Pointycastle package. Code below
```
String decryptt(String cipher) {
final key = Key.fromUtf8('H4WtkvK4qyehIe2kjQfH7we1xIHFK67e');
final iv = IV.fromUtf8('HgNRbGHbDSz9T0CC');
final encryptedText = Encrypted.fromUtf8(cipher);
final ctr = pc.CTRStreamCipher(pc.AESFastEngine())
..init(false, pc.ParametersWithIV(pc.KeyParameter(key.bytes), iv.bytes));
Uint8List decrypted = ctr.process(encryptedText.bytes);
print(String.fromCharCodes(decrypted));
return String.fromCharCodes(decrypted);
}
```
When I decrypt data encrypted with pointycastle I get an output like this.
>
> **có¥ÄÐÒË.å$[~?q{.. 9**
>
>
>
The word I encrypt is
>
> **Hello**
>
>
>
The Dart packages I use:
- <https://pub.dev/packages/pointycastle>
- <https://pub.dev/packages/encrypt>
| I cannot reproduce the problem when decrypting with AES/CTR and the *encrypt* package.
The following code with an encryption and associated decryption runs fine on my machine:
```
final key = enc.Key.fromUtf8('H4WtkvK4qyehIe2kjQfH7we1xIHFK67e'); //32 length
final iv = enc.IV.fromUtf8('HgNRbGHbDSz9T0CC');
// Encryption
String kelime = 'The quick brown fox jumps over the lazy dog';
final encrypter = enc.Encrypter(enc.AES(key, mode: enc.AESMode.ctr, padding: null));
final encrypted = encrypter.encrypt(kelime, iv: iv);
final ciphertext = encrypted.base64;
print(ciphertext);
// Decryption
final decrypter = enc.Encrypter(enc.AES(key, mode: enc.AESMode.ctr, padding: null));
final decrypted = decrypter.decryptBytes(enc.Encrypted.fromBase64(ciphertext), iv: iv);
final decryptedData = utf8.decode(decrypted);
print(decryptedData);
```
[CTR](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CTR) is a stream cipher mode that does not require padding. Unlike most libraries, the *encrypt* package does not implicitly disable padding for CTR mode, so this must happen *explicitly* (`padding: null`). Otherwise, when decrypting with other libraries (such as *PointyCastle*), the padding bytes will generally not be removed.
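Why no padding? CTR turns a block cipher into a stream cipher: it encrypts successive counter blocks and XORs the resulting keystream with the data byte-for-byte, so ciphertext length always equals plaintext length. A deliberately non-cryptographic Python sketch of that structure (SHA-256 stands in for the AES block function; never use this for real encryption):

```python
import hashlib

def toy_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with a keystream built from (key, nonce, block counter)."""
    out = bytearray()
    for i, byte in enumerate(data):
        block, off = divmod(i, 32)  # 32-byte "blocks" from SHA-256
        keystream = hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        out.append(byte ^ keystream[off])
    return bytes(out)

key, nonce = b"k" * 32, b"n" * 16
ct = toy_ctr(key, nonce, b"Hello")
assert len(ct) == 5                         # no padding: 5 bytes in, 5 bytes out
assert toy_ctr(key, nonce, ct) == b"Hello"  # decryption is the same XOR
```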
Note that in the posted code you are using CBC mode for encryption, not CTR mode. Maybe the modes you use for encryption and decryption just don't match.
By the way, a static IV is generally insecure, especially for CTR (but OK for testing purposes), s. [here](https://crypto.stackexchange.com/questions/2991/why-must-iv-key-pairs-not-be-reused-in-ctr-mode).
---
The decryption also works if the decryption block in the above code is replaced by a decryption with *PointyCastle*:
```
// Decryption
final encryptedText = enc.Encrypted.fromBase64(ciphertext);
final ctr = pc.CTRStreamCipher(pc.AESFastEngine())..init(false, pc.ParametersWithIV(pc.KeyParameter(key.bytes), iv.bytes));
final decrypted = ctr.process(encryptedText.bytes);
final decryptedData = utf8.decode(decrypted);
print(decryptedData);
```
|
Build array of dates in last week, this week and next week
I'm constantly tripping over things with regards to dates in Python. In my webapp I want to show every day of three weeks of a calendar: The last week, the current week and the following week, with Monday denoting the beginning of a week.
The way I would currently approach this is stepping back through dates until I hit Monday and then subtract a further seven days and then add 20 to build the three-week range... But this feels *really* clunky.
Does Python's have a concept of weeks or do I have to manually bodge it around with days?
Edit: Now I code it out, it's not too horrific but I do wonder if there's not something slightly better, again with a concept of weeks rather than just days.
```
today = datetime.date.today()
last_monday = today - datetime.timedelta(days=today.weekday()) - datetime.timedelta(days=7)
dates = [last_monday + datetime.timedelta(days=i) for i in range(0, 21)]
```
| Nope, that's pretty much it. But a list comprehension, basing off the [`datetime.date.weekday()`](http://docs.python.org/2/library/datetime.html#datetime.date.weekday) result, should be easy enough:
```
today = datetime.date(2013, 6, 26)
dates = [today + datetime.timedelta(days=i) for i in range(-7 - today.weekday(), 14 - today.weekday())]
```
Remember, ranges do not *have* to start at 0. :-)
Demo:
```
>>> import datetime
>>> from pprint import pprint
>>> today = datetime.date(2013, 7, 12)
>>> pprint([today + datetime.timedelta(days=i) for i in range(-7 - today.weekday(), 14 - today.weekday())])
[datetime.date(2013, 7, 1),
datetime.date(2013, 7, 2),
datetime.date(2013, 7, 3),
datetime.date(2013, 7, 4),
datetime.date(2013, 7, 5),
datetime.date(2013, 7, 6),
datetime.date(2013, 7, 7),
datetime.date(2013, 7, 8),
datetime.date(2013, 7, 9),
datetime.date(2013, 7, 10),
datetime.date(2013, 7, 11),
datetime.date(2013, 7, 12),
datetime.date(2013, 7, 13),
datetime.date(2013, 7, 14),
datetime.date(2013, 7, 15),
datetime.date(2013, 7, 16),
datetime.date(2013, 7, 17),
datetime.date(2013, 7, 18),
datetime.date(2013, 7, 19),
datetime.date(2013, 7, 20),
datetime.date(2013, 7, 21)]
```
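If you then want the flat list grouped back into the three calendar weeks, slicing in steps of seven does it; a small sketch along the same lines:

```python
import datetime

today = datetime.date(2013, 7, 12)
monday = today - datetime.timedelta(days=today.weekday())
dates = [monday + datetime.timedelta(days=i) for i in range(-7, 14)]

# Slice the 21 consecutive days into last/this/next week.
last_week, this_week, next_week = (dates[i:i + 7] for i in range(0, 21, 7))
```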
|
R Leaflet map - Draw Line for each row of dataframe
I am trying to create lines on a map with `Leaflet` between latitude/longitude points. Here is a sample input data:
```
segment_id latitude1 longitude1 latitude2 longitude2 len
1 1 48.15387 17.07388 48.15396 17.07387 10.98065
2 1 48.15396 17.07387 48.15404 17.07377 11.31327
3 1 48.15404 17.07377 48.15410 17.07364 11.74550
4 1 48.15410 17.07364 48.15412 17.07349 11.48138
5 1 48.15412 17.07349 48.15412 17.07334 11.63625
6 2 48.15424 17.07307 48.15432 17.07299 10.79304
```
The result of this should be 6 lines `lat1,lng1` -> `lat2,lng2`. I have a hard time working with `addPolylines`, it is creating extra unwanted lines and I am not sure why.
[![enter image description here](https://i.stack.imgur.com/ozJPG.png)](https://i.stack.imgur.com/ozJPG.png)
This is how it should look like, without the extra lines stacked on top of each other :D
Here's my code so far but it's garbage:
```
drawEdges <- function(x) {
d <- cbind(x$latitude1,x$latitude2)
s <- rep(1:nrow(x), each = 2) + (0:1) * nrow(x)
latitudeOut <- d[s]
e <- cbind(x$longitude1,x$longitude2)
t <- rep(1:nrow(x), each = 2) + (0:1) * nrow(x)
longitudeOut <- e[t]
mymap <<- addPolylines(map = mymap,data = x, lng = ~longitudeOut, lat = ~latitudeOut)
}
if (!is.null(edges)){
segments <- split( edges , f = edges$segment_id )
segments
sapply(segments, drawEdges)
}
```
Thank you for helping
| To get the lines joining in sequence you need your data reshaped into a long form, with the points in order.
And to do this without using any spatial objects (e.g. from `library(sp)`) you need to add the lines using a loop.
```
library(leaflet)
### --- reshaping the data ----
## keep the order - but because we're going to split the data, only use odd numbers
## and we'll combine the even's on later
df$myOrder <- seq(from = 1, to = ((nrow(df) * 2) - 1), by = 2)
## put the data in long form by splitting into two sets and then rbinding them
## I'm renaming the columns using setNames, as we need to `rbind` them
## together later
df1 <- setNames(df[, c("segment_id","latitude1","longitude1", "myOrder")],
c("segment_id", "lat","lon", "myOrder"))
df2 <- setNames(df[, c("segment_id","latitude2","longitude2", "myOrder")],
c("segment_id", "lat","lon", "myOrder"))
## make df2's order even
df2$myOrder <- (df2$myOrder + 1)
df <- rbind(df1, df2)
## can now sort the dataframe
df <- df[with(df, order(myOrder)), ]
## and de-dupelicate it
df <- unique(df[, c("segment_id", "lat","lon")])
### -----------------------------
## ----- plotting ---------------
map <- leaflet(data = df) %>%
addTiles() %>%
addCircles()
## without using any spatial objects, you add different lines in a loop
for(i in unique(df$segment_id)){
map <- addPolylines(map, data = df[df$segment_id == i,],
lat = ~lat, lng = ~lon, group = ~segment_id)
}
map
```
[![enter image description here](https://i.stack.imgur.com/ve86H.png)](https://i.stack.imgur.com/ve86H.png)
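The reshaping block above is just a wide-to-long interleave with shared endpoints de-duplicated. If it helps to see the intent without the data.frame bookkeeping, here is the same transform as a plain Python sketch (using sample rows from the question):

```python
# Sample rows: (segment_id, lat1, lon1, lat2, lon2)
rows = [
    (1, 48.15387, 17.07388, 48.15396, 17.07387),
    (1, 48.15396, 17.07387, 48.15404, 17.07377),
    (2, 48.15424, 17.07307, 48.15432, 17.07299),
]

points = []  # long form: one (segment_id, lat, lon) per vertex, in order
for seg, lat1, lon1, lat2, lon2 in rows:
    for p in ((seg, lat1, lon1), (seg, lat2, lon2)):
        if not points or points[-1] != p:  # drop duplicated shared endpoints
            points.append(p)
```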
|
Make a vscode snippet that can use a variable number of arguments
I am new to VSCode. Thinking about code snippets, I looked around for a way to kind of script inside the snippet. I mean to do more than just fill or transform a variable. For example...
This is a simple snippet. I am going to type `rci` for the class initializer. When I enter the method arguments I would like the assignment and documentation + some other things to happen.
`rci<tab>` and then `def initialize(a, b)` to result in something like this...
```
attr_reader :a
attr_reader :b
# @param a [...] ...
# @param b [...] ...
def initialize(a, b)
@a = a
@b = b
end
```
Is it possible? How can it be achieved? There could be any number of arguments. And each argument would trigger another line of the class initializer.
|
```
"Class Initializer": {
"prefix": "rci",
"body": [
"${1/([^,]+)([,\\s]*|)/attr_reader :$1\n/g}",
"${1/([^,]+)([,\\s]*|)/# @param $1 [...]${2:+\n}/g}",
"def initialize($1)",
"${1/([^,]+)((,\\s*)|)/\t@$1 = $1${2:+\n}/g}",
"end"
],
"description": "Initialize Class"
}
```
The key to get it to work for any number of method arguments is to get them into the **same regex capture group**.
Then, with the global flag set, each capture group will trigger the replacement text. So for instance, `/attr_reader :$1\n/g` will get triggered 3 times if you have 3 method arguments.
You will see this `${2:+\n}` in the transforms above. That means if there is a capture group 2, add a newline. The regex is designed so that there is only a capture group 2 if there is another `,` between arguments. So a final `)` after the last argument will not trigger another newline - so the output exactly matches your desired output as to newlines (but you could easily add or remove newlines).
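The snippet-variable transform works like a global regex substitution: each match of capture group 1 emits one copy of the replacement. Python's `re.sub` behaves analogously, which makes the mechanics easy to test (the pattern here is slightly simplified from the snippet's):

```python
import re

args = "a, b, c"
# One replacement is emitted per match of group 1, exactly like the /g transform.
attrs = re.sub(r"([^,]+)(,\s*|)", r"attr_reader :\1\n", args)
assert attrs == "attr_reader :a\nattr_reader :b\nattr_reader :c\n"
```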
Your input must be in the correct form:
`v1, v2, v3`
Here is a demo:
[![demo snippet of multiple arguments](https://i.stack.imgur.com/hlZvw.gif)](https://i.stack.imgur.com/hlZvw.gif)
So again **the necessary form is just `v1,v2,v3`**. There doesn't need to be a space between the arguments but then you would get `def initialize(v1,v2,v3)` without spaces either.
Hit `Tab` after the final argument to trigger completion.
**It turns out snippets are pretty powerful!!**
For a similar question about using multiple arguments, see [VSCode snippet: add multiple objects to a class constructor](https://stackoverflow.com/questions/53998252/vscode-snippet-add-multiple-objects-to-js-constructor/58459793#58459793)
|
Compressing Object in Dot Net
I want to Compress an Object in dot net to reduce its size and then UnCompress it on in my client application.
Thanks,
Mrinal Jaiswal
| I have updated the code; there was a problem with the older version.
Here are functions which serialize and compress an object, and vice versa.
```
public static byte[] SerializeAndCompress(object obj) {
using (MemoryStream ms = new MemoryStream()) {
using (GZipStream zs = new GZipStream(ms, CompressionMode.Compress, true)) {
BinaryFormatter bf = new BinaryFormatter();
bf.Serialize(zs, obj);
}
return ms.ToArray();
}
}
public static object DecompressAndDeserialze(byte[] data) {
using (MemoryStream ms = new MemoryStream(data)) {
using (GZipStream zs = new GZipStream(ms, CompressionMode.Decompress, true)) {
BinaryFormatter bf = new BinaryFormatter();
return bf.Deserialize(zs);
}
}
}
```
Following is how to use it.
```
[Serializable]
class MyClass
{
public string Name { get; set; }
}
static void Main(string[] args) {
MyClass myClassInst = new MyClass();
myClassInst.Name = "Some Data";
byte[] data= SerializeAndCompress(myClassInst);
MyClass desInst = (MyClass)DecompressAndDeserialze(data);
}
```
But there is a catch to compression. In the above example the object serializes to 153 bytes, but the compressed version is 266 bytes. The reason is that for small objects the gzip header and compression metadata alone take at least ~120 bytes. So compress your objects only if they are big enough; if they are less than 300 bytes or so, there is no need to compress them. You can check the compression ratio to see whether an object even benefits from compression.
Another suggestion: compressing data in bulk will always give better compression than compressing objects individually.
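The overhead is easy to measure; here's a quick Python check of the same effect (exact byte counts vary with the compressor and level, so treat the numbers as illustrative):

```python
import gzip

small = b"Some Data"                 # tiny payload, like the example object
large = b"Some Data " * 1000         # bulk, repetitive payload

# Tiny payloads grow: the gzip header/trailer outweighs any savings.
assert len(gzip.compress(small)) > len(small)

# Bulk, repetitive data compresses dramatically.
assert len(gzip.compress(large)) < len(large) // 10
```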
|
In Maxima, how do I express the recurrence for the hermite polynomials?
I'm trying to tell maxima about a recurrence relation for the hermite polynomials:
My first expression is like this:
```
phi[0]:exp(-1/2*x^2);
phi[1]:sqrt(2)*x*phi[0];
wxplot2d([phi[0],phi[1]], [x,-5,5]);
```
So far so good, but I'd like now to define all the others by:
```
phi[n]:sqrt(2/n)*x*phi[n-1] - sqrt((n-1)/n)*phi[n-2];
```
This just bombs (stack overflow). What do I really want to say so that
wxplot2d(phi[10], [x,-5,5]) will give me a sensible picture?
| There's more than one way to handle this. Here is one way that works.
```
(%i2) phi[n](x) := sqrt(2/n)*x*phi[n-1](x) - sqrt((n-1)/n)*phi[n-2](x) $
(%i3) phi[0] : lambda ([x], exp(-1/2*x^2)) $
(%i4) phi[1] : lambda ([x], sqrt(2)*x*phi[0](x)) $
(%i5) phi[0];
(%o5) lambda([x],exp((-1)/2*x^2))
(%i6) phi[1];
(%o6) lambda([x],sqrt(2)*x*phi[0](x))
(%i7) phi[2];
(%o7) lambda([x],sqrt(2)*x^2*%e^-(x^2/2)-%e^-(x^2/2)/sqrt(2))
(%i8) phi[3];
(%o8) lambda([x],
sqrt(2)*x*(sqrt(2)*x^2*%e^-(x^2/2)-%e^-(x^2/2)/sqrt(2))/sqrt(3)
-2*x*%e^-(x^2/2)/sqrt(3))
(%i9) phi[10];
<very large expression here>
(%i10) plot2d (%, [x, -5, 5]);
<nice plot appears>
```
This makes use of so-called array functions. For any integer `n`, `phi[n]` is a lambda expression (unnamed function).
Note that this only works for literal integers (e.g., 0, 1, 2, 3, ...). If you need to work with `phi[n]` where `n` is a symbol, we can look for a different approach.
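For comparison, the same array-function idea can be mimicked in other languages with memoization — which is essentially what Maxima's array functions give you for free. A rough Python sketch of the recurrence:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def phi(n):
    """Return the function x -> phi_n(x), built from the recurrence."""
    if n == 0:
        return lambda x: math.exp(-x * x / 2)
    if n == 1:
        return lambda x: math.sqrt(2) * x * phi(0)(x)
    return lambda x: (math.sqrt(2 / n) * x * phi(n - 1)(x)
                      - math.sqrt((n - 1) / n) * phi(n - 2)(x))

# phi(10) is built instantly; without caching, the naive recursion would
# recompute lower orders exponentially many times (the "bomb" above).
print(phi(2)(0.0))  # matches Maxima's %o7 at x = 0, i.e. -1/sqrt(2)
```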
|
What does this zsh solution to "the argument list is too long" do?
I read in [this answer](https://unix.stackexchange.com/questions/128559/solving-mv-argument-list-too-long) from @Gilles the following:
>
> In zsh, you can load the `mv` builtin:
>
>
>
> ```
> setopt extended_glob
> zmodload -Fm zsh/files b:zf_\*
> mv -- ^*.(jpg|png|bmp) targetdir/
>
> ```
>
>
as a solution to the `"mv: Argument list too long”` problem. The answer suggests using zsh's [`mv`](http://zsh.sourceforge.net/Doc/Release/Zsh-Modules.html#index-mv) (as opposed to GNU's) but what exactly does this line do?:
```
zmodload -Fm zsh/files b:zf_\*
```
| The best way to look at zsh documentation is using `info`.
If you run `info zsh`, you can use the *index* (think of a *book*'s index) to locate the section that describes the `zmodload` command.
Press `i`, then you can enter `zmo` and press `Tab`. You'll get straight to the `zmodload` builtin description which will tell you all about it.
In short, `zmodload -F` loads the module (if not loaded) and enables only the specified *features* from that module.
With `-m`, we enable the features that `m`atch a pattern, here `b:zf_*`. `b:` is for builtin, so the above command loads the `zsh/files` module (see `info -f zsh -n 'The zsh/files Module,'` for details on that) and only enables the builtins whose names start with `zf_`.
```
zmodload -F zsh/files
```
loads the module, but doesn't enable any feature:
```
$ zmodload -FlL zsh/files
zmodload -F zsh/files -b:chgrp -b:chown -b:ln -b:mkdir -b:mv -b:rm -b:rmdir -b:sync -b:zf_chgrp -b:zf_chown -b:zf_ln -b:zf_mkdir -b:zf_mv -b:zf_rm -b:zf_rmdir -b:zf_sync
```
lists the features of that module specifying which are currently enabled (none for now). You'll notice there's both a `mv` and `zf_mv` builtin.
```
$ zmodload -mF zsh/files 'b:zf_*'
$ zmodload -FlL zsh/files
zmodload -F zsh/files -b:chgrp -b:chown -b:ln -b:mkdir -b:mv -b:rm -b:rmdir -b:sync +b:zf_chgrp +b:zf_chown +b:zf_ln +b:zf_mkdir +b:zf_mv +b:zf_rm +b:zf_rmdir +b:zf_sync
```
You'll notice the `zf_mv` builtin has been enabled, but not the `mv` one (same for the other builtins). That means, those builtin versions of the system commands have been enabled, but without overriding the system one:
```
$ type zf_mv
zf_mv is a shell builtin
$ type mv
mv is /bin/mv
```
Now that you have a builtin `mv`, **as `zf_mv`**, not `mv`, you can do:
```
zf_mv -- ^*.(jpg|png|bmp) targetdir/
```
Because `zf_mv` is a builtin, there's no `execve()` system call, so you won't hit the *argument list too long* (`E2BIG`) limit associated with it.
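The limit in question is the kernel's cap on the total size of the argument list passed to `execve()`; exceeding it yields `E2BIG`. You can inspect the cap from any language (a sketch; the value and its availability are system-dependent):

```python
import os

# execve() rejects argument lists larger than ARG_MAX with E2BIG.
# A shell builtin never calls execve(), so the cap does not apply to it.
print(os.sysconf("SC_ARG_MAX"))  # e.g. 2097152 on a typical Linux system
```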
Of course, you can also do:
```
zmodload zsh/files # without -F, all the features are enabled
mv -- ^*.(jpg|png|bmp) targetdir/
```
But beware that this replaces the system's `mv` with the `zsh` builtin equivalent.
To overcome the `E2BIG` `execve()` error (the *argument list too long* error upon executing an external command), `zsh` also provides a `zargs` function.
You run:
```
autoload zargs # in ~/.zshrc if you use it often
```
to mark it for autoloading.
Then you can use:
```
zargs -- ^*.(jpg|png|bmp) -- mv -t targetdir/
```
(here assuming GNU `mv` for the `-t` option). `zargs` will run as many `mv` commands as necessary to avoid the E2BIG (as `xargs` would do).
|
Proper way to implement near-match searching MySQL
I have a table on a MySQL database that has two (relevant) columns, 'id' and 'username'.
I have [read that MySQL](https://stackoverflow.com/a/45650281/8402030) and relational databases in general are not optimal for searching for near matches on strings, so I wonder, what is the industry practice for implementing simple, but not exact match, search functionalities- for example when one searches for accounts by name on Facebook and non-exact matches are shown? I found Apache Lucene when researching this, but this seems to be used for indexing pages of a website, not necessarily arbitrary strings in a database table.
Is there an external tool for this use case? It seems like any SQL query for this task would require a full scan, even if it was simply looking for the inclusion of a substring.
| In your situation I would recommend using Elasticsearch instead of a relational database. This search engine is a powerful tool for implementing search and analytics functionality.
Elasticsearch is also flexible and versatile, with a rich query language using JSON and support for many different types of data.
And of course it supports near-match searching. As you said, MySQL and other relational databases aren't recommended for near-match searching; they weren't designed for this purpose.
**--------------UPDATE------------**
If you want full-text search using a relational database, it's possible, but you might have trouble scaling if your number of users grows a lot. Keep in mind that Elasticsearch is robust and powerful, so you can perform many types of searches easily with it, but it can be more expensive too.
When I proposed Elasticsearch I was thinking about scaling the search. But I've been thinking about your problem since I answered, and I understand that you only need a simple full-text search. To conclude: in the beginning you can use just a relational database for that, and in the future you can move your search to Elasticsearch if it becomes complex.
Follow this guide to do full-text search in Postgresql. <http://rachbelaid.com/postgres-full-text-search-is-good-enough/>
There's another example in MySQL: <https://sjhannah.com/blog/2014/11/03/using-soundex-and-mysql-full-text-search-for-fuzzy-matching/>
Like I said in the comments, it's a trade-off you must make. You can use Elasticsearch from the beginning, or you can choose another database and move to Elasticsearch in the future.
I also recommend this book: **Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems**. I'm reading it at the moment, and it will help you understand this topic.
**--------------UPDATE------------**
To implement near-match searching in ElasticSearch you can use fuzzy matching query. The fuzzy matching query allows you to controls how lenient the matching should be, for example for this query bellow:
```
{
"query": {
"fuzzy": {
"username": {
"value": "julienambrosio",
"fuzziness": 2
}
}
}
}
```
It will match "julienambrosio" as well as near matches such as "julienambrosio1", "julienambrosio12" or "juliembrosio".
You can adjust the level of fuzziness to control how lenient/strict the matching should be.
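`fuzziness` is essentially a bound on edit distance (Elasticsearch uses Damerau-Levenshtein; plain Levenshtein is close enough to see the idea). A minimal Python implementation makes the `"fuzziness": 2` example concrete:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions/deletions/substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

assert levenshtein("julienambrosio", "juliembrosio") == 2     # within fuzziness 2
assert levenshtein("julienambrosio", "julienambrosio1") == 1  # one insertion
```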
Before you build this example you should study Elasticsearch a bit more. There are a lot of courses on Udemy, YouTube, etc.
You can read more about in the official [docs](https://www.elastic.co/guide/index.html).
|
WinForms DataGridView - databind to an object with a list property (variable number of columns)
I have a .NET class I'd like to show in a DataGridView, and the default databinding - setting the DGV's DataSource to the object - produces 90% of my requirements (i.e. it's outputting the public properties correctly and I can add sorting easily).
However, one of the properties I need to bind is a List which contains data which needs to be in separate columns after the other databound items. I'm stuck on how best to implement this.
My class looks something like this:
```
public class BookDetails
{
public string Title { get; set; }
public int TotalRating { get; set; }
public int Occurrence { get; set; }
public List<int> Rating { get; set; }
}
```
Ideally, I'd be able to expand that Rating property into a number of numeric columns to give an output like this at runtime:
Title | Total Rating | Occurrence | R1 | R2 | R3 ... RN
It would also be useful to have Total Rating be calculated as the sum of all the individual ratings, but I'm updating that manually at the moment without issue.
| Like this?
```
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Windows.Forms;
public class BookDetails
{
public string Title { get; set; }
public int TotalRating { get; set; }
public int Occurrence { get; set; }
public List<int> Rating { get; set; }
}
class BookList : List<BookDetails>, ITypedList
{
public PropertyDescriptorCollection GetItemProperties(PropertyDescriptor[] listAccessors)
{
var origProps = TypeDescriptor.GetProperties(typeof(BookDetails));
List<PropertyDescriptor> newProps = new List<PropertyDescriptor>(origProps.Count);
PropertyDescriptor doThisLast = null;
foreach (PropertyDescriptor prop in origProps)
{
if (prop.Name == "Rating") doThisLast = prop;
else newProps.Add(prop);
}
if (doThisLast != null)
{
var max = (from book in this
let rating = book.Rating
where rating != null
select (int?)rating.Count).Max() ?? 0;
if (max > 0)
{
// want it nullable to account for jagged arrays
Type propType = typeof(int?); // could also figure this out from List<T> in
// the general case, but make it nullable
for (int i = 0; i < max; i++)
{
newProps.Add(new ListItemDescriptor(doThisLast, i, propType));
}
}
}
return new PropertyDescriptorCollection(newProps.ToArray());
}
public string GetListName(PropertyDescriptor[] listAccessors)
{
return "";
}
}
class ListItemDescriptor : PropertyDescriptor
{
private static readonly Attribute[] nix = new Attribute[0];
private readonly PropertyDescriptor tail;
private readonly Type type;
private readonly int index;
public ListItemDescriptor(PropertyDescriptor tail, int index, Type type) : base(tail.Name + "[" + index + "]", nix)
{
this.tail = tail;
this.type = type;
this.index = index;
}
public override object GetValue(object component)
{
IList list = tail.GetValue(component) as IList;
return (list == null || list.Count <= index) ? null : list[index];
}
public override Type PropertyType
{
get { return type; }
}
public override bool IsReadOnly
{
get { return true; }
}
public override void SetValue(object component, object value)
{
throw new NotSupportedException();
}
public override void ResetValue(object component)
{
throw new NotSupportedException();
}
public override bool CanResetValue(object component)
{
return false;
}
public override Type ComponentType
{
get { return tail.ComponentType; }
}
public override bool ShouldSerializeValue(object component)
{
return false;
}
}
static class Program
{
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
var data = new BookList {
new BookDetails { Title = "abc", TotalRating = 3, Occurrence = 2, Rating = new List<int> {1,2,1}},
new BookDetails { Title = "def", TotalRating = 3, Occurrence = 2, Rating = null },
new BookDetails { Title = "ghi", TotalRating = 3, Occurrence = 2, Rating = new List<int> {3, 2}},
new BookDetails { Title = "jkl", TotalRating = 3, Occurrence = 2, Rating = new List<int>()},
};
Application.Run(new Form
{
Controls = {
new DataGridView {
Dock = DockStyle.Fill,
DataSource = data
}
}
});
}
}
```
|
Chmod and -r +r
I have tried calling the `chmod` command with the arguments in the wrong order. `chmod file.txt -r` worked for some reason. `chmod file.txt +r`, on the other hand, refused to work. Why is this? For what reason does one command work, and the other not?
| This is a quirk of how GNU chmod handles input, and is not portable to all POSIX-compatible chmod implementations.
Note that the [POSIX `chmod`](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/chmod.html) command-line syntax *requires* the mode to come first, as does [GNU `chmod`](https://www.gnu.org/software/coreutils/manual/html_node/chmod-invocation.html#chmod-invocation) (options should come before the mode, too). Anything else is an undocumented implementation quirk.
---
Now, onto why it happens in this particular implementation:
It's hinted at in [the manual](https://www.gnu.org/software/coreutils/manual/html_node/chmod-invocation.html#chmod-invocation):
>
> Typically, though, ‘`chmod a-w file`’ is preferable, and `chmod -w file` (without the `--`) complains if it behaves differently from what ‘`chmod a-w file`’ would do.
>
>
>
Briefly, options parsed by `getopt` are prefixed with a `-`. Like in `ls -a`, `a` is an option. The long form `ls --all` has `all` as an option. `rm -rf` (equivalent to `rm -r -f`) has both `r` and `f` options.
Everything else is a non-option argument, technically called *operands*. I like to call these *positional* arguments, as their meaning is determined by their relative position. In `chmod`, the first positional argument is the mode and the second positional argument is the file name.
Ideally, a mode should not lead with a `-`. If it does, you should use `--` to force parsing as an operand instead of an option (i.e. use `chmod a-w file` or `chmod -- -w file` instead of `chmod -w file`). This is also [suggested](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/chmod.html#tag_20_17_16) by POSIX.
---
If you look at [the source code](https://git.savannah.gnu.org/cgit/coreutils.git/tree/src/chmod.c?h=v8.30#n435), you'll notice it uses [getopt](https://www.gnu.org/software/libc/manual/html_node/Getopt.html) to parse command-line options. Here, there's special handling for 'incorrect' modes like `-w`:
```
case 'r':
case 'w':
case 'x':
case 'X':
case 's':
case 't':
case 'u':
case 'g':
case 'o':
case 'a':
case ',':
case '+':
case '=':
case '0': case '1': case '2': case '3':
case '4': case '5': case '6': case '7':
/* Support nonportable uses like "chmod -w", but diagnose
surprises due to umask confusion. Even though "--", "--r",
etc., are valid modes, there is no "case '-'" here since
getopt_long reserves leading "--" for long options. */
```
Taking your example:
- `chmod a-r file.txt` would be the *most robust* invocation.
- `chmod +r file.txt` works because the first argument is positionally interpreted as the mode.
- `chmod -r file.txt` still works because the `-r` is interpreted as a short `r` option and special-cased.
- `chmod -- -r file.txt` is correct and works because the `-r` is positionally interpreted as the mode. This differs from the case without `--` because with `--` the `-r` is not interpreted as an *option*.
- `chmod file.txt -r` still works because the `-r` is interpreted as a short `r` option and special-cased. Options are not position-dependent. This technically abuses an undocumented quirk.
- `chmod file.txt +r` does not work because the `+r` is an operand, not an option. The first operand (`file.txt`) is interpreted as a mode ... and fails to parse.
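The bullet points above can be demonstrated in a quick session (a sketch assuming GNU coreutils `chmod` on Linux and an explicit `umask`; BSD/macOS `chmod` rejects some of these forms):

```shell
umask 022
tmp=$(mktemp)
chmod 644 "$tmp"

chmod a-r "$tmp"            # portable: mode first, explicit "who"
stat -c '%a' "$tmp"         # prints 200

chmod a+r "$tmp"            # restore the read bits (back to 644)
chmod "$tmp" -r             # GNU quirk: "-r" parsed as a short option, special-cased
stat -c '%a' "$tmp"         # prints 200 again

rm -f "$tmp"
```

Whereas `chmod "$tmp" +r` in the same session fails, since the first operand (the file name) is then taken as the mode and cannot be parsed.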
|
Why is the gettext alias \_() missing on OS X?
I'm running `OS X Lion` and some of my code uses the `gettext` alias of `_()` but I get this error
```
Fatal error: Call to undefined function _()
```
Here is my env
```
PHP 5.3.6 with Suhosin-Patch (cli) (built: Jun 25 2011 10:41:21)
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
with Xdebug v2.1.1, Copyright (c) 2002-2011, by Derick Rethans
```
I tried using the php option `suhosin.simulation = On` but that didn't change anything so it doesn't seem to be the Suhosin-Patch.
| For the record... This is how you add `gettext` to `OS X Lion`
# Installing ICU
[Download ICU](http://site.icu-project.org/download/48#ICU4C-Download)
Run these commands
```
tar xzvf icu4c-4_8_1-src.tgz
cd icu/source
./runConfigureICU MacOSX
make
sudo make install
```
[Download PHP 5.3.6 sources](http://www.php.net/get/php-5.3.6.tar.gz/from/a/mirror)
Run these commands
```
tar -zxf php-5.3.6.tar.gz
cd ext/intl
phpize
./configure --enable-intl
make
sudo cp modules/intl.so /usr/lib/php/extensions/no-debug-non-zts-20090626/
```
Put this in your php.ini file with
```
extension=intl.so
```
# Installing Gettext
[Download Gettext](http://ftp.gnu.org/gnu/gettext/)
Run these commands
```
tar -zxf gettext-0.18.1.1.tar.gz
cd gettext-0.18.1.1
```
>
> Apple does not ship Gettext and Intl. The problem is that Gettext apparently defines a `stpncpy` function, as does something in Lion.
>
>
>
You need to open `gettext-tools/gnulib-lib/stpncpy.c` and change all references to `stpncpy` into `stpncpy2`
Then run these commands
```
./configure
make
sudo make install
```
Go back to the PHP sources directory:
Run these commands
```
cd ext/gettext
phpize
./configure --with-gettext
make
sudo cp modules/gettext.so /usr/lib/php/extensions/no-debug-non-zts-20090626/
```
And add this to the php.ini file:
```
extension=gettext.so
```
References:
<http://www.ittreats.com/os/php/php-with-intl-and-gettext-on-osx-lion-bertrand-mansion.html>
|
Exchange 2013 mailbox forwarding even though it's disabled
I had a user whose email was forwarded to gmail.com. Lately I disabled that option via ECP, and it now shows no forwarding settings. But his emails still never reach his mailbox.
```
HARED... SMTP test@poland.pl {account@reprezenta... T34
RECEIVE SMTP test@poland.pl {account@reprezenta... T34
RESOLVE ROUTING test@poland.pl {Account@rs.pl} T34
REDIRECT AGENT test@poland.pl {Account@rs.pl} T34
EXPAND AGENT test@poland.pl {account.r@gmail.com} T34
AGENT... AGENT test@poland.pl {Account@rs.pl, account... T34
RESUBMIT AGENT test@poland.pl {Account@rs.pl, account... T34
DROP ROUTING test@poland.pl {account.r@gmail.com} T34
AGENT... AGENT test@poland.pl {account.r@gmail.com} T34
```
And this is with forwarding disabled. Yet if I go to ECP again, I see this message:
![enter image description here](https://i.stack.imgur.com/dqKBp.png)
If the fields are empty when I open the settings, why show this message?
I can confirm now with:
```
[PS] C:\Windows\system32>Get-Mailbox | Where {$_.ForwardingAddress -ne $null}
Name Alias ServerName ProhibitSendQuota
---- ----- ---------- -----------------
Account account exchange Unlimited
```
But I've even run following command:
```
[PS] C:\Windows\system32>Get-Mailbox | Where {$_.ForwardingAddress -ne $null} | Set-Mailbox -ForwardingAddress $null -DeliverToMailboxAndForward $false
[PS] C:\Windows\system32>Get-Mailbox | Where {$_.ForwardingAddress -ne $null}
```
No results. When I go into the GUI, the forwarding address is cleared.
![enter image description here](https://i.stack.imgur.com/h1MIk.png)
I set it again just for a test, and again the message about email forwarding appears.
![enter image description here](https://i.stack.imgur.com/dqKBp.png)
So what's wrong? It's Exchange 2013 -> Version 15.0 (Build 775.38). So CU3.
| So I went further with this investigation. The thing to check here was `forwardingsmtpaddress`, which wasn't empty.
```
get-mailbox -Identity account | fl alias, forwardingaddress, forwardingsmtpaddress
```
It seems to stay set even though forwarding was disabled via the GUI. After I cleared it, everything started working correctly. Why unchecking in the GUI and even the PowerShell commands don't clear `forwardingsmtpaddress` is a bit over my head. It seems to be a bug in Exchange 2013 CU3 as far as I can tell.
```
Get-Mailbox | Where {$_.ForwardingAddress -ne $null} | Set-Mailbox -ForwardingAddress $null -ForwardingSmtpAddress $null -DeliverToMailboxAndForward $false
```
This cleaned it up (although it only works when forwarding is enabled). I would be happy to know why this is the way it is. I did some checking, and it seems the `forwardingsmtpaddress` field doesn't normally get set when setting up contact forwarding, so why was it set this time? Oh well. Hopefully someone will find this useful.
|
cloudformation error - Value of property SubnetIds must be of type List of String
When I run `create-stack`, it fails while creating the Elasticsearch domain with this error: "Value of property SubnetIds must be of type List of String"
Here is the snippet of the CF template...
```
Parameters:
SubnetIds:
Type: 'List<AWS::EC2::Subnet::Id>'
Description: Select a VPC subnet to place the instance. Select Multiple Subnets for multi-AZ deployments
Resources:
ElasticsearchDomain:
Type: 'AWS::Elasticsearch::Domain'
Properties:
AccessPolicies:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
AWS: '*'
Action:
- 'es:ESHttp*'
Resource: !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${DomainName}/*'
DomainName: !Ref 'DomainName'
EBSOptions:
EBSEnabled: !Ref EBSEnabled
VolumeSize: !Ref EBSVolumeSize
VolumeType: gp2
ElasticsearchClusterConfig:
DedicatedMasterCount: !If [HasDedicatedMasterNodes, !Ref DedicatedMasterCount, !Ref 'AWS::NoValue']
DedicatedMasterEnabled: !If [HasDedicatedMasterNodes, true, false]
DedicatedMasterType: !If [HasDedicatedMasterNodes, !Ref DedicatedMasterType, !Ref 'AWS::NoValue']
InstanceCount: !Ref ClusterInstanceCount
InstanceType: !Ref ClusterInstanceType
ZoneAwarenessEnabled: !If [HasSingleClusterInstance, false, true]
ElasticsearchVersion: !Ref ElasticsearchVersion
EncryptionAtRestOptions: !If [HasKmsKey, {Enabled: true, KmsKeyId: !Ref KMSEncryptionKey}, !Ref 'AWS::NoValue']
SnapshotOptions:
AutomatedSnapshotStartHour: 0
VPCOptions:
SecurityGroupIds:
- !Ref SecurityGroup
SubnetIds:
- !Ref SubnetIds
```
Tried it like this as well but doesn't work -
```
SubnetIds:
- [!Ref SubnetIds]
```
| Try using the following code snippet:
```
VPCOptions:
SubnetIds: !Ref ESSubnetsID
SecurityGroupIds: !Ref ESSecurityGroup
```
And update the parameters section with the following:
```
ESSubnetsID:
Description: Choose which subnets the Elasticsearch cluster should use
Type: 'List<AWS::EC2::Subnet::Id>'
Default: 'subnet-1,subnet-2'
ESSecurityGroup:
Description: Select the SecurityGroup to use for the Elasticsearch cluster
Type: 'List<AWS::EC2::SecurityGroup::Id>'
Default: 'sg-1,sg-2'
```
Make sure you pass a **list** of subnet IDs.
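As to why the original snippet fails (my reading of the template, not stated in the error message): `SubnetIds` is declared as `List<AWS::EC2::Subnet::Id>`, so `!Ref SubnetIds` already yields a list of subnet IDs. Nesting it under a `-` wraps that list inside another list, which no longer type-checks as a `List of String`:

```
VPCOptions:
  SecurityGroupIds:
    - !Ref SecurityGroup      # SecurityGroup is a single ID, so wrapping it in a list is fine
  SubnetIds: !Ref SubnetIds   # SubnetIds is already a list; do not wrap it under "-"
```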
|
Test assertions for tuples with floats
I have a function that returns a tuple that, among others, contains a float value. Usually I use `assertAlmostEquals` to compare those, but this does not work with tuples. Also, the tuple contains other data-types as well. Currently I am asserting every element of the tuple individually, but that gets too much for a list of such tuples. Is there any good way to write assertions for such cases?
Consider this function:
```
def f(a):
return [(1.0/x, x * 2) for x in a]
```
Now I want to write a test for it:
```
def testF(self):
self.assertEqual(f(range(1,3)), [(1.0, 2), (0.5, 4)])
```
This will fail because the result of `1.0/2` is not exactly `0.5`. Can anyone recommend a good way of writing such an assertion in a readable way?
**Edit**: Actually `1.0/2` is exactly `0.5`, but you get my meaning.
| Well, how about pimping up your function with a couple of zips:
```
def testF(self):
for tuple1, tuple2 in zip(f(range(1,3)), [(1.0, 2), (0.5, 4)]):
for val1, val2 in zip(tuple1, tuple2):
if type(val2) is float:
self.assertAlmostEquals(val1, val2, 5)
else:
self.assertEquals(val1, val2)
```
My premise here is that it is better to use multiple asserts in a loop, so as to get the exact values where it breaks, vs. using a single assert with `all()`.
ps. If you have other numeric types you want to use assertAlmostEquals for, you can change the if above to e.g. `if type(val2) in [float, decimal.Decimal]:`
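A more self-contained variant (a sketch using `math.isclose`, available since Python 3.5, instead of `assertAlmostEquals`; the tolerance defaults below are arbitrary) can also recurse into nested tuples:

```python
import math

def almost_equal(a, b, rel_tol=1e-9, abs_tol=1e-7):
    """Compare nested sequences element-wise, treating floats approximately."""
    if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
        return len(a) == len(b) and all(
            almost_equal(x, y, rel_tol, abs_tol) for x, y in zip(a, b))
    if isinstance(a, float) or isinstance(b, float):
        return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
    return a == b

# e.g. in the test: self.assertTrue(almost_equal(f(range(1, 3)), [(1.0, 2), (0.5, 4)]))
```

The trade-off is that a failing `assertTrue` doesn't tell you *which* element broke, which is exactly why the per-element loop above can be preferable.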
|
C#-like signals or AS3-like events in Scala?
How do I implement C# style signals or AS3 style events in Scala? I mean mostly observer pattern. To dispatch change events. Thanks.
Or (what's even better) are there any libraries/framework which do just that?
| I'd suggest reading [Deprecating the Observer Pattern](http://lamp.epfl.ch/~imaier/pub/DeprecatingObserversTR2010.pdf) first, for inspiration and to get a taste of where things are going.
Then, have a look at [how scala-swing uses "reactions"](https://stackoverflow.com/questions/938437/scala-swing-event-framework-where-do-i-add-my-reactors/938654#938654) to see how you can do this kind of thing in a library.
Finally, note that first-class functions make implementing the observer pattern yourself [relatively easy](https://stackoverflow.com/questions/3755453/scala-listener-observer/3755538#3755538).
|
RAWINPUT strange behaviour
I'm having some strange behaviour with RAWINPUT. The following code below WORKS:
```
case WM_INPUT:
{
UINT rawInputSize;
GetRawInputData((HRAWINPUT)(lParam), RID_INPUT, nullptr, &rawInputSize, sizeof(RAWINPUTHEADER));
LPBYTE inputBuffer = new BYTE[rawInputSize];
GetRawInputData((HRAWINPUT)(lParam), RID_INPUT, inputBuffer, &rawInputSize, sizeof(RAWINPUTHEADER));
RAWINPUT* inp = (RAWINPUT*)inputBuffer; // valid
}
```
But the following does NOT WORK:
```
case WM_INPUT:
{
UINT rawInputSize;
BYTE inputBuffer[40];
GetRawInputData((HRAWINPUT)(lParam), RID_INPUT, inputBuffer, &rawInputSize, sizeof(RAWINPUTHEADER)); // returns error code
RAWINPUT* inp = (RAWINPUT*)inputBuffer;
}
```
Nor:
```
case WM_INPUT:
{
UINT rawInputSize;
RAWINPUT inputBuffer;
GetRawInputData((HRAWINPUT)(lParam), RID_INPUT, &inputBuffer, &rawInputSize, sizeof(RAWINPUTHEADER)); // returns error code
}
```
Both fail at `GetRawInputData()`, which returns a general error code (with no details).
The working solution I posted first is not an option: I cannot do a heap allocation at every keystroke or mouse action; I must use the stack.
Why does the two last fail?
| The 4th parameter of [GetRawInputData](http://msdn.microsoft.com/en-us/library/windows/desktop/ms645596.aspx), `pcbSize`, has two functions. Upon entry, it specifies the length of the available buffer. Upon exit, it contains the length of the data actually used. This is a fairly common concept in the Windows API.
In your first case, first call, the input value is not used and only the required length is stored there upon exit. The second call works, because the required length is still there.
But in your second and third examples, you leave the variable uninitialized, so it contains random junk from the stack. Apparently something near `0`, which makes the function fail. But that is just speculation; there are of course many ways this can fail, crash, etc.
You should initialize the variable like this:
```
RAWINPUT inputBuffer;
UINT rawInputSize = sizeof(inputBuffer);
GetRawInputData((HRAWINPUT)(lParam), RID_INPUT, &inputBuffer, &rawInputSize, sizeof(RAWINPUTHEADER));
```
---
As a side note, be careful when using that `BYTE[]` array as in your 2nd example -- a user named Alexander Belyakov made this helpful comment on the API docs page:
>
> On Win64, GetRawInputData would return -1 with ERROR\_NOACCESS, unless the pData buffer is aligned by 8 bytes.
>
>
>
|
How do I enable WebAPI response tracing?
I have a service built on WebAPI 4.0 but we have an issue with some clients receiving JSON instead of XML, how can I see what's going on inside the WebAPI on the live service?
| Ensure that the compiled and deployed code base has this set:
```
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
// etc.
config.EnableSystemDiagnosticsTracing();
// etc.
}
}
```
Then add this to your `Web.config` file:
```
<configuration>
<system.diagnostics>
<trace autoflush="false" indentsize="4">
<listeners>
<add name="myListener"
type="System.Diagnostics.TextWriterTraceListener"
initializeData="E:\CompressedLogs\Service1-WebApi-TraceOutput.log" />
<remove name="Default" />
</listeners>
</trace>
</system.diagnostics>
</configuration>
```
You should see this stuff in the log file:
```
w3wp.exe Information: 0 : Request, Method=GET, Url=http://MonkeyChops.potato.org/v1/PantsSpiderman/431?api_key=0, Message='http://MonkeyChops.potato.org/v1/PantsSpiderman/431?api_key=0'
w3wp.exe Information: 0 : Message='PantsSpiderman', Operation=DefaultHttpControllerSelector.SelectController
w3wp.exe Information: 0 : Message='Spandex.MonkeyChops.WebApi.Controllers.PantsSpidermanController', Operation=DefaultHttpControllerActivator.Create
w3wp.exe Information: 0 : Message='Spandex.MonkeyChops.WebApi.Controllers.PantsSpidermanController', Operation=HttpControllerDescriptor.CreateController
w3wp.exe Information: 0 : Message='Selected action 'Get(String id)'', Operation=ApiControllerActionSelector.SelectAction
w3wp.exe Information: 0 : Message='Parameter 'id' bound to the value '431'', Operation=ModelBinderParameterBinding.ExecuteBindingAsync
w3wp.exe Information: 0 : Message='Model state is valid. Values: id=431', Operation=HttpActionBinding.ExecuteBindingAsync
w3wp.exe Information: 0 : Message='Will use same 'XmlMediaTypeFormatter' formatter', Operation=XmlMediaTypeFormatter.GetPerRequestFormatterInstance
w3wp.exe Information: 0 : Message='Selected formatter='XmlMediaTypeFormatter', content-type='text/xml; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
w3wp.exe Information: 0 : Message='Action returned 'StatusCode: 200, ReasonPhrase: 'OK', Version: 1.1, Content: System.Net.Http.ObjectContent`1[[Spandex.MonkeyChops.Models.Pants.result, Spandex.MonkeyChops.WebApi, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], Headers:
{
Cache-Control: public, must-revalidate, max-age=86400
Content-Type: text/xml; charset=utf-8
}'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
w3wp.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
w3wp.exe Information: 0 : Operation=PantsSpidermanController.ExecuteAsync, Status=200 (OK)
w3wp.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://MonkeyChops.potato.org/v1/PantsSpiderman/431?api_key=0, Message='Content-type='text/xml; charset=utf-8', content-length=unknown'
w3wp.exe Information: 0 : Operation=XmlMediaTypeFormatter.WriteToStreamAsync
w3wp.exe Information: 0 : Operation=PantsSpidermanController.Dispose
```
|
How to parse field to string with Serde?
I have a custom field in my `JSON` which is coming dynamic and needs to be parsed to struct which has a `HashMap` field like following:
```
#[macro_use]
extern crate serde_derive;
extern crate serde;
extern crate serde_json;
use std::collections::HashMap;
#[derive(Serialize, Deserialize)]
struct MyStruct {
field1: String,
custom: HashMap<String, String>,
}
fn main() {
let json_string = r#"{"field1":"3","custom":{"custom1":"15000","custom2":"60"}}"#;
let my_struct = serde_json::from_str::<MyStruct>(json_string).unwrap();
println!("{}", serde_json::to_string(&my_struct).unwrap());
}
```
It works when my JSON string has string values in the `custom` field, which can easily be parsed to `String`.
But the problem is that my actual JSON string is:
```
let json_string_wrong = r#"{"field1":"3","custom":{"custom1":15000,"custom2":"60"}}"#; // Need to parse this
```
How to handle such castings in serde?
| Serde provides [`serde_json::Value`](https://docs.serde.rs/serde_json/value/enum.Value.html). It is an enum that covers all the JSON data types:
```
pub enum Value {
/// Represents a JSON null value.
Null,
/// Represents a JSON boolean.
Bool(bool),
/// Represents a JSON number, whether integer or floating point.
Number(Number),
/// Represents a JSON string.
String(String),
/// Represents a JSON array.
Array(Vec<Value>),
/// Represents a JSON object.
Object(Map<String, Value>),
}
```
You can use `serde_json::Value` as the value type for your `HashMap`. It is possible to pull data out of a `serde_json::Value` using [serde\_json::from\_value](https://docs.serde.rs/serde_json/value/fn.from_value.html) or pattern matching. In your case I would use pattern matching, because only the number types need to be converted into a `String`; the rest stay the same.
But you'll need to add one more step after deserialization, such as:
- Creating shadow field for `custom`, will be filled after deserialization.
- Or constructing new struct which contains `custom` as `HashMap<String, String>`.
- Add a function to convert `HashMap<String, Value>` to `HashMap<String, String>`.
---
Implementation of this trait can solve your problem.
```
trait ToStringStringMap {
fn to_string_string_map(&self) -> HashMap<String, String>;
}
impl ToStringStringMap for HashMap<String, Value> {
fn to_string_string_map(&self) -> HashMap<String, String> {
self.iter()
.map(|(k, v)| {
let v = match v.clone() {
e @ Value::Number(_) | e @ Value::Bool(_) => e.to_string(),
Value::String(s) => s,
_ => {
println!(r#"Warning : Can not convert field : "{}'s value to String, It will be empty string."#, k);
"".to_string()
}
};
(k.clone(), v)
})
.collect()
}
}
```
**Example**: [Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=ba1a9ca08e89a2517fd3c0fa025d86f6)
**Note**: The trait's name is not well chosen; suggestions are welcome.
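The same match-based conversion can be sketched without pulling in serde at all; the `Value` enum below is a simplified stand-in for `serde_json::Value` (illustration only, not serde's actual definition):

```rust
use std::collections::HashMap;

// Simplified stand-in for serde_json::Value (illustration only).
enum Value {
    Number(i64),
    Bool(bool),
    Str(String),
}

// Numbers and booleans are stringified; strings pass through unchanged.
fn to_string_map(m: &HashMap<String, Value>) -> HashMap<String, String> {
    m.iter()
        .map(|(k, v)| {
            let s = match v {
                Value::Number(n) => n.to_string(),
                Value::Bool(b) => b.to_string(),
                Value::Str(s) => s.clone(),
            };
            (k.clone(), s)
        })
        .collect()
}

fn main() {
    let mut custom = HashMap::new();
    custom.insert("custom1".to_string(), Value::Number(15000));
    custom.insert("custom2".to_string(), Value::Str("60".to_string()));
    let converted = to_string_map(&custom);
    println!("{:?}", converted.get("custom1")); // Some("15000")
}
```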
|
How do you convey intent regarding pointers?
I've always used the tried-and-true "Document and Pray" approach when it comes to conveying the fact that a pointer result from a function (either as a return value or a reference parameter) should not be freed by client code.
The other day someone showed me this function prototype:
```
const struct foo * const bar(void);
```
When I pointed out that the `const` qualifier on the pointer was stripped and ignored by the compiler (along with generating a warning with -Wall), they said, "I know, I use it to let consumers know that they shouldn't free the pointer." And the kicker is that they really didn't seem to think there was a problem with it.
My question is whether or not it could be considered common practice (I've never seen it before) to use this sort of jiggery-pokery to try and get superfluous or even questionable language constructs to "self-document" an API?
If you were consuming an API with this prototype in it, would you come away with the notion that the pointer was "hands off", or that the library writer was in dire need of medication?
| In general I fully support annotations, even if the compiler doesn't do anything different with them. I agree with your coworker that it's a good idea that can help clarify the code. I'm not sure that their particular annotation is helpful, unless most of your team comes from a C++ background, though. I really don't like that it generates a warning.
Unfortunately, C allows too many unsafe ways to use pointers. Even more unfortunately, sometimes those unsafe ways of using pointers are the best solution to a given problem. So your options are to write them anyway and pray, as you say, or to try to find ways to minimize them and document them when they are needed. If you can get the compiler to help you, all the better!
One way to minimize these types of issues is to use opaque pointers. For example, let's say you have a library function that returns a pointer to some memory that represents the pixels of an image. Instead of returning a raw pointer, you could return something called an `ImageRef`. What's an image ref? To a user of the library it's an opaque object that the header defines like so:
```
typedef struct image_t* ImageRef;
```
What's an `image_t`? Users of the library have no idea. It's never defined for them. Since `ImageRef` is a pointer, it doesn't need to be. You can write functions that take an `ImageRef` and other data and perform work for the caller rather than just giving them a pointer and going to town. Behind the scenes, it can be defined as:
```
struct image_t {
int width;
int height;
int bytesPerPixel;
unsigned char* pixels;
};
```
And your library can have access to that definition and be very careful about how it uses the `pixels` pointer internally, but publicly, if the definition of that structure isn't published anywhere, there's no way to get at the pointer and mess things up.
A couple other things I forgot to mention. You can use assertions in your debug builds to ensure things like that pointers are not NULL. Every function that takes a pointer can have an assertion as its first line:
```
assert (ptr != NULL);
```
This will help you find issues during debugging, hopefully before you release.
Another thing you can do is nullability annotations, even if they're not supported by your language. In C you can simply make a couple of macros:
```
#define NONNULL
#define NULLABLE
```
You can then put those in the function prototypes for functions that take pointers:
```
SomeStruct* NULLABLE foo(SomeType* NONNULL somePtr);
```
The above will let a developer calling the function know that `somePtr` must not be NULL. It's on them to check it before calling the function. It also lets them know that the return value may be NULL so they have to check it on return. You could come up with your own annotations defined the same way for things like passing ownership to the caller. Perhaps something like:
```
#define STRONG // Caller must free
#define WEAK // Only a reference, caller must not free
```
Since these are all `#define`d to nothing, the compiler ignores them, but readers can see what they mean. It would be nice if the compiler could help with enforcement, but it's better than nothing.
FWIW, [this Stack Overflow Question](https://stackoverflow.com/questions/21398791/annotating-c-c-code) seems to indicated that MSVC, llvm, and gcc all have some sorts of annotations for C. It looks like they're all different, but perhaps some preprocessor magic could unite them for your code (or maybe you can standardize on a single compiler)?
|
Create but not start a task with a custom task factory?
I'd like to be able to create a task without starting it, similar to running `var a = new Task(); a.Start();` but with a custom factory. Factories provide `StartNew()`, but I can't find a method to separate the two actions. Is this possible?
| A `TaskFactory` is basically two sets of default options (creation and continuation), a default cancellation token, and a task scheduler.
You can specify the cancellation token and the creation options when you just new up a task - and then start it on whatever scheduler you want. So:
```
Task task = new Task(action,
factory.CancellationToken,
factory.CreationOptions);
...
task.Start(factory.Scheduler);
```
should do the job other than continuation options. The continuation options are only relevant when you add a continuation anyway, which you can specify directly. Is there anything which isn't covered by this?
(One thing to note is that the Task-based Asynchronous Pattern generally revolves around "hot" tasks which are started by the time you see them anyway. So you probably want to avoid exposing the unstarted tasks too widely.)
|