prompt (string, 49 to 4.73k chars) | response (string, 238 to 35k chars)
---|---|
Scope of variable inside switch case
```
package main
import "fmt"
func main() {
    x := 10
    switch x {
    case 0:
        y := 'a'
        fmt.Printf("%c\n", y)
    case 1:
        // y = 'b' // this can't compile
        y := 'b'
        fmt.Printf("%c\n", y)
    default:
        y := '-'
        fmt.Printf("%c\n", y)
    }
}
```
It seems `y` in each `case` is local to the containing `case`, not visible to the other cases.
As far as I know:
- `{}` can create a local scope, but there is no `{}` for each `case`.
- Java is different.
I've searched the internet but didn't find a clear definition. I know we can declare a switch-scoped variable in the `initialization` part of `switch`.
**Questions**:
1. Can you confirm that, in Golang, the scope of a variable declared inside a switch `case` is local to that case?
2. Is this a special design? As mentioned above, there is no `{}` for each `case`.
| [Spec: Blocks:](https://go.dev/ref/spec#Blocks)
>
> A *block* is a possibly empty sequence of declarations and statements within matching brace brackets.
>
>
>
> ```
> Block = "{" StatementList "}" .
> StatementList = { Statement ";" } .
>
> ```
>
> In addition to explicit blocks in the source code, there are implicit blocks:
>
>
> 1. The *universe* block encompasses all Go source text.
> 2. Each [package](https://go.dev/ref/spec#Packages) has a *package block* containing all Go source text for that package.
> 3. Each file has a *file block* containing all Go source text in that file.
> 4. Each ["if"](https://go.dev/ref/spec#If_statements), ["for"](https://go.dev/ref/spec#For_statements), and ["switch"](https://go.dev/ref/spec#Switch_statements) statement is considered to be in its own implicit block.
> 5. **Each clause in a ["switch"](https://go.dev/ref/spec#Switch_statements) or ["select"](https://go.dev/ref/spec#Select_statements) statement acts as an implicit block.**
>
>
> Blocks nest and influence scoping.
>
>
>
As you can see in the spec: each clause (e.g. `case`) acts as an implicit block without explicitly using `{}`.
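To make the contrast concrete, here is a small sketch (not from the question): a variable declared in the `switch` statement's init statement lives in the implicit block of the whole `switch` and is visible in every clause, while a variable declared inside a clause stays local to that clause:
```
package main

import "fmt"

func main() {
    x := 10
    switch y := 'a'; x {
    case 10:
        fmt.Printf("%c\n", y) // y from the init statement is visible in every clause
        z := 'b'              // z is scoped to this clause only
        fmt.Printf("%c\n", z)
    default:
        fmt.Printf("%c\n", y) // y is still visible here; z is not
    }
}
```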
|
Laravel 4.2 Migrations - Alter decimal precision and scale without dropping column
I wish to increase decimal precision and scale for a decimal column.
I am aware that I can drop the column, and re-create it, but doing so will mean losing the data in the column.
Is there a way using Laravel Schema::table that I can alter the precision and scale of the column without dropping it?
e.g. something like:
```
Schema::table('prices', function(Blueprint $t) {
$t->buy_price->decimal(5,2);
});
```
| Just create another `migration`, and in the `up` method add the following code:
```
public function up()
{
// Change db_name and table_name
DB::select(DB::raw('ALTER TABLE `db_name`.`table_name` CHANGE COLUMN `buy_price` `buy_price` decimal(10,2) NOT NULL;'));
}
```
Also in the `down` method just set the old value so you can `roll-back`:
```
public function down()
{
// Change db_name and table_name
DB::select(DB::raw('ALTER TABLE `db_name`.`table_name` CHANGE COLUMN `buy_price` `buy_price` decimal(5,2) NOT NULL;'));
}
```
Then migrate as usual from the `terminal/command prompt` using `php artisan migrate`.
|
Transmute struct into array in Rust
Let's say we have a structure, all fields of which are of the same sized types:
```
struct Homogeneous {
    a: u64,
    b: u64,
    c: u64,
    d: u64
}
```
And we have a "safe" way to construct it from an array of bytes:
```
impl From<[u8; 32]> for Homogeneous {
    fn from(bytes: [u8; 32]) -> Self {
        // helper macro to convert a slice of u8s into a u64
        macro_rules! to_u64 {
            ($slice: expr, $at: expr) => {{
                let ss = &$slice[$at..$at + 8];
                let mut buf = [0u8; 8];
                buf.copy_from_slice(&ss);
                u64::from_ne_bytes(buf)
            }};
        }
        Self {
            a: to_u64!(bytes, 0),
            b: to_u64!(bytes, 8),
            c: to_u64!(bytes, 16),
            d: to_u64!(bytes, 24),
        }
    }
}
```
Which is all good and it works. The question is whether unsafe solution (using [transmute](https://doc.rust-lang.org/stable/std/mem/fn.transmute.html)) is more efficient (safe?), also whether the reverse conversion will not cause UB due to optimizing compiler reordering struct fields?
```
impl From<[u8; 32]> for Homogeneous {
    fn from(slice: [u8; 32]) -> Self {
        unsafe { std::mem::transmute(slice) }
    }
}

impl From<Homogeneous> for [u8; 32] {
    fn from(h: Homogeneous) -> Self {
        unsafe { std::mem::transmute(h) }
    }
}
```
Those conversions work on my x86 processor using rust 1.57 compiler, and I wonder if they will always work, despite the architecture/compiler.
| From the [rustlang reference](https://github.com/rust-lang/reference/blob/8e7d614303b0dec7492e048e63855fcd3b944ec8/src/types/struct.md):
>
> The memory layout of a struct is undefined by default to allow for compiler optimizations like field reordering, but **it can be fixed
> with the [repr attribute](https://github.com/rust-lang/reference/blob/8e7d614303b0dec7492e048e63855fcd3b944ec8/src/type-layout.md#representations)**. In either case, fields may be given in any
> order in a corresponding struct expression; the resulting struct value
> will always have the same memory layout.
>
>
>
This means it is not guaranteed that the fields will be arranged as you declared them, so you have to fix the layout yourself for the conversion to always work.
For example, using `#[repr(C)]`:
```
#[repr(C)]
struct Homogeneous {
    a: u64,
    b: u64,
    c: u64,
    d: u64
}
```
|
Difference between layerX and offsetX in JavaScript
There are different co-ordinate system for JavaScript, such as e.clientX, e.screenX.
I understand those two well, but there are some like e.layerX and e.offsetX. These two are not very clear to me.
Can someone explain those two co-ordinates for me?
| `offsetX`/`offsetY` are a neat extension by Microsoft to mouse event objects, and mean the position of the mouse pointer relatively to the target element. Sadly, they're not implemented by Firefox, and there's discordance among the other browsers about what should be the origin point: IE thinks it's the *content* box, while Chrome, Opera and Safari the *padding* box (which makes more sense, since it's the same origin of absolutely positioned elements).
`layerX`/`layerY` are properties of `MouseEvent` objects defined by Gecko-based browsers (Firefox et al.). Some say they're substitutes for `offsetX`/`offsetY` - they're not. They're the position of the mouse relatively to the "closest positioned element", i.e. an element whose `position` style property is not `static`. That's not the target element if it's statically positioned.
They're supported by Chrome and Opera, but they (`layerX`/`layerY`) [are deprecated](https://developer.mozilla.org/en-US/docs/Web/API/UIEvent/layerX) and going to be removed soon. So forget about them.
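If you need element-relative coordinates that behave consistently across browsers, a common workaround (a minimal sketch, independent of either property, assuming `element` is the DOM node you care about) is to derive them from `clientX`/`clientY` and the element's bounding rectangle:
```
element.addEventListener('mousedown', function (e) {
  var rect = e.currentTarget.getBoundingClientRect();
  var x = e.clientX - rect.left; // position relative to the element's border box
  var y = e.clientY - rect.top;
  console.log(x, y);
});
```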
|
ffmpeg join two mp4 files with ffmpeg on command line
I can successfully join multiple files using the following command:
```
ffmpeg -f concat -i input.txt -codec copy output.mp4
```
The only problem with this command is that you need to read the filepaths from the text file called **input.txt** with the following content:
```
file 'C:\Users\fabio\Downloads\Super\Sharks\01.mp4'
file 'C:\Users\fabio\Downloads\Super\Sharks\02.mp4'
file 'C:\Users\fabio\Downloads\Super\Sharks\03.mp4'
```
Is there a way to achieve the same goal without having to read the filepaths from a file? I have tried the following with no luck:
```
ffmpeg -f concat -i file "C:\a\b\01.mp4" file "C:\a\b\02.mp4" -codec copy output.mp4
ffmpeg -f concat -i "C:\a\b\01.mp4" "C:\a\b\02.mp4" -codec copy output.mp4
```
Do I have to use a different command?
| ## 2019 Update:
As mentioned in the comments, Stack Overflow has a great description of the available options for concatenation, as well as a discussion of which method to use depending on the types of files you're using:
[How to concatenate two MP4 files using FFmpeg?](https://stackoverflow.com/questions/7333232/how-to-concatenate-two-mp4-files-using-ffmpeg)
## Original 2016 Answer:
You should be able to use the [concat protocol](https://trac.ffmpeg.org/wiki/Concatenate#protocol) method to combine the files:
```
ffmpeg -i "concat:input1.mp4|input2.mp4|input3.mp4" -c copy output.mp4
```
In addition, the FFmpeg manual discusses a method specifically for MP4 files, in order to losslessly concatenate them, but it requires that you create temporary files (or named pipes):
```
ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i input2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output.mp4
```
|
How does Groovy resolve method calls with whitespaces
I am wondering why the Groovy compiler isn't capable of correctly resolving the following calls
```
a = { p -> p }
b = { p -> p }
a b 1
```
I would expect that to be interpreted correctly as
```
a(b(1))
```
Or is there any syntax that could be interpreted differently?
Interestingly enough this yields the right result
```
a b { 1 }
```
| It tries to evaluate that as:
```
a( b ).1
```
The way I imagine it, is as if it were a list of symbols, and `collate( 2 )` was called on them...
```
def list = [ 'a', 'b', 'c', 'd', 'e' ]
def pairs = list.collate( 2 )
```
All entries in this list with 2 values are a method/parameter pair, and any single elements left at the end are property access calls
So `a b c d e` would be evaluated as: `a( b ).c( d ).e`
Your second example is an interesting edge case. I am guessing that because of the final closure, the call to `b( Closure )` takes precedence, and so is executed first, followed by the call to `a` on the result.
So given:
```
a = { p -> p + 10 }
b = { c -> c() * 5 }
a b { 1 }
```
The result is `15`
|
Match the first 3 characters of a string to specific column
I have a dataframe, `df`, where I would like to take the first 3 characters of a string from a specific column and place these characters in another column.
**Data**
```
id value stat
aaa 10 aaa123
aaa 20
aaa 500 aaa123
bbb 20
bbb 10 bbb123
aaa 5 aaa123
aaa123
ccc123
```
**Desired**
```
id value stat
aaa 10 aaa123
aaa 20
aaa 500 aaa123
bbb 20
bbb 10 bbb123
aaa 5 aaa123
aaa aaa123
ccc ccc123
```
**Doing**
```
df.append({'aaa':aaa123}, ignore_index=True)
```
I believe I have to append the values, perhaps using a mapping or append function; however, I'm not sure how to specify the first 3 characters. Any suggestion is appreciated.
| One option would be [`Series.fillna`](https://pandas.pydata.org/docs/reference/api/pandas.Series.fillna.html) + [`Series.str`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.html) to slice the first 3 values:
```
df['id'] = df['id'].fillna(df['stat'].str[:3])
```
```
id value stat
0 aaa 10.0 aaa123
1 aaa 20.0 NaN
2 aaa 500.0 aaa123
3 bbb 20.0 NaN
4 bbb 10.0 bbb123
5 aaa 5.0 aaa123
6 aaa NaN aaa123
7 ccc NaN ccc123
```
Probably overkill for this situation, but [`Series.str.extract`](https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html) could also be used:
```
df['id'] = df['id'].fillna(df['stat'].str.extract(r'(^.{3})')[0])
```
---
`mask` if those are empty strings and not `NaN`:
```
df['id'] = df['id'].mask(df['id'].eq('')).fillna(df['stat'].str[:3])
```
|
Simplest Android Activity Lifecycle
I noticed that the Android Developers Activity section has been updated since I started my app, but I am still unclear what the simplest Activity Lifecycle is.
As far as I can make out:
onCreate, onResume and onPause are the essential ones.
The activity may be deleted any time after onPause, so I should save my whole app state to a file onPause and not rely on onStop or onDestroy. Also, onSaveInstanceState is not called before every onPause so is not really worth using.
Rather than trying to write loads of code to handle all the scenarios, why not destroy the Activity at the end of its onPause?
The Lifecycle would then be onCreate and onResume before it is active, then onPause when it becomes inactive. Other methods would not be needed.
I'd use onCreate to call setContentView and set up view listeners, but everything else would be put in onResume, including loading the restored state from a file?
As stated earlier, onPause would save the state to a file and destroy the activity.
As far as I can see, the only disadvantage of this might be that when a popup is on screen, the activity is deleted and has to be recreated when the popup is closed, meaning the activity won't be visible behind the popup (although I have not tested this)
It may take a bit longer to restart the activity, but since the system could have deleted the activity anyway without any notice, you have to save the whole state anyway.
Any thoughts?
Update:
I suppose what I was thinking of was where a 'front page' activity calls a game activity. The frontpage activity would call the game activity when the player clicks 'Play'
The game activity would set up its views and listeners etc. in onCreate, and in onResume it would load a file containing the game state, or start a new game if no file existed.
onPause of the game, it writes the game state to the file, then whatever happens to the game activity (nothing, or gets stopped/destroyed, or whatever) the onResume method would always load all the data back in again from the file.
That's sort of what I was thinking, if that makes any sense?
Update2:
I've devised a simple solution which I've documented in an answer below, if anyone's interested!
It doesn't support the Android Activity Lifecycle 'Paused' and 'Stopped' states. Once it is no longer displayed it kills itself and has to be restarted manually, but it does carry on from where you left off!
| Are you looking for this?
[![Activity Lifecycle](https://i.stack.imgur.com/1byIg.png)](http://d.android.com/reference/android/app/Activity.html)
To further answer your question, yes, as you can plainly see from the above diagram the "simplest" (i.e. smallest number of method calls) lifecycle is indeed `onCreate(); onStart(); onResume(); onPause();`.
You should also know about `onSaveInstanceState()` and `onRetainNonConfigurationInstance()`. These are **NOT** lifecycle methods.
All these methods are very well documented. Please read this documentation thoroughly.
To clarify things further, here are a couple of real-life scenarios:
1. Activity is running, other activities come on top of it, `onPause` is called. System runs out of memory, calls `onSaveInstanceState`, kills activity. User pressed back a few times, activity has to be re-instantiated (preferably using the data saved in `onSaveInstanceState`).
2. Activity is running, user presses back. At this point `onPause->onDestroy` are called, without calling `onSaveInstanceState`.
You should understand the essential difference between `onPause` and `onSaveInstanceState`. The former is **always** called, while the latter is only called when the activity **instance** *might* be re-instantiated in the future. Following this train of thought, your users will expect two things:
1. When they navigate away from your Activity and later come back to it, they want it in the exact same instance that they left it (this would be achieved using `onSaveInstanceState`). They don't expect that if they **exit** your activity. However:
2. They will expect that data **they have entered** will be persisted (which will be done in `onPause`). For example, if they started composing a message, they'll expect to see it as a draft the next time they come back, even if they exited the activity.
You should understand how these methods are *supposed* to be used in order to get what your users expect. How you *actually* use them is up to you, your needs, and your app's nature.
|
jasmine angular 4 unit test router.url
I am unit testing a function in an Angular 4 project using Jasmine; the function has a switch statement like the one below:
```
switch(this.router.url) {
case 'firstpath': {
// some code
}
break;
case 'secondpath': {
// some more code
}
break;
default:
break;
}
```
In my spec.ts file, I can't stub or change the value of router.url. I want my cases to execute, but default is executing. I tried different ways to set or spyOn and return a value, but every time the url is '/'. Every suggestion or solution is welcome.
| First you need to mock router in your testing module:
```
TestBed.configureTestingModule({
...
providers: [
{
provide: Router,
useValue: {
url: '/path'
} // you could use also jasmine.createSpyObj() for methods
}
]
});
```
You can also change the url in the test and run your tested method:
```
const router = TestBed.inject(Router);
// @ts-ignore: force this private property value for testing.
router.url = '/path/to/anything';
// now you can run your tested method:
component.testedFunction();
```
As you mention, `spyOn` doesn't work because it works only for methods/functions, but `url` is a property.
|
Dictionary in Swift
I'm really confused by the way we create dictionaries in Swift. Could you please tell me what the difference is between
```
var myDic3 = [String : AnyObject]()
```
and
```
var myDic2 = Dictionary <Int,AnyObject>()
```
and
```
var myDic4 = [ : ]
```
When I declare it like `myDic4`, I cannot add a key and value to it:
```
myDic4["001"] = "ABC"
```
And the error is "**Cannot assign to the result of this expression**"
| In Swift, you can declare and initialize an empty Dictionary with type `String` for keys and type `Any` for values in 4 different ways:
1. `var myDic1 = [String : Any]()`
2. `var myDic2 = Dictionary<String, Any>()`
3. `var myDic3: [String : Any] = [:]`
4. `var myDic4: Dictionary<String, Any> = [:]`
These will all give you the same result which is an empty Dictionary with `String`s as the keys and `Any`s as the values.
`[String : Any]` is just shorthand for `Dictionary<String, Any>`. They mean the same thing but the shorthand notation is preferred.
In cases 1 and 2 above, the types of the variables are inferred by Swift from the values being assigned to them. In cases 3 and 4 above, the types are explicitly assigned to the variables, and then they are initialized with an empty dictionary `[:]`.
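For instance, with case 3 above, a minimal sketch of adding and reading entries (values typed as `Any` need to be cast back when read) could be:
```
var myDic: [String : Any] = [:]
myDic["001"] = "ABC"
myDic["002"] = 42

if let name = myDic["001"] as? String {
    print(name) // prints "ABC"
}
```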
When you create a dictionary like this:
```
var myDic5 = [:]
```
Swift has nothing to go on, and it gives the error:
>
> Empty collection literal requires an explicit type
>
>
>
**Historical Note:** In older versions of Swift, it inferred `[:]` to be of type `NSDictionary`. The problem was that `NSDictionary` is an immutable type (you can't change it). The mutable equivalent is an `NSMutableDictionary`, so these would work:
```
var myDic6: NSMutableDictionary = [:]
```
*or*
```
var myDic7 = NSMutableDictionary()
```
but you should prefer using cases 1 or 3 above since `NSMutableDictionary` isn't a Swift type but instead comes from the Foundation framework. In fact, the only reason you were ever able to do `var myDic = [:]` is because you had imported the Foundation framework (with `import UIKit`, `import Cocoa`, or `import Foundation`). Without importing Foundation, this was an error.
|
GAE: unit testing taskqueue with testbed
I'm using testbed to unit test my google app engine app, and my app uses a taskqueue.
When I submit a task to a taskqueue during a unit test, it appears that the task is in the queue, but the task does not execute.
How do I get the task to execute during a unit test?
| The dev app server is single-threaded, so it can't run tasks in the background while the foreground thread is running the tests.
I modified TaskQueueTestCase in taskqueue.py in gaetestbed to add the following function:
```
def execute_tasks(self, application):
    """
    Executes all currently queued tasks, and also removes them from the
    queue.

    The tasks are executed against the provided web application.
    """
    # Set up the application for webtest to use (resetting _app in case a
    # different one has been used before).
    self._app = None
    self.APPLICATION = application

    # Get all of the tasks, and then clear them.
    tasks = self.get_tasks()
    self.clear_task_queue()

    # Run each of the tasks, checking that they succeeded.
    for task in tasks:
        response = self.post(task['url'], task['params'])
        self.assertOK(response)
```
For this to work, I also had to change the base class of TaskQueueTestCase from BaseTestCase to WebTestCase.
My tests then do something like this:
```
# Do something which enqueues a task.
# Check that a task was enqueued, then execute it.
self.assertEqual(len(self.get_tasks()), 1)
self.execute_tasks(some_module.application)
# Now test that the task did what was expected.
```
This therefore executes the task directly from the foreground unit test. This is not quite the same as in production (ie, the task will get executed 'some time later' on a separate request), but it works well enough for me.
|
Passing a set of NumPy arrays into C function for input and output
Let's assume we have a C function that takes a set of one or more input arrays, processes them, and writes its output into a set of output arrays. The signature looks as follows (with `count` representing the number of array elements to be processed):
```
void compute (int count, float** input, float** output)
```
I want to call this function from Python via ctypes and use it to apply a transformation to a set of NumPy arrays. For a one-input/one-output function defined as
```
void compute (int count, float* input, float* output)
```
the following works:
```
import ctypes
import numpy
from numpy.ctypeslib import ndpointer
lib = ctypes.cdll.LoadLibrary('./block.so')
fun = lib.compute
fun.restype = None
fun.argtypes = [ctypes.c_int,
ndpointer(ctypes.c_float),
ndpointer(ctypes.c_float)]
data = numpy.ones(1000).astype(numpy.float32)
output = numpy.zeros(1000).astype(numpy.float32)
fun(1000, data, output)
```
However, I have no clue how to create the corresponding pointer array for *multiple* inputs (and/or outputs). Any ideas?
**Edit**: So people have been wondering how `compute` knows how many array pointers to expect (as `count` refers to the number of elements per array). This is, in fact, hard-coded; a given `compute` knows precisely how many inputs and outputs to expect. It's the caller's job to verify that `input` and `output` point to the right number of inputs and outputs. Here's an example `compute` taking 2 inputs and writing to 1 output array:
```
virtual void compute (int count, float** input, float** output) {
float* input0 = input[0];
float* input1 = input[1];
float* output0 = output[0];
for (int i=0; i<count; i++) {
float fTemp0 = (float)input1[i];
fRec0[0] = ((0.09090909090909091f * fTemp0) + (0.9090909090909091f * fRec0[1]));
float fTemp1 = (float)input0[i];
fRec1[0] = ((0.09090909090909091f * fTemp1) + (0.9090909090909091f * fRec1[1]));
output0[i] = (float)((fTemp0 * fRec1[0]) - (fTemp1 * fRec0[0]));
// post processing
fRec1[1] = fRec1[0];
fRec0[1] = fRec0[0];
}
}
```
I have no way of influencing the signature and implementation of `compute`. I can verify (from Python!) how many inputs and outputs are required. Key problem is how to give the correct `argtypes` for the function, and how to produce appropriate data structures in NumPy (an array of pointers to NumPy arrays).
| To do this specifically with Numpy arrays, you could use:
```
import numpy as np
import ctypes
count = 5
size = 1000
#create some arrays
arrays = [np.arange(size,dtype="float32") for ii in range(count)]
#get ctypes handles
ctypes_arrays = [np.ctypeslib.as_ctypes(array) for array in arrays]
#Pack into pointer array
pointer_ar = (ctypes.POINTER(ctypes.c_float) * count)(*ctypes_arrays)
ctypes.CDLL("./libfoo.so").foo(ctypes.c_int(count), pointer_ar, ctypes.c_int(size))
```
Where the C side of things might look like:
```
/* function to multiply all arrays by 2 */
void foo(int count, float** array, int size)
{
    int ii, jj;
    for (ii = 0; ii < count; ii++) {
        for (jj = 0; jj < size; jj++)
            array[ii][jj] *= 2;
    }
}
```
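Regarding the `argtypes` part of the question: a hedged sketch of declaring them explicitly for the hypothetical `libfoo.so`/`foo` above (so ctypes can type-check the call) might look like this; an array of `POINTER(c_float)` is accepted where a `POINTER(POINTER(c_float))` is expected:
```
lib = ctypes.CDLL("./libfoo.so")
lib.foo.restype = None
lib.foo.argtypes = [ctypes.c_int,
                    ctypes.POINTER(ctypes.POINTER(ctypes.c_float)),
                    ctypes.c_int]
# pointer_ar from the snippet above can be passed directly.
lib.foo(count, pointer_ar, size)
```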
|
How can we use SDA or SCL lines for I2C Addresses?
The TMP102 chip ( <http://www.ti.com/lit/ds/symlink/tmp102.pdf> ) can have multiple I2C slave addresses. It has an address pin called ADD0 (**Section 5**) which can be used to select multiple addresses (**Section 7.3.4**). The logic level at that pin can be used to select a particular TMP102 slave device. According to **table 4**, 4 addresses are possible. I do understand that connecting the pin to a high or low voltage will produce two different addresses. But the table mentions that we can use the SDA and SCL pins for two different addresses. I am not sure how this works. Can anyone explain how this can be used, and how we can use multiple TMP102 devices based on the SDA and SCL pins?
|
>
> The logic level at that pin can be used to select a particular TMP102 slave device
>
>
>
That is not the purpose of ADD0 - it is a configuration pin, not a select pin. It is not used to *select* the device; I2C addresses are part of the data stream on SDA, there is no "*select*" pin as there is on SPI for example.
Rather, ADD0 is used to *define* the address of each device as one of the four listed in Table 4, those addresses being 0x48, 0x49, 0x4A or 0x4B depending on the connection of ADD0 to GND, V+, SDA or SCL respectively. Like so:
[![enter image description here](https://i.stack.imgur.com/e0lhr.png)](https://i.stack.imgur.com/e0lhr.png)
How the device determines the address is not revealed in the datasheet and you don't really need to know, but given:
[![enter image description here](https://i.stack.imgur.com/NJ8Sw.png)](https://i.stack.imgur.com/NJ8Sw.png)
Public domain image by Marcin Floryan from <https://en.wikipedia.org/wiki/I%C2%B2C>
at the *start condition* at the falling edge of SDA the following occurs:
```
ADDR ADD0 SDA SCL
0x48 0 v 1
0x49 1 v 1
0x4A v v 1
0x4B 1 v 1
```
Then on the next falling edge SCL
```
ADDR ADD0 SDA SCL
0x48 0 0 v
0x49 1 0 v
0x4A 0 0 v
0x4B v 0 v
```
So it is possible with suitable sequential logic to latch the address by the end of the S phase and before B1 which is sufficient because the address match does not occur until B7, and all devices on the bus must listen for the address.
|
Popover segue to static cell UITableView causes compile error
I currently have an application with two view controllers. The first is a view controller with an embedded table view that has dynamic cells. The second is a table view controller with static cells. If I add a segue from selecting one of the dynamic table's cells to the static table view controller (using the Push or Modal style setting), I can see that the segue works as expected. However, when I change the style to Popover I get the following compile error:
```
Couldn't compile connection: <IBCocoaTouchOutletConnection:0x4004c75a0 <IBProxyObject: 0x400647960> => anchorView => <IBUITableViewCell: 0x400f58aa0>>
```
Has anyone else ran into this issue, or does anyone know what this error message might mean? It seems strange that this is happening at compile time unless a static table view controller is not supported in a Popover...
| I figured out how to do this. You can't hook it up from the storyboard but can do it programmatically like this:
```
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
UIStoryboard *sb = [UIStoryboard storyboardWithName:@"MainStoryboard_iPad"
bundle:nil];
UITableViewController *detailController = [sb instantiateViewControllerWithIdentifier:@"TableSettingDetails"];
self.popoverController = [[UIPopoverController alloc] initWithContentViewController:detailController];
self.popoverController.popoverContentSize = CGSizeMake(320, 416);
UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
[self.popoverController presentPopoverFromRect:cell.bounds inView:cell.contentView
permittedArrowDirections:UIPopoverArrowDirectionAny
animated:YES];
}
```
Just make sure that you have a reference to your popover in your controller, otherwise it will get immediately disposed - causing some other interesting exceptions.
|
What's the difference between application and request contexts?
Flask documentation says that there are 2 local contexts: the application context and the request context. Both are created on request and torn down when it finishes.
So, what's the difference? What are the use cases for each? Are there any conditions where only one of these is created?
|
>
> Both are created on request and torn down when it finishes.
>
>
>
That is true within the request lifecycle: Flask creates the app context, then the request context, does its work, destroys the request context, and then destroys the app context.
The application context can exist without a request, and that is the reason you have both. For example, if I'm running from a shell, I can create the `app_context` without a request and still have access to the `current_app` proxy.
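A minimal sketch of that shell scenario (assuming a trivial `Flask` app) could be:
```
from flask import Flask, current_app

app = Flask(__name__)

# No request is involved here; only the application context is pushed.
with app.app_context():
    print(current_app.name)
```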
It is a design decision to separate concerns and give you the option to not create the request context. The request context is expensive.
In old Flask versions (0.7?), you had only the request context, and `current_app` was created with a Werkzeug proxy. So the application context just formalizes a pattern that already existed.
Some docs about appcontext, but you probably already read it: <http://flask.pocoo.org/docs/appcontext/>
|
Persisting and retrieving a Map of Maps with Morphia and Mongodb
I would like to be able to persist and retrieve, amongst other things, a map of maps in a MongoDB collection. I am using Java to access the MongoDB via Morphia.
The example I am using below is a collection that contains documents detailing the owners of various cars. In this example the number of vehicles of a specific make and model are stored in a map of maps
The majority of the properties are working with no problems, but there is an issue when a property is a map of maps defined in the following way:
```
@Property("vehicles")
private Map<String, Map<String, Integer>> vehicles = new HashMap<String, Map<String, Integer>>();
```
The object is created (some values inserted into the map) and persisted to the Mongo database as one would expect it to be:
```
"vehicles" : {
"FORD" : {
"FIESTA" : 1
},
"TOYOTA" : {
"COROLLA" : 1,
"PRIUS": 1
},
"BMW" : {
"SLK" : 1
}
}
```
However, when the object is retrieved via Java code (a query on the MongoDB console works as expected) in the following way...
```
Query<Owner> q = ds.find(Owner.class);
System.out.println(q.countAll());
Iterable<Owner> i = q.fetch();
for (Owner o : i) {
System.out.println(o);
}
```
...the code dies in a horrible way on the q.fetch() line.
Please help :)
| The issue stems from the fact that a Map (being an interface) does not have a default constructor, and while Morphia was correctly assigning the constructor for the concrete HashMap on the outer Map it was failing to resolve a constructor for the inner Map. This was resulting in the NullPointerException.
After a lot of debugging and trying this and that, eventually I stumbled (with the help of a colleague) on to the solution.
- Instead of using the @Property annotation use @Embedded. and
- Declare the maps using the concrete HashMap and not use the Map interface
```
@Embedded("vehicles")
private HashMap<String, HashMap<String, Integer>> vehicles = new HashMap<String, HashMap<String, Integer>>();
```
For those of you who are wondering... specifying the concrete class in either the @Property or @Embedded annotation did nothing to help resolve the constructor for the inner HashMap.
|
What does mbstring.strict\_detection do?
The mbstring PHP module has a `strict_detection` setting, [documented here](http://www.php.net/manual/en/mbstring.configuration.php#ini.mbstring.strict-detection). Unfortunately, the manual is completely useless; it only says that this option *"enables the strict encoding detection"*.
I did a few tests and could not find how any of the mbstring functions are affected by this. [`mb_check_encoding()`](http://php.net/manual/en/function.mb-check-encoding.php) and [`mb_detect_encoding()`](http://php.net/manual/en/function.mb-detect-encoding.php) give exactly the same result for both valid and invalid UTF-8 input.
(edit:) The `mbstring.strict_detection` option was added in PHP 5.1.2.
| Without the *strict* parameter being set, the encoding detection is faster but will not be as accurate. For example, if you had a UTF-8 string with partial UTF-8 sequence like this:
```
$s = "H\xC3\xA9ll\xC3";
$encoding = mb_detect_encoding($s, mb_detect_order(), false);
```
The result of the `mb_detect_encoding` call would still be "UTF-8" even though it's not valid UTF-8 (the last character is incomplete).
But if you set the *strict* parameter to true...
```
$s = "H\xC3\xA9ll\xC3";
$encoding = mb_detect_encoding($s, mb_detect_order(), true);
```
It would perform a more thorough check, and the result of that call would be FALSE.
|
Updating the actual values upon filtering using PrimeNG
I am using PrimeNG with Global filter added to my table:
```
<input #gb type="text" pInputText size="50" placeholder="Filter">
```
Datatable:
```
<p-dataTable *ngIf="users != null && users.length > 0" [value]="users" loadingIcon="fa-spinner" [globalFilter]="gb">
```
I need to send mails to the users that have been filtered. I noticed, however, that the user count (number of users) does not update upon filtering.
The records are displayed correctly in the table based on the filter, but mailing these users would send the mail to all users retrieved from the DB.
Is there a way of updating the actual number of users upon filtering, using the PrimeNG filter option?
| `DataTable` component has a variable called `filteredValue` and filtered values are stored in that variable. There are two ways to get filtered values:
>
> First way
>
>
>
You can use `ViewChild` to get a reference to `DataTable` object and get the users you filtered:
**Template**
```
<p-dataTable #dataTable *ngIf="users != null && users.length > 0" [value]="users" loadingIcon="fa-spinner" [globalFilter]="gb">
```
**Component**
```
import { Component, ViewChild } from '@angular/core';
import { DataTable } from 'primeng/primeng';
@ViewChild('dataTable')
dataTable: DataTable;
```
Now that you have reference to `DataTable` component, it is easy to get filtered users:
```
printFilteredUsers() {
console.log(this.dataTable.filteredValue);
}
```
>
> Second way
>
>
>
`DataTable` component has event called `onFilter` which is triggered each time `DataTable`'s content is filtered:
**Template**
```
<p-dataTable *ngIf="users != null && users.length > 0"
[value]="users" loadingIcon="fa-spinner" [globalFilter]="gb"
(onFilter)="printFilteredUsers($event)">
```
**Component**
```
printFilteredUsers(event: any) {
console.log(event.filteredValue); // filtered users
console.log(event.filters); // applied filters
}
```
PrimeNG's `DataTable` is well documented, I suggest checking it out. You can do it [here](https://www.primefaces.org/primeng/#/datatable).
|
How to sum N columns in python?
I've a pandas df and I'd like to sum N of the columns. The df might look like this:
```
A B C D ... X
1 4 2 6 3
2 3 1 2 2
3 1 1 2 4
4 2 3 5 ... 1
```
I'd like to get a df like this:
```
A Z
1 15
2 8
3 8
4 11
```
The A variable is not an index, but a variable.
| Use [`join`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) with a new `Series` created by summing all columns except `A`:
```
df = df[['A']].join(df.drop('A', 1).sum(axis=1).rename('Z'))
```
Or extract column `A` first by [`pop`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html):
```
df = df.pop('A').to_frame().join(df.sum(axis=1).rename('Z'))
```
If you want to select columns by position, use [`iloc`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html):
```
df = df.iloc[:, [0]].join(df.iloc[:, 1:].sum(axis=1).rename('Z'))
```
---
```
print (df)
A Z
0 1 15
1 2 8
2 3 8
3 4 11
```
|
Openid is really a nightmare? Is there an open source lib that could "abstract" everything into an internal id?
I do not create big websites, so I'm mainly a user of openid and its various implementations. I use openid here in SO, and I like it because I'm using my gmail account for everything (I plan to exit gmail in the future, but now I'm using it for everything), so I don't have to remember another login/account.
So, in my *naive* user mind, I think using openid is **simple**: you have a lot of providers, when a user tries to login, your app asks his/her provider: "is this person able to login?", the provider returns "this user is valid" with some data (some may return more data than others) or just doesn't return anything if it was an unsuccesfull login for example.
I was planning to use openid in a future website. In my mind, I think I would be using some kind of lib that could "abstract" this for me: your user logs in with a provider, your app doesn't know each one, and this lib returns a unique identifier from a username from a provider. For example: myemail@gmail.com is used to log in. It has a unique id in the lib, returned after a hash calculation or something, and your app uses this for everything.
In my application, I would try to deal only with this id created by an openid lib that would handle facebook, twitter, gmail and such. Maybe this lib has a table with a provider name, and when a person first logs in to your site, you have a row in the database saying "this unique id is from twitter". In theory, this seems to work in my mind.
...but after reading [Openid is a Nightmare](http://blog.wekeroad.com/thoughts/open-id-is-a-party-that-happened), I'm starting to think if it really would be a good idea even trying to find such a lib that would work so perfectly.
Since every openid provider now has its own login mechanism, instead of worrying about only one system (my own, if I were to create my own login mechanism) I now have to worry about dozens of systems - and openid was supposed to address *exactly* that. Now, I need the huge lib I'm envisioning to abstract this whole mess.
Does a library like the one I described exist? If so, is it possible to create an application that at least tries to behave like a really "openid" dream?
(I know openid is not the same as facebook auth and such... but from an end user perspective, it's the same IMHO, "use the same login across websites". I want to abstract the login process even if the protocol wasn't made just for that. For a lot of services, you just need to know if someone is who she is claiming to be.)
| Libraries do exist (such as [DotNetOpenAuth](http://www.dotnetopenauth.net/)), but these usually only abstract the protocol and spec for you - your application still has to be aware of the individual providers (and their quirks).
One of the biggest problems with OpenId/OpenAuth in my opinion is that many providers do things slightly differently: Some of them will give you the info you ask for, some will not. Some of them respond to Simpleregistration/Claim, some of them respond to AttributeExchange/Fetch. Some use a general url for logging on, some require one with the username incorporated.
There are many subtle differences, which make OpenId a lot weaker than it should be. Ideally it should have been a system that I (as a website owner) can implement, and feel confident that I now support any user that has an account with an OpenId provider. The reality is quite different. And you are always going to have to track your user with your own ids, even if you use OpenId for authentication.
There are services, such as [RPX/Janrain](https://rpxnow.com/) that purport to take care of all of these messy details for you, but I have heard that even this has its headaches (e.g. it is mentioned in the article you quoted).
At the end of the day it probably comes down to the type of website you run. Ask yourself this (rather cynical) question - How much do I care if an individual user can't log in? If the answer is 'a lot' (because each issue will cost you money or cause other grief) then maybe forget openId for now. If it's a mass/free website like stackoverflow then it might be the way to go.
---
**Update**
There's a comparison of some provider implementations here: <http://spreadopenid.org/provider-comparison/>. It's no longer updated though so it might be out of date. *[Down at the moment, possibly permanently, but see [google's cache](http://webcache.googleusercontent.com/search?q=cache%3ajJKuVaRp76AJ%3aspreadopenid.org/provider-comparison/+provider-comparison+spread+openid&cd=2&hl=en&ct=clnk&gl=uk)]*
You might be able to find more info via [Wikipedia's list of providers](http://en.wikipedia.org/wiki/List_of_OpenID_providers)
There's also a pretty [comprehensive comparison of providers here](http://willnorris.com/openid-support).
|
Dynamic list constraint on Alfresco
I'm trying to follow the examples provided in [this post](http://blogs.alfresco.com/wp/jbarmash/2008/08/08/dynamic-data-driven-drop-downs-for-list-properties/), to create a dynamic list constraint in Alfresco 3.3.
So, I've created my own class extending `ListOfValuesConstraint`:
```
public class MyConstraint extends ListOfValuesConstraint {
private static ServiceRegistry registry;
@Override
public void initialize() {
loadData();
}
@Override
public List getAllowedValues() {
//loadData();
return super.getAllowedValues();
}
@Override
public void setAllowedValues(List allowedValues) {
}
protected void loadData() {
List<String> values = new LinkedList<String>();
String query = "+TYPE:\"cm:category\" +@cm\\:description:\"" + tipo + "\"";
StoreRef storeRef = new StoreRef("workspace://SpacesStore");
ResultSet resultSet = registry.getSearchService().query(storeRef, SearchService.LANGUAGE_LUCENE, query);
// ... values.add(data obtained using searchService and nodeService) ...
if (values.isEmpty()) {
values.add("-");
}
super.setAllowedValues(values);
}
}
```
`ServiceRegistry` reference is injected by Spring, and it's working fine. If I only call `loadData()` from `initialize()`, it executes the Lucene query, gets the data, and the dropdown displays it correctly. Only that it's not dynamic: data doesn't get refreshed unless I restart the Alfresco server.
`getAllowedValues()` is called each time the UI has to display a property having this constraint. The idea on the referred post is to call `loadData()` from `getAllowedValues()` too, so the values will be actually dynamic. But when I do this, I don't get any data. The Lucene query is the same, but returns 0 results, so my dropdown only displays `-`.
BTW, the query I'm doing is: `+TYPE:"cm:category" +@cm\:description:"something here"`, and it's the same on each case. It works from initialize, but doesn't from getAllowedValues.
Any ideas on why is this happening, or how can I solve it?
Thanks
**Edit:** we upgraded to Alfresco 3.3.0g Community yesterday, but we're still having the same issues.
| This dynamic-list-of-values-constraint is a bad idea and I tell you why:
The Alfresco repository should be in a valid state all the time. Your (dynamic) list of constraints will change (that's why you want it to be dynamic). Adding items would not be a problem, but editing and removing items are. If you would remove an item from your option-list, the nodes in the repository with this property value will be invalid.
You will not be able to fix this easily. The standard UI will fail on invalid-state-nodes. Simply editing this value and setting it to something valid will not work. You have been warned.
Because the default UI widget for a ListConstraint is a dropdown, not every dropdown should be a ListConstraint. ListConstraints are designed for something like a Status property: { Draft, Waiting Approval, Approved }. Not for a list of customer-names.
I have seen this same topic come up again and again over the last few years. What you actually want is let the user choose a value from a dynamic list of options (combo box). This is a UI problem, not a dictionary-model-issue. You should setup something like this with the web-config-context.xml (Alfresco web UI) or in Alfresco Share. The last one is more flexible and I would recommend taking that path.
|
Get the character and its count which has highest occurrence
I have this C# method which gets the character that has the highest occurrence, along with its count:
```
public KeyValuePair<char, int> CountCharMax_Dictionary_LINQ(string s)
{
char[] chars = s.ToCharArray();
var result = chars.GroupBy(x => x)
.OrderByDescending(x => x.Count())
.ToDictionary(x => x.Key, x => x.Count())
.FirstOrDefault();
return result;
}
```
It works, but is there a better or more efficient way to get the same result? Is there any difference if I return it as a Tuple vs a KeyValuePair?
Test data:
- Input: 122. Expected Output: 2,1
- Input: 122111. Expected Output: 1,4
| As per [this answer](https://stackoverflow.com/a/19522559/3312), it would be wiser to use a tuple in this case. I use lowercase "t"uple as I highly recommend the built-in language support for tuples `(char, int)` over the explicit `Tuple<T1, T2>` declaration, etc. Few more points:
- What should happen in a tie? Which gets returned? If any is acceptable, it's fine as-is. To be more of a determinate function, secondarily order by the character itself.
- Don't need to convert it to a character array as a string already *is* a character array.
- Re-order the `OrderByDescending` and the `Select` so that `Count` only has to be called once.
- It can be made `static` since it doesn't access class instance data.
- It can be made an extension method if in a `static` class.
- It can be made into an expression-bodied method.
- Maybe consider a better name; one that confers what it does rather than how it does it.
All that said, here's my take:
```
public static (char, int) MaxCharacterFrequency(this string s) =>
s.GroupBy(x => x)
.Select(x => (x.Key, x.Count()))
.OrderByDescending(x => x.Item2)
.ThenBy(x => x.Item1)
.FirstOrDefault();
```
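For example, a quick usage sketch against the question's second test case (assuming the extension method above sits in a static class that is in scope):
```
var (ch, count) = "122111".MaxCharacterFrequency();
Console.WriteLine($"{ch},{count}"); // prints 1,4
```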
|
How can I use ftd2xx.lib with a C++ program?
I am having trouble making a program to interact with an FTDI chip. I don't know how much information I can give in regards to the model number of the chip.
I am trying to use the API FTDI provides to communicate with the chip I have. I am using Qt Creator for the program and this is my first time using it. All of the examples I have found use `include "ftd2xx.h"`. Well, I have tried so many ways to get it working. I have manually typed in the directory of the ftd2xx.lib, moved the files to the project directory and chose "Internal Library", used the original directory and chose "External Library", and chosen the "System Library".
The only method that gives me a different error is when I include the driver package files in the project directory and just include the header file with or without the `LIBS += ...`. Even then I get 393 errors saying `NAME does not name a type`, `NAME not declared in scope`, etc.
How do I create a Qt Creator C++ project that recognizes the `ftd2xx.lib` and lets me use the functions from `ftd2xx.h`?
EDIT: I am using the Windows 64bit driver [package](http://www.ftdichip.com/Drivers/VCP.htm). In my frustration, I forgot I should include these important details.
EDIT2: Code below.
## main.cpp
```
#include <QCoreApplication>
#include <iostream>
#include "ftd2xx.h"
using namespace std;
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
cout << "test" << endl;
return a.exec();
}
```
## test.pro
```
#-------------------------------------------------
#
# Project created by QtCreator 2013-10-04T16:31:18
#
#-------------------------------------------------
QT += core
QT -= gui
TARGET = test
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp
win32:CONFIG(release, debug|release): LIBS += -L$$PWD/ -lftd2xx
else:win32:CONFIG(debug, debug|release): LIBS += -L$$PWD/ -lftd2xxd
else:unix: LIBS += -L$$PWD/ -lftd2xx
INCLUDEPATH += $$PWD/
DEPENDPATH += $$PWD/
```
## Errors
[Errors.png](https://i.stack.imgur.com/rIJAi.png)
All of that is followed by more `NAME does not name a type` errors.
| The ftd2xx header has a lot of Windows types in it so you need to include windows.h before including the ftdi header.
The .lib file is a DLL link library which provides the linker information required to make use of the DLL at runtime. The following compiles and runs using g++:
```
#include <windows.h>
#include <stdio.h>
#include <ftd2xx.h>
int main(int argc, char *argv[])
{
DWORD version = 0;
FT_STATUS status = FT_GetLibraryVersion(&version);
printf("version %ld\n", version);
return (status == FT_OK) ? 0 : 1;
}
```
Compiled using:
```
g++ -Wall -Idriver -o check.exe check.cpp driver/i386/ftd2xx.lib
```
where the `driver` folder contains the distributed FTDI windows driver package. The `-lftd2xx` will have the linker searching for something called libftd2xx.a so just explicitly provide the .lib filename.
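In a Qt Creator project, a hedged sketch of the equivalent `.pro` entries (assuming the FTDI driver package was copied into a `driver` subfolder of the project) would be:
```
INCLUDEPATH += $$PWD/driver
LIBS += $$PWD/driver/i386/ftd2xx.lib
```
qmake passes a bare `.lib` path straight through to the linker, so no `-L`/`-l` splitting is needed.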
|
Is the following Assembly Atomic, If not, Why?
```
addl $9, _x(%rip)
```
\_x is a global variable. Essentially I'm not certain as to how adding to a global variable in this case is implemented and whether or not there are inherent race conditions with this line in a multi processor system.
| As duskwuff pointed out, you need a `lock` prefix.
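That is, the minimal fix in the question's own syntax is just to add the prefix:
```
lock addl $9, _x(%rip)
```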
The reason why is that:
```
addl $9,_x(%rip)
```
is actually three "micro operations" from the standpoint of the memory system [herein `%eax` just for illustration--never really used]:
```
mov _x(%rip),%eax
addl $9,%eax
mov %eax,_x(%rip)
```
Here's a valid sequence of events. This is guaranteed by the `lock` prefix. At the end, `_x` *will* be 18:
```
# this is a valid sequence
# cpu 1 # cpu 2
mov _x(%rip),%eax
addl $9,%eax
mov %eax,_x(%rip)
mov _x(%rip),%eax
addl $9,%eax
mov %eax,_x(%rip)
```
But, without the `lock`, we could get:
```
# this is an invalid sequence
# cpu 1 # cpu 2
mov _x(%rip),%eax
mov _x(%rip),%eax
addl $9,%eax addl $9,%eax
mov %eax,_x(%rip)
mov %eax,_x(%rip)
```
At the end, `_x` will be 9. A further jumbling of the sequence could produce 18. So, depending on the exact sequencing between the micro ops on the two CPUs, we could have *either* 9 or 18.
We can make it a bit worse. If CPU 2 added 8 instead of 9, the sequence *without* `lock` could produce any of: 8, 9, or 17
---
**UPDATE:**
Based on some comments, just to clarify terminology a bit.
When I said micro operations ... it was in quotation marks, so I was coining a term for purposes of discussion herein. It was *not* meant to translate directly to x86 uops as defined in the x86 processor literature. I could have [perhaps *should* have] said *steps*.
Likewise, although it seemed easiest and clearest to express the steps using x86 asm, I could have been more abstract:
```
(1) FETCH_MEM_TO_MREG _x
(2) ADD_TO_MREG 9
(3) STORE_MREG_TO_MEM _x
```
Unfortunately, these steps are carried out purely in hardware logic (i.e. no way for a program to see them or step through them with a debugger). The memory system (e.g. cache logic, DRAM controller, et. al.) will notice (and have to respond to) steps (1) and (3). The CPU's ALU will perform step (2), which is invisible to the memory logic.
Note that some RISC CPU arches don't have add instructions that work on memory nor do they have lock prefixes. See below.
Aside from reading some literature, a practical way to examine the effects is to create a C program that uses multiple threads (via `pthreads`) and uses some C atomic operations and/or `pthread_mutex_lock`.
Also, this page [Atomically increment two integers with CAS](https://stackoverflow.com/questions/33083270/atomically-increment-two-integers-with-cas) has an answer I gave and also a link to a video talk given by another guy at cppcon (about "lockless" implementations)
In this more general model, it can also illustrate what can happen in a database that doesn't do proper record locking.
The actual mechanics of how `lock` is implemented can be x86 model specific.
And, possibly, target instruction specific (e.g. `lock` works differently if the target instruction is [say] `addl` vs `xchg`) because the processor may be able to use a more efficient/special type of memory cycle (e.g. something like an atomic "read-modify-write").
In other cases (e.g. where the data is too wide for a single cycle or spans a cache line boundary), it may have to lock the entire memory bus (e.g. grab a global lock and force full serialization), do multiple reads, make changes, do multiple writes, and *then* unlock the memory bus. This mode is similar to how one would wrap something inside a mutex lock/unlock pairing, only done in hardware at the memory bus logic level
A note about ARM [a RISC cpu]. ARM only supports `ldr r1,memory_address`, `str r1,memory_address`, but *not* `add r1,memory_address`. It only allows `add r1,r2,r3` [i.e. it's "ternary"] or possibly `add r1,r2,#immed`. To implement locking, ARM has two special instructions: `ldrex` and `strex` that *must* be paired. In the abstract model above, it would look like:
```
ldrex r1,_x
add r1,r1,#9
strex r1,_x
// must be tested for success and loop back if failed ...
```
|
Calling C++ from clojure
Is it possible to call C++ libraries like CGAL or VTK from Clojure? Can this be possibly done if C++ functions are wrapped with C interface functions, like Haskell does with the c2hs tool and its excellent C FFI?
[Can I call clojure code from C++?](https://stackoverflow.com/questions/8650485/can-i-call-clojure-code-from-c)
this question asks for the reverse,
| You have several alternatives here:
- you can do it the same way as Java does - via [JNI (Java Native Interface)](https://docs.oracle.com/javase/8/docs/technotes/guides/jni/). There is a complete example of using [clojure with JNI](https://github.com/jakebasile/clojure-jni-example).
- there is a [JNA project](https://github.com/java-native-access/jna) that allows to have access to native libraries without writing bridge as JNI requires. There is an [old (2009th) blog post](https://nakkaya.com/2009/11/16/java-native-access-from-clojure/) about using JNA with Clojure.
- for C++ better alternative could be [JavaCpp](https://github.com/bytedeco/javacpp).
- generate Java bindings via [Swig](http://www.swig.org/) & access them as normal Java methods.
|
AS3 adding 1 (+1) not working on string cast to Number?
Just learning AS3 for Flex. I am trying to do this:
```
var someNumber:String = "10150125903517628"; //this is the actual number i noticed the issue with
var result:String = String(Number(someNumber) + 1);
```
I've tried different ways of putting the expression together, and no matter what I seem to do, the result is always equal to 10150125903517628 rather than 10150125903517629.
Anyone have any ideas??! thanks!
| All numbers in JavaScript/ActionScript are effectively double-precision [IEEE-754](http://en.wikipedia.org/wiki/IEEE_754) floats. These use a 64-bit binary number to represent your decimal, and have a precision of roughly 16 or 17 decimal digits.
You've run up against the limit of that format with your 17-digit number. The internal binary representation of 10150125903517628 is no different to that of 10150125903517629 which is why you're not seeing any difference when you add 1.
If, however, you add 2 then you will (should?) see the result as 10150125903517630 because that's enough of a "step" that the internal binary representation will change.
|
Performing side-effects in Vavr
I'm going through [Vavr Usage Guide](http://www.vavr.io/vavr-docs)'s section about performing side-effects with Match and other "syntactic sugar" as they call it. Here is the example given there:
```
Match(arg).of(
Case($(isIn("-h", "--help")), o -> run(this::displayHelp)),
Case($(isIn("-v", "--version")), o -> run(this::displayVersion)),
Case($(), o -> run(() -> {
throw new IllegalArgumentException(arg);
}))
);
```
and then it goes on to discuss how `run` should not be run outside of a lambda body, etc.
IMHO, something was lacking in the explanation to give me full clarity, i.e. is `run` an existing method on some Vavr interface (which I couldn't find) or should it be my own method in the surrounding code base?
So I endeavored and spelled out the above example just slightly to be something that I can run and see the results of it:
```
@Test public void match(){
String arg = "-h";
Object r = Match(arg).of(
Case($(isIn("-h", "--help")), o -> run(this::displayHelp)),
Case($(isIn("-v", "--version")), o -> run(this::displayVersion)),
Case($(), o -> run(() -> {
throw new IllegalArgumentException(arg);
}))
);
System.out.println(r);
}
private Void run(Supplier<String> supp) {
    System.out.println(supp.get());
    return null;
}
private String displayHelp() {return "This is a help message.";}
private String displayVersion() {return "This is a version message.";}
```
Could someone please confirm that I'm on the right track with how this was envisioned to function by Vavr's designers or did I totally go off on a tangent in which case I'd appreciate some guidance as to how it should be.
Thank you in advance.
**Updated:**
```
import static io.vavr.API.run;
@Test public void match1() {
String arg = "-h";
Object r = Match(arg).of(
Case($(isIn("-h", "--help")), o -> run(this::displayHelp)),
Case($(isIn("-v", "--version")), o -> run(this::displayVersion)),
Case($(), o -> run(() -> {
throw new IllegalArgumentException(arg);
}))
);
System.out.println("match: " +r);
}
//private Void run(Supplier<Void> supp) {supp.get();}
private void displayHelp() {System.out.println("This is a help message.");}
private void displayVersion() {System.out.println("This is a version message.");}
```
| It's [`io.vavr.API.run`](https://www.javadoc.io/doc/io.vavr/vavr/0.9.2). According to the Javadoc, you're supposed to import the basic VAVR functionality via
```
import static io.vavr.API.*;
```
The `run` function calls a `Runnable` (a function `() -> void`) once and returns `(Void)null`. It's used because
```
Case($(isIn("-h", "--help")), o -> this.displayHelp())
```
does not work when `displayHelp()` is `void`, since `void` isn't a well-behaved type in Java. Specifically, `Supplier<void>` and `Function<?, void>` do not work. Additionally,
```
Case($(isIn("-h", "--help")), this.displayHelp())
```
would execute `displayHelp()` *before* the match, so the matching is useless. This rules out all three (ignoring argument number) overloads of `Case`. `run` fixes this, because `Supplier<Void>` and `Function<?, Void>` *are* possible, and taking a `Runnable` means the action can be deferred until the argument to `Case` is needed.
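Conceptually (this is just a sketch of the behaviour described above, not Vavr's actual source), `run` does something like:

```
// Runs the side effect once and returns a value that fits Supplier<Void> / Function<?, Void>
static Void run(Runnable action) {
    action.run();
    return null; // i.e. (Void) null
}
```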
|
Ajax render attribute don't work in a h:dataTable in JSF2
I have some problem's with a simple application in JSF 2.0.
I am trying to build a ToDo list with Ajax support. I have some todo strings which I display using a datatable. Inside this datatable I have a commandLink to delete a task. The problem is that the datatable doesn't get re-rendered.
```
<h:dataTable id="todoList" value="#{todoController.todos}" var="todo">
<h:column>
<h:commandLink value="X" action="#{todoController.removeTodo(todo)}">
<f:ajax execute="@this" render="todoList" />
</h:commandLink>
</h:column>
<h:column>
<h:outputText value="#{todo}"/>
</h:column>
</h:dataTable>
```
Thanks for your help.
Edit (TodoController):
```
@ManagedBean
@SessionScoped
public class TodoController {
private String todoStr;
private ArrayList<String> todos;
public TodoController() {
todoStr="";
todos = new ArrayList<String>();
}
public void addTodo() {
todos.add(todoStr);
}
public void removeTodo(String deleteTodo) {
todos.remove(deleteTodo);
}
/* getter / setter */
}
```
| *(Looks like I don't have enough reputation to comment on others' answers)*
I think FRotthowe suggests wrapping the table with another element and referencing it with an absolute reference (i.e. naming all the parent containers from the root of the document) in the <f:ajax> tag.
Something like this:
```
<h:form id="form">
<h:panelGroup id ="wrapper">
<h:dataTable value="#{backingBean.data}" var="list">
<h:column>
<h:commandButton value="-" action="#{backingBean.data}">
<f:ajax render=":form:wrapper"/>
</h:commandButton>
</h:column>
</h:dataTable>
</h:panelGroup>
</h:form>
```
But using absolute references is always a source of problems and makes refactoring time grow dramatically as the view does.
Isn't there a way to just render the table from a <f:ajax> tag (preventing JSF from adding those annoying ":number\_of\_row" prefixes in the ajax event)?
|
Azure Data Factory - Insert Sql Row for Each File Found
I need a data factory that will:
- check an Azure blob container for csv files
- for each csv file
- insert a row into an Azure Sql table, giving filename as a column value
There's just a single csv file in the blob container and this file contains five rows.
So far I have the following actions:
[![top-level](https://i.stack.imgur.com/FLmPO.png)](https://i.stack.imgur.com/FLmPO.png)
Within the for-each action I have a copy action. I did give this a source of a dynamic dataset which had a filename set as a parameter from @Item().name. However, as a result 5 rows were inserted into the target table whereas I was expecting just one.
The for-each loop executes just once, but I don't know how to use a data source that is a variable (or variables) holding the filename and timestamp.
| You are headed in the right direction, but within the For each you just need a Stored Procedure Activity that will insert the FileName (and whatever other metadata you have available) into Azure DB Table.
Like this:
[![ADF Pipeline Example](https://i.stack.imgur.com/bCvmg.png)](https://i.stack.imgur.com/bCvmg.png)
Here is an example of the stored procedure in the DB:
```
CREATE PROCEDURE Log.PopulateFileLog (@FileName varchar(100))
AS
INSERT INTO Log.CvsRxFileLog
select
@FileName as FileName,
getdate() as ETL_Timestamp
```
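Inside the For Each, the Stored Procedure activity's `FileName` parameter would typically be set with a dynamic-content expression like the following (this assumes the loop iterates over the Get Metadata child items, as in the question):

```
@item().name
```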
**EDIT:**
You could also execute the insert directly with a Lookup Activity within the For Each like so:
[![Lookup Activity within For Each](https://i.stack.imgur.com/Jc9No.png)](https://i.stack.imgur.com/Jc9No.png)
**EDIT 2**
This will show how to do it without a for each
NOTE: This is the most cost-effective method, especially when dealing with hundreds or thousands of files on a recurring basis!
**1st, Copy the output Json Array from your lookup/get metadata activity using a Copy Data activity with a Source of Azure SQLDB and Sink of Blob Storage CSV file**
-------SOURCE:
[![SOURCE of Copy Data Azure DB to Blob](https://i.stack.imgur.com/MZETJ.png)](https://i.stack.imgur.com/MZETJ.png)
-------SINK:
[![SINK of Copy Data Azure DB to Blob](https://i.stack.imgur.com/w6Dk7.png)](https://i.stack.imgur.com/w6Dk7.png)
**2nd, Create another Copy Data Activity with a Source of Blob Storage Json file, and a Sink of Azure SQLDB**
---------SOURCE:
[![Source of Copy Data Blob to Azure DB](https://i.stack.imgur.com/dkdXH.png)](https://i.stack.imgur.com/dkdXH.png)
---------SINK:
[![Sink of Copy Data Blob to Azure DB](https://i.stack.imgur.com/lAkVq.png)](https://i.stack.imgur.com/lAkVq.png)
---------MAPPING:
[![Mapping of Copy Data Blob to Azure DB](https://i.stack.imgur.com/7qgxR.png)](https://i.stack.imgur.com/7qgxR.png)
In essence, you save the entire JSON output to a file in Blob storage, then copy that file, as a JSON file type, into Azure SQL DB. This way you only have 3 activities to run even if you are inserting from a dataset that has 500 items in it.
|
How to test my servlet using JUnit
I have created a web system using Java Servlets and now want to add JUnit tests. My `dataManager` is just a basic piece of code that submits the data to the database. How would you test a Servlet with JUnit?
My code example that allows a user to register/sign up, which is submitted from my main page via AJAX:
```
public void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException{
// Get parameters
String userName = request.getParameter("username");
String password = request.getParameter("password");
String name = request.getParameter("name");
try {
// Load the database driver
Class.forName("com.mysql.jdbc.Driver");
//pass reg details to datamanager
dataManager = new DataManager();
//store result as string
String result = dataManager.register(userName, password, name);
//set response to html + no cache
response.setContentType("text/html");
response.setHeader("Cache-Control", "no-cache");
//send response with register result
response.getWriter().write(result);
} catch(Exception e){
System.out.println("Exception is :" + e);
}
}
```
| You can do this using [Mockito](https://github.com/mockito/mockito) to have the mock return the correct params, verify they were indeed called (optionally specify number of times), write the 'result' and verify it's correct.
```
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;
import java.io.*;
import javax.servlet.http.*;
import org.junit.Test;
public class TestMyServlet extends Mockito{
@Test
public void testServlet() throws Exception {
HttpServletRequest request = mock(HttpServletRequest.class);
HttpServletResponse response = mock(HttpServletResponse.class);
when(request.getParameter("username")).thenReturn("me");
when(request.getParameter("password")).thenReturn("secret");
StringWriter stringWriter = new StringWriter();
PrintWriter writer = new PrintWriter(stringWriter);
when(response.getWriter()).thenReturn(writer);
new MyServlet().doPost(request, response);
verify(request, atLeast(1)).getParameter("username"); // only if you want to verify username was called...
writer.flush(); // it may not have been flushed yet...
assertTrue(stringWriter.toString().contains("My expected string"));
}
}
```
|
How to find the vocabulary size of a spaCy model?
I am trying to find the vocabulary size of the large English model, i.e. `en_core_web_lg`, and I find three different sources of information:
- spaCy's docs: 685k keys, 685k unique vectors
- `nlp.vocab.__len__()`: 1340242 # (number of lexemes)
- `len(vocab.strings)`: 1476045
What is the difference between the three? I have not been able to find the answer in the docs.
| The most useful numbers are the ones related to word vectors. `nlp.vocab.vectors.n_keys` tells you how many tokens have word vectors and `len(nlp.vocab.vectors)` tells you how many unique word vectors there are (multiple tokens can refer to the same word vector in `md` models).
`len(vocab)` is the number of cached lexemes. In `md` and `lg` models most of those `1340242` lexemes have some precalculated features (like `Token.prob`) but there can be additional lexemes in this cache without precalculated features since more entries can be added as you process texts.
`len(vocab.strings)` is the number of strings related to both tokens and annotations (like `nsubj` or `NOUN`), so it's not a particularly useful number. All strings used anywhere in training or processing are stored here so that the internal integer hashes can be converted back to strings when needed.
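A quick sketch to print these numbers yourself (assuming `en_core_web_lg` is installed):

```
import spacy

nlp = spacy.load("en_core_web_lg")
print(nlp.vocab.vectors.n_keys)   # tokens that have a word vector
print(len(nlp.vocab.vectors))     # unique word vectors
print(len(nlp.vocab))             # cached lexemes
print(len(nlp.vocab.strings))     # strings for tokens and annotations
```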
|
rvest: html\_table() only picks up header row. Table has 0 rows
I'm learning how to webscrape with `rvest` and I'm running into some issues. Specifically, the code is only picking up the header-row.
```
library(rvest)
library(XML)
URL1 <- "https://swishanalytics.com/optimus/nba/daily-fantasy-salary-changes?date=2017-11-25"
df <- URL1 %>% read_html() %>% html_node("#stat-table") %>% html_table()
```
Calling `df` results in a data.frame with 7 columns and 0 rows. I installed inspector gadget, and even that is telling me that `id = #stat-table` is correct. What is unique about this website that it doesn't want to pickup the table data?
As a separate question, if I "View Page Source", I can see all the data on the page and I wouldn't have to use `RSelenium` to flip through DK, FD, or yahoo salaries. It looks like there are keys that would be easy to find (e.g. find "FD" > find all "player name:" and pick up characters after, etc), but I don't know of a library/process that handles the page source. Are there any resources for this?
Thanks.
| You could -- in theory -- extract the data from the `<script>` tag and then process it with `V8` but this is also pretty easy to do with `splashr` or `seleniumPipes`. I wrote `splashr` so I'll show that:
```
library(splashr)
library(rvest)
start_splash()
pg <- render_html(url="https://swishanalytics.com/optimus/nba/daily-fantasy-salary-changes?date=2017-11-25")
html_node(pg, "table#stat-table") %>%
html_table() %>%
tibble::as_tibble()
## # A tibble: 256 x 7
## Position Player Salary Change `Proj Fantasy Pts` `Avg Fantasy Pts` Diff
## <chr> <chr> <chr> <chr> <dbl> <chr> <chr>
## 1 PF Thon Maker $3,900 +$600 (18.2%) 12.88 13.24 -0.36
## 2 PG DeAndre Liggins $3,500 +$500 (16.7%) 9.68 7.80 +1.88
## 3 PG Elfrid Payton $6,400 +$700 (12.3%) 32.77 28.63 +4.14
## 4 C Jahlil Okafor $3,000 -$400 (-11.8%) 1.71 12.63 -10.92
## 5 PF John Collins $5,200 +$400 (8.3%) 29.65 24.03 +5.63
## 6 SG Buddy Hield $4,600 -$400 (-8.0%) 17.96 21.84 -3.88
## 7 SF Aaron Gordon $7,000 +$500 (7.7%) 32.49 36.91 -4.42
## 8 PG Kemba Walker $7,600 -$600 (-7.3%) 36.27 38.29 -2.02
## 9 PG Lou Williams $6,600 -$500 (-7.0%) 34.28 30.09 +4.19
## 10 PG Raul Neto $3,200 +$200 (6.7%) 6.81 10.57 -3.76
## # ... with 246 more rows
killall_splash()
```
BeautifulSoup won't read this data either. Well, you can target the `<script>` tag that has it in JS form and use a similar V8-engine on Python as well, but it's not going to be able to do this any easier than `rvest`.
Further expansion on ^^:
Most scraping guides tell you to do "Inspect Element" to eventually find the XPath or CSS selector to target. Inspecting on a random row of that table shows:
[![enter image description here](https://i.stack.imgur.com/yjFeq.png)](https://i.stack.imgur.com/yjFeq.png)
For "normal" sites, that generally works.
Sites with JS-rendered XHR requests (or on-page JS+data) will look like ^^ but your targeting won't work b/c `read_html()` (and the BeautifulSoup equiv) can't render JavaScript on pages without the help of some rendering engine. You can try to tell if this is happening by doing a View Source along with the element inspection. Here's the View Source for that site cropped to the very long lines of data + JS + HTML that eventually make the table:
[![enter image description here](https://i.stack.imgur.com/SazUC.png)](https://i.stack.imgur.com/SazUC.png)
I've posted numerous SO answers for how to target those `<script>` tags and use `V8`. Using `splashr` or `decapitated` is just easier (if they're installed and working).
If you don't want to deal with Docker and use a recent version of Chrome, you can also follow the guidance [here](https://developers.google.com/web/updates/2017/04/headless-chrome) to get headless working and do:
```
res <- system2("chrome", c("--headless", "--dump-dom", "https://swishanalytics.com/optimus/nba/daily-fantasy-salary-changes?date=2017-11-25"), stdout=TRUE)
```
`res` then becomes plain HTML that you can read in with `rvest` and scrape away.
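For example, a minimal sketch of feeding that dump back into `rvest` (assuming `res` came back as a character vector of HTML lines):

```
library(rvest)

pg <- read_html(paste0(res, collapse = "\n"))

html_node(pg, "table#stat-table") %>%
  html_table() %>%
  tibble::as_tibble()
```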
A package-in-development -- `decapitated` -- makes ^^ a bit less ugly:
```
install_github("hrbrmstr/decapitated")
library(decapitated)
library(rvest)
chrome_version()
## Google Chrome 63.0.3239.59 beta
pg <- chrome_read_html("https://swishanalytics.com/optimus/nba/daily-fantasy-salary-changes?date=2017-11-25")
html_node(pg, "table#stat-table") %>%
html_table() %>%
tibble::as_tibble()
## # A tibble: 256 x 7
## Position Player Salary Change `Proj Fantasy Pts` `Avg Fantasy Pts` Diff
## <chr> <chr> <chr> <chr> <dbl> <chr> <chr>
## 1 PF Thon Maker $3,900 +$600 (18.2%) 12.88 13.24 -0.36
## 2 PG DeAndre Liggins $3,500 +$500 (16.7%) 9.68 7.80 +1.88
## 3 PG Elfrid Payton $6,400 +$700 (12.3%) 32.77 28.63 +4.14
## 4 C Jahlil Okafor $3,000 -$400 (-11.8%) 1.71 12.63 -10.92
## 5 PF John Collins $5,200 +$400 (8.3%) 29.65 24.03 +5.63
## 6 SG Buddy Hield $4,600 -$400 (-8.0%) 17.96 21.84 -3.88
## 7 SF Aaron Gordon $7,000 +$500 (7.7%) 32.49 36.91 -4.42
## 8 PG Kemba Walker $7,600 -$600 (-7.3%) 36.27 38.29 -2.02
## 9 PG Lou Williams $6,600 -$500 (-7.0%) 34.28 30.09 +4.19
## 10 PG Raul Neto $3,200 +$200 (6.7%) 6.81 10.57 -3.76
## # ... with 246 more rows
```
NOTE: Headless Chrome is having issues on High Sierra due to the new permissions and sandboxing. It works on older macOS systems and Windows/Linux. You just need the right version and the right environment variable set.
|
Eclipse: How to import git project as library for Android Project which has to be pushed to Bitbucket
Sorry for the long title, here's the jist:
- I have an Android application project which I'm hosting on Bitbucket.
- There is a library on GitHub I'd like to add as a dependency.
I'm unsure of
1. How to add the github project as a library to my Eclipse project?
2. How will this work when pushing/pulling from Bitbucket?
Thanks, David.
| **Setting your dependency as a library:** you'll have to clone the project to a local folder, import it as a project into Eclipse, and in your project configuration you'll have to set the library project as a library: do a right-click in the project's name, go to Properties and under "Android" click in the checkbox "Is library".
**Adding the library to the main project:** In your main project, go to project properties the same way, and under "Android" click the "Add" button and add a reference to your library project.
More details here: <http://developer.android.com/tools/projects/projects-eclipse.html>
**git:** if you don't want to put the library's source code into your project you can add it to a .gitignore file and download it manually every time you clone your project from Bitbucket. You can also take a look at git submodules: <http://git-scm.com/book/en/Git-Tools-Submodules> . Sorry, but I have never used them, so I can't give you more details.
|
RuntimeError at / cannot cache function '\_\_shear\_dense': no locator available for file '/home/...site-packages/librosa/util/utils.py'
I am trying to host django application with apache2. But getting the following error.
```
RuntimeError at / cannot cache function '__shear_dense': no locator available for file '/home/username/project/env/lib/python3.6/site-packages/librosa/util/utils.py'
```
When running the Django development server, no such error is encountered, but with the apache2 server this error is thrown.
Similar question can be found here : [RuntimeError: cannot cache function '\_\_jaccard': no locator available for file '/usr/local/lib/python3.7/site-packages/librosa/util/matching.py'](https://stackoverflow.com/questions/56995232/runtimeerror-cannot-cache-function-jaccard-no-locator-available-for-file)
The problem is a WSGI error and appears to be due to the import of librosa and numba. I have been stuck on this for days. Any pointers on how to approach this problem will be highly appreciated.
| After spending a couple of days banging my head against this, and reading all I could google on this, I figured it out. Here goes.
**TL;DR:** Make very sure you set the `NUMBA_CACHE_DIR` environment variable to something your app can write to, and make sure that the variable is actually propagated to your app, and your app sees it. In some environments, this may appear so in local testing but may be silently lost when you deploy. **Really, test it! I read this advice probably a dozen times, and I thought I checked everything, and my problem was elsewhere but in the end I was wrong.**
**Details.**
The culprit is the location of caching directories, and the corresponding lack of write permissions for these directories in the numba package, which is a dependency for `librosa`. Librosa tries to cache some functions using `numba` decorators. Numba has four locator classes, which inform where the cache is to be written.
I think Numba tries to be clever and uses fallback strategies depending on what user specified (e.g. a dedicated cache directory), and what might be available in the system to write cache to. As a result, it usually works but when it doesn't, it might seem that you specified a perfectly good caching location, it got lost or overridden by the fallback strategy and then fail.
I have noticed that some of these fallback caching location strategies include trying to cache inside the library's root directory (in this case, librosa's), and to cache to `/root/something...` But I am now pretty sure that if you set `NUMBA_CACHE_DIR` correctly, it will be fine.
Below is my specific case: using `librosa` in AWS Lambda. What helped me was to add debugging printouts in various places in the locator classes in `numba/core/caching.py`
**My use case: AWS Lambda**
If you get this, chances are you are using some restrictive environment with somewhat unusual defaults.
In my case, it was AWS Lambda, the root of the docker container with the app is mounted read-only. So, one of strategies to cache to the library root dir was not an option.
The caching directory did not default to /tmp by itself. Eventually, I set it explicitly via `NUMBA_CACHE_DIR: /tmp` in the CloudFormation template, and it tested successfully when invoked locally but when I deployed it via ZIP file manually to AWS for testing, I forgot to set it again in the console, and it came to the app as None, and failed.
Once I specified the caching dir env var in the lambda console, it worked.
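In code, a minimal sketch of the same fix (the exact path is an assumption; any writable location works) is to set the variable before `numba`/`librosa` are imported:

```
import os

# Must happen before numba/librosa are imported
os.environ["NUMBA_CACHE_DIR"] = "/tmp/numba_cache"

import librosa
```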
**Various Sources that Helped**
<https://github.com/numba/numba/issues/5566>
<https://github.com/numba/numba/issues/4032>
|
Regular expression must contain and may only contain
I want a regular expression for python that matches a string which must contain 4 digits, it may not contain any special character other than "-" or ".", and it may only contain uppercase letters. I know the following matches text with 4 digits or more. How would I add the rest of the criteria?
`[0-9]{4,}`
An example would be:
**ART-4.5-11** is good, **ART5411** is good, **76543** is good, but **aRT-4!5-11** is bad since it contains a lowercase char and a special char that is not "-" or "."
| The pattern:
```
pattern = '^[A-Z.-]*(\d[A-Z.-]*){4,}$'
```
- `^` - start of the word
- `[A-Z.-]*` - any number of optional non-digit "good characters": letters, periods or dashes
- `(\d[A-Z.-]*){4,}` - 4 or more groups of a digit and other "good characters"; this part provides at least 4 digits
- `$` - end of the word
Examples:
```
re.match(pattern, "ART-4.5-11")
# <_sre.SRE_Match object; span=(0, 10), match='ART-4.5-11'>
re.match(pattern, "ART5411")
# <_sre.SRE_Match object; span=(0, 7), match='ART5411'>
re.match(pattern, "aRT-4!5-11") # No match
re.match(pattern, "76543")
# <_sre.SRE_Match object; span=(0, 5), match='76543'>
```
|
Files to remove from Windows XP to save space
I'm in the process of backing up an XP system, and I'm trying to save as much space as I can for long term storage. I would like to know what directories that might accumulate with time I can safely remove, to minimize the space required. What can I do?
| Having been concerned about disk space Windows takes for a long time, I'll throw in my share.
- Scour through each user's home folder and My Documents folder and remove unneeded files
- Run `cleanmgr`. On its Advanced tab, also clear System restore checkpoints (if you have them enabled at all); a scripted sketch of this step follows after this list
- Remove the hibernation file (`hiberfil.sys`) and/or relocate the swap file (`pagefile.sys`)
- Delete everything in `C:\Windows\SoftwareDistribution\Download`, which contains Windows Update download cache
- Delete everything in `C:\Windows\Temp`
- Delete everything in `C:\Documents and Settings\<user>\Local Settings\Temp`
- Delete the hidden `C:\Windows\$Nt*Uninstall*` folders, which allow you to uninstall updates you installed over time (not that you need it, right?). Be careful **not to delete** the folder `$hf_mig$`
- Search the partition for `*.log` files and delete them
- Using [CCleaner](http://www.piriform.com/ccleaner) may also help (or may not)
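A sketch of the scripted `cleanmgr` step mentioned above (the profile number `1` is arbitrary):

```
cleanmgr /sageset:1
cleanmgr /sagerun:1
```

`/sageset` lets you tick the categories once and saves the selection; `/sagerun` then runs that saved selection without prompting.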
|
cleanup tmp directory with carrierwave
I use carrierwave for my images upload, in my form i added a hidden field for caching like is described in documentation.
```
= form_for @user, html: {multipart: true} do |f|
%p
= f.label :image, "your image"
= f.file_field :image, id: "img"
= f.hidden_field :image_cache
```
but the problem is that after uploading images and saving the record, the tmp directory still contains all the temporary/cache files.
Is there a way to clean up the tmp directory?
I found this post [here](https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3a-Delete-cache-garbage-directories) but couldn't understand it well, and there is no simple example explained.
**Edit**
I tried to run this command in the console:
```
CarrierWave.clean_cached_files!
```
It outputs an array of all the files in the tmp directory, like this:
```
["/home/medBo/projects/my_project/public/uploads/tmp/1380732930-5006-6671","/home/medBo/projects/my_project/public/uploads/tmp/1380754280-4623-3698" ....
```
but when I go to check what happened, I find that all the files still exist in tmp (not removed).
I tried to read more in the link above and found special considerations about **CarrierWave.clean\_cached\_files!**:
>
> Special Considerations
>
>
> This method breaks uploaders that have more than one version defined.
> Your first version will be saved, but afterwards the cleanup code will
> run and remove the tmp file that is used to generate additional
> versions. In that case, you are better off creating a rake task that
> cleans out the tmp folders periodically.
>
>
>
What does "This method breaks uploaders that have more than one version" mean? (I ask because I use two versions, "thumb" and "large", in my uploader class):
```
class ImageUploader < CarrierWave::Uploader::Base
# Include RMagick or MiniMagick support:
include CarrierWave::RMagick
# include CarrierWave::MiniMagick
# Choose what kind of storage to use for this uploader:
storage :file
# storage :fog
# Override the directory where uploaded files will be stored.
# This is a sensible default for uploaders that are meant to be mounted:
def store_dir
"uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
...
...
version :large do
resize_to_limit(600, 600)
end
version :thumb do
process :crop_image
resize_to_fill(100, 100)
end
...
...
end
```
I also tried running a task to see if the folders inside the tmp/ directory would be removed, but the task doesn't work:
```
task :delete_tmp_files do
FileUtils.rm_rf Dir.glob("#{Rails.root}/public/uploads/tmp/*")
end
```
| CarrierWave will take care of tidying most of the tmp files and folders for you when everything is working properly. To catch the anomalies create a custom rake task to clean up the garbage and then use the Whenever gem to schedule this task to run every day, every hour etc.
my\_custom\_task.rake
```
task :delete_tmp_files do
FileUtils.rm_rf Dir.glob("#{Rails.root}/where/you/store/your/tmp_images/*") #public/tmp/screenshots etc
#note the asterisk which deletes folders and files whilst retaining the parent folder
end
```
call with `rake delete_tmp_files`
Ryan Bates has done a great railscast on setting up whenever in rails. <http://railscasts.com/episodes/164-cron-in-ruby-revised>
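A sketch of the corresponding `whenever` schedule (assuming the gem is installed and the rake task is named `delete_tmp_files` as above):

```
# config/schedule.rb
every 1.day, at: '4:30 am' do
  rake "delete_tmp_files"
end
```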
|
Unhide Excel Application Session
I have an Excel VBA method (I didn't write it) that runs and one of the first things it does is hide the Excel session `Application.Visible = False`.
However, when the method has finished, it does not unhide the Excel session so it remains open and listed in the Task Manager but is hidden and seemingly unusable.
Does anyone know, without having the VBE open (so one can access the Immediate Window and run `Application.Visible = True`), how to unhide this Excel session? At the moment, I'm simply having to kill the session using the Task Manager.
This isn't a massive deal but I'm just interested if anyone knows how to resurrect such a session.
|
>
> Like I said, it's not a big deal but was just interested if anyone knew of shortcut key or anything to bring it back.
>
>
>
There is no shortcut as such that I am aware of but you can do this.
Open MS Word and paste this code in the VBA Editor. Close all open instances of Excel which are visible and then run this code. This will make a hidden instance visible. Manually close the instance and repeat the process if there are more instances.
```
Option Explicit
Sub Sample()
Dim oXLApp As Object
'~~> Get an existing instance of an EXCEL application object
On Error Resume Next
Set oXLApp = GetObject(, "Excel.Application")
On Error GoTo 0
oXLApp.Visible = True
Set oXLApp = Nothing
End Sub
```
I am deliberately not using a loop, as the hidden instance may have a workbook which you might like to save.
If you want you can convert the above code to a VB Script document which you can directly run from the desktop.
>
> Unfortunately, I don't have the control to make the changes required.
>
>
>
What exactly do you mean? Is the VBA password protected? If not, then my suggestion is still the same as earlier:
>
> This is a case of poor programming. Even if we give a code to close
> all hidden Excel instances, that won't help you. Because next time you
> run that macro, you will face the same problem again. Why not edit the
> existing code and add Application.Visible = True at the end? Is the
> VBA password protected? – Siddharth Rout 28 mins ago
>
>
>
|
How do I implement a member wise comparison in java?
I'm from a C++ background and just started Java today. Say I have a class with a couple of data members. For example:
```
public class Person {
//Constructors/Destructor
public Person(String strFirstName, String strLastName) {
m_strFirstName = strFirstName;
m_strLastName = strLastName;
m_strFullName = m_strFirstName + m_strLastName;
}
//Getters
public String GetFullName() { return m_strFullName; }
public String GetFirstName() { return m_strFirstName; }
public String GetLastName() { return m_strLastName; }
//Private Data Members
private String m_strFirstName;
private String m_strLastName;
private String m_strFullName;
}
```
Now lets say I do this:
```
Person john = new Person("john", "doe");
Person johndoe = new Person("john", "doe");
if (john == johndoe) {
System.out.println("They are Equal");
} else {
System.out.println("They are NOT Equal");
}
```
Here the result is "They are NOT Equal". I understand this is because Java is comparing the references (memory addresses), and since they are different locations in memory the test fails. I have read that Java doesn't support operator overloading, so I can't overload operator==; is there a method I'd override to implement my memberwise comparison? The object.equals method looked promising, but I've read that it's bad practice to override this one.
UPDATE:
Ok I'm convinced overriding equals is OK! I can't find that article that said that it's bad. Thanks for the help, I'll probably have more questions as I learn Java!!
| You do:
```
if (john.equals(johndoe)) {
...
}
```
and implement the `equals()` method on your object:
```
public class Person {
private String firstName;
private String lastName;
private String fullName;
public Person(String firstName, String lastName) {
this.firstName = firstName;
this.lastName = lastName;
this.fullName = firstName + lastName;
}
public String getFirstName() { return firstName; }
public String getLastName() { return lastName; }
public String getFullName() { return fullName; }
@Override
public boolean equals(Object ob) {
if (ob == null) return false;
if (ob.getClass() != getClass()) return false;
Person other = (Person)ob;
if (!firstName.equals(other.firstName)) return false;
if (!lastName.equals(other.lastName)) return false;
if (!fullName.equals(other.fullName)) return false;
return true;
}
@Override
public int hashCode() {
return firstName.hashCode() ^ lastName.hashCode() ^ fullName.hashCode();
}
}
```
Two things to note:
1. The above is more Java-style than C++-style. I'd highly suggest you adopt Java coding conventions when coding Java. If you're new to Java, you should try and learn them; and
2. Java has an equals/hashCode contract meaning that if two objects are equal then their hash codes *must* be equal.
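If you're on Java 7 or later (an assumption about your environment), a sketch of the same two methods using `java.util.Objects` is a bit shorter and null-safe:

```
import java.util.Objects;

@Override
public boolean equals(Object ob) {
    if (this == ob) return true;
    if (ob == null || ob.getClass() != getClass()) return false;
    Person other = (Person) ob;
    return Objects.equals(firstName, other.firstName)
            && Objects.equals(lastName, other.lastName)
            && Objects.equals(fullName, other.fullName);
}

@Override
public int hashCode() {
    return Objects.hash(firstName, lastName, fullName);
}
```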
|
How to create a personal git function, usable every time I launch Bash?
I am developing a web site and I use git to update the site,
and I have two branches, so it goes:
- git add .
- git commit "something"
- git push
- git checkout "prod"
- git merge --no-ff dev
- git push
- git checkout "dev"
I need a lazygit function which would look like
```
function lazygit(){
git add .
git commit "$1"
git push
git checkout "prod"
git merge --no-ff dev
git push
git checkout "dev"
}
```
And would be used like
```
lazygit( "CSS UPDATE" )
```
Now my question is: how can I save this function into a file (or wherever) so I can use it anywhere?
Thanks a lot.
| Another option is to define a Git alias instead of a shell function.
Aliases are part of configuration, so let's look at the `gitconfig(7)` manual page (run `git help config` locally):
>
> `alias.*`
>
>
> Command aliases for the `git(1)` command wrapper - e.g. after defining `"alias.last = cat-file commit
> HEAD"`, the invocation "git last" is equivalent to "git cat-file commit HEAD". To avoid confusion and
> troubles with script usage, aliases that hide existing Git commands are ignored. Arguments are split by
> spaces, the usual shell quoting and escaping is supported. A quote pair or a backslash can be used to
> quote them.
>
>
> If the alias expansion is prefixed with an exclamation point, it will be treated as a shell command. For
> example, defining "alias.new = !gitk --all --not ORIG\_HEAD", the invocation "git new" is equivalent to
> running the shell command "gitk --all --not ORIG\_HEAD". Note that shell commands will be executed from
> the top-level directory of a repository, which may not necessarily be the current directory. GIT\_PREFIX
> is set as returned by running `git rev-parse --show-prefix` from the original current directory. See `git-rev-parse(1)`.
>
>
>
So you can do
```
$ git config --add alias.whatever '!set -eu; git add . &&
git commit "$1" &&
git push &&
git checkout prod &&
git merge --no-ff dev &&
git push &&
git checkout dev'
```
and then just
```
$ git whatever "commit message"
```
The `set -eu;` would make the whole thing crash unless you submit the required parameter.
Another approach would be to stick something like `test $# -gt 0 || exit 1;` there instead.
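If you'd rather keep the shell-function approach from the question, a minimal sketch is to define it in your `~/.bashrc` (or `~/.bash_profile`) so every new shell picks it up:

```
# ~/.bashrc
lazygit() {
    git add . &&
    git commit -m "$1" &&
    git push &&
    git checkout prod &&
    git merge --no-ff dev &&
    git push &&
    git checkout dev
}
```

Reload with `source ~/.bashrc` and call it as `lazygit "CSS UPDATE"`.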
|
Should I validate a method call's return value even if I know that the method can't return bad input?
I'm wondering if I should defend against a method call's return value by validating that they meet my expectations even if I know that the method I'm calling will meet such expectations.
GIVEN
```
User getUser(Int id)
{
User temp = new User(id);
temp.setName("John");
return temp;
}
```
SHOULD I DO
```
void myMethod()
{
User user = getUser(1234);
System.out.println(user.getName());
}
```
OR
```
void myMethod()
{
User user = getUser(1234);
// Validating
Preconditions.checkNotNull(user, "User can not be null.");
Preconditions.checkNotNull(user.getName(), "User's name can not be null.");
System.out.println(user.getName());
}
```
I'm asking this at the conceptual level: if I know the inner workings of the method I'm calling (either because I wrote it or I inspected it), and the logic of the possible values it returns meets my preconditions, is it "better" or "more appropriate" to skip the validation, or should I still defend against wrong values before proceeding with the method I'm currently implementing, even if the check should always pass?
---
### My conclusion from all answers (feel free to come to your own):
Assert when
- The method has shown to misbehave in the past
- The method is from an untrusted source
- The method is used from other places, and does not explicitly state its post-conditions
Do not assert when:
- The method lives closely to yours (see chosen answer for details)
- The method explicitly defines its contract with something like proper documentation, type safety, a unit test, or a post-condition check
- Performance is critical (in which case, a debug-mode assert could work as a hybrid approach; see the sketch below)
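A minimal sketch of that hybrid approach in Java (assuming the same `getUser` as above, with the checks active only when assertions are enabled via `java -ea`):

```
void myMethod() {
    User user = getUser(1234);
    // These checks are skipped in production unless assertions are enabled
    assert user != null : "getUser broke its contract: null user";
    assert user.getName() != null : "getUser broke its contract: null name";
    System.out.println(user.getName());
}
```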
| That depends on how likely getUser and myMethod are to change, and more importantly, **how likely they are to change independently of each other**.
If you somehow know for certain that getUser will never, ever, ever change in the future, then yes it's a waste of time validating it, as much as it is to waste time validating that `i` has a value of 3 immediately after an `i = 3;` statement. In reality, you don't know that. But there are some things you do know:
- Do these two functions "live together"? In other words, do they have the same people maintaining them, are they part of the same file, and thus are they likely to stay "in sync" with each other on their own? In this case it's probably overkill to add validation checks, since that merely means more code that has to change (and potentially spawn bugs) every time the two functions change or get refactored into a different number of functions.
- Is the getUser function is part of a documented API with a specific contract, and myMethod merely a client of said API in another codebase? If so, you can read that documentation to find out whether you should be validating return values (or pre-validating input parameters!) or if it really is safe to blindly follow the happy path. If the documentation does not make this clear, ask the maintainer to fix that.
- Finally, if this particular function has suddenly and unexpectedly changed its behavior in the past, in a way that broke your code, you have every right to be paranoid about it. Bugs tend to cluster.
Note that all of the above applies even if you are the original author of both functions. We don't know if these two functions are expected to "live together" for the rest of their lives, or if they'll slowly drift apart into separate modules, or if you have somehow shot yourself in the foot with a bug in older versions of getUser. But you can probably make a pretty decent guess.
|
How to set the default file (starting point) in .NET Core
I am trying to explore the .NET Core functionality to get a better understanding of it, so I executed the
```
dotnet new
dotnet build
dotnet run
```
commands in a command prompt window. It created a project for me, a file with the name `Project.cs` was created, and at the end it showed `Hello World!` in the window.
Now I have added one more file in the same folder with a different name, `SampleDotNetCoreApp.cs`, and I want to know how I can set `SampleDotNetCoreApp.cs` as the default starting point of program execution when I execute the `dotnet run` command.
In other words, I want to know how I can change the starting execution point in .NET Core when I have multiple .cs files in the same folder.
| You can edit the "Project.csproj" file to specify which `Main` method is used
```
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.1</TargetFramework>
<StartupObject>Project.SampleDotNetCoreApp</StartupObject>
</PropertyGroup>
```
Note the `StartupObject` tag, identifying the class to start the program with. This class must have a `static Main(string[] args)` method.
Use `dotnet restore` to ensure the changes are saved successfully, then build / run the project
>
> `dotnet run` and `dotnet build` do perform an automatic restore, so doing this manually is not necessary
>
>
>
- Yes, I know I'm late to the party, but I just encountered this issue and found it hard to solve. Might as well share my newfound wisdom.
|
How do I see a design of my databound ItemTemplate?
I have a simple `ListBox` and bound a collection of objects as `ItemsSource`.
The DataTemplate I assigned right now is very simple, but how can I see that template in the designer?
This is my xaml:
```
<ListBox.ItemTemplate>
<DataTemplate>
<Grid>
<TextBlock Text="{Binding Title}" />
<TextBlock Text="{Binding Address}" />
</Grid>
</DataTemplate>
</ListBox.ItemTemplate>
```
And this is how the designer looks:
![enter image description here](https://i.stack.imgur.com/k1QzC.png)
And this is how it looks when the data is bound and the app is running:
![enter image description here](https://i.stack.imgur.com/OGTA4.png)
How can I get the designer to show a preview of my `DataTemplate`?
I don't need the real data filled in (that happens at runtime), but a preview is appreciated.
You need design-time data. You can declare a design-time data context using the [d:DataContext](https://stackoverflow.com/questions/4033600/silverlight-constructor-injection-into-view-model-design-mode/4034057#4034057) property. You can create mock classes that expose mock lists for the designer to show at design time.
```
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
x:Class="WpfAnswer001.Window1"
d:DataContext="{StaticResource ResourceKey=MockMasterViewModel}"
Title="Window1" d:DesignWidth="523.5">
<Grid>
<ListBox ItemsSource="{Binding Path=Items}">
<ListBox.ItemTemplate>
<DataTemplate>
<StackPanel Margin="4">
<TextBlock Text="{Binding Title}" />
<TextBlock Text="{Binding Address}" />
</StackPanel>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
</Grid>
</Window>
```
This is how you declare the mock view model in App.xaml:
```
<Application x:Class="WpfAnswer001.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:WpfAnswer001"
StartupUri="Window1.xaml">
<Application.Resources>
<local:MockMasterViewModel x:Key="MockMasterViewModel"/>
</Application.Resources>
</Application>
```
This is how the code for the mock view model looks like:
```
using System.Collections.ObjectModel;
public class MockItemViewModel
{
public string Title { get; set; }
public string Address { get; set; }
}
public class MockMasterViewModel
{
public ObservableCollection<MockItemViewModel> Items { get; set; }
public MockMasterViewModel()
{
var item01 = new MockItemViewModel() { Title = "Title 01", Address = "Address 01" };
var item02 = new MockItemViewModel() { Title = "Title 02", Address = "Address 02" };
Items = new ObservableCollection<MockItemViewModel>()
{
item01, item02
};
}
}
```
This is how it looks in Visual Studio:
![enter image description here](https://i.stack.imgur.com/PYsEz.png)
Is it worth the effort and coding? That is up to you, but this is how it should be done.
Otherwise, put up with a blank designer and test only at runtime.
This is very useful when you are working a lot with Expression Blend and really need to see how the items look like.
|
Generating an Existential type with QuickCheck?
I'm struggling with this one - how could QuickCheck generate a value *for all* types? Maybe it could forge it, and only test types with the context `Arbitrary a => a`?
I'm just wondering how someone could make an instance of `Arbitrary` for data constructors with an existential type:
```
data Foo a = Foo a (forall b. (a -> b, b -> a))
```
| It's a bit hard to tell what you're really trying to do, especially since your example type doesn't make a lot of sense. Consider something else:
```
newtype WrappedLens s t a b = WrappedLens (forall f . Functor f => (a -> f b) -> s -> f t)
newtype WL = WL (WrappedLens (Int, Int) (Int, Int) Int Int)
```
Is it possible to make an arbitrary `WL`? Sure! Just pass `fmap` explicitly and use the arbitrary function instance. Is it possible to make an arbitrary `WL` that's a law-abiding lens? Now that is a much taller order.
I would speculate that the hard thing about making arbitrary values involving higher rank types is not the types so much as the fact that they tend to involve functions in some fashion, and it's hard to constrain arbitrary functions to the ones you actually want to consider.
|
python bokeh: get image from webcam and show it in dashboard
I want to display an image - e.g. captured with the webcam - in Bokeh. I tried image\_url and image\_rgba, but neither is working. Image\_url shows nothing, image\_rgba shows something, but there seems to be some index shift.
```
# -*- coding: utf-8 -*-
from bokeh.plotting import figure, show
import scipy.misc
import cv2
import matplotlib.pyplot as plt
import os
# capture video frame from webcam
#vc = cv2.VideoCapture(1)
vc = cv2.VideoCapture(-0)
rval, frame = vc.read()
vc.release()
# show captured image (figure 1)
fig = plt.figure()
ax = plt.Axes(fig,[0,0,1,1])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(frame)
plt.show()
# save captured image
scipy.misc.imsave('outfile.jpg', frame)
mypath = os.path.join(os.getcwd(),'outfile.jpg')
# 1. try using image_url ... not working (figure 2)
p = figure()
p.image_url(url=[mypath],x=[0],y=[1],w=[10],h=[10])
show(p)
# 2. try using image_rgba ... also not working (figure 3)
p = figure(x_range=(0,10), y_range=(0,10))
p.image_rgba(image=[frame], x=0, y=0, dw=10, dh=10)
show(p)
# 3. solution provided by @bigreddot ... flipped but also not working (figure4)
img = scipy.misc.imread(mypath)[::-1] # flip y-direction for Bokeh
p = figure(x_range=(0,10), y_range=(0,10))
p.image_rgba(image=[img], x=0, y=0, dw=10, dh=10)
show(p)
```
***update:*** according to @bigreddot 's comment
figure 1
[![figure 1](https://i.stack.imgur.com/bonfN.png)](https://i.stack.imgur.com/bonfN.png)
figure 2
[![figure 2](https://i.stack.imgur.com/fxvES.png)](https://i.stack.imgur.com/fxvES.png)
figure 3
[![figure 3](https://i.stack.imgur.com/VQRmE.png)](https://i.stack.imgur.com/VQRmE.png)
figure 4
[![figure 4](https://i.stack.imgur.com/vKhYT.png)](https://i.stack.imgur.com/vKhYT.png)
the bokeh version I use is 0.13 the python version 3.6.0
| After investigation, the return result from OpenCV is a Numpy array of bytes with shape *(M, N, 3)*, i.e. RGB *tuples*. What Bokeh expects is a Numpy array of shape *(M, N)* 32-bit integers representing RGBA values. So you need to convert from one format to the other. Here is a complete example:
```
from bokeh.plotting import figure, show
import scipy.misc
import cv2
import os
import numpy as np
#vc = cv2.VideoCapture(1)
vc = cv2.VideoCapture(-0)
rval, frame = vc.read()
vc.release()
M, N, _ = frame.shape
img = np.empty((M, N), dtype=np.uint32)
view = img.view(dtype=np.uint8).reshape((M, N, 4))
view[:,:,0] = frame[:,:,0] # copy the first colour channel
view[:,:,1] = frame[:,:,1] # copy the second colour channel
view[:,:,2] = frame[:,:,2] # copy the third colour channel
view[:,:,3] = 255          # alpha channel: fully opaque
img = img[::-1] # flip for Bokeh
p = figure()
p.image_rgba(image=[img],x=0,y=0, dw=10, dh=10)
show(p)
```
Generates:
[![enter image description here](https://i.stack.imgur.com/qIFUr.png)](https://i.stack.imgur.com/qIFUr.png)
|
Haskell construct analogous to Rust trait objects
Haskell supports *type classes*, like equality:
```
class Eq a where
(==) :: a -> a -> Bool
```
Rust does the same with *type traits*:
```
pub trait Draw {
fn draw(&self);
}
```
Now, it's possible to declare in Haskell a list whose elements must belong to the equality type class: `Eq a => [a]` (I believe `a` is called a *constrained type* in Haskell). However, the elements of the list still must all be the same type! Say, all `Integer` or all `Float` or something. In Rust, however, one can have a list (a vector) of values where each implements a given trait but they are not necessarily the same concrete type: `Vec<Box<dyn Draw>>`. Is there a way to do the same in Haskell? Like, I want a list of values but all I care about is that each belong to some type class but not necessarily the same concrete type.
| In Haskell, you can use existential types to express "some unknown type of this typeclass". (In older versions of GHC, you will need a few standard extensions on.)
```
class Draw a where
-- whatever the methods are
data SomeDraw where
SD :: Draw a => a -> SomeDraw
type MyList = [SomeDraw]
```
However, note that this is often overkill, and leads to a [known anti-pattern](https://lukepalmer.wordpress.com/2010/01/24/haskell-antipattern-existential-typeclass/).
For instance, if we had a class as follows:
```
class Draw a where
draw :: a -> String
```
then the type `MyList` above is isomorphic to `[String]` (or at least morally such). There is no advantage to store an unknown "drawable" object whose only method converts it to string compared to storing the string directly. Also note that Haskell is lazy, so you can "store a string which is not evaluated yet", so to speak.
Anyway, existential quantification on typeclasses can also be defined in a generic way:
```
import Data.Kind
-- Ex has the same role of "dyn" in Rust here
data Ex (c :: Type -> Constraint) where
Ex :: c a => a -> Ex c
type MyList = [Ex Draw]
```
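For completeness, a minimal sketch of consuming such a list, assuming the class has a `draw :: a -> String` method as in the discussion above and using the `SomeDraw` wrapper from the first snippet:

```
drawAll :: [SomeDraw] -> [String]
drawAll xs = [ draw x | SD x <- xs ]
```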
|
An example of a simple higher-order function in javascript
While going through [Eloquent Javascript (Chapter 6)](http://eloquentjavascript.net/chapter6.html) there is a reference to higher-order functions in Javascript. While there is an example provided in Chapter 3, I believe it could be a bit simpler since I still don't fully understand the concept. After searching the web I can't seem to find any succinct examples of a higher-order function.
I'd like to see a basic/simple higher-order function in Javascript that will explain the concept.
Higher-order functions are a concept from [functional programming](http://en.wikipedia.org/wiki/Functional_programming). In brief, a higher-order function is a function which takes another function as a parameter. In JavaScript, some higher-order functions have been added relatively recently.
```
Array.prototype.reduce
//With this function, we can do some funny things.
function sum(array){
return array.reduce(function(a, b){ return a + b; }, 0);
}
```
So, in the above sample, `reduce` is a higher-order function: it takes another function (the anonymous function in the sample) as a parameter. The signature of `reduce` looks like this:
```
reduce(func, init);
//func is a function takes two parameter and returns some value.
// init is the initial value which would be passed to func
//when calling reduce, some thing happen
//step 1.
[1, 2, 3, 4, 5].reduce(function(a, b){ return a + b }, 0);
//step 2.
[2, 3, 4, 5].reduce(function(a, b){ return a + b}, 0 + 1);
//step 3.
[3, 4, 5].reduce(function(a, b){ return a + b}, 0 + 1 + 2);
//...
```
As you can see, `reduce` iterates over the array, applies `func` to `init` and the first element of that array, then binds the result to `init` and repeats.
Another higher-order function is `filter`.
```
Array.prototype.filter
//As the name indicates, it filter out some unwanted values from an Aarry. It also takes a function, which returns a boolean value, true for keeping this element.
[1, 2, 3, 4, 5].filter(function(ele){ return ele % 2 == 0; });
```
With the above two examples, I have to say higher-order functions are not that easy to understand at first, especially `reduce`. But they are not *complex*; with higher-order functions your code actually becomes cleaner and more readable. Take `filter` as an example: it tells the reader that it throws all odd numbers away.
Here I'd like to implement a simple `filter` function to show you how.
```
function filter(array, func){
var output = [];
for(var i = 0; i < array.length; i++){
if(func(array[i])) output.push(array[i]);
}
return output;
}
```
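Used like this:

```
filter([1, 2, 3, 4, 5], function(ele){ return ele % 2 == 0; });
// [2, 4]
```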
|
React input focus event to display other component
I have read some tutorials about this. They told me I should use a ref to do that.
But it's very general.
Here is my problem:
Basically the `Header` component includes the `NavBar`, `SearchBar` and `ResultSearch` components.
```
const Header = () => {
return (
<header className="ss_header">
<Navbar />
<SearchBar />
<ResultSearch />
</header>
);
};
```
And in the `SearchBar` component: whenever I focus on the text input, it should emit an event and display the `ResultSearch` component in some way (changing a style, or ...).
```
class SearchBar extends Component{
render() {
return (
<div className="search_bar">
<section className="search">
<div className="sub_media container">
<form method="GET" action="" id="search_form">
<Icon icon="search" />
<span className="autocomplete">
<input
className="search_input"
autoCorrect="off"
autoComplete="off"
name="query"
type="text"
placeholder="Search for a movie, tv show, person..." />
</span>
</form>
</div>
</section>
</div>
);
}
}
```
In the `ResultSearch` component's style I set `display: none`:
```
.results_search { display: none; }
```
I think ResultSearch should receive an event from `SearchBar` and set `display: block` back on the `ResultSearch` component. Is that possible?
How can I handle that?
My Code here: <https://codesandbox.io/s/3xv8xnx3z5>
You only need to convert the Header component to the following:
```
class Header extends Component {
state = {
focus: false
};
handleInputFocus = () => {
this.setState({ focus: true });
};
handleInputBlur = () => {
this.setState({ focus: false });
};
render() {
return (
<header className="ss_header">
<SearchBar
onFocus={this.handleInputFocus}
onBlur={this.handleInputBlur}
/>
{this.state.focus ? <ResultSearch /> : null}
</header>
);
}
}
```
Also, in the SearchBar component, add the following attributes to your input:
```
onFocus={this.props.onFocus}
onBlur={this.props.onBlur}
```
Also, you should remove the CSS rule that hides the result box.
And, you can see the updated code on the following sandbox:
<https://codesandbox.io/s/mmj46xkpo9>
|
Fixing "Line indented incorrectly" error from phpcs
I am validating PHP code with [phpcs](http://pear.php.net/package/PHP_CodeSniffer) using:
```
phpcs --standard=PSR1 .
```
And it produces [this output](https://travis-ci.org/fulldecent/cameralife/builds/21399263) which is littered with:
```
FILE: /home/travis/build/fulldecent/cameralife/setup/upgrade/upgrade.php
--------------------------------------------------------------------------------
FOUND 7 ERROR(S) AND 1 WARNING(S) AFFECTING 8 LINE(S)
--------------------------------------------------------------------------------
34 | ERROR | Line indented incorrectly; expected 4 spaces, found 8
...
```
I tried to fix this with [php-cs-fixer](https://github.com/fabpot/PHP-CS-Fixer), however they do not support lexing and properly setting indentation, so it can only convert tabs. See: <https://github.com/fabpot/PHP-CS-Fixer/issues/229>
**Since phpcs confidently tells me how many spaces are required, is there way I can correct to the required indentation for the entire project?**
| First up, it might be good to know that those indent errors are coming from your PSR2 run and not the PSR1 run. PSR2 contains all of the checks from PSR1, so you don't actually need to do 2 PHPCS runs. You can just use --standard=PSR2 if you want to adhere to both of them.
As for fixing, the current alpha release of PHP\_CodeSniffer contains a script called phpcbf, which can fix errors automatically for you, including the indent issues. When I run the alpha version of PHP\_CodeSniffer on one of your files (setup/upgrade/upgrade.php) I get this report for PSR2:
```
phpcs --standard=PSR2 /path/to/file
--------------------------------------------------------------------------------
FOUND 8 ERRORS AND 1 WARNING AFFECTING 10 LINES
--------------------------------------------------------------------------------
34 | ERROR | [x] Line indented incorrectly; expected 4 spaces, found 8
36 | ERROR | [x] Line indented incorrectly; expected 8 spaces, found 12
40 | ERROR | [x] Line indented incorrectly; expected 4 spaces, found 8
43 | ERROR | [x] Line indented incorrectly; expected 8 spaces, found 12
47 | ERROR | [x] Line indented incorrectly; expected 8 spaces, found 12
51 | ERROR | [x] Line indented incorrectly; expected 12 spaces, found 16
52 | WARNING | [ ] Line exceeds 120 characters; contains 200 characters
55 | ERROR | [x] Line indented incorrectly; expected 4 spaces, found 8
60 | ERROR | [x] A closing tag is not permitted at the end of a PHP file
--------------------------------------------------------------------------------
PHPCBF CAN FIX THE 8 MARKED SNIFF VIOLATIONS AUTOMATICALLY
--------------------------------------------------------------------------------
```
If I then run PHPCS with the new diff report, it will show me what changes need to be made to the file, including this snippet:
```
phpcs --standard=PSR2 --report=diff /path/to/file
@@ -31,32 +31,29 @@
if ($installed_version >= $latest_version) {
echo "<p style=\"color:green\">No upgrade is necessary. Return to the <a href=\"../../\">main page</a>.</p>";
} else {
- foreach (glob(dirname(__FILE__) . '/*.inc') as $script) {
- $a = basename($script, '.inc');
- if (is_numeric($a) && ($a > $installed_version) && ($a <= $latest_version)) {
- $scripts[] = $a;
- }
+ foreach (glob(dirname(__FILE__) . '/*.inc') as $script) {
+ $a = basename($script, '.inc');
+ if (is_numeric($a) && ($a > $installed_version) && ($a <= $latest_version)) {
+ $scripts[] = $a;
}
```
If you want the file fixed automatically, you use the phpcbf command instead of the phpcs command:
```
phpcbf --standard=PSR2 /path/to/file
Patched 1 files
Time: 78 ms, Memory: 4.50Mb
```
You can read more about this here: <https://github.com/squizlabs/PHP_CodeSniffer/wiki/Fixing-Errors-Automatically>
And this is the release you are going to want to get: <https://github.com/squizlabs/PHP_CodeSniffer/releases/tag/2.0.0a1>
Or you can clone the Github repo and checkout the phpcs-fixer branch to get the very latest code. You can then run phpcs and phpcbf from the clone without having to install them via PEAR of Composer:
```
git clone -b phpcs-fixer git://github.com/squizlabs/PHP_CodeSniffer.git
cd PHP_CodeSniffer
php scripts/phpcs ...
php scripts/phpcbf ...
```
|
Dynamic MemberExpression
I am wanting to create a MemberExpression knowing only the field name; eg:
```
public static Expression<Func<TModel, T>> GenerateMemberExpression<TModel, T>(string fieldName)
{
PropertyInfo fieldPropertyInfo;
fieldPropertyInfo = typeof(TModel).GetProperty(fieldName);
var entityParam = Expression.Parameter(typeof(TModel), "e"); // {e}
var columnExpr = Expression.MakeMemberAccess(entityParam, fieldPropertyInfo); // {e.fieldName}
var lambda = Expression.Lambda(columnExpr, entityParam) as Expression<Func<TModel, T>>; // {e => e.column}
return lambda;
}
```
The problem with the above is that the field type must be strongly typed. Passing "object" in as the field type doesn't work. Is there any way to generate this? Even Dynamic LINQ doesn't appear to work.
| There are a number of issues with your code:
1. The parameter to your method is called `fieldName`, but you are getting a *property* out with it.
2. You are using the non-generic `Expression.Lambda` method to generate the expression, which may choose an inappropriate delegate-type if the type-argument `T` passed to the method is not the same as the property-type. In this case, the `as` cast from the expression to the method's return-type will fail and evaluate to `null`. Solution: Use the [generic](http://msdn.microsoft.com/en-us/library/bb336566.aspx) `Lambda` method with the appropriate type-arguments. No casting required.
3. If you solve the second issue, things will work fine when a safe reference-conversion is available from the property-type to `T`, but not when more complicated conversions such as boxing / lifting are required. Solution: Use the [**`Expression.Convert`**](http://msdn.microsoft.com/en-us/library/bb292051.aspx) method where necessary.
---
Here's an update to your sample that addresses these issues:
```
public static Expression<Func<TModel, T>> GenerateMemberExpression<TModel, T>
(string propertyName)
{
var propertyInfo = typeof(TModel).GetProperty(propertyName);
var entityParam = Expression.Parameter(typeof(TModel), "e");
Expression columnExpr = Expression.Property(entityParam, propertyInfo);
if (propertyInfo.PropertyType != typeof(T))
columnExpr = Expression.Convert(columnExpr, typeof(T));
return Expression.Lambda<Func<TModel, T>>(columnExpr, entityParam);
}
```
This will make all of the following calls succeed:
```
GenerateMemberExpression<FileInfo, string>("Name");
GenerateMemberExpression<string, int>("Length");
// Reference conversion
GenerateMemberExpression<FileInfo, object>("Name");
//Boxing conversion
GenerateMemberExpression<string, object>("Length");
//Lifted conversion
GenerateMemberExpression<string, int?>("Length");
```
|
How to get all Kubernetes Deployment objects using kubernetes java client?
I am planning to write simple program using kubernetes java client (<https://github.com/kubernetes-client/java/>). I could get all namespaces and pods but how do i get list of deployments in a given namespace? I couldn't find any method. Is there any way to get it?
```
for (V1Namespace ns: namespaces.getItems()) {
System.out.println("------Begin-----");
System.out.println("Namespace: " + ns.getMetadata().getName());
V1PodList pods = api.listNamespacedPod(ns.getMetadata().getName(), null, null, null, null, null, null, null, null, null);
int count = 0;
for (V1Pod pod: pods.getItems()) {
System.out.println("Pod " + (++count) + ": " + pod.getMetadata().getName());
System.out.println("Node: " + pod.getSpec().getNodeName());
}
System.out.println("------ENd-----");
}
```
| I guess you're looking for the following [example](https://github.com/kubernetes-client/java/blob/5ef1c54d43399ad747bd7f0fc99a63f1e4768b89/kubernetes/docs/AppsV1Api.md#listnamespaceddeployment):
```
public class Example {
public static void main(String[] args) {
ApiClient defaultClient = Configuration.getDefaultApiClient();
defaultClient.setBasePath("http://localhost");
// Configure API key authorization: BearerToken
ApiKeyAuth BearerToken = (ApiKeyAuth) defaultClient.getAuthentication("BearerToken");
BearerToken.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//BearerToken.setApiKeyPrefix("Token");
AppsV1Api apiInstance = new AppsV1Api(defaultClient);
String namespace = "namespace_example"; // String | object name and auth scope, such as for teams and projects
String pretty = "pretty_example"; // String | If 'true', then the output is pretty printed.
Boolean allowWatchBookmarks = true; // Boolean | allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored.
String _continue = "_continue_example"; // String | The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
String fieldSelector = "fieldSelector_example"; // String | A selector to restrict the list of returned objects by their fields. Defaults to everything.
String labelSelector = "labelSelector_example"; // String | A selector to restrict the list of returned objects by their labels. Defaults to everything.
Integer limit = 56; // Integer | limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
String resourceVersion = "resourceVersion_example"; // String | When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.
Integer timeoutSeconds = 56; // Integer | Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
Boolean watch = true; // Boolean | Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
try {
V1DeploymentList result = apiInstance.listNamespacedDeployment(namespace, pretty, allowWatchBookmarks, _continue, fieldSelector, labelSelector, limit, resourceVersion, timeoutSeconds, watch);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling AppsV1Api#listNamespacedDeployment");
System.err.println("Status code: " + e.getCode());
System.err.println("Reason: " + e.getResponseBody());
System.err.println("Response headers: " + e.getResponseHeaders());
e.printStackTrace();
}
}
}
```
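To plug this into the loop from the question, a minimal sketch (reusing the same null-for-default parameters as the `listNamespacedPod` call above; the exact parameter list depends on the client version, so treat the signature as an assumption) might be:
```
// Sketch: list deployments per namespace, mirroring the pod loop from the question.
AppsV1Api appsApi = new AppsV1Api();
for (V1Namespace ns : namespaces.getItems()) {
    V1DeploymentList deployments = appsApi.listNamespacedDeployment(
            ns.getMetadata().getName(), null, null, null, null, null, null, null, null, null);
    for (V1Deployment d : deployments.getItems()) {
        System.out.println("Deployment: " + d.getMetadata().getName());
    }
}
```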
|
How much does performance change with a VARCHAR or INT column - MySQL
I have many tables, with millions of lines, with MySQL. Those tables are used to store log lines.
I have a field "country" in VARCHAR(50). There is an index on this column.
Would it change performance a lot to store a countryId as an INT instead of this country field?
Thank you !
| Your question is a bit more complicated than it first seems. The simple answer is that `Country` is a string up to 50 characters. Replacing it by a 4-byte integer should reduce the storage space required for the field. Less storage means less I/O overhead in processing the query and smaller indexes. There are outlier cases of course. If `country` typically has a `NULL` value, then the current storage might be more efficient than having an id.
It gets a little more complicated, though, when you think about keeping the field up-to-date. One difference with a reference table is that the countries are now standardized, rather than being ad-hoc names. In general, this is a good thing. On the other hand, countries do change over time, so you have to be prepared to add a "South Sudan" or "East Timor" now and then.
If your database is heavy on inserts/updates, then changing the country field requires looking in the reference table for the correct value -- and perhaps inserting a new record there.
My opinion is "gosh . . . it would have been a good idea to set the database up this way in the beginning". At this point, you need to understand the effects on the application of maintaining a country reference table for the small performance gain of making the data structure more efficient and more accurate.
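If you do decide to normalize, a rough migration sketch could look like the following — note that the table and column names (`countries`, `log_lines`, `country_id`) are assumptions, not taken from your schema:
```
-- Hypothetical reference table for countries
CREATE TABLE countries (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL UNIQUE
);

-- Populate it from the existing log data, then switch the log table over
INSERT INTO countries (name)
SELECT DISTINCT country FROM log_lines;

ALTER TABLE log_lines ADD COLUMN country_id INT UNSIGNED NULL;

UPDATE log_lines l
JOIN countries c ON c.name = l.country
SET l.country_id = c.id;

-- After verifying the data, drop the old column and index the new one
ALTER TABLE log_lines
    DROP COLUMN country,
    ADD INDEX idx_country_id (country_id);
```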
|
ng-click in parent div also in child div
I've the following code:
```
<table class="table">
<tr>
<th>Name___</th>
</tr>
<tr ng-repeat="app in apps"
ng-click="go('/editApp/' + plugin.name);">
<td>
<span>{{app.name}}</span>
</td>
<td style="width: 100px;">
<i class="glyphicon glyphicon-pencil"
ng-click="openPopup(app)"></i>
</td>
</tr>
</table>
```
When I click on the OpenPopup, also the go() method firing, how can I do that, If I click on popup just the popup will fire?
| This happens because your `<td>` is nested inside the `<tr>`, so the click first fires `openPopup()` and then fires `go()`. You can use `$event.stopPropagation()` to stop the event from propagating to the `<tr>`.
Try
```
<table class="table">
<tr>
<th>Name___</th>
</tr>
<tr ng-repeat="app in apps"
ng-click="go('/editApp/' + plugin.name);">
<td>
<span>{{app.name}}</span>
</td>
<td style="width: 100px;">
<i class="glyphicon glyphicon-pencil"
ng-click="openPopup(app);$event.stopPropagation()"></i>
</td>
</tr>
</table>
```
|
Is it wise to rely on optimizations?
Should I write my code to be clear what I am doing and rely on the optimizer to clean up my code efficiency, or should I be obsessive about getting every last ounce of power out of my code?
And how much speed/size am I losing on by choosing one option over the other?
| There are 2 very different kinds of optimisations.
The first is micro-optimisations. These are things like (e.g.) changing `x = (y * 4 + z) / 2` into `x = y + y + z / 2`, or `x = y % 8` into `x = y & 7`. Compilers are very good at micro-optimisations, so don't bother.
The second is algorithmic optimisations. Things like replacing "array of structures" with "structure of arrays" to make code more suitable for SIMD, or using multiple threads instead of one to take advantage of multiple CPUs, or ensuring a list remains sorted while it's being created and modified to avoid the need to sort it after, or using an array instead of a linked list to get rid of "unpredictable (for CPUs) pointer chasing". These are things that compilers struggle with. They're also things that can require a major amount of effort if you attempt to retro-fit them into existing code; it's far better to consider these things during the design phase before any of the code is implemented.
|
How to find out if a Object is a integer or is a string or is a boolean?
I have an object and I want to detect what type is, so I can call
```
if (obj isa Integer)
put(key,integerval);
if (obj isa String)
put(key,stringval);
if (obj isa Boolean)
put(key,booleanval);
```
| You're pretty close, actually!
```
if (obj instanceof Integer)
put(key,integerval);
if (obj instanceof String)
put(key,stringval);
if (obj instanceof Boolean)
put(key,booleanval);
```
From the [JLS 15.20.2](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.20.2):
>
> *RelationalExpression* `instanceof` *ReferenceType*
>
>
> At run time, the result of the `instanceof` operator is `true` if the value of the *RelationalExpression* is not `null` and the reference could be cast (§15.16) to the *ReferenceType* without raising a `ClassCastException`. Otherwise the result is `false`.
>
>
>
Looking at your usage pattern, though, it looks like you may have bigger issues than this.
|
Use Entity framework I want to include only first children objects and not child of child(sub of sub)
Using Entity Framework, I want to include only the first level of child objects and not the children of children.
I have these two classes:
```
public class BusinessesTBL
{
public string ID { get; set; }
public string FirstName { get; set; }
public string lastName { get; set; }
public ICollection<OffersTBL> OffersTBLs { get; set; }
}
public class OffersTBL
{
public int ID { get; set; }
public string Name { get; set; }
public int CatId { get; set; }
public string BusinessesTBLID { get; set; }
public virtual BusinessesTBL BusinessesTBLs { get; set; }
}
```
When I try to fetch all offers by the CatId field, I need to return the BusinessesTBLs too, but the method also returns the offers again under each BusinessesTBL object. My code is:
```
public IQueryable<OffersTBL> GetOffersTBLsCat(int id)
{
    return db.OffersTBLs.Include(s => s.BusinessesTBLs).Where(s => s.CatId == id);
}
```
You can see the wrong result on :
<http://mycustom.azurewebsites.net/api/OffersApi/GetOffersTBLsCat/4>
As you can see it return all offers under each Business object while business object under each offer, And I want only to return offers with its Business object without offer under Business obj.
Could anyone help please?
| I now see that a big part of the original answer is nonsense.
Sure enough, the reason for the endless loop is relationship fixup. But you can't stop EF from doing that. Even when using `AsNoTracking`, EF performs relationship fixup in the objects that are materialized *in one query*. Thus, your query with `Include` will result in fully populated navigation properties `OffersTBLs` and `BusinessesTBLs`.
The message is simple: if you don't want these reference loops in your results, you have to project to a view model or DTO class, as in [one of the other answers](https://stackoverflow.com/a/30366778/861716). An alternative, less attractive in my opinion, when serialization is in play, is to configure the serializer to ignore reference loops. Yet another less attractive alternative is to get the objects separately with `AsNoTracking` and selectively populate navigation properties yourself.
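For example, a minimal sketch of such a projection for the method in the question — the DTO shape is an assumption, pick whatever fields you actually need:
```
public class OfferDto
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string BusinessFirstName { get; set; }
}

public IQueryable<OfferDto> GetOffersTBLsCat(int id)
{
    // Project to a flat DTO so there is no Offer -> Business -> Offers cycle to serialize
    return db.OffersTBLs
        .Where(o => o.CatId == id)
        .Select(o => new OfferDto
        {
            ID = o.ID,
            Name = o.Name,
            BusinessFirstName = o.BusinessesTBLs.FirstName
        });
}
```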
---
**Original answer:**
This happens because Entity Framework performs *relationship fixup*, which is the process that auto-populates navigation properties when the objects that belong there are present in the context. So with a circular references you could drill down navigation properties endlessly even when lazy loading is disabled. The Json serializer does exactly that (but apparently it's instructed to deal with circular references, so it isn't trapped in an endless loop).
The trick is to prevent relationship fixup from ever happing. Relationship fixup relies on the context's `ChangeTracker`, which caches objects to track their changes and associations. But if there's nothing to be tracked, there's nothing to fixup. You can stop tracking by calling `AsNoTracking()`:
```
db.OffersTBLs.Include(s => s.BusinessesTBLs)
.AsNoTracking()
```
If besides that you also disable lazy loading on the context (by setting `contextConfiguration.LazyLoadingEnabled = false`) you will see that only `OffersTBL.BusinessesTBLs` are populated in the Json string and that `BusinessesTBL.OffersTBLs` are empty arrays.
A bonus is that `AsNoTracking()` increases performance, because the change tracker isn't busy tracking all objects EF materializes. In fact, you should always use it in a disconnected setting.
|
Update data property / object in vue.js
is there a way I can programmatically update the `data` object / property in vue.js? For example, when my component loads, my data object is:
```
data: function () {
return {
cars: true,
}
}
```
And after an event is triggered, I want the `data` object to look like:
```
data: function () {
return {
cars: true,
planes: true
}
}
```
I tried:
```
<script>
module.exports = {
data: function () {
return {
cars: true
}
},
methods: {
click_me: function () {
this.set(this.planes, true);
}
},
props: []
}
</script>
```
But this gives me the error `this.set is not a function`. Can someone help?
Thanks in advance!
| Vue does not allow dynamically adding new root-level reactive properties to an already created instance. However, it’s possible to add reactive properties to a nested object, so you may create an object and add a new property like this:
```
data: function () {
    return {
        someObject: {
            cars: true,
        }
    }
}
```
and add the property with the [`vm.$set`](https://v2.vuejs.org/v2/guide/reactivity.html#For-Objects) method:
```
methods: {
click_me: function () {
this.$set(this.someObject, 'planes', true)
}
}
```
for `vue 1.x` use `Vue.set(this.someObject, 'planes', true)`
[reactivity](https://v2.vuejs.org/v2/guide/reactivity.html)
|
After upgrading to xcode 9, cordova app won't build, error 70, requires provisioning profile
Yesterday we upgraded from xcode 8.3.2 to version 9. And now our enterprise distribution apache cordova ios app refuses to build.
```
2017-09-21 07:37:16.787 xcodebuild[70400:217569] [MT] IDEDistribution: -[IDEDistributionLogging _createLoggingBundleAtPath:]: Created bundle at path '/var/folders/wj/yj3cfvh954gbc_btlhcrcx7nk7t4dj/T/App Name_2017-09-21_07-37-16.786.xcdistributionlogs'.
2017-09-21 07:37:16.938 xcodebuild[70400:217569] [MT] IDEDistribution: Step failed: <IDEDistributionSigningAssetsStep: 0x7ff756bbdf70>: Error Domain=IDEDistributionSigningAssetStepErrorDomain Code=0 "Locating signing assets failed." UserInfo={NSLocalizedDescription=Locating signing assets failed., IDEDistributionSigningAssetStepUnderlyingErrors=(
"Error Domain=IDEProvisioningErrorDomain Code=9 \"\"App Name.app\" requires a provisioning profile.\" UserInfo={NSLocalizedDescription=\"App Name.app\" requires a provisioning profile., NSLocalizedRecoverySuggestion=Add a profile to the \"provisioningProfiles\" dictionary in your Export Options property list.}"
)}
error: exportArchive: "App Name.app" requires a provisioning profile.
Error Domain=IDEProvisioningErrorDomain Code=9 ""App Name.app" requires a provisioning profile." UserInfo={NSLocalizedDescription="App Name.app" requires a provisioning profile., NSLocalizedRecoverySuggestion=Add a profile to the "provisioningProfiles" dictionary in your Export Options property list.}
** EXPORT FAILED **
Error: Error code 70 for command: xcodebuild with args: -exportArchive,-archivePath,App Name.xcarchive,-exportOptionsPlist,/Users/Shared/Workspace/github/AppName/platforms/ios/exportOptions.plist,-exportPath,/Users/Shared/Workspace/github/AppName/platforms/ios/build/device
```
(I replaced the apps name with "App Name" for this question)
I have verified that all the certs and provisioning profiles are set in xcode. And the build.json is set. I'm not really a Mac guy and I'm stumped on what to do next.
-Edit, Downgrading xcode to 8.3.3 fixed the problem. Not an ideal solution but not much I can do.
| If you specify your provisioning profile explicitly, like I do, it looks like this in your Cordova build.json:
```
"ios": {
  "debug": {
    "codeSignIdentity": "iPhone Developer",
    "developmentTeam":"MYTEAMID",
    "packageType": "developer",
    "iCloudContainerEnvironment": "Development"
  },
  "release": {
    "codeSignIdentity": "iPhone Distribution",
    "developmentTeam":"MYTEAMID",
    "provisioningProfile": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
    "packageType": "ad-hoc",
    "iCloudContainerEnvironment": "Production"
  }
}
```
**Please Note**
iCloudContainerEnvironment = Production/Development is only required if you use push notifications
You need to explicitly set manual signing and provide the provisioning keys in your ExportOptions.plist that is generated by Cordova. Unfortunately Cordova is not currently generating all of the required keys.
Here is what it needs to look like, at least for me:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>compileBitcode</key>
<false/>
<key>method</key>
<string>ad-hoc</string>
<key>iCloudContainerEnvironment</key>
<string>Production</string>
<key>provisioningProfiles</key>
<dict>
<key>my.bundle.idenifier</key>
<string>My Provisioning Profile Name</string>
</dict>
<key>signingCertificate</key>
<string>iPhone Distribution</string>
<key>signingStyle</key>
<string>manual</string>
<key>stripSwiftSymbols</key>
<true/>
<key>teamID</key>
<string>YOURTEAMID</string>
<key>thinning</key>
<string><none></string>
</dict>
</plist>
```
The file Cordova generates @ cordova/app/platforms/ios/exportOptions.plist looks like:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>compileBitcode</key>
<false/>
<key>method</key>
<string>development</string>
<key>teamID</key>
<string>MYTEAMID</string>
</dict>
</plist>
```
notice it is missing the important bits that Xcode 9 requires.
I generated the correct file by archiving the build manually, then exporting it which also creates the exportOptions.plist that I now use as reference.
After digging deeper, I found that this cannot be fixed after running "Cordova add platform iOS", because it is generated during the build phase dynamically. I decided to fork the Cordova-ios repo and submit a pull request. You may use my fork directly, or wait until the pull request is merged.
**Pull Request**
<https://github.com/apache/cordova-ios/pull/338/commits>
**Fork**
<https://github.com/jrryhrtn/cordova-ios>
**Usage notes from pull request**
See example below, please note that the provisioning profile can be the name or UUID of the profile. Name is preferred for maintenence, as UUID changes each time to regenerate the profile.
```
{
"ios": {
"debug": {
"codeSignIdentity": "iPhone Developer",
"developmentTeam":"YOURTEAMID",
"provisioningProfile": "provisioning profile name or UUID",
"packageType": "development"
},
"release": {
"codeSignIdentity": "iPhone Distribution",
"developmentTeam":"YOURTEAMID",
"provisioningProfile": "provisioning profile name or UUID",
"packageType": "ad-hoc"
}
}
}
```
I plan to manually patch until the fix is merged into the next Cordova release. You will have to regenerate your iOS platform after the patch via "Cordova platform rm iOS" and then "Cordova platform add ~/forks/cordova-ios". ~/forks/cordova-ios is my local path; use the path on your local machine where you downloaded the forked cordova-ios repo.
**Update**
cordova-ios 4.5.2 has been officially released! Upgrade by running the following commands: "cordova platform rm ios", and then "cordova platform add ios@4.5.2"
Cheers!
|
What exactly is the ResourceConfig class in Jersey 2?
I have seen a lot of Jersey tutorials that starts with something like
```
@ApplicationPath("services")
public class JerseyApplication extends ResourceConfig {
public JerseyApplication() {
packages("com.abc.jersey.services");
}
}
```
without explaining what exactly the `ResourceConfig` class is. So where can I find its documentation, usage, etc.? Googling for "jersey resourceconfig" does not yield any official doc.
Some of my questions about this class and its usage are:
- What things can I do inside the subclass of `ResourceConfig`?
- Do I need to register the subclass of `ResourceConfig` somewhere so that it can be found or is it automatically detected by Jersey?
- If the subclass is automatically detected what happens if I have multiple subclasses of `ResourceConfig`?
- Is the purpose of `ResourceConfig` the same as the `web.xml` file? If so what happens if I have both in my project? Does one of them take precedence over the other?
| Standard JAX-RS uses an [`Application`](https://javaee.github.io/javaee-spec/javadocs/javax/ws/rs/core/Application.html) as its configuration class. [`ResourceConfig`](https://eclipse-ee4j.github.io/jersey.github.io/apidocs/latest/jersey/org/glassfish/jersey/server/ResourceConfig.html) *extends* `Application`.
There are three main ways (in a servlet container) to configure Jersey (JAX-RS):
1. With only web.xml
2. With both web.xml *and* an `Application/ResourceConfig` class
3. With only an `Application/ResourceConfig` class annotated with `@ApplicationPath`.
## With only web.xml
It is possible to configure the application in a standard JAX-RS way, but the following is specific to Jersey
```
<web-app>
<servlet>
<servlet-name>jersey-servlet</servlet-name>
<servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
<init-param>
<param-name>jersey.config.server.provider.packages</param-name>
<param-value>com.mypackage.to.scan</param-value>
</init-param>
</servlet>
...
<servlet-mapping>
<servlet-name>jersey-servlet</servlet-name>
<url-pattern>/api/*</url-pattern>
</servlet-mapping>
...
</web-app>
```
Since Jersey runs in a servlet container, it is only right that the Jersey application runs as a servlet. The Jersey Servlet that handles incoming requests is the [`ServletContainer`](https://eclipse-ee4j.github.io/jersey.github.io/apidocs/latest/jersey/org/glassfish/jersey/servlet/ServletContainer.html). So here we declare it as the `<servlet-class>`. We also configure an `<init-param>` telling Jersey which package(s) to scan for our `@Path` and `@Provider` classes so it can register them.
Under the hood, Jersey will actually create a `ResourceConfig` instance, as that's what it uses to configure the application. Then it will register all the classes that it discovers through the package scan.
## With both web.xml and `Application/ResourceConfig`
If we want to programmatically configure our application with an `Application` or `ResourceConfig` subclass, we can do so with one change to the above web.xml. Instead of setting an init-param to scan for packages, we use an init-param to declare our `Application/ResourceConfig` subclass.
```
<servlet>
<servlet-name>jersey-servlet</servlet-name>
<servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
<init-param>
<param-name>javax.ws.rs.Application</param-name>
<param-value>com.example.JerseyApplication</param-value>
</init-param>
<servlet-mapping>
<servlet-name>jersey-servlet</servlet-name>
<url-pattern>/api/*</url-pattern>
</servlet-mapping>
</servlet>
```
```
package com.example;
public class JerseyApplication extends ResourceConfig {
public JerseyApplication() {
packages("com.abc.jersey.services");
}
}
```
Here, we configure the `init-param` `javax.ws.rs.Application` with the fully qualified name of our `ResourceConfig` subclass. And instead of using the `init-param` that tells Jersey which package(s) to scan, we just use the convenience method `packages()` of the `ResourceConfig`.
We could also use the methods `register()` and `property()` to register resources and providers, and to configure Jersey properties. Anything that can be configured as an `init-param` can also be configured using the `property()` method. For instance, instead of calling `packages()`, we could do
```
public JerseyApplication() {
property("jersey.config.server.provider.packages",
"com.mypackage.to.scan");
}
```
## With only `Application/ResourceConfig`
Without a web.xml, Jersey needs a way for us to provide the servlet-mapping. We do this with the `@ApplicationPath` annotation.
```
// 'services', '/services', or '/services/*'
// is all the same. Jersey will change it to be '/services/*'
@ApplicationPath("services")
public class JerseyApplication extends ResourceConfig {
public JerseyApplication() {
packages("com.abc.jersey.services");
}
}
```
Here with the `@ApplicationPath`, it's just like if we configured the servlet mapping in the web.xml
```
<servlet-mapping>
<servlet-name>JerseyApplication</servlet-name>
<url-pattern>/services/*</url-pattern>
</servlet-mapping>
```
When using only Java code for configuration, there needs to be some way for Jersey to discover our configuration class. This is done with the use of a [`ServletContainerInitializer`](https://javaee.github.io/javaee-spec/javadocs/javax/servlet/ServletContainerInitializer.html). This is something that was introduced in the Servlet 3.0 Specification, so we cannot use "Java only" configuration in earlier servlet containers.
Basically what happens is that the implementor of the initializer can tell the servlet container what classes to look for, and the servlet container will pass those classes to the initializer `onStartup()` method. In Jersey's implementation of the initializer, Jersey configures it to look for `Application` classes and classes annotated with `@ApplicationPath`. See [this post](https://stackoverflow.com/a/29730471/2587435) for further explanation. So when the servlet container starts the application, Jersey's initializer will get passed our `Application/ResourceConfig` class.
## What things can I do inside the subclass of ResourceConfig
Just look at the [javadoc](https://eclipse-ee4j.github.io/jersey.github.io/apidocs/latest/jersey/org/glassfish/jersey/server/ResourceConfig.html). It's mostly just registration of classes. Not much else you need to do with it. The main methods you will be using are the `register()`, `packages()`, and `property()` methods. The `register()` method lets you manually register classes and instances of resources and providers. The `packages()` method, discussed earlier, lists the package(s) you want Jersey to scan for `@Path` and `@Provider` classes and register them for you. And the `property()` method allows you to set some [configurable properties](https://eclipse-ee4j.github.io/jersey.github.io/apidocs/latest/jersey/org/glassfish/jersey/server/ServerProperties.html) 1.
The `ResourceConfig` is just a convenience class. Remember, it extends `Application`, so we could even use the standard `Application` class
```
@ApplicationPath("/services")
public class JerseyApplication extends Application {
private final Set<Class<?>> classes;
private final Set<Object> singletons;
public JerseyApplication() {
// configure in constructor as Jersey
// may call the getXxx methods multiple times
this.classes = new HashSet<>();
this.classes.add(MyResource.class);
this.singletons = new HashSet<>();
this.singletons.add(new MyProvider());
}
@Override
public Set<Class<?>> getClasses() {
return this.classes;
}
@Override
public Set<Object> getSingletons() {
return this.singletons;
}
@Override
public Map<String, Object> getProperties() {
final Map<String, Object> properties = new HashMap<>();
properties.put("jersey.config.server.provider.packages",
"com.mypackage.to.scan");
return properties;
}
}
```
With a `ResourceConfig`, we would just do
```
public class JerseyApplication extends ResourceConfig {
public JerseyApplication() {
register(MyResource.class);
register(new MyProvider());
packages("com.mypackages.to.scan");
}
}
```
Aside from being more convenient, there are also a few things under the hood that help Jersey configure the application.
## An SE Environment
All the examples above assume you are running in an installed server environment, e.g. Tomcat. But you can also run the app in an SE environment, where you run an embedded server and start the app from a `main` method. You will sometimes see these examples when searching around for info, so I want to show what that looks like, so that if you ever do come across this, you are not surprised and know how it differs from your setup.
So sometimes you will see an example like
```
ResourceConfig config = new ResourceConfig();
config.packages("com.my.package");
config.register(SomeFeature.class);
config.property(SOME_PROP, someValue);
```
What is most likely happening here is that the example is using an embedded server, like Grizzly. The rest of the code to start the server might be something like
```
public static void main(String[] args) {
ResourceConfig config = new ResourceConfig();
config.packages("com.my.package");
config.register(SomeFeature.class);
config.property(SOME_PROP, someValue);
String baseUri = "http://localhost:8080/api/";
HttpServer server = GrizzlyHttpServerFactory
.createHttpServer(URI.create(baseUri), config);
server.start();
}
```
So in this example, there is a standalone server being started and the `ResourceConfig` is used to configure Jersey. The difference from the previous examples is that here we are not extending the `ResourceConfig`, but instead just instantiating it. It wouldn't be any different if we were to do
```
public class JerseyConfig extends ResourceConfig {
public JerseyConfig() {
packages("com.my.package");
register(SomeFeature.class);
property(SOME_PROP, someValue);
}
}
HttpServer server = GrizzlyHttpServerFactory
.createHttpServer(URI.create(baseUri), new JerseyConfig());
```
Say you were going through some tutorial and it showed a configuration for a standalone app where they instantiate the `ResourceConfig`. But you are running your app in an installed servlet container and have been using the earlier configuration where you are extending the `ResourceConfig`. Well now you know what the difference is and what changes you need to make. I've seen people do some really weird stuff because they didn't understand this difference. For example I saw someone instantiating a `ResourceConfig` inside a resource class. So this is why I added this extra little piece; so you don't make the same mistake.
---
### Footnotes
1. There are a number of different configurable properties. The link to the [`ServerProperties`](https://eclipse-ee4j.github.io/jersey.github.io/apidocs/latest/jersey/org/glassfish/jersey/server/ServerProperties.html) are just some general properties. There are also different properties related to specific features. The documentation should mention these properties in the section of the docs related to that feature. For a complete list of *all* configurable properties, you can look at all the [Jersey constants](https://eclipse-ee4j.github.io/jersey.github.io/apidocs/latest/jersey/constant-values.html) and look for the ones where the string value starts with `jersey.config`. If you are using a web.xml, then you would use the string value as the `init-param` `param-name`. If you are using Java config (`ResourceConfig`), then you would call `property(ServerProperties.SOME_CONF, value)`
|
@PreAuthorize on spring controller sending redirect if authorization fails
I've got spring security successfully evaluating a @PreAuthorize on my controller. If I use "permitAll" then I can view the page, and if I use "isAuthenticated()" then I get an ugly Access is Denied stack trace. If I put the configuration in an intercept-url within the http node in my security context configuration xml file then I am nicely redirected to the login page instead of getting the nasty stack trace right in my page.
Is there a way for me to get the redirection with the annotation mechanism only?
| I got this to work. There were a couple of things I had to deal with.
First, my Spring MVC configuration had a SimpleMappingExceptionResolver with a defaultErrorView configured. That was intercepting the Authentication and Authorization errors before they could get to the access-denied-handler that I had configured in the http element in my security configuration. The final code looks something like this.
securitycontext.xml
```
<global-method-security pre-post-annotations="enabled"/>
<!-- HTTP security configurations -->
<http auto-config="false" use-expressions="true" entry-point-ref="loginUrlAuthenticationEntryPoint">
<access-denied-handler ref="myAccessDeniedHandler" />
... other configuration here ...
</http>
<!-- handler for authorization failure. Will redirect to the login page. -->
<beans:bean id="myAccessDeniedHandler" class="org.springframework.security.web.access.AccessDeniedHandlerImpl">
<beans:property name="errorPage" value="/index" />
</beans:bean>
```
note that the loginUrlAuthenticationEntryPoint is actually not a part of the solution, it's the access-denied-handler.
my mvc-config.xml still has the SimpleMappingExceptionResolver, but with no defaultErrorView configured. If I were to continue with this path, I would probably implement my own SimpleMappingExceptionResolver that would let Authentication and Authorization exceptions pass through, OR configure it in the SimpleMappingExceptionResolver instead.
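A sketch of what such a subclass might look like (this is an assumption of the approach, not code I actually shipped) — returning null tells Spring MVC that this resolver does not handle the exception, so it can propagate to the security layer:
```
import org.springframework.security.access.AccessDeniedException;
import org.springframework.security.core.AuthenticationException;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.SimpleMappingExceptionResolver;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SecurityAwareExceptionResolver extends SimpleMappingExceptionResolver {

    @Override
    protected ModelAndView doResolveException(HttpServletRequest request,
                                              HttpServletResponse response,
                                              Object handler, Exception ex) {
        // Let security exceptions fall through to the access-denied-handler
        if (ex instanceof AccessDeniedException || ex instanceof AuthenticationException) {
            return null;
        }
        return super.doResolveException(request, response, handler, ex);
    }
}
```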
The killer in this deal is that I haven't found a way to configure the requires-channel="https" from the intercept-url through an annotation, so I'll be putting it in the xml configuration file for now anyway.
|
307 Redirect when loading analytics.js in Chrome
I'm building a web app and using Google Analytics (analytics.js) for analytics. I recently noticed that analytics aren't working properly in Chrome.
I'm loading analytics using the standard code snippet in a separate module and included via requirejs. I've verified that this script runs as expected and executes the analytics snippet.
When I inspect network traffic in Firefox, I can see that the analytics script is loaded from Google as expected (HTTP 200 response):
![enter image description here](https://i.stack.imgur.com/hpT3S.png)
However, when I run the exact same page in Chrome, I get an HTTP 307 response pointing to about:blank, and analytics does not run:
![enter image description here](https://i.stack.imgur.com/7Anb1.png)
However, if I paste the analytics URL directly into the Chrome address bar, the script is found. Any ideas what's going on here, or how to fix it?
| `307 Internal Redirect` with `Non-Authorative-Reason: Delegate` indicates that the request was intercepted and modified (redirected) by a Chrome extension via the [webRequest](https://developer.chrome.com/extensions/webRequest) or [declarative webRequest](https://developer.chrome.com/extensions/declarativeWebRequest) extension APIs.
You can find out which extension triggered the redirect as follows:
1. Visit `chrome://net-internals/#events`
2. Trigger the request (google analytics, in your case).
3. Go back to the `chrome://net-internals/#events` tab and look for a URL\_REQUEST matching your request (you can use the searchbox to filter the search).
4. Click on the entry to show the log at the right side. You will see the extension name, extension ID and other information about the request:
```
t=7910 [st=0] +REQUEST_ALIVE [dt=6]
t=7910 [st=0] +URL_REQUEST_DELEGATE [dt=5]
t=7910 [st=0] DELEGATE_INFO [dt=5]
 --> delegate_info = "extension [Name of extension]"
t=7915 [st=5]  CHROME_EXTENSION_REDIRECTED_REQUEST
 --> extension_id = "ebmlimjkpnhckbaejoagnjlgcdhdnjlb"
t=7915 [st=5] -URL_REQUEST_DELEGATE
t=7915 [st=5] +URL_REQUEST_START_JOB [dt=1]
 --> load_flags = 339804160 (BYPASS_DATA_REDUCTION_PROXY | MAYBE_USER_GESTURE | REPORT_RAW_HEADERS | VERIFY_EV_CERT)
 --> method = "GET"
 --> priority = "LOW"
 --> url = "https://www.google-analytics.com/analytics.js"
t=7915 [st=5]  URL_REQUEST_REDIRECT_JOB
 --> reason = "Delegate"
t=7915 [st=5]  URL_REQUEST_FAKE_RESPONSE_HEADERS_CREATED
 --> HTTP/1.1 307 Internal Redirect
     Location: about:blank
     Non-Authoritative-Reason: Delegate
```
In this log sample, an extension with name "[Name of extension]" and extension ID "ebmlimjkpnhckbaejoagnjlgcdhdnjlb" redirected the request. After finding the extension name and/or ID, you can visit `chrome://extensions` and disable or remove the extension that modified the request.
|
Auto grow textarea with knockout js
I've implemented the logic of auto-expanding the height of a textarea on the keyup event. However, I want the textarea to also initialise its height once the value is bound to the textarea via a knockout custom binding. Any solutions? (With the use of only KnockoutJS, without using jQuery or any other library.)
| I'd strongly advise against using an event to trigger the resize. Instead, you can use the `textInput` binding to keep track of the input in an observable and subscribe to changes there.
Here's an example:
```
<textarea data-bind="textInput: value, autoResize: value"></textarea>
```
```
ko.bindingHandlers.autoResize = {
init: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
ko.computed(function() {
ko.unwrap(valueAccessor());
resizeToFitContent(element);
})
}
};
```
This works because:
- The `textInput` binding writes any input change to an observable `value` variable.
- The `computed` uses this value to trigger a resize. This creates a subscription automatically.
This is better than a `keydown` approach because it deals with stuff like `Right Mouse Button > cut`
Example showing the `event` equivalent as well:
```
var resizeToFitContent = function(el) {
// http://stackoverflow.com/a/995374/3297291
el.style.height = "1px";
el.style.height = el.scrollHeight + "px";
}
ko.bindingHandlers.autoResize = {
init: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
ko.computed(function() {
ko.unwrap(valueAccessor());
resizeToFitContent(element);
})
}
};
ko.applyBindings({
value: ko.observable("Test"),
onKey: function(data, event) {
resizeToFitContent(event.target);
}
});
```
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.2.0/knockout-min.js"></script>
<textarea data-bind="textInput: value, autoResize: value"></textarea>
<textarea data-bind="event: { keyup: onKey }"></textarea>
```
|
Vue computed setter not working with checkboxes?
I have a computed setter:
```
rating: {
get() {
return this.$store.state.rating;
},
set(value) {
console.log(value);
this.$store.commit('updateFilter', {
name: this.name,
value
});
}
}
```
This is linked to my rating like so:
```
<label>
<input type="checkbox" :value="Number(value)" v-model="rating">
{{ index }}
</label>
```
I expect the computed setter to log an array because when I use a watcher to watch for changes on the rating model I am getting an array.
Except whenever I use a computed setter like above it simply outputs `true` when a checkbox is selected or `false` when they are all deselected.
What is going on here? Should I be getting an array, just as with a watcher?
| `v-model` has somewhat ["magical"](https://v2.vuejs.org/v2/guide/forms.html#Checkbox) behavior, particularly when [applied to checkboxes](https://v2.vuejs.org/v2/guide/forms.html#Checkbox). When bound to an array, the checkbox will add or remove the value to/from the array based on its checked state.
It is not clear in your example what `value` is in `Number(value)`. It should be a value that you want included in the array when the box is checked.
Taking the example from the Vue docs linked above, I have modified it to use a computed, and it works as you might expect, with the `set` getting the new value of the array.
```
new Vue({
el: '#app',
data: {
checkedNames: []
},
computed: {
proxyCheckedNames: {
get() {
return this.checkedNames;
},
set(newValue) {
this.checkedNames = newValue;
}
}
}
});
```
```
<script src="//cdnjs.cloudflare.com/ajax/libs/vue/2.3.4/vue.min.js"></script>
<div id="app">
<input type="checkbox" id="jack" value="Jack" v-model="proxyCheckedNames">
<label for="jack">Jack</label>
<input type="checkbox" id="john" value="John" v-model="proxyCheckedNames">
<label for="john">John</label>
<input type="checkbox" id="mike" value="Mike" v-model="proxyCheckedNames">
<label for="mike">Mike</label>
<br>
<span>Checked names: {{ checkedNames }}</span>
</div>
```
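Applied to the Vuex-backed computed from the question, a minimal sketch would look like the following — the key assumptions are that `state.rating` is initialized as an array (e.g. `rating: []`) and that the `updateFilter` mutation stores the whole array; if the getter returns anything other than an array, the checkbox falls back to boolean true/false behaviour, which is what you observed:
```
computed: {
  rating: {
    get() {
      // Must return an array for v-model's array handling to kick in
      return this.$store.state.rating;
    },
    set(value) {
      // `value` is the full updated array of checked values
      this.$store.commit('updateFilter', {
        name: this.name,
        value
      });
    }
  }
}
```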
|
Android cookies using phonegap
I have developed a phonegap application that has a user login page with a text box for username/password, a check box for "remember my info", and a submit button.
Pretty standard.
When I open it in Firefox, the cookie works fine and my login data is remembered when the box is checked.
However, when using the android emulator, my cookie is not saved, despite "navigator.cookieEnabled" returning true when printed on the emulator for debugging.
I can post the relevant code if needed but my question is more general:
How can you store cookies in android programming using the web based languages that PhoneGap supports? Is it similar to normal web page javascript cookies? Is there a method other than "navigator.cookieEnabled" that will return whether or not cookies are enabled for the android device?
Thanks.
| For mobile application development, HTML5 has new features for local storage and session storage, much like cookies and sessions in web development.
Try the ***localStorage*** option; it is the approach that worked for me.
For storing values in local storage, i.e. stored in the browser permanently:
```
window.localStorage.setItem("key", "Value");
```
For getting values in local storage
```
window.localStorage.getItem("Key")
```
For manually removing values from local storage:
```
window.localStorage.removeItem("Key")
```
If you want to **manage session** you need to use ***sessionStorage*** option.
For storing values in session storage i.e values destroyed once mobile apps closed
```
window.sessionStorage.setItem("key", "Value");
```
For getting values in session storage
```
window.sessionStorage.getItem("Key")
```
For manually removing values from session storage:
```
window.sessionStorage.removeItem("Key")
```
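For the "remember my info" checkbox from the question, a small sketch (the element ids `loginForm`, `rememberMe`, and `username` are assumptions) could tie this together:
```
// On submit: remember the username only if the checkbox is ticked
document.getElementById("loginForm").addEventListener("submit", function () {
    if (document.getElementById("rememberMe").checked) {
        window.localStorage.setItem("username", document.getElementById("username").value);
    } else {
        window.localStorage.removeItem("username");
    }
});

// On startup: pre-fill the field if a value was remembered
var savedUser = window.localStorage.getItem("username");
if (savedUser) {
    document.getElementById("username").value = savedUser;
}
```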
|
Vision API: How to get JSON-output
I'm having trouble saving the output given by the Google Vision API. I'm using Python and testing with a demo image. I get the following error:
```
TypeError: [mid:...] + is not JSON serializable
```
Code that I executed:
```
import io
import os
import json
# Imports the Google Cloud client library
from google.cloud import vision
from google.cloud.vision import types
# Instantiates a client
vision_client = vision.ImageAnnotatorClient()
# The name of the image file to annotate
file_name = os.path.join(
os.path.dirname(__file__),
'demo-image.jpg') # Your image path from current directory
# Loads the image into memory
with io.open(file_name, 'rb') as image_file:
content = image_file.read()
image = types.Image(content=content)
# Performs label detection on the image file
response = vision_client.label_detection(image=image)
labels = response.label_annotations
print('Labels:')
for label in labels:
print(label.description, label.score, label.mid)
with open('labels.json', 'w') as fp:
json.dump(labels, fp)
```
the output appears on the screen, however I do not know exactly how I can save it. Anyone have any suggestions?
| FYI to anyone seeing this in the future, google-cloud-vision 2.0.0 has switched to using proto-plus which uses different serialization/deserialization code. A possible error you can get if upgrading to 2.0.0 without changing the code is:
```
object has no attribute 'DESCRIPTOR'
```
Using google-cloud-vision 2.0.0, protobuf 3.13.0, here is an example of how to serialize and de-serialize (example includes json and protobuf)
```
import io, json
from google.cloud import vision_v1
from google.cloud.vision_v1 import AnnotateImageResponse
with io.open('000048.jpg', 'rb') as image_file:
content = image_file.read()
image = vision_v1.Image(content=content)
client = vision_v1.ImageAnnotatorClient()
response = client.document_text_detection(image=image)
# serialize / deserialize proto (binary)
serialized_proto_plus = AnnotateImageResponse.serialize(response)
response = AnnotateImageResponse.deserialize(serialized_proto_plus)
print(response.full_text_annotation.text)
# serialize / deserialize json
response_json = AnnotateImageResponse.to_json(response)
response = json.loads(response_json)
print(response['fullTextAnnotation']['text'])
```
*Note 1*: proto-plus doesn't support converting to snake\_case names, which is supported in protobuf with `preserving_proto_field_name=True`. So currently there is no way around the field names being converted from `response['full_text_annotation']` to `response['fullTextAnnotation']`
There is an ~~open~~ closed feature request for this: [googleapis/proto-plus-python#109](https://github.com/googleapis/proto-plus-python/issues/109)
*Note 2*: The google vision api doesn't return an x coordinate if x=0. If x doesn't exist, the protobuf will default x=0. In python vision 1.0.0 using `MessageToJson()`, these x values weren't included in the json, but now with python vision 2.0.0 and `.To_Json()` these values are included as x:0
|
Finding the unbiased variance estimator in high dimensional spaces
The problem comes from linear regression. Assume the regression function is linear, i.e.
$$
f(X) = \beta\_0+\sum\_{j=1}^pX\_j\beta\_j
$$
.Given a set of training data $(x\_1, y\_1),\ldots,(x\_N,y\_N)$,we try to estimate the parameters $\beta$ by minimizing the residual sum of squares:
$$\text{RSS}(\beta) = \sum\_{i=1}^N\Big(y\_i-\beta\_0-\sum\_{j=1}^px\_{ij}\beta\_j\Big)^2$$
Here each $x\_i=(x\_{i1},\ldots,x\_{ip})^T$ is a vector in $\mathbb{R}^p$, $y\_i\in\mathbb{R}$, and $\beta=(\beta\_0,\ldots,\beta\_p)$. Let $\mathbf{X}$ denote the matrix $(x\_1,\ldots,x\_N)^T$ and $\mathbf{y}=(y\_1,\ldots,y\_N)$; assuming $\mathbf{X}$ has full column rank, it's easy to get
$$\hat{\beta}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$
Suppose $\mathbf{\hat{y}}=\mathbf{X}\hat{\beta}$ and $E(\mathbf{y})=\mathbf{X}\beta$ for some $\beta$. To pin down the sampling properties of $\hat{\beta}$, now assume that the observations $y\_i$ are uncorrelated and have constant variance $\sigma^2$, and that the $x\_i$ are fixed (non-random). It's easy to see that
$$\text{Var}(\hat\beta) = (\mathbf{X}^T\mathbf{X})^{-1}\sigma^2$$
,so we turn to estimating the $\sigma^2$, and now comes the question:
Let $\hat\sigma^2$ be defined as below:
$$\hat\sigma^2=\frac{1}{K}\sum\_{i=1}^N(y\_i-\hat{y\_i})^2$$
It is said that only when $K=N-p-1$ is $\hat\sigma^2$ an unbiased estimator of $\sigma^2$. But how does $p$ (the dimension of $x$) come into it?
| Well, it might be overkill, but I think this proof is OK. I will use basic linear algebra tools, starting with a slight change to your notation:
Let $X$ denote the matrix $(x\_1\; x\_2 \;...\; x\_N)^T$ where $x\_i=(1 \; x\_{i2} \; x\_{i3} \; ... \; x\_{ip})$. So now we have $p-1$ covariates. Our model is
$$
y=X\beta+\epsilon
$$
with $E(\epsilon)=0$ and $Var(\epsilon)=\sigma^2I$
Now, we can write
$$
\hat{y}=X\hat{\beta}=P\_Xy
$$
where $P\_X$ is the projection matrix onto the space spanned by the columns of $X$. That is,
$$
P\_X=X(X^TX)^{-1}X^T.
$$
Note that $P\_X$ is $n\times n$ matrix.
Now, note that we can actually write $\sum\_{i=1}^{N}(y\_i-\hat{y}\_i)^2$ as $||y-P\_Xy||^2 $ with $||\cdot||$ being the $\ell\_2$ norm of a vector.
Next,
$$
||y-P\_Xy||^2=||(I-P\_X)y||^2=y^T(I-P\_X)y
$$
where the last equality holds since $I-P\_X$ is a projection matrix.
Note also that $(I-P\_X)X=0$ and hence $y^T(I-P\_X)y=\epsilon^T(I-P\_X)\epsilon$ (using our model definition).
If we assume normality of the error, we could continue to derive the appropriate $\chi^2$ distribution.
But you wrote nothing regarding the distribution so I am now turning to the expectation
Recall that $I-P\_X$ is $N\times N$ projection matrix. We can use the eigendecomposition of a matrix to write
$$
(I-P\_X)=U\Lambda U^T
$$
where $UU^T=I$ and $\Lambda$ is a diagonal matrix with the diagonal being the eigenvalues of $I-P\_X$.
Since $I-P\_X$ is a projection matrix, all of its eigenvalues are equal to zero or one. How many "ones" are there? As many as the dimension of the subspace that $I-P\_X$ projects onto. Since this is the complement of the subspace spanned by $P\_X$, we get $n-p$ ones on the diagonal, and by using the fact that $UU^T=I$ we get the final answer
which is
$$
E(||(I-P\_X)y||^2)= E(\epsilon^TU\Lambda U^T\epsilon)=\sum\_{j=1}^n\lambda\_j\sigma^2=(n-p)\sigma^2
$$
and as written before there are $n-p$ $\lambda\_j$'s equal to one and $p$ $\lambda\_j$'s equal to zero ($\lambda\_j,\ j=1,\ldots,n$ are the eigenvalues of $I-P\_X$).
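Putting this back into the estimator from the question (note that here $\mathbf{X}$ has $p$ columns including the intercept, whereas the question counts an intercept plus $p$ covariates):
$$
E(\hat\sigma^2)=\frac{1}{K}E\Big(\sum\_{i=1}^N(y\_i-\hat{y}\_i)^2\Big)=\frac{(n-p)\,\sigma^2}{K},
$$
so $\hat\sigma^2$ is unbiased exactly when $K$ equals the number of observations minus the number of columns of $\mathbf{X}$; in the question's parameterization ($p$ covariates plus an intercept, i.e. $p+1$ columns) that is $K=N-p-1$.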
Some relevant Wikipedia pages for the linear algebra:
<http://en.wikipedia.org/wiki/Projection_(linear_algebra)>
<http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix>
<http://en.wikipedia.org/wiki/Symmetric_matrix>
|
How to wait for end of a coroutine
I have some code below. `delay(3000)` is just a stand-in for a long loop. I’m expecting that after the loop completes, `println(res)` will print “Some String” and then enable `button`. But in reality `println(res)` prints an empty string and `button` becomes enabled as soon as I click it.
My question is: how can I wait for the end of a coroutine, and only after its completion run `println(res)` and `button.isEnabled = true`?
```
private var res: String = ""
private suspend fun test(): String {
delay(3000) // delay - just replacement for long loop
return "Some String" // String received after loop
}
fun onClick(view: View) {
res = ""
button.isEnabled = false
GlobalScope.launch {
res = withContext(Dispatchers.Default) {
test()
}
}
println(res) // 1. trying to get string received after loop, but not working
button.isEnabled = true // 2. button must be enabled after loop in cycle, but it's not waiting till end of loop
}
```
| You can try something like this:
```
suspend fun saveInDb() {
val value = GlobalScope.async {
delay(1000)
println("thread running on [${Thread.currentThread().name}]")
10
}
println("value = ${value.await()} thread running on [${Thread.currentThread().name}]")
}
```
`await` will wait for the coroutine to finish and then run the code below it
```
fun onClick(view: View) {
res = ""
button.isEnabled = false
GlobalScope.launch(Dispatchers.Main){ // launches coroutine in main thread
updateUi()
}
}
suspend fun updateUi(){
val value = GlobalScope.async { // creates worker thread
res = withContext(Dispatchers.Default) {
test()
}
}
println(value.await()) //waits for workerthread to finish
button.isEnabled = true //runs on ui thread as calling function is on Dispatchers.main
}
```
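Equivalently, you can keep everything inside a single coroutine launched on the main dispatcher — a minimal sketch of the original `onClick` using the same names as in the question:
```
fun onClick(view: View) {
    res = ""
    button.isEnabled = false
    GlobalScope.launch(Dispatchers.Main) {                  // UI-safe coroutine
        res = withContext(Dispatchers.Default) { test() }   // suspends until test() finishes
        println(res)                                        // prints "Some String"
        button.isEnabled = true                             // enabled only after the work completed
    }
}
```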
|
Counting the number of times a character occurs in a string in C
I'm new to C, and I'm working on my own `explode` like function. I'm trying to count how many times a specified character occurs in a string.
```
int count_chars(char * string, char * chr)
{
int count = 0;
int i;
for (i = 0; i < sizeof(string); i++)
{
if (string[i] == chr)
{
count++;
}
}
return count;
}
```
It just returns 0 every time. Can anyone explain why, please? :)
| Your code is hopelessly flawed. Here's how it *should* look like:
```
int count_chars(const char* string, char ch)
{
int count = 0;
int i;
// We are computing the length once at this point
// because it is a relatively lengthy operation,
// and we don't want to have to compute it anew
// every time the i < length condition is checked.
int length = strlen(string);
for (i = 0; i < length; i++)
{
if (string[i] == ch)
{
count++;
}
}
return count;
}
```
**[See this code run on example input](http://www.ideone.com/Nd5Ic)**.
Here's what you are doing wrong:
1. Since you want to find a *character*, the second parameter should be a character (and not a `char*`), This has implications later (see #3).
2. `sizeof(string)` does not give you the length of the string. It gives the size (in bytes) of a pointer in your architecture, which is *a constant number* (e.g. 4 on 32-bit systems).
3. You are comparing some value which is *not* a memory address to the memory address `chr` points to. This is comparing apples and oranges, and will always return `false`, so the `if` will never succeed.
4. What you want to do instead is compare *a character* (`string[i]`) to the second parameter of the function (this is the reason why that one is also a `char`).
## A "better" version of the above
Commenters below have correctly identified portions of the original answer which are not the usual way to do things in C, can result in slow code, and possibly in bugs under (admittedly extraordinary) circumstances.
Since I believe that "the" correct implementation of `count_chars` is probably too involved for someone who is taking their first steps in C, I'll just append it here and leave the initial answer almost intact.
```
int count_chars(const char* string, char ch)
{
int count = 0;
for(; *string; count += (*string++ == ch)) ;
return count;
}
```
*Note: I have intentionally written the loop this way to make the point that at some stage you have to draw the line between what is possible and what is preferable.*
**[See this code run on example input](http://www.ideone.com/2uNsa)**.
|
Json.NET does not preserve primitive type information in lists or dictionaries of objects. Is there a workaround?
The following example illustrates a fundamental flaw in Json.NET's type handling:
```
List<object> items = new List<object>() {Guid.NewGuid(),DateTime.Now};
var settings = new JsonSerializerSettings() { TypeNameHandling=TypeNameHandling.All };
var json = JsonConvert.SerializeObject(items, settings);
```
resulting in the following JSON:
```
{"$type":"System.Collections.Generic.List`1[[System.Object, mscorlib]], mscorlib","$values":["9d7aa4d3-a340-4cee-baa8-6af0582b8acd","2014-07-28T21:03:17.1287029-04:00"]}
```
As you can see the list items have lost their type information. Deserializing that same JSON will result in a list containing just strings.
This issue was previously reported on CodePlex and perfunctorily closed, stating that including the type information would make the JSON too messy. I am surprised we aren't given a separate option to include primitive type information for such scenarios, as round-trip consistency is broken.
<https://json.codeplex.com/workitem/23833>
I would expect the data to come back with the same type information that it left with.
Does anybody have any suggestions or workarounds to remedy this undesired behavior?
Thanks,
Chris
| Here is a solution using a custom `JsonConverter`:
```
public sealed class PrimitiveJsonConverter : JsonConverter
{
public PrimitiveJsonConverter()
{
}
public override bool CanRead
{
get
{
return false;
}
}
public override bool CanConvert(Type objectType)
{
return objectType.IsPrimitive;
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
throw new NotImplementedException();
}
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
switch (serializer.TypeNameHandling)
{
case TypeNameHandling.All:
writer.WriteStartObject();
writer.WritePropertyName("$type", false);
switch (serializer.TypeNameAssemblyFormat)
{
case FormatterAssemblyStyle.Full:
writer.WriteValue(value.GetType().AssemblyQualifiedName);
break;
default:
writer.WriteValue(value.GetType().FullName);
break;
}
writer.WritePropertyName("$value", false);
writer.WriteValue(value);
writer.WriteEndObject();
break;
default:
writer.WriteValue(value);
break;
}
}
}
```
Here is how to use it:
```
JsonSerializerSettings settings = new JsonSerializerSettings()
{
TypeNameHandling = TypeNameHandling.All,
};
settings.Converters.Insert(0, new PrimitiveJsonConverter());
return JsonConvert.SerializeObject(myDotNetObject, settings);
```
*I'm currently using this solution to serialize an `IDictionary<string, object>` instance that can contain primitives.*
|
How do I publish Gradle plugins to Artifactory?
I am working with this example Gradle Plugin project:
<https://github.com/AlainODea/gradle-com.example.hello-plugin>
When I run **./gradlew publishToMavenLocal** it creates these files in M2\_HOME:
1. com/hello/com.example.hello.gradle.plugin/maven-metadata-local.xml
2. com/hello/com.example.hello.gradle.plugin/0.1-SNAPSHOT/com.example.hello.gradle.plugin-0.1-SNAPSHOT.pom
3. com/hello/com.example.hello.gradle.plugin/0.1-SNAPSHOT/maven-metadata-local.xml
4. com/hello/gradle-com.example.hello-plugin/maven-metadata-local.xml
5. com/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/gradle-com.example.hello-plugin-0.1-SNAPSHOT.jar
6. com/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/gradle-com.example.hello-plugin-0.1-SNAPSHOT.pom
7. com/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/maven-metadata-local.xml
When I run **./gradlew artifactoryPublish** it logs:
```
Deploying artifact: https://artifactory.example.com/artifactory/libs-release-local-maven/com/example/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/gradle-com.example.hello-plugin-0.1-SNAPSHOT.jar
Deploying artifact: https://artifactory.example.com/artifactory/libs-release-local-maven/com/example/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/gradle-com.example.hello-plugin-0.1-SNAPSHOT.pom
Deploying build descriptor to: https://artifactory.example.com/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under https://artifactory.example.com/artifactory/webapp/builds/gradle-com.example.hello-plugin/1234567890123
```
Attempting to load the plug-in from another build.gradle:
```
plugins {
id 'java'
id 'com.example.hello' version '0.1-SNAPSHOT'
}
```
With settings.gradle:
```
pluginManagement {
repositories {
maven {
url 'https://artifactory.example.com/artifactory/libs-release-local-maven/'
}
}
}
```
Results in this error:
```
Plugin [id: 'com.example', version: '0.1-SNAPSHOT'] was not found in any of the following sources:
- Gradle Core Plugins (plugin is not in 'org.gradle' namespace)
- Plugin Repositories (could not resolve plugin artifact 'com.example.hello:com.example.hello.gradle.plugin:0.1-SNAPSHOT')
Searched in the following repositories:
maven(https://artifactory.example.com/artifactory/libs-release-local-maven/)
Gradle Central Plugin Repository
```
I'd like to get all of the artifacts that publishToMavenLocal creates to be published to Artifactory when I run artifactoryPublish. I am open to alternatives to artifactoryPublish if it is the wrong tool.
How do I publish Gradle plugins to Artifactory?
| Since you have the **maven-publish** plugin on, the **java-gradle-plugin** already declares publications for you, so you can remove [this explicit publications block](https://github.com/AlainODea/gradle-com.example.hello-plugin/blob/d7a21432311c2d07a29a590638eed71fe3ef50a5/build.gradle.kts#L74-L80) from your build:
```
publishing {
publications {
create<MavenPublication>("mavenJava") {
from(components["java"])
}
}
}
```
You can then reference all automatically created publications in [your artifactory publish defaults block](https://github.com/AlainODea/gradle-com.example.hello-plugin/blob/d7a21432311c2d07a29a590638eed71fe3ef50a5/build.gradle.kts#L88) as follows:
```
invokeMethod("publications", publishing.publications.names.toTypedArray())
```
Why not just **publishing.publications.names**?:
- `publishing.publications.names` has type `SortedSet<String>`
- `ArtifactoryTask.publications()` expects an `Object...`, which is really an `Object[]`
- Calling `ArtifactoryTask.publications()` with a `SortedSet<String>` will attempt to add the entire set as if it were a single publication
- So you need `toTypedArray()` to turn it into an `Object[]` so that the varargs call works
Here's the complete, corrected artifactory block:
```
artifactory {
setProperty("contextUrl", "https://artifactory.verafin.com/artifactory")
publish(delegateClosureOf<PublisherConfig> {
repository(delegateClosureOf<GroovyObject> {
setProperty("repoKey", "libs-release-local-maven")
})
defaults(delegateClosureOf<GroovyObject> {
invokeMethod("publications", publishing.publications.names.toTypedArray())
})
})
}
```
Here's a complete adaptation of your build.gradle.kts solving the problem:
```
import groovy.lang.GroovyObject
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
import org.jfrog.gradle.plugin.artifactory.dsl.PublisherConfig
buildscript {
repositories {
jcenter()
}
}
plugins {
`java-gradle-plugin`
`maven-publish`
`kotlin-dsl`
id("com.jfrog.artifactory") version "4.9.0"
kotlin("jvm") version "1.3.11"
id("io.spring.dependency-management") version "1.0.6.RELEASE"
}
group = "com.example.hello"
version = "0.1-SNAPSHOT"
gradlePlugin {
plugins {
create("helloPlugin") {
id = "com.example.hello"
implementationClass = "com.example.HelloPlugin"
}
}
}
repositories {
mavenCentral()
}
dependencyManagement {
imports {
mavenBom("org.junit:junit-bom:5.3.2")
}
}
dependencies {
implementation(kotlin("stdlib-jdk8"))
testImplementation(kotlin("test"))
testImplementation(kotlin("test-junit5"))
testImplementation("org.junit:junit-bom:latest.release")
testImplementation("org.junit.jupiter:junit-jupiter-api")
testImplementation("com.natpryce:hamkrest:1.7.0.0")
testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine")
}
tasks {
withType<JavaExec> {
jvmArgs = listOf("-noverify", "-XX:TieredStopAtLevel=1")
}
withType<KotlinCompile> {
val javaVersion = JavaVersion.VERSION_1_8.toString()
sourceCompatibility = javaVersion
targetCompatibility = javaVersion
kotlinOptions {
apiVersion = "1.3"
javaParameters = true
jvmTarget = javaVersion
languageVersion = "1.3"
}
}
withType<Test> {
@Suppress("UnstableApiUsage")
useJUnitPlatform()
}
}
artifactory {
publish(delegateClosureOf<PublisherConfig> {
repository(delegateClosureOf<GroovyObject> {
setProperty("repoKey", "libs-release-local-maven")
})
defaults(delegateClosureOf<GroovyObject> {
invokeMethod("publications", publishing.publications.names.toTypedArray())
})
})
}
```
Here's a log showing the successful deployment of the plugin artifact to Artifactory:
```
Deploying artifact: https://artifactory.example.com/artifactory/libs-release-local-maven/com/example/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/gradle-com.example.hello-plugin-0.1-SNAPSHOT.jar
Deploying artifact: https://artifactory.example.com/artifactory/libs-release-local-maven/com/example/hello/gradle-com.example.hello-plugin/0.1-SNAPSHOT/gradle-com.example.hello-plugin-0.1-SNAPSHOT.pom
Deploying artifact: https://artifactory.example.com/artifactory/libs-release-local-maven/com/example/hello/com.example.hello.gradle.plugin/0.1-SNAPSHOT/com.example.hello.gradle.plugin-0.1-SNAPSHOT.pom
Deploying build descriptor to: https://artifactory.example.com/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under https://artifactory.example.com/artifactory/webapp/builds/gradle-com.example.hello-plugin/1234567890123
```
|
Can get image from PHAsset of library
I am using [QBImagePickerController](https://github.com/questbeat/QBImagePicker) for selecting multiple images at a time.
So, here is my whole code
I am presenting `imagepickerController` with this code
```
let imagePickerController = QBImagePickerController()
imagePickerController.delegate = self
imagePickerController.allowsMultipleSelection = true
imagePickerController.mediaType = .Image
self.presentViewController(imagePickerController, animated:
true, completion: nil)
```
So when I choose multiple images and click Done, this method is called:
```
func qb_imagePickerController(imagePickerController: QBImagePickerController!, didFinishPickingAssets assets: [AnyObject]!) {
for asset in assets {
print(asset.fileName)
}
self.dismissViewControllerAnimated(true, completion: nil)
}
```
For example, if I select one image, it prints this:
```
<PHAsset: 0x7fc55d954500> 6006CE57-81FE-4DC0-8C52-5DB43CE7638D/L0/001 mediaType=1/0, sourceType=1, (1920x1080), creationDate=2016-05-26 09:15:34 +0000, location=0, hidden=0, favorite=0
```
From this, how can I get the image and set it into my collection view?
I can get the fileName from it, but that is not enough to set the image.
I tried filePathURL, fileURL and absoluteURL, but nothing worked; it crashed.
So please help me with it.
Thank you
| You need to use `requestImageForAsset` to get `UIImage`.
You can get the image like this:
```
func qb_imagePickerController(imagePickerController: QBImagePickerController!, didFinishPickingAssets assets: [AnyObject]!) {
let requestOptions = PHImageRequestOptions()
requestOptions.resizeMode = PHImageRequestOptionsResizeMode.Exact
requestOptions.deliveryMode = PHImageRequestOptionsDeliveryMode.HighQualityFormat
// this one is key
requestOptions.synchronous = true
for asset in assets
{
if (asset.mediaType == PHAssetMediaType.Image)
{
PHImageManager.defaultManager().requestImageForAsset(asset as! PHAsset, targetSize: PHImageManagerMaximumSize, contentMode: PHImageContentMode.Default, options: requestOptions, resultHandler: { (pickedImage, info) in
self.yourImageview.image = pickedImage // you can get image like this way
})
}
}
imagePickerController.dismissViewControllerAnimated(true, completion: nil)
}
```
|
String vs char[]
I have some slides from IBM named : ["From Java Code to Java Heap: Understanding the Memory Usage of Your Application"](http://www.ibm.com/developerworks/library/j-codetoheap/#N101DC), that says, when we use `String` instead of `char[]`, there is
**Maximum overhead would be 24:1 for a single character!**
but I am not able to understand what overhead is referred to here. Can anybody please help?
**Source :**
![enter image description here](https://i.stack.imgur.com/WhEBr.png)
| This figure relates to JDK 6, 32-bit.
## JDK 6
In the pre-Java-7 world, strings were implemented as a pointer to a region of a `char[]` array:
```
// "8 (4)" reads "8 bytes for x64, 4 bytes for x32"
class String{ //8 (4) house keeping + 8 (4) class pointer
char[] buf; //12 (8) bytes + 2 bytes per char -> 24 (16) aligned
int offset; //4 bytes -> three int
int length; //4 bytes -> fields align to
int hash; //4 bytes -> 16 (12) bytes
}
```
So I counted:
```
36 bytes per new String("a") for JDK 6 x32 <-- the overhead from the article
56 bytes per new String("a") for JDK 6 x64.
```
## JDK 7
Just to compare, in JDK 7+ `String` is a class which holds a `char[]` buffer and a `hash` field only.
```
class String{ //8 (4) + 8 (4) bytes -> 16 (8) aligned
char[] buf; //12 (8) bytes + 2 bytes per char -> 24 (16) aligned
int hash; //4 bytes -> 8 (4) aligned
}
```
So it's:
```
28 bytes per String for JDK 7 x32
48 bytes per String for JDK 7 x64.
```
**UPDATE**
For the `3.75:1` ratio, see @Andrey's explanation below. This proportion approaches 1 as the length of the string grows.
Useful links:
- [Memory usage of Java Strings and string-related objects](http://www.javamex.com/tutorials/memory/string_memory_usage.shtml).
- [Calculate memory of a Map Entry](https://stackoverflow.com/questions/20458456/calculate-memory-of-a-map-entry/20459081#20459081) - a simple technique to get a size of an object.
|
R converting integer column to 3 factor columns based on digits
I have a column of int's like this:
```
idNums
2
101
34
25
8
...
```
I need to convert them to 3 factor columns like this:
```
digit1 digit2 digit3
0 0 2
1 0 1
0 3 4
0 2 5
0 0 8
... ... ...
```
Any suggestions?
| Here's a fun solution using the modular arithmetic operators `%%` and `%/%`:
```
d <- c(2, 101, 34, 25, 8)
res <- data.frame(digit1 = d %/% 100,
digit2 = d %% 100 %/% 10,
digit3 = d %% 10)
# digit1 digit2 digit3
# 1 0 0 2
# 2 1 0 1
# 3 0 3 4
# 4 0 2 5
# 5 0 0 8
```
Note that it has the minor -- but nice -- side benefit of returning **numeric** values for each of the columns. If you do, however, want **factor** columns instead, just follow up with this command:
```
res[] <- lapply(res, as.factor)
all(sapply(res, class)=="factor")
#[1] TRUE
```
|
Uncaught TypeError: Cannot read property 'aDataSort' of undefined
I am working on pagination and I am using the [DataTables](https://www.datatables.net/) plugin.
It works on some tables, but on some tables it gives this error:
>
> Uncaught TypeError: Cannot read property 'aDataSort' of undefined
>
>
>
my page script looks like:
```
$(document).ready(function() {
$('.datatable').dataTable( {
"scrollY": "200px",
"scrollCollapse": true,
"info": true,
"paging": true
} );
} );
```
//HTML code
```
<table class="table table-striped table-bordered datatable">
<thead>
<tr>
<th><?php echo lang('date_label')?></th>
<th><?php echo lang('paid_label')?></th>
<th><?php echo lang('comments_label');?></th>
</tr>
</thead>
<tbody>
<?php foreach ($payments as $pay): ?>
<tr>
<td><?php echo dateformat($pay['time_stamp'], TRUE);?></td>
<td><?php echo format_price($pay['amount']);?></td>
<td><?php echo $pay['note'];?></td>
</tr>
<?php endforeach?>
</tbody>
</table>
```
I have no idea where the problem comes from. I know this is a very common error, but I searched and found nothing that matches my problem.
Does anyone know the solution?
| Use something like the following in your code to disable sorting on `DataTables` (adapted from a project of mine which uses the latest `DataTables`):
```
$(document).ready(function() {
$('.datatable').dataTable( {
'bSort': false,
'aoColumns': [
{ sWidth: "45%", bSearchable: false, bSortable: false },
{ sWidth: "45%", bSearchable: false, bSortable: false },
{ sWidth: "10%", bSearchable: false, bSortable: false }
],
"scrollY": "200px",
"scrollCollapse": true,
"info": true,
"paging": true
} );
} );
```
The `aoColumns` array describes the width of each column and its `sortable` properties; adjust it as needed for the number of columns in your own table.
|
The difference of parameters between “glm” and “optim” in R
I’d like to know the difference of parameters(intercept, slopes) between “glm” and “optim” in R.
I think those predictions would be fine, but I can’t understand why those parameters are different. If it’s a misinterpretation, please give me some advice.
glm; -23.36, 46.72, 46.72
optim; -73.99506, 330.09424, 122.50453
```
#data
x1<-c(0,0,1,1)
x2<-c(0,1,0,1)
y<-c(0,1,1,1)
#glm
model<-glm(y~x1+x2,family=binomial(link=logit))
summary(model)
# Estimate Std. Error z value Pr(>|z|)
#(Intercept) -23.36 71664.47 0 1
#x1 46.72 101348.81 0 1
#x2 46.72 101348.81 0 1
round(fitted(model))
#0 1 1 1
#optim
f<-function(par){
eta<-par[1]+par[2]*x1+par[3]*x2
p<-1/(1+exp(-eta))
-sum(log(choose(1,y))+y*log(p)+(1-y)*log(1-p),na.rm=TRUE)
}
(optim<-optim(c(1,1,1),f))
$par
#-73.99506 330.09424 122.50453
round(1/(1+exp(-(optim$par[1]+optim$par[2]*x1+optim$par[3]*x2))))
#0 1 1 1
```
| I'm going to answer this in a more general context. You have set up a logistic regression with *complete separation* (you can read about this elsewhere); there is a linear combination of parameters that perfectly separates all-zero from all-one outcomes, which means that the maximum likelihood estimates are actually *infinite* in this case. This has several consequences:
- you will have seen a warning `glm.fit: fitted probabilities numerically 0 or 1 occurred` (this will usually, but not always, happen in this case)
- the standard deviations, which are based on the local curvature and depend on the assumption that the log-likelihood surface is quadratic, are ridiculous
- since the log-likelihood surface becomes flatter and flatter as you go toward extreme values of the parameter (slope approaching zero in the infinite limit), different optimizers will get essentially arbitrarily different answers, depending on the point where the surface happens to be flat enough for the algorithm to conclude that it has gotten sufficiently close to an optimum (= zero gradient). Not only different optimization methods (IRLS as in `glm` vs. the different `method` options in `optim`), but even the same optimization method on different operating systems, or even possibly built with different compilers, will give different answers.
- the `brglm2` package has methods for diagnosing complete separation (`?brglm2::detect_separation`), as well as methods for computing *bias-reduced* or *penalized* fits that will avoid the problem (by changing the objective function in a sensible way).
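For example, a rough sketch with the data from the question (this assumes the `brglm2` package is installed; its interface has changed between versions, so treat the method names as illustrative):
```
library(brglm2)

x1 <- c(0, 0, 1, 1)
x2 <- c(0, 1, 0, 1)
y  <- c(0, 1, 1, 1)

## flag the infinite estimates caused by complete separation
glm(y ~ x1 + x2, family = binomial(link = logit),
    method = "detect_separation")

## bias-reduced fit: finite estimates despite the separation
fit_br <- glm(y ~ x1 + x2, family = binomial(link = logit),
              method = "brglmFit")
coef(fit_br)
```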
|
Is there an equivalent in ggplot to the varwidth option in plot?
I am creating boxplots using ggplot and would like to represent the sample size contributing to each box. In the base `plot` function there is the `varwidth` option. Does it have an equivalent in ggplot?
For example, in base plot
```
data <- data.frame(rbind(cbind(rnorm(700, 0,10), rep("1",700)),
cbind(rnorm(50, 0,10), rep("2",50))))
data[ ,1] <- as.numeric(as.character(data[,1]))
plot(data[,1] ~ as.factor(data[,2]), varwidth = TRUE)
```
![enter image description here](https://i.stack.imgur.com/d4NDL.jpg)
| It's not elegant, but you can do it like this:
```
data <- data.frame(rbind(cbind(rnorm(700, 0,10), rep("1",700)),
cbind(rnorm(50, 0,10), rep("2",50))))
data[ ,1] <- as.numeric(as.character(data[,1]))
w <- sqrt(table(data$X2)/nrow(data))
ggplot(NULL, aes(factor(X2), X1)) +
geom_boxplot(width = w[1], data = subset(data, X2 == 1)) +
geom_boxplot(width = w[2], data = subset(data, X2 == 2))
```
![enter image description here](https://i.stack.imgur.com/KtsaO.png)
If you have several levels for `X2`, you can avoid hardcoding all the levels:
```
ggplot(NULL, aes(factor(X2), X1)) +
llply(unique(data$X2), function(i) geom_boxplot(width = w[i], data = subset(data, X2 == i)))
```
Also you can post a feature request:
<https://github.com/hadley/ggplot2/issues>
|
Ruby Sqlite3 installation sqlite3\_libversion\_number() macOS Sierra
I'm trying to install the Metasploit framework (unimportant) and bundler is attempting to install sqlite3, which is where it fails consistently. Sqlite3 is installed (executing sqlite3 at the command line brings me into the environment) and is linked using `brew link sqlite3` (and adding the --force, for some reason) but `bundler install` fails each time with this error:
```
sudo gem install sqlite3
Building native extensions. This could take a while...
ERROR: Error installing sqlite3:
ERROR: Failed to build gem native extension.
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby extconf.rb
Error: Running Homebrew as root is extremely dangerous and no longer supported.
As Homebrew does not drop privileges on installation you would be giving all
build scripts full access to your system.
checking for sqlite3.h... yes
checking for pthread_create() in -lpthread... yes
checking for sqlite3_libversion_number() in -lsqlite3... no
sqlite3 is missing. Try 'brew install sqlite3',
'yum install sqlite-devel' or 'apt-get install libsqlite3-dev'
and check your shared library search path (the
location where your sqlite3 shared library is located).
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby
--with-sqlite3-config
--without-sqlite3-config
--with-pkg-config
--without-pkg-config
--with-sqlite3-dir
--without-sqlite3-dir
--with-sqlite3-include
--without-sqlite3-include=${sqlite3-dir}/include
--with-sqlite3-lib
--without-sqlite3-lib=${sqlite3-dir}/
--with-pthreadlib
--without-pthreadlib
--with-sqlite3lib
--without-sqlite3lib
Gem files will remain installed in /Library/Ruby/Gems/2.0.0/gems/sqlite3-1.3.12 for inspection.
Results logged to /Library/Ruby/Gems/2.0.0/gems/sqlite3-1.3.12/ext/sqlite3/gem_make.out
```
| I finally managed to solve this by **specifying the built-in Mac OS X sqlite library directory** on macOS Sierra 10.12.5 (16F73):
```
$ whereis sqlite3
/usr/bin/sqlite3
# if binary is in /usr/bin then library is typically in /usr/lib
$ gem install sqlite3 -- --with-sqlite3-lib=/usr/lib
Building native extensions with: '--with-sqlite3-lib=/usr/lib'
This could take a while...
Successfully installed sqlite3-1.3.13
Parsing documentation for sqlite3-1.3.13
Done installing documentation for sqlite3 after 0 seconds
1 gem installed
```
I tried specifying the [Homebrew](https://brew.sh/) library directory but for some reason it didn't work:
```
$ brew ls --verbose sqlite3
/usr/local/Cellar/sqlite/3.19.3/.brew/sqlite.rb
/usr/local/Cellar/sqlite/3.19.3/bin/sqlite3
/usr/local/Cellar/sqlite/3.19.3/include/msvc.h
/usr/local/Cellar/sqlite/3.19.3/include/sqlite3.h
/usr/local/Cellar/sqlite/3.19.3/include/sqlite3ext.h
/usr/local/Cellar/sqlite/3.19.3/INSTALL_RECEIPT.json
/usr/local/Cellar/sqlite/3.19.3/lib/libsqlite3.0.dylib
/usr/local/Cellar/sqlite/3.19.3/lib/libsqlite3.a
/usr/local/Cellar/sqlite/3.19.3/lib/libsqlite3.dylib
/usr/local/Cellar/sqlite/3.19.3/lib/pkgconfig/sqlite3.pc
/usr/local/Cellar/sqlite/3.19.3/README.txt
/usr/local/Cellar/sqlite/3.19.3/share/man/man1/sqlite3.1
$ gem install sqlite3 -- --with-sqlite3-lib=/usr/local/Cellar/sqlite/3.19.3/lib
This could take a while...
ERROR: Error installing sqlite3:
ERROR: Failed to build gem native extension.
...
```
If someone knows how to specify the Homebrew library directory, please let me know because that would provide a little more control over installation (supposedly [MacPorts](https://www.macports.org/) works but I no longer use it).
---
For anyone curious, here's the full command to install Ruby's [Sequel](https://github.com/jeremyevans/sequel):
```
gem install sequel mysql sqlite3 -- --with-sqlite3-lib=/usr/lib
```
And how to convert a [Laravel Homestead](https://laravel.com/docs/homestead) MySQL database listening on host port 3306 to SQLite from [my comment on the question](https://stackoverflow.com/questions/41370565/ruby-sqlite3-installation-sqlite3-libversion-number-macos-sierra#comment76437960_41370565):
```
sequel mysql://homestead:secret@192.168.10.10:3306/my_database -C sqlite://my_database.sqlite
```
|
Creating instance of class inside class
I am trying to create an instance of a class inside another class. I have declared two classes. First:
```
class Student{
public:
Student(string m,int g){
name=m;
age=g;
}
string getName(){
return name;
}
int getAge(){
return age;
}
private:
string name;
int age;
};
```
And second, where I want to create an instance of Student:
```
class Class{
public:
Class(string n){
name = n;
};
string studentName(){
return Martin.getName();
}
private:
string name;
Student Martin("Martin",10);
Student Roxy("Roxy",15);
};
```
I keep getting these errors:
>
> '((Class\*)this)->Class::Martin' does not have class type
>
>
> expected identifier before string constant|
>
>
>
The Student class was defined before Class, so it shouldn't have a problem accessing it.
What causes this behavior? How can I fix it?
| Member initialization should be done in your constructor's member initializer list:
```
Class(string n)
: Martin("Martin",10)
, Roxy("Roxy",15)
{
name = n;
};
private:
string name;
Student Martin;
Student Roxy;
```
Some more information on member initialization can be found here:
<http://en.cppreference.com/w/cpp/language/initializer_list>
And a more tutorial like explanation might also be useful to you:
<http://www.learncpp.com/cpp-tutorial/8-5a-constructor-member-initializer-lists/>
As James Root pointed out in the comments, instead of:
```
private:
string name;
Student Martin("Martin",10);
Student Roxy("Roxy",15);
```
you can write
```
private:
string name;
Student Martin{"Martin",10};
Student Roxy{"Roxy",15};
```
But make sure you compile your code with the C++11 standard. On older compilers you might need to add `-std=c++11` to your compile command, like: *g++ -o foo -std=c++11 main.cpp*
|
Why does this CSS margin-top style not work?
I tried to add `margin` values on a `div` inside another `div`. Everything works fine except the top value; it seems to be ignored. But why?
**What I expected:**
[![What I expected with margin:50px 50px 50px 50px;](https://i.stack.imgur.com/ZEuMt.png)](https://i.stack.imgur.com/ZEuMt.png)
**What I get:**
[![What I get with margin:50px 50px 50px 50px;](https://i.stack.imgur.com/tmtMw.png)](https://i.stack.imgur.com/tmtMw.png)
**Code:**
```
#outer {
width: 500px;
height: 200px;
background: #FFCCCC;
margin: 50px auto 0 auto;
display: block;
}
#inner {
background: #FFCC33;
margin: 50px 50px 50px 50px;
padding: 10px;
display: block;
}
```
```
<div id="outer">
<div id="inner">
Hello world!
</div>
</div>
```
[W3Schools](https://www.w3schools.com/css/css_margin.asp) has no explanation of why `margin` behaves this way.
| You're actually seeing the top margin of the `#inner` element [collapse](http://www.w3.org/TR/CSS21/box.html#collapsing-margins) into the top edge of the `#outer` element, leaving only the `#outer` margin intact (albeit not shown in your images). The top edges of both boxes are flush against each other because their margins are equal.
Here are the relevant points from the W3C spec:
>
> ### 8.3.1 Collapsing margins
>
>
> In CSS, the adjoining margins of two or more boxes (which might or might not be siblings) can combine to form a single margin. Margins that combine this way are said to *collapse*, and the resulting combined margin is called a *collapsed margin*.
>
>
> Adjoining vertical margins collapse *[...]*
>
>
>
>
> Two margins are *adjoining* if and only if:
>
>
> - both belong to in-flow block-level boxes that participate in the same block formatting context
> - no line boxes, no clearance, no padding and no border separate them
> - both belong to vertically-adjacent box edges, i.e. form one of the following pairs:
> - top margin of a box and top margin of its first in-flow child
>
>
>
**You can do any of the following to prevent the margin from collapsing:**
>
> - Float either of your `div` elements
> - Make either of your `div` elements [inline blocks](https://stackoverflow.com/a/9519896/106224)
> - Set [`overflow` of `#outer` to `auto`](https://stackoverflow.com/a/9519937/106224) (or any value other than `visible`)
>
>
>
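For example, a minimal sketch of the third option applied to the markup in the question:
```
#outer {
    width: 500px;
    height: 200px;
    background: #FFCCCC;
    margin: 50px auto 0 auto;
    overflow: auto; /* creates a new block formatting context,
                       so #inner's top margin no longer collapses */
}
```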
**The reason the above options prevent the margin from collapsing is because:**
>
> - Margins between a floated box and any other box do not collapse (not even between a float and its in-flow children).
> - Margins of elements that establish new block formatting contexts (such as floats and elements with 'overflow' other than 'visible') do not collapse with their in-flow children.
> - Margins of inline-block boxes do not collapse (not even with their in-flow children).
>
>
>
**The left and right margins behave as you expect because:**
>
> Horizontal margins never collapse.
>
>
>
|
Inject an integer with Ninject
I have the following class
```
public class Foo
{
public Foo(int max=2000){...}
}
```
and I want to use Ninject to inject a constant value into Foo. I have tried this:
```
Bind<Foo>().ToSelf().WithConstructorArgument("max", 1000);
```
but I get the following error when I try to use `_ninject.Get<Foo>`:
```
Error activating int
No matching bindings are available, and the type is not self-bindable.
Activation path:
3) Injection of dependency int into parameter max of constructor of type Foo
```
| The below works for me:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Ninject;
using Ninject.Activation;
using Ninject.Syntax;
public class Foo
{
public int TestProperty { get; set; }
public Foo(int max = 2000)
{
TestProperty = max;
}
}
public class Program
{
public static void Main(string [] arg)
{
using (IKernel kernel = new StandardKernel())
{
kernel.Bind<Foo>().ToSelf().WithConstructorArgument("max", 1000);
var foo = kernel.Get<Foo>();
Console.WriteLine(foo.TestProperty); // 1000
}
}
}
```
|
Should one perform a calculation in a DTO model or in the destination entity model?
I'm currently creating various entities in ASP.NET Core 2.2 with accompanying DTOs for a Web API. The client application submits a DTO object to the relevant controller action. There, using AutoMapper, it is mapped from the DTO object to an entity object. The resulting entity object is saved to an Entity Framework repository, which at the moment is a Microsoft SQL database. For brevity, let's assume that time zones are irrelevant in this case.
I was just wondering which approach would be more appropriate, or whether the calculation should live in neither the DTO nor the entity but perhaps within the controller action.
*Note: The actual code is more complicated and involves various calculations for various properties; I have simply chosen a simple case to illustrate my question.*
**Approach #1**
```
// Entity
public class EventTimes
{
public DateTime Start { get; set; }
public DateTime End { get; set; }
public decimal TotalHours => (decimal)(End - Start).TotalHours;
}
// DTO
public class EventTimesDto
{
public DateTime Start { get; set; }
public DateTime End { get; set; }
}
```
**Approach #2**
```
// Entity
public class EventTimes
{
public DateTime Start { get; set; }
public DateTime End { get; set; }
public decimal TotalHours { get; set; }
}
// DTO
public class EventTimesDto
{
public DateTime Start { get; set; }
public DateTime End { get; set; }
public decimal TotalHours => (decimal)(End - Start).TotalHours;
}
```
| It depends on the actual context. Is the `EventTimes` an entity or is it rather part of your domain model?
Either way I would not put it in the **dto** as this is really just for transferring data, so it should not contain any logic (besides maybe validation).
Since the responsibility for this calculation belongs neither to the DTO nor to the entity model, you could put the calculation in an `EventTimesCalculator`, something like this:
```
public class EventTimesCalculator
{
public decimal CalculateTotalHours(EventTimes eventTimes)
{
return (decimal)(eventTimes.End - eventTimes.Start).TotalHours;
}
}
```
If the `EventTimes` is part of your business layer / domain model, a more appropriate way would be to have a `GetTotalHours()` method inside of the model, instead of a property. Of course you would need to map it to the persistence model, if you want to save that information. Then again, since this information can be calculated, you don’t need to persist it at all, mainly because the logic might change (example: exclude breaks, interruptions or such).
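A minimal sketch of that variant (the method simply wraps the same calculation as the property above):
```
// Domain model: total hours are computed on demand rather than stored,
// so there is nothing to persist or keep in sync.
public class EventTimes
{
    public DateTime Start { get; set; }
    public DateTime End { get; set; }

    public decimal GetTotalHours()
    {
        return (decimal)(End - Start).TotalHours;
    }
}
```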
My advice is to stop thinking in terms of database entities (which I assume you meant above).
At the end, it’s rather a detail where you put the calculation logic, more importantly is to have a straight forward design. Is the application monolithic put that logic in your layer that contains the business logic. Is it a distributed architecture, handle the calculation for the model in the service responsible for Eventing. Is it just a small API, keep it simple, put it where you or your team would expect it the most.
|
How to call an Objective-C singleton from Swift?
I have an objective-C singleton as follows:
```
@interface MyModel : NSObject
+ (MyModel*) model;
...
+ (MyModel*) model
{
static MyModel *singlton = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^ {
singlton = [[MyModel alloc] initSharedInstance];
});
return singlton;
}
- (MyModel*) initSharedInstance
{
self = [super init];
if (self)
etc.
}
```
Which gets called in multiple places within the GUI code as:
```
[[MyModel model] someMethod];
```
And therefore the model will get created as a consequence of whichever part of the GUI happens to reference it first.
I'm not sure how to implement the equivalent of accessing the class via `[[MyModel model] someMethod]` in Swift, as all the Swift examples involve creating an object with an initializer, and when the Objective-C class method is converted to a Swift initializer there is a problem: it does not work when the method has no parameters.
| UPDATE
++++++++++
The workaround below is only necessary if you name your singleton method with a name derived from the suffix of the class name, i.e. in the OP's question the method name is `model` and the class is called `MyModel`.
If the method is renamed to something like singleton then it is possible to call it from Swift just like this:
```
let m = MyModel.singleton()
```
+++++++++++
I don't know if this is good/bad practice but I was able to get around the problem with initializer conversion not working when there are no parameters by adding a dummy init method. So using the code from the other answer as an example:
```
@interface XYZThing : NSObject
+ (XYZThing*) thing;
+ (XYZThing*) thingWithFoo:(int)foo bar:(int)bar;
@end
@implementation XYZThing
+ (XYZThing*) thing
{
NSLog(@"This is not executed");
return nil;
}
+ (XYZThing*)thingWithFoo:(int)foo bar:(int)bar
{
NSLog(@"But this is");
return nil;
}
@end
...
let thing = XYZThing()
let otherThing = XYZThing(foo:3, bar:7)
```
With this code above the thing method is not called, but the thingWithFoo:bar: method is.
But if it is changed to this, then the thing method will get called:
```
@interface XYZThing : NSObject
+ (XYZThing*) init;
+ (XYZThing*) thing;
+ (XYZThing*) thingWithFoo:(int)foo bar:(int)bar;
@end
@implementation XYZThing
+ (XYZThing*) init
{
return nil;
}
+ (XYZThing*) thing
{
NSLog(@"Now this is executed");
return nil;
}
+ (XYZThing*)thingWithFoo:(int)foo bar:(int)bar
{
NSLog(@"And so is this");
return nil;
}
@end
...
let thing = XYZThing()
let otherThing = XYZThing(foo:3, bar:7)
```
|
Is using explicit return type in one translation unit and deduced return type in another allowed?
My question is similar to [this one](https://stackoverflow.com/questions/27746467/using-functions-that-return-placeholder-types-defined-in-another-translation-uni), but subtly different.
Suppose I have two translation units, `exec.cpp` and `lib.cpp`, as followed:
```
// exec.cpp
int foo();
int main() {
return foo();
}
```
and
```
// lib.cpp
auto foo() {
return 42;
}
```
Is it legal to compile and link them together? Or is it ill-formed NDR?
Note: both g++ and clang generate the expected executable (i.e. returns 42) with command `<compiler> exec.cpp lib.cpp -o program`
Note: **Arguably this is a bad practice** (as the return type can change if the implementation changes, and breaks the code). But I still would like to know the answer.
| *All standard references below refers to [N4861: March 2020 post-Prague working draft/C++20 DIS.](https://timsong-cpp.github.io/cppwp/n4861/).*
---
From [[basic.link]/11](https://timsong-cpp.github.io/cppwp/n4861/basic.link#11) [**emphasis** mine]:
>
> **After all adjustments of types** (during which typedefs are replaced by their definitions), **the types specified by all declarations referring to a given** variable or **function shall be identical**, except that declarations for an array object can specify array types that differ by the presence or absence of a major array bound ([dcl.array]). A violation of this rule on type identity does not require a diagnostic.
>
>
>
[[dcl.spec.auto]/3](https://timsong-cpp.github.io/cppwp/n4861/dcl.spec.auto#3) covers that a placeholder type can appear with a function declarator, and if this declarator does not include a *trailing-return-type* (as is the case of OP's example)
>
> [...] Otherwise [no *trailing-return-type*], the function declarator **shall declare a function**.
>
>
>
where
>
> [...] the return type of the function is deduced from non-discarded `return` statements, if any, in the body of the function ([stmt.if]).
>
>
>
[[dcl.fct]/1](https://timsong-cpp.github.io/cppwp/n4861/dcl.fct#1) covers function declarators that do not include a *trailing-return-type* [**emphasis** mine, removing *opt* parts of the grammar that do not apply in this particular example]:
>
> In a declaration `T D` where `D` has the form [...] **the type of the *declarator-id* in `D` is** “*derived-declarator-type-list* function of *parameter-type-list* returning `T`” [...]
>
>
>
Thus, the two declarations
```
int f(); // #1
auto foo() { // #2
// [dcl.spec.auto]/3:
// return type deduced to 'int'
}
```
both declare functions where the type of the associated *declarator-id* in `D` of these `T D` declarations is
>
> “*derived-declarator-type-list* function of *parameter-type-list* returning `T`”
>
>
>
where in both cases, `T` is `int`:
- explicitly specified in `#1`,
- deduced as per [dcl.spec.auto]/3 in `#2`.
Thus, the declarations `#1` and `#2`, after all adjustments of types, have identical (function) types, thus fulfilling [basic.link]/11, and the OP's example is well-formed. Any slight variation of the definition of `auto f()`, however, could lead to a deduced return type which is not `int`, in which case [basic.link]/11 is violated, NDR.
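For instance, a sketch of such a variation:
```
// lib.cpp -- a variation that silently breaks the program: the deduced
// return type is now 'long', which no longer matches the 'int foo();'
// declaration in exec.cpp, so the program is ill-formed, no diagnostic required.
auto foo() {
    return 42L;
}
```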
|
iOS Appstore app override enterprise app
Our company has both an App Store and an Enterprise distribution licence. We are going to give a demonstration of the current beta version via the Enterprise licence, and some users are going to download the enterprise app to test the beta release. After the App Store publish, we want the App Store app to override the enterprise beta release that was downloaded during the demonstration, so that people who downloaded the beta app can be switched to the released version. If we give the same bundle identifier to both the enterprise and the store app, is this possible? What would happen to the push notification certificates?
| Unfortunately, you can not have an Enterprise App and an App Store App share the same Bundle Identifier (= AppID). App Store Apps need be provisioned by a profile created in a normal Developer Account. Enterprise In House Apps need to be provisioned by a separate Enterprise Developer Account, as you can not create Enterprise Distribution Profiles in a normal Developer Account and vice versa. Once you set up the AppID in one account, you can not set it up in the other, because an AppID needs to be unique.
Alternatives:
1. Don't use the Enterprise Account. Use AppStore- and AdHoc-Provisioning (for Beta-Testing) with the normal Developer Account
2. Try Beta-Testing via Apple's [TestFlight](https://developer.apple.com/library/ios/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/BetaTestingTheApp.html). It allows you to distribute Pre-Release versions of your App-Store IPA without the restrictions of AdHoc-Provisioning. You don't need the Enterprise Acc in this setup.
3. Use separate AppIDs. One for the Enterprise Account. One for the normal Developer Account. That would result in 2 different Apps on a device, once installed.
To simplify your Push Notification setup, you should run with 1. or 2.
|
Should I use mysql for scalability?
I'm in the planning phase of developing a web application and am trying to figure out my best bet as far as database options go. I'm already familiar with PHP and the MySQL database. Initially, the website won't be handling any transactions, but my hope is that the website will expand to a large degree and I'll incorporate transactions and sales into it. My concern is: will PHP be secure enough for the future transactions? If not, will it be possible to switch from MySQL to, say, Oracle or another "more secure database" while the site is expanding? Thanks for your time and responses.
| Plan for the foreseeable future and not beyond. If utilized correctly MySQL can scale incredibly well (just check out some of the names that use it <http://www.mysql.com/why-mysql/case-studies/> ).
As for the security aspect, that's totally up to your coding ability. Security is not inherently better in one language or DB over another, but certain languages have a very low barrier to entry (PHP being one of them) so they tend to invite many hobby programmers who develop applications that scale badly and are security nightmares.
Stick to what you know and let your technology develop as required. Most web apps that end up seriously scaling have a few rewrites in their history (often changing DB engines, frameworks, platforms, languages...). You can't predict the future, so stick to what you're good at now.
tl;dr: PHP and MySQL will be fine.
|
Haskell infinite types and my FSM function
I've just come across the "infinite type" in Haskell when I was attempting to write a finite state machine. I thought the following was very intuitive:
```
fsm [] _ acc = Right acc
fsm (x:xs) state acc =
case state acc x of
Left err -> Left err
Right (s, a) -> fsm xs s a
```
I give the state function the current state (the accumulator) and the new event, and the state function produces the next state function along with the new accumulator. I recurse until I have no more events.
The compiler tells me:
```
Occurs check: cannot construct the infinite type:
t1 = b0 -> t0 -> Either a0 (t1, b0)
In the second argument of `fsm', namely `s'
```
Because `state` is now an infinite type. How do I rearrange this to make it work?
| Infinite types like this wreak havoc with the type system; they don't make it unsafe, but they would cause a great many programs to typecheck that you don't really want to, thus hiding errors, and I believe they make type inference harder too.
Thankfully, the solution is simple: you just need to make a `newtype` wrapper. `data` and `newtype` declarations are of course allowed to be recursive (otherwise, we couldn't even define lists!); it's just plain, unwrapped types which aren't.
```
newtype FSMState err acc ev =
FSMState { stepFSM :: acc -> ev -> Either err (FSMState err acc ev, acc) }
fsm :: [ev] -> FSMState err acc ev -> acc -> Either err acc
fsm [] _ acc = Right acc
fsm (x:xs) state acc =
case stepFSM state acc x of
Left err -> Left err
Right (s, a) -> fsm xs s a
```
|
XNA ViewPort projection and SpriteBatch
I'm working on an XNA game and I am using ViewPort.Project and ViewPort.Unproject to translate to and from world coordinates. Currently I use these for each object I draw with SpriteBatch. What I would like to do is calculate a Matrix that I can send to SpriteBatch.Begin to do the screen-space transformation for me.
Here are the functions I currently use to translate to and from screenspace:
```
Vector2 ToWorldCoordinates(Vector2 pixels)
{
Vector3 worldPosition = graphics.GraphicsDevice.Viewport.Unproject(new Vector3(pixels, 0),
Projection, View, Matrix.Identity);
return new Vector2(worldPosition.X, worldPosition.Y);
}
Vector2 ToScreenCoordinates(Vector2 worldCoords)
{
var screenPositon = graphics.GraphicsDevice.Viewport.Project(new Vector3(worldCoords, 0),
Projection, View, Matrix.Identity);
return new Vector2(screenPositon.X, screenPositon.Y);
}
```
View is set to Matrix.Identity, and Projection is set like so:
```
Projection = Matrix.CreateOrthographic(40 * graphics.GraphicsDevice.Viewport.AspectRatio, 40, 0, 1);
```
And here is how I currently draw things:
```
spriteBatch.Begin();
foreach (var thing in thingsToDraw)
{
spriteBatch.Draw(thing.Texture, ToScreenCoordinates(thing.PositionInWorldCoordinates), thing.Color);
}
spriteBatch.End();
```
This is what I would like to do instead (using XNA 4.0 version of SpriteBatch.Begin())
```
// how do I calculate this matrix?
Matrix myTransformationMatrix = GetMyTransformationMatrix();
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null,
myTransformationMatrix);
foreach (var thing in thingsToDraw)
{
// note: no longer converting each object's position to screen coordinates
spriteBatch.Draw(thing.Texture, thing.PositionInWorldCoordinates, thing.Color);
}
spriteBatch.End();
```
| I've written about `SpriteBatch` and the various "spaces" (world, projection, client, etc) [here](https://stackoverflow.com/questions/3018980/using-createorthographicoffcenter-in-xna/3020190#3020190), [here](https://stackoverflow.com/questions/3495140/letterboxing-and-scaling-in-xna-on-pc/3499671#3499671) and [here](https://stackoverflow.com/questions/3538820/xna-about-the-relation-between-world-space-and-the-screen-space/3539665#3539665). Those answers are probably worth reading.
`SpriteBatch` assumes that your World space is the same thing as Client space - which is however many pixels tall and wide the viewport is, origin top-left, Y+ is down.
It looks like (based on your use of `CreateOrthographic`) you want your World space to appear as 40 units tall, on screen, and however many units will fit widthways. You also want the World origin at the centre of the screen and Y+ is up.
So you have to stick another matrix in between to convert your World space to Client space. I believe the correct matrix is (in pseudocode): `scale(viewport.Height/40) * scale(1, -1) * translate(viewport.Width/2, viewport.Height/2)`. Scale, flip and translate to go from World to Client.
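A rough sketch of that matrix in code, filling in the `GetMyTransformationMatrix` helper from the question (untested, so adjust to your setup):
```
Matrix GetMyTransformationMatrix()
{
    Viewport vp = graphics.GraphicsDevice.Viewport;
    return Matrix.CreateScale(vp.Height / 40f)        // 40 world units -> full viewport height
        * Matrix.CreateScale(1f, -1f, 1f)             // flip so world Y+ is up
        * Matrix.CreateTranslation(vp.Width / 2f, vp.Height / 2f, 0f); // origin to screen centre
}
```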
You must also remember that sprites assume that Y+ is down. So you have to pass a `(1, -1)` scale into `SpriteBatch.Draw`, otherwise they will be drawn inverted (and, due to backface culling, be invisible).
|
Extract specific folder from tarball into specific folder
I created a tarball on Ubuntu 14.04 with:
```
cd /tmp
tar -cfvz archive.tar.gz /folder
```
Now I want to extract a specific folder in the tarball (which inside the tarball lies in `/tmp`) into a specific folder:
```
cd /tmp
tar -xfvz archive.tar.gz folder/in/archive -C /tmp/archive
```
The result should be a new folder in `/tmp` called `archive`. Is this correct? Especially the missing slash (relative path) for the folder to be extracted and the absolute path with the leading slash for the folder to create?
| # Tl;dr
Since you are in `/tmp` already, you can just discard the `-C` option (since by default `tar` will extract files in the current working directory) and just add `--strip-components=2`:
```
tar --strip-components=2 -xvzf archive.tar.gz folder/in/archive
```
---
GNU `tar` by default stores relative paths.
Whether an archive uses relative paths can be checked by running `tar -tf archive | head -n 1`, which will print the path of the first file in the archive; if that file's path is a relative path, all the files in the archive use relative paths:
```
% tar -tf bash-4.3.tar.gz | head -n 1
bash-4.3/
```
To extract a single file / folder from an archive that uses relative paths without its ancestors into a relative path you'll need two options: `-C` and `--strip-components=N`: in the example below the archive `bash-4.3.tar.gz` uses relative paths and contains a file `bash-4.3/doc/bash.html` which is extracted into a relative path `path` (`-C` specifies the directory in which to extract the files, `--strip-components=2` specifies that the parent and the parent of the parent of the extracted files should be ignored, so in this case only `bash.html` will be extracted into the target directory):
```
% tree
.
├── bash-4.3.tar.gz
└── path
1 directory, 1 file
% tar -tf bash-4.3.tar.gz | grep -F 'bash.html'
bash-4.3/doc/bash.html
% tar -C path --strip-components=2 -zxf bash-4.3.tar.gz bash-4.3/doc/bash.html
% tree
.
├── bash-4.3.tar.gz
└── path
└── bash.html
1 directory, 2 files
```
So, back to your command, since you are in `/tmp` already, you can just discard the `-C` option (since by default `tar` will extract files in the current working directory) and just add `--strip-components=2`:
```
tar --strip-components=2 -xvzf archive.tar.gz folder/in/archive
```
|
How to add build step in team city to run Node Js unit tests (Mocha framework)
I have a NodeJs application. Currently I am using team city for build and deployment of this application.
Now I want to run unit test cases before deployment. I have used Mocha framework with Chai to write test cases.
I don't see any runner type for Mocha or Node Js in team city.
I know some plugin needs to be installed on the TeamCity server.
Does anyone know what the plugin is and what steps I need to follow?
| You don't have to install any specific TeamCity plugin, you have to use test reporter capable of writing TeamCity [service messages](https://confluence.jetbrains.com/display/TCD10/Build+Script+Interaction+with+TeamCity), e.g. [mocha-teamcity-reporter](https://www.npmjs.com/package/mocha-teamcity-reporter), which is just another npm package.
You'll get you tests consumed by TeamCity after you run `mocha --reporter mocha-teamcity-reporter test` in your build step, so `Command-Line Runner` may be used for this purpose.
It is a good practice to extract this command to a separate [script](https://docs.npmjs.com/cli/run-script) in your `package.json`, e.g:
```
"test:ci": "mocha --reporter mocha-teamcity-reporter test"
```
and use `npm run test:ci` in your build step.
|
Extra Characters while using XML PATH
I have a table called Map\_Data and the data looks like:
```
ID SoCol Descol
125 case Per_rating when 5 then 'Good' when 4 then 'Ok' else null end D_Code
```
And I wrote a query on this particular row and the query is:
```
SELECT Params = ( SELECT DesCol + ' = ''' + SoCol + ''''
FROM dbo.Map_Data t1
WHERE ID = 125
FOR
XML PATH('')
)
```
and I get the output as :
```
D_Code = 'case per_rating
 when 5 then 'Good'
 when 4
 then 'Ok'
 end'
```
Can anyone tell me why I am getting the `&#x0D;` entities and how I can correct them?
| This slight change will make the ugly entities go away, but it won't eliminate the carriage returns (look at the results in Results to Text, not Results to Grid, to see them):
```
SELECT Params = ( SELECT DesCol + ' = ''' + SoCol + ''''
FROM dbo.Map_Data t1
WHERE ID = 125
FOR
XML PATH(''), TYPE
).value(N'./text()[1]', N'nvarchar(max)');
```
If you want to get rid of the CR/LF too you can say:
```
SELECT Params = ( SELECT REPLACE(REPLACE(DesCol + ' = ''' + SoCol + '''',
CHAR(13), ''), CHAR(10), '')
FROM dbo.Map_Data t1
WHERE ID = 125
FOR
XML PATH(''), TYPE
).value(N'./text()[1]', N'nvarchar(max)');
```
Also I'm not sure how you're going to use the output but if you're going to evaluate it later with dynamic SQL you're going to need to replace the embedded single quotes (`'`) with two single quotes (`''`). Otherwise it will blow up because they're also string delimiters.
|
Why am I getting a NameError when I try to call my function?
This is my code:
```
import os
if os.path.exists(r'C:\Genisis_AI'):
print("Main File path exists! Continuing with startup")
else:
createDirs()
def createDirs():
os.makedirs(r'C:\Genisis_AI\memories')
```
When I execute this, it throws an error:
```
File "foo.py", line 6, in <module>
createDirs()
NameError: name 'createDirs' is not defined
```
I made sure it's not a typo and I didn't misspell the function's name, so why am I getting a NameError?
| You can't call a function unless you've already defined it. Move the `def createDirs():` block up to the top of your file, below the imports.
Some languages allow you to use functions before defining them. For example, javascript calls this "hoisting". But Python is not one of those languages.
---
Note that it's allowable to refer to a function in a line higher than the line that creates the function, as long as *chronologically* the definition occurs before the usage. For example this would be acceptable:
```
import os
def doStuff():
if os.path.exists(r'C:\Genisis_AI'):
print("Main File path exists! Continuing with startup")
else:
createDirs()
def createDirs():
os.makedirs(r'C:\Genisis_AI\memories')
doStuff()
```
Even though `createDirs()` is called on line 7 and it's defined on line 9, this isn't a problem because `def createDirs` executes before `doStuff()` does on line 12.
|
Set readonly fields in a constructor local function c#
The following does not compile.
```
public class A
{
private readonly int i;
public A()
{
void SetI()
{
i = 10;
}
SetI();
}
}
```
It fails with this error:
>
> CS0191 A readonly field cannot be assigned to (except in a constructor or a variable initializer)
>
>
>
Technically, are we not still in the constructor, since the visibility of the local function is limited? So I'm wondering why this does not compile.
| The compiler turns the `SetI` local function into a separate class-level method. Since this separate class-level method is not a constructor, you are not allowed to assign to readonly fields from it.
So the compiler takes this:
```
public class A
{
private readonly int i;
public A()
{
void SetI()
{
i = 10;
}
SetI();
}
}
```
and turns it into this:
```
public class A
{
private readonly int i;
public A()
{
<.ctor>g__SetI|1_0();
}
[CompilerGenerated]
private void <.ctor>g__SetI|1_0()
{
i = 10;
}
}
```
([SharpLab](https://sharplab.io/#v2:EYLgHgbALANAJiA1AHwATtQAQMxYEyoCCAsAFAaoDeZFFADgE4CWAbgIYAuApqkwHYdeAbjI1a6HEQAUASjHjq5cbUxRUAZS4cAkrPnL0igwaaoAvKgCMABiGp9BgL6ilxjVt0yRr2s5+OgA). I left off the `readonly` so it would compile.)
As you can see, it's trying to assign `i` from the method `<.ctor>g__SetI|1_0()`, which isn't a constructor.
Unfortunately the C# 7.0 language specification hasn't yet been published, so I can't quote it.
Exactly the same happens if you try and use a delegate:
```
public class A
{
private readonly int i;
public A()
{
Action setI = () => i = 10;
setI();
}
}
```
Gets compiled to:
```
public class A
{
private readonly int i;
public A()
{
Action action = <.ctor>b__1_0;
action();
}
[CompilerGenerated]
private void <.ctor>b__1_0()
{
i = 10;
}
}
```
([SharpLab](https://sharplab.io/#v2:EYLgHgbALANAJiA1AHwAIAYAEqCMBuTQgWAChTUBmbAJkwEFSBvUwwgBwCcBLANwEMALgFNMXAHYDReUi1aV6ACgCUswsxKtN2HLQDOQgQElMAXkzLTAPlGnMOdNLIathfUeWPNAX1JegA==), again without the `readonly`.)
... which likewise fails to compile.
|
Facebook OAuth is not returning email in user info
I'm doing a Spree 3.0 installation (RoR) and trying to use Facebook OAuth for authentication, but the fields sent back after a successful OAuth do NOT contain the email, which is critical to our application. Here is the return from the successful Facebook authentication.
```
#<OmniAuth::AuthHash credentials=#<OmniAuth::AuthHash expires=true expires_at=1442435073 token="CAAJa3dyBtY4BAJ2ZB3vrenNOFJKSMtvxYO09ZCJtEsmKNBs90q9nmUF4LIBr06xCizEAR3lwht3BwycLkVFdjlvkS1AUGpYODQHu25K0uO8XLDDPkTO0E9oPdIILsbTTOuIT7qcl8nJ6501z0dCXEi9hVNwPYqZBbGqiEhyoLyVgCNnDWdPRLBRF5xSovJdhjjCf6XC8ulJ4NnKBfM8"> extra=#<OmniAuth::AuthHash raw_info=#<OmniAuth::AuthHash id="101230990227589" name="David Alajbbjfdjgij Bowersstein">> info=#<OmniAuth::AuthHash::InfoHash image="http://graph.facebook.com/101230990227589/picture" name="David Alajbbjfdjgij Bowersstein"> provider="facebook" uid="101230990227589"
```
As you can see, all I get back is the user name and their ID. Is there some setting on my Facebook app that I need to check in order to get the email back? Or is there a different way I'm supposed to do OAuth? I'm just using the spree_social gem, which does this all internally, so I've actually not written any code around this.
Here is the code, copied out of the gem; I just added the logging lines to see what was coming back from Facebook.
```
def #{provider}
authentication = Spree::UserAuthentication.find_by_provider_and_uid(auth_hash['provider'], auth_hash['uid'])
if authentication.present? and authentication.try(:user).present?
flash[:notice] = I18n.t('devise.omniauth_callbacks.success', kind: auth_hash['provider'])
sign_in_and_redirect :spree_user, authentication.user
elsif spree_current_user
spree_current_user.apply_omniauth(auth_hash)
spree_current_user.save!
flash[:notice] = I18n.t('devise.sessions.signed_in')
redirect_back_or_default(account_url)
else
user = Spree::User.find_by_email(auth_hash['info']['email']) || Spree::User.new
user.apply_omniauth(auth_hash)
Rails.logger.debug("THE AUTO HASH")
Rails.logger.debug(auth_hash.inspect)
if user.save
flash[:notice] = I18n.t('devise.omniauth_callbacks.success', kind: auth_hash['provider'])
sign_in_and_redirect :spree_user, user
else
session[:omniauth] = auth_hash.except('extra')
flash[:notice] = Spree.t(:one_more_step, kind: auth_hash['provider'].capitalize)
redirect_to new_spree_user_registration_url
return
end
end
if current_order
user = spree_current_user || authentication.user
current_order.associate_user!(user)
session[:guest_token] = nil
end
end
```
| Facebook just released the latest Graph API v2.4, which does not return the email by default; you need to *explicitly* specify which fields you want.
[Introducing Graph API v2.4](https://developers.facebook.com/blog/post/2015/07/08/graph-api-v2.4/)
Now, in the very latest omniauth-facebook (possibly 2.1.0), the "email" field is requested by default and it just works.
[Fix default info\_fields to 'name,email' #209](https://github.com/mkdynamic/omniauth-facebook/pull/209)
Or, you can specify it yourself, for example:
```
options['info_fields']='id,email,gender,link,locale,name,timezone,updated_time,verified';
```
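For instance, if you configure the provider yourself with OmniAuth, something like the following sketch should request the email explicitly (the initializer location and env var names are placeholders, and spree_social may already wire this up for you):
```
# config/initializers/omniauth.rb (hypothetical location; adjust to your setup)
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, ENV['FACEBOOK_APP_ID'], ENV['FACEBOOK_APP_SECRET'],
           scope: 'email', info_fields: 'name,email'
end
```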
|
How to make every Class Method call a specified method before execution?
I want to make my Python class behave in such a way that when any class method is called, a default method is executed first, without explicitly specifying this in the called class. An example may help :)
```
class animals:
    def _internalMethod(self):
        self.respires = True

    def cat(self):
        self._internalMethod()
        self.name = 'cat'

    def dog(self):
        self._internalMethod()
        self.name = 'dog'
```
I want \_internalMethod() to be called automatically when any method is called from an instance of animals, rather than stating it explicitly in the def of each method. Is there an elegant way to do this?
Cheers,
| You could use a metaclass and [`__getattribute__`](http://docs.python.org/reference/datamodel.html#object.__getattribute__) to decorate all methods dynamically (if you are using Python 2, be sure to subclass from `object`!).
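As a minimal sketch of that idea (this variant skips the metaclass and simply overrides `__getattribute__` on the class; it assumes you want to exclude underscore-prefixed names so `_internalMethod` itself and dunder lookups don't recurse):
```
class animals(object):
    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        # Wrap only ordinary public methods; leave data attributes and
        # underscore-prefixed names (including _internalMethod) untouched.
        if callable(attr) and not name.startswith('_'):
            def wrapped(*args, **kw):
                object.__getattribute__(self, '_internalMethod')()
                return attr(*args, **kw)
            return wrapped
        return attr

    def _internalMethod(self):
        self.respires = True

    def cat(self):
        self.name = 'cat'
```
Calling `animals().cat()` would then set `respires` before `cat` runs.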
Another option is just to have a fixup on the class, like:
```
def add_method_call(func, method_name):
def replacement(self, *args, **kw):
getattr(self, method_name)()
return func(self, *args, **kw)
return replacement
def change_all_attrs(cls, added_method):
for method_name in dir(cls):
attr = getattr(cls, method_name)
if callable(attr):
setattr(cls, method_name, add_method_call(attr, added_method))
class animals(object):
...
change_all_attrs(animals, '_internalMethod')
```
This is kind of sloppy, `dir()` won't get any methods in superclasses, and you might catch properties and other objects you don't intend to due to the simple `callable(attr)` test. But it might work fine for you.
If using Python 2.7+ you can use a class decorator instead of calling `change_all_attrs` after creating the class, but the effect is the same (except you'll have to rewrite `change_all_attrs` to make it a decorator).
|
Document.referrer wrong when pressing the back button
I have a PHP / Javascript page that automatically logs a user into different systems from one log on. These are external sites and all works good except when the user hits the back button. It then redirects them right back where they came from.
I'm looking to have it redirect back to my main website and avoid getting stuck in this redirect nightmare. So I tried `document.referrer`, but that only seems to grab the current page I'm on and not the referring site. Am I wrong, or can this not be done like this?
```
function myfunc () {
var frm = document.getElementById("loggedin1");
if(document.referrer != 'https://someurl') {
alert(document.referrer);//Just used to see output
frm.submit();
}
}
window.onload = myfunc;
```
If I could get it to function I would add an `else` in there and have it going back to my website.
Thanks!
| It sounds like you are trying to go back to a previous page that is not within the website of the page you're on?
A few points:
1) `document.referrer` will only work if the person got to the current page through a link or by clicking something, not if they were redirected through other means.
2) Due to browser security restrictions, you will not be able to access the JavaScript history for other sites. So if you go from your site, site A, to site B for a login, you will not be able to access the site A history from site B.
If you need to take them back to the previous page they were on within your site, can you use an iframe to load the external page? That way they'll never leave your site. Or maybe a popup window?
If what you are trying to accomplish is site logins, have you looked into the available APIs? Sites like Facebook have APIs for allowing login on your site through theirs.
Cheers!
|
Is it feasible to use Lisp/Scheme as a scripting language?
Is it feasible to script in a Lisp, as opposed to Ruby/Python/Perl/(insert accepted scripting language)? By this I mean do things like file processing (open a text file, count the number of words, return the nth line), string processing (reverse, split, slice, remove punctuation), prototyping/quick computations, and other things you would normally use Python, etc. for. How productive would doing such tasks in a Lisp be, as opposed to Ruby/Python/Perl/scripting language of choice?
I ask because I want to learn a Lisp but also use it to do something instead of only learning it for the sake of it. I looked around, but couldn't find much information about scripting in a Lisp. If it is feasible, what would be a good implementation?
Thank you!
| Today, using the name `LISP` as if it were obvious which language you mean doesn't really work, since it hasn't been a single language since the 70s, and probably earlier. LISP only indicates a fully parenthesized Polish prefix notation, just as Pascal, Ruby, Python and Perl are merely [variations of ALGOL](http://en.wikipedia.org/wiki/Generational_list_of_programming_languages#ALGOL_based).
`Scheme` is a standard and `Common LISP` is a standard. Both of them are general purpose, though `Common LISP` is batteries-included while `Scheme` is a minimalistic language. They are quite different in style, so comparing them would be like comparing `Java` with `Python`.
**Embedded LISPS**
There are lots of uses of `Scheme` and specialized LISP dialects as embedded languages. [Emacs](http://www.gnu.org/software/emacs/) is the most widely used editor in the Unix world, and its lisp, `elisp`, is the most used lisp language because of this. The image processing application [GIMP](http://www.gimp.org/) has a `Scheme` base with extensions for image processing.
**Stand alone scripts**
It's possible in many `Common LISP` implementations, using the standard `#!` notation, to make a script work as an executable and run it as an application. E.g. I use `CLISP` and have scripts using `#!/usr/bin/clisp -C` as the first line. I also use `Scheme` the same way, and in the very fast incremental compiler ikarus you use `#!/usr/bin/ikarus --r6rs-script`. `Clojure` has all the power of the Java libraries, you can use your own classes from it, and it can also be made into an application with `#!/usr/bin/env java -cp /path/to/clojure-1.2.0.jar clojure.main`
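As a tiny illustration of the kind of file processing the question asks about, here is a minimal sketch of such a stand-alone script (assuming GNU CLISP; `ext:*args*` is CLISP-specific and holds the command-line arguments):
```
#!/usr/bin/clisp -C
;; Minimal sketch: count whitespace-separated words in the file named by
;; the first command-line argument.
(defun count-words-in-line (line)
  (let ((count 0) (in-word nil))
    (loop for ch across line do
      (if (member ch '(#\Space #\Tab))
          (setf in-word nil)
          (unless in-word (incf count) (setf in-word t))))
    count))

(with-open-file (in (first ext:*args*))
  (format t "~a~%"
          (loop for line = (read-line in nil)
                while line
                sum (count-words-in-line line))))
```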
**more permanent application**
In `Common LISP` you can dump an image. It will be a Common Lisp binary with your code already compiled in. Many Scheme implementations have native compilation, and `Clojure` can compile to Java bytecode (though that's not the most common way to do it). Still, I have had Ikarus sometimes interpret faster than a compiled executable from racket, chicken and gambit, so I often do my programming in `DrRacket` and run it in `ikarus` for Scheme.
Try both `Common LISP` and `Scheme`, as both of them are good enough for the tasks you specified in your question. There are many free books on the subject and some are worth their price as well. You may also try `Racket`, which is a `Scheme` derivative with lots of libraries for everyday tasks, but it doesn't conform to any standard.
**About productivity**
I imagine you are referring to how quickly you can write a certain task in a Lisp dialect. I imagine it depends on how used you are to the syntax; it takes a while to get used to it after only knowing Algol dialects. It also takes a different approach, as you need to think in a more functional manner, especially for `Scheme`. I imagine that when you are as good in Scheme as in your favorite Algol dialect, productivity will be similar. E.g. some Algol dialects are faster to prototype in than others, and that is true for Lisp dialects as well.
|
EntityFramework not updating column with default value
I am inserting an object into a SQL Server db via Entity Framework 4 (EF). On the receiving table there is a column (`CreatedDate`) which has its default value set to `getdate()`. So I do not provide it to EF, assuming its value will be defaulted by SQL Server to `getdate()`.
However this doesn't happen; instead EF returns a validation error on `SaveChanges()`.
Is there any reason you know of for this happening? Please let me know.
Many thanks.
| If you never want to edit that value (like with a created date), you can use:
```
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public virtual DateTime CreatedDate { get; set; }
```
This will tell the Entity Framework that the value is controlled by the database, but will still fetch the value.
Note that you then cannot change that value, so it's not a solution if you simply want an initial value.
If you just want a default value but still want to be able to edit it, or you are using Entity Framework 5 or below, you have to set the default in code.
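For instance, a minimal sketch of setting such a default in code (the entity and property names are made up for illustration):
```
public class Price
{
    public Price()
    {
        // Set the initial value in code; EF will send it to the database on
        // insert, but you can still change it afterwards.
        CreatedDate = DateTime.UtcNow;
    }

    public DateTime CreatedDate { get; set; }
}
```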
More discussion about this here:
[How to use Default column value from DataBase in Entity Framework?](https://stackoverflow.com/questions/584556/how-to-use-default-column-value-from-database-in-entity-framework)
|
Some questions about OpenGL transparency
I have two questions about OpenGL blending.
1) I know I have to draw opaque objects first and then draw the non-opaque ones from back to front. So I put them in a list ordered by distance to the center (0,0,0). But do transformations (rotate and translate) affect the "center" from which I measure the distance?
2) And second, if the items I draw are triangles, how do I measure the distance? To its incenter? To its orthocenter?
| You certainly need to take the transformations into account for sorting. Applying all the transformations, and then sorting by the resulting depth (z-coordinate), is the most direct approach.
A generally more efficient way of achieving the same thing is to apply the inverse transformations to your view direction once per object (or per set of objects that use the same transformations), and then calculate the depth of each vertex/triangle as the dot product of the vertex with this inverse-transformed view vector. This requires only one dot product per triangle, instead of applying full transformations to them. And the number of triangles is often orders of magnitude larger than the number of objects.
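As a rough illustration, here is a minimal sketch of that per-object sort (with made-up vector and triangle types, not tied to any particular math library; it uses the triangle centers, as discussed below):
```
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct Triangle { Vec3 a, b, c; };

// viewDirObjectSpace: the view direction transformed by the inverse of the
// object's model transform (one transform per object, not per vertex).
void sortBackToFront(std::vector<Triangle>& tris, const Vec3& viewDirObjectSpace) {
    auto depth = [&](const Triangle& t) {
        Vec3 center{ (t.a.x + t.b.x + t.c.x) / 3.0f,
                     (t.a.y + t.b.y + t.c.y) / 3.0f,
                     (t.a.z + t.b.z + t.c.z) / 3.0f };
        return dot(center, viewDirObjectSpace);
    };
    // Larger dot product = farther along the view direction, so drawn first.
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& lhs, const Triangle& rhs) {
                  return depth(lhs) > depth(rhs);
              });
}
```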
As to which point to use: there's really no solution that will work for all cases. The center of the triangle should be as good as anything. This whole approach is an approximation that will work sufficiently well in many cases, but will not be entirely correct in some scenarios.
To illustrate the fundamental challenges with order-dependent transparency, let's look at a few examples. In the following figure, the view direction is from left to right, and we look at two triangles A and B edge on:
```
\
\
B
\
\ \
\ \
view -----> \
A
\
\
```
Visually, it's clear that B is behind A, and needs to be drawn first. Yet:
- The closest point of B is closer to the view point than the closest point of A.
- The farthest point of B is closer to the view point than the farthest point of A.
- The center point of B is closer to the view point than the center point of A.
You can't sort these triangles correctly by comparing one single depth value from each of them. To handle this properly, you have to take the geometry into account, and use more complex criteria to order them properly.
Then there are cases where there is no valid order:
```
\ /
\ /
view -----> \/
/\
B A
/ \
```
Here, there is no sorting order for A and B that would be valid. Parts of B are behind A, and parts of A are behind B. This will be the case whenever you have intersecting triangles. The only way to correctly resolve this is to split triangles.
There are also configurations without any intersecting triangles where there is no valid order. This is an example with 4 triangles, looking from the top this time:
```
___________
|\ \ |
__|_\________\ |___
| \ |__/
| ______\ |
|________/ \ |
| \ \_____|__
| \_______/ |
__| \ |
/__| \______________|
| \ \ |
|__________\ \|
```
These difficulties are an important reason why order independent transparency rendering methods are so attractive, beyond just avoiding the overhead of sorting.
|
RegEx with \d doesn’t work in if-else statement with [[
I wrote the following script. It will be used in a build process later. My goal is to decide whether it's a pre-release or a release. To achieve this I compare `$release` to a RegEx.
If my RegEx matches, it's a pre-release; if not, it's a release.
```
#/bin/bash
release="1.9.2-alpha1"
echo "$release"
if [[ "$release" =~ \d+\.\d+\.\d+[-]+.* ]];then
echo "Pre"
else
echo "Release"
fi
```
But as a result I always end up with the following:
```
~$ bash releasescript.sh
1.9.2-alpha1
Release
```
Version:
```
Ubuntu 18.04.1 LTS
```
I used this [editor](https://www.regextester.com/97945) to test my RegEx. I've been stuck for at least 6 hours, so I would greatly appreciate some help.
| `\d` and `\w` don't work in [POSIX regular expressions](http://en.wikipedia.org/wiki/Regular_expression#POSIX); you could use `[[:digit:]]` though:
```
#!/bin/bash
release="1.9.2-alpha1"
echo "$release"
LANG=C # Needed only if the script will be used in locales where digits are not 0-9
if [[ "$release" =~ ^[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+-+ ]];then
echo "Pre"
else
echo "Release"
fi
```
I have tested this script; it outputs "Pre" for the given `$release`.
I checked out your regex builder: it works only with Perl-compatible and JavaScript regexes, while you need POSIX or POSIX extended.
By [@dessert](https://askubuntu.com/users/507051/dessert):
>
> `[0-9]` is the shorter alternative to `[[:digit:]]`. As the beginning
> of the string is to be matched, one should add `^`, while `.*` at the
> end is superfluous: `^[0-9]+\.[0-9]+\.[0-9]+-+` – using a group this
> can be further shortened to: `^([0-9]+\.){2}[0-9]+-+`
>
>
>
|
Adding a field to Scala case class?
I've seen some blogs on the `Pimp my Library pattern`, and these seem to work well for adding behavior to classes.
But what if I have a `case class` and I want to `add data members` to it? As a case class I can't extend it (*inheriting from a case class is deprecated/strongly discouraged*). Will any of these pimp patterns allow me to add data to a case class?
| No - I don't see how you could make this work, because the *enriched instance* is usually thrown away (note: the pimp-my-library pattern is now called enrich-my-library). For example:
```
scala> case class X(i: Int, s: String)
defined class X
scala> implicit class Y(x: X) {
| var f: Float = 0F
| }
defined class Y
scala> X(1, "a")
res17: X = X(1,a)
scala> res17.f = 5F
res17.f: Float = 0.0
scala> res17.f
res18: Float = 0.0
```
You would have to make sure you kept hold of the wrapped instance:
```
scala> res17: Y
res19: Y = Y@4c2d27de
scala> res19.f = 4
res19.f: Float = 4.0
scala> res19.f
res20: Float = 4.0
```
However, I find this not useful in practice. You have a wrapper anyway, so you're better off making it explicit.
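For example, a minimal sketch of the explicit alternative (the wrapper name is made up for illustration):
```
// Carry the extra field in an explicit wrapper case class instead of an
// implicit enrichment, so it cannot be silently thrown away.
case class X(i: Int, s: String)
case class XWithF(x: X, f: Float = 0F)

val enriched = XWithF(X(1, "a"), f = 5F)
// enriched.x.i == 1, enriched.f == 5.0
```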
|