prompt (string, lengths 49 to 4.73k) | ground_truth (string, lengths 238 to 35k)
---|---|
Make material UI drawer stay the same size, instead of resizing when content size changes
I'm using Material UI with a drawer.
Inside the drawer is a set of collapsible lists. When I expand a list, the list text items can be quite long, and the drawer jumps out much wider.
I would like the drawer to have a width that is 30% of the window size, but when I try to set the classes on the drawer, neither root nor modal classNames seem to hold the drawer width in place.
This is the Drawer code:
```
<Drawer classes={drawerClasses} open={showStandardDrawer} anchor={"right"} onClose={closeDrawer}>
{Array.from(items).map((item, index) => {
return (
<List
key={`list-${index}`}
component="div"
aria-labelledby="nested-list-subheader"
subheader={
<ListSubheader component="div" id="nested-list-subheader">
{item.title}
</ListSubheader>
}
className={classes.root}
>
{ item.elements.map((el, index) => {
return (
<React.Fragment key={index}>
<ListItem key={index} button onClick={() => handleExpand(index)}>
<ListItemText primary={el.name} />
{open[index] ? <ExpandLess /> : <ExpandMore />}
</ListItem>
<Collapse in={open[index]} timeout="auto" unmountOnExit>
{ el.descriptions.map((description, index) => {
return (
<List key={`l-${index}`} component="div" disablePadding>
<ListItem button className={classes.nested} >
<ListItemIcon>
<StarBorder />
</ListItemIcon>
<ListItemText primary={description} primaryTypographyProps={{noWrap:true, width:'200px'} } />
</ListItem>
</List>
)})
}
</Collapse>
</React.Fragment>
)
})}
</List>
)
})}
</Drawer>
```
and these are the classes applied to the drawer ('drawerClasses'):
```
{
root: {
maxWidth: '200px',
minWidth: '50%',
width: '50%',
overflow: 'hidden'
},
modal: {
maxWidth: '50%',
minWidth: '50%',
width: '50%'
}
}
```
These aren't the styles I necessarily want, I'm just trying to see if I can get Drawer to size itself instead of sizing around its children.
| Instead of `modal`, use the `paper` class. The `Paper` element within the drawer is the main visible container. The `root` and `modal` classes are applied to wrapper elements that are positioned in such a manner that their widths won't necessarily affect the Paper width.
Here's a code excerpt from the [Permanent drawer demo](https://material-ui.com/components/drawers/#permanent-drawer):
```
const useStyles = makeStyles(theme => ({
drawer: {
width: drawerWidth,
flexShrink: 0,
},
drawerPaper: {
width: drawerWidth,
},
}));
...
<Drawer
className={classes.drawer}
variant="permanent"
classes={{
paper: classes.drawerPaper,
}}
anchor="left"
>
```
<https://codesandbox.io/s/zxljh>
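Applied to the setup in the question, a minimal sketch might look like this (the 30% width comes from the question; the hook name and the omitted list markup are assumptions):
```
// Hypothetical adaptation of the question's drawerClasses, keyed on `paper`:
const useDrawerClasses = makeStyles({
  paper: {
    width: '30%',       // drawer stays at 30% of the window
    overflow: 'hidden'  // long list items no longer widen the drawer
  }
});

// inside the component:
const drawerClasses = useDrawerClasses();

<Drawer
  classes={{ paper: drawerClasses.paper }}
  open={showStandardDrawer}
  anchor={"right"}
  onClose={closeDrawer}
>
  {/* ...collapsible lists as before... */}
</Drawer>
```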
|
SwiftUI TabBar: Action for tapping TabItem of currently selected Tab to reset view
The app I am working on is based around a TabBar, and when I am on a tab I want to be able to click the tabItem again to reset the view, similar to how Twitter does it in their tabBar.
I do not know how to recognize that action, though. Adding a button to the tabItem is not working, adding a tapGesture modifier isn't either, and I can't think of anything else I could try.
```
struct ContentView: View {
var body: some View {
TabView() {
Text("Tab 1")
.tabItem {
Image(systemName: "star")
.onTapGesture {
print("Hello!")
}
Text("One")
}
.tag(0)
Text("Tab 2")
.tabItem {
Button(action: {
print("Hello!")
}, label: {
Image(systemName: "star.fill")
})
}
.tag(1)
}
}
}
```
It shouldn't automatically reset when opening the tab again (which I have seen discussed elsewhere), only when tapping the tabItem again.
What other things am I possibly missing here?
| Here is a possible solution: inject a proxy binding around the `TabView` selection state and handle a repeated tap on the current tab before the bound value is set, as below.
Tested with Xcode 12.1 / iOS 14.1
```
struct ContentView: View {
@State private var selection = 0
var handler: Binding<Int> { Binding(
get: { self.selection },
set: {
if $0 == self.selection {
print("Reset here!!")
}
self.selection = $0
}
)}
var body: some View {
TabView(selection: handler) {
Text("Tab 1")
.tabItem {
Image(systemName: "star")
Text("One")
}
.tag(0)
Text("Tab 2")
.tabItem {
Image(systemName: "star.fill")
}
.tag(1)
}
}
}
```
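To actually reset the tab's content in the `print("Reset here!!")` branch, one option (an assumption, not part of the original answer) is to change an `.id()` on the tab's root view so SwiftUI rebuilds it, discarding its state:
```
import SwiftUI

struct ContentView: View {
    @State private var selection = 0
    @State private var tabOneID = UUID()   // changing this recreates tab 1's view

    var handler: Binding<Int> { Binding(
        get: { self.selection },
        set: {
            if $0 == self.selection && $0 == 0 {
                self.tabOneID = UUID()      // reset tab 1 when its item is tapped again
            }
            self.selection = $0
        }
    )}

    var body: some View {
        TabView(selection: handler) {
            Text("Tab 1")
                .id(tabOneID)
                .tabItem {
                    Image(systemName: "star")
                    Text("One")
                }
                .tag(0)
            Text("Tab 2")
                .tabItem {
                    Image(systemName: "star.fill")
                }
                .tag(1)
        }
    }
}
```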
|
format output data in pandas to\_html
I use pandas' `to_html` to generate an output file; when data are written to the file, they have many digits after the decimal point. The `to_html` `float_format` argument can limit the digits, but when I used `float_format` as below:
```
DataFormat.to_html(header=True,index=False,na_rep='NaN',float_format='%10.2f')
```
it raises an exception:
```
TypeError: 'str' object is not callable
```
How do I solve this problem?
| From the `to_html` docs:
```
float_format : one-parameter function, optional
formatter function to apply to columns' elements if they are floats
default None
```
You need to pass a function. For example:
```
>>> df = pd.DataFrame({"A": [1.0/3]})
>>> df
A
0 0.333333
>>> print df.to_html()
<table border="1" class="dataframe">
<tr>
<th>0</th>
<td> 0.333333</td>
</tr>
[...]
```
but
```
>>> print df.to_html(float_format=lambda x: '%10.2f' % x)
<table border="1" class="dataframe">
[...]
<tr>
<th>0</th>
<td> 0.33</td>
</tr>
[...]
```
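A Python 3 equivalent of the same idea (a sketch; the DataFrame is the one from above, with an f-string instead of `%` formatting):
```
import pandas as pd

df = pd.DataFrame({"A": [1.0 / 3]})
# same effect as lambda x: '%10.2f' % x
print(df.to_html(float_format=lambda x: f"{x:10.2f}"))
```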
|
How to generate a new MS access file programmatically
I have looked far and wide, in the deepest, darkest corners of the internet, but for the life of me I cannot find the correct way to create a NEW Access file and then use VB.NET to write data to the database.
The keywords here are NEW database, I don't want to open an existing file.
Is this even possible?
Thanks in advance!
| I have finally found the way, thanks to a co-worker of mine
>
> Neither ADO.NET nor ActiveX Data Object (ADO) provides the means to create Microsoft
> Access Database. However, we can create Access databases by using the Microsoft Jet OLE DB
> Provider and Microsoft ADO Ext. 2.7 for DDL and Security (ADOX) with the COM Interop
> layer. To do so, **select References from the Project Menu, choose the COM tab, and add a
> reference to Microsoft ADO Ext. 2.7 for DDL and Security**; then you can use this function.
>
>
>
When you have done this, use the following snippet to create a database
---
```
Public Class Form1
Private Sub btnLoad_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) _
Handles btnLoad.Click
CreateAccessDatabase("C:\test\testDB.mdb")
MsgBox("Database created")
End Sub
```
---
```
Public Function CreateAccessDatabase( ByVal DatabaseFullPath As String) As Boolean
Dim bAns As Boolean
Dim cat As New ADOX.Catalog()
Try
Dim sCreateString As String
sCreateString = _
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
DatabaseFullPath
cat.Create(sCreateString)
bAns = True
Catch Excep As System.Runtime.InteropServices.COMException
bAns = False
Finally
cat = Nothing
End Try
Return bAns
End Function
End Class
```
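If you need the newer `.accdb` format rather than `.mdb`, the Jet provider string is typically swapped for the ACE provider (an assumption here: the Access Database Engine has to be installed on the machine):
```
' Hypothetical variant of the connection string for .accdb files:
sCreateString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & DatabaseFullPath
```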
---
|
Django REST framework serializer without a model
I'm working on a couple of endpoints which aggregate data. One of the endpoints will, for example, return an array of objects, each corresponding to a day, with the number of comments, likes and photos that specific user posted. This object has a predefined/set schema, but we do not store it in the database, so it doesn't have a model.
Is there a way I can still use Django serializers for these objects without having a model?
| You can create a serializer that inherits from *serializers.Serializer* and pass your data as the first parameter like:
serializers.py
```
from rest_framework import serializers
class YourSerializer(serializers.Serializer):
"""Your data serializer, define your fields here."""
comments = serializers.IntegerField()
likes = serializers.IntegerField()
```
views.py
```
from rest_framework import views
from rest_framework.response import Response
from .serializers import YourSerializer
class YourView(views.APIView):
def get(self, request):
yourdata= [{"likes": 10, "comments": 0}, {"likes": 4, "comments": 23}]
results = YourSerializer(yourdata, many=True).data
return Response(results)
```
|
UIDatePicker in UIActionSheet on iPad
In the iPhone version of my app, I have a `UIDatePicker` in a `UIActionSheet`. It appears correctly. I am now setting up the iPad version of the app, and the same `UIActionSheet`, when viewed on the iPad, appears as a blank box.
Here is the code I am using:
```
UIDatePicker *datePickerView = [[UIDatePicker alloc] init];
datePickerView.datePickerMode = UIDatePickerModeDate;
self.dateActionSheet = [[UIActionSheet alloc] initWithTitle:@"Choose a Follow-up Date"
delegate:self cancelButtonTitle:nil
destructiveButtonTitle:nil otherButtonTitles:@"Done", nil];
[self.dateActionSheet showInView:self.view];
[self.dateActionSheet addSubview:datePickerView];
[self.dateActionSheet sendSubviewToBack:datePickerView];
[self.dateActionSheet setBounds:CGRectMake(0,0,320, 500)];
CGRect pickerRect = datePickerView.bounds;
pickerRect.origin.y = -95;
datePickerView.bounds = pickerRect;
```
| I ended up creating a separate segment of code for the iPad Popover:
```
//build our custom popover view
UIViewController* popoverContent = [[UIViewController alloc] init];
UIView* popoverView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 344)];
popoverView.backgroundColor = [UIColor whiteColor];
datePicker.frame = CGRectMake(0, 44, 320, 300);
[popoverView addSubview:toolbar];
[popoverView addSubview:datePicker];
popoverContent.view = popoverView;
//resize the popover view shown
//in the current view to the view's size
popoverContent.contentSizeForViewInPopover = CGSizeMake(320, 244);
//create a popover controller
UIPopoverController *popoverController = [[UIPopoverController alloc] initWithContentViewController:popoverContent];
//present the popover view non-modal with a
//refrence to the button pressed within the current view
[popoverController presentPopoverFromBarButtonItem:self.navigationItem.leftBarButtonItem
permittedArrowDirections:UIPopoverArrowDirectionAny
animated:YES];
//release the popover content
[popoverView release];
[popoverContent release];
```
|
How Can I Pipe the Java Console Output to File Without Java Web Start?
I am wanting to pipe the Java console output (generated by `System.out.println` and its ilk) to a file. I found an excellent solution [here](https://stackoverflow.com/a/626536/531762) to enable Java tracing, but this isn't working for me (no log file shows up in any location on Mac OS X or Windows). From what I can tell, this is because I'm using a plain Java app without Java web start. So how can I do this with Java code that does not use Java web start? Ideally, I would like a solution that does not require modifying code.
| You don't require Java Web Start to do any of this. Just set `System.out` to a `PrintStream` wrapping a `FileOutputStream`.
```
System.setOut(new PrintStream(new FileOutputStream(fileName)));
```
where `fileName` is the full path to the file you want to pipe to.
```
import java.io.FileOutputStream;
import java.io.PrintStream;

public static void main(String[] args) throws Exception {
    System.setOut(new PrintStream(new FileOutputStream("home.txt")));
    System.out.println("hello");
}
```
This will write `hello\n` to a file named `home.txt` in the current working directory.
If you can't modify code, on Windows 7, use [command redirection](http://pcsupport.about.com/od/commandlinereference/a/redirect-command-output-to-file.htm).
```
java YourMainClass > home.txt
```
If you need to run a jar, you can use the `-jar` option of the [java application launcher](http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html).
```
java -jar /your/path.jar > /output/file/path.txt
```
|
Android TV RowsFragment - Can't disable row dimming effect
I have a fragment extending RowsFragment and no matter what I try I cannot disable the dimming effect on the unselected rows (First Picture).
[Can't disable the dimming of unselected rows](https://i.stack.imgur.com/dJtRb.png)
Here is my code:
```
public class MainFragment extends RowsFragment {
private static final String TAG = MainFragment.class.getSimpleName();
private ArrayObjectAdapter mRowsAdapter;
private static final int GRID_ITEM_WIDTH = 300;
private static final int GRID_ITEM_HEIGHT = 200;
@Override
public void onActivityCreated(Bundle savedInstanceState) {
Log.i(TAG, "onActivityCreated");
super.onActivityCreated(savedInstanceState);
loadRows();
}
private void loadRows() {
mRowsAdapter = new ArrayObjectAdapter(new ListRowPresenter(FocusHighlight.ZOOM_FACTOR_LARGE, false));
HeaderItem cardPresenterHeader = new HeaderItem(1, "CardPresenter");
CardPresenter cardPresenter = new CardPresenter();
ArrayObjectAdapter cardRowAdapter = new ArrayObjectAdapter(cardPresenter);
for(int i=0; i<10; i++) {
Movie movie = new Movie();
movie.setTitle("title" + i);
movie.setStudio("studio" + i);
cardRowAdapter.add(movie);
}
mRowsAdapter.add(new ListRow(cardPresenterHeader, cardRowAdapter));
mRowsAdapter.add(new ListRow(cardPresenterHeader, cardRowAdapter));
setAdapter(mRowsAdapter);
}
```
No matter what changes or combinations I try (playing with the useFocusDimmer flag, trying to use BrowseFragment, etc.), I can't get the result I'm looking for.
The closest I got was changing to a VerticalGridFragment and Presenter, but this functionality is lacking and it only resembles what I'm trying to accomplish (Second Picture).
[Example of how I want it to look](https://i.stack.imgur.com/gh6om.jpg)
Thanks in advance,
| You must use [RowsFragment](https://developer.android.com/reference/android/support/v17/leanback/app/RowsFragment.html) together with a [RowPresenter](https://developer.android.com/reference/android/support/v17/leanback/widget/RowPresenter.html) subclass. In the RowPresenter subclass you can define your custom selection animation, or try calling [setSelectEffectEnabled](https://developer.android.com/reference/android/support/v17/leanback/widget/RowPresenter.html#setSelectEffectEnabled(boolean)).
Excerpt from the documentation:
>
> When a user scrolls through rows, a fragment will initiate animation
> and call setSelectLevel(Presenter.ViewHolder, float) with float value
> between 0 and 1. By default, the RowPresenter draws a dim overlay on
> top of the row view for views that are not selected. Subclasses may
> override this default effect by having isUsingDefaultSelectEffect()
> return false and overriding onSelectLevelChanged(ViewHolder) to apply
> a different selection effect.
>
>
> Call setSelectEffectEnabled(boolean) to enable/disable the select
> effect, This will not only enable/disable the default dim effect but
> also subclasses must respect this flag as well.
>
>
>
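Based on that quoted documentation, a minimal sketch for the code in the question could look like this (an assumption, not code from the original answer):
```
// In loadRows(), disable the default dim overlay on the presenter before building the adapter:
ListRowPresenter rowPresenter = new ListRowPresenter(FocusHighlight.ZOOM_FACTOR_LARGE, false);
rowPresenter.setSelectEffectEnabled(false); // no dimming of unselected rows
mRowsAdapter = new ArrayObjectAdapter(rowPresenter);
```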
|
Is there a window manager that allows tabs of multiple different programs in one window? (Like Windows 10 Sets)?
I installed the Windows 10 preview releases awhile back because I wanted to try the Sets feature that was being worked on. Sadly, this was removed from the beta releases, and has not returned.
Is there a Linux window manager that has this capability? (Using tabs of multiple different programs in one window.)
| This table of [Window Managers](https://en.wikipedia.org/wiki/Comparison_of_X_window_managers) shows Linux Window Managers with tabbed windows include:
[xmonad](https://xmonad.org/), [wmii](https://code.google.com/archive/p/wmii/), [Window Maker](https://www.windowmaker.org/), WMFS, PekWM, [Ion](https://tuomov.iki.fi/software/), [i3](https://i3wm.org/), [FVWM](http://www.fvwm.org), [Fluxbox](http://fluxbox.org/), and [Compiz](https://launchpad.net/compiz).
Some Desktop Environments are locked in to a specific Window Manager (e.g., Cinnamon), but [GNOME](https://www.gnome.org/) and [KDE](https://kde.org/) are not.
|
When and why to map a lambda function to a list
I am working through a preparatory course for a Data Science bootcamp and it goes over the `lambda` keyword and `map` and `filter` functions fairly early on in the course. It gives you syntax and how to use it, but I am looking for why and when for context. Here is a sample of their solutions:
```
def error_line_traces(x_values, y_values, m, b):
return list(map(lambda x_value: error_line_trace(x_values, y_values, m, b, x_value), x_values))
```
I feel as if every time I go over their solutions to the labs I've turned a single `return` line solution into a multi-part function. Is this style or is it something that I should be doing?
| I'm not aware of any situations where it makes sense to use a map of a lambda, since it's shorter and clearer to use a [generator expression](https://docs.python.org/3/glossary.html#term-generator-expression) instead. And a list of a map of a lambda is even worse, because it could be a [list comprehension](https://docs.python.org/3/glossary.html#term-list-comprehension):
```
def error_line_traces(x_values, y_values, m, b):
return [error_line_trace(x_values, y_values, m, b, x) for x in x_values]
```
Look how much shorter and clearer that is!
A filter of a lambda can also be rewritten as a comprehension. For example:
```
list(filter(lambda x: x>5, range(10)))
[x for x in range(10) if x>5]
```
---
That said, there are good uses for `lambda`, `map`, and `filter`, but usually not in combination. Even `list(map(...))` can be OK depending on the context, for example converting a list of strings to a list of integers:
```
[int(x) for x in list_of_strings]
list(map(int, list_of_strings))
```
These are about as clear and concise, so really the only thing to consider is whether people reading your code will be familiar with `map`, and whether you want to give a meaningful name to the elements of the iterable (here `x`, which, admittedly, is not a great example).
Once you get past the bootcamp, keep in mind that `map` and `filter` are [iterators](https://docs.python.org/3/glossary.html#term-iterator) and do [lazy evaluation](https://en.wikipedia.org/wiki/Lazy_evaluation), so if you're only looping over them and not building a list, they're often preferable for performance reasons, though a generator will probably perform just as well.
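For example, a small sketch of that laziness:
```
nums = map(int, ["1", "2", "3"])  # nothing is converted yet
total = sum(nums)                 # values are produced one at a time as sum() consumes them
```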
|
why isn't monitor listed under /dev in linux?
If /dev is supposed to list all the devices that are connected, like USB, HDD, webcam, how come on my Ubuntu 15 VM I don't see a monitor? I am running the desktop edition, so there should be a monitor.
Or maybe it is named something different?
| Device files on Unix systems in general are just one way for user programs to access device drivers; there isn't a one-to-one mapping from devices files to physical hardware, and not all hardware has a device file (or even a device driver). The kernel itself doesn't use device files to interact with hardware.
As pointed out by [lcd047](https://unix.stackexchange.com/users/111878/lcd047), network cards don't have device files at all. Programs interact with the network using APIs, *e.g.* the [BSD socket API](http://en.wikipedia.org/wiki/Berkeley_sockets); even `ethtool` uses a socket and [`ioctl()`](http://en.wikipedia.org/wiki/Ioctl) to manipulate the network interface.
So when determining whether your monitor has a device file, it's useful to think of the ways programs interact with it. There aren't many tools which interact directly with a monitor... Programs display information on a monitor *via* a graphics card, and that does have device files: `/dev/dri/*`, `/dev/fb*` etc. But that's not the monitor. The only programs I know of which interact with a monitor directly are backlight control programs and `ddccontrol`; the former generally use ACPI or laptop-specific devices (so the monitor's backlight is just a part of the system's power-usage model), and `ddccontrol` uses the [I²C](http://en.wikipedia.org/wiki/I%C2%B2C) bus whose devices appear as `/dev/i2c-*` once the `i2c-dev` module is loaded.
|
JSON Schema - Allow only specific enum values for a property and reject the rest
Say I have the following JSON that I'd like validated.
```
[
{
"UpStatus":"Closed"
},
{
"UpStatus":"Open"
}
]
```
I want the JSON to pass validation only if there is at least one 'UpStatus' in the array set to either 'Open' or 'Locked'.
If no 'UpStatus' in the array is set to 'Open' or 'Locked', and it is instead set to something arbitrary, say "Closed", I want the validation to fail.
I tinkered around with **anyOf** and came up with the following schema.
```
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "array",
"items": [
{
"type": "object",
"properties": {
"UpStatus": {
"type": "string"
}
},
"minItems": 1,
"anyOf": [
{
"properties": {
"UpStatus": {
"const": "Open"
}
},
"required": [
"UpStatus"
]
},
{
"properties": {
"UpStatus": {
"const": "Locked"
}
},
"required": [
"UpStatus"
]
}
]
}
]
}
```
The above does not work correctly as it allows the following to pass which I thought it should fail to validate.
```
[
{
"UpStatus": "Closed"
},
{
"UpStatus": "Closed"
}
]
```
I played with the json schema for a long time and looked at examples and read some docs but could not get it to work. Any help is appreciated. Thank you.
| In your schema above, you put the "minItems" keyword inside "items", which does nothing -- it needs to be adjacent to "items". But using "items" also means that *all* items must match, not just one.
Instead, use "contains":
```
{
  "type": "array",
  "contains": {
    "type": "object",
    "required": ["UpStatus"],
    "properties": {
      "UpStatus": {
        "enum": ["Open", "Locked"]
      }
    }
  }
}
```
Translation: the data must be an array, where at least one element must be an object, which has the property "UpStatus" with value either "Open" or "Locked".
You may want all items in the array to conform to something specific, in which case you use "items" to specify that. The difference between "items" and "contains" is that the "items" schema must match *all* items, whereas the "contains" schema only has to match one.
**HOWEVER**, "contains" is not available in the draft 4 version of the spec. Is there any chance you can upgrade? [There is a list of implementations in various languages here.](https://json-schema.org/implementations.html) Alternatively, you can simulate the "contains" keyword with `"not": { "items": { "not": { ... schema ... } } }` ([courtesy Jason Desrosiers](https://github.com/json-schema-org/understanding-json-schema/issues/146#issuecomment-818003880)).
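Applied to this case, that draft-4 workaround would look roughly like this (a sketch, not validated here):
```
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "array",
  "not": {
    "items": {
      "not": {
        "type": "object",
        "required": ["UpStatus"],
        "properties": {
          "UpStatus": { "enum": ["Open", "Locked"] }
        }
      }
    }
  }
}
```
Reading it inside out: an item matches the innermost schema if its "UpStatus" is "Open" or "Locked"; the inner "not" under "items" asserts that every item fails that; the outer "not" then requires that this is not the case, i.e. at least one item matches.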
---
addendum: When I evaluate your schema and data, it does not pass, but rather produces these errors, so perhaps your implementation is buggy (or you mispasted something):
```
{
"errors" : [
{
"error" : "value does not match",
"instanceLocation" : "/0/UpStatus",
"keywordLocation" : "/items/0/anyOf/0/properties/UpStatus/const"
},
{
"error" : "not all properties are valid",
"instanceLocation" : "/0",
"keywordLocation" : "/items/0/anyOf/0/properties"
},
{
"error" : "value does not match",
"instanceLocation" : "/0/UpStatus",
"keywordLocation" : "/items/0/anyOf/1/properties/UpStatus/const"
},
{
"error" : "not all properties are valid",
"instanceLocation" : "/0",
"keywordLocation" : "/items/0/anyOf/1/properties"
},
{
"error" : "no subschemas are valid",
"instanceLocation" : "/0",
"keywordLocation" : "/items/0/anyOf"
},
{
"error" : "not all items are valid",
"instanceLocation" : "",
"keywordLocation" : "/items"
}
],
"valid" : false
}
```
|
bisect everything, from initial commit
Say I have a small project with a very fast test script, and I just want to bisect everything, from the initial commit to the curret commit. How can I do that?
To clarify, I don't want to waste time identifying a commit that is good and a commit that is bad, so I'm looking for a quick way to mark the latest commit as bad, and the initial commit as good.
|
```
git bisect start
git bisect bad
git bisect good `git rev-list --max-parents=0 HEAD`
git bisect run ./test.sh
```
Or incorporate these commands into an alias, e.g.:
```
bisect-all = !git bisect start && git bisect bad &&\
git bisect good `git rev-list --max-parents=0 --first-parent HEAD`
```
And then just use `git bisect-all`, `git bisect run ./test.sh`.
Creating an alias to handle the whole process is slightly more complicated:
```
quick-bisect = !sh -c 'git bisect start && git bisect bad &&\
git bisect good `git rev-list --max-parents=0 --first-parent HEAD` &&\
git bisect run "$@" && git bisect reset' -
```
But with that, you can simply run `git quick-bisect ./test.sh`.
---
If you're using a version of git older than 1.7.4.2, you won't have the `--max-parents` option, so you will need to use something like `git rev-list HEAD | tail -n 1` instead.
|
Does double less than minus "<<-" sign mean anything in ruby?
I am trying to get myself familiar with Ruby syntax and coding styles (I am a newbie). I came across some code that uses `<<-`; what does this mean in Ruby? The code is:
```
def expectation_message(expectation)
<<-FE
#{expectation.message}
#{expectation.stack}
FE
end
```
This is just a section of the whole code. Any help would be appreciated.
| There are various ways to define a multi-line string in Ruby. This is one of them.
```
> name = 'John'
> city = 'Ny'
> multiline_string = <<-EOS
> This is the first line
> My name is #{name}.
> My city is #{city} city.
> EOS
=> "This is the first line\nMy name is John.\nMy city is Ny city.\n"
>
```
The `EOS` in the above example is just a convention; you can use any identifier you like, as long as the closing delimiter matches it. Normally `EOS` means `End Of String`.
Moreover, even the `-` (dash) is not needed; however, it allows you to indent the "end of here doc" delimiter. See the following example to understand this.
```
2.2.1 :014 > <<EOF
2.2.1 :015"> My first line without dash
2.2.1 :016"> EOF
2.2.1 :017"> EOF
=> "My first line without dash\n EOF\n"
2.2.1 :018 > <<-EOF
2.2.1 :019"> My first line with dash. This even supports spaces before the ending delimiter.
2.2.1 :020"> EOF
=> "My first line with dash. This even supports spaces before the ending delimiter.\n"
2.2.1 :021 >
```
for more info see
<https://cbabhusal.wordpress.com/2015/10/06/ruby-multiline-string-definition/>
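Since Ruby 2.3 there is also the "squiggly" heredoc `<<~`, which strips the leading indentation of the content lines as well, not just of the closing delimiter (a small sketch reusing the variables from above):
```
text = <<~EOS
  My name is #{name}.
  My city is #{city} city.
EOS
# => "My name is John.\nMy city is Ny city.\n"
```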
|
Why is stack size in C# exactly 1 MB?
Today's PCs have a large amount of physical RAM but still, the stack size of C# is only 1 MB for 32-bit processes and 4 MB for 64-bit processes ([Stack capacity in C#](https://stackoverflow.com/questions/823724/stack-capacity-in-c-sharp)).
Why is the stack size in the CLR still so limited?
And why is it exactly 1 MB (4 MB) (and not 2 MB or 512 KB)? Why was it decided to use these amounts?
I am interested in **considerations and reasons behind that decision**.
| ![enter image description here](https://i.stack.imgur.com/yB8up.jpg)
You are looking at the guy that made that choice. David Cutler and his team selected one megabyte as the default stack size. Nothing to do with .NET or C#, this was nailed down when they created Windows NT. One megabyte is what it picks when the EXE header of a program or the CreateThread() winapi call doesn't specify the stack size explicitly. Which is the normal way, almost any programmer leaves it up to the OS to pick the size.
That choice probably pre-dates the Windows NT design, history is way too murky about this. Would be nice if Cutler would write a book about it, but he's never been a writer. He's been extraordinarily influential on the way computers work. His first OS design was RSX-11M, a 16-bit operating system for DEC computers (Digital Equipment Corporation). It heavily influenced Gary Kildall's CP/M, the first decent OS for 8-bit microprocessors. Which heavily influenced MS-DOS.
His next design was VMS, an operating system for 32-bit processors with virtual memory support. Very successful. His next one was cancelled by DEC around the time the company started disintegrating, not being able to compete with cheap PC hardware. Cue Microsoft, they made him an offer he could not refuse. Many of his co-workers joined too. They worked on VMS v2, better known as Windows NT. DEC got upset about it, money changed hands to settle it. Whether VMS already picked one megabyte is something I don't know, I only know RSX-11 well enough. It isn't unlikely.
Enough history. One megabyte is a **lot**, a real thread rarely consumes more than a couple of handfuls of kilobytes. So a megabyte is actually rather wasteful. It is however the kind of waste you can afford on a demand-paged virtual memory operating system, that megabyte is just *virtual memory*. Just numbers to the processor, one each for every 4096 bytes. You never actually use the physical memory, the RAM in the machine, until you actually address it.
It is extra excessive in a .NET program because the one megabyte size was originally picked to accommodate native programs. Which tend to create large stack frames, storing strings and buffers (arrays) on the stack as well. Infamous for being a malware attack vector, a buffer overflow can manipulate the program with data. Not the way .NET programs work, strings and arrays are allocated on the GC heap and indexing is checked. The only way to allocate space on the stack with C# is with the unsafe *stackalloc* keyword.
The only non-trivial usage of the stack in .NET is by the jitter. It uses the stack of your thread to just-in-time compile MSIL to machine code. I've never seen or checked how much space it requires, it rather depends on the nature of the code and whether or not the optimizer is enabled, but a couple of tens of kilobytes is a rough guess. Which is otherwise how this website got its name, a stack overflow in a .NET program is quite fatal. There isn't enough space left (less than 3 kilobytes) to still reliably JIT any code that tries to catch the exception. Kaboom to desktop is the only option.
Last but not least, a .NET program does something pretty unproductive with the stack. The CLR will *commit* the stack of a thread. That's an expensive word that means that it doesn't just reserve the size of the stack, it also makes sure that space is reserved in the operating system's paging file so the stack can always be swapped out when necessary. Failing to commit is a fatal error and terminates a program unconditionally. That only happens on a machine with very little RAM that runs entirely too many processes, such a machine will have turned to molasses before programs start dying. A possible problem 15+ years ago, not today. Programmers that tune their program to act like an F1 race-car use the [`<disableCommitThreadStack>`](https://msdn.microsoft.com/en-us/library/bb882564%28v=vs.110%29.aspx) element in their .config file.
Fwiw, Cutler didn't stop designing operating systems. That photo was made while he worked on Azure.
---
Update, I noticed that .NET no longer commits the stack. Not exactly sure when or why this happened, it's been too long since I checked. I'm guessing this design change happened somewhere around .NET 4.5. Pretty sensible change.
|
How to hide password in the nodejs console?
I want to hide the password input. I have seen many answers on Stack Overflow, but I can't verify the value if I press backspace; the condition returns false.
I tried several solutions that override the function, but I got an issue with the buffer: if I press backspace, I get the invisible character `\b`.
If I press "A", backspace, "B", my buffer contains "\u0041\u0008\u0042" (toString() = 'A\bB') and not "B".
I have :
```
var readline = require('readline');
var rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.question("password : ", function(password) {
console.log("Your password : " + password);
});
```
| Override `_writeToOutput` of the application's readline interface: <https://github.com/nodejs/node/blob/v9.5.0/lib/readline.js#L291>
To hide your password input, you can use :
## FIRST SOLUTION : "password : [=-]"
This solution has animation when you press a touch :
```
password : [-=]
password : [=-]
```
The code :
```
var readline = require('readline');
var rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.stdoutMuted = true;
rl.query = "Password : ";
rl.question(rl.query, function(password) {
console.log('\nPassword is ' + password);
rl.close();
});
rl._writeToOutput = function _writeToOutput(stringToWrite) {
if (rl.stdoutMuted)
rl.output.write("\x1B[2K\x1B[200D"+rl.query+"["+((rl.line.length%2==1)?"=-":"-=")+"]");
else
rl.output.write(stringToWrite);
};
```
The sequence "\x1B[2K\x1B[200D" uses two escape sequences:
- ***Esc* [2K :** clear the entire line.
- ***Esc* [200D :** move the cursor 200 columns to the left, back to the start of the line.
To learn more, read this : <http://ascii-table.com/ansi-escape-sequences-vt-100.php>
## SECOND SOLUTION : "password : \*\*\*\*"
```
var readline = require('readline');
var rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.stdoutMuted = true;
rl.question('Password: ', function(password) {
console.log('\nPassword is ' + password);
rl.close();
});
rl._writeToOutput = function _writeToOutput(stringToWrite) {
if (rl.stdoutMuted)
rl.output.write("*");
else
rl.output.write(stringToWrite);
};
```
## You can clear the history with:
```
rl.history = rl.history.slice(1);
```
|
Spatial line start and end point in R
I am attempting to use the `sp` package to access the start and end points of a linestring, similar to what `ST_StartPoint` and `ST_EndPoint` would produce using `psql`.
No matter how I try to access the line, I get errors or NULL value:
```
> onetrip@lines[[1]][1]
Error in onetrip@lines[[1]][1] : object of type 'S4' is not subsettable
> onetrip@lines@Lines@coords
Error: trying to get slot "Lines" from an object of a basic class ("list") with no slots
> onetrip@lines$Lines
NULL
```
The only solution that works is verbose and requires conversion to `SpatialLines`, and I can only easily get the first point:
```
test = as(onetrip, "SpatialLines")@lines[[1]]
> test@Lines[[1]]@coords[1,]
[1] -122.42258 37.79494
```
Both the `str()` below and a simple `plot(onetrip)` show that my dataframe is not empty.
What is the workaround here - how would one return the start and endpoints of a linestring in `sp`?
I have subset the first record of a larger `SpatialLinesDataFrame`:
```
> str(onetrip)
Formal class 'SpatialLinesDataFrame' [package "sp"] with 4 slots
..@ data :'data.frame': 1 obs. of 6 variables:
.. ..$ start_time : Factor w/ 23272 levels "2018/02/01 00:12:40",..: 23160
.. ..$ finish_time: Factor w/ 23288 levels "1969/12/31 17:00:23",..: 23288
.. ..$ distance : num 2.74
.. ..$ duration : int 40196
.. ..$ route_id : int 5844736
.. ..$ vehicle_id : int 17972
..@ lines :List of 1
.. ..$ :Formal class 'Lines' [package "sp"] with 2 slots
.. .. .. ..@ Lines:List of 1
.. .. .. .. ..$ :Formal class 'Line' [package "sp"] with 1 slot
.. .. .. .. .. .. ..@ coords: num [1:3114, 1:2] -122 -122 -122 -122 -122 ...
.. .. .. ..@ ID : chr "0"
..@ bbox : num [1:2, 1:2] -122.4 37.8 -122.4 37.8
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "x" "y"
.. .. ..$ : chr [1:2] "min" "max"
..@ proj4string:Formal class 'CRS' [package "sp"] with 1 slot
.. .. ..@ projargs: chr "+proj=longlat +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +no_defs"
```
| Since you tagged the question with sf as well, I'll provide a solution in sf. Note you can transform your sp object to sf using
```
library(sf)
st_as_sf(sp_obj)
```
Create linestring
```
line <- st_as_sfc(c("LINESTRING(0 0 , 0.5 1 , 1 1 , 1 0.3)")) %>%
st_sf(ID = "poly1")
```
Convert each vertex to point
```
pt <- st_cast(line, "POINT")
```
Start and end are simply the first and last row of the data.frame
```
start <- pt[1,]
end <- pt[nrow(pt),]
```
plot - green is start point, red is end point
```
library(ggplot2)
ggplot() +
geom_sf(data = line) +
geom_sf(data = start, color = 'green') +
geom_sf(data = end, color = 'red') +
coord_sf(datum = NULL)
```
[![enter image description here](https://i.stack.imgur.com/Zg7VZ.png)](https://i.stack.imgur.com/Zg7VZ.png)
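If you want to stay in sp, the start and end points are just the first and last rows of the coordinate matrix visible in your `str()` output (a sketch using the question's object; it assumes a single Line per feature, as shown there):
```
coords <- onetrip@lines[[1]]@Lines[[1]]@coords  # vertex matrix of the first Line
start_pt <- coords[1, ]
end_pt   <- coords[nrow(coords), ]
```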
|
Learning C++, looking for a clarification on this project from a book
The goal here was to create a program that found and output all the prime numbers between 1 and 100. I've noticed I have a tendency to complicate things and create inefficient code, and I'm pretty sure I did that here as well. The initial code is mine, and everything that I've put between the comment tags is the code given in the book as a solution.
```
// Find all prime numbers between 1 and 100
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
int counter; // loop counter
int count_two; // counter for second loop
int val; // equals the number of count, used in division to check for primes
bool check;
check = true;
for(counter = 1; counter <= 100; counter++){
val = counter;
for(count_two = 2; count_two <= 9; count_two++){
if((val % count_two) == !(check)){
cout << val << " is a prime number.\n";
}
}
}
return 0;
}
// program didn't work properly because of needless complication; all that needs to be checked for is whether a number is divisible by two
/*
*********correct code***********
#include <iostream>
using namespace std;
int main()
{
int i, j;
bool isprime;
for(i=1; i < 100; i++) {
isprime = true;
// see if the number is evenly divisible
for(j=2; j <= i/2; j++)
// if it is, then it is not prime
if((i%j) == 0) isprime = false;
if(isprime) cout << i << " is prime.\n";
}
return 0;
}
********************************
*/
```
From what I can gather, I was on a reasonably correct path here. I think I complicated things with the double loop and overuse of variables, which probably led to the program working incorrectly -- I can post the output if need be, but it's certainly wrong.
My question is basically this: where exactly did I go wrong? I don't need somebody to redo this because I'd like to correct the code myself, but I've looked at this for a while and can't quite figure out why mine isn't working. Also, since I'm brand new to this, any input on syntax/readability would be helpful as well. Thanks in advance.
| As it is, your code says a number is prime if it is *divisible* by *any* of the numbers from 2 to 9. You'll want a `bool` variable somewhere to require that it's *all* and not *any*, and you'll also need to change this line:
```
if((val % count_two) == !(check)){
```
Since `check` = `true`, this resolves as follows:
```
if ((val % count_two) == !true){
```
and
```
if ((val % count_two) == false){
```
and
```
if ((val % count_two) == 0){
```
(Notice how the value `false` is converted to `0`. Some languages would give a compile error here. C++ converts it into an integer).
This in fact does the opposite of what you want. Instead, write this, which is correct and clearer:
```
if (val % count_two != 0) {
```
Finally, one thing you can do for readability (and convenience!) is to write `i`, `j`, and `k` instead of `counter`, `count_two`, and `count_three`. Those three letters are universally recognized by programmers as loop counters.
|
Devise not redirecting where I would expect
Just migrated from Authlogic to Devise, and having a redirect issue.
I have the following:
```
root :to => "welcome#index"
authenticated :user do
root :to => "dashboard#show"
end
```
However, after logging in, I end up on welcome#index, and not on dashboard#show as I would expect.
The [devise](https://github.com/plataformatec/devise) documentation says:
>
> After signing in a user, confirming the account or updating the
> password, Devise will look for a scoped root path to redirect.
> Example: For a :user resource, it will use user\_root\_path if it
> exists, otherwise default root\_path will be used.
>
>
>
Which only reinforces my expectation.
|
```
def after_sign_in_path_for(resource_or_scope)
new_order_path
end
```
Define this in your application controller. This will route your user to a particular path after sign-in.
Additional tidbit:
If you want to route the user to a particular page after confirming through email, use this in your application controller:
```
def after_confirmation_path_for(resource_or_scope)
end
```
---
Try this:
```
resources :dashboard
authenticated :user do
root :to => "dashboard#show"
end
```
make sure the default
```
root :to => "path"
```
comes after the code above, not before it.
|
How to implement a function with array
```
<?php
class FileOwners
{
public static function groupByOwners($files)
{
return NULL;
}
}
$files = array
(
"Input.txt" => "Randy",
"Code.py" => "Stan",
"Output.txt" => "Randy"
);
var_dump(FileOwners::groupByOwners($files));
```
**Implement a groupByOwners function :**
- Accepts an associative array containing the file owner name for each file name.
- Returns an associative array containing an array of file names for each owner name, in any order.
**For example**
Given the input:
```
["Input.txt" => "Randy", "Code.py" => "Stan", "Output.txt" => "Randy"]
```
groupByOwners returns:
```
["Randy" => ["Input.txt", "Output.txt"], "Stan" => ["Code.py"]]
```
|
```
<?php
class FileOwners
{
public static function groupByOwners($files)
{
$result=array();
foreach($files as $key=>$value)
{
$result[$value][]=$key;
}
return $result;
}
}
$files = array
(
"Input.txt" => "Randy",
"Code.py" => "Stan",
"Output.txt" => "Randy"
);
print_r(FileOwners::groupByOwners($files));
```
**Output:**
```
Array
(
[Randy] => Array
(
[0] => Input.txt
[1] => Output.txt
)
[Stan] => Array
(
[0] => Code.py
)
)
```
|
How can PyPy be faster than CPython
I have read [PyPy -- How can it possibly beat CPython?](https://stackoverflow.com/questions/2591879/pypy-how-can-it-possibly-beat-cpython) and countless other things, but I am not able to understand how something written in Python can be faster than Python itself.
The only way I can think of is that PyPy somehow bypasses C and directly compiles into assembly language instructions. If that is the case, then it is fine.
Can someone explain to me how PyPy works? I need a simple answer.
I love python and want to start contributing. PyPy looks like an awesome place to start irrespective of whether they pull my code or not. But I am not able to understand from the brief research I have done.
| The easiest way to understand PyPy is to forget that it's implemented in Python.
It actually isn't, anyway, it's implemented in RPython. RPython is runnable with a Python interpreter, but Python code is **not** able to be compiled by the RPython compiler (the PyPy translation framework). RPython is a subset of Python, but the parts that are "left out" are substantive enough that programming in RPython is *very* different from programming normally in Python.
So since Python code can't be treated as RPython code, and idiomatic RPython programs "look and feel" very different to idiomatic Python programs, let's ignore the connection between them altogether, and consider a made-up example.
Pretend I've developed a new language, Frobble, with a compiler. And I have written a Python interpreter in Frobble. I claim that my "FrobblePython" interpreter is often substantially faster than the CPython interpreter.
Does this strike you as weird or impossible? Of course not. A new Python interpreter can be either faster or slower than the CPython interpreter (or more likely, faster at some things and slower at others, by varying margins). Whether it's faster or not will depend upon the implementation of FrobblePython, as well as the performance characteristics of code compiled by my Frobble compiler.
That's **exactly** how you should think about the PyPy interpreter. The fact that the language used to implement it, RPython, happens to be able to be interpreted by a Python interpreter (with the same external results as compiling the RPython program and running it) is *completely irrelevant* to understanding how fast it is. All that matters is the implementation of the PyPy interpreter, and the performance characteristics of code compiled by the RPython compiler (such as the fact that the RPython compiler can automatically add certain kinds of JITing capability to the programs it compiles).
|
How to use row level security in Superset UI
I am using the newest version of Superset, and it has the row-level security option in the UI. Can anyone give a little walkthrough of how I can implement and use it in the UI? There is hardly any documentation on it.
| Row-level security essentially works like a WHERE clause. Let's assume that we build a dashboard using a table called `tbl_org` that looks like:
```
manager_name   department   agent
Jim            Sales        Agent 1
Jim            Sales        Agent 2
Jack           HR           Agent 3
Jack           HR           Agent 4
```
Say we need to show Jim, when he logs in, only the rows/records on the dashboard where he is the manager, and the same for Jack. This is where RLS is useful.
The Superset UI provides three fields that need to be filled.
1. **Table**: The table on which we want to apply RLS. In this case would be `tbl_org`
2. **Roles**: The role or roles to which you want this rule to apply to. Let's say we use the Gamma role.
3. **Clause**: The SQL condition. The condition provided here gets applied to the WHERE clause when the query is executed to fetch data for the dashboard. For example, if you use the condition `manager_name = 'Jim'`, this will result in the query: `SELECT * FROM tbl_org WHERE manager_name = 'Jim'`
If you want to dynamically filter the table based on the user who logs in, you can use a Jinja template:
```
manager_name = '{{current_username()}}'
```
For this, the usernames created in Superset need to match the `manager_name` column in `tbl_org`
|
Changing from WPF User Control to Window?
I've been working on a command-line application, and have recently decided to add a WPF window to the application. I added this as a UserControl; however, I noticed I can't call this class using ShowDialog() from my main code.
I've tried changing the base class from a UserControl to Window, however an error occurs:
```
public partial class UserControl1 : Window
{
public UserControl1()
{
InitializeComponent();
    }
}
```
>
> Error 1 Partial declarations of
> 'ExcelExample.UserControl1' must not
> specify different base
> classesExcelExample
>
>
>
I've added all the references found in my other WPF application to no avail. Help!
| In order to change the base class it is not sufficient to change it in code only. You must also change the root tag and any nested elements in accompanying XAML file. For example, you have something like:
```
<UserControl x:Class="Your.Namespace.UserControl1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<UserControl.Resources>
</UserControl.Resources>
</UserControl>
```
You must change it to something like:
```
<Window x:Class="Your.Namespace.UserControl1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Window.Resources>
</Window.Resources>
</Window>
```
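Once both the code-behind and the XAML derive from `Window`, the class can be shown modally from the calling code (a sketch; the class name is the one from the question, and the calling thread must be STA, e.g. `[STAThread]` on `Main`):
```
var dialog = new UserControl1();
dialog.ShowDialog();   // blocks until the window is closed
```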
|
Horizontally center first item of RecyclerView
I want to use a `RecyclerView` to emulate the behavior of a `MultiViewPager`, in particular I'd like to have the selected item at the center of the screen, **including the first and the last element**.
As you can see in this image, the first item is centered and this would be my expected result.
[![Expected result](https://i.stack.imgur.com/HsATz.gif)](https://i.stack.imgur.com/HsATz.gif)
What I did was to setup a `RecyclerView` with an horizontal `LinearLayoutManager` and a `LinearSnapHelper`. The problem with this solution is that the first and the last item will never be horizontally centered as selection. Should I switch my code so that it uses a `MultiViewPager` or is it possible to achieve a similar result taking advantage of a `RecyclerView`?
| You can implement this with a `RecyclerView.ItemDecoration` in [`getItemOffsets()`](https://developer.android.com/reference/android/support/v7/widget/RecyclerView.ItemDecoration.html#getItemOffsets(android.graphics.Rect,%20android.view.View,%20android.support.v7.widget.RecyclerView,%20android.support.v7.widget.RecyclerView.State)), to offset the first and last item appropriately.
>
> Retrieve any offsets for the given item. Each field of `outRect` specifies the number of pixels that the item view should be inset by, similar to padding or margin. The default implementation sets the bounds of outRect to 0 and returns.
>
>
> If you need to access Adapter for additional data, you can call [`getChildAdapterPosition(View)`](https://developer.android.com/reference/android/support/v7/widget/RecyclerView.html#getChildAdapterPosition(android.view.View)) to get the adapter position of the View.
>
>
>
You might need the measured size of the item and of the `RecyclerView` as well, but this information is available to be used anyhow.
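A rough sketch of such a decoration (it assumes a horizontal `LinearLayoutManager` and a known fixed item width; the variable names are illustrative):
```
recyclerView.addItemDecoration(new RecyclerView.ItemDecoration() {
    @Override
    public void getItemOffsets(Rect outRect, View view, RecyclerView parent, RecyclerView.State state) {
        int position = parent.getChildAdapterPosition(view);
        int padding = (parent.getWidth() - itemWidthPx) / 2; // itemWidthPx: assumed fixed item width

        if (position == 0) {
            outRect.left = padding;   // lets the first item rest in the horizontal center
        }
        if (position == state.getItemCount() - 1) {
            outRect.right = padding;  // lets the last item rest in the horizontal center
        }
    }
});
```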
|
How does WebAssembly <-> JavaScript memory interaction work with multiple Typed Arrays?
I've got a simple c function.
```
void fill(float *a, float *b)
{
a[0] = 1;
b[0] = 2;
}
int main()
{
float a[1];
float b[1];
fill(a, b);
printf("%f\n", a[0]);
printf("%f\n", b[0]);
return 0;
}
```
That gives me
```
1.000000
2.000000
```
Now I'm trying to do the same but from JavaScript via WebAssembly.
```
var wasmModule = new WebAssembly.Module(wasmCode);
var wasmInstance = new WebAssembly.Instance(wasmModule, wasmImports);
const a = new Float32Array(wasmInstance.exports.memory.buffer, 0, 1)
const b = new Float32Array(wasmInstance.exports.memory.buffer, 4, 1)
wasmInstance.exports.fill(a, b)
log(a)
log(b)
```
Here is the wasm fiddle <https://wasdk.github.io/WasmFiddle/?19x523>
This time `a` is `[2]` and `b` is `[0]`. I think I'm doing something wrong with the memory. I assume both `a` and `b` point to the beginning of the memory; that's why `a` is first `[1]` and immediately afterwards `[2]`. I thought the offsets from `new Float32Array(wasmInstance.exports.memory.buffer, 4, 1)`, where the offset is `4`, were somehow translated to WebAssembly.
How can I achieve that `a` and `b` actually use different memory? Thank you. I'm really stuck.
| There is a problem with this exported function call:
>
> wasmInstance.exports.fill(a, b)
>
>
>
`a` and `b` are JS `Float32Array` objects. **Never assume any JS objects will be translated to any C data types automagically.** Although a JS TypedArray behaves similarly to a C array, a TypedArray is still a JS object that is basically key-value storage; how would C access a JS object's fields? C has no idea how to deal with a JS object.
## WebAssembly Types
Okay, let's look at it more closely in a lower level in WebAssembly. Here's the compiled result of `void fill(float *a, float *b)`:
```
(func $fill (; 0 ;) (param $0 i32) (param $1 i32)
(i32.store
(get_local $0)
(i32.const 1065353216)
)
(i32.store
(get_local $1)
(i32.const 1073741824)
)
)
```
I won't go over details, but at least it is easy to figure out this function `$fill` needs two parameters of `i32` type: `(param $0 i32) (param $1 i32)`. **So `fill()` expects numbers, not TypedArrays, as parameters**. WebAssembly defines [the following types](https://webassembly.github.io/spec/core/syntax/types.html) as function parameter types and return types: `i32`, `i64`, `f32`, `f64`, basically 32/64-bit integers/floats. There are no other types like a JS key-value store, not even array types.
Therefore, **whatever language you use on the Wasm side, you are not supposed to pass any JS types other than numbers to functions under `wasmInstance.exports` directly.** Many languages like Golang, Rust, and Emscripten C++ (not C) provide interfaces for seamless type translation by wrapping around exported functions on the JS side and by hacking around those number types and Wasm memory addresses (so they need a well-defined ABI). However, you still must pass only number types if you access the exported functions directly through [`WebAssembly.Instance.exports`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Instance/exports).
## Accessing Arrays
So what integer value do you need to pass to `fill()`? Well, I think you are already close to the answer in the question, as you correctly set offsets for the arrays. You need to pass the values of C pointers as integers. **In the Wasm linear memory, a C pointer is an offset address into the Wasm memory**. So you need to slightly change the code like this:
```
var wasmModule = new WebAssembly.Module(wasmCode);
var wasmInstance = new WebAssembly.Instance(wasmModule, wasmImports);
const ptrA = 0; // Yeah it's the same value as NULL, I'd avoid using zero...
const ptrB = 4;
const a = new Float32Array(wasmInstance.exports.memory.buffer, ptrA, 1)
const b = new Float32Array(wasmInstance.exports.memory.buffer, ptrB, 1)
wasmInstance.exports.fill(ptrA, ptrB)
log(a)
log(b)
```
Now you will get the value you want ;)
Related: [Using emscripten how to get C++ uint8\_t array to JS Blob or UInt8Array](https://stackoverflow.com/questions/53602955/using-emscripten-how-to-get-c-uint8-t-array-to-js-blob-or-uint8array/53605865#53605865)
|
why does docker have docker volumes and volume containers
Why does Docker have Docker volumes and volume containers? What is the primary difference between them? I have read through the Docker docs but couldn't really understand it well.
| # Docker volumes
You can use Docker volumes to create a new volume in your container and to mount it to a folder of your host. E.g. you could mount the folder `/var/log` of your Linux host to your container like this:
```
docker run -d -v /var/log:/opt/my/app/log:rw some/image
```
This would create a folder called `/opt/my/app/log` inside your container. And this folder will be `/var/log` on your Linux host. You could use this to persist data or share data between your containers.
# Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
```
docker run -d -v /some/data/to/share --name MyDataContainer some/image
```
This container will run some application (e.g. a database) and has a folder called `/some/data/to/share`. You can share this folder with another container now:
```
docker run -d --volumes-from MyDataContainer some/image
```
This container will also see the same volume as in the previous command. You can share the volume between many containers as you could share a mounted folder of your host. But it will not pollute your host with data - everything is still encapsulated in isolated containers.
# My resources
<https://docs.docker.com/userguide/dockervolumes/>
|
Best Practices for AD DS Backup and Recovery?
So Microsoft claims that "[you cannot use a network shared folder as a backup target for a system state backup](https://technet.microsoft.com/en-us/library/cc753294(v=ws.10).aspx)", but I've seen plenty of posts where people indicate that they are able to do this from a command prompt using wbadmin.
My end goal:
I am not concerned about backing up any of our domain controllers, because if one dies, I'll just spin up a new domain controller and let the remaining DCs replicate to it. I am concerned however about at least making sure that I have a backup of AD, in case our entire AD infrastructure were to get hosed and needed to be restored from a backup.
Here’s what I’ve done so far to accomplish the goal:
From my PDCe, I ran a successful backup to a network share using the following command:
```
wbadmin start systemstatebackup -backuptarget:\\srv-backup\b$\srv-dc1
```
I then created a scheduled backup like this:
```
wbadmin enable backup -addtarget:\\srv-backup\b$\srv-dc1 -systemstate -schedule:03:00
```
I verified the following day that the scheduled backup completed successfully.
So here are new questions:
1. How do I properly backup AD? Is my current method correct?
2. If my current backup method will only yield ONE backup at any given time (because it’s backing up to a network share and it will overwrite the previous backup each night), should I look into getting local storage to push the backups to (so I can have multiple backups), or should I just do backups of my other two DCs in the same manner; to a network share (staggering the schedules of course - then I’ll at least have one or more daily backups that I can depend on)?
3. I've read in another thread in the community where someone said to "backup the NTDS folder from C:\Windows", but I'm assuming that is unnecessary since it gets backed up during the systemstate backup - is that correct?
|
>
> So Microsoft claims that "you cannot use a network shared folder as a backup target for a system state backup"
>
>
>
That is (or was) a restriction on the original version of Windows Backup, that came on older OSes (Vista RTM and Server 2008 RTM - this may or may not have been addressed in service packs or updates to those OSes). Windows 7+/Server 2008 R2+ handle system state backups to network folders fine.
>
> 1. How do I properly backup AD? Is my current method correct?
>
>
>
No. Backing up one Domain Controller is not the same as backing up Active Directory. *IF* everything goes well, then sure, you might be able to get away with it. Of course, backups only exist for when everything doesn't go well, so you should always consider what could go wrong when you're coming up with a backup strategy. In this case, I see two *major* issues.
1. You're only backing up one domain controller. If/when replication breaks to/from that domain controller, or *that* one domain controller is the source of corruption that's forcing you to restore from backups, you don't have backups of your actual Active Directory anymore.
2. Your retention period of one backup is pretty useless. By the time you realize you have a problem, you've probably overwritten your backup with a copy that contains your problem. So that needs to be fixed, and fortunately, it's not hard - store your backups in folders named with the date taken (see the sketch after this list). You may also want to consider doing incremental backups to save space. Weekly fulls, daily incrementals is a pretty common strategy that strikes a nice balance between disk space and speed/ease of backup restore.
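For instance, a minimal sketch of a nightly PowerShell wrapper that drops each system state backup into a date-named folder (the share path matches the question; treat this as illustrative rather than a tested script):

```
$stamp  = Get-Date -Format 'yyyy-MM-dd'
$target = '\\srv-backup\b$\srv-dc1\' + $stamp
New-Item -ItemType Directory -Path $target -Force | Out-Null
wbadmin start systemstatebackup -backupTarget:$target -quiet
```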
>
> 2. If my current backup method will only yield ONE backup at any given time (because it’s backing up to a network share and it will overwrite the previous backup each night), should I look into getting local storage to push the backups to (so I can have multiple backups), or should I just do backups of my other two DCs in the same manner; to a network share (staggering the schedules of course - then I’ll at least have one or more daily backups that I can depend on)?
>
>
>
As stated/implied above, you should backup *all* your domain controllers daily. As mentioned above, put the backups in a folder that's named by the date taken to prevent backups from being overwritten, and consider incremental and/or differential backups to save space.
>
> 3. I've read in another thread in the community where someone said to "backup the NTDS folder from C:\Windows", but I'm assuming that is unnecessary since it gets backed up during the systemstate backup - is that correct?
>
>
>
You are correct. The NTDS (NT Directory Service) folder contains the databases that essentially are your Active Directory Domain. A system state backup should get that folder and the databases in it. (Wouldn't hurt to verify that on your backups, of course.) As the saying goes, backups that aren't restore-tested aren't really backups.
|
Why using both #pragma once and include guard?
I am studying boost libraries, and something strange to me is many libraries use this kind of code:
```
#ifndef BOOST_SERIALIZATION_ACCESS_HPP
#define BOOST_SERIALIZATION_ACCESS_HPP
// MS compatible compilers support #pragma once
#if defined(_MSC_VER)
# pragma once
#endif
```
MSDN explicitly states that:
>
> There is no advantage to use of both the #include guard idiom and
> #pragma once in the same file
>
>
>
I can't understand what the reason might be. The include guard does the job anyway, so why do we bother to write `#pragma once` too?
| Actually there might be a small difference inside the compiler. When compiler encounters `#pragma once` then it can internally mark that this file is included. When it encounters `#include` for this file for the second time it won't bother even to open it, it will just ignore the `#include` statement.
With only an include guard, the preprocessor has to parse the whole file every time it's included to find the matching `#endif`. Theoretically, if really complicated and large include files are included multiple times, this can affect compilation time.
Other than that, include guard and `#pragma once` behave the same way. Both of them are usually used because `#pragma once` is not guaranteed to be supported on all compilers.
**Edit:**
The compiler might have the ability to detect that the include guard's statements surround the whole file's code and deduce that it's an include guard, producing exactly the same behavior as with `#pragma once`. If that's the case, then MSDN's claim is correct.
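For illustration, the pattern such a compiler can recognize is simply a guard whose `#ifndef`/`#define`/`#endif` wraps the entire header (file and macro names made up):

```
// foo.hpp
#ifndef FOO_HPP
#define FOO_HPP

void foo();

#endif // FOO_HPP
```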
|
Fading Indicator message in Java
How to/What is a good library, to create a fading indicator message in Java like that of Outlook when you get a message, or Ubuntu/Gnome when you've connected to a network?
| Java 1.6 has a TrayIcon class that can be used to display notification messages.
```
SystemTray tray = SystemTray.getSystemTray();
Image image = Toolkit.getDefaultToolkit().getImage("tray.gif");
TrayIcon trayIcon = new TrayIcon(image, "Tray Demo");
// SystemTray.add() throws AWTException, so declare or catch it
tray.add(trayIcon);
trayIcon.displayMessage("Hello, World", "notification demo", TrayIcon.MessageType.INFO);
```
Here's the result:
[![TrayIcon on Windows](https://alex-public-images.s3.amazonaws.com/SO/java_tray_windows.png)](http://alex-public-images.s3.amazonaws.com/SO/java_tray_windows_large.png)
[![TrayIcon on Linux](https://alex-public-images.s3.amazonaws.com/SO/java_tray_ubuntu.png)](http://alex-public-images.s3.amazonaws.com/SO/java_tray_ubuntu_large.png)
On Linux you may also have a little program called notify-send. It makes it easy to invoke the standard freedesktop.org notification system from the shell. You can also run it from Java.
```
String[] notifyCmd = {"notify-send", "Hello, World!"};
Runtime.getRuntime().exec(notifyCmd);
```
I had to `apt-get install libnotify-bin` to get this on my Ubuntu box.
[![notify-send](https://alex-public-images.s3.amazonaws.com/SO/notify-send.png)](http://alex-public-images.s3.amazonaws.com/SO/notify-send_large.png)
---
I've tested these things on Windows 7 and Ubuntu 9.10. In each case the notification disappeared after some time which is I suppose the *fading indicator* effect that you want.
|
Get and Iterate through Controls from a TabItem?
How to get all the Controls/UIElements which are nested in a Tabitem (from a TabControl)?
I tried everything but wasn't able to get them.
(Set the SelectedTab):
```
private TabItem SelectedTab = null;
private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
SelectedTab = (TabItem)tabControl1.SelectedItem;
}
```
Now I need something like this:
```
private StackPanel theStackPanelInWhichLabelsShouldBeLoaded = null;
foreach (Control control in tabControl.Children /*doesnt exist*/, or tabControl.Items /*only TabItems*/, or /*SelectedTab.Items ??*/ ) //I Have no plan
{
if(control is StackPanel)
{
theStackPanelInWhichLabelsShouldBeLoaded = control;
//Load Labels in the Stackpanel, thats works without problems
}
}
```
After Silvermind:
Doing this, the Count is always 1:
```
UpdateLayout();
int nChildCount = VisualTreeHelper.GetChildrenCount(SelectedTab);
```
| TabControl has Items property (derived from ItemsControl), which returns all TabItems - <http://msdn.microsoft.com/en-us/library/system.windows.controls.itemscontrol.items.aspx>. Or you can traverse visual tree:
```
var firstStackPanelInTabControl = FindVisualChildren<StackPanel>(tabControl).First();
```
Using:
```
public static IEnumerable<T> FindVisualChildren<T>(DependencyObject rootObject) where T : DependencyObject
{
if (rootObject != null)
{
for (int i = 0; i < VisualTreeHelper.GetChildrenCount(rootObject); i++)
{
DependencyObject child = VisualTreeHelper.GetChild(rootObject, i);
if (child != null && child is T)
yield return (T)child;
foreach (T childOfChild in FindVisualChildren<T>(child))
yield return childOfChild;
}
}
}
```
|
JSON serialize Django Queryset with values being called. Is there anything wrong with my approach?
Here is the problem: Django's serializer doesn't support dictionaries and simplejson doesn't support Django Querysets. See [JSON Serializing Django Models with simplejson](https://stackoverflow.com/questions/2249792/json-serializing-django-models-with-simplejson)
I was wondering if there is anything wrong with my solution. I have something like:
```
people = People.objects.all().values('name', 'id')
json.dumps(list(people))
```
I am still a newbie with Python/Django. Is casting the `QuerySet` to a list a bad idea? Is it more efficient to use the `DjangoJSONEncoder` suggested in the other topic?
| Your solution is totally valid and very clean in my own opinion.
If you need a list of lists (instead of a list of dictionaries) you can use too:
```
from django.utils import simplejson
people = People.objects.all().values_list('name', 'id')
simplejson.dumps(list(people))
```
Sometimes, when the JSON output is very complex, we use a JSON template with the `render_to_string` function, for example:
```
context = {'people': People.objects.all().values('name', 'id')}
render_to_string('templates/people.json', context, context_instance=RequestContext(request))
```
The template *people.json* could be:
```
[
{% for person in people %}
{"name": {{ person.name }}, "id": {{ person.id }} }
{% if not forloop.last %} , {% endif %}
{% endfor %}
]
```
But the use of templates is reserved for more complex cases than yours. I think that for easier problems a good solution is to use simplejson.dumps function.
|
Resizeble Dialog in Java SWT
I have a Composite (container) which is inside another Composite (dialog area). The container contains some UI elements. How can I either make the size of the dialog bigger or make it resizable? Here is my code:
```
protected Control createDialogArea(Composite parent) {
setMessage("Enter user information and press OK");
setTitle("User Information");
Composite area = (Composite) super.createDialogArea(parent);
Composite container = new Composite(area, SWT.NONE);
container.setLayout(new GridLayout(2, false));
container.setLayoutData(new GridData(SWT.FILL, SWT.FILL, true, true, 1, 1));
Label lblUserName = new Label(container, SWT.NONE);
lblUserName.setText("User name");
txtUsername = new Text(container, SWT.BORDER);
txtUsername.setLayoutData(new GridData(SWT.FILL, SWT.CENTER, true, false, 1, 1));
txtUsername.setEditable(newUser);
txtUsername.setText(name);
return area;
}
```
| To make a JFace dialog resizable add an override for the `isResizable` method:
```
@Override
protected boolean isResizable() {
return true;
}
```
To make the dialog larger when it opens you can set a width or height hint on the layout. For example:
```
GridData data = new GridData(SWT.FILL, SWT.CENTER, true, false, 1, 1);
data.widthHint = convertWidthInCharsToPixels(75);
txtUsername.setLayoutData(data);
```
or you can override `getInitialSize()`, for example this code is leaving space for more characters both horizontally (75 characters) and vertically (20 lines):
```
@Override
protected Point getInitialSize() {
final Point size = super.getInitialSize();
size.x = convertWidthInCharsToPixels(75);
size.y += convertHeightInCharsToPixels(20);
return size;
}
```
|
How to read status code from rejected WebSocket opening handshake with JavaScript?
I created a WebSocket client in JavaScript
```
if ("WebSocket" in window) {
ws = new WebSocket(url);
ws.binaryType = "arraybuffer";
} else if ("MozWebSocket" in window) {
ws = new MozWebSocket(url);
ws.binaryType = "arraybuffer";
}
```
and a WebSocket server application. For certain cases I programmed the server to reject the connection request and provide an error code.
In e.g. Firefox Console then a message is shown
```
Firefox can't establish a connection to the server at ws://123.123.123.123:1234/.
```
and it provides the status code
```
HTTP/1.1 403
```
which is the error code that I have sent by my WebSocket server.
My question is: how can I read this status code in my JavaScript client?
```
ws.onerror = function(e) {
console.log(e);
};
ws.onclose = function(e) {
console.log(e);
};
```
are both called, but none of the Event objects contains this error code.
| The spec forbids reading the HTTP status code (or anything like it) from the WebSocket object because otherwise the WebSocket object could be used to probe non-WebSocket endpoints, which would be a security issue:
>
> User agents must not convey any failure information to scripts in a way that would allow a script to distinguish the following situations:
>
>
> - A server whose host name could not be resolved.
> - A server to which packets could not successfully be routed.
> - A server that refused the connection on the specified port.
> - A server that failed to correctly perform a TLS handshake (e.g., the server certificate can't be verified).
> - **A server that did not complete the opening handshake (e.g. because it was not a WebSocket server).**
> - A WebSocket server that sent a correct opening handshake, but that specified options that caused the client to drop the connection (e.g. the server specified a subprotocol that the client did not offer).
> - A WebSocket server that abruptly closed the connection after successfully completing the opening handshake.
>
>
>
— <https://www.w3.org/TR/websockets/#feedback-from-the-protocol>
---
**There is another way to do it though!**
The WebSocket protocol allows for custom close codes:
>
> **4000-4999**
>
>
> Status codes in the range 4000-4999 are reserved for private use and thus can't be registered. Such codes can be used by prior agreements between WebSocket applications. The interpretation of these codes is undefined by this protocol.
>
>
>
— <https://www.rfc-editor.org/rfc/rfc6455#section-7.4.2>
In your server-side logic, even when you ultimately want to reject the connection (like say the user is currently unauthenticated), do this instead:
1. Accept the WebSocket connection
2. Immediately close the connection with a custom close status
The client can now look at the [CloseEvent.code](https://developer.mozilla.org/en-US/docs/Web/API/CloseEvent) to know why the connection was rejected.
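On the client side, reading that code might look like the sketch below - the specific 4xxx value is whatever your application agreed on, not something defined by the protocol:

```
ws.onclose = function(e) {
    if (e.code === 4001) {
        console.log("Server rejected the connection: not authenticated");
    } else {
        console.log("Closed with code", e.code, "reason:", e.reason);
    }
};
```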
You don't need to do this every time the server wants to reject a WebSocket connection. For example, I'd still reject the connection with a 4xx HTTP status if the request isn't a proper WebSocket request, or for security reasons (like if the [anti-CSWSH](https://www.christian-schneider.net/CrossSiteWebSocketHijacking.html) Origin check fails). You only need to use the WebSocket close status for cases that you want the client-side logic to handle.
|
java xpath remove element from xml
I am trying to remove an element and its child elements from an xml file. Specifically, the appender with the name Testlogging.
First this is how my xml file looks.
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="null" threshold="null">
<appender class="DailyLogFileAppender" name="Testlogging">
<param name="encoding" value="UTF-8"/>
<param name="MaxFileSize" value="100MB"/>
<param name="MaxBackupIndex" value="2"/>
<param name="rootDir" value="c:\Logs"/>
<param name="componentId" value="Testlogging"/>
<param name="DatePattern" value="yyyyMMdd"/>
<layout class="SyslogPatternLayout">
<param ConversionPattern="%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%-5p] {%t} %c [%D] - %m%n"/>
</layout>
</appender>
```
Here is my java code:
```
DocumentBuilderFactory dbfact = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = dbfact.newDocumentBuilder();
Document d = builder.parse(xmlFile);
XPath xp = XPathFactory.newInstance().newXPath();
NodeList nl = (NodeList)xp.compile("//appender").evaluate(d,XPathConstants.NODESET);
for(int i = 0; i < nl.getLength(); i++)
{
if(xp.compile("./@name").evaluate(nl.item(i)).equals("Testlogging"))
{
Node node = nl.item(i);
node.removeChild(nl.item(i));
}
}
```
I would like to remove everything for this appender but an exception is being thrown. It's probably something simple I am missing.
Any ideas?
| Well, you need to call `removeChild` on the parent node of the node you want to remove, and you need to process the `NodeList` in reverse order, since W3C DOM collections are "live" collections that can change with any DOM manipulation (<https://www.w3.org/TR/2004/REC-DOM-Level-3-Core-20040407/core.html#td-live>). So use an approach like:
```
NodeList nl = (NodeList)xp.compile("//appender[@name = 'Testlogging']").evaluate(d,XPathConstants.NODESET);
for (int i = nl.getLength() - 1; i >= 0; i--)
{
nl.item(i).getParentNode().removeChild(nl.item(i));
}
```
|
Amazon Elastic Beanstalk Worker Tier
I need to do some async job processing given a web request that I will poll periodically until it is complete. I have the whole stack up and running locally but I can't conceptually understand how to move this over to the EBS worker tier. I'm using Django with Celery and RabbitMQ locally and was successfully able to swap out RabbitMQ with Amazon SQS. However, when I tried to create a worker tier that would operate off of the same RDS database as the webapp, I was unsuccessful. I'm stuck at the point where I can queue messages but can't read them from the queue. I need to use those messages to perform some expensive operation on the database and prepare the result for the consumer. Is there some architectural piece I'm missing? How and where can I get a celery daemon up to process the SQS messages?
| From the [Elastic Beanstalk documentation](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html):
>
> When you launch an AWS Elastic Beanstalk environment, you choose an environment tier, platform, and environment type. The environment tier that you choose determines whether AWS Elastic Beanstalk provisions resources to support a web application that handles HTTP(S) requests or a web application that handles background-processing tasks.
>
>
> AWS Elastic Beanstalk installs a daemon on each Amazon EC2 instance in the Auto Scaling group to process Amazon SQS messages in the worker environment tier. The daemon pulls data off the Amazon SQS queue, inserts it into the message body of an HTTP POST request, and sends it to a user-configurable URL path on the local host. The content type for the message body within an HTTP POST request is application/json by default.
>
>
> From a developer perspective, the application running on the worker tier is just a plain web service. It will receive calls from AWS Elastic Beanstalk daemon provisioned for you on the instance.
>
>
> The requests are sent to the HTTP Path value that you configure. This is done in such a way as to appear to the web application in the worker environment tier that the daemon originated the request. In this way, the daemon serves a similar role to a load balancer in a web server environment tier.
>
>
> The worker environment tier, after processing the messages in the queue, forwards the messages over the local loopback to a web application at a URL that you designate. The queue URL is only accessible from the local host. Because you can only access the queue URL from the same EC2 instance, no authentication is needed to validate the messages that are delivered to the URL.
>
>
> A web application in a worker environment tier should only listen on the local host. When the web application in the worker environment tier returns a 200 OK response to acknowledge that it has received and successfully processed the request, the daemon sends a DeleteMessage call to the SQS queue so that the message will be deleted from the queue. (SQS automatically deletes messages that have been in a queue for longer than the configured RetentionPeriod.) If the application returns any response other than 200 OK or there is no response within the configured InactivityTimeout period, SQS once again makes the message visible in the queue and available for another attempt at processing.
>
>
>
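For a Django app like the one described in the question, that means the worker environment just needs a plain view that accepts the daemon's POST and returns 200 when the work succeeds. A minimal sketch (the URL wiring, payload fields and task logic are assumptions for illustration):

```
import json

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

def run_expensive_work(payload):
    # placeholder for the real database work triggered by the message
    pass

@csrf_exempt
def process_task(request):
    payload = json.loads(request.body)  # the SQS message body, delivered as JSON
    run_expensive_work(payload)
    return HttpResponse(status=200)     # 200 OK tells the daemon to delete the message
```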
|
How can I create a Crypt::RSA object from modulus, exponent, and private exponent?
I'm trying to port the following php functionality over to perl:
```
public function loadKey($mod, $exp, $type = 'public')
{
$rsa = new Crypt_RSA();
$rsa->signatureMode = CRYPT_RSA_SIGNATURE_PKCS1;
$rsa->setHash('sha256');
$rsa->modulus = new Math_BigInteger(Magicsig::base64_url_decode($mod), 256);
$rsa->k = strlen($rsa->modulus->toBytes());
$rsa->exponent = new Math_BigInteger(Magicsig::base64_url_decode($exp), 256);
// snip...
}
```
I need to convert a string in the form ("RSA.$mod.$exp.$private_exp"):
```
RSA.mVgY8RN6URBTstndvmUUPb4UZTdwvwmddSKE5z_jvKUEK6yk1u3rrC9yN8k6FilGj9K0eeUPe2hf4Pj-5CmHww==.AQAB.Lgy_yL3hsLBngkFdDw1Jy9TmSRMiH6yihYetQ8jy-jZXdsZXd8V5ub3kuBHHk4M39i3TduIkcrjcsiWQb77D8Q==
```
...to a Crypt::RSA object. I've split out the components so I have `$mod`, `$exp`, and `$private_exp`, but the perl Crypt::RSA API doesn't seem to have a way to set them explicitly.
| Worked out on IRC, documenting it here for the rest of the world: it's completely undocumented but `Crypt::RSA::Key` *does* have methods called `n`, `e`, and `d` that correspond to the modulus, the public exponent, and the private exponent. Modulo bugs in the check function (which is *supposed* to work if `p` and `q` are unavailable but `n` is, but actually doesn't), it's possible to create a working key with those methods.
We solved the problem together by creating a subclass of `Crypt::RSA::Key::Private` with a factory method that decodes the base64 encoding (using [MIME::Base64::URLSafe](http://search.cpan.org/perldoc/MIME::Base64::URLSafe)) and the additional binary encoding (using [Math::BigInt](http://search.cpan.org/perldoc/Math::BigInt)->from\_hex and `unpack "H*"`) and then sets those three private members, and the `Crypt::RSA` modules were able to accept it as a Key.
|
in python, how to connect points with smooth line in plotting?
I am trying to plot points + smooth line using spline. But the line "overshoots" some points, e.g. in the following code, over the point 0.85.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import spline
x=np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9, 2])
y=np.array([0.57,0.85,0.66,0.84,0.59,0.55,0.61,0.76,0.54,0.55,0.48])
x_new = np.linspace(x.min(), x.max(),500)
y_smooth = spline(x, y, x_new)
plt.plot (x_new,y_smooth)
plt.scatter (x, y)
```
how do I fix it?
| You might try using `interp1d` from `scipy.interpolate`:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
x=np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9, 2])
y=np.array([0.57,0.85,0.66,0.84,0.59,0.55,0.61,0.76,0.54,0.55,0.48])
x_new = np.linspace(x.min(), x.max(),500)
f = interp1d(x, y, kind='quadratic')
y_smooth=f(x_new)
plt.plot (x_new,y_smooth)
plt.scatter (x, y)
```
which yields:
[![enter image description here](https://i.stack.imgur.com/qilHu.png)](https://i.stack.imgur.com/qilHu.png)
some other options for the `kind` parameter are in the docs:
>
> kind : str or int, optional Specifies the kind of interpolation as a string (‘linear’, ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’ where ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order) or as an integer specifying the order of the spline interpolator to use. Default is ‘linear’.
>
>
>
|
Boostrap CDN with 16-columns instead of 12
I want to use the latest Bootstrap CDN, but it defaults to 12 column layout. Is the only way to customize into a 16 col layout is to download and host files yourself?
Edit: I'm asking how to get bootstrap through a CDN. I use the bootstrap CDN from <https://cdnjs.com/> and it works great, except it is 12-col layout.
| There are really only 2 options to do this.
1. You override Bootstrap's grid system with your own css file.
```
<link rel="stylesheet" href="/cdn/path/to/bootstrap.css" />
<link rel="stylesheet" href="/my/path/overide_bootstrap.css" />
```
This can be a tedious task and I would not recommend doing it this way.
2. Customize Bootstrap and use your own version
If you go to <http://getbootstrap.com/customize/> you can customize and download a fully customized version of Bootstrap. Specifically, what you are looking for is located at <http://getbootstrap.com/customize/#grid-system>.
Once it has been downloaded you have a few options:
1. Serve the files yourself from your own project.
2. Use a free CDN service
There are multiple CDN services that you can use, such as CloudFlare (<https://www.cloudflare.com/features-cdn>) or you can use cdnjs (<https://github.com/cdnjs/cdnjs#adding-a-new-or-updating-an-existing-library>)
I hope this helps!
|
Why don't the values from my linq queries appear immediately?
I have the following block of linq queries to calculate some values for a report.
```
var items = (from trans in calclabordb.Sales_Transactions
select trans).SelectMany(st => st.Sales_TransactionLineItems).Where(stli => stli.TypeID == typeID);
decimal test = items.Where(stli => stli.Inventory_Item is Base).Sum(stli => (decimal?)stli.Inventory_Item.IntExtraServiceAmount) ?? 0;
decimal test2 = items.Where(stli => stli.Inventory_Item is Extra).Sum(stli => (decimal?)stli.ItemPrice) ?? 0;
decimal test3 = test + test2;
current.ExtraSales = items.Where(stli => stli.Inventory_Item is Base).Sum(stli => (decimal?)stli.Inventory_Item.IntExtraServiceAmount) ?? 0 +
items.Where(stli => stli.Inventory_Item is Extra).Sum(stli => (decimal?)stli.ItemPrice) ?? 0;
```
I've stepped through the code in a debugger and I've noticed some oddities. After assigning into `test` its value is 0. After assigning into `test2` `test2 == 0` and `test == 11.31` after assigning into test3 `test == 11.31` `test2 == 11.28` and `test3 == 22.59` after assigning into `ExtraSales` `ExtraSales == 11.31`. The value in `ExtraSales` when this is all complete should be 22.59. What's going on here?
EDIT: I've added additional lines after the assignment into `ExtraSales` but the value does not change.
| **The answers that say that this is a deferred execution problem are wrong.** It is an operator precedence problem.
Get rid of all that completely irrelevant and impossible-to-read code in there. It is all a red herring. The relevant repro is:
```
decimal? d1 = 11.31m;
decimal? d2 = 11.28m;
decimal test1 = d1 ?? 0m;
decimal test2 = d2 ?? 0m;
decimal test3 = test1 + test2;
decimal test4 = d1 ?? 0m + d2 ?? 0m;
```
**What is the meaning of the final line? Does it mean the same thing as the line before it?**
No, it does not. The addition operator is **higher precedence** than the null coalescing operator, so this is
```
decimal test4 = d1 ?? (0m + d2) ?? 0m;
```
The code you wrote means "produce the value of d1 if d1 is not null. If d1 is null and 0m + d2 is not null then produce the value of 0m + d2. If 0m + d2 is null then produce the value 0m."
(You might not have known that the ?? operator has this pleasant chaining property. In general, `a ?? b ?? c ?? d ?? e` gives you the first non-null value of a, b, c or d, and e if they are otherwise all null. You can make the chain as long as you like. It's quite an elegant little operator.)
Since d1 is not null, we produce its value and test4 is assigned the value of d1.
You probably meant to say:
```
decimal test4 = (d1 ?? 0m) + (d2 ?? 0m);
```
If you mean "d1 or d2 could be null, and if either is, then treat the null one as zero". So 12 + 2 is 14, 12 + null is 12, null + null is 0
If you mean "either d1 and d2 could be null, and if *either* is null then I want zero", that's
```
decimal test4 = (d1 + d2) ?? 0m;
```
So 12 + 2 is 14, 12 + null is 0, null + null is 0
I note that if you had formatted your code so that the relevant text was on the screen, you probably wouldn't have gotten five or so incorrect answers posted first. Try to format your code so that all of it is on the screen; you'll get better answers if you do.
|
Default password for Kali Linux on Windows 10?
What is the default password for Kali on Windows 10 via Windows Subsystem for Linux?
| ### Traditional Kali
[Searching for this via Google](https://www.google.com/search?q=default%20password%20kali%20linux&oq=default%20password%20kali%20linux%20&aqs=chrome..69i57j69i60j0l4.4695j0j1&sourceid=chrome&ie=UTF-8) it appears to be `toor` for the `root` user. Notice it's just the name `root` backwards which is a typical hacker thing to do on compromised systems, as an insider's joke.
If you happened to provide a password during the installation, then this would be the password to use here instead of the default `toor`.
### Kali on WSL
**NOTE:** *WSL = Windows Subsystem for Linux*. In this particular flavor of Kali the root password appears to be randomly generated for the root user. To get into root you simply use `sudo su` instead.
Reference: [Thread: Unable to 'su root' in kali on WSL](https://forums.kali.org/showthread.php?39590-Unable-to-su-root-in-kali-on-WSL)
>
> I'm sure the root password is randomly generated in WSL.
> It's irrelevant though, just type
>
>
> Code:
>
>
>
> ```
> sudo su
>
> ```
>
>
### What's WSL?
So there are various flavors of Kali. You can download it and install it natively as a bare OS, or you can go into the Windows App Store and install it as an addon.
>
> For the past few weeks, we’ve been working with the Microsoft WSL team to get Kali Linux introduced into the Microsoft App Store as an official WSL distribution and today we’re happy to [announce](https://blogs.msdn.microsoft.com/commandline/2018/03/05/kali-linux-for-wsl/) the availability of the “Kali Linux” Windows application. For Windows 10 users, this means you can simply enable WSL, [search for Kali](https://www.microsoft.com/en-us/store/p/kali-linux/9pkr34tncv07) in the Windows store, and install it with a single click. This is especially exciting news for penetration testers and security professionals who have limited toolsets due to enterprise compliance standards.
>
>
>
For an overview of what limitations there are in WSL see this U&L Q&A titled: [Attempting to run a regular tunnel in Debian version 9.5 Linux](https://unix.stackexchange.com/questions/457454/attempting-to-run-a-regular-tunnel-in-debian-version-9-5-linux/457462#457462).
### References
- [Kali Linux Default Passwords](https://docs.kali.org/introduction/kali-linux-default-passwords)
- [Is there a default password of Kali Linux OS after first installation?](https://superuser.com/questions/619332/is-there-a-default-password-of-kali-linux-os-after-first-installation)
- [I cannot log into Kali Linux after installing it. How can I log in?](https://www.quora.com/I-cannot-log-into-Kali-Linux-after-installing-it-How-can-I-log-in)
- [Install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
|
Prepending each line with how long it took to generate it
I would like to have a script that would prepend each line of stdin with information about how long it took to generate it.
Basically for input:
```
foo
bar
baz
```
I would like to have
```
0 foo
10 bar
5 baz
```
Where 10 means that 10 seconds passed between printing foo and printing bar; similarly for 5, it took 5 seconds after printing bar to print baz.
I know there is a utility `ts` that shows timestamps and I know about <https://github.com/paypal/gnomon>, but I would prefer not to use javascript to do that.
Is there a standard tool for that or should I use awk and do processing?
| Let's suppose that the script generating the output is called `generate`. Then, to display the number of seconds it takes for it to generate each line:
```
$ generate | ( t0=$(date +%s); while read -r line; do t1=$(date +%s); echo " $((t1-t0)) $line"; t0=$t1; done )
2 foo
0 foo
5 foo
3 foo
```
The same commands spread out over multiple lines looks like:
```
generate | ( t0=$(date +%s)
while read -r line
do
t1=$(date +%s)
echo " $((t1-t0)) $line"
t0=$t1
done
)
```
Alternatively, for convenience, we can define a shell function that contains this code:
```
timer() { t0=$(date +%s); while read -r line; do t1=$(date +%s); echo " $((t1-t0)) $line"; t0=$t1; done; }
```
We can use this function as follows:
```
$ generate | timer
0 foo
2 foo
4 foo
3 foo
```
### How it works
- `t0=$(date +%s)`
This captures the current time at the start of the script in seconds-since-epoch.
- `while read -r line; do`
This starts a loop which reads from standard input
- `t1=$(date +%s)`
This captures the time in seconds-since-epoch at which the current line was captured.
- `echo " $((t1-t0)) $line"`
This prints out the time in seconds that it took for the current line.
- `t0=$t1`
This updates `t0` for the next line.
- `done`
This signals the end of the `while` loop.
|
Find fails if filename contains brackets
I'm trying to use find inside a loop to create a variable that contains a file matching the filename + desired string
**Example:**
```
file1.en.srt
file1.mkv
file1.pt.srt
```
This is the relevant part of the code:
```
shopt -s nullglob
shopt -s nocaseglob
if [ -d "$1" ]; then
for file in "${1%/}/"*mkv; do
# Get filename to match against subs and audios
filename="$(basename "$file" .mkv)"
# Find matching subtitle file
engsubs="$(find . -name "$filename*en.srt*" | sed -e 's,^\./,,')"
# Find matching audio file
engaudio="$(find . -iname "$filename*en.ac3" -o -iname "$filename*en.eac3" -o -iname "$filename*en.dts" | sed -e 's,^\./,,')"
done
fi
```
It works if files don't contain brackets, but the `find` commands don't find anything for files whose names contain brackets. Why is this happening? I want to create a variable like `$en` that would contain `file1.en.srt`
| The problem is that `[` and `]` are glob characters. For example, consider this file:
```
ba[r].mkv
```
When running your script on that file, `$filename` will be: `ba[r]` and, therefore, your `find` command will be:
```
find . -name 'ba[r]*pt-BR.srt*'
```
Since `[r]` is a single-letter character class, it means `r`. So your command is looking for a filename starting with `ba`, then an `r`, then any character(s), and `pt-BR.srt` and any characters again. You need to escape the brackets:
```
find . -name 'ba\[r\]*pt-BR.srt*'
```
The simplest way is to use `printf` and `%q`. So change this line:
```
filename="$(basename "$file" .mkv)"
```
To this:
```
filename=$(printf '%q' "$(basename "$file" .mkv)")
```
Or, without the command substitution around `printf`:
```
printf -v filename '%q' "$(basename "$file" .mkv)"
```
|
Canvas context.fillText() vs context.strokeText()
Is there any difference between `context.fillText()` and `context.strokeText()` besides the fact that the first uses `context.fillStyle` while the later uses `context.strokeStyle`. Any reason they did not add a `context.textStyle` property?
|
```
var canvas = document.getElementById("myCanvas");
var ctx = canvas.getContext("2d");
ctx.fillStyle = 'red';
ctx.strokeStyle = 'green'
ctx.lineWidth = 3;
ctx.font = '90px verdana';
ctx.fillText('Q', 50, 100);
ctx.strokeText('Q', 125, 100);
ctx.fillText('Q', 200, 100);
ctx.strokeText('Q', 200, 100);
```
```
<canvas id="myCanvas"></canvas>
```
Yep, strokeText actually strokes the outline of the letters while fillText fills the inside of the letters.
![enter image description here](https://i.stack.imgur.com/Ypwnz.png)
|
UITableViewCell Subclass with XIB Swift
I have a `UITableViewCell` subclass `NameInput` that connects to an xib with a custom `init` method.
```
class NameInput: UITableViewCell {
class func make(label: String, placeholder: String) -> NameInput {
let input = NSBundle.mainBundle().loadNibNamed("NameInput", owner: nil, options: nil)[0] as NameInput
input.label.text = label
input.valueField.placeholder = placeholder
input.valueField.autocapitalizationType = .Words
return input
}
}
```
Is there a way I can initialize this cell in the `viewDidLoad` method and still reuse it? Or do I have to register the class itself with a reuse identifier?
| The customary NIB process is:
1. Register your NIB with the reuse identifier. In Swift 3:
```
override func viewDidLoad() {
super.viewDidLoad()
tableView.register(UINib(nibName: "NameInput", bundle: nil), forCellReuseIdentifier: "Cell")
}
```
In Swift 2:
```
override func viewDidLoad() {
super.viewDidLoad()
tableView.registerNib(UINib(nibName: "NameInput", bundle: nil), forCellReuseIdentifier: "Cell")
}
```
2. Define your custom cell class:
```
import UIKit
class NameInput: UITableViewCell {
@IBOutlet weak var firstNameLabel: UILabel!
@IBOutlet weak var lastNameLabel: UILabel!
}
```
3. Create a NIB file in Interface Builder (with the same name referenced in step 1):
- Specify the base class of the tableview cell in the NIB to reference your custom cell class (defined in step 2).
- Hook up references between the controls in the cell in the NIB to the `@IBOutlet` references in the custom cell class.
4. Your `cellForRowAtIndexPath` would then instantiate the cell and set the labels. In Swift 3:
```
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) as! NameInput
let person = people[indexPath.row]
cell.firstNameLabel.text = person.firstName
cell.lastNameLabel.text = person.lastName
return cell
}
```
In Swift 2:
```
override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as! NameInput
let person = people[indexPath.row]
cell.firstNameLabel.text = person.firstName
cell.lastNameLabel.text = person.lastName
return cell
}
```
I wasn't entirely sure from your example what controls you placed on your cell, but the above has two `UILabel` controls. Hook up whatever `@IBOutlet` references make sense for your app.
|
UUID Primary Key in Postgres, What Insert Performance Impact?
I am wondering about the performance impact of using a non-sequential UUID as the primary key in a table that will become quite large in PosgreSQL.
In DBMS's that use clustered storage for table records it is a given that using a UUID is going to increase the cost of inserts due to having to read from disk to find the data page into which to perform the insert, once the table is too big to hold in memory. As I understand it, Postgres does not maintain row clustering on inserts, so I imagine that in Postgres using a UUID PK does not hurt the performance of that insert.
But I would think that it makes the insert into the index that the primary key constraint creates much more expensive once the table is large, because it will have to constantly be read from disk to update the index on insertion of new data. Whereas with a sequential key the index will only be updated at the tip which will always be in memory.
Assuming that I understand the performance impact on the index correctly, is there any way to remedy that or are UUIDs simply not a good PK on a large, un-partitioned table?
|
>
> As I understand it, Postgres does not maintain row clustering on inserts
>
>
>
Correct at the moment. Unfortunately.
>
> so I imagine that in Postgres using a UUID PK does not hurt the performance of that insert.
>
>
>
It still does have a performance cost because of the need to maintain the PK, and because the inserted tuple is bigger.
- The uuid is 4 times as wide as a typical 32-bit integer synthetic key, so the row to write is 12 bytes bigger and you can fit fewer rows into a given amount of RAM
- The b-tree index that implements the primary key will be 4x as large (vs a 32-bit key), taking longer to search and requiring more memory to cache. It also needs more frequent page splits.
- Writes will tend to be random within indexes, not appends to hot, recently accessed rows
>
> is there any way to remedy [the performance impact on the index] or are UUIDs simply not a good PK on a large, un-partitioned table?
>
>
>
If you need a UUID key, you need a UUID key. You shouldn't use one if you don't require one, but if you cannot rely on a central source of synthetic keys and there is no suitable natural key to use, it's still the way to go.
Partitioning won't help much unless you can confine writes to one partition. Also, you won't be able to usefully use constraint exclusion on searches for the key if writing only to one partition at a time, so you'll still have to search all the partitions' indexes for a key when doing queries. I can only see it being useful if your UUID forms part of a composite key and you can partition on the other part of the composite key.
|
Which link function for a regression when Y is continuous between 0 and 1?
I've always used logistic regression when Y was categorical data 0 or 1.
Now I have this dependent variable that is really a ratio/probability. That means it can be any number between 0 and 1.
I really think "logistic" shape would fit very nicely, but I remember categorical Y was a big deal when proving why MLE works.
The point is, I am wrong using logit regression for this Y or it doesn't matter? Should I use probit instead?
Am I committing a capital crime?
| There's nothing wrong per se with using "logistic regression" for this kind of data. You can think of it as an empirical adjustment to allow fitting a response that has a bounded support. It's better than the alternative (logit-transforming your response, then using ordinary linear regression) because the resulting predictions are asymptotically unbiased, the mean predicted value equals the observed mean response, and (probably the most important) you don't have to worry about situations where Y equals 0 or 1. The arcsin transformation can handle Y = 0 or 1, but then your regression results aren't so easily interpretable in terms of log-odds ratios.
The main thing to look out for is that, as with any generalized linear model, you are implicitly assuming a particular relationship between $E(Y|X)$ and $\textrm{Var}(Y|X)$. You should check that this assumption holds, e.g. by looking at diagnostic plots of residuals.
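In R, for example, one way to fit such a model and get those diagnostics is the quasi-binomial family, which lets `glm()` accept a non-integer response between 0 and 1 (the data frame and variable names below are made up):

```
fit <- glm(y ~ x1 + x2, family = quasibinomial(link = "logit"), data = dat)
summary(fit)
plot(fit)  # residual diagnostics to check the assumed mean-variance relationship
```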
For most cases, doing a probit regression will give very similar results to a logistic regression. An alternative is to use the complementary-log-log link if you have reason to believe there is asymmetry between Y = 0 and 1.
|
What is the --release flag in the Java 9 compiler?
Java 9's `javac` has a new flag `--release`:
```
> javac --help
...
--release <release>
Compile for a specific VM version. Supported targets: 6, 7, 8, 9
```
How is it different from `-source` and `-target` flags? Is it just a shortcut for `-source X -target X`?
| Not exactly.
[*JEP 247: Compile for Older Platform Versions*](http://openjdk.java.net/jeps/247) defines this new command-line option, `--release`:
>
> We defined a new command-line option, `--release`, which automatically configures the compiler to produce class files that will link against an implementation of the given platform version. For the platforms predefined in `javac`, **`--release N` is equivalent to `-source N -target N -bootclasspath <bootclasspath-from-N>`**. (emphasis mine)
>
>
>
So no, it is not equivalent to `-source N -target N`. The reason for this addition is stated in the "Motivation" section:
>
> `javac` provides two command line options, `-source` and `-target`, which can be used to select the version of the Java language accepted by the compiler and the version of the class files it produces, respectively. By default, however, `javac` compiles against the most-recent version of the platform APIs. The compiled program can therefore accidentally use APIs only available in the current version of the platform. Such programs cannot run on older versions of the platform, regardless of the values passed to the `-source` and `-target` options. This is a long-term usability pain point, since users expect that by using these options they'll get class files that can run on the specified platform version.
>
>
>
In short, specifying the source and target options are not sufficient for cross-compilation. Because `javac`, by default, compiles against the most recent of the platform APIs, they can't be guaranteed to run on older versions. You also need to specify the `-bootclasspath` option corresponding to the older version to cross-compile correctly. This would include the correct API version to compile against and allow for execution on older version. Since it was very often forgotten, it was decided to add one command line option which did all the necessary things to correctly cross-compile.
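As a concrete (illustrative) example:

```
# Compile against the Java 8 platform APIs with a newer JDK
javac --release 8 Foo.java

# Roughly what it replaces - the old, easy-to-forget incantation (the rt.jar path is illustrative)
javac -source 8 -target 8 -bootclasspath /path/to/jdk8/jre/lib/rt.jar Foo.java
```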
Further reading [in the mailing list](http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-July/002414.html) and [Oracle Docs](https://docs.oracle.com/javase/9/tools/javac.htm). The original bug was filed [here](https://bugs.openjdk.java.net/browse/JDK-8058150). Note that since the integration of this option, JDK builds have come bundled with descriptions of the platform APIs of older releases, mentioned under section "Risks and Assumptions". That means you don't need the older version installed on your machine for cross-compilation to work.
|
Bit trick to detect if any of some integers has a specific value
Is there any clever bit trick to detect if any of a small number of integers (say 3 or 4) has a specific value?
The straightforward
```
bool test(int a, int b, int c, int d)
{
// The compiler will pretty likely optimize it to (a == d | b == d | c == d)
return (a == d || b == d || c == d);
}
```
[in GCC compiles to](https://godbolt.org/g/43VHVk)
```
test(int, int, int, int):
cmp ecx, esi
sete al
cmp ecx, edx
sete dl
or eax, edx
cmp edi, ecx
sete dl
or eax, edx
ret
```
Those `sete` instructions have higher latency than I want to tolerate, so I would rather use something bitwise (`&`, `|`, `^`, `~`) stuff and a single comparison.
| The only solution I've found yet is:
```
int s1 = ((a-d) >> 31) | ((d-a) >> 31);
int s2 = ((b-d) >> 31) | ((d-b) >> 31);
int s3 = ((c-d) >> 31) | ((d-c) >> 31);
int s = s1 & s2 & s3;
return (s & 1) == 0;
```
alternative variant:
```
int s1 = (a-d) | (d-a);
int s2 = (b-d) | (d-b);
int s3 = (c-d) | (d-c);
int s = (s1 & s2 & s3);
return (s & 0x80000000) == 0;
```
both are translated to:
```
mov eax, ecx
sub eax, edi
sub edi, ecx
or edi, eax
mov eax, ecx
sub eax, esi
sub esi, ecx
or esi, eax
and esi, edi
mov eax, edx
sub eax, ecx
sub ecx, edx
or ecx, eax
test esi, ecx
setns al
ret
```
which has less sete instructions, but obviously more mov/sub.
Update: as @BeeOnRope suggested, it makes sense to cast the input variables to unsigned.
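A sketch of that variant (my own illustration, not code from that discussion): the sign bit of `(x - y) | (y - x)` is clear only when `x == y`, and doing the subtractions in unsigned arithmetic avoids signed-overflow undefined behavior:

```
bool test(int a, int b, int c, int d)
{
    unsigned ua = a, ub = b, uc = c, ud = d;
    unsigned s1 = (ua - ud) | (ud - ua);  // high bit set iff a != d
    unsigned s2 = (ub - ud) | (ud - ub);  // high bit set iff b != d
    unsigned s3 = (uc - ud) | (ud - uc);  // high bit set iff c != d
    return ((s1 & s2 & s3) & 0x80000000u) == 0;
}
```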
|
Why is Rust NLL not working for multiple borrows in the same statement?
First, I tried something like this:
```
let mut vec = vec![0];
vec.rotate_right(vec.len());
```
It can't be compiled because:
>
> error[E0502]: cannot borrow `vec` as immutable because it is also borrowed as mutable
>
>
>
I thought that the Rust borrow checker could be smarter than this, so I found something called **NLL**, and it should solve this problem.
I tried the sample:
```
let mut vec = vec![0];
vec.resize(vec.len(), 0);
```
It could work, but why is it not working with `rotate_right`? Both of them take a `&mut self`. What's going on?
| It is definitely an interesting one.
They are similar - but not quite the same. [`resize()`](https://doc.rust-lang.org/stable/std/vec/struct.Vec.html#method.resize) is a member of `Vec`. [`rotate_right()`](https://doc.rust-lang.org/stable/std/primitive.slice.html#method.rotate_right), on the other hand, is a method of slices.
[`Vec<T>` derefs to `[T]`](https://doc.rust-lang.org/stable/std/vec/struct.Vec.html#deref-methods-%5BT%5D), so most of the time this does not matter. But actually, while this call:
```
vec.resize(vec.len(), 0);
```
Desugars to something like:
```
<Vec<i32>>::resize(&mut vec, <Vec<i32>>::len(&vec), 0);
```
This call:
```
vec.rotate_right(vec.len());
```
Is more like:
```
<[i32]>::rotate_right(
<Vec<i32> as DerefMut>::deref_mut(&mut vec),
<Vec<i32>>::len(&vec),
);
```
But in what order?
This is the [MIR](https://blog.rust-lang.org/2016/04/19/MIR.html) for `rotate_right()` (simplified a lot):
```
fn foo() -> () {
_4 = <Vec<i32> as DerefMut>::deref_mut(move _5);
_6 = Vec::<i32>::len(move _7);
_2 = core::slice::<impl [i32]>::rotate_right(move _3, move _6);
}
```
And this is the MIR for `resize()` (again, simplified a lot):
```
fn foo() -> () {
_4 = Vec::<i32>::len(move _5);
_2 = Vec::<i32>::resize(move _3, move _4, const 0_i32);
}
```
In the `resize()` example, we first call `Vec::len()` with a reference to `vec`. This returns `usize`. Then we call `Vec::resize()`, when we have no outstanding references to `vec`, so mutably borrowing it is fine!
However, with `rotate_right()`, first we call `<Vec<i32> as DerefMut>::deref_mut(&mut vec)`. This returns `&mut [i32]`, with its lifetime tied to `vec`. That is, as long as this reference (mutable reference!) is alive, we are not allowed to have any other reference to `vec`. But then we try to borrow `vec` in order to pass the (shared, but it doesn't matter) reference to `Vec::len()`, while we still need to use the mutable reference from `deref_mut()` later, in the call to `<[i32]>::rotate_right()`! This is an error.
This is because Rust defines [an evaluation order for operands](https://doc.rust-lang.org/reference/expressions.html#evaluation-order-of-operands):
>
> Expressions taking multiple operands are evaluated left to right as written in the source code.
>
>
>
Because `vec.rotate_right(vec.len())` is actually `(&mut *vec).rotate_right(vec.len())`, we first evaluate the dereference+reference, then the arguments:
```
let dereferenced_vec = &mut *vec;
let len = vec.len();
dereferenced_vec.rotate_right(len);
```
Which is obviously a violation of the borrow rules.
On the other hand, `vec.resize(vec.len(), 0)` has no work to do on the callee (`vec`), and so we first evaluate `vec.len()`, and then the call itself.
Solving this is as easy as extracting the `vec.len()` to a new line (new statement, to be precise), and the compiler also suggests that.
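A minimal sketch of that fix:

```
let mut vec = vec![0];
let len = vec.len();    // the shared borrow of `vec` ends here
vec.rotate_right(len);  // now the mutable borrow is the only borrow alive
```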
|
Counting of algebraic data types
I'm reading/listening to Chris Taylor's presentation on algebraic data types.
<http://chris-taylor.github.io/blog/2013/02/10/the-algebra-of-algebraic-data-types/>
And there's a section on function types. Specifically the example
```
data Bool = True | False
data Trio = First | Second | Third
```
Given the law
```
a -> b == B^A
```
Given
```
Trio -> Bool should equal 8
```
**Why 8 and not 6 via multiplication?**
If I'm understanding this correctly, the concrete combinations should be
```
First -> True
First -> False
Second -> True
Second -> False
Third -> True
Third -> False
```
Isn't that just 6 concrete implementations of `Trio -> Bool`?
What am I missing?
| Those aren't full implementations. For the full implementations, it is like counting from 0 to 7 (which is a total of 8 = 2^3 numbers) in binary, with each line of each implementation representing one of the three bits. All the possibilities look like this (if we call our function `f`):
1)
```
f First = False
f Second = False
f Third = False
```
2)
```
f First = True
f Second = False
f Third = False
```
3)
```
f First = False
f Second = True
f Third = False
```
4)
```
f First = True
f Second = True
f Third = False
```
5)
```
f First = False
f Second = False
f Third = True
```
6)
```
f First = True
f Second = False
f Third = True
```
7)
```
f First = False
f Second = True
f Third = True
```
8)
```
f First = True
f Second = True
f Third = True
```
|
Trix Editor Rails - Customizing
I am using a trix editor such as `<%= f.trix_editor :body, class: 'trix-content form-control', rows: 15 %>`
Currently the buttons are black and obviously translated english (as well as the alt texts).
How am I supposed to change the colors of the buttons? Everything I've tried didn't seem to work.
Is there any way to provide German translations? I need my full application to be completely German.
Best regards
| You can change the background color of the buttons with css.
```
trix-toolbar .trix-button-group button {
background-color: green;
}
```
The button icons are images, so you would have to replace them in order to customize icon color, etc. For example, to remove the bold button icon with css:
```
trix-toolbar .trix-button-group button.trix-button--icon-bold::before {
background-image: none;
}
```
You can change the button tooltips (for translation or otherwise) with javascript by referring to the `data-trix-attribute` for the button you would like to change. For example, to change the bold button where the `data-trix-attribute` is set to "bold" (due to browser inconsistencies, it is best to set both the `Trix.config.lang` and the element `title` attribute):
```
Trix.config.lang.bold = 'Really Bold';
document.querySelector('button[data-trix-attribute="bold"]').setAttribute('title', 'Really Bold');
```
Following snippet illustrates the various changes above.
```
// hover over the now blank bold button in the toolbar to see the tooltip
Trix.config.lang.bold = 'Really Bold';
document.querySelector('button[data-trix-attribute="bold"]').setAttribute('title', 'Really Bold');
```
```
trix-toolbar .trix-button-group button {
background-color: green;
}
trix-toolbar .trix-button-group button.trix-button--icon-bold::before {
background-image: none;
}
```
```
<link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/trix/1.1.1/trix.css">
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/trix/1.1.1/trix.js"></script>
<trix-editor></trix-editor>
```
|
Issue with function Overloading in kotlin
I am trying to declare two suspend methods, one taking a list of String and one taking a list of PublishRequest objects as a parameter, but the IDE is giving an error for this.
The error says to either make one of the functions internal or remove suspend. But I want to use coroutines inside both of them.
```
override suspend fun publish(publishRequests: List<PublishRequest>) {
///code
}
suspend fun publish(events: List<String>) {
///code
}
```
The PublishRequest data class is internal. The issue only appears when we add the **publish(events: List<String>)** method. The code works fine with just **publish(publishRequests: List<PublishRequest>)**.
Can you explain why it is happening ?
| The problem you are facing is related to [type erasure](https://kotlinlang.org/docs/reference/generics.html#type-erasure).
The types `List<PublishRequest>` and `List<String>` are erased to `List<*>`, as consequence, you would have a JVM signature clash.
To solve your problem you have two different solutions.
1. Change their names and avoid a signature clash:
```
suspend fun publishRequests(publishRequests: List<PublishRequest>) {}
suspend fun publishEvents(events: List<String>) {}
```
2. Use a single function with a `reified` type and handle the different type classes inside that function:
```
suspend inline fun <reified T> publish(objects: List<T>) {
when {
PublishRequest::class.java.isAssignableFrom(T::class.java) -> { /* it's a list of PublishRequest */ }
T::class == String::class -> { /* it's a list of String */ }
}
}
```
|
Load a package only when needed in R package
I've got a package with a whole bunch of miscellaneous functions (see [What to do with imperfect-but-useful functions?](https://stackoverflow.com/questions/6828937/what-to-do-with-imperfect-but-useful-functions) ). Because the functions are not particularly related, they depend on a whole bunch of other packages. Often there will be just one function in the whole package which uses another package. Yet if I use Imports, Suggests, or Depends in the DESCRIPTION file, the whole list of packages will be loaded each time my package is loaded, even though very few of them are needed by any given user.
Is there any way to just load the dependencies only when a particular function is used? I could put a call to `library()` inside the functions themselves, but that seems like bad practice since it doesn't tell the package management system anything and therefore the user might not have it installed.
| You can use Suggests, and in the function that needs the package you can add code to `require()` said package and if not available throw an error. An example I am familiar with, the vegan package, has, in its DESCRIPTION
```
Depends: permute
Suggests: MASS, mgcv, lattice, cluster, scatterplot3d, rgl, tcltk
```
and on loading the package we have:
```
R> require(vegan)
Loading required package: vegan
Loading required package: permute
This is vegan 1.90-0
```
and `sessionInfo()` reports that none of the Suggested packages has yet been loaded/attached etc:
```
R> sessionInfo()
R version 2.13.1 Patched (2011-07-29 r56550)
Platform: x86_64-unknown-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_GB.utf8 LC_NUMERIC=C
[3] LC_TIME=en_GB.utf8 LC_COLLATE=en_GB.utf8
[5] LC_MONETARY=C LC_MESSAGES=en_GB.utf8
[7] LC_PAPER=en_GB.utf8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.utf8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] vegan_1.90-0 permute_0.5-0
loaded via a namespace (and not attached):
[1] tools_2.13.1
```
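For instance, a minimal sketch of the in-function guard described above (the function and package names are made up for illustration):

```
my_plot3d <- function(x, ...) {
  if (!require("scatterplot3d")) {
    stop("The 'scatterplot3d' package is needed for this function. Please install it.")
  }
  # ...code that actually uses scatterplot3d goes here...
}
```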
|
How to create a static mutable hashmap?
How do I create a static mutable hashmap? I am ok with unsafe code.
The normal static does not allow globals with constructors.
As an example, I want what is at <https://gist.github.com/Kimundi/8782487> but HASHMAP to be mutable.
I understand that global shared mutable state is not what very rust-ish but I just want to know if such a thing is possible.
| For maintained answers, see [How do I create a global, mutable singleton?](https://stackoverflow.com/q/27791532/155423), as this question should have been marked as a duplicate.
---
Seeing as how you already have a solution for a global object that is non-mutable, perhaps you can use one of the [`cell`](https://doc.rust-lang.org/std/cell/index.html) containers to add interior mutability?
Realistically, this sounds like a *bad idea*. Global mutable state is problematic. Why can't you pass in a mutable hashmap to the methods / objects that need it?
You may also want to check out [the implementation of `stdin`](https://github.com/rust-lang/rust/blob/4b40bc85cbc1d072179c92ce01655db0272aa598/src/libstd/io/stdio.rs#L215-L245), which provides safe access to a true global singleton.
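For illustration only, here is a minimal sketch that combines lazy initialization with interior mutability; it assumes the external `lazy_static` crate and wraps the map in a `Mutex` so it can be mutated safely:
```
#[macro_use]
extern crate lazy_static;

use std::collections::HashMap;
use std::sync::Mutex;

lazy_static! {
    // a global, mutable map guarded by a Mutex for safe shared access
    static ref HASHMAP: Mutex<HashMap<u32, String>> = Mutex::new(HashMap::new());
}

fn main() {
    HASHMAP.lock().unwrap().insert(1, "one".to_string());
    println!("{:?}", HASHMAP.lock().unwrap().get(&1));
}
```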
|
Drawing tangent plot in Python (matplotlib)
Today I decided to write a simple program in Python, just to practice before an exam. First, I wanted to draw sin and cos plots, which wasn't so hard. But then I decided to challenge myself and draw a tangent plot.
```
import pylab as p
x= p.arange(-1.0,1.0,0.1)
y= (p.sin(2*p.pi*x)) / (p.cos(2*p.pi*x))
p.plot(x,y,'g-',lw=1)
p.show()
```
It returns... ugh... this:
![enter image description here](https://i.stack.imgur.com/fpANl.png)
As you can see, it looks more like an EKG plot than a tangent plot. Does anyone know what's wrong?
| If you increase the number of points in `x`,
```
import pylab as p
import numpy as np
x = p.linspace(-1.0, 1.0, 1000)
y = (p.sin(2 * p.pi * x)) / (p.cos(2 * p.pi * x))
p.plot(x, y, 'g-', lw=1)
p.show()
```
you get something like this:
![enter image description here](https://i.stack.imgur.com/vklgJ.png)
Notice how large the `y-range` is getting. Matplotlib is not able to show you much of the small values in the tangent curve because the range is so large.
The plot can be improved by ignoring the extremely large values near the asymptotes. Using [Paul's workaround](https://stackoverflow.com/a/2542065/190597) to handle asymptotes,
```
import pylab as p
import numpy as np
x = p.linspace(-1.0, 1.0, 1000)
y = (p.sin(2 * p.pi * x)) / (p.cos(2 * p.pi * x))
tol = 10
y[y > tol] = np.nan
y[y < -tol] = np.nan
p.plot(x, y, 'g-', lw=1)
p.show()
```
you get
![enter image description here](https://i.stack.imgur.com/XzLmQ.png)
|
ActionBar Action Items not showing
I have some very simple code, but a problem that I cannot solve even after long Google searching. I want to have some Action Items in my ActionBar, but whenever I run the app, all I see is an ActionBar with the app logo and the title, **but no Action Items**.
It would be great, if you could help me, probably I am just missing the most obvious thing ;)
That's the method in my ActionBarActivity:
```
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu items for use in the action bar
MenuInflater inflater = getMenuInflater();
inflater.inflate(R.menu.main_activity_actions, menu);
return super.onCreateOptionsMenu(menu);
}
```
And this is the relevant .xml file for the ActionBar (named main\_activity\_actions.xml):
```
<menu xmlns:android="http://schemas.android.com/apk/res/android" >
<item android:id="@+id/action_search"
android:icon="@drawable/ic_action_search"
android:title="@string/action_search"
android:showAsAction="always" />
<item android:id="@+id/action_compose"
android:icon="@drawable/ic_action_compose"
android:title="@string/action_compose"
android:showAsAction="always"/>
</menu>
```
| This is because if you use the support AppCompat ActionBar library and ActionBarActivity, you should create your menus in a different way than the standard way of creating XML menus in ActionBarSherlock or the default ActionBar.
So try this code:
```
<menu xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto">
<item android:id="@+id/action_search"
android:icon="@drawable/ic_action_search"
android:title="@string/action_search"
app:showAsAction="always" />
<item android:id="@+id/action_compose"
android:icon="@drawable/ic_action_compose"
android:title="@string/action_compose"
app:showAsAction="always"/>
</menu>
```
and report if this works.
Note: check the extra prefix `xmlns:app` which should be used instead!
|
How to tell if linux disk IO is causing excessive (> 1 second) application stalls
I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3 SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to ext3/vxfs (Veritas File System) functionality (and/or how it interacts with the OS) is the culprit.
What steps can I take to confirm or refute this theory? I am aware of `iostat` and `/proc/diskstats` as starting points.
**Revised title to de-emphasize journaling and emphasize "stalls"**
I have done some googling and found at least one article that seems to describe behavior like I am observing: [Solving the ext3 latency problem](http://lwn.net/Articles/328363/)
**Additional Information**
- Red Hat Enterprise Linux Server release 5.3 (Tikanga)
- Kernel: `2.6.18-194.32.1.el5`
- Primary application disk is fiber-channel SAN: `lspci | grep -i fibre` >> `14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)`
- Mount info: `type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0`
- `cat /sys/block/VxVM123456/queue/scheduler` >> `noop anticipatory [deadline] cfq`
| My guess is that there's some other process that hogs the disk I/O capacity for a while. `iotop` can help you pinpoint it, if you have a recent enough kernel.
If this is the case, it's not about the filesystem, much less about journalling. It's the I/O scheduler that is responsible for arbitrating between competing applications. An easy test: check the current scheduler and try a different one. This can be done on the fly, without restarting. For example, on my desktop, to check the first disk (`/dev/sda`):
```
cat /sys/block/sda/queue/scheduler
=> noop deadline [cfq]
```
shows that it's using CFQ, which is a good choice for desktops but not so much for servers. It's better to set 'deadline':
```
echo 'deadline' > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler
=> noop [deadline] cfq
```
and wait a few hours to see if it improves. If so, set it permanently in the startup scripts (depends on distribution)
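For example, two hedged options for making it persistent on a RHEL 5-era system like the one described above (the device name and kernel version below are just placeholders):
```
# option 1: re-apply the scheduler at every boot, e.g. from /etc/rc.local
echo 'deadline' > /sys/block/sda/queue/scheduler

# option 2: set the default elevator on the kernel line in grub.conf
#   kernel /vmlinuz-2.6.18-... ro root=/dev/... elevator=deadline
```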
|
Different timing functions for different parts of css3 keyframe animation? (accurate bounce)
Is this possible? I'm trying to recreate a ball dropping onto the screen, and I have an animation like this:
```
@keyframes bounce {
20%, 40%, 60%, 74%, 84%, 92%, 100% {
transform: translate(0, 0);
}
0% {
transform: translate(0, -100vh);
}
30% {
transform: translate(0, -40vh);
}
50% {
transform: translate(0, -20vh);
}
68% {
transform: translate(0, -10vh);
}
80% {
transform: translate(0, -5vh);
}
88% {
transform: translate(0, -2vh);
}
96% {
transform: translate(0, -1vh);
}
}
```
and this, when implemented like this:
```
.ball {
animation: bounce 3s cubic-bezier(0.895, 0.03, 0.685, 0.22) 0s 1 normal forwards;
}
```
produces something that looks like this:
[![bounce!](https://i.stack.imgur.com/1sq9Q.gif)](https://i.stack.imgur.com/1sq9Q.gif)
This is.. *okay*, but not ideal. I'd prefer to do something like:
[![actual bounce](https://i.stack.imgur.com/qNzy1.png)](https://i.stack.imgur.com/qNzy1.png)
But in order to do this I need to have a different timing function for the initial 0-> 20% compared to the rest of them. Is there a way to do different timing functions for different parts of a keyframe animation? Or perhaps a different way to get an accurate *bouncing* animation that I'm not thinking of? Any help would be appreciated!
**edit:** added a fiddle [here](https://jsfiddle.net/nfdp69pw/).
| Rather than specifying a [timing function](https://developer.mozilla.org/en-US/docs/Web/CSS/animation-timing-function) for the **entire animation**, you can specify one for **each keyframe**. The function represents how the values are interpolated from the beginning to the end of the respective keyframe.
Here's an example by adding an `ease` function to the keyframes `20%, 40%, 60%, 74%, 84%, 92%, 100%`.
```
@keyframes bounce {
20%, 40%, 60%, 74%, 84%, 92%, 100% {
transform: translate(0, 0);
animation-timing-function: ease;
}
0% {
transform: translate(0, -100vh);
}
30% {
transform: translate(0, -40vh);
}
50% {
transform: translate(0, -20vh);
}
68% {
transform: translate(0, -10vh);
}
80% {
transform: translate(0, -5vh);
}
88% {
transform: translate(0, -2vh);
}
96% {
transform: translate(0, -1vh);
}
}
.ball {
background: #ff0000;
border-radius: 50%;
position: absolute;
top: 500px;
width: 50px;
height: 50px;
animation: bounce 3s cubic-bezier(0.895, 0.03, 0.685, 0.22) 0s 1 normal forwards;
}
```
```
<div class="ball"> </div>
```
|
Missing Coordinates. Basic Trigonometry Help
please refer to my quick diagram attached below.
What I'm trying to do is get the coordinates of the yellow dots by using the angle from the red dots' known coordinates. Assuming each yellow dot is about 20 pixels away from the x:50/y:250 red dot at a right angle (I think that's what it's called), how do I get their coordinates?
I believe this is very basic trigonometry and I should use Math.tan(), but they didn't teach us much math in art school.
[alt text http://www.freeimagehosting.net/uploads/e8c848a357.jpg](http://www.freeimagehosting.net/uploads/e8c848a357.jpg)
| You don't actually need trig for this one. Simply use slopes, or changes in `x` and `y`.
Given a line of slope `m = y/x`, the line perpendicular to that line has slope `-1/m`, or `-x/y`.
The slope m between the red dots is `-150/150`, or `-1/1`. I noticed your positive `y` points down.
Therefore, the perpendicular slope is `1/1`. Both your x and y change at the same speed, by the same amount.
Once you know that, it should be pretty easy to figure out the rest. Since they're aligned at a 45-degree angle, the edge ratio of the `45-45-90` triangle is `1 : 1 : sqrt(2)`. So if your length is `20`, the individual x and y change would be `20/sqrt(2)`, or roughly `14` in integers.
So, your two yellow dots would be at `(36, 236)`, and `(64, 264)`. If the lines are not aligned to a convenient degree, you would have to use `arctan()` or something similar, and get the angle between the line and the horizontal line, so you can figure out the ratio of x and y change.
I hope my answer wasn't too hard to follow. For a more general solution, see Troubadour's answer.
---
**Edit:** Since the OP said the lower red dot is actually rotating around the upper red dot, we will need a more flexible solution instead.
I'm going to extend this answer from Troubadour's, since I'm doing exactly the same thing. Please refer to his post as you read mine.
**1. Get the vector from origin (200, 100) to rotating point (50, 250):**
```
vector = (200 - 50, 100 - 250) = (150, -150)
```
**2. Rotate your vector by swapping the x and y, and negate x to get the new vector:**
```
vector = (150, -150) => swap => (-150, 150) => negate x => (150, 150)
```
**3. Get the unit vector (of length 1) from the new vector:**
```
vector = vector / length(vector)
= (150 / length(vector), 150 / length(vector))
~= (0.7071, 0.7071)
where
length(vector) = sqrt(150^2 + 150^2) ~= 212.2320
```
**4. Get the displacement vector of length 20, by multiplying the unit vector.**
```
displacement_vector = vector * 20
= (0.7071 * 20, 0.7071 * 20)
= (14.1421, 14.1421)
```
**5. Add/Subtract this vector to/from your rotating vector (point):**
```
yellow_1 = (50, 250) + (14.1421, 14.1421) ~= (64, 264)
yellow_2 = (50, 250) - (14.1421, 14.1421) ~= (36, 236)
```
I hope the above steps help you with formulating your code. It doesn't matter what angle it's at; the same steps apply.
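For reference, here is a small Python sketch of those five steps; the variable names are mine, and the numbers are the ones from the diagram:
```
import math

origin = (200, 100)      # fixed red dot
rotating = (50, 250)     # red dot that rotates around the origin
offset = 20              # distance from the rotating dot to each yellow dot

# 1. vector between the two red dots
vx, vy = origin[0] - rotating[0], origin[1] - rotating[1]   # (150, -150)
# 2. rotate it 90 degrees: swap the components and negate one
px, py = -vy, vx                                            # (150, 150)
# 3. turn it into a unit vector
length = math.hypot(px, py)
ux, uy = px / length, py / length
# 4. scale it to the desired length
dx, dy = ux * offset, uy * offset                           # ~(14.14, 14.14)
# 5. add/subtract it from the rotating red dot
yellow_1 = (rotating[0] + dx, rotating[1] + dy)             # ~(64, 264)
yellow_2 = (rotating[0] - dx, rotating[1] - dy)             # ~(36, 236)
```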
|
GPU performing slower than CPU for Pytorch on Google Colaboratory
The GPU trains this network in about 16 seconds. The CPU in about 13 seconds. (I am uncommenting/commenting the appropriate lines to do the test.) Can anyone see what's wrong with my code or pytorch installation? (I have already checked that the GPU is available, and that there is sufficient memory available on the GPU.)
```
from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
print(accelerator)
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.0-{platform}-linux_x86_64.whl torchvision
print("done")
#########################
import torch
from datetime import datetime
startTime = datetime.now()
dtype = torch.float
device = torch.device("cpu") # Comment this to run on GPU
# device = torch.device("cuda:0") # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1024, 128, 8
# Create random Tensors to hold input and outputs.
x = torch.randn(N, D_in, device=device, dtype=dtype)
t = torch.randn(N, D_out, device=device, dtype=dtype)
# Create random Tensors for weights.
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)
w3 = torch.randn(D_out, D_out, device=device, dtype=dtype, requires_grad=True)
learning_rate = 1e-9
for i in range(10000):
y_pred = x.mm(w1).clamp(min=0).mm(w2).clamp(min=0).mm(w3)
loss = (y_pred - t).pow(2).sum()
if i % 1000 == 0:
print(i, loss.item())
loss.backward()
# Manually update weights using gradient descent
with torch.no_grad():
w1 -= learning_rate * w1.grad
w2 -= learning_rate * w2.grad
# Manually zero the gradients after updating weights
w1.grad.zero_()
w2.grad.zero_()
print(datetime.now() - startTime)
```
| I see you're timing things you shouldn't be timing (definition of dtype, device, ...). What's interesting to time here is the creation of the input, output and weight tensors.
```
startTime = datetime.now()
# Create random Tensors to hold input and outputs.
x = torch.randn(N, D_in, device=device, dtype=dtype)
t = torch.randn(N, D_out, device=device, dtype=dtype)
torch.cuda.synchronize()
print(datetime.now()-startTime)
# Create random Tensors for weights.
startTime = datetime.now()
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)
w3 = torch.randn(D_out, D_out, device=device, dtype=dtype, requires_grad=True)
torch.cuda.synchronize()
print(datetime.now()-startTime)
```
and the training loop
```
startTime = datetime.now()
for i in range(10000):
y_pred = x.mm(w1).clamp(min=0).mm(w2).clamp(min=0).mm(w3)
loss = (y_pred - t).pow(2).sum()
if i % 1000 == 0:
print(i, loss.item())
loss.backward()
# Manually update weights using gradient descent
with torch.no_grad():
w1 -= learning_rate * w1.grad
w2 -= learning_rate * w2.grad
# Manually zero the gradients after updating weights
w1.grad.zero_()
w2.grad.zero_()
torch.cuda.synchronize()
print(datetime.now() - startTime)
```
# Why the GPU is slower
I run it on my machine with a GTX1080 and a very good CPU, so the absolute timing is lower, but the explanation should still be valid. If you open a Jupyter notebook and run it on the CPU:
```
0:00:00.001786 time to create input/output tensors
0:00:00.003359 time to create weight tensors
0:00:04.030797 time to run training loop
```
Now you set device to `cuda` and we call this "cold start" (nothing has been previously run on the GPU in this notebook)
```
0:00:03.180510 time to create input/output tensors
0:00:00.000642 time to create weight tensors
0:00:03.534751 time to run training loop
```
You see that the time to run the training loop is reduced by a small amount, but there is an overhead of 3 seconds because you need to move the tensors from CPU to GPU RAM.
If you run it again without closing the Jupyter notebook:
```
0:00:00.000421 time to create input/output tensors
0:00:00.000733 time to create weight tensors
0:00:03.501581 time to run training loop
```
The overhead disappears, because Pytorch uses a [caching memory allocator](https://pytorch.org/docs/master/notes/cuda.html#memory-management) to speed things up.
You can notice that the speedup you get on the training loop is very small; this is because the operations you're doing are on tensors of pretty small size. When dealing with small architectures and data, I always run a quick test to see if I actually gain anything by running it on the GPU.
For example if I set `N, D_in, H, D_out = 64, 5000, 5000, 8`, the training loop runs in 3.5 seconds on the GTX1080 and in 85 seconds on the CPU.
|
Fasthttp + fasthttprouter, trying to write middleware
I'm currently trying to write some middleware to work with fasthttp and fasthttprouter. And I'm stuck.
```
func jwt(h fasthttprouter.Handle) fasthttprouter.Handle {
myfunc := func(ctx *fasthttp.RequestCtx, _ fasthttprouter.Params) {
fmt.Println(string(ctx.Request.Header.Cookie("Authorization")))
}
return myfunc
}
```
How do I run the actual handler now? I feel like I'm missing something very simple.
I've read through this blog post: [Middleware in Golang](https://medium.com/@matryer/writing-middleware-in-golang-and-how-go-makes-it-so-much-fun-4375c1246e81#.ssb07reu3/). But i'm lost.
Any ideas?
Regards
| for example, let us create a middleware function that will handle CORS using:
*[github.com/buaazp/fasthttprouter](https://github.com/buaazp/fasthttprouter)* and *[github.com/valyala/fasthttp](https://github.com/valyala/fasthttp)*
```
var (
corsAllowHeaders = "authorization"
corsAllowMethods = "HEAD,GET,POST,PUT,DELETE,OPTIONS"
corsAllowOrigin = "*"
corsAllowCredentials = "true"
)
func CORS(next fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
ctx.Response.Header.Set("Access-Control-Allow-Credentials", corsAllowCredentials)
ctx.Response.Header.Set("Access-Control-Allow-Headers", corsAllowHeaders)
ctx.Response.Header.Set("Access-Control-Allow-Methods", corsAllowMethods)
ctx.Response.Header.Set("Access-Control-Allow-Origin", corsAllowOrigin)
next(ctx)
}
}
```
Now we chain this middleware function on our Index handler and register it on the router.
```
func Index(ctx *fasthttp.RequestCtx) {
fmt.Fprint(ctx, "some-api")
}
func main() {
router := fasthttprouter.New()
router.GET("/", Index)
if err := fasthttp.ListenAndServe(":8181", CORS(router.Handler)); err != nil {
log.Fatalf("Error in ListenAndServe: %s", err)
}
}
```
|
Can I use jQuery Mobile if I am developing a native app?
I am new to jQuery and mobile apps development. I know the features of jQuery Mobile. I want to know where and why to use it. Can I use jQuery Mobile if I am developing a native app?
| [jQuery Mobile](http://jquerymobile.com/) is:
>
> A unified, HTML5-based user interface system for all popular mobile device platforms, built on the rock-solid jQuery and jQuery UI foundation.
>
>
>
Its use is to provide a consistent UI experience across mobile devices for **web applications**. Web applications are applications that are accessible via a web browser through the Internet. Some web applications are intended for local use, but you would still need a browser to access them.
So if your intention is to build a native mobile application (your question wasn't very clear), jQuery Mobile is not of much use to you. But if you are developing a web application that targets mobile devices, jQuery Mobile is a valid option. Whether it's better than other similar options is not a question that's considered on-topic here; you should decide for yourself. Personally, I like it and use it, but I don't have much experience in the mobile domain, so don't take my word for it.
|
SQL Query using Partition By
I have the following table, named JobTitle:
```
JobID LanguageID
-----------------
1 1
1 2
1 3
2 1
2 2
3 4
4 5
5 2
```
I am selecting all records from the table except the duplicate JobIDs for which the count > 1. I am selecting only one record (the first row) from the duplicate JobIDs.
Now I am passing LanguageID as a parameter to the stored procedure, and I want to select the duplicate JobID for that LanguageID along with the other records as well.
If I have passed LanguageID as 1, then the output should come as follows:
```
JobID LanguageID
-----------------
1 1
2 1
3 4
4 5
5 2
```
I have tried using following query.
```
with CTE_RN as
(
SELECT ROW_NUMBER() OVER(PARTITION BY JobTitle.JobID ORDER BY JobTitle.JobTitle) AS RN
FROM JobTitle
INNER JOIN JobTitle_Lang
ON JobTitle.JobTitleID = JobTitle_Lang.JobTitleID
)
```
But I am unable to use a WHERE clause in the above query.
Should a different approach be followed? Or else, how can I modify the query to get the desired output?
|
```
with CTE_RN as
(
SELECT
JobID, LanguageID,
ROW_NUMBER() OVER(PARTITION BY JobTitle.JobID ORDER BY JobTitle.JobTitle) AS RN
FROM JobTitle
INNER JOIN JobTitle_Lang ON JobTitle.JobTitleID = JobTitle_Lang.JobTitleID
)
select *
from CTE_RN
where RN = 1 or LanguageID = @LanguageID
```
**update**
simplified a bit (join removed), but you'll get the idea:
```
declare @LanguageID int = 2
;with cte_rn as
(
select
JobID, LanguageID,
row_number() over(
partition by JobTitle.JobID
order by
case when LanguageID = @LanguageID then 0 else 1 end,
LanguageID
) as rn
from JobTitle
)
select *
from cte_rn
where rn = 1
```
**`[sql fiddle demo](http://sqlfiddle.com/#!3/846d1/2)`**
|
Why doesn't that template function compile?
This is a pretty short snippet that just won’t compile with g++ 4.7.1 (it won’t compile either with gcc 4.6.3 by the way).
```
#include <iostream>
template<typename T>
struct Foo
{
template<typename U>
friend std::ostream& operator<<(Foo&, U&);
};
template<typename T, typename U>
std::ostream& operator<<(Foo<T> foo, U& u)
{
std::cout << u;
return std::cout;
}
int main()
{
Foo<int> f;
f << "bar";
return 0;
}
```
And this is what gcc 4.7.1 outputs (4.6.3 says almost the same thing).
>
> /tmp/ccNWJW6X.o: In function `main':
> main.cpp:(.text+0x15): undefined reference to `std::basic\_ostream >& operator<< (Foo&, char const (&) [4])'
> collect2: ld returned 1 exit status
>
Could anyone explain why?
**EDIT**
I also tried with clang 3.1, and it says exactly the same thing.
| Friendship with templates can be a bit complicated... Let's see what your code does:
```
template<typename T>
struct Foo {
template<typename U>
friend std::ostream& operator<<(Foo&, U&); // [1]
};
template<typename T, typename U>
std::ostream& operator<<(Foo<T> foo, U& u) { // [2]
std::cout << u;
return std::cout;
}
```
When you instantiate `Foo` with a type, for example `int` the friend declaration in [1] declares a template function:
```
template <typename U>
std::ostream& operator<<(Foo<int>&,U&);
```
But that function does not exist anywhere, what you are providing in [2] is a template that takes two arguments:
```
template<typename T, typename U>
std::ostream& operator<<(Foo<T> foo, U& u);
```
The key point is that the friend declaration is processed while the template is being instantiated, and at that time `Foo` represents the type obtained with the current instantiation.
There are different options for what you want to do, the simplest is changing the friend declaration to:
```
template<typename W, typename U>
friend std::ostream& operator<<(Foo<W> foo, U& u);
```
Which declares a template taking two arguments (both `W` and `U` are unbound here), and matches your definition at namespace level.
Another option is defining the friend function inside the class template definition, in which case you can maintain the original signature. For more information on the different alternatives, take a look at this other [answer](https://stackoverflow.com/a/4661372/36565)
|
Finding process count in Linux via command line
I was looking for the best way to find the number of running processes with the same name via the command line in Linux. For example, I might want to find the number of bash processes running and get "5". Currently I have a script that does a 'pidof ' and then does a count on the tokenized string. This works fine, but I was wondering if there is a better way that can be done entirely via the command line. Thanks in advance for your help.
| On systems that have `pgrep` available, the `-c` option returns a count of the number of processes that match the given name
```
pgrep -c command_name
```
Note that this is a `grep`-style match, not an exact match, so e.g. `pgrep sh` will also match `bash` processes. If you want an exact match, also use the `-x` option.
If `pgrep` is not available, you can use `ps` and `wc`.
```
ps -C command_name --no-headers | wc -l
```
The `-C` option to `ps` takes `command_name` as an argument, and the program prints a table of information about processes whose executable name matches the given command name. This is an exact match, not `grep`-style. The `--no-headers` option suppresses the headers of the table, which are normally printed as the first line. With `--no-headers`, you get one line per process matched. Then `wc -l` counts and prints the number of lines in its input.
|
Removing a single row from DataTable using Ajax
I have a JSF view that lists items in a collection in a Primefaces `DataTable`. The rightmost columns contain remove buttons. When a remove button is clicked, it is supposed to make an Ajax call, remove the corresponding item from the session variable `Cart` and update the view in-place. I would like the request and the view change to be as minimal as possible.
Here is what I have for this purpose:
```
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://java.sun.com/jsf/html"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:p="http://primefaces.org/ui">
<h:head>
<title>Register user</title>
</h:head>
<h:body>
<f:view>
<h:form id="itemsForm">
<p:outputPanel id="items">
<p:dataTable value="#{cart.itemList}" var="item">
<p:column>
<f:facet name="header">
<h:outputText value="name" />
</f:facet>
<h:outputText value="#{item.product.description}" />
</p:column>
<p:column>
<f:facet name="header">
<h:outputText value="quantity" />
</f:facet>
<h:outputText value="#{item.quantity}" />
</p:column>
<p:column>
<f:facet name="header">
<h:outputText value="" />
</f:facet>
<p:commandButton icon="ui-icon-close" title="remove from cart">
<p:ajax listener="#{cart.removeItem}"
update="form:itemsForm"
process="@this" />
</p:commandButton>
</p:column>
<f:facet name="footer">
Total amount: ${cart.totalAmount}
</f:facet>
</p:dataTable>
</p:outputPanel>
</h:form>
</f:view>
</h:body>
</html>
```
Accordingly, I have the following method in `Cart.java`
```
public void removeItem() {
System.out.println("REMOVE REQUEST ARRIVED");
}
```
However, the `removeItem` method isn't even executing when I click a remove button.
So my questions are:
**1)** What is wrong with my Ajax call? What changes should I make to my XHTML?
**2)** How do I handle the request in the `removeItem` method and return a response?
**3)** How do I update the `footer`, which displays the totalAmount?
| You can pass `#{item}` as a parameter of your method call in the `actionListener`.
Your .xhtml page should look like this:
```
<p:dataTable id="cartTable" value="#{cart.itemList}" var="item">
<p:column>
<f:facet name="header">
<h:outputText value="" />
</f:facet>
<p:commandButton icon="ui-icon-close" title="remove from cart"
actionListener="#{cart.removeItem(item)}" update="cartTable" />
</p:column>
</p:dataTable>
```
And this is the method `removeItem` of your `ManagedBean`:
```
@ManagedBean
@ViewScoped
public class Cart {
private List<Item> itemList;
public void removeItem(Item item) {
itemList.remove(item);
}
}
```
|
How can I set the canvas background before downloading
I am creating a drawing app, and using this function to download the canvas image.
```
function download() {
var dt = canvas.toDataURL('image/png');
this.href = dt;
};
```
I want to set the canvas background to `white` before downloading because on mobile, the images are very distorted and black. Any ideas?
| You may want to draw a white rectangle the size of the entire canvas, beneath the actual content of the canvas.
```
// get the canvas 2d context
var ctx = canvas.getContext('2d');
// set the ctx to draw beneath your current content
ctx.globalCompositeOperation = 'destination-over';
// set the fill color to white
ctx.fillStyle = 'white';
// apply fill starting from point (0,0) to point (canvas.width,canvas.height)
// these two points are the top left and the bottom right of the canvas
ctx.fillRect(0, 0, canvas.width, canvas.height);
```
You have to apply these lines before generating your toDataUrl() stream.
Idea taken from: <http://www.mikechambers.com/blog/2011/01/31/setting-the-background-color-when-generating-images-from-canvas-todataurl/>
|
Unable to view arrays in Visual Studio C++ debugger?
I have the following code:
```
char *DecompressChunk(Node *startNode, int arraySize)
{
char *cubeArray = new char[arraySize];
```
When I put a breakpoint down after that, with arraySize being 18, and hover over the array to try and view it, only the first element shows up and nothing else. I can print the elements and it works fine, but I cannot view them all with VS. How do you set it so you can view all of them, or can you?
| You can tell the debugger how large the array is by adding a comma followed by the size in the watch window (this is called a format specifier):
```
cubeArray,18
```
You can't use a variable or anything as the array size.
Here are some [other tricks](http://blogs.msdn.com/b/vcblog/archive/2006/08/04/689026.aspx).
---
This doesn't help if you just want the tool-tips to show you more; it can only be used in watch windows.
Although Microsoft probably could improve tool-tips for arrays in some special cases, in general it would be very difficult due to the nature of arrays in C++; pointers to elements of an array have no way to know the bounds of that array. The effect this has on the debugger is probably one of the least significant problems. Other problems this creates impact the security and correctness of programs.
If you avoid raw arrays in favor of smarter types then the debugger can provide better tool-tips. The debugger already knows how to display `std::vector`, for example.
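For illustration only, a minimal hedged sketch of that idea (this is not the original function, just a standalone example): with `std::vector`, the size travels with the object, so the debugger can show every element without a format specifier.
```
#include <vector>

std::vector<char> MakeChunk(int arraySize)
{
    // The debugger can display all elements of the vector directly;
    // no "cubeArray,18" format specifier is needed.
    std::vector<char> cubeArray(arraySize, 0);
    return cubeArray;
}
```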
|
ls command doesn't show folder but I can't create it because folder already exists
I'm using Ubuntu 16.04.
Step 1) I logged into my `root` user account.
Step 2) I used `cd` to navigate to a different user account's home directory.
Step 3) I typed `ls` to examine the contents of that directory.
Step 4) The contents came back as empty.
Step 5) I typed `mkdir .ssh` to create a directory.
Result) `mkdir: cannot create directory '.ssh': File exists`
**Question:** Why is the directory listed as empty if an .ssh folder exists inside of it?
**-- update --**
I logged into root because this is a test server. I'm repeatedly creating and destroying it.
| `ls` by itself does not show hidden directories (hidden directories and files are ones that start with a `.`, such as `.ssh`)
Try using `ls -a` in the directory.
From the ls manpage:
>
> -a, --all
>
>
> do not ignore entries starting with .
>
>
>
As noted in the comments, "hidden" directories and files are not technically a thing, there is just code built into a lot of common tools that treat `.` and `..` with special meaning, the result being that `.` is usually considered "hidden" by most tools. The reason I used this term is because it's common to hear it referred to that way.
Additionally `.` and `..` usually have special meaning to most filesystems, indicating current directory and parent directory, respectively.
|
How do I access a movieClip on the stage using as3 class?
```
public class MyClass extends MovieClip {
public function MyClass():void {
my_mc.addEventListener(MouseEvent.CLICK, action);
}
private function action(e:MouseEvent):void {
trace("cliked");
}
}
```
**Timeline code**
```
var myClass:MyClass = new MyClass();
addChild(myClass);
```
I'm not able to access the `my_mc` movie clip (placed in the FLA). How do I access it?
| Try this:
```
public class MyClass extends MovieClip
{
public function MyClass()
{
if (stage) init();
else addEventListener(Event.ADDED_TO_STAGE, init);
}// end function
private function init(e:Event = null):void
{
removeEventListener(Event.ADDED_TO_STAGE, init);
var myMc:MovieClip = stage.getChildByName("my_mc") as MovieClip;
// var myMc:MovieClip = parent.getChildByName("my_mc") as MovieClip;
myMc.addEventListener(MouseEvent.CLICK, onMyMcClick)
}// end function
private function onMyMcClick(e:MouseEvent)
{
trace("clicked");
}// end function
}// end class
```
If this doesn't work (which I don't think it will), it's because your `my_mc` display object isn't a child of the stage, but a child of an instance of `MainTimeline`. If so, then simply comment out the following statement in the above code:
```
var myMc:MovieClip = stage.getChildByName("my_mc") as MovieClip;
```
and uncomment the following statement in the above code:
```
// var myMc:MovieClip = parent.getChildByName("my_mc") as MovieClip;
```
If my assumption is correct, the `my_mc` and `myClass` display objects share the same parent.
|
Post the Kendo Grid Data on Form Submit
I want to Post the data from a Kendo Grid to the server, and save it to a database.
For this I have used form like so:
```
@using (Html.BeginForm("MainDocumentSave","Document"))
{
<div class="row-fluid">
<div class="span10">
@(Html.Kendo().Grid<Invoice.Models.ViewModels.SegmentViewModel>()
.Name("Segment")
.TableHtmlAttributes(new { style = "height:20px; " })
.Columns(columns =>
{
columns.Bound(p => p.AirlineShortName).EditorTemplateName("AirlineEditor").Title("Airline").ClientTemplate("#=AirlineName#").Width(5);
columns.Bound(p => p.DepartureDate).Width(9);
columns.Bound(p => p.Arrives).EditorTemplateName("ArrivalLocation").Title("Arrival").ClientTemplate("#=Arrives#").Width(5);
columns.Bound(p => p.ArrivalDate).Width(7);
columns.Bound(p => p.FlightNumber).Width(8);
})
.Editable(editable => editable.Mode(GridEditMode.InCell))
.Navigatable()
.Sortable()
.Scrollable(scr => scr.Height(200))
.Scrollable()
.DataSource(dataSource => dataSource
.Ajax()
.Batch(true)
.ServerOperation(false)
.Events(events => events.Error("error_handler"))
.Model(model => model.Id(p => p.AirlineName))
.Create("Editing_Create", "Grid")
.Read("Segment_Read", "Document")
.Update("Editing_Update", "Grid")
.Destroy("Editing_Destroy", "Grid")
)
)
</div>
</div>
<button type="submit" class="btn btn-primary"> Save Segments</button>
}
```
But after submitting, the data inside the Kendo Grid is not posted. How can I post the Kendo Grid data to the server?
| The grid data isn't in form elements. The form elements appear only when a cell is being edited, and are removed afterwards. You can't post the data to the server by using a form submit button.
The proper way to do this would be by adding the 'save' command button that the grid provides itself:
```
@(Html.Kendo().Grid<Invoice.Models.ViewModels.SegmentViewModel>()
.Name("Segment")
.ToolBar(toolbar => {
toolbar.Save(); // add save button to grid toolbar
})
// ... rest of options ...
```
Or by calling [saveChanges()](http://docs.kendoui.com/api/web/grid#methods-saveChanges) on the Grid widget:
```
<button type="button" id="save">Save Segments</button>
$("#save").on("click", function () {
$("#Segment").data("kendoGrid").saveChanges();
});
```
|
How to pickle or store Jupyter (IPython) notebook session for later
Let's say I am doing a larger data analysis in a Jupyter/IPython notebook, with lots of time-consuming computations done. Then, for some reason, I have to shut down the local Jupyter server, but I would like to return to doing the analysis later, without having to go through all the time-consuming computations again.
---
What I would love to do is `pickle` or store the whole Jupyter session (all pandas dataframes, np.arrays, variables, ...) so I can safely shut down the server knowing I can return to my session in exactly the same state as before.
**Is it even technically possible? Is there a built-in functionality I overlooked?**
---
**EDIT:** based on [this](https://stackoverflow.com/a/634581/4050925) answer there is a `%store` [magic](http://ipython.org/ipython-doc/rel-0.12/config/extensions/storemagic.html) which should be "lightweight pickle". However you have to store the variables manually like so:
`#inside a ipython/nb session`
`foo = "A dummy string"`
`%store foo`
*closing session, restarting kernel*
`%store -r foo` # r for refresh
`print(foo) # "A dummy string"`
which is fairly close to what I would want, but having to do it manually and being unable to distinguish between different sessions makes it less useful.
| I think [**Dill**](https://github.com/uqfoundation/dill) (`pip install dill`) answers your question well.
Use [`dill.dump_session`](https://dill.readthedocs.io/en/latest/#dill.dump_session) to save a Notebook session:
```
import dill
dill.dump_session('notebook_env.db')
```
Use [`dill.load_session`](https://dill.readthedocs.io/en/latest/#dill.load_session) to restore a Notebook session:
```
import dill
dill.load_session('notebook_env.db')
```
([source](https://www.reddit.com/r/IPython/comments/6reiqp/how_can_i_save_and_load_the_state_of_the_kernel/dl6f2yn))
|
How do I reconfigure my computer after an OS reinstall?
Every so often I am required to reinstall a *standardized* image on my job computer (basically a ghosting). When the process is complete I have to reinstall all my non-image software manually (development environments, non-MS browsers, assorted tools and utilities, etc.). Afterwards I have to manually reconfigure all settings and configurations in all programs. In some programs I can export settings for later import, but many times that's not so easy. Either way, it's basically a whole day's work for me to reconfigure my computer back to my preferred setup.
Is there an easier way to do it? Normally I'd use some kind of imaging, but that option is obviously out of the question. Maybe there is a utility or a set of programs that can assist me in this work: tracking, backing up and restoring registry settings, configuration files, software folders, user documents, etc.
**Edit:** To clarify, the reason I just can't save an image is because the new *standardized* image issued by corporate IT is required in order to be able to access company assets, including the company network. So overwriting the issued image with my backup image would not accomplish anything I could not accomplish by just ignoring the new company image. Which I sometimes can, but many times can't.
| If you have sufficient time, and enough specific software, creating an unattended install script (guides [here](http://unattended.sourceforge.net/installers.php) and [here](http://unattended.msfn.org/unattended.xp/)) may be a good idea.
For some software you can shortcut this with [ninite](http://ninite.com/), which allows you to download and install a subset of a selection of software (for example browsers, some dev tools and so on), and is what I use. You may also want to look at [TGSP](http://www.tgup.net/) or [allmyapps](http://allmyapps.com/), amongst [others](http://alternativeto.net/software/ninite/), since they may have a different/more suitable set of software.
[windiff](http://support.microsoft.com/kb/171780) will allow you to check differences between sets of registry files, and you can get [a copy here](http://www.grigsoft.com/download-windiff.htm) - you can then export the difference between your modified system and the baseline system to the baseline system, I believe.
On the other hand, there's an easier way - software virtualisation. I personally favour [svs/svw](https://superuser.com/questions/128648/alternatives-to-altaris-symantec-svs) (SU question linked, because it's a pain to find the software in question) to create a layer with all my software. Simply move the layer over to the new system and everything would be as it was. There are other options for software virtualisation, but this is the one I know works.
|
find and update a value of a dictionary in list of dictionaries
How can I find the `dictionary` with the value `user7` and then update its `match_sum`, e.g. add 3 to the existing 4?
```
l = [{'user': 'user6', 'match_sum': 8},
{'user': 'user7', 'match_sum': 4},
{'user': 'user9', 'match_sum': 7},
{'user': 'user8', 'match_sum': 2}
]
```
I have this, and am not sure if it's the best practice to do it.
```
>>> for x in l:
... if x['user']=='user7':
... x['match_sum'] +=3
```
| You can also use [`next()`](https://docs.python.org/2/library/functions.html#next):
```
l = [{'user': 'user6', 'match_sum': 8},
{'user': 'user7', 'match_sum': 4},
{'user': 'user9', 'match_sum': 7},
{'user': 'user8', 'match_sum': 2}]
d = next(item for item in l if item['user'] == 'user7')
d['match_sum'] += 3
print(l)
```
prints:
```
[{'match_sum': 8, 'user': 'user6'},
{'match_sum': 7, 'user': 'user7'},
{'match_sum': 7, 'user': 'user9'},
{'match_sum': 2, 'user': 'user8'}]
```
Note that if `default` (second argument) is not specified while calling `next()`, it would raise `StopIteration` exception:
```
>>> d = next(item for item in l if item['user'] == 'unknown user')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
And here's what would happen if `default` is specified:
```
>>> next((item for item in l if item['user'] == 'unknown user'), 'Nothing found')
'Nothing found'
```
|
Knockout JS: Binding Dynamic Rows
I'm having some trouble binding dynamically created DOM elements.
Code:
```
var i=0;
$.each(data.info, function(index, element) {
$("#div1").append("<tr><td>" + element.Name + "</td><td>"+ element.Major +"</td><td>" + element.Sex +"</td><td>" + "<input data-bind='value: eng"+i+"' ></td><td>" + "<input data-bind='value: jap"+i+"' ></td><td>" + "<input data-bind='value: cal"+i+"' ></td><td>" + "<input data-bind='value: geo"+i+"' ></td><td>" + "<strong data-bind='text: total'></td>" )
i++;
});
```
This creates rows with input data-bind values eng0, eng1, jap0, jap1, etc.
I want to bind these as observables
Code
```
function AppViewModel() {
this.eng = ko.observable(element.English);
this.jap = ko.observable(element.Japanese);
this.cal = ko.observable(element.Calculus);
this.geo = ko.observable(element.Geometry);
this.total = ko.computed(function() {
var tot=parseFloat(this.eng()) + parseFloat(this.jap()) + parseFloat(this.cal()) + parseFloat(this.geo());
return (tot);
}, this);
}
ko.applyBindings(new AppViewModel());
```
This code is also inside `$.each(data.info, function(index, element){}`
I want some thing like
```
Var i=0;
$.each(data.info, function(index, element) {
function AppViewModel() {
this.eng+i = ko.observable(element.English);
this.jap+i = ko.observable(element.Japanese);
this.cal+i = ko.observable(element.Calculus);
this.geo+i = ko.observable(element.Geometry);
this.total+i = ko.computed(function() {
var tot=parseFloat(this.eng()) + parseFloat(this.jap()) + parseFloat(this.cal()) + parseFloat(this.geo());
return (tot);
}, this);
}
i++;
}
```
That gets me the result `this.eng0 = ko.observable()`.
Note: the data is obtained from a JSON object. I have only included the iteration path
| May I suggest that using a [foreach binding](http://knockoutjs.com/documentation/foreach-binding.html) may be better than using jQuery's `each` and generating the HTML yourself? I'd suggest changing your view model to something like this:
```
function AppViewModel() {
this.items = ko.observableArray();
}
function ItemViewModel(element) {
this.eng = ko.observable(element.English);
this.jap = ko.observable(element.Japanese);
this.cal = ko.observable(element.Calculus);
this.geo = ko.observable(element.Geometry);
this.name = ko.observable(element.name);
this.major = ko.observable(element.major);
this.sex = ko.observable(element.sex);
this.total = ko.computed(function () {
var tot = parseFloat(this.eng()) + parseFloat(this.jap()) + parseFloat(this.cal()) + parseFloat(this.geo());
return (tot);
}, this);
};
```
Here, the AppViewModel is a container for the list of elements, and each element is its own ItemViewModel, with the properties you seem to have.
The html to bind this would be something like this:
```
<table>
<tbody data-bind="foreach: items">
<tr>
<td data-bind="text: name"></td>
<td data-bind="text: major"></td>
<td data-bind="text: sex"></td>
<td><input data-bind='value: eng' /></td>
<td><input data-bind='value: jap' /></td>
<td><input data-bind='value: cal' /></td>
<td><input data-bind='value: geo' /></td>
<td><strong data-bind='text: total' /></td>
</tr>
</tbody>
</table>
```
When you get the JSON from your server you can use Knockout's [built-in JSON stuff](http://knockoutjs.com/documentation/json-data.html), the [mapping plugin](http://knockoutjs.com/documentation/plugins-mapping.html), or parse them yourself. I created an example using the latter option in [this jsfiddle](http://jsfiddle.net/XNaUg/1/).
|
Infinitely nest maps with variant
So, I am trying to make maps which are infinitely nestable, and in which I can use strings, ints, bools, etc.
This is what I tried:
```
struct NMap;
struct NMap : std::map<std::string, std::variant<NMap*, std::string, std::any>> {};
// ...
NMap* something;
something["lorem"]["ipsum"] = "Test";
^ - No such operator []
```
Which is logical; `std::variant` doesn't have an `[]` operator. Is there any way to use `std::variant` in nestable maps?
| Something simple and a bit weird:
```
#include <map>
#include <string>
#include <optional>
struct rmap : std::map<std::string, rmap>
{
std::optional<std::string> value; // could be anything (std::variant, std::any, ...)
};
```
With a bit of sugar and some other tasteful adjustments, you can use it like you intend to:
```
#include <map>
#include <string>
#include <optional>
#include <iostream>
struct rmap : std::map<std::string, rmap>
{
using value_type = std::optional<std::string>;
value_type value;
operator const value_type&() const { return value; }
rmap& operator=(value_type&& v) { value = v; return *this; }
friend std::ostream& operator<<(std::ostream& os, rmap& m) { return os << (m.value ? *m.value : "(nil)"); }
};
int main()
{
rmap m;
m["hello"]["world"] = std::nullopt;
m["vive"]["le"]["cassoulet"] = "Obama";
std::cout << m["does not exist"] << '\n'; // nil
std::cout << m["hello"]["world"] << '\n'; // nil
std::cout << m["vive"]["le"]["cassoulet"] << '\n'; // Obama
}
```
You can adjust to your taste with some syntactic sugar.
|
Undecorated JFrame shadow
How do you add a shadow to an undecorated JFrame?
From what I found online, you might be able to add the JFrame to another black translucent window to give a shadow effect.
Or somehow apply something like this to a JFrame:
```
Border loweredBorder = new EtchedBorder(EtchedBorder.LOWERED);
setBorder(loweredBorder);
```
Either way, I just want to know the best method, or maybe a completely different way of getting the same effect, like extending from another class and not JFrame.
I'm new to Java, so I might be going in the wrong direction; any advice is appreciated.
| Basically, you need to make a series of layers.
- `JFrame`
- `ShadowPanel`
- and content...
![enter image description here](https://i.stack.imgur.com/QXFnD.png)
```
import java.awt.AlphaComposite;
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.EventQueue;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GridBagLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.UIManager;
import javax.swing.UnsupportedLookAndFeelException;
import javax.swing.border.EmptyBorder;
public class ShadowWindow {
public static void main(String[] args) {
new ShadowWindow();
}
public ShadowWindow() {
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException | InstantiationException | IllegalAccessException | UnsupportedLookAndFeelException ex) {
}
JFrame frame = new JFrame("Testing");
frame.setUndecorated(true);
frame.setBackground(new Color(0, 0, 0, 0));
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setContentPane(new ShadowPane());
JPanel panel = new JPanel(new GridBagLayout());
panel.add(new JLabel("Look ma, no hands"));
frame.add(panel);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
public class ShadowPane extends JPanel {
public ShadowPane() {
setLayout(new BorderLayout());
setOpaque(false);
setBackground(Color.BLACK);
setBorder(new EmptyBorder(0, 0, 10, 10));
}
@Override
public Dimension getPreferredSize() {
return new Dimension(200, 200);
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g.create();
g2d.setComposite(AlphaComposite.SrcOver.derive(0.5f));
g2d.fillRect(10, 10, getWidth(), getHeight());
g2d.dispose();
}
}
}
```
|
node.js Setup Wizard ended prematurely
I can't install node.js (0.8.9). I'm doing it via the Setup Wizard, and each time at the end of the installation I get the message "Node.js Setup Wizard ended prematurely".
What could it be? Thanks.
| This may help someone in the future. I got a similar message from the installer and found that I could go to command prompt and run the MSI with a command line option to make it create a log file (like `node-v0.10.24-x64.msi /lxv C:\Logs\Nodejs.log`), where you can choose what the log is called and where it goes.
In my case, we are running in an Active Directory domain environment and some of our folders that are normally local are redirected to a network share so they are always there no matter what computer we log into. Mostly for the benefit of our "My Documents" folder.
When looking through the log I found the actual error that I was getting:
- WixCreateInternetShortcuts: Error 0x80070005: failed to save shortcut '\ad.local\system\users\<myAcctName>\Start Menu\Programs\Node.js\Node.js website.url'
- WixCreateInternetShortcuts: Error 0x80070005: failed to create Internet shortcut
- CustomAction WixCreateInternetShortcuts returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox).
## Resolution (for me)
Oddly enough, just running an admin escalated command prompt first, and then running the MSI lets it install correctly.
I *think* the difference is that when you double-click on an MSI and it escalates, it runs as `TrustedInstaller` and while that account has access to everything on my box, it does not have network access. When I run an escalated command prompt, it is running as **me**, but already escalated (the MSI never needs to ask for escalation), so it works.
## Final Note:
As of 7/22/2015, the node.js team has finally tracked down the issue with the installer and from 0.12.8 and forward this should be no longer an issue for us. I tested an early version of the installer for them to make sure it worked for me and there were no hitches with the install.
<https://github.com/joyent/node/issues/5849#issuecomment-123905214>
As of this writing, 7/30/2015, the current version was still 0.12.7, so x.8 has not yet been rolled out to the masses I guess.
|
List constraints for all tables with different owners in PostgreSQL
Do I have to be the owner of a relation to access constraint-related data in the information schema? I've tested the following, and it seems that I have to be the owner.
```
create schema rights_test;
create table rights_test.t1 (id int primary key);
create table rights_test.t2 (id int references rights_test.t1(id));
select
tc.constraint_name,
tc.constraint_schema || '.' || tc.table_name || '.' || kcu.column_name as physical_full_name,
tc.constraint_schema,
tc.table_name,
kcu.column_name,
ccu.table_name as foreign_table_name,
ccu.column_name as foreign_column_name,
tc.constraint_type
from
information_schema.table_constraints as tc
join information_schema.key_column_usage as kcu on (tc.constraint_name = kcu.constraint_name and tc.table_name = kcu.table_name)
join information_schema.constraint_column_usage as ccu on ccu.constraint_name = tc.constraint_name
where
constraint_type in ('PRIMARY KEY','FOREIGN KEY')
and tc.constraint_schema = 'rights_test'
/*
This will produce desired output:
t1_pkey;rights_test.t1.id;rights_test;t1;id;t1;id;PRIMARY KEY
t2_id_fkey;rights_test.t2.id;rights_test;t2;id;t1;id;FOREIGN KEY
*/
create user rights_test_role with password 'password';
grant all on rights_test.t1 to rights_test_role;
grant all on rights_test.t2 to rights_test_role;
/* Now login as rights_test_role and try the same constraint select.
For rights_test_role it returns nothing although I've added ALL privileges
*/
```
Is there another way to get the same information if I am not the owner of the relation?
| Not all constraint-related data is "protected". You use three relations in your query:
- `table_constraints`
- `key_column_usage`
- `constraint_column_usage`
The first two are not limited, but the documentation for [`constraint_column_usage`](http://www.postgresql.org/docs/current/interactive/infoschema-constraint-column-usage.html) tells you:
>
> The view constraint\_column\_usage identifies all columns in the current database that are used by some constraint. **Only those columns are shown that are contained in a table owned by a currently enabled role.**
>
>
>
Since `information_schema.constraint_column_usage` is a view, you can see its definition using
```
\d+ information_schema.constraint_column_usage
```
in the psql shell. The result looks frightening at a first glance but it's really not so bad. The most interesting thing - for a first test - is the part in the very last line:
```
WHERE pg_has_role(x.tblowner, 'USAGE'::text);
```
If you paste the definition into the psql shell which is opened by the non-owner `rights_test_role`, and delete that last line, you will get the desired result. This is good, because it means that the basic metadata is not protected by the system. So you can strip down the view definition to include only the parts you really need.
|
ArrayObject, getIterator();
I am trying to understand what **`getIterator()`** is. I will explain:
As far as I know, `getIterator` is a method we call to include an external Iterator.
The problem is that getIterator includes **its own methods**; the closest thing that looks the same is the Iterator interface, but it can't be an interface (it could be a class). I have tried searching for it inside the SPL.php source code and didn't find anything, so maybe I'm making this more complicated than it really is. I would be happy if someone could help me understand where it is in the SPL.php source code and what it is (class, etc.). Thank you all and have a nice day.
| `ArrayObject` implements `IteratorAggregate` which allows you to return an iterator instead of implementing it. It's pretty straightforward. Assume you have a class wrapping an array, like
```
class Foo
{
private $array = [1,2,3,4];
}
```
and you want to `foreach` over an instance of `Foo`, like this:
```
foreach (new Foo as $bar) {
echo $bar; // outputs 1234
}
```
To achieve that you could add the `Iterator` interface, but then you'd have to implement all the methods in the interface for the simple task of iterating the array inside the object. You don't want to duplicate that code over and over again whenever you need that functionality, especially since there already is an Iterator that does what you want. So instead of implementing `Iterator`, you implement `IteratorAggregate`:
```
class Foo implements IteratorAggregate
{
private $array = [1,2,3,4];
public function getIterator()
{
return new ArrayIterator($this->array);
}
}
```
Now when you do the `foreach`, PHP will use the Iterator returned from `getIterator` instead of the `Foo` instance itself, allowing you to get the output 1234.
In the case of `ArrayObject`, the point is simply to allow you to `foreach` over the values in the `ArrayObject`. Of course, you can also call `getIterator` yourself, since it's public, and then use the returned Iterator directly.
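For example, a minimal sketch of calling it directly (this is just the default behaviour of a plain `ArrayObject`, not tied to the `Foo` class above):
```
$wrapped = new ArrayObject([1, 2, 3, 4]);
$iterator = $wrapped->getIterator(); // an ArrayIterator by default

foreach ($iterator as $value) {
    echo $value; // outputs 1234
}
```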
Also note the ctor signature of `ArrayObject`:
```
public __construct (
[ mixed $input
[, int $flags = 0
[, string $iterator_class = "ArrayIterator"
]]] )
```
which notes which Iterator will be returned.
|
Is there a built-in login template in Django?
I want to let a user sign in before seeing pages. Is there any built-in template for user sign in, so that I do not have to write my own sign in page?
| Yes. You can read all about it here: <https://docs.djangoproject.com/en/1.8/topics/auth/default/#django.contrib.auth.decorators.login_required> ... but here are some bullet points:
- add `'django.contrib.auth.middleware.AuthenticationMiddleware'` to `MIDDLEWARE_CLASSES` in `settings.py`
- add `'django.contrib.auth`' and `'django.contrib.contenttypes'` to `INSTALLED_APPS` in `settings.py`
- setup a URL for the login using `django.contrib.auth.views.login` for the view, such as `url(r'^login/$', 'django.contrib.auth.views.login',name="my_login")`
- In your view, include the login\_required decorator and add it before your view. For example...
views.py...
```
from django.contrib.auth.decorators import login_required
@login_required
def home(request):
return HttpResponse('Home Page')
```
By default, you then put the template inside `my_template_directory/registration/login.html`. Further info about that template can be found at the link at the beginning of this post.
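Django does not ship a ready-made page for that location, so you typically write it yourself; a minimal sketch of such a `registration/login.html` (using the `my_login` URL name from the example above) could look like this:
```
{% if form.errors %}
  <p>Your username and password didn't match. Please try again.</p>
{% endif %}

<form method="post" action="{% url 'my_login' %}">
  {% csrf_token %}
  {{ form.as_p }}
  <input type="hidden" name="next" value="{{ next }}" />
  <input type="submit" value="Log in" />
</form>
```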
|
How to reconstruct grayscale image from intensity values?
It is commonly required to get the pixel data from an image or reconstruct that image from pixel data. How can I take an image, convert it to an array of pixel values and then reconstruct it using the pixel array in `Swift` using `CoreGraphics`?
The quality of the answers to this question have been all over the place so I'd like a canonical answer.
| ## Get pixel values as an array
This function can easily be extended to a color image; for simplicity I'm using grayscale here.
```
func pixelValuesFromImage(imageRef: CGImage?) -> (pixelValues: [UInt8]?, width: Int, height: Int)
{
var width = 0
var height = 0
var pixelValues: [UInt8]?
if let imageRef = imageRef {
let totalBytes = imageRef.width * imageRef.height
let colorSpace = CGColorSpaceCreateDeviceGray()
pixelValues = [UInt8](repeating: 0, count: totalBytes)
pixelValues?.withUnsafeMutableBytes({
width = imageRef.width
height = imageRef.height
let contextRef = CGContext(data: $0.baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: 0)
let drawRect = CGRect(x: 0.0, y:0.0, width: CGFloat(width), height: CGFloat(height))
contextRef?.draw(imageRef, in: drawRect)
})
}
return (pixelValues, width, height)
}
```
## Get image from pixel values
I reconstruct an image, in this case grayscale 8-bits per pixel, back into a `CGImage`.
```
func imageFromPixelValues(pixelValues: [UInt8]?, width: Int, height: Int) -> CGImage?
{
var imageRef: CGImage?
if let pixelValues = pixelValues {
let bitsPerComponent = 8
let bytesPerPixel = 1
let bitsPerPixel = bytesPerPixel * bitsPerComponent
let bytesPerRow = bytesPerPixel * width
let totalBytes = width * height
let unusedCallback: CGDataProviderReleaseDataCallback = { optionalPointer, pointer, valueInt in }
let providerRef = CGDataProvider(dataInfo: nil, data: pixelValues, size: totalBytes, releaseData: unusedCallback)
let bitmapInfo: CGBitmapInfo = [CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue), CGBitmapInfo(rawValue: CGImageByteOrderInfo.orderDefault.rawValue)]
imageRef = CGImage(width: width,
height: height,
bitsPerComponent: bitsPerComponent,
bitsPerPixel: bitsPerPixel,
bytesPerRow: bytesPerRow,
space: CGColorSpaceCreateDeviceGray(),
bitmapInfo: bitmapInfo,
provider: providerRef!,
decode: nil,
shouldInterpolate: false,
intent: .defaultIntent)
}
return imageRef
}
```
### Demoing the code in a Playground
You'll need an image copied into the Playground's Resources folder and then change the filename and extension below to match. The result on the last line is a UIImage constructed from the CGImage.
```
import Foundation
import CoreGraphics
import UIKit
import PlaygroundSupport
let URL = playgroundSharedDataDirectory.appendingPathComponent("zebra.jpg")
print("URL \(URL)")
var image: UIImage? = nil
if FileManager().fileExists(atPath: URL.path) {
do {
try NSData(contentsOf: URL, options: .mappedIfSafe)
} catch let error as NSError {
print ("Error: \(error.localizedDescription)")
}
image = UIImage(contentsOfFile: URL.path)
} else {
print("File not found")
}
let (intensityValues, width, height) = pixelValuesFromImage(imageRef: image?.cgImage)
let roundTrippedImage = imageFromPixelValues(pixelValues: intensityValues, width: width, height: height)
let zebra = UIImage(cgImage: roundTrippedImage!)
```
|
Force Verbose Mode in Windows DEL Command
I sometimes delete masses of files on multiple computers with the Windows command line, with the `DEL` command, and on some the function is verbose (i.e. outputs the files it deletes as it goes), while on others it's not the case. Is there a way to force the command line utility to always display its progress?
| Both the **del** and **erase** commands will be silent unless you include the **/s** argument to delete all files in all subdirectories.
One option is to simply delete the files one by one. Let's say you want to delete everything in the temp folder:
```
for /f "tokens=*" %A in ('dir /s /b "%TEMP%"') do del /Q "%A"
```
That will take a while to start up against 2 million files as it will need to list them first and then delete them one by one.
Another option is to use **Robocopy** and have it mirror an empty directory to delete the files you want. You will get verbose output as each file is deleted. Start with an empty directory (c:\empty) and run something similar to the following:
```
robocopy c:\empty c:\dir_with_files_to_be_deleted *files_you_want_to_delete.* /mir /v
```
Or, if you just want to delete all files in a single directory:
```
robocopy c:\empty c:\dir_to_empty /mir /v
```
|
How to make onehotencoder in Spark to work like onehotencoder in Pandas?
When I use OneHotEncoder in Spark, I get the result shown in the fourth column, which is a sparse vector.
```
// +---+--------+-------------+-------------+
// | id|category|categoryIndex| categoryVec|
// +---+--------+-------------+-------------+
// | 0| a| 0.0|(3,[0],[1.0])|
// | 1| b| 2.0|(3,[2],[1.0])|
// | 2| c| 1.0|(3,[1],[1.0])|
// | 3| NA| 3.0| (3,[],[])|
// | 4| a| 0.0|(3,[0],[1.0])|
// | 5| c| 1.0|(3,[1],[1.0])|
// +---+--------+-------------+-------------+
```
However, what I want is to produce 3 columns for categories just like the way it works in pandas.
```
>>> import pandas as pd
>>> s = pd.Series(list('abca'))
>>> pd.get_dummies(s)
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
```
| Spark's OneHotEncoder creates a sparse vector column. To create output columns similar to pandas `get_dummies`, we need to create a separate column for each category. We can do that with the help of the pyspark DataFrame's `withColumn` function by passing a udf as a parameter. For example:
```
from pyspark.sql.functions import udf,col
from pyspark.sql.types import IntegerType
df = sqlContext.createDataFrame(sc.parallelize(
[(0,'a'),(1,'b'),(2,'c'),(3,'d')]), ('col1','col2'))
categories = df.select('col2').distinct().rdd.flatMap(lambda x : x).collect()
categories.sort()
for category in categories:
function = udf(lambda item: 1 if item == category else 0, IntegerType())
new_column_name = 'col2'+'_'+category
df = df.withColumn(new_column_name, function(col('col2')))
df.show()
```
Output-
```
+----+----+------+------+------+------+
|col1|col2|col2_a|col2_b|col2_c|col2_d|
+----+----+------+------+------+------+
| 0| a| 1| 0| 0| 0|
| 1| b| 0| 1| 0| 0|
| 2| c| 0| 0| 1| 0|
| 3| d| 0| 0| 0| 1|
+----+----+------+------+------+------+
```
I hope this helps.
|
NGINX, proxy\_pass and SPA routing in HTML5 mode
I have NGINX set up as a reverse proxy for a virtual network of docker containers running itself as a container. One of these containers serves an Angular 4 based SPA with client-side routing in HTML5 mode.
The application is mapped to location / on NGINX, so that <http://server/> brings you to the SPA home screen.
```
server {
listen 80;
...
location / {
proxy_pass http://spa-server/;
}
location /other/ {
proxy_pass http://other/;
}
...
}
```
The Angular router changes the URL to <http://server/home> or other routes when navigating within the SPA.
However, when I try to access these URLs directly, a 404 is returned. This error originates from the `spa-server`, because it obviously does not have any content for these routes.
The examples I found for configuring NGINX to support this scenario always assume that the SPA's static content is served directly from NGINX and thus `try_files` is a viable option.
How is it possible to forward any unknown URLs to the SPA so that it can handle them itself?
| The solution that works for me is to add the directives `proxy_intercept_errors` and `error_page` to the `location /` in NGINX:
```
server {
listen 80;
...
location / {
proxy_pass http://spa-server/;
proxy_intercept_errors on;
error_page 404 = /index.html;
}
location /other/ {
proxy_pass http://other/;
}
...
}
```
Now, NGINX will return the /index.html i.e. the SPA from the `spa-server` whenever an unknown URL is requested. Still, the URL is available to Angular and the router will immediately resolve it within the SPA.
Of course, now the SPA is responsible for handling "real" 404s. Fortunately, this is not a problem and a good practice within the SPA anyway.
UPDATE: Thanks to @dan
|
Which way of starting a new activity is the best?
In my learning process of Android development, I have come across two different ways of starting a new activity. And now I start to wonder.
They both work fine. However, I want to know if one of the options is better to use, and why?
My first example (and the one that I so far like the most):
```
Intent intent = new Intent(this, MainMenuActivity.class);
this.startActivity(intent);
```
And the second:
```
startActivity(new Intent("com.example.MENUSCREEN"));
```
Where I need to add the android:name to my intent-filter in the manifest:
```
<activity
android:name="com.example.MainMenuActivity"
android:label="@string/app_name"
android:theme="@android:style/Theme.NoTitleBar.Fullscreen">
<intent-filter>
<action android:name="com.example.MENUSCREEN" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity>
```
And have I understood it correctly, that an intent is like saying that I intend to do something? An "intention" to perform an action.
| Just to answer your question:
>
> Which way of starting a new activity is the best?
>
>
>
It depends on what you want to do and where the activity you want to start actually lives. To make a long story short, Intents can be separated into two types: explicit and implicit.
The first one you are trying:
```
Intent intent = new Intent(this, MainMenuActivity.class);
this.startActivity(intent);
```
Is "Explicit", it means you have access to the class that will actually handle the intent it self, usually that's only possible if the class is somewhere in your project, but that's not always the case, sometimes you might need to open an Activity from a different application and then you would use the second option.
The second you tried:
```
startActivity(new Intent("com.example.MENUSCREEN"));
```
Is "Implicit", this is an action that any activity that fits the "action/category/data" intent filter will be able to handle, if more than one Activity can handle it, the operating system would pop up a window to ask the user to select one of them.
As you can see, one way is not inherently better than the other; it all depends on what you want to do and the possibilities the OS gives you to start activities under different circumstances.
Hope it Helps!
Regards!
|
Does linux have pause/resume feature like in Windows 8?
I'm looking for a feature that will allow me to pause/resume a copy-paste process, similar to Windows 8. Is there any file manager that allows this? Is it an extension to the file manager? Does this have anything to do with the file manager at all? I'm also searching for a good, portable file manager. I'm a little bit confused choosing between PCMan and Krusader; any suggestion on which to choose between them?
| ### Terminal Method
Usually when copying files that I think I'll need to pause/resume I'll go to the terminal and use the `rsync` command instead.
```
$ rsync -avz <source> <destination>
```
This can be paused/resumed in the sense that you can simply stop it and restart the command later on. Only the files that haven't been copied yet will get copied during the second run.
The `rsync` tool is extremely powerful, and you can do a whole lot more with it, this is just the tip of the iceberg.
### GUI Method
If you want to do this through a GUI there are a number of options listed on the [alternativeto.net](http://alternativeto.net/) website. Specifically, look under the alternatives to [SuperCopier](http://alternativeto.net/software/supercopier/), a similar tool for Windows. The list provides tools that can do this for all OSes.
If I were to pick one, I'd start with this one, [UltraCopier](http://ultracopier.first-world.info/).
![ss of ultracopier](https://i.stack.imgur.com/mmmH4.png)
### Backgrounding the GUI
Another method to accomplish a pause/resume feature would be to invoke the file browser from a terminal and simply press `Control` + `Z` to pause the process. When you're ready to restart it, in the terminal from where it was launched, type `fg` to bring that process back to the foreground.
|
Is there a faster way to get the colors in a screen-shot?
I have a piece of code, and what it does is find the colors of an image (or a section of the screen), and if the R, G, and B values are all greater than 127 it puts a 1 in the corresponding position of a 2D int array. Here is the segment I have now, but it is obviously extremely slow. Is there a better way to do this?
```
private void traceImage(){
try{
Robot r = new Robot();
b = new int[470][338];
for(int y = 597; y < 597+469; y++)
{
for(int x = 570; x < 570+337; x++)
{
if(r.getPixelColor(x,y).getRed() > 127 &&
r.getPixelColor(x,y).getGreen() > 127 &&
r.getPixelColor(x,y).getBlue() > 127)
{
b[y-597][x-570] = 1;
}
else
{
b[y-597][x-570] = 0;
}
}
}
}
catch(Exception ex){System.out.println(ex.toString());}
}
```
There must be a faster way to do this using the values above. `b` is the int array and it is initialized in this segment. `r` is a robot that I use to find the color of the pixel on the screen. `x` and `y` should be self explanatory.
Thanks Mikera! This is the end result I got based on your answer:
```
private void traceActionPerformed(java.awt.event.ActionEvent evt) {
try{
Robot r = new Robot();
BufferedImage image = r.createScreenCapture(new Rectangle(597,570,470,337));
b = new int[image.getWidth()][image.getHeight()];
for(int y=0;y<image.getWidth();y++){
for(int x=0;x<image.getHeight();x++){
if(image.getRGB(y,x)==-1){
b[y][x]=0;
}
else{
b[y][x]=1;
}
}
}
System.out.println("Done");
}
catch(Exception ex){System.out.println(ex.toString());}
}
```
The -1 in the if/else statement apparently matches white pixels, so everything else gets a 1, which is what I wanted.
| You should use [Robot.createScreenCapture()](http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/Robot.html#createScreenCapture%28java.awt.Rectangle%29) to capture the entire subimage in one go.
Then you will be able to query the individual pixels in the subimage much faster, e.g. using [BufferedImage.getRGB(x,y)](http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/image/BufferedImage.html#getRGB%28int,%20int%29)
Interestingly your specific operation has a very fast bitwise implementation: just do `((rgb & 0x808080) == 0x808080)`
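Plugged into the loop over the captured image, that looks something like this (a sketch; `image` is the `BufferedImage` from `createScreenCapture` and `b` is the 2D int array):
```
// A channel value > 127 means its top bit (0x80) is set, so checking all
// three top bits at once replaces the three separate comparisons.
int rgb = image.getRGB(x, y);
b[y][x] = ((rgb & 0x808080) == 0x808080) ? 1 : 0;
```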
|
Service Fabric Deactivate (pause) vs Deactivate (restart)?
When I log in to Service Fabric Explorer and try to disable a node for an OS upgrade I am presented with two options:
- Deactivate (Pause)
- Deactivate (Restart)
Can anyone tell me the difference?
| Service Fabric has APIs that let you manage nodes (in C# these are DeactivateNodeAsync and ActivateNodeAsync, in PS they're Enable/Disable-ServiceFabricNode). First of all, most of these are holdovers from when people managed their own clusters, and should be *less* commonly used in the Azure Hosted Service Fabric Cluster environment compared to when you run your own clusters. Either way when deactivating a node there are several different options, which we call *Intents*.
You can think of these as representing increasingly severe operations on the nodes, which you'd use under different situations, and you use them to communicate to Service Fabric what is being done to the node.
The four different options are:
1. **Pause** - effectively "pauses" the node: Services on it will continue to run, but no services should move in or out of the node unless they fail on their own, or unless moving a service to the node is necessary to prevent outage or inconsistency.
2. **Restart** - this will move all of the in-memory stateful and stateless services off the node, and then shut down (close) any persistent services (if it is safe to do so, if not we'll build spares).
3. **RemoveData** - this will close down all of the services on the node, again building spares first if it is necessary for safety. The user is responsible for ensuring that if the node does come back, it comes back empty.
4. **RemoveNode** - this will close down all of the services on the node, again building spares first if necessary for safety. In this case though you're specifically telling SF that this node isn't coming back. SF performs an additional check to make sure that the node which is being removed isn't a SeedNode (one of the nodes currently responsible for maintaining the underlying cluster). Other than that, this is the same as RemoveData.
Now let's talk about when you'd use each.
**Pause** is most common if you want to debug a given service, process, machine, etc., and would like it not to be changed (to the degree possible) while you are looking at it. It would be a little awkward if you went to diagnose some behavior of a service only to find that we had just moved it on you.
**Restart** (which is the most common of these we see used) is used when for some reason you want to move all the workloads off the node. For example, Service Fabric uses this itself when upgrading the Service Fabric bits on the node: first we deactivate the node with intent Restart, and then we wait for that to complete (so we know your services are not running) before we shut down and upgrade our own code on that node.
**RemoveData** is for when you know the node is being deprovisioned and will not be coming back (say the hard drives are going to be swapped out, or the hardware is being completely removed), or you know that if the node does come back it will specifically be empty (say you're reimaging the machine). The difference between Restart and RemoveData is that for Restart we know the node is coming back, so we keep the knowledge of the replicas on that node. For persistent replicas this means that we don't have to build the replicas again immediately. But for RemoveData we know that the replicas are not coming back, and so we need to build any spares immediately before confirming that the node is safe to restart.
**RemoveNode** builds on top of RemoveData, and is an additional indicator that you have no specific plans to bring this node back. Since it's important to keep the SeedNodes up, SF will fail the call if the node to be removed is currently a Seed. If you really want to remove that specific node, you can reconfigure the cluster to use a different node as a seed. An example of when you'd want to use RemoveData vs. RemoveNode: if you're scaling down a cluster, you'd explicitly call RemoveNode, since you intend for the nodes not to come back and want to make sure you're taking the right ones away so the underlying cluster doesn't collapse.
Once the operation (whatever it is) is done and you want to re-enable the node, the corresponding call is Activate/Enable. Restarting a node doesn't cause it to become automatically re-enabled. So if you are done with the software patch (or whatever caused you to use intent Restart, for example), and you want services to be placed on the node again, you would call Enable/Activate with the appropriate node Name.
As an example of the deactivate/disable call, check out the PS API documentation [here](https://learn.microsoft.com/en-us/powershell/module/servicefabric/disable-servicefabricnode?view=azureservicefabricps)
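A rough sketch of what that looks like in PowerShell (assumes you are already connected to the cluster with `Connect-ServiceFabricCluster`; the node name is illustrative):
```
# Deactivate with intent Restart: workloads are moved off / closed safely.
Disable-ServiceFabricNode -NodeName "Node_1" -Intent Restart
# ... patch and reboot the machine ...
# Re-enable the node so services can be placed on it again.
Enable-ServiceFabricNode -NodeName "Node_1"
```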
|
What is the best way to declare functions within a Javascript class?
I'm trying to understand the difference between 2 different function declarations in JavaScript.
**Consider the following code snippet:**
```
function SomeFunction(){
this.func1 = function(){
}
function func2(){
}
}
```
What is the difference between the declarations of func1 and func2 above?
| In simple language,
`func1` is a property of the `SomeFunction` class holding a reference to an anonymous function, whereas `func2` is a named function.
>
> **Property**
>
>
>
Here `func1` is a property of the `SomeFunction` class, which means that when you create an instance of `SomeFunction` using the `new` keyword, you can access it from outside on that instance.
>
> **Private Method**
>
>
>
Here `func2` works as a **private** method of the `SomeFunction` class, and is accessible inside that class only.
---
**Sample**
```
function SomeFunction() {
this.func1 = function() { console.log("in func1") }
function func2() { console.log("in func2") }
}
var obj = new SomeFunction();
obj.func1(); //Accessible
obj.func2(); //Not accessible
```
|
Concurrent read/write buffer in java
I am trying to implement a read/write buffer class where it can be able to support multiple writers and readers, and the reader can read the buffer simultaneously while the writer is writing the buffer. Here's my code, and so far I haven't seen any issue, but I am not 100% sure if this is thread-safe or if there's any better approach.
```
public class Buffer{
private StringBuilder sb = new StringBuilder();
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private Random random = new Random();
public void read(){
try{
lock.readLock().lock();
System.out.println(sb.toString());
} finally{
lock.readLock().unlock();
}
}
public void write(){
try{
lock.writeLock().lock();
sb.append((char)(random.nextInt(26)+'a'));
} finally{
lock.writeLock().unlock();
}
}
}
```
| No problem with multi-threading safety whatsoever! The read and write locks protect access to the StringBuilder and the code is clean and easy to read.
By using ReentrantReadWriteLock you are actually maximising your chance of achieving higher degrees of concurrency, because multiple readers can proceed together, so this is a better solution than using plain old synchronised methods. However, contrary to what is stated in the question, the code does *not* allow a writer to write while the readers are reading. This is not necessarily a problem in itself though.
The readers acquire a read lock before proceeding. The writers acquire a write lock before proceeding. The rules of read locks allow one to be acquired when there is no write lock (but it is OK if there are some read locks i.e. if there are more active readers). The rules of write locks allow one to be acquired if and only if there are no other locks (no readers, no writers). Thus multiple readers are allowed but only a single writer.
The only change that might be needed would be to change the lock initialisation code to this:
```
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
```
As is the original code given in the question does not require the lock to be fair. With the above change it is guaranteed that "threads contend for entry using an approximately arrival-order policy. When the write lock is released either the longest-waiting single writer will be assigned the write lock, or if there is a reader waiting longer than any writer, the set of readers will be assigned the read lock. When constructed as non-fair, the order of entry to the lock need not be in arrival order." (Taken from <http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html>)
See also the following (from the same source):
ReentrantReadWriteLocks can be used to improve concurrency in some uses of some kinds of Collections. This is typically worthwhile only when the collections are expected to be large, accessed by more reader threads than writer threads, and entail operations with overhead that outweighs synchronization overhead. For example, here is a class using a TreeMap that is expected to be large and concurrently accessed.
```
class RWDictionary {
private final Map<String, Data> m = new TreeMap<String, Data>();
private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
private final Lock r = rwl.readLock();
private final Lock w = rwl.writeLock();
public Data get(String key) {
r.lock(); try { return m.get(key); } finally { r.unlock(); }
}
public String[] allKeys() {
        r.lock(); try { return m.keySet().toArray(new String[0]); } finally { r.unlock(); }
}
public Data put(String key, Data value) {
w.lock(); try { return m.put(key, value); } finally { w.unlock(); }
}
public void clear() {
w.lock(); try { m.clear(); } finally { w.unlock(); }
}
}
```
The excerpt from the API documentation is particularly performance conscious. In your specific case, I cannot comment on whether you meet the "large collection" criterion, but I can say that outputting to the console is much more time consuming than the thread-safety mechanism overhead. At any rate, you use of ReentrantReadWriteLocks makes perfect sense from a logical point of view and is perfectly thread safe. This is nice code to read :-)
Note 1 (answering question about exceptions found in the comments of the original question):
Taken from <http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/Lock.html>
lock() acquires the lock.
If the lock is not available then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
A Lock implementation may be able to detect erroneous use of the lock, such as an invocation that would cause deadlock, and may throw an (unchecked) exception in such circumstances. The circumstances and the exception type must be documented by that Lock implementation.
No indication of such exceptions is given in the relevant documentation for ReentrantReadWriteLock.ReadLock (<http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.ReadLock.html>) or ReentrantReadWriteLock.WriteLock (<http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.WriteLock.html>)
Note 2: While access to the StringBuilder is protected by the locks, System.out is not. In particular, multiple readers may read the value concurrently and try to output it concurrently. That is also OK, because access to System.out.println() is synchronized.
Note 3: If you want to disallow multiple active writers, but allow a writer and one or more readers to be active at the same time, you can simply skip using read locks altogether, i.e. delete lock.readLock().lock(); and lock.readLock().unlock(); in your code. However, in this particular case this would be wrong. You need to stop concurrent reading and writing to the StringBuilder.
|
Debugging ExtJS errors after build
I have an ExtJS app (I wont include source for now). When I run the Sencha CMD webserver in the root application directory everything works fine, as expected.
When I build the app (using sencha app build) and then run the Sencha CMD webserver in the build directory I get the error:
```
Uncaught TypeError: Cannot read property 'isComponent' of null app.js:1
Ext.cmd.derive.constructor app.js:1
z app.js:1
(anonymous function) app.js:1
```
I have created builds before that worked fine, and this is occurring after some recent changes I made (new build). I have checked all the normal suspects (requires, etc.) and everything seems in order.
**My question is:** How do you debug this sort of issue since it works fine pre-build?
Versions: ExtJS 4.2.1, Sencha CMD 4.0.2.67, Error from Chrome Developer Tools
| There is **no easy way of debugging** an ExtJS (or any other JavaScript) application after you have **minified** the **code**, although there are a few workarounds that can help you get close to the source of your problem:
- **Build** your application **in testing mode** with `sencha app build testing`. A **testing build** is a **non-minified** version of the normal **build**, so you will be able to see human-readable code. This should be enough for most cases.
- **Beautify** your **minified source** code. Although the testing build should work in most situations, I've experienced cases where the testing code did work and the release version did not. Minified code beautification at least **will isolate the line that throws the exception**, although it can be hard to recognize since all the comments are gone and variable names look different as a result of the minification process; you will probably still be able to recognize your code anyway, since strings and Ext calls don't change.
- You can also try using **Source Maps** (here's a [neat article](http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/?redirect_from_locale=ru) on the subject), but you will need to change the default *yui* compiler to the *[closure compiler](https://developers.google.com/closure/compiler/)*; this is not a straightforward process, so here's a [detailed explanation](http://docs.sencha.com/cmd/4.0.0/#!/guide/command_compiler) of the command compiler options.
I hope these options point you in the right direction.
|
What kind of databases are used in games?
What are common and popular databases (or types of databases) used in games?
1. MMORPG
2. browser-based, Flash games
3. video games
| 1) As far as I know, World of Warcraft runs on an Oracle RDBMS. I'm not sure about the implementation details; however, it seems that low-priority data (like a character's location or attribute status) gets dumped into the database at intervals, while high-priority data (level, item transfers) is written in real time (hence the noticeable delay sometimes).
Also, Guild Wars uses Microsoft SQL Server, but not in an RDBMS manner. They store binary data within tables that look like (char\_id, last\_update, data), and the game servers periodically serialize a character into a byte array, then push it to the DB servers. That's the same method used to transfer players between servers. Everyone is just a chunk of data.
2) As ThiefMaster said, any DB. If you see PHP on the frontend, there is a good chance there is MySQL or PostgreSQL at the back. If you see ASP\* variants, look for MS SQL Server, just like ordinary websites.
3) Everything occurs in memory (generally; something like Football Manager requires a database due to the vast amounts of data processed). A database would usually just be overkill.
|
Why does this jQuery code not work?
Why doesn't the following jQuery code work?
```
$(function() {
var regex = /\?fb=[0-9]+/g;
var input = window.location.href;
var scrape = input.match(regex); // returns ?fb=4
var numeral = /\?fb=/g;
scrape.replace(numeral,'');
alert(scrape); // Should alert the number?
});
```
Basically I have a link like this:
```
http://foo.com/?fb=4
```
How do I first locate the `?fb=4` and then retrieve the number only?
| Consider using the following code instead:
```
$(function() {
var matches = window.location.href.match(/\?fb=([0-9]+)/i);
if (matches) {
var number = matches[1];
alert(number); // will alert 4!
}
});
```
Test an example of it here: <http://jsfiddle.net/GLAXS/>
The regular expression is only slightly modified from what you provided. The `g`lobal flag was removed, as you're not going to have multiple `fb=`'s to match (otherwise your URL would be invalid!). The case-`i`nsensitive flag was added to match `FB=` as well as `fb=`.
The number is wrapped in parentheses to denote a [capturing group](http://www.regular-expressions.info/brackets.html), which is the magic that allows us to use `match`.
If [`match`](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/String/match) matches the regular expression we specify, it'll return the matched string in the first array element. The remaining elements contain the value of each capturing group we define.
In our running example, the string "?fb=4" is matched and so is the first value of the returned array. The only capturing group we have defined is the number matcher; which is why `4` is contained in the second element.
|
How does upsampling in a Fully Convolutional Network work?
I read several posts / articles and have some doubts on the mechanism of upsampling after the CNN downsampling.
I took the 1st answer from this question:
<https://www.quora.com/How-do-fully-convolutional-networks-upsample-their-coarse-output>
I understood that similar to normal convolution operation, the "upsampling" also uses kernels which need to be trained.
Question 1: if the "spatial information" is already lost during the first stages of the CNN, how can it be reconstructed at all?
Question 2: Why is it that "Upsampling from a small (coarse) featuremap deep in the network has good semantic information but bad resolution. Upsampling from a larger feature map closer to the input will produce better detail but worse semantic information"?
| **Question #1**
Upsampling doesn't (and cannot) reconstruct any lost information. Its role is to bring the resolution back up to that of the previous layer.
Theoretically, we can eliminate the down/up sampling layers altogether. However, to reduce the number of computations, we can downsample the input before a layer and then upsample its output.
Therefore, the sole purpose of down/up sampling layers is to reduce computations in each layer, while keeping the dimension of input/output as before.
You might argue that down-sampling might cause information loss. That is always a possibility, but remember that the role of a CNN is essentially to extract "useful" information from the input and reduce it into a smaller dimension.
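A tiny illustration of this point (not from the original answer; plain NumPy with made-up numbers):
```
import numpy as np

x = np.array([1., 5., 2., 8.])       # a toy 1-D "feature map"
down = x.reshape(2, 2).mean(axis=1)  # 2x downsample by averaging -> [3., 5.]
up = np.repeat(down, 2)              # nearest-neighbour upsample -> [3., 3., 5., 5.]
# The upsampled signal has the original length again, but the values
# [1., 5., 2., 8.] are gone for good; only the coarse summary survives.
```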
**Question #2**
As we go from the input layer in CNN to the output layer, the dimension of data generally decreases while the semantic and extracted information hopefully increases.
Suppose we have a CNN for image classification. In such a CNN, the early layers usually extract the basic shapes and edges in the image. The next layers detect more complex concepts like corners and circles. You can imagine that the very last layers might have nodes that detect very complex features (like the presence of a person in the image).
So up-sampling from a large feature map close to the input produces better detail but has lower semantic information compared to the last layers. Conversely, the last layers generally have lower dimension, hence their *resolution* is worse compared to the early layers.
|
Select newest records that have distinct Name column
I did search around and I found this
[SQL selecting rows by most recent date with two unique columns](https://stackoverflow.com/questions/189213/sql-selecting-rows-by-most-recent-date)
Which is so close to what I want but I can't seem to make it work.
I get an error Column 'ID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I want the newest row by date for each Distinct Name
```
Select ID,Name,Price,Date
From table
Group By Name
Order By Date ASC
```
Here is an example of what I want
Table
| ID | Name | Price | Date |
| --- | --- | --- | --- |
| 0 | A | 10 | 2012-05-03 |
| 1 | B | 9 | 2012-05-02 |
| 2 | A | 8 | 2012-05-04 |
| 3 | C | 10 | 2012-05-03 |
| 4 | B | 8 | 2012-05-01 |
desired result
| ID | Name | Price | Date |
| --- | --- | --- | --- |
| 2 | A | 8 | 2012-05-04 |
| 3 | C | 10 | 2012-05-03 |
| 1 | B | 9 | 2012-05-02 |
I am using Microsoft SQL Server 2008
|
```
Select ID,Name, Price,Date
From temp t1
where date = (select max(date) from temp where t1.name =temp.name)
order by date desc
```
Here is a [SQL Fiddle](http://sqlfiddle.com/#!18/6a8e3/1) with a demo of the above
---
Or as Conrad points out you can use an INNER JOIN (another [SQL Fiddle](http://sqlfiddle.com/#!18/6a8e3/384) with a demo) :
```
SELECT t1.ID, t1.Name, t1.Price, t1.Date
FROM temp t1
INNER JOIN
(
SELECT Max(date) date, name
FROM temp
GROUP BY name
) AS t2
ON t1.name = t2.name
AND t1.date = t2.date
ORDER BY date DESC
```
|
Running xUnit tests on Teamcity using async methods
I made the following xUnit test, which uses an HttpClient to call a status API method on a web server.
```
[Fact]
public void AmIAliveTest()
{
var server = TestServer.Create<Startup>();
var httpClient = server.HttpClient;
var response = httpClient.GetAsync("/api/status").Result;
response.StatusCode.Should().Be(HttpStatusCode.OK);
var resultString = response.Content.ReadAsAsync<string>().Result;
resultString.Should().Be("I am alive!");
}
```
This test is running fine locally. But when I commit the code and try to run the same test on the TeamCity build server, it runs forever. I even have to kill the xunit runner process because stopping the build will not stop this process.
However when I write the test like this
```
[Fact]
public async void AmIAliveTest()
{
var server = TestServer.Create<Startup>();
var httpClient = server.HttpClient;
var response = await httpClient.GetAsync("/api/status");
response.StatusCode.Should().Be(HttpStatusCode.OK);
var resultString = await response.Content.ReadAsAsync<string>();
resultString.Should().Be("I am alive!");
}
```
It runs fine locally and also on TeamCity.
My concern is that I might forget to write a test like the second variant and that once in a while the TeamCity build will hang.
Can anybody explain to me why xUnit running on the TeamCity build server is not running the test correctly in the first case? And is there a solution for this?
|
>
> Can anybody explain to me why xUnit running on the teamcity buildserver is not running the test correctly in the first place?
>
>
>
First, I'd check your xUnit versions - you should be running the recently-released 2.0. I suspect your local version may be out of date.
The core problem is in this line:
```
var resultString = response.Content.ReadAsAsync<string>().Result;
```
I suspect you're running into a [deadlock situation](http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html) that I describe on my blog. `HttpClient` has some methods on some platforms that do not properly use `ConfigureAwait(false)`, and is thus subject to this deadlock. xUnit 2.0 installs a single-threaded `SynchronizationContext` into all its unit tests, which provides the other half of the deadlock scenario.
The proper solution is to replace `Result` with `await`, and to change the return type of your unit test method from `void` to `Task`.
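Applying both of those changes to the test from the question gives, roughly:
```
[Fact]
public async Task AmIAliveTest()
{
    var server = TestServer.Create<Startup>();
    var httpClient = server.HttpClient;

    // await instead of .Result avoids blocking the single-threaded
    // SynchronizationContext that xUnit 2.0 installs for its tests.
    var response = await httpClient.GetAsync("/api/status");
    response.StatusCode.Should().Be(HttpStatusCode.OK);

    var resultString = await response.Content.ReadAsAsync<string>();
    resultString.Should().Be("I am alive!");
}
```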
|
Echo implemented in Java
I implemented a simple version of [echo(1)](https://www.freebsd.org/cgi/man.cgi?query=echo&sektion=1&manpath=freebsd-release-ports) command utility. The program works as described in the man page: it writes to the standard output all command line arguments, separated by a whitespace and end with a newline. It can process the option `-n` that avoid to print the newline.
About my implementation: it is not complete, because it doesn't interpret common backslash-escaped characters (for example `\n`, `\c`, and so forth). I used a `StringBuilder` object to build the output string, because I'm not sure whether the standard output is buffered. I also make some checks so the program can work without any arguments specified.
You can compile the program with `javac JEcho.java` and run it with `java JEcho <...>`.
**JEcho.java**
```
/**
* JEcho writes any command line argument to the standard output; each argument
* is separated by a single whitespace and end with a newline (you can
* specify '-n' to suppress the newline).
*
* This program doesn't interpret common backslash-escaped characters (for
 * example '\n' or '\c').
*/
public class JEcho {
public static void main(String[] args) {
boolean printNewline = true;
int posArgs = 0;
if (args.length > 0 && args[0].equals("-n")) {
printNewline = false;
posArgs = 1;
}
StringBuilder outputBuilder = new StringBuilder();
for (; posArgs < args.length; posArgs++) {
outputBuilder.append(args[posArgs]);
outputBuilder.append(" "); // Separator.
}
// Remove the trailing whitespace at the end.
int outputLength = outputBuilder.length();
if (outputLength > 0)
outputBuilder.deleteCharAt(outputBuilder.length() - 1);
String output = outputBuilder.toString();
if (printNewline)
System.out.println(output);
else
System.out.print(output);
}
}
```
| If you're using Java 8, you can use [`StringJoiner`](https://docs.oracle.com/javase/8/docs/api/java/util/StringJoiner.html).
```
/**
* JEcho writes any command line argument to the standard output; each argument
* is separated by a single whitespace and end with a newline (you can
* specify '-n' to suppress the newline).
*
* This program doesn't interpret common backslash-escaped characters (for
 * example '\n' or '\c').
*/
public class JEcho {
public static void main(String[] args) {
boolean printNewline = true;
int posArgs = 0;
if (args.length > 0 && args[0].equals("-n")) {
printNewline = false;
posArgs = 1;
}
StringJoiner outputBuilder = new StringJoiner(" ");
for (; posArgs < args.length; posArgs++) {
outputBuilder.add(args[posArgs]);
}
String output = outputBuilder.toString();
if (printNewline)
System.out.println(output);
else
System.out.print(output);
}
}
```
|
Why is this specs2 test using Mockito passing?
Suppose I had this interface and class:
```
abstract class SomeInterface{
def doSomething : Unit
}
class ClassBeingTested(interface : SomeInterface){
def doSomethingWithInterface : Unit = {
Unit
}
}
```
Note that the doSomethingWithInterface method does not actually do anything with the interface.
I create a test for it like this:
```
import org.specs2.mutable._
import org.specs2.mock._
import org.mockito.Matchers
import org.specs2.specification.Scope
trait TestEnvironment extends Scope with Mockito{
val interface = mock[SomeInterface]
val test = new ClassBeingTested(interface)
}
class ClassBeingTestedSpec extends Specification{
"The ClassBeingTested" should {
"#doSomethingWithInterface" in {
"calls the doSomething method of the given interface" in new TestEnvironment {
test.doSomethingWithInterface
there was one(interface).doSomething
}
}
}
}
```
This test passes. Why? Am I setting it up wrong?
When I get rid of the scope:
```
class ClassBeingTestedSpec extends Specification with Mockito{
"The ClassBeingTested" should {
"#doSomethingWithInterface" in {
"calls the doSomething method of the given interface" in {
val interface = mock[SomeInterface]
val test = new ClassBeingTested(interface)
test.doSomethingWithInterface
there was one(interface).doSomething
}
}
}
}
```
The test fails as expected:
```
[info] x calls the doSomething method of the given interface
[error] The mock was not called as expected:
[error] Wanted but not invoked:
[error] someInterface.doSomething();
```
What is the difference between these two tests? Why does the first one pass when it should fail? Is this not an intended use of Scopes?
| When you mix-in the `Mockito` trait to another trait you can create expectations like `there was one(interface).doSomething`. If such an expression fails it only returns a `Result`, it doesn't throw an `Exception`. It then gets lost in a `Scope` because it is just a "pure" value inside the body of a trait.
However if you mix-in the `Mockito` trait to a `mutable.Specification` then an exception will be thrown on a failure. This is because the `mutable.Specification` class specifies that there should be `ThrownExpectations` by mixing in that trait.
So if you want to create a trait extending both `Scope` and `Mockito`, you can either:
1. create the trait from inside the specification and not have it extend Mockito:
```
class MySpec extends mutable.Specification with Mockito {
trait TestEnvironment extends Scope {
val interface = mock[SomeInterface]
val test = new ClassBeingTested(interface)
}
...
}
```
2. create the trait and specification as you do, but mix in `org.specs2.execute.ThrownExpectations`
```
trait TestEnvironment extends Scope with Mockito with ThrownExpectations {
val interface = mock[SomeInterface]
val test = new ClassBeingTested(interface)
}
class MySpec extends mutable.Specification with Mockito {
...
}
```
|
MongoDB atomic "findOrCreate": findOne, insert if nonexistent, but do not update
As the title says, I want to perform a find (one) for a document by \_id and, if it doesn't exist, have it created; then, whether it was found or created, have it returned in the callback.
I don't want to update it if it exists, as I've read findAndModify does. I have seen many other questions on Stackoverflow regarding this but again, don't wish to update anything.
I am unsure whether the creation (when it doesn't exist) is actually the update everyone is talking about; it's all so confuzzling :(
| Beginning with MongoDB 2.4, it's no longer necessary to rely on a unique index (or any other workaround) for atomic `findOrCreate` like operations.
This is thanks to [the `$setOnInsert` operator](http://docs.mongodb.org/manual/reference/operator/setOnInsert/#op._S_setOnInsert) new to 2.4, which allows you to specify updates which should only happen when inserting documents.
This, combined with the `upsert` option, means you can use `findAndModify` to achieve an atomic `findOrCreate`-like operation.
```
db.collection.findAndModify({
query: { _id: "some potentially existing id" },
update: {
$setOnInsert: { foo: "bar" }
},
new: true, // return new doc if one is upserted
upsert: true // insert the document if it does not exist
})
```
As `$setOnInsert` only affects documents being inserted, if an existing document is found, no modification will occur. If no document exists, it will upsert one with the specified \_id and then apply the insert-only set. In both cases, the document is returned.
|