How to make sure that Thinkpad discharges external slice battery first?
The thing is that when an external slice battery pack is connected, the laptop first discharges the internal one. This is a problem because then it is impossible to switch between multiple external battery packs without powering off the laptop. It is also not possible to just disconnect the external battery pack when you e.g. need to insert it into a docking station.
Thus, my question: how to change the discharge order under Linux?
For my use cases, the following order would make much more sense: when an external slice battery is connected, first discharge it completely before using power from the internal battery.
Background: For some Thinkpad models (e.g. X220) there are slice battery packs ([example](http://shop.lenovo.com/us/en/itemdetails/0A36280/460/2D575BF5CD7E44E58547BFB52CC4F8E9)) available that you can connect to the bottom of the laptop (similar to a docking station).
They seem to be supported well on Linux - e.g. the Gnome Shell battery applet includes both batteries (internal and external) when connected - including displaying the current capacity for each.
| There's an (out-of-tree) module called `tp_smapi`, which provides access to (amongst other things) the battery-related functions of the embedded controller. This allows you to do things like setting the start/stop charging thresholds, the charge-inhibition timeout, and also forcing the discharge of a battery. Most distributions have a `tp_smapi` package providing the module; otherwise you can still download the sources from [Github](https://github.com/evgeni/tp_smapi/) and build them by hand.
When loading this module, it'll provide you with a `sysfs` interface under `/sys/devices/platform/smapi/`: one directory for every (possible) battery called `BATn` (where `n` would be `0` or `1` in your case) and some files you can write to. The file that could be the solution to your problem is called `force_discharge`. By writing `1` to it, you tell the embedded controller to forcibly discharge the corresponding battery (this even works on AC, which allows you to »recalibrate« the battery as is possible with the Thinkpad Windows tools) — `0` disables forced discharge, accordingly.
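For example, a minimal sketch (assuming the external slice shows up as `BAT1`; check both `BAT0` and `BAT1` to be sure):
```
# load the module (the package name may vary by distribution)
modprobe tp_smapi

# start forcibly discharging the slice battery
echo 1 > /sys/devices/platform/smapi/BAT1/force_discharge

# stop the forced discharge again
echo 0 > /sys/devices/platform/smapi/BAT1/force_discharge
```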
I'm a bit puzzled that your internal battery is used first, though. I had a X61s with the additional battery-pack and AFAIR it used the external one first (which is… intelligent, since at least the X61s e.g. didn't use the external battery for suspend-to-RAM for obvious reasons, where it would be bad to have the internal battery discharged to zero). Hrm.
|
How to override a specific table border CSS style while using Bootstrap table formatting
I'm using Bootstrap with tables, and trying to make some minor overrides to the default CSS with limited success.
In the table below, I'm able to add a dark border at the bottom of the table head (thead), and to the bottom of the table rows in the footer (tr in tfoot), but I cannot add a border to the bottom of the last table row (tr:last-child), or alternately the bottom of the table body (tbody), or I suppose the top of the table footer (tfoot).
I've had limited success with this:
```
.table-sm.event-table tbody > tr:last-child {
border-bottom: 2px solid #999;
}
```
However, this doesn't render in all browsers, and it only 'works' by turning the single-pixel light grey line into a 2-pixel dark line, which I don't want. I just want a single-pixel dark border between the last row of the body and the first row of the footer (between Row Two and Total Expense).
I know this has to do with the specificity of the CSS rules, and Bootstrap's rules taking precedence over my own, but even though I was able to make the other rules work, I cannot for the life of me figure out how to specify this one.
```
.event-table {
width: 100%;
}
.table thead > tr > th {
border-bottom: 1px solid #333;
}
.table tfoot > tr > td {
border-bottom: 1px solid #333;
}
```
```
<table class="table table-bordered table-sm event-table">
<thead>
<tr>
<th>Unit</th>
<th>Total</th>
</tr>
</thead>
<tfoot>
<tr>
<td>Total Expense $</td>
<td class="text-right">$200</td>
</tr>
<tr>
<td>Total Revenue $</td>
<td class="text-right">$300</td>
</tr>
</tfoot>
<tbody>
<tr>
<td>Row One</td>
<td>$100</td>
</tr>
<tr>
<td>Row Two</td>
<td>$100</td>
</tr>
</tbody>
</table>
```
| [**Specificity**](https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity) is the name of the game, and if you deal with Bootstrap, you'll quickly learn that it gets very complicated and even nigh impossible. While using `#id`s and `!important` may be an immediate remedy to your situation, they will bite you in the @rse even if used only moderately. Use only a few `#id`s if you must, and avoid `!important` at all costs.
A safer solution is to double up on a class:
>
> *As a nonsense special case for (2), duplicate simple selectors to increase specificity when you have nothing more to specify.*
>
>
>
[**MDN - The !important exception**](https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity#The_!important%20exception)
The following demo has each table section (i.e. `<thead>`, `<tbody>`, and `<tfoot>`) with its last row's `border-bottom` in a different color. Note that the bootstrap.css file is loaded as well, so it does work to the best of my knowledge and the evidence at hand.
## Demo
```
.event-table {
width: 100%;
}
.table thead>tr.rowA1.rowA1>th {
border-bottom: 1px solid red;
}
.table tbody>tr.rowB2.rowB2>td {
border-bottom: 1px solid lime;
}
.table tfoot>tr.rowC2.rowC2>td {
border-bottom: 1px solid blue;
}
```
```
<link rel='stylesheet' href='https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css'>
<table class="table table-bordered table-sm event-table">
<thead>
<tr class='rowA1'>
<th>Unit</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr class='rowB1'>
<td>Row One</td>
<td>$100</td>
</tr>
<tr class='rowB2'>
<td>Row Two</td>
<td>$100</td>
</tr>
</tbody>
<tfoot>
<tr class='rowC1'>
<td>Total Expense $</td>
<td>$200</td>
</tr>
<tr class='rowC2'>
<td>Total Revenue $</td>
<td>$300</td>
</tr>
</tfoot>
</table>
```
|
How to compare a value in handlebars?
I want display different HTML depending on a condition.
It doesn't seem to compare those two values and it always shows the first variant. How can I compare the predefined values to the original value from JSON so that it can execute properly?
```
{{#each this}}
{{#each visits}}
<div class="row">
{{#if variable_from_json }}
<div class="col-lg-2 col-md-2 col-sm-2">
<i class="fa fa-home"></i>
</div>
{{else}}
<div class="col-lg-2 col-md-2 col-sm-2">
<i class="fa fa-plus symbol-hospital"></i>
</div>
{{/if}}
</div>
{{/each}}
{{/each}}
```
JS code
```
Handlebars.registerHelper('if', function (variable_from_json, options) {
if (variable_from_json === "M") {
return options.fn(this);
} else {
return options.inverse(this);
}
});
```
| To compare values (e.g. `==`, `>=`, `||`, `&&`), create one helper which will handle all the cases:
```
Handlebars.registerHelper( "when",function(operand_1, operator, operand_2, options) {
var operators = {
'eq': function(l,r) { return l == r; },
'noteq': function(l,r) { return l != r; },
'gt': function(l,r) { return Number(l) > Number(r); },
'or': function(l,r) { return l || r; },
'and': function(l,r) { return l && r; },
'%': function(l,r) { return (l % r) === 0; }
}
, result = operators[operator](operand_1,operand_2);
if (result) return options.fn(this);
else return options.inverse(this);
});
```
Use this helper in your Handlebars file. For example, for the `==` (`eq`) operator:
```
{{#when <operand1> 'eq' <operand2>}}
// do something here
{{/when}}
```
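Applied to the template from the question, the custom `if` helper can then be replaced like this (a sketch, assuming the value is still compared against `"M"`):
```
{{#when variable_from_json 'eq' 'M'}}
<div class="col-lg-2 col-md-2 col-sm-2">
<i class="fa fa-home"></i>
</div>
{{else}}
<div class="col-lg-2 col-md-2 col-sm-2">
<i class="fa fa-plus symbol-hospital"></i>
</div>
{{/when}}
```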
|
Is it possible to implement next\_permutation() on an STL set
Given this set
```
set<string> s = {"a","b","c"};
```
is it possible to implement next\_permutation() to get all combinations, where elements do not repeat and order matters?
| No, it is not possible. [`std::set`](http://en.cppreference.com/w/cpp/container/set) is an associative container and maintains a strict weak ordering. [`std::next_permutation`](http://en.cppreference.com/w/cpp/algorithm/next_permutation) transforms the range it is given, which would break the ordering.
If you need to get the permutations of the contents of the `set` I suggest you use a [`std::vector`](http://en.cppreference.com/w/cpp/container/vector). You can copy the set into the vector and then get the permutation from that.
```
#include <algorithm>
#include <set>
#include <vector>

std::set<int> set_data;
// fill set
std::vector<int> temp(set_data.begin(), set_data.end());
do
{
// code goes here
}
while(std::next_permutation(temp.begin(), temp.end()));
```
|
How to mock Axios as default export with Jest
How do I mock `axios`, which is exported as a default function?
I have an API helper that generalizes API requests with `axios()`:
**api.js**
```
import axios from 'axios';

export const callApi = (endpoint, method, data = {}) => {
return axios({
url: endpoint,
method,
data
})
.then((response) => { /* handle response */ })
.catch((error) => { /* handle error */ })
};
```
**api.spec.js**
```
import axios from 'axios';
import { callApi } from './api';
describe('callApi()', () => {
it('calls `axios()` with `endpoint`, `method` and `body`', () => {
// mock axios()
jest.spyOn(axios, 'default');
const endpoint = '/endpoint';
const method = 'post';
const data = { foo: 'bar' };
// call function
callApi(endpoint, method, data);
// assert axios()
expect(axios.default).toBeCalledWith({ url: endpoint, method, data});
});
});
```
**result**
```
Expected mock function to have been called with:
[{"data": {"foo": "bar"}, "method": "post", "url": "/endpoint"}]
But it was not called.
```
The call works fine if I mock `axios.get()` or other methods, but not for just `axios()`. I don't want to change the definition of the `callApi()` function.
How do I mock default `axios()`? What did I miss?
| You cannot use `jest.spyOn(axios, 'default')` when you call `axios` directly (no `default`). Changing your implementation in `api.js` to be `axios.default(...args)` makes the test pass.
---
A potential change you can make is to use `jest.mock('axios')` instead of using `jest.spyOn`.
```
import axios from 'axios';
import { callApi } from './api';
jest.mock('axios');
// Make sure to resolve with a promise
axios.mockResolvedValue();
describe('callApi()', () => {
it('calls `axios()` with `endpoint`, `method` and `body`', () => {
const endpoint = '/endpoint';
const method = 'post';
const data = { foo: 'bar' };
// call function
callApi(endpoint, method, data);
// assert axios()
expect(axios).toBeCalledWith({ url: endpoint, method, data});
});
});
```
|
How to install Mezzanine on Webfaction Server
I've been choosing a Django CMS for my personal site, and I decided that Mezzanine would be the one that most satisfies my needs. But I find it impossible to install it on WebFaction. There are all kinds of errors going on, and for a noob like me it's very frustrating. I followed this awesome guide, <http://ijcdigital.com/blog/installing-django-mezzanine-at-webfaction/>, which was probably written a year ago. Unfortunately, I simply can't get it working by following that guide. So could anyone provide a more up-to-date and easy-to-follow guide for installing Mezzanine on WebFaction?
Thanks very much.
| To install Mezzanine on a WebFaction hosting account, first create a new PostgreSQL database via the WF control panel, and make a note of the database name and password.
Next, create a "Django 1.6.10 (mod\_wsgi 3.5/Python 2.7)" application and assign it to a website.
Next, SSH into your server and run the following commands (replacing `name_of_your_app`, `database_name`, `database_password`, and `my_cms` with appropriate values):
```
mkdir -p ~/lib/python2.7
easy_install-2.7 pip
cd ~/webapps/name_of_your_app
export PYTHONPATH=$PWD/lib/python2.7
pip2.7 install -U --install-option="--install-scripts=$PWD/bin" --install-option="--install-lib=$PWD/lib/python2.7" mezzanine
~/bin/mezzanine-project my_cms
cd my_cms
sed -i 's/"ENGINE": "django.db.backends.sqlite3"/"ENGINE": "django.db.backends.postgresql_psycopg2"/g' local_settings.py
sed -i 's/"NAME": "dev.db"/"NAME": "database_name"/g' local_settings.py
sed -i 's/"USER": ""/"USER": "database_name"/g' local_settings.py
sed -i 's/"PASSWORD": ""/"PASSWORD": "database_password"/g' local_settings.py
sed -i 's/DEBUG = True/DEBUG = False/g' local_settings.py
echo "ALLOWED_HOSTS = ['yourdomain.com',]" >> local_settings.py
python2.7 manage.py createdb --noinput
python2.7 manage.py collectstatic --noinput
sed -i 's/myproject\/myproject\/wsgi.py/my_cms\/wsgi.py/g' ../apache2/conf/httpd.conf
sed -i 's/myproject/my_cms/g' ../apache2/conf/httpd.conf
../apache2/bin/restart
```
Next, create a "Symbolic link to static-only app" in the control panel, using "`/home/your_username/webapps/name_of_your_app/my_cms/static`" as the symlink path (in the "extra info" field), then assign that app to your site, using '`/static`' as the URL path.
Then just wait a few minutes for that last change to gel, and you're done.
|
AWS Lambda Container Image Support Vs Fargate
I'm evaluating a solution approach using Docker containers. Now that Lambda also supports container images, it falls into my consideration too.
I'm evaluating based on the following factors
1. Pricing model of the 2 services
2. Cold start issue
3. Ease of Lambda integration with other AWS services
4. Ease of offline development with Docker containers (I think it's not that relevant now)
Any other factor I need to consider between the 2 services?
| Although both services allow you to run Docker images now, they both have different application types they target.
Typically, you want a Docker container to run for a while and not just a few seconds. Furthermore, you would often have the "whole" application in a Docker image.
Lambda offers you short running compute power (seconds to minutes) for small tasks, not a whole application (they are called Lambda *functions*). So comparing them using your "metrics" might not be the right approach.
First, you need to find out for *how long* you want your containers to run. If the answer is longer than 900 seconds, you don't need to compare Fargate to AWS Lambda, since Lambda can only run for a maximum of 900 seconds.
Second, you need to check *what* you actually want to run in the container. As I said before, Lambdas are made for small, short running functions, not "hosting" whole applications (e.g. web servers with Node/Rails/Django apps). If you want to run a whole application on Lambda, you would need to decompose it. Static files on S3 etc., and backend API with AWS API Gateway and AWS Lambda.
That said, if you really want your containers to run less than 900s and want to compare between the two, then here are a few more ideas:
1. Amount of available resources (memory, vCPU)
2. Ease of deployment (depends on your development practices)
3. How well can they be tested?
4. How familiar is your team with either technology?
5. Differences in security models? How easy is it to secure?
Some of those depend heavily on your experience, team and your practices, but should be factored in.
|
Can Advanced Filter criteria be in the VBA rather than a range?
After trying in vain to set more than 2 criteria in a normal AutoFilter fashion via VBA, I have come to learn that it must be done via advanced filter.
offending example:
```
Worksheets(1).Range("A1").AutoFilter Field:=ColNum, Criteria1:="A*", Operator:=xlOr, Criteria2:="B*", Operator:=xlOr, Criteria3:="C*"
```
I am hoping to pass the criteria through to a function (rather than a macro) from a PowerShell script. I have it all working fine and as expected for 1 criteria, but now I'd like 3.
I suppose I could instead write a macro to insert a new sheet, write in the criteria then filter on that new range but I'd rather check the preferred way first.
| To filter on multiple wildcards, build a variant array of the values that match the wildcards, then use that array of full values with the standard AutoFilter method. You can keep the array down to unique entries by using a Scripting.Dictionary object, whose keys are unique by design.
Consider the following sample data.
[](https://i.stack.imgur.com/pPquj.png)
Run this code.
```
Sub multiWildcards()
Dim v As Long, vVALs As Variant, dVALs As Object
Dim colNum As Long
Set dVALs = CreateObject("Scripting.Dictionary")
dVALs.comparemode = vbTextCompare
colNum = 2 'column B
With Worksheets(1)
If .AutoFilterMode Then .AutoFilterMode = False
With .Cells(1, 1).CurrentRegion
vVALs = .Columns(colNum).Cells.Value2
For v = LBound(vVALs, 1) To UBound(vVALs, 1)
If Not dVALs.exists(vVALs(v, 1)) Then
Select Case UCase(Left(vVALs(v, 1), 1))
Case "A", "B", "C"
dVALs.Add Key:=vVALs(v, 1), Item:=vVALs(v, 1)
Case Else
'do nothing
End Select
End If
Next v
If CBool(dVALs.Count) Then
'populated the dictionary; now use the keys
.AutoFilter Field:=colNum, Criteria1:=dVALs.keys, Operator:=xlFilterValues
Else
Debug.Print "Nothing to filter on; dictionary is empty"
End If
'.CurrentRegion is now filtered on A*, B*, C* in column B
'do something with it
End With
End With
dVALs.RemoveAll: Set dVALs = Nothing
End Sub
```
Results should be:
[](https://i.stack.imgur.com/CCLOQ.png)
These results can be duplicated with many other wildcard scenarios. The [Select Case statement](https://msdn.microsoft.com/en-us/library/office/gg278665.aspx) is ideal as it supports the **Like** keyword for building your collection of matches. By starting with a value dump into a regular variant array, cycling through even large rows of data can be done quickly.
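For instance, a sketch of a wildcard-driven variant of the Select Case block above (the patterns themselves are illustrative):
```
Select Case True
    Case vVALs(v, 1) Like "A*", vVALs(v, 1) Like "B[0-9]*", vVALs(v, 1) Like "C?n*"
        'value matches one of the wildcard patterns; collect it
        dVALs.Add Key:=vVALs(v, 1), Item:=vVALs(v, 1)
    Case Else
        'do nothing
End Select
```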
|
When printing a variable that contains newlines, why is the last newline stripped?
Contents of file.txt (no weirdness, text file as defined by POSIX)
```
iguana
gecko
anole
```
Sample script:
```
#!/bin/sh
string="$(cat file.txt)"
printf '%s' "$string"
```
Sample output:
```
[coolguy@somemachine ~]$ ./script.sh
iguana
gecko
anole[coolguy@somemachine ~]$
```
What happened to the last newline? Why are all newlines except the last one preserved? It seems like we shouldn't have to use echo to add a newline when there should already be one there.
| It's not the printing, it's the command substitution that does that. It's defined to do that. From the [POSIX description](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06_03):
>
> The shell shall expand the command substitution by executing command in a subshell environment and replacing the command substitution with the standard output of the command, **removing sequences of one or more {newline} characters at the end of the substitution**.
>
>
>
Note that it removes *all* trailing newlines, not just one.
In a somewhat common case, you'd use command substitution to capture a one-line output, say `osrev=$(uname -r)`. The utilities usually print a trailing newline, for user convenience on the command line. But in a shell script, you might want to use that string as part of another one, say a filename: `filename=blahblah-$osrev.dat`. And in that case, the trailing newline would only be a nuisance.
And of course, a plain `echo` will add a final newline in any case.
---
If you want the contents of the file as-is in the variable, then the common workaround is to add an extra character in the command substitution, and remove that later:
```
printf "foo\nbar\n\n" > file
string=$(cat file; echo x)
string=${string%x}
printf '%q\n' "$string"
```
that outputs `$'foo\nbar\n\n'`, showing both trailing newlines present.
---
Depending on what you intend to do with the data, there may be other ways. E.g. a `while read` loop, or Bash's `mapfile`, if you happen to want to process the file line-by-line.
|
What is the difference between a PreBuildEvent, BeforeBuild target and BeforeCompile target in MSBuild?
I recently had to move some code from a [PreBuildEvent in Visual Studio into the BeforeBuild target to make it work on AppHarbor](http://blog.dantup.com/2011/05/setting-up-nuget-to-automatically-fetch-packages-when-deploying-to-appharbor-without-storing-binaries-in-source-control). While doing so, I also noticed a BeforeCompile target.
What is the difference between these three seemingly similar events: PreBuildEvent, BeforeBuild Target, BeforeCompileTarget?
What can/can't be done with each, and why would you pick one over another?
| The answer to this question can be found in the `Microsoft.Common.targets` file, which can be found (depending on whether you're using the 64-bit or 32-bit framework) at `C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets` for 64-bit and
`C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets` for the 32-bit runtime. This file defines all the steps a build of your project undergoes. Quoting the source:
```
<!--
============================================================
Build
The main build entry point.
============================================================
-->
<PropertyGroup>
<BuildDependsOn>
BeforeBuild;
CoreBuild;
AfterBuild
</BuildDependsOn>
</PropertyGroup>
```
The code is nice enough to explain the use of the `BeforeBuild` and `AfterBuild` target in the comments for both targets.
```
<!--
============================================================
BeforeBuild
Redefine this target in your project in order to run tasks just before Build
============================================================
-->
<Target Name="BeforeBuild"/>
<!--
============================================================
AfterBuild
Redefine this target in your project in order to run tasks just after Build
============================================================
-->
<Target Name="AfterBuild"/>
```
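For instance, a minimal sketch of redefining one of these targets at the bottom of your own project file (the message text is illustrative):
```
<Target Name="BeforeBuild">
<Message Text="Running custom steps before the build..." Importance="high" />
</Target>
```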
This is followed by the definition of the `CoreBuild` target:
```
<PropertyGroup>
<CoreBuildDependsOn>
BuildOnlySettings;
PrepareForBuild;
PreBuildEvent;
ResolveReferences;
PrepareResources;
ResolveKeySource;
Compile;
UnmanagedUnregistration;
GenerateSerializationAssemblies;
CreateSatelliteAssemblies;
GenerateManifests;
GetTargetPath;
PrepareForRun;
UnmanagedRegistration;
IncrementalClean;
PostBuildEvent
</CoreBuildDependsOn>
</PropertyGroup>
```
So the `Build` target is just a wrapper around the `CoreBuild` target to enable you to perform custom steps just before or after the `CoreBuild` target. As can be seen above the `PreBuildEvent` and `PostBuildEvent` are listed as dependencies of the `CoreBuild` target. The dependencies of the `Compile` target are defined as follows:
```
<PropertyGroup>
<CompileDependsOn>
ResolveReferences;
ResolveKeySource;
SetWin32ManifestProperties;
_GenerateCompileInputs;
BeforeCompile;
_TimeStampBeforeCompile;
CoreCompile;
_TimeStampAfterCompile;
AfterCompile
</CompileDependsOn>
</PropertyGroup>
```
Again `BeforeCompile` and `AfterCompile` are commented in the code:
```
<!--
============================================================
BeforeCompile
Redefine this target in your project in order to run tasks just before Compile.
============================================================
-->
<Target Name="BeforeCompile"/>
<!--
============================================================
AfterCompile
Redefine this target in your project in order to run tasks just after Compile.
============================================================
-->
<Target Name="AfterCompile"/>
```
Given this information I do not know why AppHarbor does not support `Pre-, PostBuildEvent` while the `Build` can be modified using `Before-, AfterBuild`.
Choosing which `Target` to override for which scenario depends on the moment during the build at which you wish to perform your given task. The targets do not have specific restrictions and/or benefits as to what they can accomplish, apart from the fact that they can adapt `ItemGroup`s or properties that were defined/filled by previous steps.
Using NuGet to bring in packages is probably best performed before the build tries to resolve the project's dependencies. So `BeforeCompile` is not a good candidate for this kind of action.
I hope this sheds some light on the matter. I also found another nice explanation on [MSDN](http://msdn.microsoft.com/en-us/library/ms366724(v=vs.80).aspx).
|
Convert an xml element whose content is inside CDATA
I have an XML fragment like the one below:
```
<Detail uid="6">
<![CDATA[
<div class="heading">welcome to my page</div>
<div class="paragraph">this is paraph</div>
]]>
</Detail>
```
and I want to be able to change the
```
<div class="heading">...</div> to <h1>Welcome to my page</h1>
<div class="paragraph">...</div> to <p>this is paragraph</p>
```
Do you know how I can do that in XSLT 1.0?
| What about running two transforms?
Pass 1.)
```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes" encoding="UTF-8"/>
<xsl:template match="/">
<xsl:apply-templates />
</xsl:template>
<xsl:template match="Detail">
<Detail>
<xsl:copy-of select="@*"/>
<xsl:value-of select="." disable-output-escaping="yes" />
</Detail>
</xsl:template>
</xsl:stylesheet>
```
Will produce:
```
<?xml version="1.0" encoding="UTF-8"?>
<Detail uid="6">
<div class="heading">welcome to my page</div>
<div class="paragraph">this is paraph</div>
</Detail>
```
Pass 2.)
```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes" encoding="UTF-8"/>
<xsl:template match="/">
<xsl:apply-templates />
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*| node()" />
</xsl:copy>
</xsl:template>
<xsl:template match="div[@class='heading']">
<h1><xsl:value-of select="."/></h1>
</xsl:template>
<xsl:template match="div[@class='paragraph']">
<p><xsl:value-of select="."/></p>
</xsl:template>
</xsl:stylesheet>
```
Produces:
```
<?xml version="1.0" encoding="UTF-8"?>
<Detail uid="6">
<h1>welcome to my page</h1>
<p>this is paraph</p>
</Detail>
```
|
Kafka Stream custom State Store
I have been reading the docs about state stores, but it is still not clear to me whether one can fit my purpose. I would like to use some distributed graph database as a state store that other external applications can consume from. Is that possible, what effort does that involve, and can anyone point me to the class/code that would need to be extended for that functionality to happen?
| You can implement custom state store using Processor API as described here :
<https://docs.confluent.io/current/streams/developer-guide/interactive-queries.html#querying-local-custom-state-stores>
- Your custom state store must implement StateStore.
- You must have an interface to represent the operations available on the store.
- You must provide an implementation of StoreBuilder for creating instances of your store.
- It is recommended that you provide an interface that restricts access to read-only operations. This prevents users of this API from mutating the state of your running Kafka Streams application out-of-band.
Implementation will look something like this :
```
public class MyCustomStore<K,V> implements StateStore, MyWriteableCustomStore<K,V> {
// implementation of the actual store
}
// Read-write interface for MyCustomStore
public interface MyWriteableCustomStore<K,V> extends MyReadableCustomStore<K,V> {
void write(K Key, V value);
}
// Read-only interface for MyCustomStore
public interface MyReadableCustomStore<K,V> {
V read(K key);
}
public class MyCustomStoreBuilder implements StoreBuilder<MyCustomStore<K,V>> {
// implementation of the supplier for MyCustomStore
}
```
In order to make it queryable:
- Provide an implementation of QueryableStoreType.
- Provide a wrapper class that has access to all of the underlying instances of the store and is used for querying.
Example :
```
public class MyCustomStoreType<K,V> implements QueryableStoreType<MyReadableCustomStore<K,V>> {
// Only accept StateStores that are of type MyCustomStore
public boolean accepts(final StateStore stateStore) {
return stateStore instanceof MyCustomStore;
}
public MyReadableCustomStore<K,V> create(final StateStoreProvider storeProvider, final String storeName) {
return new MyCustomStoreTypeWrapper(storeProvider, storeName, this);
}
}
```
|
What is the advantage of PyTables?
I have recently started learning about PyTables and found it very interesting. My question is:
- What are the basic advantages of PyTables over database(s) when it comes to huge datasets?
- What is the basic purpose of this package (I can do same sort of structuring in NumPy and Pandas, so what's the big deal with PyTables)?
- Is it really helpful in analysis of big datasets? Can anyone elaborate with the help of any example and comparisons?
Thank you all.
|
>
> What are the basic advantages of PyTables over database(s) when it comes to huge datasets?
>
>
>
Effectively, it *is* a database. Of course it's a hierarchical database rather than a 1-level key-value database like `dbm` (which are obviously much less flexible) or a relational database like `sqlite3` (which are more powerful, but more complicated).
But the main advantage over a non-numerics-specific database is exactly the same as the advantage of, say, a numpy `ndarray` over a plain Python `list`. It's optimized for performing lots of vectorized numeric operations, so if that's what you're doing with it, it's going to take less time and space.
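For instance, a minimal sketch (file and array names are illustrative) of out-of-core storage, where data is appended and read back in chunks so the full array never has to fit in RAM:
```
import numpy as np
import tables

# write: append chunks to an extendable array that lives on disk
with tables.open_file('data.h5', 'w') as f:
    arr = f.create_earray(f.root, 'x', tables.Float64Atom(), shape=(0, 10000))
    for _ in range(100):
        arr.append(np.random.rand(100, 10000))  # only one chunk in RAM at a time

# read: pull back just the slice you need
with tables.open_file('data.h5', 'r') as f:
    chunk = f.root.x[:100]
```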
>
> What is the basic purpose of this package
>
>
>
Quoting from the first line of [the front page](https://www.pytables.org/index.html) (or, if you prefer, the first line of [the FAQ](https://www.pytables.org/FAQ.html)):
>
> PyTables is a package for managing hierarchical datasets and designed to efficiently and easily cope with extremely large amounts of data.
>
>
>
There's also a page listing the [MainFeatures](https://www.pytables.org/usersguide/introduction.html), linked near the top of the front page.
>
> (I can do same sort of structuring in NumPy and Pandas, so what's the big deal with PyTables)?
>
>
>
Really? You can handle 64GB of data in numpy or pandas on a machine with only 16GB of RAM? Or a 32-bit machine?
No, you can't. Unless you split your data up into a bunch of separate sets that you load, process, and save as needed—but that's going to be much more complicated, and much slower.
It's like asking why you need numpy when you can do the same thing with just regular Python list and iterators. Pure Python is great when you have an array of 8 floats, but not when you have a 10000x10000 array of them. And numpy is great when you have a couple of 10000x10000 arrays, but not when you have a dozen interconnected arrays ranging up to 20GB in size.
>
> Is it really helpful in analysis of big datasets?
>
>
>
Yes.
>
> Can anyone elaborate with the help of any example…
>
>
>
Yes. Rather than copying all of the examples here, why don't you just look at the simple examples on the front page of the docs, the slew of examples in the source tree, the links to real-world use cases two clicks from the front page of the docs, etc.?
If you want to convince yourself of the usefulness of PyTables, take any of the examples and scale it up to 32GB worth of data, then try to figure out how you'd do the exact same thing in numpy or pandas.
|
Algorithm for finding internally connected cluster of nodes within a graph from which no edge points outwards
I am representing my graph as a adjacency list. I want to know how can I find a cluster of nodes which are internally connected but no edge points outwards from them. Is there any well known algorithm out there which I can use?
For example, this is my graph:
```
1---->2
2---->1
2---->3
3---->1
3---->4
4---->5
5---->4
```
Here nodes 4 and 5 are internally connected. Yet no outside edge comes from this. This would be my answer. Similarly nodes 1,2,3 even though they form a cycle, do not fit the criteria as an outside edge emanates from node 3.
So it is not same as finding a cycle in a adjacency list.
Optional read: (why I need this)
I am working on a PageRank-style (search engine) algorithm; nodes like 4 and 5 are called a rank-sink.
| You could detect [strongly connected components](http://en.wikipedia.org/wiki/Strongly_connected_component) using the [Kosaraju](http://en.wikipedia.org/wiki/Kosaraju%27s_algorithm), [Tarjan](http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm) or [Cheriyan-Mehlhorn/Gabow](http://en.wikipedia.org/wiki/Cheriyan%E2%80%93Mehlhorn/Gabow_algorithm) algorithm.
After finding these components, you compress each strongly connected components into one single node (i.e. you represent a whole component by a single node).
In the resulting graph, you look for nodes with no outgoing edges. These nodes represent the components you are interested in.
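For illustration, a sketch of these three steps using Python's networkx (an assumption; any implementation of the algorithms above works) on the example graph from the question:
```
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 1), (3, 4), (4, 5), (5, 4)])
C = nx.condensation(G)  # each node of C is one strongly connected component of G
sinks = [C.nodes[n]['members'] for n in C if C.out_degree(n) == 0]
print(sinks)  # [{4, 5}] -- the rank-sink from the question
```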
|
Java Swing - user alerts
I am trying to build a user alert mechanism by bringing up the window to the front and then flashing the icon on the screen for the user. I have two questions with regards to this approach:
1. How can you find the current window you are at in Java and then de-minimize it and bring it to the front?
2. Is there a mechanism in Java that would enable me to simply show the icon for a second or two and then hide it, in the middle of the screen? If not, what would be the way to achieve that?
Thanks a lot for any replies.
|
>
> How can you find the current window you are at in Java and then de-minimize it and bring to front
>
>
>
```
Window[] allWindows = Window.getWindows();
```
- returns an array of all `Top-Level Containers` from the current JVM, e.g. `J/Frame`, `J/Dialog` (`JOptionPane`), `J/Window`,
- you can then test, for example, `if (allWindows[i] instanceof JFrame) {`
- then check the [WindowState](http://docs.oracle.com/javase/7/docs/api/java/awt/event/WindowStateListener.html) returned by the [WindowEvent](http://docs.oracle.com/javase/7/docs/api/java/awt/event/WindowEvent.html)
>
> by bringing up the window to the front and then flashing the icon on the screen for the user
>
>
>
use an undecorated `JDialog` (`toFront` and `toBack` work) with
- create only once time
- `setDefaultCloseOperation(HIDE_ON_CLOSE)`
- use a `Swing Timer` to hide the `JDialog`
>
> Is there a mechanism in Java that would enable me to simply show the icon for a second or two and then hide it, in the middle of the screen? If not, what would be the way to achieve that?
>
>
>
- have a look at `Java Translucent Window`, and put the `Icon` into a `JLabel` (or a `JButton`)
use a `Swing Timer` for flashing, by hiding the `Icon` or switching between two or more `Icons` (three or four is good)
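A minimal sketch of the flashing part with a Swing `Timer` (the icon path and timings are illustrative):
```
import javax.swing.*;

// ...on the Event Dispatch Thread, with 'label' already shown in your dialog:
final Icon alertIcon = new ImageIcon("alert.png"); // illustrative path
final JLabel label = new JLabel(alertIcon);
final int[] ticks = {0};
final Timer timer = new Timer(500, e -> { // javax.swing.Timer fires on the EDT
    label.setIcon(label.getIcon() == null ? alertIcon : null); // show/hide the icon
    if (++ticks[0] >= 8) ((Timer) e.getSource()).stop(); // stop after four flashes
});
timer.start();
```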
|
What's the relationship between `<Provider>` and `connect()` in React-Redux?
I'm brand new to Redux and I'm trying to figure out the relationship between [`<Provider>`](https://github.com/reactjs/react-redux/blob/master/docs/api.md#provider-store) and [`connect()`](https://github.com/reactjs/react-redux/blob/master/docs/api.md#connectmapstatetoprops-mapdispatchtoprops-mergeprops-options).
As I understand it, `connect` *connects* your component to the store. However, nowhere in the function arguments do you tell `connect` where exactly that store is!
If I'm not mistaken, the store is automagically provided to `connect()` by the `<Provider>`. This to me seems very counter-intuitive, because the [entire point](http://redux.js.org/docs/introduction/Motivation.html) of Redux is to be transparent.
So my question is, how does `<Provider>` pass the store off to `connect()` without using some sort of global variable? Does it traverse the entire tree, searching for `connect`ed components and then inject itself? Is that not inefficient? And if so, how would I use two different stores within the same component tree?
Secondly, supposing I don't want to use `<Provider>`, how can I use `connect()` without it? i.e., how can I explicitly pass a store to each connected component?
| `<Provider>` and `connect` are part of the `react-redux` module. They work together, you shouldn't really use one without the other. You *can* use `redux` on its own without `react-redux`, but you'll probably end up re-creating some or all of the features that `react-redux` provides.
`react-redux` works by using the [React context](https://facebook.github.io/react/docs/context.html). Context is like a hidden layer for passing variables that are shared by multiple components without explicitly passing them. To use context, you need to set the context somewhere, but also, any component that wants to use something from the `context` needs to get the variable. In `react-redux` `<Provider>` essentially saves the store to the `context` and `connect` provides a way to get the store from the `context`.
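A minimal sketch of how the two fit together (the `reducer` and `Counter` component are illustrative):
```
import React from 'react';
import { render } from 'react-dom';
import { createStore } from 'redux';
import { Provider, connect } from 'react-redux';

const reducer = (state = { value: 0 }) => state;
const store = createStore(reducer);

// connect() reads the store that <Provider> placed on the context
const Counter = connect(state => ({ value: state.value }))(
    props => <div>{props.value}</div>
);

render(
    <Provider store={store}>
        <Counter />
    </Provider>,
    document.getElementById('root')
);
```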
If you haven't already, I [recommend these videos](https://egghead.io/courses/getting-started-with-redux) for getting started with Redux and `react-redux` from the creator of Redux.
|
NestJs async httpService call
How can I use Async/Await on `HttpService` using NestJs?
The code below doesn't work:
```
async create(data) {
return await this.httpService.post(url, data);
}
```
| The `HttpModule` uses `Observable` not `Promise` which doesn't work with async/await. All `HttpService` methods return `Observable<AxiosResponse<T>>`.
So you can either transform it to a `Promise` and then use await when calling it or just return the `Observable` and let the caller handle it.
```
create(data): Promise<AxiosResponse> {
return this.httpService.post(url, data).toPromise();
^^^^^^^^^^^^^
}
```
Note that `return await` is almost (with the exception of try catch) always redundant.
## Update 2022
`toPromise` is deprecated. Instead, you can use `firstValueFrom`:
```
import { firstValueFrom } from 'rxjs';
// ...
return firstValueFrom(this.httpService.post(url, data))
```
|
Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives?
I am training a multi-label classification model for detecting attributes of clothes. I am using transfer learning in Keras, retraining the last few layers of the vgg-19 model.
The total number of attributes is 1000 and about 99% of them are 0s. Metrics like accuracy, precision, recall, etc., all fail, as the model can predict all zeroes and still achieve a very high score. Binary cross-entropy, hamming loss, etc., haven't worked in the case of loss functions.
I am using the deep fashion dataset.
So, which metrics and loss functions can I use to measure my model correctly?
| What hassan has suggested is not correct -
Categorical Cross-Entropy loss or Softmax Loss is a *Softmax activation* plus a Cross-Entropy loss. If we use this loss, we will train a CNN to output a probability over the C classes for each image. It is used for **multi-class classification**.
What you want is multi-label classification, so you will use **Binary Cross-Entropy Loss** or Sigmoid Cross-Entropy loss. It is a *Sigmoid activation* plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every CNN output vector component is not affected by other component values. That’s why it is used for multi-label classification, where the insight of an element belonging to a certain class should not influence the decision for another class.
Now for handling class imbalance, you can use weighted Sigmoid Cross-Entropy loss. So you will penalize for wrong prediction based on the number/ratio of positive examples.
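A minimal sketch of such a weighted loss (assuming TensorFlow/Keras with sigmoid outputs; the `pos_weight` value is illustrative and should reflect your negative-to-positive ratio):
```
import tensorflow as tf

def make_weighted_bce(pos_weight):
    # Binary cross-entropy where the positive terms are up-weighted by pos_weight
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)  # avoid log(0)
        bce = -(pos_weight * y_true * tf.math.log(y_pred)
                + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
        return tf.reduce_mean(bce)
    return loss

# model.compile(optimizer='adam', loss=make_weighted_bce(pos_weight=50.0))
```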
|
Yeoman CSS image paths
When I build my [Yeoman](http://yeoman.io/) project (nothing special, I'm only using jQuery and Modernizr), the images used with CSS aren't shown.
**My CSS-code**
```
.contact {
background:url(../icon-contact.png) no-repeat top center;
}
```
**Output after building the app (no difference)**
```
.contact {background:url(../icon-contact.png) no-repeat top center;}
```
This doesn't work because the filename of icon-contact.png has changed to *f91724e0.icon-contact.png*.
How can I make sure that the image-paths are updated in the minified CSS-file?
---
**Edit: I've added my solution as an answer**
| ### Update (24 Feb. 2014)
You can fix this by adding `<%= yeoman.dist %>/images` to the `assetsDirs` of the usemin task. See <http://www.github.com/yeoman/yeoman/issues/824#issuecomment-32691465>
### Previous answer (4 Mar. 2013)
There is an issue named ["usemin:css" doesn't generate correct image paths for relative urls](https://github.com/yeoman/yeoman/issues/824) on GitHub about this particular problem. [Tak Tran](https://github.com/taktran) has made a branch of Yeoman and implemented a fix.
Here is how I removed the current Yeoman installation and installed the branched version by Tak Tran:
```
npm uninstall yeoman -g
git clone git://github.com/taktran/yeoman.git
cd yeoman/cli
npm install -g
npm link
```
Source: <https://github.com/yeoman/yeoman/wiki/Additional-FAQ>
With this fix, Yeoman will rename relative image paths in CSS, and my question is answered. Thanks everyone for the help!
|
rotation of videobrush for photocapturedevice
I'm having some problems with the orientation of my videobrush displaying the photocapturedevice on my phone. It actually should be as flexible as the built-in camera application, which means it should work for
- all aspect ratios
- both cameras (back and front)
- and all page orientations
At least one of these is always wrong. I tried <https://projects.developer.nokia.com/cameraexplorer> to get it working, but even though it has the best approach, it's not working for me on different page orientations, and the front camera rotates the wrong way (counterclockwise when I rotate my phone clockwise, so I'm upside-down).
Is there any code-snippet with a complete working camera videobrush?
| To display the viewfinder correctly, you need two pieces of information:
- **orientation**: the preview picture orientation relative to your page orientation
- **scale**: the factor between the preview picture size and the XAML control
First you need a canvas with the video brush as its background:
```
<Canvas x:Name="viewfinderCanvas" Width="480" Height="800" >
<Canvas.Background>
<VideoBrush x:Name="viewfinderBrush" Stretch="None" />
</Canvas.Background>
</Canvas>
```
You must use **Stretch="None"**, or XAML will apply a scale to the video brush. Now you need the viewfinderBrush transformation to display it correctly.
By default, the canvas center corresponds to the preview picture center, so we need to compute an angle and a scale factor, and use the canvas center as the transform center.
To compute the angle you need:
- the sensor orientation relative to the device Portrait orientation.
This value is given by the
[PhotoCaptureDevice.SensorRotationInDegrees](http://msdn.microsoft.com/EN-US/library/windowsphone/develop/windows.phone.media.capture.photocapturedevice.sensorrotationindegrees%28v=vs.105%29.aspx) property.
- your page orientation relative to the device Portrait orientation.
Code:
```
double ComputeAngle(PageOrientation orientation)
{
if ((orientation & PageOrientation.Portrait) == PageOrientation.Portrait)
{
return m_captureDevice.SensorRotationInDegrees;
}
else if ((orientation & PageOrientation.LandscapeLeft) == PageOrientation.LandscapeLeft)
{
return m_captureDevice.SensorRotationInDegrees - 90;
}
else //PageOrientation.LandscapeRight
{
return m_captureDevice.SensorRotationInDegrees + 90;
}
}
```
The scale is simply the factor between the canvas dimensions and the preview picture dimensions:
```
//orient the preview picture size using the computed angle.
var tmp = new CompositeTransform(){Rotation = ComputeAngle(currentPageOrientation)};
var previewSize = tmp.TransformBounds (new Rect(new Point(), new Size(m_captureDevice.PreviewResolution.Width, m_captureDevice.PreviewResolution.Height))).Size;
double s1 = viewfinderCanvas.Width/ (double)previewSize.Width;
double s2 = viewfinderCanvas.Height/ (double)previewSize.Height;
```
- If you use the maximum factor, you make a fit-out => scale = Math.Max(s1, s2)
- If you use the minimum factor, you make a fit-in => scale = Math.Min(s1, s2)
The front and the back camera point in opposite directions, so to display the front camera correctly you need to apply a mirror in one dimension. On WP8 the sensor orientation is generally 90°, so the Y dimensions are opposite.
```
if (sensorLocation == CameraSensorLocation.Back)
{
viewfinderBrush.Transform = new CompositeTransform() {
Rotation = ComputeAngle(currentPageOrientation),
CenterX = viewfinderCanvas.Width / 2,
CenterY = viewfinderCanvas.Height / 2,
ScaleX = scale,
ScaleY = scale };
}
else
{
viewfinderBrush.Transform = new CompositeTransform() {
Rotation = ComputeAngle(currentPageOrientation),
CenterX = viewfinderCanvas.Width / 2,
CenterY = viewfinderCanvas.Height / 2,
ScaleX = scale,
ScaleY = -1 * scale };//Y mirror
}
```
You can find the last version of the sample on github : <https://github.com/yan-verdavaine/wp8-sample/tree/master/Imaging/ViewFinder>
|
OpenCV grooving detection
I have pictures of a surface with many grooves. In most cases the edges of the grooving form parallel lines, so Canny and Hough transformation work very well to detect the lines and to do some characterization. However, at several places the grooving is damaged and the edges aren't parallel anymore.
I am looking for an easy way to check whether a certain edge is a straight line or whether there are any gaps or deviations from a straight line. I am thinking of something like the R-squared parameter in linear regression, but here I need a parameter which is more location-dependent. Do you have any other thoughts on how to characterize the edges?
I attached a picture of the grooving after canny edge detection. Here, the edges are straight lines and the grooving is fine. Unfortunately I don't have access to pictures with damaged grooving at the moment. However, in pictures with damaged grooving, the lines would have major gaps (at least 10% of the picture's size) or wouldn't be parallel.
| **The core of the technique** I'm sharing below uses [`cv::HoughLinesP()`](http://docs.opencv.org/modules/imgproc/doc/feature_detection.html?highlight=houghlinesp#houghlinesp) to find line segments in a grayscale image.
The application starts by loading the input image as grayscale. Then it performs a basic pre-processing operation to enhance certain characteristics of the image, aiming to improve the detection performed by `cv::HoughLinesP()`:
```
#include <cv.h>
#include <highgui.h>
#include <algorithm>
// Custom sort method adapted from: http://stackoverflow.com/a/328959/176769
// This is used later by std::sort()
struct sort_by_y_coord
{
bool operator ()(cv::Vec4i const& a, cv::Vec4i const& b) const
{
if (a[1] < b[1]) return true;
if (a[1] > b[1]) return false;
return false;
}
};
int main()
{
/* Load input image as grayscale */
cv::Mat src = cv::imread("13531682.jpg", 0);
/* Pre-process the image to enhance the characteristics we are interested at */
medianBlur(src, src, 5);
int erosion_size = 2;
cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS,
cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1),
cv::Point(erosion_size, erosion_size) );
cv::erode(src, src, element);
cv::dilate(src, src, element);
/* Identify all the lines in the image */
cv::Size size = src.size();
std::vector<cv::Vec4i> total_lines;
cv::HoughLinesP(src, total_lines, 1, CV_PI/180, 100, size.width / 2.f, 20);
int n_lines = total_lines.size();
std::cout << "* Total lines: "<< n_lines << std::endl;
cv::Mat disp_lines(size, CV_8UC1, cv::Scalar(0, 0, 0));
// For debugging purposes, the block below writes all the lines into disp_lines
// for (unsigned i = 0; i < n_lines; ++i)
// {
// cv::line(disp_lines,
// cv::Point(total_lines[i][0], total_lines[i][1]),
// cv::Point(total_lines[i][2], total_lines[i][3]),
// cv::Scalar(255, 0 ,0));
// }
// cv::imwrite("total_lines.png", disp_lines);
```
At this point, all the line segments detected can be written to a file for visualization purposes:

At this point we need to sort our vector of lines because `cv::HoughLinesP()` doesn't do that, and we need the vector sorted to be able to identify groups of lines, by measuring and comparing the distance between the lines:
```
/* Sort lines according to their Y coordinate.
The line closest to Y == 0 is at the first position of the vector.
*/
sort(total_lines.begin(), total_lines.end(), sort_by_y_coord());
/* Separate them according to their (visible) groups */
// Figure out the number of groups by distance between lines
std::vector<int> idx_of_groups; // stores the index position where a new group starts
idx_of_groups.push_back(0); // the first line indicates the start of the first group
// The loop jumps over the first line, since it was already added as a group
int y_dist = 35; // the next groups are identified by a minimum of 35 pixels of distance
for (unsigned i = 1; i < n_lines; i++)
{
if ((total_lines[i][1] - total_lines[i-1][1]) >= y_dist)
{
// current index marks the position of a new group
idx_of_groups.push_back(i);
std::cout << "* New group located at line #"<< i << std::endl;
}
}
int n_groups = idx_of_groups.size();
std::cout << "* Total groups identified: "<< n_groups << std::endl;
```
The last part of the code above simply stores the index positions of the vector of lines in a new `vector<int>` so we know which lines starts a new group.
For instance, assume that the indexes stored in the new vector are: `0 4 8 12`. Remember: they define the *start* of each group. That means that the lines marking the beginnings and ends of the groups are: `0, 4-1, 4, 8-1, 8, 12-1, 12`.
Knowing that, we write the following code:
```
/* Mark the beginning and end of each group */
for (unsigned i = 0; i < n_groups; i++)
{
// To do this, we discard the X coordinates of the 2 points from the line,
// so we can draw a line from X=0 to X=size.width
// beginning
cv::line(disp_lines,
cv::Point(0, total_lines[ idx_of_groups[i] ][1]),
cv::Point(size.width, total_lines[ idx_of_groups[i] ][1]),
cv::Scalar(255, 0 ,0));
// end
if (i != n_groups-1)
{
cv::line(disp_lines,
cv::Point(0, total_lines[ idx_of_groups[i+1]-1 ][1]),
cv::Point(size.width, total_lines[ idx_of_groups[i+1]-1 ][1]),
cv::Scalar(255, 0 ,0));
}
}
// mark the end position of the last group (not done by the loop above)
cv::line(disp_lines,
cv::Point(0, total_lines[n_lines-1][1]),
cv::Point(size.width, total_lines[n_lines-1][1]),
cv::Scalar(255, 0 ,0));
/* Save the output image and display it on the screen */
cv::imwrite("groups.png", disp_lines);
cv::imshow("groove", disp_lines);
cv::waitKey(0);
cv::destroyWindow("groove");
return 0;
}
```
And the resulting image is:

**It's not a perfect match**, but it's close. With a little bit of tweaks here and there this approach can get much better. I would start by writing a smarter logic for `sort_by_y_coord`, which should discard lines that have small distances between the X coordinates (i.e. small line segments), and also lines that are not perfectly aligned on the X axis (like the one from the second group in the output image). This suggestion makes much more sense after you take the time to evaluate the first image generated by the application.
Good luck.
|
How to find which points intersect with a polygon in geopandas?
I've been trying to use the "intersects" feature on a geodataframe, looking to see which points lie inside a polygon. However, only the first feature in the frame will return as true. What am I doing wrong?
```
from geopandas.geoseries import *
p1 = Point(.5,.5)
p2 = Point(.5,1)
p3 = Point(1,1)
g1 = GeoSeries([p1,p2,p3])
g2 = GeoSeries([p2,p3])
g = GeoSeries([Polygon([(0,0), (0,2), (2,2), (2,0)])])
g1.intersects(g) # Flags the first point as inside, even though all are.
g2.intersects(g) # The second point gets picked up as inside (but not 3rd)
```
| According to the [documentation](http://geopandas.org/data_structures.html#geoseries):
>
> Binary operations can be applied between two GeoSeries, in which case
> the operation is carried out elementwise. The two series will be
> aligned by matching indices.
>
>
>
Your examples are not supposed to work. So if you want to test whether each point is inside a single polygon, you will have to do:
```
poly = GeoSeries(Polygon([(0,0), (0,2), (2,2), (2,0)]))
g1.intersects(poly.iloc[0])
```
Outputs:
```
0 True
1 True
2 True
dtype: bool
```
Or if you want to test for all geometries in a specific GeoSeries:
```
g1.intersects(poly.unary_union)
```
Geopandas relies on Shapely for the geometrical work. It is sometimes useful (and easier to read) to use it directly. The following code also works as advertised:
```
from shapely.geometry import *
p1 = Point(.5,.5)
p2 = Point(.5,1)
p3 = Point(1,1)
poly = Polygon([(0,0), (0,2), (2,2), (2,0)])
for p in [p1, p2, p3]:
    print(poly.intersects(p))
```
You might also have a look at
[How to deal with rounding errors in Shapely](https://stackoverflow.com/questions/28028910/how-to-deal-with-rounding-errors-in-shapely) for issues that may arise with points on boundaries.
|
Error 413 payload too large when upload image
I'm trying to upload an image from my local machine by using **base64** to do image detection.
And everything works fine in localhost and postman.
But after deploying, I got a CORS error.
I've already got cors middleware in `server.js`
```
const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");
const app = express();
app.use(cors());
app.use(bodyParser.json({ limit: "10000kb", extended: true }));
app.use(bodyParser.urlencoded({ limit: "10000kb", extended: true }));
```
The cors middleware works fine when fetching an image with a URL,
but when I try to **upload an image from local** by using **base64**, the console shows:
```
No 'Access-Control-Allow-Origin' header is present on the requested resource.
```
Here are the solutions I've tried:
1. cors-anywhere
```
App.js
const proxyUrl = 'https://cors-anywhere.herokuapp.com/';
fetch(proxyUrl + API_CALL.IMAGE_URL, {
method: 'post',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({
inputLink: inputLink,
inputMethod: inputMethod
}),
credentials: 'include'
})
```
It then shows `413 payload too large`.
Since there's no error when testing on localhost and in Postman, I found some articles saying it might still be a CORS error.
2. CORS preflight
server.js
```
const corsOptions = {
origin: 'https://front-end-url/',
methods: 'GET, POST, PUT',
credentials: true,
allowedHeaders: 'Content-Type,Authorization',
exposedHeaders: 'Content-Range,X-Content- Range'
};
app.options('/imageUrl', cors(corsOptions));
```
It shows error:
```
CORS policy: Response to preflight request doesn't pass access control check:
The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*'
when the request's credentials mode is 'include'
```
3. After I remove `credentials: 'include'`, it shows `413 payload too large` again.
I'm so confused... Does anyone know how to fix it? Thank you.
| Finally fixed the error by placing `express.json()` **AFTER** `bodyParser`, like this:
```
app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: '50mb', extended: true}));
app.use(express.json());
```
If `express.json()` runs first, Express would set the global limit to 1mb.
For the next person that needs more detail:
[Error: request entity too large](https://stackoverflow.com/questions/19917401/error-request-entity-too-large)
And for the person who needs to set Nginx config file:
[Increasing client\_max\_body\_size in Nginx conf on AWS Elastic Beanstalk](https://stackoverflow.com/questions/18908426/increasing-client-max-body-size-in-nginx-conf-on-aws-elastic-beanstalk/18951706#18951706)
|
Dataframe-Normalize each row by row's maximum
Is there any convenient way to normalize each row by the row's maximum (i.e. divide each row by its max)?
eg:
```
df=
A B C
2 1 1
1 4 1
0 2 1
return:
A B C
1 0.5 0.5
0.25 1 0.25
0 1 0.5
```
| You can use `apply` and apply a lambda row-wise:
```
In [199]:
df.apply(lambda x: x/x.max(), axis=1)
Out[199]:
A B C
0 1.00 0.5 0.50
1 0.25 1.0 0.25
2 0.00 1.0 0.50
```
You can also use `div`:
```
In [205]:
df.div(df.max(axis=1), axis=0)
Out[205]:
A B C
0 1.00 0.5 0.50
1 0.25 1.0 0.25
2 0.00 1.0 0.50
```
|
Creating variogram for a 10,000 set data
I am trying to create a variogram for a data set with 10,000 points. However, if I try to actually calculate the distance of each point to every other, then I will have 10,000\*9999/2 pairs. Out of these pairs of distances I can round off the distance values to, let's say, 2 decimal places. Then I can find the distances having equal values. Then I can take the average of the variances having equal distances to find the variance for that particular distance. Then I can get the experimental variogram.
This process will definitely be very slow. Is there any efficient way? I mean, instead of creating the variogram from all the observation points, I could take only a subset of them surrounding the point where I want to interpolate the value, create a variogram out of this subset, and then take the k neighbors of the destination point and use this subset variogram to interpolate. Would this be more efficient and correct?
| The [`geoR` package](http://cran.r-project.org/web/packages/geoR/index.html) will do this efficiently:
```
library(geoR)
n <- 10^4 # Number of points
v <- list(coords=matrix(runif(2*n),ncol=2), data=rnorm(n)) # Random data
system.time(v.vario <- variog(v)) # Compute a variogram object
```
Elapsed time on this machine: 5.21 seconds.
For more points, you can subsample the data. (A stratified procedure that obtains collections of close-by points is better than a simple random sample, because accurately characterizing the variogram near the origin is important.) It's better, though, to partition the study area into "tiles" or subregions and evaluate variograms within those subregions: this is a great way to assess the stationarity hypothesis.
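For instance, a minimal sketch (building on the random `v` object above) that splits the unit square into a 2 x 2 grid of tiles and computes a variogram within each tile:
```
tiles <- interaction(cut(v$coords[, 1], 2), cut(v$coords[, 2], 2))
tile.varios <- lapply(levels(tiles), function(t) {
  keep <- tiles == t
  variog(list(coords = v$coords[keep, ], data = v$data[keep]))
})
```
Comparing the resulting variograms across tiles gives a quick visual check of stationarity.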
|
Array Sort by time hh:mm:ss
I am trying to sort an array by time in hh:mm:ss format, but it does not get sorted. I have used Moment.js and a Map, as shown below. How can I sort the array?
I have an array of objects:
```
let elements =[
{
"id": 1,
"date": "02:01:02"
},
{
"id": 2,
"date": "01:01:01"
},
{
"id": 3,
"date": "03:01:01"
},
{
"id": 4,
"date": "04:01:01"
}
];
let parsedDates = new Map(
elements.map(e =>[["id", "date"],[e.id, moment(e.date, 'hh:mm:ss')]])
);
elements.sort((a, b) => parsedDates.get(a) - parsedDates.get(b));
console.log(elements.map(e => ({ id: e.id, date: e.date })));
```
| You can lexicographically sort the times using [`string.localeCompare()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/localeCompare), since zero-padded hh:mm:ss strings compare in chronological order.
```
let times = [ { "id": 1, "date": "02:01:02" }, { "id": 2, "date": "01:01:01" }, { "id": 3, "date": "03:01:01" }, { "id": 4, "date": "04:01:01" } ];
times.sort((a,b) => a.date.localeCompare(b.date));
console.log(times);
```
```
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
|
ReactJS: Fade in div and fade out div based on state
So, I am trying to fade in and fade out a set of inputs based on which button the user clicks. I tried using jQuery, but the div was fading in and fading out at the same speed...
I am using es6 classes and react.
What I want is the user to **press a button and the inputs fadeIn. Another button, the inputs fadeOut**. I don't mind using jQuery, but I would like to understand how to do this with react.
```
renderInputs() {
if (this.state.addType === "image") {
return (
<div className="addContainer">
<input type="text" className="form-control" />
</div>
)
} else {
return (
other inputs
)
}
}
render() {
return (
<CSSTransitionGroup
transitionName="fadeInput"
transitionEnterTimeout={500}
transitionLeaveTimeout={300}>
{this.renderInputs()} // this doesn't work but I want this content to be conditional.
</CSSTransitionGroup>
)
}
// SCSS
.fadeInput-enter {
opacity: 0.01;
}
.fadeInput-enter.fadeInput-enter-active {
opacity: 1;
transition: opacity 500ms ease-in;
}
.fadeInput-leave {
opacity: 1;
}
.fadeInput-leave.fadeInput-leave-active {
opacity: 0.01;
transition: opacity 300ms ease-in;
}
```
| Just use a conditional `class` and CSS.
Have a `state` variable like `visible`.
```
this.state = {
visible:false
}
```
And for the other inputs do something like
```
<input className={this.state.visible?'fadeIn':'fadeOut'} />
```
So depending upon the `state.visible` the input will have a `class` of either `fadeIn` or `fadeOut`.
And then just use simple CSS
```
.fadeOut{
opacity:0;
width:0;
height:0;
transition: width 0.5s 0.5s, height 0.5s 0.5s, opacity 0.5s;
}
.fadeIn{
opacity:1;
width:100px;
height:100px;
transition: width 0.5s, height 0.5s, opacity 0.5s 0.5s;
}
```
So every time `state.visible` changes, the `class` changes and the `transition` takes place. The `transition` property in CSS is basically all the transitions separated by commas. Within each transition, the first argument is the property to be modified (say `height`, `width`, etc.), the second is the `transition-duration`, i.e. the time taken for the transition, and the third (optional) is the `transition-delay`, i.e. how long after the transition has been initiated the transition for that particular property starts.
So when `this.state.visible` becomes `true`, the `.fadeIn` class is attached to the element. The `transition` has `height` and `width` taking 0.5s each, so the element takes 0.5s to grow, and after that finishes the `opacity` transition (which has a delay of 0.5s) triggers and takes a further 0.5s to reach `opacity` 1. For hiding, it's the reverse.
Remember to have the `OnClick` event on the button handle the changing of `this.state.visible`.
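For example, a minimal sketch of such click handlers (button labels and placement are hypothetical):
```
<button onClick={() => this.setState({ visible: true })}>Fade in</button>
<button onClick={() => this.setState({ visible: false })}>Fade out</button>
```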
|
Sort pandas dataframe by customize way
I have tried a lot to sort a DataFrame column my own way, but could not manage to do it correctly. So please refer to the given code and let me know what additional syntax is needed to do the job.
```
df = pd.DataFrame({'TC': {0: '1-1.1', 1: '1-1.2', 2: '1-10.1', 3: '1-10.2', 4: '1-2.1', 5: '1-2.1', 6: '1-2.2', 7: '1-20.1', 8: '1-20.2', 9: '1-3.1'}, 'Case': {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'J'}})
df.sort_values(["TC"], ascending=[True])
print (df)
```
This code does not give desire output. I need the Dataframe sorted as per below.
[](https://i.stack.imgur.com/uKf6k.png)
| You can extract the numbers and form a `tuple`, then sort that `series` and use its `index` to `reindex` your original DataFrame.
```
>>> df.reindex(
df['TC'].str.extractall('(\d+)')
.unstack().astype(int)
.agg(tuple, 1).sort_values()
.index
)
TC Case
0 1-1.1 A
1 1-1.2 B
4 1-2.1 E
5 1-2.1 F
6 1-2.2 G
9 1-3.1 J
2 1-10.1 C
3 1-10.2 D
7 1-20.1 H
8 1-20.2 I
```
You can also use the `key` argument in `sort_values`:
```
>>> df.sort_values('TC',
key=lambda ser:
ser.str.extractall('(\d+)')
.unstack()
.astype(int).agg(tuple, 1)
)
```
If there are always three parts to an `ID` you can use `Series.str.split` on `non-numeric` characters with `expand=True`, instead of `extractall`, hence removing the need to use `unstack`:
```
>>> df.sort_values('TC',
key=lambda series:
series.str.split(r'\D+', expand=True)
.astype(int).agg(tuple,1)
)
```
Timings:
```
>>> %timeit df.reindex(df['TC'].str.extractall('(\d+)').unstack().astype(int).agg(tuple, 1).sort_values().index)
2.95 ms ± 40.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit df.sort_values('TC', key=lambda ser: ser.str.extractall('(\d+)').unstack().astype(int).agg(tuple, 1))
2.91 ms ± 32.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit df.sort_values('TC', key=lambda series:series.str.split(r'\D+', expand=True).astype(int).agg(tuple,1))
1.6 ms ± 5.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
|
Overlapping instances for between Double and Integral types
I have the following type class and instances:
```
class StatType a where
toDouble :: a -> Double
instance StatType Double where
toDouble = id
instance Integral a => StatType a where
toDouble = fromIntegral
avg :: StatType a => [a] -> Double
avg = undefined
```
But then the expression
```
*Example> avg ([1,2,3,4] :: [Double])
```
reports a type error regarding overlapped instances
```
Overlapping instances for StatType Double
arising from a use of `avg'
Matching instances:
instance StatType Double -- Defined at Example.hs:61:10
instance Integral a => StatType a -- Defined at Example.hs:63:10
```
The type system cannot select between these two instances. However, `Double` is not an `Integral` type.
```
*Example> :i Double
data Double = GHC.Types.D# GHC.Prim.Double#
-- Defined in `GHC.Types'
instance StatType Double -- Defined at Example.hs:
instance Enum Double -- Defined in `GHC.Float'
instance Eq Double -- Defined in `GHC.Classes'
instance Floating Double -- Defined in `GHC.Float'
instance Fractional Double -- Defined in `GHC.Float'
instance Num Double -- Defined in `GHC.Float'
instance Ord Double -- Defined in `GHC.Classes'
instance Read Double -- Defined in `GHC.Read'
instance Real Double -- Defined in `GHC.Float'
instance RealFloat Double -- Defined in `GHC.Float'
instance RealFrac Double -- Defined in `GHC.Float'
instance Show Double -- Defined in `GHC.Float'
```
And I don't think `Integral` implied by one of these or anything?
`fromIntegral (3 :: Double)` raises a type error since `Double` isn't an `Integral` instance.
*Why are these overlapped?*
Thanks!
| Quite simply, this is just how GHC [works](http://www.haskell.org/ghc/docs/latest/html/users_guide/type-class-extensions.html#instance-resolution)
>
> When GHC tries to resolve, say, the constraint C Int Bool, it tries to
> match every instance declaration against the constraint, by
> instantiating the head of the instance declaration. Consider these
> declarations:
>
>
>
> ```
> instance context1 => C Int a where ... -- (A)
> instance context2 => C a Bool where ... -- (B)
>
> ```
>
> GHC's default behaviour is that exactly one instance must match the
> constraint it is trying to resolve. For example, the constraint C Int
> Bool matches instances (A) and (B), and hence would be rejected; while
> C Int Char matches only (A) and hence (A) is chosen.
>
>
> Notice that
>
>
> - **When matching, GHC takes no account of the context of the instance declaration (context1 etc).**
>
>
>
So ghc doesn't see
```
instance StatType Double
instance Integral a => StatType a
```
It sees
```
instance StatType Double
instance StatType a
```
Which are evidently overlapping.
The reason this works the way it does is that type classes are open. There is no instance for `Integral Double` *now* but someone may import your library and declare one, at which point the instances would be overlapping even if the context was checked. Even worse, there would be no sensible way to prefer one over the other.
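If you do want GHC to prefer the more specific instance anyway, the usual (if often discouraged) escape hatch is the overlapping-instances machinery. A sketch for older GHCs (newer GHCs favor per-instance `{-# OVERLAPPABLE #-}` pragmas instead):
```
{-# LANGUAGE FlexibleInstances, UndecidableInstances, OverlappingInstances #-}

class StatType a where
  toDouble :: a -> Double

-- The more specific instance wins for Double...
instance StatType Double where
  toDouble = id

-- ...while every other type falls through to the generic instance.
instance Integral a => StatType a where
  toDouble = fromIntegral
```
With this, `avg ([1,2,3,4] :: [Double])` resolves to the `Double` instance, at the cost of the openness caveats described above.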
|
Best way to manage user/group object permissions with Symfony2
I'd like to hear some thoughts on the best way to optimize our schema to achieve the following.
We have a number of objects/db entries (events, venues, etc) some of which have children objects (meaning the same permissions apply - images, metas, etc)
Users can belong to groups, so parent objects such as events and venues can be editable/viewable by everyone, by a group only, or by just one user.
Currently we have a user, usergroup and group table to manage users and groups.
Each parent object such as venues has a column for user\_id and group\_id.
Works fine (in symfony 1.4) but it's messy - every query for anything has to do complex joins to get possible groups etc... We'd like to find a simpler way.
I was really excited about the Sf2 ACL component but I am being told over and over that I should not use it to find objects that a user can manage - rather that I should use ACL to find out if a user is allowed to manage his own objects (doesn't seem very useful but whatever).
All alternative attempts online that I found to do this say to pull all objects from db then filter by ACL - it's cute for a mom and pop site - not gonna happen with a million objects.
So... I would love to hear ideas as to how we could do this - we are also open to leaving symfony for something that has a scaleable ACL solution but have not found anything so far (php or ruby) so open to that as well though we would love to continue using Sf. Note that we intend to use MongoDB in case that matters.
| From how I understand it, the ACL is used to give access to a specific object to a specific person for special scenarios. What you are describing is more generic, but it just deviates from what Symfony2 outlines for security (this person has an "admin" role, but only for the objects contained in a particular group).
ACLs should not be used to store a bunch of stuff, as checking it can get expensive if it gets too large. So, throwing a bunch of stuff in here by default when new users are added, or even when new objects are added under a group (if using the ACL, you would have to add an entry to each person in the group whenever you create a new object), is going to be taxing on performance after a while...
I am currently researching the possibility of using Symfony2 for a web app, but I am hitting a wall with this security stuff too, as we have a similar need. I'm no expert on Symfony2, but from what I have looked in to, you might have a few options:
1. Create a Voter to handle this. Voters allow you to check authorization tokens and return whether access is granted or denied based on how you process it. So, you could make a custom Voter that checks a user's group and tries to match it up with the group the object is under. If so, return ACCESS\_GRANTED, otherwise ACCESS\_DENIED, or ACCESS\_ABSTAIN if the Voter is not valid for the current check. EDIT: Here is a link to the Symfony2 cookbook for Voters: <http://symfony.com/doc/current/cookbook/security/voters.html>
2. Might also want to research the SecurityContext interface. This provides the "isGranted()" method that deals with determining access to objects. If Voters are not simply enough, you might have to go the route of creating a new SecurityContext class; I think that this would be a bit more involved though.
Like I said, I am no pro, and don't have a solution; these are just some directions I am researching to try to solve (what I feel is) a similar problem. Hope this helps somewhat.
|
AFP, SMB, NFS which is the best data transfer protocol?
I have a computer with large hard disks running Gentoo. I have to serve med/big files via a wired network to Apple devices (all of them running OS X).
Which protocol is best for the following needs?
1. Speed
2. Ease of use (by the clients and the server)
3. Less limited (max file size, limited charset for filenames)
4. Security
| You should find this article interesting:
[iSCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients](http://discussions.apple.com/thread.jspa?threadID=1819141).
It shows these test results:
>
> (in seconds)
>
> iSCSI 134.267530
>
> AFP 140.285572
>
> SMB 159.061026
>
> NFSv3 (w/o tuning) 477.432503
>
> NFSv3 (w/tuning) 293.994605
>
|
How to protect an object using a lock in Python?
I've come across functionality which required the following pattern:
```
from threading import Lock
the_list = []
the_list_lock = Lock()
```
and to use it:
```
with the_list_lock:
the_list.append("New Element")
```
Unfortunately, this does not require me to acquire the lock; I could just access the object directly. I would like some protection against that (I'm only human.) Is there a standard way of doing this? My own approach is to create a `HidingLock` class that can be used like this:
```
the_list = HidingLock([])
with the_list as l:
l.append("New Element")
```
But it feels so basic that either it should exist in the standard library or it's a very unconventional way to use locks.
| I think the reason there's nothing in the standard library is that for it to be there, it would need to make cast-iron access guarantees. To provide anything less would give a **false sense of security** that could lead to just as many concurrency issues.
It's also nearly impossible to make these guarantees without making substantial performance sacrifices. As such, it is left up to the user to consider how they will manage concurrency issues. This is in line with one of Python's philosophies of "we're all consenting adults". That is, if you're writing a class, I think it's reasonable that you should know for which attributes you need to acquire a lock before accessing them. Or, if you're really that concerned, write a wrapper/proxy class that controls all access to the underlying object.
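As an illustration, a minimal sketch of such a wrapper, along the lines of the `HidingLock` from the question (this class is not in the standard library):
```
import threading

class HidingLock:
    """Hand out the wrapped object only while the lock is held."""
    def __init__(self, obj):
        self._obj = obj
        self._lock = threading.Lock()

    def __enter__(self):
        self._lock.acquire()
        return self._obj

    def __exit__(self, exc_type, exc_value, traceback):
        self._lock.release()

the_list = HidingLock([])
with the_list as l:
    l.append("New Element")
```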
Even with such a wrapper, with your example there are a number of ways in which the target object could accidentally escape. If the programmer isn't paying enough attention to the code they're writing/maintaining, then this `HidingLock` could provide that false sense of security. For instance:
```
with the_lock as obj:
pass
obj.func() # erroneous
with the_lock as obj:
return obj.func() # possibly erroneous
# What if the return value of `func' contains a self reference?
with the_lock as obj:
obj_copy = obj[:]
obj_copy[0] = 2 # erroneous?
```
This last one is particularly pernicious. Whether this code is thread safe depends not on the code within the with block, or even the code after the block. Instead, it is the implementation of the class of `obj` that will mean this code is thread safe or not. For instance, if `obj` is a `list` then this is safe as `obj[:]` creates a copy. However, if `obj` is a `numpy.ndarray` then `obj[:]` creates a view and so the operation is unsafe.
Actually, if the contents of `obj` were mutable then this could be unsafe as regardless (eg. `obj_copy[0].mutate()`).
|
Error: ANDROID\_HOME is not set and "android" command not in your PATH on OS X
I am getting this error in terminal when attempting to execute this command:
```
$ cordova platform add android
```
I read these answers here:
[Error: ANDROID\_HOME is not set and "android" command not in your PATH. You must fulfill at least one of these conditions](https://stackoverflow.com/questions/26701176/error-android-home-is-not-set-and-android-command-not-in-your-path-you-must)
[ANDROID\_HOME is not set and "android" command not in your PATH Phonegap](https://stackoverflow.com/questions/26216081/android-home-is-not-set-and-android-command-not-in-your-path-phonegap)
however I don't know where to place the code they provide.
I am running OS X.
| **Option 1**
For Windows:
Right-click on My Computer -> Properties -> Advanced system settings -> Environment Variables.
Now edit Path under System variables to include:
```
;/yourSdkHome/tools;/yourSdkHome/platform-tools
```
**Option 2**
On a Windows machine, open Command Prompt and enter each of these commands one by one:
```
set ANDROID_HOME=**C:\\android-sdk-windows**
set PATH=%PATH%;%ANDROID_HOME%\tools;%ANDROID_HOME%\platform-tools
```
On a Mac, open Terminal and enter these one by one:
```
export ANDROID_HOME=/<installation location>/android-sdk-macosx
export PATH=${PATH}:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools
```
---
Note - `C:\\android-sdk-windows` should be replaced by the path of Android SDK in your machine.
`<installation location>` should be replaced by the path of Android SDK in your machine.
**Example**
On a Mac, if the install location is `~/adt-bundle-mac-x86_64/sdk`,
the first line should be:
```
export ANDROID_HOME=~/adt-bundle-mac-x86_64/sdk
```
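To make the variables persist across terminal sessions on OS X, you can append the exports to `~/.bash_profile` (a sketch assuming the default bash shell and the example path above):
```
echo 'export ANDROID_HOME=~/adt-bundle-mac-x86_64/sdk' >> ~/.bash_profile
echo 'export PATH=${PATH}:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools' >> ~/.bash_profile
source ~/.bash_profile
```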
|
Is it safe to return a new object from static method?
Is it OK to have a class like this?
```
public class Weapon {
private string name;
private int might;
// etc...
private Weapon(String name, int might) {
this.name = name;
this.might = might;
}
public static final Weapon Alondite = new Weapon("Alondite", 16);
// 100++ weapon
}
```
Then, when referencing the weapon anywhere in the project as `Weapon.Alondite`, will this create a new object every time the static member is accessed?
Or should I do it like this, to ensure the object is only created once?
```
public class Weapon {
private String name;
private int might;
// etc...
private Weapon(String name, int might) {
this.name = name;
this.might = might;
}
private static Weapon mAlondite;
public static Weapon Alondite() {
//if (mAlondite == null) {
// mAlondite = new Weapon("Alondite", 16);
// return mAlondite;
//} else {
// return mAlondite;
//}
// EDIT: as suggested by everyone
if (mAlondite == null) {
mAlondite = new Weapon("Alondite", 16);
}
return mAlondite;
}
}
```
| No, your first code snippet will not create a new weapon every time you reference it.
The second example you posted is an example of the singleton pattern.
Based on your comment at the bottom of your first example, it seems to indicate that you will have over 100 instances of different weapons. This is a lot. You're right that you don't necessarily want to have more than one instance of the same weapon (one might for different reasons, but your examples seem to indicate singleton-ness).
You might want to consider what might happen if you needed to make another weapon type, or maybe a hundred, or a thousand. You might not want to have to recompile every single time. Otherwise, the purpose of the class might be misconstrued as an weapon type list, not the weapon itself.
Based on your example, it would seem that the 100+ weapon types only differ in name and might. If that's the case, being only different in the data, not behavior, I might consider reading them from a file or something else, and providing a means to look them up (similar to the repository pattern).
```
public class WeaponLookup {
private Map<String, Weapon> weapons = new HashMap<>();
public WeaponLookup(Map<String, Weapon> weapons) { // loaded from file or something
this.weapons = weapons;
}
public Weapon Lookup(String name) {
return weapons.get(name);
}
}
```
You mentioned in a comment that your weapons will have many attributes, not limited to two. I think this moves further in favor of using an external, non-compiled data file, as opposed to being compiled into the class.
You can compare something like this
```
{
"weapons" : [
{
"name" : "Almonite",
"might" : 16,
"def-bonus" : 7,
"spd-bonus" : 9
},
{
"name" : "Alphite",
"might" : 22,
"def-bonus" : 2,
"spd-bonus" : 3
},
{
"name" : "Betite",
"might" : 16,
"def-bonus" : 11,
"spd-bonus" : 4
},
{
"name" : "Gammite",
"might" : 12,
"def-bonus" : 7,
"spd-bonus" : 7
},
{
"name" : "Deltite",
"might" : 19,
"def-bonus" : 6,
"spd-bonus" : 5
},
{
"name" : "Thetite",
"might" : 11,
"def-bonus" : 2,
"spd-bonus" : 11
}
]
}
```
To something like
```
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
```
Looking at this example, can you tell which of those numbers belongs to which field; is the 11 of Betite spd-bonus or def-bonus? I certainly can't tell. I could go to the constructor and see what it is (it's def-bonus), but I don't really want to do that. If there are going to be 100+, it would be about this long:
```
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
public static final Weapon Deltite = new Weapon("Deltite", 19, 6, 5);
public static final Weapon Thetite = new Weapon("Thetite", 11, 2, 11);
public static final Weapon Almonite = new Weapon("Almonite", 16, 7, 9);
public static final Weapon Alphite = new Weapon("Alphite", 22, 2, 3);
public static final Weapon Betite = new Weapon("Betite", 16, 11, 4);
public static final Weapon Gammite = new Weapon("Gammite", 12, 7, 7);
```
Do you want to release a new version of your program every time you make a change that doesn't really affect functionality? I don't. I'd much rather just have some sort of update checker that downloads the latest version of the data file. Functionality never changes, but suddenly you can add thousands more weapons. Or even removing weapons; if you add a weapon that ends up being more over-powered than anticipated, you can easily just remove the entry from the file; as opposed to building your code again, testing it, and making it available for your clients to download.
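As a rough sketch of loading such a file (assuming the Gson library is on the classpath; the `WeaponLoader` and `WeaponData` helper types, the file path, and the `@SerializedName` mappings mirror the hypothetical JSON above):
```
import com.google.gson.Gson;
import com.google.gson.annotations.SerializedName;
import java.io.FileReader;
import java.io.Reader;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WeaponLoader {
    static class WeaponData {
        String name;
        int might;
        @SerializedName("def-bonus") int defBonus;
        @SerializedName("spd-bonus") int spdBonus;
    }

    static class WeaponFile {
        List<WeaponData> weapons;
    }

    // Parse the JSON file and key the weapons by name for lookup.
    public static Map<String, WeaponData> load(String path) throws Exception {
        try (Reader reader = new FileReader(path)) {
            WeaponFile parsed = new Gson().fromJson(reader, WeaponFile.class);
            Map<String, WeaponData> byName = new HashMap<>();
            for (WeaponData w : parsed.weapons) {
                byName.put(w.name, w);
            }
            return byName;
        }
    }
}
```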
|
What is the difference between JAVA\_HOME and update-alternatives?
I have been trying to figure out the best way to run OpenJDK Java Runtime as default Java for my Fedora box and use Oracle JDK 6 for Android development namely for running Android SDK Manager, Android Studio and Eclipse from Android Bundle.
I installed OpenJDK Java Runtime from the Fedora repository which has setup alternatives as follow.
```
[donnie@fedora ~]$ alternatives --list | grep java
jre_openjdk auto /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.5.1.fc20.x86_64/jre
jre_1.7.0 auto /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.5.1.fc20.x86_64/jre
java auto /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.5.1.fc20.x86_64/jre/bin/java
libjavaplugin.so.x86_64 auto /usr/lib64/IcedTeaPlugin.so
```
I have installed the Oracle JDK 6 using the rpm provided by Oracle.
I could make Android Bundle and Studio make use of JAVA\_HOME to run under Oracle JDK by sticking following in `.bashrc`.
```
export JAVA_HOME=/usr/java/jdk1.6.0_45/
export PATH=$JAVA_HOME/bin:$PATH
```
I noticed that Chrome still uses OpenJDK (as I still need to link the plugin).
What is the difference between JAVA\_HOME and using alternatives?
| ### Alternatives
Alternatives is a tool that will manage the locations of the installed software using links under the control of the `alternatives` tool.
These links are ultimately managed under `/etc/alternatives` with intermediate links created under a directory in `$PATH`, typically `/usr/bin`.
### Example
```
$ ls -l /usr/bin/java
lrwxrwxrwx. 1 root root 22 Feb 24 17:36 /usr/bin/java -> /etc/alternatives/java
$ ls -l /etc/alternatives/java
lrwxrwxrwx. 1 root root 73 Feb 24 17:36 /etc/alternatives/java -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.5.0.fc19.x86_64/jre/bin/java
```
### $JAVA\_HOME
`$JAVA_HOME` is where software can be told to look through the use of an environment variable. Adding it to the `$PATH` simply adds the executables present in `$JAVA_HOME/bin` to your `$PATH`. This is sometimes necessary for certain applications.
The two mechanisms are related but can be used together or independently of each other; which mechanism is preferable really depends on the Java application.
### What I do
I typically use `$JAVA_HOME` for some GUI applications, but in general use it only for server installations that make use of Jetty, Tomcat, or JBOSS, for example.
For these installations I'll still use `alternatives` to manage the Java installations prior to setting the `$JAVA_HOME`. I like doing it this way in cases where I might need to have multiple installations of Java.
Alternatives does allow you to have certain tools use one installation of Java while other tools use a completely different one.
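For example, a sketch of registering the Oracle JDK from the question with `alternatives` and then switching interactively (the path and priority are illustrative):
```
$ sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_45/bin/java 20000
$ sudo alternatives --config java
```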
### References
- [5 Minute Guide to using the alternatives Command on Fedora/CentOS/RHEL](http://www.lamolabs.org/blog/5562/5-minute-guide-to-using-the-alternatives-command-on-fedoracentosrhel/)
- [Using 'Alternatives' in Linux to use a different Java package](http://tech.lanesnotes.com/2008/03/using-alternatives-in-linux-to-use.html)
|
Shiny DT "rows\_selected" and "rows\_all" are deprecated?
I developed a Shiny app which uses "*input$TABLE\_NAME\_rows\_all*" and "*input$TABLE\_NAME\_rows\_selected*" intensively.
After upgrading the DT package these functions do not work. I have checked the DT documentation at <https://rstudio.github.io/DT/shiny.html> and it seems that these have been deprecated, with no clear replacement for the "rows\_all" option.
First question: is there any solution for this?
Second question: (if first fails) can I run an app with a certain DT version and other apps in same server with the latest DT package?
thanks
| From the link that you have provided, it doesn't look like those functions have been deprecated. Chapter 2.1.1 refers to input$tableId\_rows\_selected to get the selected rows. Chapter 2.2 DataTables Information refers to input$tableId\_rows\_all to get all the rows after filtering.
But take a look at this blog post about the release of the new version of DT, especially this part:
>
> For tables in the server-side processing mode (the default mode for tables in Shiny), the selected row indices are integers instead of characters (row names) now. This is for consistency with the client-side mode (which returns integer indices). In many cases, it does not make much difference if you index an R object with integers or names, and we hope this will not be a breaking change to your Shiny apps.
>
The blog post is [here](https://blog.rstudio.org/2016/08/09/a-new-version-of-dt-0-2-on-cran/)
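As a quick self-check, a minimal app sketch (the table id `tbl` is arbitrary) that prints both values so you can verify them on your DT version:
```
library(shiny)

ui <- fluidPage(
  DT::dataTableOutput("tbl"),
  verbatimTextOutput("sel")
)
server <- function(input, output) {
  output$tbl <- DT::renderDataTable(DT::datatable(mtcars, filter = "top"))
  output$sel <- renderPrint(list(
    selected = input$tbl_rows_selected,
    all = input$tbl_rows_all
  ))
}
shinyApp(ui, server)
```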
Hope this helps
|
CSS diagonal div background
For a website I'm developing I need to include some diagonal shaped borders to a `div`. These are the main examples which I need to recreate.
[double diagonal top border, triangle shaped](https://i.stack.imgur.com/d44D3.png)
I have been scouting the web on how to achieve this, and my first thought was to use `::before`. However, I can't get it to work without positioning it absolutely, which messes up the entire page.
This is my code I have tried to achieve something like this:
```
.slider-container{
background-color: $blue;
width: 100%;
overflow: hidden;
position: relative;
.col-md-3{
img{
padding: 40px;
width: 100%;
max-width: 400px;
margin: auto;
}
}
&::before {
background: red;
bottom: 100%;
content: '';
display: block;
height: 100%;
position: absolute;
right: 0;
transform-origin: 100% 100%;
transform: rotate(-15deg);
width: 150%;
}
}
```
```
<section id="slider">
<div class="container-fluid">
<div class="row slider-container">
<div class="col-md-3">
<p>image 1</p>
</div>
<div class="col-md-3">
<p>image 2</p>
</div>
<div class="col-md-3">
<p>image 3</p>
</div>
<div class="col-md-3">
<p>image 4</p>
</div>
</div>
</div>
</section>
```
Note: it won't work in here but this is the result I get [result](https://i.stack.imgur.com/B2pm8.png)
| With just css and a bit tweaking based on your divs size you could create something like this:
```
.myclass {
width: 100px;
height: 100px;
background: linear-gradient(45deg, black 0%, black 26%, transparent 26%), linear-gradient(-45deg, black 0%, black 27%, transparent 27%)
}
.myclass2 {
width: 100px;
height: 100px;
background: linear-gradient(-45deg, blue 0%, blue 27%, transparent 27%), linear-gradient(45deg, blue 0%, blue 26%, red 26%)
}
```
```
With transparency:
<div class="myclass">My content here</div>
<br/>
Not as easy with transparent:
<div class="myclass2">My content here</div>
```
Edit: Just tested this in Chrome; you might need vendor-prefixed linear-gradients for older/other browsers.
|
MySQL select rows with dates range for each days of the range
I have a table containing events with ranges like this :
```
id | title | start | end
1 | Lorem | 2019-11-02 | 2019-11-03
2 | Ipsum | 2019-11-02 | 2019-11-02
3 | Dolor | 2019-11-08 | 2019-11-10
4 | Amet | 2019-11-02 | 2019-11-04
```
I want to select all rows, but joined with the dates from the range, so I get one row per event for each day of its range.
The results should be from my example table :
```
date | id | title | start | end
2019-11-02 | 1 | Lorem | 2019-11-02 | 2019-11-03
2019-11-02 | 2 | Ipsum | 2019-11-02 | 2019-11-02
2019-11-02 | 4 | Amet | 2019-11-02 | 2019-11-04
2019-11-03 | 1 | Lorem | 2019-11-02 | 2019-11-03
2019-11-03 | 4 | Amet | 2019-11-02 | 2019-11-04
2019-11-04 | 4 | Amet | 2019-11-02 | 2019-11-04
2019-11-08 | 3 | Dolor | 2019-11-08 | 2019-11-10
2019-11-09 | 3 | Dolor | 2019-11-08 | 2019-11-10
2019-11-10 | 3 | Dolor | 2019-11-08 | 2019-11-10
```
I'm really stuck and don't know if it's even possible... Thanks for your help!
I'm on MySQL 5.7
| If you are running MySQ 8.0, this is a straight-forward recursive query:
```
with recursive cte as (
select start as date, id, title, start, end from mytable
union all
select date + interval 1 day, id, title, start, end from cte where date < end
)
select * from cte
order by date, id
```
**[Demo on DB Fiddle](https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=9b7c91ce44aec7c00d173214d575e666)**:
```
date | id | title | start | end
:--------- | -: | :---- | :--------- | :---------
2019-11-02 | 1 | Lorem | 2019-11-02 | 2019-11-03
2019-11-02 | 2 | Ipsum | 2019-11-02 | 2019-11-02
2019-11-02 | 4 | Amet | 2019-11-02 | 2019-11-04
2019-11-03 | 1 | Lorem | 2019-11-02 | 2019-11-03
2019-11-03 | 4 | Amet | 2019-11-02 | 2019-11-04
2019-11-04 | 4 | Amet | 2019-11-02 | 2019-11-04
2019-11-05 | 3 | Dolor | 2019-11-05 | 2019-11-08
2019-11-06 | 3 | Dolor | 2019-11-05 | 2019-11-08
2019-11-07 | 3 | Dolor | 2019-11-05 | 2019-11-08
2019-11-08 | 3 | Dolor | 2019-11-05 | 2019-11-08
```
In earlier versions, typical solutions include a table of numbers. Here is one that will handle spans of up to 4 days (you can extend the subquery for more):
```
select
t.start + interval x.n day date,
t.*
from
mytable t
inner join (
select 0 n union all select 1 union all select 2 union all select 3 union all select 4
) x on t.start + interval x.n day <= t.end
order by date, id
```
**[Demo on DB Fiddlde](https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=c71dbb0291799f3b1ad016a562a4159b)**
|
How can I use summarise\_each for correlations in dplyr?
A dataframe has 20 columns, and I want to find the correlation of column "a" with the rest of the columns.
How can I do it using dplyr?
I know how to do individual correlations such as this:
```
test %>%
dplyr::summarize(cor(a, b))
```
Or summarise\_each for mean.
But how can I do it for correlation?
Two use cases:
1. Where it calculates correlations with *every* other column in the dataframe.
2. Where it calculates correlations with columns I mention.
| The corrr package uses dplyr as a backend (and so easily works with it) to do just this via `correlate() %>% focus()`:
```
library(corrr)
mtcars %>%
correlate() %>%
focus(mpg)
#> # A tibble: 10 × 2
#> rowname mpg
#> <chr> <dbl>
#> 1 cyl -0.8521620
#> 2 disp -0.8475514
#> 3 hp -0.7761684
#> 4 drat 0.6811719
#> 5 wt -0.8676594
#> 6 qsec 0.4186840
#> 7 vs 0.6640389
#> 8 am 0.5998324
#> 9 gear 0.4802848
#> 10 carb -0.5509251
mtcars %>%
select(mpg, disp, hp) %>%
correlate() %>%
focus(mpg)
#> # A tibble: 2 × 2
#> rowname mpg
#> <chr> <dbl>
#> 1 disp -0.8475514
#> 2 hp -0.7761684
```
`focus()` acts like `dplyr::select()`, except that it excludes any remaining columns from the rows. If interested, take a look at `focus_.cor_df()` on GitHub [here](https://github.com/drsimonj/corrr/blob/master/R/cor_df.R).
|
Javascript: how is 'new Array(4)' different from Array.apply(null, {length: 4})?
I want to generate an empty array of a given length and then populate it with some numbers. One way to generate an array with four sequential numerical elements is:
```
var x = Array.apply(null, {length: 4}).map(function(item, index){return index;})
```
But when I saw `Array.apply(null, {length: 4})` I thought I could instead replace it with `new Array(4)` but that is not the case. Doing a quick test yields the following:
```
>> console.log((new Array(4)))
<< [ <4 empty items> ]
>> console.log(Array.apply(null, {length: 4}))
<< [ undefined, undefined, undefined, undefined ]
```
Which means I can `.map` the latter but not the former.
What is the difference then between `new Array` and `Array.apply(null, {})` which I thought were both creating an array object with given length?
| `apply` takes a context as the first parameter and an array-like list of arguments as the second. It then calls the function (`Array`) with the array-like's elements as arguments.
```
Array.apply(null, [1, 2])
// Same as
Array(1, 2)
// Or
[1, 2]
```
Now if you pass an object as an arraylike, it will still iterate it like this:
```
function apply(context, args) {
for(var i = 0; i < args.length; i++) {
/*...*/ args[i];
}
}
```
So if you pass `{ length: 4 }` it will iterate four times and take `undefined` as an argument, so it results in something like:
```
Array.apply(null, { length: 4 })
// Same as
Array(undefined, undefined, undefined, undefined)
```
Therefore the array's slots are not empty but set to `undefined`, and since `map` only skips empty slots, it will go over every entry of the second array.
By the way, the same can be achieved a bit more readable:
```
Array.from({length: 4 }, (_, i) => i)
// [0, 1, 2, 3]
```
|
Take a list of numbers and get the three numbers before python
I have this list of numbers
```
list1 = [15,27,48,70,83]
```
I want to have the output
```
list1 = [12,13,14,15,24,25,26,27,45,46,47,48,67,68,69,70,80,81,82,83]
```
I know I can do this for each number and then merge the list together and sort them
```
for i in range(len(list1)):
list1[i] = list1[i] - 1
```
Is there a faster way I can do this?
| Do:
```
list1 = [15,27,48,70,83]
result = [i for e in list1 for i in range(e - 3, e + 1)]
print(result)
```
**Output**
```
[12, 13, 14, 15, 24, 25, 26, 27, 45, 46, 47, 48, 67, 68, 69, 70, 80, 81, 82, 83]
```
The above [list comprehension](https://docs.python.org/3/tutorial/datastructures.html) is equivalent to the following nested for-loops:
```
result = []
for e in list1:
for i in range(e - 3, e + 1):
result.append(i)
```
You may face some problems if `list1` is not sorted. The good news is you don't need to sort it; use [`heapq.merge`](https://docs.python.org/3/library/heapq.html#heapq.merge):
```
from heapq import merge
list1 = [15, 70, 83, 27, 48] # not sorted
result = list(merge(*[range(e - 3, e + 1) for e in list1]))
print(result)
```
Using the above approach will keep the overall complexity [linear](https://en.wikipedia.org/wiki/Time_complexity#Linear_time).
|
Implementing a CATransition push with CAAnimation
How do you implement `kCATransitionPush` using `CAAnimation` subclasses in iOS?
```
CAAnimation *animation;
// How do you create an animation that does the same than:
// CATransition *animation = [CATransition animation];
// [animation setType:kCATransitionPush];
[self.view.layer addAnimation:animation forKey:nil];
[UIView beginAnimations:nil context:nil];
[UIView setAnimationDuration:1];
[self.view addSubview:change];
[UIView commitAnimations];
```
I'm aware that `UIView` animations can [also](https://stackoverflow.com/questions/3505413/iphone-how-to-do-kcatransitionpush-without-any-fade) be used, but it would help me to understand **Core Animation** better if I could implement a `kCATransitionPush` transition from the ground-up.
| In order to execute the animations simultaneously over the two layers, you must add the appropriate CAAnimationGroup to each layer.
```
[nextView.layer addAnimation:nextViewAnimation forKey:nil];
[currentView.layer addAnimation:currentViewAnimation forKey:nil];
```
nextViewAnimation would be:
```
CAAnimationGroup *nextViewAnimation = [CAAnimationGroup animation];
NSMutableArray *nextAnimations = [NSMutableArray array];
[nextAnimations addObject:[self opacityAnimation:YES]]; // fade in
CGPoint fromPoint = CGPointMake(forward ? nextView.center.x + nextView.frame.size.width : nextView.center.x - nextView.frame.size.width, nextView.center.y);
[nextAnimations addObject:[self positionAnimationFromPoint:fromPoint toPoint:nextView.center]]; // translation in
nextViewAnimation.animations = nextAnimations;
```
and currentViewAnimation:
```
CAAnimationGroup *currentViewAnimation = [CAAnimationGroup animation];
NSMutableArray *currentAnimations = [NSMutableArray array];
[currentAnimations addObject:[self opacityAnimation:NO]]; // fade out
CGPoint toPoint = CGPointMake(forward ? currentView.center.x - currentView.frame.size.width : currentView.center.x + currentView.frame.size.width, currentView.center.y);
[currentAnimations addObject:[self positionAnimationFromPoint:currentView.center toPoint:toPoint]]; // translation out
currentViewAnimation.animations = currentAnimations;
```
These methods create the basic animations:
```
- (CABasicAnimation *)opacityAnimation:(BOOL)fadeIn {
CABasicAnimation *a = [CABasicAnimation animationWithKeyPath:@"opacity"];
a.fromValue = [NSNumber numberWithFloat:fadeIn ? 0.0 : 1.0];
a.toValue = [NSNumber numberWithFloat:fadeIn ? 1.0 : 0.0];
return a;
}
- (CABasicAnimation *)positionAnimationFromPoint:(CGPoint)fromPoint toPoint:(CGPoint)toPoint {
CABasicAnimation *a = [CABasicAnimation animationWithKeyPath:@"position"];
a.fromValue = [NSValue valueWithCGPoint:fromPoint];
a.toValue = [NSValue valueWithCGPoint:toPoint];
return a;
}
```
With the boolean **forward** you can simulate the transition "from left" or "from right".
|
C# checking Internet connection
Can you please tell me if there is a way to check whether there is an internet connection on my computer while my C# program is running. As a simple example: if the internet is working, I would output a message box saying `Internet is available`; otherwise I would output a message saying `Internet is unavailable`.
Without using the library function to see if network is available (since this doesn't check internet connectivity)
```
System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable()
```
Or without opening a webpage and seeing if it returns data
```
using (WebClient client = new WebClient())
htmlCode = client.DownloadString("http://google.com");
```
Because both of these above methods don't suit my needs.
| A little shorter version:
```
public static bool CheckForInternetConnection()
{
try
{
using (var client = new WebClient())
using (var stream = client.OpenRead("http://www.google.com"))
{
return true;
}
}
catch
{
return false;
}
}
```
Another option is:
```
Ping myPing = new Ping();
String host = "google.com";
byte[] buffer = new byte[32];
int timeout = 1000;
PingOptions pingOptions = new PingOptions();
PingReply reply = myPing.Send(host, timeout, buffer, pingOptions);
if (reply.Status == IPStatus.Success) {
// presumably online
}
```
You can find a broader discussion [here](https://stackoverflow.com/questions/2031824/what-is-the-best-way-to-check-for-internet-connectivity-using-net)
|
Path between two nodes
I'm using networkx to work with graphs. I have a pretty large graph (nearly 200 nodes) and I am trying to find all possible paths between two nodes. But, as I understand it, networkx can only find the shortest path. How can I get not just the shortest path, but all possible paths?
UPD: path can contain each node only once.
UPD2: I need something like the find\_all\_paths() function described here: python.org/doc/essays/graphs.html. But this function doesn't work well with a large number of nodes and edges =(
| [igraph](http://packages.python.org/pypi/python-igraph), another graph module for Python can calculate all the *shortest* paths between a given pair of nodes. Calculating all the paths does not make sense as you have infinitely many such paths.
An example for calculating all the shortest paths from vertex 0:
```
>>> from igraph import Graph
>>> g = Graph.Lattice([10, 10], circular=False)
>>> g.get_all_shortest_paths(0)
[...a list of 3669 shortest paths starting from vertex 0...]
```
If you have igraph 0.6 or later (this is the development version at the time of writing), you can restrict the result of `get_all_shortest_paths` to a given end vertex as well:
```
>>> g.get_all_shortest_paths(0, 15)
[[0, 1, 2, 3, 4, 14, 15],
[0, 1, 2, 12, 13, 14, 15],
[0, 10, 11, 12, 13, 14, 15],
[0, 1, 11, 12, 13, 14, 15],
[0, 1, 2, 3, 13, 14, 15],
[0, 1, 2, 3, 4, 5, 15]]
```
Of course you have to be careful; for instance, assume that you have a 100 x 100 grid graph (that can easily be generated by `Graph.Lattice([100, 100], circular=False)` in igraph). The number of shortest paths leading from the top left node to the bottom right node equals the number of possibilities to choose 100 elements out of 200 (proof: the length of the shortest path there has 200 edges, 100 of which will go "horizontally" in the grid and 100 of which will go "vertically"). This probably does not fit into your memory, therefore even calculating all the *shortest* paths between these two nodes is not really feasible here.
If you really need all the paths between two nodes, you can rewrite the function given on the webpage you mentioned using igraph, which will probably be faster than a pure Python solution as igraph's core is implemented in C:
```
def find_all_paths(graph, start, end, path=[]):
path = path + [start]
if start == end:
return [path]
paths = []
for node in set(graph.neighbors(start)) - set(path):
paths.extend(find_all_paths(graph, node, end, path))
return paths
```
It can be optimized more by converting the graph to an adjacency list representation first as it would spare repeated calls to `graph.neighbors`:
```
def find_all_paths(graph, start, end):
def find_all_paths_aux(adjlist, start, end, path):
path = path + [start]
if start == end:
return [path]
paths = []
for node in adjlist[start] - set(path):
paths.extend(find_all_paths_aux(adjlist, node, end, path))
return paths
adjlist = [set(graph.neighbors(node)) for node in xrange(graph.vcount())]
return find_all_paths_aux(adjlist, start, end, [])
```
**Edit**: fixed first example to work in igraph 0.5.3 as well, not only in igraph 0.6.
|
How to mute audio in AVAudioPlayer?
I have created an `AVAudioPlayer`. Now I want to mute it when the user clicks a button.
Here is what I have tried:
```
player.volume = 1.0 // when I first initialize my player
- (IBAction)speakerOnOff:(id)sender {
if (player.volume == 1.0) {
[player setVolume: 0.0];
NSLog(@"1volume is:%f",player.volume);
}else if (player.volume == 0.0) {
[player setVolume: 1.0];
NSLog(@"2volume is:%f",player.volume);
}
}
```
| `if (player.volume = 1.0)` and `if (player.volume = 0.0)` are erroneous at least on two levels. First, C is not Pascal - the `=` operator is an assignment, probably you meant `if (player.volume == 1.0)` instead.
Two, even this wouldn't be any good - comparing floating-point numbers [does not do what you think it does](http://www.validlab.com/goldberg/paper.pdf). You better set a Boolean flag to indicate the state of the player (and omit the `else if` part since it's redundant):
```
- (IBAction)speakerOnOff:(id)sender
{
static BOOL muted = NO;
if (muted) {
[player setVolume:1.0];
} else {
[player setVolume:0.0];
}
muted = !muted;
}
```
|
Laravel: Where selection for Eloquent Eager Loading relationship
I got two DB tables:
**Posts**
```
$table->increments('id');
$table->integer('country_id')->unsigned();
$table->foreign('country_id')->references('id')->on('countries');
```
**Countries**
```
$table->increments('id');
$table->string('name', 70);
```
I use laravel as back-end. Now I want to implement filtering data for my front-end. So the user can select a country name and laravel should answer the request only with posts that have a country with the specified name.
How could I add this condition to my existing pagination query? I tried this:
```
$query = app(Post::class)->with('country')->newQuery();
// ...
if ($request->exists('country')) {
$query->where('country.name', $request->country);
}
// ...
```
... resulting in the following error:
```
Column not found: 1054 Unknown column 'country.name' in 'where clause' (SQL: select count(*) as aggregate from `posts` where `country`.`name` = Albania)
```
| The `whereHas` method accepts the following parameters, as per the Laravel code base:
```
/**
* Add a relationship count / exists condition to the query with where clauses.
*
* @param string $relation
* @param \Closure|null $callback
* @param string $operator
* @param int $count
* @return \Illuminate\Database\Eloquent\Builder|static
*/
public function whereHas($relation, Closure $callback = null, $operator = '>=', $count = 1)
{
return $this->has($relation, $operator, $count, 'and', $callback);
}
```
So, changing the code a little:
```
$query = ""
if ($request->has('country'){
$query = Post::with("country")->whereHas("country",function($q) use($request){
$q->where("name","=",$request->country);
})->get()
}else{
$query = Post::with("country")->get();
}
```
By the way, the above code can be simplified a little as follows:
```
$query = ""
if ($request->has('country'){
$query = Post::with(["country" => function($q) use($request){
$q->where("name","=",$request->country);
}])->first()
}else{
$query = Post::with("country")->get();
```
}
|
itertools product speed up
I use itertools.product to generate all possible variations of 4 elements of length 13. The 4 and 13 can be arbitrary, but as it is, I get 4^13 results, which is a lot. I need the result as a Numpy array and currently do the following:
```
c = it.product([1,-1,np.complex(0,1), np.complex(0,-1)], repeat=length)
sendbuf = np.array(list(c))
```
With some simple profiling code shoved in between, it looks like the first line is pretty much instantaneous, whereas the conversion to a list and then Numpy array takes about 3 hours.
Is there a way to make this quicker? It's probably something really obvious that I am overlooking.
Thanks!
| The NumPy equivalent of `itertools.product()` is `numpy.indices()`, but it will only get you the product of ranges of the form 0,...,k-1:
```
numpy.rollaxis(numpy.indices((2, 3, 3)), 0, 4)
array([[[[0, 0, 0],
[0, 0, 1],
[0, 0, 2]],
[[0, 1, 0],
[0, 1, 1],
[0, 1, 2]],
[[0, 2, 0],
[0, 2, 1],
[0, 2, 2]]],
[[[1, 0, 0],
[1, 0, 1],
[1, 0, 2]],
[[1, 1, 0],
[1, 1, 1],
[1, 1, 2]],
[[1, 2, 0],
[1, 2, 1],
[1, 2, 2]]]])
```
For your special case, you can use
```
a = numpy.indices((4,)*13)
b = 1j ** numpy.rollaxis(a, 0, 14)
```
(This won't run on a 32-bit system, because the array is too large. Extrapolating from the size I can test, it should run in less than a minute though.)
EDIT: Just to mention it: the call to `numpy.rollaxis()` is more or less cosmetic, to get the same output as `itertools.product()`. If you don't care about the order of the indices, you can just omit it (but it is cheap anyway as long as you don't have any follow-up operations that would transform your array into a contiguous array.)
EDIT2: To get the exact analogue of
```
numpy.array(list(itertools.product(some_list, repeat=some_length)))
```
you can use
```
numpy.array(some_list)[numpy.rollaxis(
numpy.indices((len(some_list),) * some_length), 0, some_length + 1)
.reshape(-1, some_length)]
```
This got completely unreadable -- just tell me whether I should explain it any further :)
|
how to generate list of products from elements of a pair of lists in mathematica
Is there a pre-canned operation that would take two lists, say
```
a = { 1, 2, 3 }
b = { 2, 4, 8 }
```
and produce, without using a for loop, a new list where corresponding elements in each pair of lists have been multiplied
```
{ a[1] b[1], a[2] b[2], a[3] b[3] }
```
I was thinking there probably exists something like Inner[Times, a, b, Plus] that returns a list instead of a sum.
|
```
a = {1, 2, 3}
b = {2, 4, 8}
Thread[Times[a, b]]
```
Or, since `Times[]` threads element-wise over lists, simply:
```
a b
```
Please note that the efficiency of the two solutions is not the same:
```
i = RandomInteger[ 10, {5 10^7} ];
{First[ Timing [i i]], First[ Timing[ Thread[ Times [i,i]]]]}
(*
-> {0.422, 1.235}
*)
```
**Edit**
The behavior of `Times[]` is due to the `Listable` attribute. Look at this:
```
SetAttributes[f,Listable];
f[{1,2,3},{3,4,5}]
(*
-> {f[1,3],f[2,4],f[3,5]}
*)
```
|
Checking whether a shell option is set with [ -o OPTION ], weird behavior when OPTION is on
I am in the process of reading [this](http://www.tldp.org/LDP/Bash-Beginners-Guide/Bash-Beginners-Guide.pdf) bash guide from the Linux Documentation Project.
On pages 81 and 82 there's a short example script for testing whether an option is set:
```
if [ -o noclobber ]
then
echo "Your files are protected against accidental overwriting using redirection."
fi
```
I have run into some weird behavior when trying to negate the test. I am getting a return value of 0 for all options that are turned on for `[ -o OPTION ]` and `[ ! -o OPTION ]`. Here's an example:
```
$ set -o | grep errex
errexit off
$ [ -o errexit ]; echo $?
1
$ [ ! -o errexit ]; echo $?
0
$ set -o | grep history
history on
$ [ -o history ]; echo $?
0
$ [ ! -o history ]; echo $?
0
```
| Use `[[ ! -o option ]]` instead. Parsing of expressions in `[[ ]]` is more predictable.
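For example, repeating the checks from the question with `[[ ]]` (errexit off, history on) behaves as expected:
```
$ [[ -o errexit ]]; echo $?
1
$ [[ ! -o errexit ]]; echo $?
0
$ [[ ! -o history ]]; echo $?
1
```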
The result you're seeing with `[` is because there are two `-o` operators in bash's `test` builtin: unary `-o option` to check if an option is set, and binary `test1 -o test2` to check if either test is true (logical or).
You are passing `test` three arguments: `!`, `-o` and `history`. Let's see what [POSIX says](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html) about how to parse three arguments:
```
3 arguments:
- If $2 is a binary primary, perform the binary test of $1 and $3.
- If $1 is '!', negate the two-argument test of $2 and $3.
(...)
```
`-o` is indeed a binary operator, so the shell performs the test of `$1` and `$3`, each of which becomes an "is non-empty" check (like in `[ ! ]` and `[ history ]`). Both strings are non-empty, so the result is true.
The second interpretation is what you expected, but it's not used since the first interpretation matched.
|
Google map divided in regions with color and in subregions
I need to divide a USA map into political boundaries (i.e. states) and into seven regional boundaries on a Google map. Each region will have its own color. For example, California, Hawaii, Nevada, Utah, Arizona and Colorado (the Southwestern US) will all be the same color. Political boundaries will also be on the map along with the regional boundaries. In the national view only regional names should be visible and no state names. On zooming in to a region, state names will be shown; no city names should be visible whether I zoom in or out.
Can you please help me understand where I should start? Links? Tutorials?
Do I need an image to put on the map as an overlay?
I am really not getting it.
If you want to close this question, then please tell me the right forum to ask this on so I can put it there.
|
>
> I need to divide a USA map into political boundaries (i.e. states) and into seven regional boundaries on a Google map. Each region will have its own color.
>
>
>
You can do this with a relatively simple [polygon overlay](http://code.google.com/apis/maps/documentation/javascript/overlays.html#Polygons). You'll need to obtain polygon data for each region of interest from a third party since the API does not provide any.
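For illustration, a minimal polygon overlay might look like this (a sketch that assumes the Maps API is loaded and `map` is an existing `google.maps.Map`; the coordinates below are rough placeholders, not real boundary data):
```
var southwestRegion = new google.maps.Polygon({
  paths: [
    new google.maps.LatLng(42.0, -124.4),
    new google.maps.LatLng(42.0, -109.0),
    new google.maps.LatLng(31.3, -109.0),
    new google.maps.LatLng(32.5, -117.1)
  ],
  strokeColor: '#FF8800',
  strokeWeight: 1,
  fillColor: '#FF8800',
  fillOpacity: 0.35
});
southwestRegion.setMap(map);
```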
>
> On zooming in to a region, state names will be shown; no city names should be visible whether I zoom in or out.
>
>
>
This starts to get more complicated. Probably best done using a [custom map type](http://code.google.com/apis/maps/documentation/javascript/maptypes.html#CustomMapTypes) (see [image map types](http://code.google.com/apis/maps/documentation/javascript/maptypes.html#ImageMapTypes)) if you don't want to see features like city names.
### See also
- [Google Maps w/ counties overlay?](https://stackoverflow.com/questions/3544041/google-maps-w-counties-overlay)
- <http://maps.forum.nu>
- [State polygon overlay example](http://econym.org.uk/gmap/example_states.htm) (for API **v2**)
- [MapTiler - Map Tile Cutter](http://www.maptiler.org/)
- [Polymaps](http://polymaps.org/) - a personal favorite alternative to Google Maps, with a really slick API
|
Shell Script - syntax error near unexpected token `else'
With the following shell script, why I am getting errors
```
syntax error near unexpected token `else'
```
Shell Script
```
echo "please enter username"
read user_name
echo "please enter password"
read -s pass
echo ${ORACLE_SID}
SID=${ORACLE_SID}
if ["${ORACLE_SID}" != 'Test'] then
sqlplus -s -l $USER_NAME/$PASS@$SID <<EOF
copy from scott/tiger@orcl insert EMP using select * from EMP
exit
EOF
else
echo "Cannot copy"
fi
```
| You have to terminate the condition part (the square brackets), either with a semicolon:
```
if [ "${ORACLE_SID}" != 'Test' ]; then
^ semicolon
```
or with a newline
```
if [ "${ORACLE_SID}" != 'Test' ]
^ newline
then
```
Note: you also have to put spaces after `[` and before `]`.
```
[ "${ORACLE_SID}" != 'Test' ]
^ space ^ space
```
---
**Background**
The reason for the `;` or linebreak is that the condition part of the `if` statement is just a command. Any command of any length to be precise. The shell executes that command, examines the exit status of the command, and then decides whether to execute the `then` part or the `else` part.
Because the command can be of any length there needs to be a marker to mark the end of the condition part. That is the `;` or the newline, followed by `then`.
The reason for the spaces after `[` is because `[` is a command. Usually a builtin of the shell. The shell executes the command `[` with the rest as parameters, including the `]` as mandatory last parameter. If you do not put a space after `[` the shell will try to execute `[whatever` as command and fail.
The reason for space before the `]` is similar. Because otherwise it will not be recognized as a parameter of its own.
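To see that the condition really is just a command, you can put any command there, for example:
```
if grep -q "^root:" /etc/passwd; then
    echo "root has an entry"
fi
```
Here `grep -q` produces no output; only its exit status decides which branch runs.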
|
Using regex with list comprehension in python
I have the following code, which will store all the CSV filenames from a specific folder in a list:
```
import pandas as pd
import re
import os
files = os.listdir('.')
filename=[filename for filename in files if filename.endswith('.csv')]
```
However, in my folder I have two types of CSV files: one ends with, for example, \_20.csv (or maybe \_18.csv, \_01.csv), and the other ends with \_Raw.csv.
However, I only need the first type stored in my list. I know regular expressions may help me with that, so I did some Google searching and came up with the following code, but it doesn't seem to work. Can anyone offer advice?
```
filename = [re.search(r'^\d{2}.csv', filename).group(0) for filename in files]
```
| You need to remove `^` (as it matches the start of string location), add `$` at the end of the pattern (to make sure the match is at the end of the string) and escape the dot (else, `.` matches any char but a line break char).
Note you must check if there is a match before accessing `.group()`:
```
result = [f for f in files if re.search(r'_\d{2}\.csv$', f)]
```
**Details**
- `_` - an underscore
- `\d{2}` - 2 digits
- `\.` - a literal dot
- `csv` - `csv` text
- `$` - end of string.
See the [***regex demo***](https://regex101.com/r/9LuSLL/2).
[Python demo](https://ideone.com/xYAYzo):
```
import re
files = ["gfrt_32_20.csv", "wertf_18.csv", "12_01.csv", "ith_Raw.csv"]
result = [f for f in files if re.search(r'_\d{2}\.csv$', f)]
print(result)
# => ['gfrt_32_20.csv', 'wertf_18.csv', '12_01.csv']
```
|
remove attribute if it exists from xmldocument
How do I remove an attribute from an XmlDocument if the attribute exists in the document? Please help. I am using RemoveAttribute, but how can I check whether it exists?
root.RemoveAttribute(fieldName);
Thanks..
```
<?xml version="1.0" standalone="yes" ?>
<Record1>
<Attribute1 Name="DataFieldName" Value="Pages" />
</Record1>
```
I am trying to remove attribute named "DataFieldName".
| Not sure exactly what you're trying to do, so here are two examples.
Removing the attribute:
```
var doc = new System.Xml.XmlDocument();
doc.Load("somefile.xml");
var root = doc.FirstChild;
foreach (System.Xml.XmlNode child in root.ChildNodes)
{
if (child.Attributes["Name"] != null)
child.Attributes.Remove(child.Attributes["Name"]);
}
```
Setting the attribute to an empty string:
```
var doc = new System.Xml.XmlDocument();
doc.Load("somefile.xml");
var root = doc.FirstChild;
foreach (System.Xml.XmlNode child in root.ChildNodes)
{
if (child.Attributes["Name"] != null)
child.Attributes["Name"].Value = "";
}
```
Edit: I can try to modify my code if you elaborate on your original request. An XML document can only have one root node and yours appears to be record1. So does that mean your entire file will only contain a single record? Or did you mean to have something like
```
<?xml version="1.0" standalone="yes" ?>
<Records>
<Record>
<Attribute Name="DataFieldName" Value="Pages" />
</Record>
<Record>
<Attribute Name="DataFieldName" Value="Pages" />
</Record>
</Records>
```
|
Different Static Global Variables Share the Same Memory Address
## Summary
I have several C source files that all declare individual identically named static global variables. My understanding is that the static global variable in each file should be visible only within that file and should not have external linkage applied, but in fact I can see when debugging that the identically named variables share the same memory address.
It is like the `static` keyword is being ignored and the global variables are being treated as `extern` instead. Why is this?
## Example Code
foo.c:
```
/* Private variables -----------------------------------*/
static myEnumType myVar = VALUE_A;
/* Exported functions ----------------------------------*/
void someFooFunc(void) {
myVar = VALUE_B;
}
```
bar.c:
```
/* Private variables -----------------------------------*/
static myEnumType myVar = VALUE_A;
/* Exported functions ----------------------------------*/
void someBarFunc(void) {
myVar = VALUE_C;
}
```
baz.c:
```
/* Private variables -----------------------------------*/
static myEnumType myVar = VALUE_A;
/* Exported functions ----------------------------------*/
void someBazFunc(void) {
myVar = VALUE_D;
}
```
## Debugging Observations
1. Set breakpoints on the `myVar = ...` line inside each function.
2. Call `someFooFunc`, `someBarFunc`, and `someBazFunc` in that order from main.
3. Inside `someFooFunc` `myVar` initially is set to `VALUE_A`, after stepping over the line it is set to `VALUE_B`.
4. Inside `someBarFunc` `myVar` is for some reason initially set to `VALUE_B` before stepping over the line, not `VALUE_A` as I'd expect, indicating the linker may have merged the separate global variables based on their identical name.
5. The same goes for `someBazFunc` when it is called.
6. If I use the debugger to evaluate the value of `&myVar` when at each breakpoint the same address is given.
## Tools & Flags
Toolchain: GNU ARM GCC (6.2 2016q4)
Compiler options:
```
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mlong-calls -O1 -fmessage-length=0 -fsigned-char -ffunction-sections -fdata-sections -ffreestanding -fno-move-loop-invariants -Wall -Wextra -g3 -DDEBUG -DTRACE -DOS_USE_TRACE_ITM -DSTM32L476xx -I"../include" -I"../system/include" -I"../system/include/cmsis" -I"../system/include/stm32l4xx" -I"../system/include/cmsis/device" -I"../foo/inc" -std=gnu11 -MMD -MP -MF"foo/src/foo.d" -MT"foo/src/foo.o" -c -o "foo/src/foo.o" "../foo/src/foo.c"
```
Linker options:
```
arm-none-eabi-g++ -mcpu=cortex-m4 -mthumb -mlong-calls -O1 -fmessage-length=0 -fsigned-char -ffunction-sections -fdata-sections -ffreestanding -fno-move-loop-invariants -Wall -Wextra -g3 -T mem.ld -T libs.ld -T sections.ld -nostartfiles -Xlinker --gc-sections -L"../ldscripts" -Wl,-Map,"myProj.map" --specs=nano.specs -o ...
```
| *NOTE: I do understand that OP's target platform is ARM, but nevertheless I'm still posting an answer in terms of x86. The reason is, I have no ARM backend in handy, while the question is not limited to a particular architecture.*
Here's a simple test stand. Note that I'm using `int` instead of custom `enum` typedef, since it should not matter at all.
**foo.c**
```
static int myVar = 1;
int someFooFunc(void)
{
myVar += 2;
return myVar;
}
```
**bar.c**
```
static int myVar = 1;
int someBarFunc(void)
{
myVar += 3;
return myVar;
}
```
**main.c**
```
#include <stdio.h>
int someFooFunc(void);
int someBarFunc(void);
int main(int argc, char* argv[])
{
printf("%d\n", someFooFunc());
printf("%d\n", someBarFunc());
return 0;
}
```
I'm compiling it on x86\_64 Ubuntu 14.04 with GCC 4.8.4:
```
$ g++ main.c foo.c bar.c
$ ./a.out
3
4
```
Obtaining such results effectively means that `myVar` variables in `foo.c` and `bar.c` are different. If you look at the disassembly (by `objdump -D ./a.out`):
```
000000000040052d <_Z11someFooFuncv>:
40052d: 55 push %rbp
40052e: 48 89 e5 mov %rsp,%rbp
400531: 8b 05 09 0b 20 00 mov 0x200b09(%rip),%eax # 601040 <_ZL5myVar>
400537: 83 c0 02 add $0x2,%eax
40053a: 89 05 00 0b 20 00 mov %eax,0x200b00(%rip) # 601040 <_ZL5myVar>
400540: 8b 05 fa 0a 20 00 mov 0x200afa(%rip),%eax # 601040 <_ZL5myVar>
400546: 5d pop %rbp
400547: c3 retq
0000000000400548 <_Z11someBarFuncv>:
400548: 55 push %rbp
400549: 48 89 e5 mov %rsp,%rbp
40054c: 8b 05 f2 0a 20 00 mov 0x200af2(%rip),%eax # 601044 <_ZL5myVar>
400552: 83 c0 03 add $0x3,%eax
400555: 89 05 e9 0a 20 00 mov %eax,0x200ae9(%rip) # 601044 <_ZL5myVar>
40055b: 8b 05 e3 0a 20 00 mov 0x200ae3(%rip),%eax # 601044 <_ZL5myVar>
400561: 5d pop %rbp
400562: c3 retq
```
You can see that the actual addresses of static variables in different modules are indeed different: `0x601040` for `foo.c` and `0x601044` for `bar.c`. However, they are associated with a single symbol `_ZL5myVar`, which really screws up GDB logic.
You can double-check that by means of `objdump -t ./a.out`:
```
0000000000601040 l O .data 0000000000000004 _ZL5myVar
0000000000601044 l O .data 0000000000000004 _ZL5myVar
```
Yet again, different addresses, same symbols. How GDB will resolve this conflict is purely implementation-dependent.
I strongly believe that it's your case as well. However, to be double sure, you might want to try these steps in your environment.
|
How to query a column named "OR" in BigQuery?
We unfortunately have a table with a column that has been written with the field name of "OR".
If we try to query this table - "SELECT OR FROM etc." we get an error because OR is a reserved word.
How do we query that column using BigQuery's legacy SQL queries? (We need that column and not others.)
We thought we could use "SELECT \*" and BigQuery's "exclude" feature, but it isn't part of legacy SQL, so we are unable to use it. Other ideas?
| I ran into this issue when querying the Hacker News dataset. It appears the authors added a "by" column [because they were replicating the API response keys](https://news.ycombinator.com/item?id=10441409). However, it's difficult to query and [not best practice in database design](https://news.ycombinator.com/item?id=10441325).
I tried the brackets:
```
SELECT
[by] as author_name
FROM
`bigquery-public-data.hacker_news.full`
LIMIT
1000
```
but received the following error:
```
Syntax error: Unexpected keyword BY at [2:4]
```
Here is a query that works on the Hacker News dataset in BigQuery with Standard SQL Dialect:
```
SELECT
b.by as author_name
FROM
`bigquery-public-data.hacker_news.full` as b
LIMIT
1000
```
|
What do cmap, vmin, vmax do internally (matplotlib)?
Let us say I display an image using matplotlib's imshow as follows:
`plt.imshow(IMG, cmap = 'hot', vmin = 0.20, vmax = 0.90)`
where `IMG` is a 2d grayscale image with dtype as float64 and data values in range [0,1].
What are `cmap`, `vmin`, and `vmax` doing to the 2d matrix `IMG` internally such that I get a proper output? I wish to understand it properly so that I can replicate the effect on my image and give it as input to a function for further processing, instead of simply displaying it using `plt.imshow()`. Any explanation (with or without code demonstrations or images) would be appreciated.
| A colormap in Matplotlib defines the colors, but always on a normalized scale between 0 and 1 (or 0-255). The `vmin` and `vmax` keywords are used to normalize the data you provide, so it is between 0 and 1, and can therefore be mapped according to the colormap.
Those `vmin` and `vmax` keywords are basically a shortcut for a linear normalization, which is very common. But other types can also be used, like logarithmic.
Using:
```
vmin = 0.2
vmax = 0.9
ax.imshow(data, cmap="hot", vmin=vmin, vmax=vmax)
```
Is short for:
```
norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)
ax.imshow(data, cmap="hot", norm=norm)
```
But using a `norm` explicitly allows for other types of normalization.
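For example, a logarithmic normalization (which assumes strictly positive data) can be passed the same way; `LogNorm` is a standard Matplotlib class:
```
norm = mpl.colors.LogNorm(vmin=0.01, vmax=1.0)
ax.imshow(data, cmap="hot", norm=norm)
```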
It's fairly easy to do this conversion yourself, which might give a better understanding.
You can get a colormap object using:
```
import matplotlib.pyplot as plt
import numpy as np
cmap = plt.cm.get_cmap("hot")
```
[](https://i.stack.imgur.com/hePns.png)
The normalization of some sample data can be done with:
```
data = np.random.rand(20,20)
vmin = 0.2
vmax = 0.9
data_norm = (data - vmin) / (vmax - vmin)
```
The colormap object can be called with a value, if that value is a float it assumes the range of the colormap is between 0-1. If the value is an integer it assumes it's between 0-255.
For example, the first and last colors of this `hot` colormap are:
```
cmap(0.)
# results in: (0.0416, 0.0, 0.0, 1.0) # = almost black
cmap(1.)
# results in: (1.0, 1.0, 1.0, 1.0) # = white
```
It returns a RGBA tuple, denoting the color for that specific value.
What's nice is that you can also call the colormap with an array of values, so using the **normalized** data will return an array of those values. This will add an extra dimension at the end of the array, so it changes from 2D to 3D.
That array of colors can be plotted with Matplotlib without any extra information. And that's more or less what Matplotlib does in the background when you specify the `cmap`,`vmin`,`vmax` keywords.
An example tying it all together:
```
data = np.random.rand(20,20)
vmin = 0.2
vmax = 0.9
data_norm = (data - vmin) / (vmax - vmin)
cmap = plt.cm.get_cmap("hot")
data_rgb = cmap(data_norm)
fig, axs = plt.subplots(
1, 3, figsize=(6, 2), facecolor="w",
subplot_kw=dict(xticks=[], yticks=[]),
)
axs[0].set_title("Default")
axs[0].imshow(data)
axs[1].set_title("Hot + vmin/vmax")
axs[1].imshow(data, cmap="hot", vmin=vmin, vmax=vmax)
axs[2].set_title("Manual RGB")
axs[2].imshow(data_rgb)
```
[](https://i.stack.imgur.com/pGrf5.png)
|
Are unpacked structs in packed structs automatically packed?
Are unpacked structs in packed structs automatically packed by GCC?
In other words, does the `__packed__` attribute automatically propagate to nested structures?
That is to say:
```
struct unpackedStruct{
int16_t field1;
int32_t field2;
// etc...
};
struct packedStruct{
int16_t field1;
struct unpackedStruct struct1; // <-- Is this struct packed?
// etc...
} __attribute__((__packed__));
```
| No, the inner structure is not packed. In [this Godbolt example](https://godbolt.org/z/E3Gaqr), we can see that `struct foo` is not packed inside `struct bar`, which has the `packed` attribute; the `struct bar` object created contains three bytes of padding (visible as `.zero 3`) inside its `struct foo` member, between the `struct foo` members `c` and `i`.
[Current documentation for GCC 10.2](https://gcc.gnu.org/onlinedocs/gcc-10.2.0/gcc/Common-Type-Attributes.html#Common-Type-Attributes) explicitly says the internal layout of a member of a packed structure is not packed (because of the attribute on the outer structure; it could be packed due to its own definition, of course).
(In [older documentation](https://gcc.gnu.org/onlinedocs/gcc-3.3/gcc/Type-Attributes.html#Type%20Attributes) that said that applying `packed` to a structure is equivalent to applying it to its members, it meant the effect of applying `packed` to the “variable” that is the member, described in [the documentation for variable attributes](https://gcc.gnu.org/onlinedocs/gcc-3.3/gcc/Variable-Attributes.html#Variable%20Attributes). When `packed` is applied to a structure member, it causes the member’s alignment requirement to be one byte. That is, it eliminates padding between previous members and that member, because no padding is needed to make it aligned. It does not alter the representation of the member itself. If that member is an unpacked structure, it remains, internally, an unpacked structure.)
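If you want to verify this on your own toolchain, here is a small self-contained check (a sketch; the exact numbers assume a typical ABI where `int32_t` requires 4-byte alignment):
```
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct unpackedStruct {
    int16_t field1;   /* 2 bytes, then 2 bytes of internal padding */
    int32_t field2;   /* 4 bytes, aligned to 4 */
};

struct packedStruct {
    int16_t field1;
    struct unpackedStruct struct1;
} __attribute__((__packed__));

int main(void) {
    printf("%zu\n", sizeof(struct unpackedStruct));          /* typically 8 */
    printf("%zu\n", offsetof(struct packedStruct, struct1)); /* 2: no padding before the member */
    printf("%zu\n", sizeof(struct packedStruct));            /* typically 10: inner padding kept */
    return 0;
}
```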
|
String format with errors with %e
I've encountered some Go code that appears to use `%e` to format an error for display on the screen. A simplified version would be code like this:
```
err := errors.New("La de da")
fmt.Printf("%e\n", err)
```
outputs
```
&{%!e(string=La de da)}
```
However, if I look at the [go manual](https://golang.org/pkg/fmt/), it says `%e` is for formatting floating point numbers in scientific notation. That output doesn't look like scientific notation, so I'm wondering
1. If this is a specific notation, what is it? (i.e. is there a `%.` formatting option I could use to get that format)
2. If it's not a specific notation, what weird thing is going on under the hood that leads to an error being rendered in this way?
3. What silly, obvious thing am I missing that renders most of what I've said in this post wrong?
| Read the Go documentation.
>
> [Package fmt](https://golang.org/pkg/fmt/)
>
>
> Printing
>
>
> Format errors:
>
>
> If an invalid argument is given for a verb, such as providing a string
> to %d, the generated string will contain a description of the problem,
> as in these examples:
>
>
>
> ```
> Wrong type or unknown verb: %!verb(type=value)
> Printf("%d", hi): %!d(string=hi)
> Too many arguments: %!(EXTRA type=value)
> Printf("hi", "guys"): hi%!(EXTRA string=guys)
> Too few arguments: %!verb(MISSING)
> Printf("hi%d"): hi%!d(MISSING)
> Non-int for width or precision: %!(BADWIDTH) or %!(BADPREC)
> Printf("%*s", 4.5, "hi"): %!(BADWIDTH)hi
> Printf("%.*s", 4.5, "hi"): %!(BADPREC)hi
> Invalid or invalid use of argument index: %!(BADINDEX)
> Printf("%*[2]d", 7): %!d(BADINDEX)
> Printf("%.[2]d", 7): %!d(BADINDEX)
>
> ```
>
> All errors begin with the string "%!" followed sometimes by a single
> character (the verb) and end with a parenthesized description.
>
>
>
---
For your example,
```
package main
import (
"errors"
"fmt"
)
func main() {
err := errors.New("La de da")
fmt.Printf("%e\n", err)
}
```
Playground: <https://play.golang.org/p/NKC6WWePyxM>
Output:
```
&{%!e(string=La de da)}
```
Documentation:
>
> All errors begin with the string "%!" followed sometimes by a single
> character (the verb) and end with a parenthesized description.
>
>
>
> ```
> Wrong type or unknown verb: %!verb(type=value)
> Printf("%d", hi): %!d(string=hi)
>
> ```
>
>
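As an aside, the idiomatic verbs for printing an error value are `%v` or `%s`:
```
fmt.Printf("%v\n", err) // La de da
fmt.Printf("%s\n", err) // La de da
```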
|
How can I disable vuex getter caching?
I know you can disable caching in **Vue computed properties** by including a `cache: false` option. For example:
```
computed: {
now: {
cache: false,
get() {
return Date.now();
}
}
}
```
But I was wondering if this feature was available for **Vuex getters**
| To "disable" caching, you could take advantage of the fact that getters which are to be used as functions are not cached.
From [the vuex docs](https://vuex.vuejs.org/guide/getters.html#method-style-access)
>
> Note that getters accessed via methods will run each time you call them, and the result is not cached.
>
>
>
If you don't mind adding parenthesis (actually, calling functions) when accessing your non-cached property you could use something like this:
```
getters: {
myNonCachedGetter: state => () => {
// original getter body
}
}
```
It is then used as `myNonCachedGetter()`.
It turns out that the following doesn't work in newer versions of vuex.
~~However, if you would like it to look as a normal getter, you could wrap it into an function with invocation:~~
```
getters: {
myNonCachedGetter: state => (() => {
// original getter body
})()
}
```
It *does* make the code look a bit harder to read, but it can be called as simple as `myNonCachedGetter`.
|
How to reset Jenkins security settings from the command line?
Is there a way to reset all (or just disable the security settings) from the command line without a user/password as I have managed to completely lock myself out of `Jenkins`?
| The simplest solution is to completely disable security - change `true` to `false` in `/var/lib/jenkins/config.xml` file.
```
<useSecurity>true</useSecurity>
```
A one-liner to achieve the same:
```
sed -i 's/<useSecurity>true<\/useSecurity>/<useSecurity>false<\/useSecurity>/g' /var/lib/jenkins/config.xml
```
Then just restart Jenkins:
```
sudo service jenkins restart
```
And then go to admin panel and set everything once again.
If you in case are running your Jenkins inside a Kubernetes pod and can not run `service` command, then you can just restart Jenkins by deleting the pod:
```
kubectl delete pod <jenkins-pod-name>
```
Once the command was issued, Kubernetes will terminate the old pod and start a new one.
|
What would happen if I run 3-4 virtual machines on a dual-core CPU?
How would my MacBook work when I run 4 machines on only two cores? My school is trying to save money, so we are supposed to run a server with at least two clients in VirtualBox. AFAIK each machine needs one dedicated CPU core and some RAM to run. Is there any chance that this will work?
| "Afaik each machine needs one dedicated cpu core".
Not so. Your computer multitasks, allowing it to run more than one program per core. As an example, Virtualbox runs just fine a single-core CPU, right alongside your other programs.
The [requirements for running VirtualBox](http://www.virtualbox.org/wiki/End-user_documentation) include:
In order to run VirtualBox on your machine, you need:
>
> Reasonably powerful x86 hardware. Any
> recent Intel or AMD processor should
> do.
>
>
> Memory. Depending on what guest
> operating systems you want to run, you
> will need at least 512 MB of RAM (but
> probably more, and the more the
> better). Basically, you will need
> whatever your host operating system
> needs to run comfortably, plus the
> amount that the guest operating system
> needs. So, if you want to run Windows
> XP on Windows XP, you probably won't
> enjoy the experience much with less
> than 1 GB of RAM. If you want to try
> out Windows Vista in a guest, it will
> refuse to install if it is given less
> than 512 MB RAM, so you'll need that
> for the guest alone, plus the memory
> your operating system normally needs.
>
>
> Hard disk space. While VirtualBox
> itself is very lean (a typical
> installation will only need about 30
> MB of hard disk space), the virtual
> machines will require fairly huge
> files on disk to represent their own
> hard disk storage. So, to install
> Windows XP, for example, you will need
> a file that will easily grow to
> several GB in size.
>
>
> A supported host operating system.
> Presently, we support Windows (XP and
> later), many Linux distributions, Mac
> OS X, Solaris and OpenSolaris.
>
>
> A supported guest operating system.
> Besides the user manual (see below),
> up-to-date information is available at
> "Status: Guest OSes".
>
>
>
|
Configure IPython to show warnings all the time
The first time I do something that raises a warning in the IPython shell, I see it. But subsequent times I do not. For example,
```
In [1]: import numpy as np
In [2]: np.uint8(250) * np.uint8(2)
/Users/me/anaconda/envs/py33/bin/ipython:1: RuntimeWarning: overflow encountered in ubyte_scalars
#!/bin/bash /Users/me/anaconda/envs/py33/bin/python.app
Out[2]: 244
In [3]: np.uint8(250) * np.uint8(2)
Out[3]: 244 # No warning!
```
How do I configure IPython to always show warnings? I've tried:
```
import warnings
warnings.filterwarnings('always')
```
But that doesn't make any difference.
| I think this was addressed relatively [recently](https://github.com/ipython/ipython/issues/6611) by the IPython team. It wasn't playing well with `warnings` because of a somewhat unusual design decision. Turning on `always` suffices for me in plain Python, and now if I do the same thing in IPython trunk:
```
In [1]: import warnings
In [2]: warnings.filterwarnings('always')
In [3]: import numpy as np
In [4]: np.uint8(250) * np.uint8(2)
/home/dsm/sys/root/bin/ipython3.4:1: RuntimeWarning: overflow encountered in ubyte_scalars
#!/home/dsm/sys/root/bin/python3.4
Out[4]: 244
In [5]: np.uint8(250) * np.uint8(2)
/home/dsm/sys/root/bin/ipython3.4:1: RuntimeWarning: overflow encountered in ubyte_scalars
#!/home/dsm/sys/root/bin/python3.4
Out[5]: 244
```
|
Json Array not properly generated
I have written Java code to generate JSON from data searched in a file, but it's not generating the exact JSONArray. It's like
`[{"item":"1617"},{"item":"1617"}]`
instead of
`[{"item":"747"},{"item":"1617"}].`
Here 1617 is the last item fetched from the file.
```
JSONArray ja = new JSONArray();
JSONObject jo = new JSONObject();
while (products.readRecord())
{
String productID = products.get("user");
int j = Integer.parseInt(productID);
if(j == userId) {
itemid = products.get("item");
jo.put("item",itemid);
ja.add(jo);
}
}
out.println(ja);
products.close();
```
| You are actually creating one JSONObject to handle two objects; you need to create a new JSONObject inside the while loop. Something like this, where every iteration of the while loop creates a new JSONObject and adds it to the JSONArray:
```
JSONArray ja = new JSONArray();
while (products.readRecord())
{
String productID = products.get("user");
int j = Integer.parseInt(productID, 10);
if(j == userId)
{
JSONObject jo = new JSONObject();
itemid = products.get("item");
jo.put("item", itemid);
ja.add(jo);
}
}
out.println(ja);
products.close();
```
**Extra:**
I am not sure how Java does string-to-integer conversion, but I think you should always specify the radix when using parseInt, so that strings like '09' will not be treated as octal values and converted to the wrong value (at least this is true in JavaScript :))
`Integer.parseInt(productID, 10);`
|
Powershell - Filtering OUs while using get-adcomputer
I am trying to create a script that generates a list of computers based on specific properties which a computer may have. For example, I am trying to make a list of Windows XP computers and Windows 7 computers, throw their names in a .csv file and outputting the final count of each.
Here is my code so far
```
import-module ActiveDirectory
$computers = get-adcomputer -Filter 'ObjectClass -eq "Computer"' -properties "OperatingSystem"
$i = 0
$j = 0
Foreach ($computer in $computers) {
if ($computer.operatingSystem -like "Windows 7*") {
$i++
'"{0}","{1}","{2}"' -f $computer.Name, $computer.OperatingSystem, "$computer.DistinguishedName" | Out-file -append C:\users\admin\desktop\test.txt
}
elseif ($computer.OperatingSystem -like "Windows XP*") {
$j++
'"{0}","{1}","{2}"' -f $computer.Name, $computer.OperatingSystem, "$computer.DistinguishedName" | Out-file -append C:\users\admin\desktop\test.txt
}
else {
$_
}
}
write-host "$i Win 7"
write-host "$j Win xp"
$k = $i+$j
write-host "$k Total"
```
Sample Output:
```
104 Win 7
86 Win xp
190 Total
```
This script works; however, I would like to make it a bit better by being able to say which OUs not to look into, but I can't quite figure it out.
If anyone has any insight into how to do this, or even just to make the above code any better I would love to hear it.
Thank you!
| The `-like` operator doesn't seem to work with wildcards for DistinguishedName. So the obvious operation `Get-ADComputer -Filter {(DistinguishedName -notlike "*OU=evil,*")}` doesn't work.
The easiest workaround is to get all the computers in a collection and filter it afterwards to suit your needs. Like so,
```
# All the computers from the evil OU:
$evilOU = $computers| ? {$_.DistinguishedName -like "*ou=evil,*"}
# All the computers but the ones from the evil OU:
$goodOU = $computers| ? {$_.DistinguishedName -notlike "*ou=evil,*"}
```
**Addendum**
To combine matching rules, use `-and`, `-or` and `-like`. Remember to use the `*` wildcard with `?` (`Where-Object`).
```
# All the computers save the ones from evil and wicked OU:
$goodOU = $computers| ? {
$_.DistinguishedName -notlike "*ou=evil,*" -and $_.DistinguishedName -notlike "*ou=wicked,*"
}
```
|
How to use funcall on a method to set an attribute of an object
Considering this code:
```
(defclass test () ((test :initform nil :accessor test)))
#<STANDARD-CLASS TEST>
(defvar *test* (make-instance 'test))
*TEST*
```
and this test:
```
(funcall #'test *test*)
nil
```
one would expect that this works:
```
(setf (funcall #'test *test*) 123)
```
the same as
```
(setf (test *test*) 123)
123
```
but it results in this:
```
; in: LAMBDA NIL
; (FUNCALL #'(SETF FUNCALL) #:NEW1175 #:TMP1177 #:TMP1176)
; ==>
; (SB-C::%FUNCALL #'(SETF FUNCALL) #:NEW1175 #:TMP1177 #:TMP1176)
;
; caught WARNING:
; The function (SETF FUNCALL) is undefined, and its name is reserved by ANSI CL
; so that even if it were defined later, the code doing so would not be portable.
;
; compilation unit finished
; Undefined function:
; (SETF FUNCALL)
; caught 1 WARNING condition
```
Why doesn't it work, and how can I work around it?
I tested it using both SBCL and CLISP, with the same result.
| `SETF` is a special form (see <http://www.lispworks.com/documentation/HyperSpec/Body/05_aa.htm> for the part of the spec explaining it). Your second example works because the lisp implementation interprets `(test *test*)` syntactically.
To see what's going on, look at this session:
```
This is SBCL 1.0.56.0.debian, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.
SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
* (defclass test () ((test :initform nil :accessor test)))
#<STANDARD-CLASS TEST>
* (defvar *test* (make-instance 'test))
*TEST*
* (macroexpand '(setf (test *test*) 123))
(LET* ((#:*TEST*606 *TEST*))
(MULTIPLE-VALUE-BIND (#:NEW605)
123
(FUNCALL #'(SETF TEST) #:NEW605 #:*TEST*606)))
T
* #'(setf test)
#<STANDARD-GENERIC-FUNCTION (SETF TEST) (1)>
* (macroexpand '(setf (funcall #'test *test*) 123))
(LET* ((#:G609 #'TEST) (#:*TEST*608 *TEST*))
(MULTIPLE-VALUE-BIND (#:NEW607)
123
(FUNCALL #'(SETF FUNCALL) #:NEW607 #:G609 #:*TEST*608)))
T
```
Note that the first macroexpansion grabs `#'(setf test)`, which is the writer function that gets automatically defined by your `defclass` call. The second blindly translates to `#'(setf funcall)`, which doesn't exist (hence the error).
To answer your "how can I work around it?" question, we'd probably need to know more about what you're trying to do. For example, you could use something like `(setf (slot-value object slot-name))` which would allow you to choose the slot programmatically.
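For example, with the class from the question:
```
(setf (slot-value *test* 'test) 123)
(test *test*)
;; => 123
```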
|
Difference between destructor, dispose and finalize method
I am studying how the garbage collector works in C#. I am confused over the use of the `Destructor`, `Dispose` and `Finalize` methods.
As per my research and understanding, having a destructor method within my class will tell the garbage collector to perform the garbage collection in the way mentioned in the destructor method, which cannot be called explicitly on instances of the class.
The `Dispose` method is meant to let the user control the garbage collection. The `Finalize` method frees the resources used by the class, but not the object itself.
I am not sure if I understand this the right way. Please clarify these doubts; any further links or guides are welcome.
| Destructor implicitly calls the Finalize method, they are technically the same. Dispose is available with objects that implement the IDisposable interface.
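For example, a minimal `IDisposable` implementation might look like this (a sketch of the common pattern, not code from the question):
```
class Car : IDisposable
{
    public void Dispose()
    {
        // Release managed/unmanaged resources deterministically here...
        // Then tell the GC that the finalizer no longer needs to run.
        GC.SuppressFinalize(this);
    }
}
```
Callers can then clean up deterministically, typically with `using (var car = new Car()) { ... }`.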
You may see: [Destructors C# - MSDN](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/destructors)
>
> The destructor implicitly calls Finalize on the base class of the
> object.
>
>
>
Example from the same link:
```
class Car
{
~Car() // destructor
{
// cleanup statements...
}
}
```
The Destructor's code is implicitly translated to the following code:
```
protected override void Finalize()
{
try
{
// Cleanup statements...
}
finally
{
base.Finalize();
}
}
```
Your understanding for the Destructor is right:
From [MSDN](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/destructors)
>
> The **programmer has no control over when the destructor is called
> because this is determined by the garbage collector**. The garbage
> collector checks for objects that are no longer being used by the
> application. If it considers an object eligible for destruction, it
> calls the destructor (if any) and reclaims the memory used to store
> the object. Destructors are also called when the program exits. It is
> possible to force garbage collection by calling Collect, but most of
> the time, this should be avoided because it may create performance
> issues.
>
>
>
|
Forcing a thread context switch
I would like to be able to force a context switch from one thread to another. Therefore, I have implemented the following locking procedure:
```
#define TRUE (1==1)
#define FALSE (0==1)
#include <pthread.h>
int acquire(void);
int release(void);
int c_yield(int count);
// Who was the last to acquire the lock
static volatile pthread_t lock_owner;
// Is the lock currently taken
static volatile int lock_taken = FALSE;
/* This variable indicates how many threads are currently waiting for
* the lock. */
static volatile int lock_wanted = 0;
/* Mutex for protecting access to lock_wanted, lock_owner and
* lock_taken */
static pthread_mutex_t mutex;
/* Condition even to notify when the lock becomes available */
static pthread_cond_t cond;
void init_lock(void) {
pthread_cond_init(&cond, NULL);
pthread_mutex_init(&mutex, NULL);
}
int acquire(void) {
pthread_mutex_lock(&mutex);
if(lock_taken) {
lock_wanted++;
pthread_cond_wait(&cond, &mutex);
lock_wanted--;
}
if(lock_taken) {
pthread_mutex_unlock(&mutex);
return EPROTO;
}
lock_taken = TRUE;
lock_owner = pthread_self();
return pthread_mutex_unlock(&mutex);
}
int release(void) {
pthread_mutex_lock(&mutex);
lock_taken = FALSE;
if(lock_wanted > 0) {
pthread_cond_signal(&cond);
}
return pthread_mutex_unlock(&mutex);
}
```
Using another method (not shown), I can then implement a yield() that only returns if there are either no threads waiting for the lock, or after at least one other thread had a chance to run.
This implementation works fine most of the time, but if I stress-test it with ~50 threads trying to acquire and release the lock at random intervals, every once in a while acquire() will return `EPROTO`, indicating that someone called `pthread_cond_signal` without first setting `lock_taken = FALSE`.
Why is that? It seems as if the CPU sometimes doesn't see the new value of `lock_taken`, which is why I already made the variables volatile. But it's still happening...
|
```
if(lock_taken) {
lock_wanted++;
pthread_cond_wait(&cond, &mutex);
lock_wanted--;
}
```
This should be `while(lock_taken)`, not `if`. There are several reasons you might wake from `pthread_cond_wait` but find the lock taken by another thread by the time you get scheduled. One is if there's a spurious wakeup. The other is if another thread enters `acquire` after we block, finds the lock not taken, and takes it itself before this thread gets the mutex again.
The canonical way would be:
```
lock_wanted++;
while(lock_taken) pthread_cond_wait(&cond, &mutex);
lock_wanted--;
```
Get rid of all the `volatile`s, they harm performance and are not needed. Because mutexes are sufficient synchronization, nothing else is necessary. (And, to anyone else looking at your code, they signal that you don't understand thread synchronization and tried to 'sprinkle them in' until it just happened to work.)
|
random generator not good enough?
During an interview I was asked to implement a random generator in Java, without using any existing random number libraries, that takes an int n as an argument and returns a random number between 0 and n. This was the implementation I provided:
```
public static int random(int n) {
int r = 0;
for (int i =0; i <n;i++) {
r+=helper();
}
return r;
}
// helper that returns 0 or 1
private static int helper() {
long t = System.nanoTime();
if (t%2 == 0) {
return 1;
} else {
return 0;
}
}
```
He said it's not right but he wouldn't tell me what he was expecting. Why did he say it's wrong? How would you have done it differently?
| Main issues with your approach:
- System.nanoTime() isn't (on its own) a useful source of random bits - it's highly likely to produce the same value multiple times in a row if you call it in quick succession because many systems don't actually have a sufficiently accurate timer. Even if it was nano-second-accurate, you are likely to get predictable patterns from the lowest bits if you sample it in a tight loop. Valid uses of System.nanoTime in random number generation might be: a) one-off initialisation of a seed value or b) occasionally adding some extra randomness into an entropy pool (not guaranteed to be beneficial, but it can't hurt)
- Even if the bits were truly random, by adding the 0/1 values n times you would be creating a binomial-style distribution with a mean of n/2, i.e. not a uniform distribution which is presumably what the interviewer was expecting.
- Your algorithm is O(n) - not good for generating random numbers with a large value of n!
You ideally want a PRNG that produces new pseudo-random bits from an internal state. Here's the one I use:
```
private static volatile long state = 0xCAFEBABE; // initial non-zero value
public static final long nextLong() {
long a=state;
state = xorShift64(a);
return a;
}
public static final long xorShift64(long a) {
a ^= (a << 21);
a ^= (a >>> 35);
a ^= (a << 4);
return a;
}
public static final int random(int n) {
if (n<0) throw new IllegalArgumentException();
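// scale the top 32 bits (a uniform fraction of 2^32) by n, keeping the integer part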
long result=((nextLong()>>>32)*n)>>32;
return (int) result;
}
```
This is based on George Marsaglia's XORShift algorithm. It produces good pseudorandom numbers and is very fast (typically even faster than a Linear Congruential Generator since the xors and shifts are cheaper than multiplies and divides on most hardware).
Having said that, I wouldn't expect people to memorise this kind of algorithm for an interview unless you are specifically applying for a role as a crypto programmer!
|
How do you unit test mutually-recursive methods?
I have three functions that looks something like this:
```
private Node GetNode(Node parentNode)
{
var node = new Node();
switch (parentNode.NodeType)
{
case NodeType.Multiple: node = GetMultipleNode(parentNode); break;
case NodeType.Repeating: node = GetRepeatingNode(parentNode); break;
}
return node;
}
private Node GetMultipleNode(Node parentNode)
{
foreach (var child in parentNode.Children)
return GetNode(child);
}
private Node GetRepeatingNode(Node parentNode)
{
for (int i=0; i < parentNode.Count; i++)
return GetNode(new Node(i)); // Assume meaningful constructor for Node
}
```
Given that these three methods are mutually recursive, how does one go about unit testing them independently?
| Normally you wouldn't need to test each method individually - you can just test that the top-level method does the right thing.
However if for some reason you *want* to test each method separately you can use dependency injection just as you would test any method that has dependencies. The only difference here is that the dependency is the object itself. Here is some example code to demonstrate the idea:
```
class NodeGetter : INodeGetter
{
public Node GetNode(Node parentNode)
{
return GetNode(parentNode, this);
}
public Node GetNode(Node parentNode, INodeGetter nodeGetter)
{
switch (parentNode.NodeType)
{
case NodeType.Multiple:
return nodeGetter.GetMultipleNode(parentNode, nodeGetter);
case NodeType.Repeating:
return nodeGetter.GetRepeatingNode(parentNode, nodeGetter);
default:
throw new NotSupportedException(
"Node type not supported: " + parentNode.NodeType);
}
}
public Node GetMultipleNode(Node parentNode, INodeGetter nodeGetter)
{
foreach (Node child in parentNode.Children)
{
return nodeGetter.GetNode(child);
}
}
public Node GetRepeatingNode(Node parentNode, INodeGetter nodeGetter)
{
for (int i = 0; i < parentNode.Count; i++)
{
// Assume meaningful constructor for Node
return nodeGetter.GetNode(new Node(i));
}
}
}
```
When testing, pass a mock for the `nodeGetter` argument.
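For instance, a sketch with Moq (the `Node` construction is hypothetical; adapt it to however your `Node` exposes `Count` and `Children`):
```
var mock = new Mock<INodeGetter>();
mock.Setup(g => g.GetNode(It.IsAny<Node>())).Returns(new Node(42));

var sut = new NodeGetter();
var parent = new Node(1); // assumed to yield a node with Count == 1

sut.GetRepeatingNode(parent, mock.Object);

// GetRepeatingNode itself ran for real, while the recursion went to the mock:
mock.Verify(g => g.GetNode(It.IsAny<Node>()), Times.Once());
```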
I also changed your methods from private to public because it is better to only test the public interface of your class.
|
How to calculate logistic regression coefficients manually?
For two independent variables, what is the method to calculate the coefficients for any dataset in logistic regression? The equation we know is: logit = ln(P/(1-P)) = B0 + B1 \* X1 + B2 \* X2
On the dataset below, how do we calculate the above coefficients B0, B1 and B2?
```
Y X1 X2
0 2 30
0 6 50
1 8 60
1 10 80
```
| Unlike linear regression, where you can use matrix algebra and [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) to get the results in a closed form, for logistic regression you need to use some kind of optimization algorithm to find the solution with smallest loss, or greatest likelihood. For this, logistic regression most commonly uses the [iteratively reweighted least squares](https://en.wikipedia.org/wiki/Logistic_regression#Estimation), but if you really want to compute it by hand, then it would be probably easier to use gradient descent. You can find nice introduction to gradient descent in the lecture [Lecture 6.5 — Logistic Regression | Simplified Cost Function And Gradient Descent](https://www.youtube.com/watch?v=TTdcc21Ko9A) by Andrew Ng (if it is unclear, see earlier lectures, also available on YouTube or on Coursera). Using his notation, the iteration step is
$ \mathtt{repeat} \, \{\\ \qquad\theta\_j := \theta\_j - \alpha \sum\_{i=1}^{m}\,
(h\_\theta(x^{(i)}) - y^{(i)}) \,x\_j^{(i)}\\ \}$
where $\theta\_j$ is the $j$-th parameter from the vector $(\theta\_0, \theta\_1, \dots, \theta\_k)$, $x^{(i)}$ is the vector of variables for the $i$-th of the $m$ observations $(1, x\_1^{(i)},\dots, x\_k^{(i)})$, where $1$ comes from the column of ones for the intercept, the inverse of the logistic link function is $h\_\theta(x) = \tfrac{1}{1+\exp(-\theta^T x)}$, and $\alpha$ is the learning rate. You iterate until convergence.
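For concreteness, here is a minimal NumPy sketch of that batch gradient-descent loop on the dataset from the question (the learning rate and iteration count are arbitrary choices; note that this tiny dataset is perfectly separable, so the likelihood has no finite maximum and the coefficients keep growing with more iterations, which is why real software adds regularization or a convergence criterion):
```
import numpy as np

# Design matrix with a leading column of ones for the intercept B0.
X = np.array([[1.0, 2.0, 30.0],
              [1.0, 6.0, 50.0],
              [1.0, 8.0, 60.0],
              [1.0, 10.0, 80.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

def h(theta, X):
    z = np.clip(X @ theta, -30, 30)  # clip to keep exp() from overflowing
    return 1.0 / (1.0 + np.exp(-z))  # inverse logit link

theta = np.zeros(3)
alpha = 0.001
for _ in range(50000):
    theta -= alpha * (X.T @ (h(theta, X) - y))

print(theta)  # rough estimates of (B0, B1, B2)
```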
|
How to block merging of pull requests by committers in GitHub
I am looking for a way via GitHub (or CircleCI) settings to prevent the person who opens or commits to a pull request from being able to merge or approve that pull request.
So far I have branch protection that requires approvals, but post-approval, as the PR creator and committer, I am still able to merge.
| You need to be able to
>
> prevent the person who is involved in a PR (created the PR or made a commit) from being able to merge the PR (or even approve it)
>
>
>
**A contributor who has created a PR cannot approve or request changes by default in GitHub**, so that is already taken care of.
Since a Pull Request is a GitHub feature, a PR merge can currently only be blocked by 2 ways
- **Using GitHub's settings**
- **Using pre-receive hooks** (only for GitHub Enterprise)
**Using GitHub's settings**, you can only block merging by requiring either pull request reviews, status checks to pass, signed commits or linear history as shown under the branch protection settings.
[](https://i.stack.imgur.com/vjrLV.png)
or by **allowing merge commits, squash merging** or **rebase merging** as shown in the Merge button section under repo settings
[](https://i.stack.imgur.com/LYqGk.png)
**If you are on GitHub Enterprise, you can use a pre-receive hook** ([documentation](https://docs.github.com/en/enterprise/2.21/admin/developer-workflow/creating-a-pre-receive-hook-script)) like the one below to ensure that self-merging PRs are blocked (this example is from [here](https://github.com/github/platform-samples/blob/master/pre-receive-hooks/block_self_merge_prs.sh))
```
if [[ "$GITHUB_VIA" = *"merge"* ]] && [[ "$GITHUB_PULL_REQUEST_AUTHOR_LOGIN" = "$GITHUB_USER_LOGIN" ]]; then
echo "Blocking merging of your own pull request."
exit 1
fi
exit 0
```
Apart from the above, there is currently no other way to block self-merging PRs on GitHub. And using CircleCI or any other CI workflow can only block merging for everybody (if you opt for the requirement of status checks on GitHub) or nobody, as it can't control the PR merge button.
|
getting access token url from popup in javascript [spotify auth]
I am trying to make a Spotify auth flow in pure JavaScript, so a user can sign in, and I can then add a new playlist to their account. From the instructions I've read, I use an auth popup that, once they sign in, has the access token in the URL. I have a popup right now that the user can auth with, and once they do, it will have the access token in the URL.
I need to get the url from my popup and save it as a global var, but I'm having trouble figuring out how to do this in javascript.
<https://codepen.io/martin-barker/pen/YzPwXaz>
My codepen opens a popup with `let popup = window.open(` , can I run a function in my popup to detect when the user successfully authenticates and the url changes? In which case I would want to save the url for parsing and close my popup
My javascript code is as follows:
```
async function spotifyAuth() {
let result = spotifyLogin()
}
//open popup
function spotifyLogin() {
console.log("inside spotifyLogin, opening popup")
let popup = window.open(`https://accounts.spotify.com/authorize?client_id=5a576333cfb1417fbffbfa3931b00478&response_type=token&redirect_uri=https://codepen.io/martin-barker/pen/YzPwXaz&show_dialog=true&scope=playlist-modify-public`, 'Login with Spotify', 'width=800,height=600')
}
//get url from popup and parse access token????
window.spotifyCallback = (payload) => {
console.log("inside window? ") //this line never appears in console
popup.close()
fetch('https://api.spotify.com/v1/me', {
headers: {
'Authorization': `Bearer ${payload}`
}
}).then(response => {
return response.json()
}).then(data => {
// do something with data
})
}
```
| Here is how I did it in JavaScript. Global variable like you mentioned:
`var access_token = null;`
My url looks something like this for example: <https://...home.jsp#access_token=BQAXe5JQOV_xZmAukmw6G430lreF......rQByzZMcOIF2q2aszujN0wzV7pIxA4viMbQD6s&token_type=Bearer&expires_in=3600&state=vURQeVAoZqwYm4dC>
After Spotify redirects the user to the uri you specified on the dashboard, I parse the url for the hash containing the access token like so:
```
var hash = window.location.hash.substring(1);
var accessString = hash.indexOf("&");
/* 13 because that bypasses 'access_token' string */
access_token = hash.substring(13, accessString);
console.log("Access Token: " + access_token);
```
Which the output is:
`Access Token: BQAXe5JQOV_xZmAukmw6G430lreF...........rQByzZMcOIF2q2aszujN0wzV7pIxA4viMbQD6s`
I save this access token in sessionStorage just in case the user navigates away from the page and the URL no longer contains the access\_token. I am assuming this is the implicit grant flow since you want to use pure JavaScript. Just make sure to re-obtain an access token every hour, since they expire.
## Addendum
I can show you how to obtain the token and use it in an example.
I have a button on a .html page that once clicked calls a function called implicitGrantFlow() in a JavaScript file called
Test.js
```
function implicitGrantFlow() {
/* If access token has been assigned in the past and is not expired, no request required. */
if (sessionStorage.getItem("accessToken") !== null &&
sessionStorage.getItem("tokenTimeStamp") !== null &&
upTokenTime < tokenExpireSec) {
var timeLeft = (tokenExpireSec - upTokenTime);
console.log("Token still valid: " + Math.floor(timeLeft / 60) + " minutes left.");
/* Navigate to the home page. */
$(location).attr('href', "home.jsp");
} else {
console.log("Token expired or never found, getting new token.");
$.ajax({
url: auth_url,
type: 'GET',
contentType: 'application/json',
data: {
client_id: client_id,
redirect_uri: redirect_uri,
scope: scopes,
response_type: response_type_token,
state: state
}
}).done(function callback(response) {
/* Redirect user to home page */
console.log("COULD THIS BE A SUCCESS?");
$(location).attr('href', this.url);
}).fail(function (error) {
/* Since we cannot modify the server, we will always fail. */
console.log("ERROR HAPPENED: " + error.status);
console.log(this.url);
$(location).attr('href', this.url);
});
}
}
```
What I am doing is checking whether the access\_token info I stored in sessionStorage is null. I used the time since the epoch to record when the token was created and when it should ideally expire. If these parameters are satisfied, then I do not make another call.
Otherwise, I make a call to get an access token, which on success will redirect me to my URI as I mentioned in my previous write-up. (You'll see I have the redirect in the .fail section; this is because I don't have permission on my school server to set up settings to bypass the CORS-related issues that prevent my call from succeeding, even though the redirect URL I create is fine.)
Then when my whitelist uri gets loaded (which redirects to my home page) I utilize my `<body>` tag.
home.jsp
```
<body onload="getAccessToken()">
```
Here in my tag I have it call this function once the page loads. This calls the function getAccessToken().
```
/**
* The bread and butter to calling the API. This function will be called once the
* user is redirected to the home page on success and without rejecting the terms
* we are demanding. Once through, this function parses the url for the access token
* and then stores it to be used later or when navigating away from the home page.
*/
function getAccessToken() {
access_token = sessionStorage.getItem("accessToken");
if (access_token === null) {
if (window.location.hash) {
console.log('Getting Access Token');
var hash = window.location.hash.substring(1);
var accessString = hash.indexOf("&");
/* 13 because that bypasses 'access_token' string */
access_token = hash.substring(13, accessString);
console.log("Access Token: " + access_token);
/* If first visit or regaining token, store it in session. */
if (typeof(Storage) !== "undefined") {
/* Store the access token */
sessionStorage.setItem("accessToken", access_token); // store token.
/* To see if we need a new token later. */
sessionStorage.setItem("tokenTimeStamp", secondsSinceEpoch);
/* Token expire time */
sessionStorage.setItem("tokenExpireStamp", secondsSinceEpoch + 3600);
console.log("Access Token Time Stamp: "
+ sessionStorage.getItem("tokenTimeStamp")
+ " seconds\nOR: " + dateNowMS + "\nToken expires at: "
+ sessionStorage.getItem("tokenExpireStamp"));
} else {
alert("Your browser does not support web storage...\nPlease try another browser.");
}
} else {
console.log('URL has no hash; no access token');
}
} else if (upTokenTime >= tokenExpireSec) {
console.log("Getting a new acess token...Redirecting");
/* Remove session vars so we dont have to check in implicitGrantFlow */
sessionStorage.clear();
$(location).attr('href', 'index.html'); // Get another access token, redirect back.
} else {
var timeLeft = (tokenExpireSec - upTokenTime);
console.log("Token still valid: " + Math.floor(timeLeft / 60) + " minutes left.");
}
}
```
Here I am storing the token in session storage once I obtain the access token from the url. I use the process mentioned in my earlier post but here is the full JavaScript. If it is still unclear after the comments please let me know.
Now that we have obtained and stored our access token, we can make an API call. Here is how I do it (I have been using jQuery; this example gets a user's top tracks).
Example api call
```
/**
* Function will get the user's top tracks depending on the limit and offset
* specified in addition to the time_range specified in JSON format.
* @param time_range short/medium/long range the specifies how long ago.
* @param offset Where the indexing of top tracks starts.
* @param limit How many tracks at a time we can fetch (50 max.)
*/
function getUserTopTracks(time_range, offset, limit) {
$.get({
url: 'https://api.spotify.com/v1/me/top/tracks',
headers: {
'Authorization': 'Bearer ' + access_token,
},
data: {
limit: limit, // This is how many tracks to show (50 max @ a time).
offset: offset, // 0 = top of list, increase to get more tracks.
time_range: time_range // short/medium/long_term time ranges.
},
success: function (response) {
/* Get the items from the response (The limit) tracks. */
res = JSON.parse(JSON.stringify(response.items));
/* Get all the track details in the json */
for (i = 0; i < res.length; i++) {
console.log("Track: " + res[i]);
}
},
error: function () {
console.log("getUserTopTracks(): api call failed!");
}
});
}
```
The parameter time\_range is specified as "long\_term" to get the user's top tracks since the beginning (read more on Spotify's docs for more info) in addition to offset being 0 to start at the beginning and limit being equal to 50 since that is the max fetch per call.
On success I have my response variable 'response', and I want the root of parsing to start from the 'items' section to make traversal easier (you do not have to do this; you can simply use response.items directly). I then print the response to the console.
These are the basics of what you can do; how you decide to handle or store the data is up to you. I am not an expert; I only started learning web programming this past semester, so a lot of the practices I am using might be wrong or incorrect.
|
Is there a built-in way to get all of the changed/updated fields in a Doctrine 2 entity
Let's suppose I retrieve an entity `$e` and modify its state with setters:
```
$e->setFoo('a');
$e->setBar('b');
```
Is there any possibility to retrieve an array of fields that have been changed?
In case of my example I'd like to retrieve `foo => a, bar => b` as a result
PS: yes, I know I can modify all the accessors and implement this feature manually, but I'm looking for some handy way of doing this
| You can use
`Doctrine\ORM\EntityManager#getUnitOfWork` to get a `Doctrine\ORM\UnitOfWork`.
Then just trigger changeset computation (works only on managed entities) via `Doctrine\ORM\UnitOfWork#computeChangeSets()`.
You can use also similar methods like `Doctrine\ORM\UnitOfWork#recomputeSingleEntityChangeSet(Doctrine\ORM\ClassMetadata $meta, $entity)` if you know exactly what you want to check without iterating over the entire object graph.
After that you can use `Doctrine\ORM\UnitOfWork#getEntityChangeSet($entity)` to retrieve all changes to your object.
Putting it together:
```
$entity = $em->find('My\Entity', 1);
$entity->setTitle('Changed Title!');
$uow = $em->getUnitOfWork();
$uow->computeChangeSets(); // do not compute changes if inside a listener
$changeset = $uow->getEntityChangeSet($entity);
```
**Note.** If trying to get the updated fields **inside a preUpdate listener**, don't recompute the change set, as it has already been done. Simply call `getEntityChangeSet()` to get all of the changes made to the entity.
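For illustration, here is a minimal sketch of such a preUpdate listener (the listener class and the reaction inside the loop are hypothetical; `getEntityChangeSet()` on the event args is the relevant call):
```
use Doctrine\ORM\Event\PreUpdateEventArgs;

class MyEntityListener
{
    public function preUpdate(PreUpdateEventArgs $args)
    {
        // The change set is already computed at this point; do not recompute it.
        $changeset = $args->getEntityChangeSet(); // ['field' => [$oldValue, $newValue], ...]
        foreach ($changeset as $field => $values) {
            // react to the change, e.g. log "$field: $values[0] -> $values[1]"
        }
    }
}
```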
**Warning:** As explained in the comments, this solution should not be used outside of Doctrine event listeners. This will break Doctrine's behavior.
|
Create systemd service in AWS Elastic Beanstalk on new Amazon Linux 2
I'm currently trying to create a worker on AWS Elastic Beanstalk which is pulling messages from a specific SQS queue (with the help of the Symfony messenger). I don't want to use dedicated worker instances for this task. After some research, I found out that systemd can help here which is enabled by default on the new Amazon Linux 2 instances.
However, I'm not able to create a running systemd service. Here is my .ebextensions/03\_workers.config file:
```
files:
  /etc/systemd/system/my_worker.service:
    mode: "000755"
    owner: root
    group: root
    content: |
      [Unit]
      Description=My worker
      [Service]
      User=nginx
      Group=nginx
      Restart=always
      ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60
      [Install]
      WantedBy=multi-user.target
services:
  systemd:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
```
I can't see my service running if I'm running this command:
```
systemctl | grep my_worker
```
What am I doing wrong? :)
| `systemd` is not supported in [Services](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-services). The only valid key is `sysvinit`:
```
services:
  sysvinit:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
```
But I don't think it will even work, as this is for Amazon Linux 1, not for Amazon Linux 2.
In Amazon Linux 2 you shouldn't even be using much of `.ebextensions`. AWS [docs](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html) specifically write:
>
> On Amazon Linux 2 platforms, instead of providing files and commands in .ebextensions configuration files, we **highly recommend that you use Buildfile, Procfile, and platform hooks** whenever possible to configure and run custom code on your environment instances during instance provisioning.
>
>
>
Thus, you should consider using [Procfile](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html) which does basically what you want to achieve:
>
> Use a Procfile for **long-running application processes that shouldn't exit**. Elastic Beanstalk expects processes run from the Procfile to **run continuously**. Elastic Beanstalk monitors these processes and **restarts any process that terminates**. For short-running processes, use a Buildfile.
>
>
>
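For the worker in the question, a minimal Procfile could be a single line reusing the same console command (Elastic Beanstalk restarts the process whenever the time limit makes it exit):
```
worker: /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60
```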
**Alternative**
Since you already have created a unit file `/etc/systemd/system/my_worker.service` for `systemd`, you can `enable` and `start` it yourself.
For this, [container\_commands](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-container-commands) in `.ebextensions` can be used. For example:
```
container_commands:
  10_enable_worker:
    command: systemctl enable my_worker.service
  20_start_worker:
    command: systemctl start my_worker.service
```
|
tidyr use separate\_rows over multiple columns
I have a data.frame where some cells contain strings of comma separate values:
```
d <- data.frame(a=c(1:3),
b=c("name1, name2, name3", "name4", "name5, name6"),
c=c("name7","name8, name9", "name10" ))
```
I want to separate those strings where each name is split into its own cell. This is easy with
```
tidyr::separate_rows(d, b, sep=",")
```
if it is done for one column at a time. But I can't do this for both columns "b" and "c" at the same time, since it requires that the number of names in each string is the same. Instead of writing
```
tidyr::separate_rows(d, b, sep=",")
tidyr::separate_rows(d, c, sep=",")
```
Is there a way to do this in a one-liner, for e.g. with apply? Something like
```
apply(d, 2, separate_rows(...))
```
Not sure how to pass the arguments to the `separate_rows()` function.
| You can use a pipe. Note that `sep = ", "` is automatically detected.
```
d %>% separate_rows(b) %>% separate_rows(c)
# a b c
# 1 1 name1 name7
# 2 1 name2 name7
# 3 1 name3 name7
# 4 2 name4 name8
# 5 2 name4 name9
# 6 3 name5 name10
# 7 3 name6 name10
```
**Note:** Using tidyr version 0.6.0, where the `%>%` operator is included in the package.
---
**Update:** Using @doscendodiscimus comment, we could use a `for()` loop and reassign `d` in each iteration. This way we can have as many columns as we like. We will use a character vector of column names, so we'll need to switch to the standard evaluation version, `separate_rows_`.
```
cols <- c("b", "c")
for(col in cols) {
d <- separate_rows_(d, col)
}
```
which gives the updated `d`
```
a b c
1 1 name1 name7
2 1 name2 name7
3 1 name3 name7
4 2 name4 name8
5 2 name4 name9
6 3 name5 name10
7 3 name6 name10
```
As an update: in tidyr 1.2.0, `separate_rows_()` is deprecated. Instead, we can use the following code:
```
cols <- c("b", "c")
for (col in cols) {
d <- separate_rows(d, all_of(col))
}
```
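If a true one-liner is preferred, the same iteration can be folded with base R's `Reduce`; this is a sketch that should be equivalent to the loop above:
```
d <- Reduce(function(df, col) separate_rows(df, all_of(col)), cols, d)
```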
|
For each string, execute a function/procedure
I'd like to loop through a list of strings and execute a function/procedure with each string as the argument.
What's the best alternative to the following generic code (since it's not legal):
```
set serveroutput on;
begin
FOR r IN ('The', 'Quick', 'brown', 'fox')
LOOP
dbms_output.put_line( r );
END LOOP;
end;
```
I assume there might be pattern for this.
| Just for completeness, a pure PL/SQL solution.
```
SQL> set serveroutput on
SQL>
SQL> declare
2 my_array sys.dbms_debug_vc2coll
3 := sys.dbms_debug_vc2coll('The', 'Quick', 'brown', 'fox');
4 begin
5 for r in my_array.first..my_array.last
6 loop
7 dbms_output.put_line( my_array(r) );
8 end loop;
9 end;
10 /
The
Quick
brown
fox
PL/SQL procedure successfully completed.
SQL>
```
This uses the predeclared `sys.dbms_debug_vc2coll` datatype, which has quite a generous definition ...
```
SQL> desc sys.dbms_debug_vc2coll
sys.dbms_debug_vc2coll TABLE OF VARCHAR2(1000)
SQL>
```
... so, like Gary says, you may wish to declare your own. Especially if your strings are short and you have lots of them.
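For reference, declaring such a collection type yourself is a one-liner (the type name and length here are arbitrary):
```
create or replace type my_strings_t as table of varchar2(100);
```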
|
Main differences between Ubuntu Studio 12.04 and Ubuntu 12.04
What are the differences between Ubuntu Studio 12.04 and Ubuntu 12.04?
Have a nice time.
| - **Desktop environment**: Ubuntu 12.04 uses Unity as its shell over the GNOME desktop environment, while Ubuntu Studio 12.04 uses Xfce as its desktop environment.
- **Linux kernel**: Ubuntu Studio 12.04 uses a realtime kernel (for reducing the amount of latency, which is extremely beneficial for audio work), while Ubuntu 12.04 uses a generic kernel ([more about kernel types](https://help.ubuntu.com/community/UbuntuStudio/RealTimeKernel)).
- **Installation**: Ubuntu 12.04 has a live CD for installation, but Ubuntu Studio 12.04 only has a live DVD.
- **Applications**: Ubuntu Studio 12.04 contains a lot more pre-installed multimedia software (several audio, video and graphical applications) than Ubuntu 12.04.
- **Appearance**: Ubuntu Studio 12.04 has a blue-on-black theme instead of Ubuntu's default purple and orange, and a new sound theme replaces the default Ubuntu one.
- **JACK Sound System**: Along with the ubiquitous PulseAudio sound server, the powerful JACK sound server is also included in Ubuntu Studio. Both are already configured to work well together.
- **System Configuration**: The system is configured differently so as to **not limit** your audio/video processing.
|
Mojolicious Parameter Validation
I have the following code :
```
get '/:foo' => sub {
my $c = shift;
my $v = $c->validation;
my $foo = $c->param('y');
$c->render(text => "Hello from $foo.") if $v->required('y')->like(q/[A-Z]/);
};
```
and want to verify the `y` parameter of the HTTP request. I connect to the above web page using: `http://myserver:3000?x=2&y=1`
It prints `Hello from 1.` even though there is `$v->required('y')->like(q/[A-Z]/);`
What could be my problem here?
| Mojolicious validation uses [a fluent interface](http://mojolicious.org/perldoc/Mojolicious/Validator/Validation), so most methods return the validation object. Objects are truthy by default, so your condition is always true.
Instead, you can check
- `->is_valid()` – whether validation for the current topic was successful, or
- `->has_error()` – whether there were any validation errors.
You introduce a new validation topic by calling `->required('name')` or `->optional('name')` on the validation object. So you could write:
```
$c->render(text => "Hello from $foo.")
if $v->required('y')->like(q/[A-Z]/)->is_valid;
```
or
```
$v->required('y')->like(q/[A-Z]/);
$c->render(text => "Hello from $foo.") unless $v->has_error;
```
|
Why I got "cannot borrow `arr[\_]` as mutable more than once at a time"?
I have a programming assignment of creating a bubble sort using Rust. I don't really have much experience in Rust so this is a little bit hard for me:
```
fn main() {
println!("Sort numbers ascending");
let num:[i32; 10] = [4, 65, 2, -31, 0, 99, 2, 83, 782, 1];
println!("Before: {:?}", num);
bubble_sort( num);
println!("After: {:?}\n", num);
}
fn swap( a: &mut i32, b: &mut i32){
let tmp: i32 = *b;
*b=*a;
*a=tmp;
}
fn bubble_sort(mut arr: [i32; 10]) {
for i in 0..arr.len() {
for j in 0..arr.len() - 1 - i {
if arr[j] > arr[j + 1] {
swap( &mut arr[j], &mut arr[j + 1]);
}
}
}
}
```
Error:
```
Compiling playground v0.0.1 (/playground)
error[E0499]: cannot borrow `arr[_]` as mutable more than once at a time
--> src/main.rs:19:24
|
19 | swap( &mut arr[j], &mut arr[j + 1]);
| ---- ----------- ^^^^^^^^^^^^^^^ second mutable borrow occurs here
| | |
| | first mutable borrow occurs here
| first borrow later used by call
|
= help: consider using `.split_at_mut(position)` or similar method to obtain two mutable non-overlapping sub-slices
For more information about this error, try `rustc --explain E0499`.
error: could not compile `playground` due to previous error
```
| The Rust borrow checker is quite sophisticated, but it has limitations. Here, even though it's obvious to a human that there is no mutable aliasing going on, that fact is totally hidden from the borrow checker. It sees `&mut arr[j]` and `&mut arr[j + 1]` and concludes: "you're trying to borrow mutably from `arr` twice".
The error message does indicate the solution:
[`split_at_mut()`](https://doc.rust-lang.org/std/primitive.slice.html#method.split_at_mut) is provided for exactly this situation. Under the hood, it uses `unsafe` to bend the above rule in a way that is still sound and will not cause UB. It takes an index to split at, and returns 2 non-overlapping mutable slices. This is the example from the docs:
```
let mut v = [1, 0, 3, 0, 5, 6];
let (left, right) = v.split_at_mut(2);
assert_eq!(left, [1, 0]);
assert_eq!(right, [3, 0, 5, 6]);
```
So in your case, instead of borrowing `&mut arr[j]` and `&mut arr[j + 1]` from the same slice, you can split the slice between the two indices:
```
let (start, end) = arr.split_at_mut(j + 1);
swap(&mut start[j], &mut end[0]);
```
```
P.S. there's also [`.swap(i, j)`](https://doc.rust-lang.org/std/primitive.slice.html#method.swap) for slices :-D
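Putting that together, the inner comparison in the question could simply become (a sketch of the loop body):
```
for j in 0..arr.len() - 1 - i {
    if arr[j] > arr[j + 1] {
        arr.swap(j, j + 1); // the built-in slice swap sidesteps the double borrow entirely
    }
}
```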
|
Confidence Interval in Python dataframe
I am trying to calculate the mean and confidence interval (95%) of a column "Force" in a large dataset. I need the result by using the groupby function, grouping by the different "Classes".
When I calculate the mean and put it in the new dataframe, it gives me NaN values for all rows. I'm not sure if I'm going the correct way. Is there any easier way to do this?
This is the sample dataframe:
```
df=pd.DataFrame({ 'Class': ['A1','A1','A1','A2','A3','A3'],
'Force': [50,150,100,120,140,160] },
columns=['Class', 'Force'])
```
To calculate the confidence interval, the first step I did was to calculate the mean. This is what I used:
```
F1_Mean = df.groupby(['Class'])['Force'].mean()
```
This gave me `NaN` values for all rows.
| Update on 25-Oct-2021: @a-donda pointed out that the 95% interval should be based on 1.96 standard errors of the mean, i.e. 1.96 × std / sqrt(count).
```
import pandas as pd
import numpy as np
import math
df=pd.DataFrame({'Class': ['A1','A1','A1','A2','A3','A3'],
'Force': [50,150,100,120,140,160] },
columns=['Class', 'Force'])
print(df)
print('-'*30)
stats = df.groupby(['Class'])['Force'].agg(['mean', 'count', 'std'])
print(stats)
print('-'*30)
ci95_hi = []
ci95_lo = []
for i in stats.index:
    m, c, s = stats.loc[i]
    ci95_hi.append(m + 1.96*s/math.sqrt(c))
    ci95_lo.append(m - 1.96*s/math.sqrt(c))
stats['ci95_hi'] = ci95_hi
stats['ci95_lo'] = ci95_lo
print(stats)
```
The output is
```
Class Force
0 A1 50
1 A1 150
2 A1 100
3 A2 120
4 A3 140
5 A3 160
------------------------------
mean count std
Class
A1 100 3 50.000000
A2 120 1 NaN
A3 150 2 14.142136
------------------------------
mean count std ci95_hi ci95_lo
Class
A1 100 3 50.000000 156.580326 43.419674
A2 120 1 NaN NaN NaN
A3 150 2 14.142136 169.600000 130.400000
```
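The loop can also be replaced with vectorized column arithmetic, which should produce the same `ci95_hi`/`ci95_lo` columns:
```
stats['ci95_hi'] = stats['mean'] + 1.96 * stats['std'] / np.sqrt(stats['count'])
stats['ci95_lo'] = stats['mean'] - 1.96 * stats['std'] / np.sqrt(stats['count'])
```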
|
How to apply map function to array in Kotlin and change its values?
I am working with this code.
```
fun main(args : Array<String>){
val someArray : Array<Int> = arrayOf(3,53,2,521,51,23,512,34,124);
println("Original array is ")
someArray.forEach {print("$it , ")}
someArray.map({num -> num*2})
println("Changed array is ")
println()
someArray.forEach { print("$it , ") }
}
```
But the map function does not seem to work. Here is what it prints
Original array is
3 , 53 , 2 , 521 , 51 , 23 , 512 , 34 , 124 ,
Changed array is
3 , 53 , 2 , 521 , 51 , 23 , 512 , 34 , 124 ,
My question is: why is the array not doubling the value of its elements? What am I missing here?
| `map` does not modify the array in place; it returns a new `List` with the transformed values, and here that return value is simply discarded. You could always define your own extension that does what you need:
```
fun <T> Array<T>.mapInPlace(transform: (T) -> T) {
for (i in this.indices) {
this[i] = transform(this[i])
}
}
```
Usage would look like this:
```
someArray.mapInPlace { num -> num * 2 }
```
Note that there are special non-generic classes for primitive arrays, such as [`IntArray`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-int-array/index.html) (see also [this question](https://stackoverflow.com/q/45090808/4465208)), so you should probably be using those instead of the generic one. Of course then you'd have to define this extension for each of them separately, like:
```
fun IntArray.mapInPlace(transform: (Int) -> Int) { ... }
fun DoubleArray.mapInPlace(transform: (Double) -> Double) { ... }
```
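Alternatively, if allocating a new collection is acceptable, just keep `map`'s return value (note that it returns a `List<Int>`, not an array):
```
val doubled = someArray.map { num -> num * 2 }
println(doubled) // [6, 106, 4, 1042, 102, 46, 1024, 68, 248]
```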
|
How to use templated struct in class c++
I am trying to use a templated `struct` within a class, but can't figure out how to declare it properly.
```
template<typename var>
struct Node
{
var value;
Node* left;
Node* right;
};
class Tree
{
public:
Tree();
~Tree();
template<typename var>
void insert(var key, Node *node);
template<typename var>
Node* search(var key, Node *node);
void deleteTree(Node *node);
void inOrderTraversePrint(Node* node);
void preOrderTraversePrint(Node* node);
void postOrderTraversePrint(Node* node);
Node* getRoot();
void resetRoot();
private:
Node* root;
};
```
I keep getting a couple of errors: "Node is not a type" and "Invalid use of template-name Node without an argument list".
I know how to template a single `class` and methods within or outside the class, but this is the first time I am using a `struct` and trying to use a templated `struct` within a class.
What is the proper way to declare and use a templated `struct` within a class?
| If Node uses `var` type internally, then it should be templated by it as well:
```
template<typename var>
struct Node
{
var value;
Node* left;
Node* right;
};
template<typename T>
class Tree
{
public:
Tree();
~Tree();
void insert(T key, Node<T> *node);
Node<T>* search(T key, Node<T> *node);
void deleteTree(Node<T> *node);
void inOrderTraversePrint(Node<T>* node);
void preOrderTraversePrint(Node<T>* node);
void postOrderTraversePrint(Node<T>* node);
Node<T>* getRoot();
void resetRoot();
private:
Node<T>* root;
};
```
**Edit**:
>
> this is first time I am using struct and trying to use templated struct within a class. What is proper way to declare and use templated struct within a class.
>
>
>
You can get away with not templating Tree class, if your tree data always has the same type:
```
class Tree
{
public:
Tree();
~Tree();
void insert(int key, Node<int> *node);
Node<int>* search(int key, Node<int> *node);
void deleteTree(Node<int> *node);
void inOrderTraversePrint(Node<int>* node);
void preOrderTraversePrint(Node<int>* node);
void postOrderTraversePrint(Node<int>* node);
Node<int>* getRoot();
void resetRoot();
private:
Node<int>* root;
};
```
**Second Edit**
Variant implementation for nodes:
```
#include <sstream>
#include <string>
#include <utility>

class Node
{
public:
    virtual std::string ToString() = 0; // convert value to string
    virtual ~Node() = default;
    Node *left = nullptr;
    Node *right = nullptr;
};

template<typename T>
class ValueNode: public Node
{
    T value_;
public:
    ValueNode(T value): value_{ std::move(value) } {}
    std::string ToString() override
    {
        std::ostringstream oss;
        oss << value_;
        return oss.str();
    }
    ~ValueNode() override = default;
};
class Tree
{
public:
Tree();
~Tree();
template<typename var>
void insert(var key, Node *node)
{
// insert new ValueNode<var>{ key } here
}
template<typename var>
Node* search(var key, Node *node);
void deleteTree(Node *node);
void inOrderTraversePrint(Node* node);
void preOrderTraversePrint(Node* node);
void postOrderTraversePrint(Node* node);
Node* getRoot();
void resetRoot();
private:
Node* root;
};
```
The idea here is to identify all operations to apply to all node values in the same way (in my example code, that means "conversion of the value to a string") and (first) declare them as abstract operations in `Node`, then implement them depending on the value type (as virtual function implmentations in `ValueNode`).
This will allow you to make abstraction of the fact you have multiple types in the nodes, in the `Tree` class.
That said, if you use boost, you should probably use `boost::variant` or `boost::any` in place of typename var in Node, in your original code.
|
How to store characters into a char pointer using the strcpy() function
I am trying to figure out why I can't store characters into my char pointer by using the strcpy() command. I get a seg fault when I run the code below.
```
#include <stdio.h>
#include <string.h>
int main(int argc, const char *argv[])
{
char *str1, *str2;
int ret;
strcpy(str1, "abcdefg"); // stores string into character array str1
strcpy(str2, "abcdefg");
printf("contents of %s \n", str1);
ret = strncmp(str1, str2, strlen(str2)); /* compares str1 to str2 */
if (ret > 0) {
printf("str1 is less than str2\n");
}
else if (ret < 0) {
printf("str2 is less than str1\n");
}
else if (ret == 0) {
printf("str1 is equal to str2\n");
}
return 0;
}
```
Thank you!
| Right now, `str1` and `str2` are just uninitialized pointers to a character.
When you do `strcpy(str1, "abcdefg")`, it attempts to write the characters of the string "abcdefg" into the memory that `str1` points to, and since `str1` points to unknown memory that you most likely don't have write permission for, you get a segmentation fault.
One way to fix it is to allocate memory on the heap and then store these strings.
```
#include <stdlib.h>
...
/* assuming the max length of a string is not more than 253 characters */
char *str1 = malloc(sizeof(char) * 254);
char *str2 = malloc(sizeof(char) * 254);
```
You can also use `strdup` to duplicate the string like [Gangadhar](https://stackoverflow.com/users/1885193/gangadhar) has mentioned.
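For example (remember to `free` the result when done):
```
char *str1 = strdup("abcdefg"); /* allocates strlen + 1 bytes and copies the string */
```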
Another way is to declare `str1` and `str2` as arrays during the compiling as [Bryan Ash](https://stackoverflow.com/users/104219/bryan-ash) suggested
```
char str1[] = "abcdefg";
char str2[] = "abcdefg";
```
In case, you want to allocate the string dynamically but not on the heap, you can use `alloca` (for more details read <http://man7.org/linux/man-pages/man3/alloca.3.html>) as [kfsone](https://stackoverflow.com/users/257645/kfsone) noted
|
parsing css measures
When I write a jQuery plugin, I like to specify options for spacings the CSS way.
I wrote a function that returns a CSS string as values in an object:
`5px 10px` returns `top: 5px, right: 10px, bottom: 5px, left: 10px`
Now I often use the returned values to do some calculations, and it's not very nice to have to extract the measuring unit every time...
I suck at writing regular expressions; could someone help me complete this function:
```
this.cssMeasure = function(cssString, separateUnits){
var errorMsg = 'please format your css values correctly dude' // declare before use, or the early return below yields undefined
if ( cssString ){
var values = {}
}else{
return errorMsg
}
var spacing = cssString.split(' ')
if( spacing[4] ) {
return errorMsg
} else if ( spacing[3] ) {
values = {top: spacing[0], right:spacing[1], bottom:spacing[2], left:spacing[3]}
} else if ( spacing[2] ) {
values = {top: spacing[0], right:spacing[1], bottom:spacing[2], left:spacing[1]}
} else if ( spacing[1] ) {
values = {top: spacing[0], right:spacing[1], bottom:spacing[0], left:spacing[1]}
} else {
values = {top: spacing[0], right:spacing[0], bottom:spacing[0], left:spacing[0]}
}
if (separateUnits) {
$.each(values, function(i, value){
/*
at this place i need to extract the measuring unit of each value and return them separately
something like top: {value: 10, unit: 'px'}, right: {bla} and so on
*/
})
}
return values
}
```
if you have any idea how to improve this function i am open to your comments.
| According to <http://www.w3.org/TR/css3-values/#ltnumbergt> , "A number can either be an integer, or it can be zero or more digits followed by a dot (.) followed by one or more digits", in regexp language
```
\d+|\d*\.\d+
```
Let's add an optional sign to it, and make the group "non-capturing" to make the parsing simpler
```
([+-]?(?:\d+|\d*\.\d+))
```
Enumerating all possible units is tedious, therefore let the unit be any sequence of lowercase letters (including none) or a percent sign
```
([a-z]*|%)
```
Putting it all together,
```
propRe = /^([+-]?(?:\d+|\d*\.\d+))([a-z]*|%)$/
```
When you apply this to a value
```
parts = "+12.34em".match(propRe)
```
the numeric value will be in parts[1] and the unit in parts[2]
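Applied to the `$.each` loop from the question, this could look like the following sketch:
```
$.each(values, function (side, value) {
    var parts = String(value).match(propRe);
    if (parts) {
        values[side] = { value: parseFloat(parts[1]), unit: parts[2] };
    }
});
```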
|
Bootstrap sticky-top on sidebar column doesn't work
I'm trying to create a sticky sidebar on the right. The sidebar menu is inside a grid column. I'm using the `sticky-top` class as [shown in this question](https://stackoverflow.com/questions/38382043/), but it still doesn't work.
Here's the code...
```
<div class="container min-vh-100 overflow-hidden">
<nav class="navbar navbar-light navbar-expand">
<a class="navbar-brand" href="#">Brand</a>
<ul class="navbar-nav">
<li class="nav-item"><a href="#" class="nav-link">Home</a></li>
</ul>
</nav>
<div class="row">
<div class="col-sm-8 content pt-4">
...
</div>
<div class="col-sm-4">
<div class="menu sticky-top p-3 bg-light">
<h5 class="text-primary">Sticky menu</h5>
<div class="nav flex-column">
<a href="#" class="nav-link pl-0">Menu 1</a>
<a href="#" class="nav-link pl-0">Menu 2</a>
<a href="#" class="nav-link pl-0">Menu 3</a>
</div>
</div>
</div>
</div>
</div>
```
Codeply: <https://codeply.com/go/xwYPD1B1tk>
The `menu` div is the one I'd like to stick to the top as the user scrolls down.
| Position sticky will not work if *any* of the parent containers of the sticky element use `overflow:hidden`. Removing the `overflow-hidden` class from the container allows the `sticky-top` to work.
```
<div class="container min-vh-100">
<nav class="navbar navbar-light navbar-expand">
..
</nav>
<div class="row">
<div class="col-sm-8 content pt-4">
...
</div>
<div class="col-sm-4">
<div class="menu sticky-top p-3 bg-light">
<h5 class="text-primary">Sticky menu</h5>
<div class="nav flex-column">
...
</div>
</div>
</div>
</div>
</div>
```
<https://codeply.com/go/9Nf6pOa7TN>
|
Why does path not fit inside SVG viewbox?
[](https://i.stack.imgur.com/hyFfC.png)[](https://i.stack.imgur.com/TlbUw.png)
**Expected Behavior:**
Icon to fit only the area of the 24 x 24 viewbox.
**Actual Behavior:**
The svg tag takes up the correct area of 24 x 24 but the path element is not contained inside of the 24 x 24 area.
How would I be able to have the path be constrained to the 24 x 24 area and not go over? Currently this is what my svg tag looks like.
```
<a href="#">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
<path d="M15.99 2a13.99 13.99 0 0 0-5.1 27.03c-.13-1.11-.23-2.81.05-4.02l1.64-6.96s-.41-.84-.41-2.07c0-1.95 1.13-3.4 2.53-3.4 1.2 0 1.77.9 1.77 1.97 0 1.2-.76 2.99-1.16 4.66-.33 1.39.7 2.53 2.07 2.53 2.49 0 4.4-2.63 4.4-6.4 0-3.35-2.41-5.69-5.85-5.69a6.05 6.05 0 0 0-6.32 6.07c0 1.2.46 2.49 1.04 3.19.12.14.13.26.09.4l-.39 1.59c-.06.25-.21.31-.47.18-1.73-.83-2.81-3.39-2.81-5.44 0-4.41 3.2-8.47 9.25-8.47 4.85 0 8.63 3.46 8.63 8.09 0 4.83-3.04 8.71-7.26 8.71-1.42 0-2.75-.74-3.2-1.61l-.88 3.33a15 15 0 0 1-1.74 3.67A13.97 13.97 0 0 0 30 16.01 14.02 14.02 0 0 0 15.99 2z">
</path>
</svg>
</a>
```
| Remember, the viewBox doesn't have to match your dimensions, so adjust your viewBox - that's what it's there for! (The path data here spans roughly 2 to 30 in user units, so a 24-unit viewBox can't contain it.) The viewBox settings below seem to work. (I scaled up the SVG size and added a 1px red border to show the effect.)
```
svg {
border: 1px solid red;
}
```
```
<svg xmlns="http://www.w3.org/2000/svg" width="240px" height="240px" viewBox="1.6 2 28.73 27.99">
<path d="M15.99 2a13.99 13.99 0 0 0-5.1 27.03c-.13-1.11-.23-2.81.05-4.02l1.64-6.96s-.41-.84-.41-2.07c0-1.95 1.13-3.4 2.53-3.4 1.2 0 1.77.9 1.77 1.97 0 1.2-.76 2.99-1.16 4.66-.33 1.39.7 2.53 2.07 2.53 2.49 0 4.4-2.63 4.4-6.4 0-3.35-2.41-5.69-5.85-5.69a6.05 6.05 0 0 0-6.32 6.07c0 1.2.46 2.49 1.04 3.19.12.14.13.26.09.4l-.39 1.59c-.06.25-.21.31-.47.18-1.73-.83-2.81-3.39-2.81-5.44 0-4.41 3.2-8.47 9.25-8.47 4.85 0 8.63 3.46 8.63 8.09 0 4.83-3.04 8.71-7.26 8.71-1.42 0-2.75-.74-3.2-1.61l-.88 3.33a15 15 0 0 1-1.74 3.67A13.97 13.97 0 0 0 30 16.01 14.02 14.02 0 0 0 15.99 2z">
</path>
</svg>
```
|
dump and load a dill (pickle) in two different files
I think this is fundamental to many people who know how to deal with pickle. However, I still can't get it very right after trying for a few hours. I have the following code:
**In the first file**
```
import pandas as pd
names = ["John", "Mary", "Mary", "Suzanne", "John", "Suzanne"]
scores = [80, 90, 90, 92, 95, 100]
records = pd.DataFrame({"name": names, "score": scores})
means = records.groupby('name').mean()
def name_score_function(record):
    if record in names:
        return(means.loc[record, 'score'])

import dill as pickle
with open('name_model.pkl', 'wb') as file:
    pickle.dump(means, file)
```
**The second file**
I would like to load what I have in the first file and make the score of a person (i.e. John, Mary, Suzanne) callable via a function name\_model(record):
```
import dill as pickle
B = pickle.load('name_model.pkl')
def name_model(record):
    if record in names:
        return(means.loc[record, 'score'])
```
Here it shows the error:
```
File "names.py", line 21, in <module>
B = pickle.load('name_model.pkl')
File "/opt/conda/lib/python2.7/site-packages/dill/dill.py", line 197, in load
pik = Unpickler(file)
File "/opt/conda/lib/python2.7/site-packages/dill/dill.py", line 356, in __init__
StockUnpickler.__init__(self, *args, **kwds)
File "/opt/conda/lib/python2.7/pickle.py", line 847, in __init__
self.readline = file.readline
AttributeError: 'str' object has no attribute 'readline'
```
I know the error comes from my lack of understanding of pickle. I would humbly accept your opinions to improve this code. Thank you!!
**UPDATE**
The more specific thing I would like to achieve:
I would like to be able to use the function that I write in the first file and dump it, and then read it in the second file and be able to use this function to query the mean score of any person in the records.
Here is what I have:
```
import pandas as pd
names = ["John", "Mary", "Mary", "Suzanne", "John", "Suzanne"]
scores = [80, 90, 90, 92, 95, 100]
records = pd.DataFrame({"name": names, "score": scores})
means = records.groupby('name').mean()
def name_score_function(record):
    if record in names:
        return(means.loc[record, 'score'])

B = name_score_function(record)

import dill as pickle
with open('name_model.pkl', 'wb') as file:
    pickle.dump(B, file)

with open('name_model.pkl', 'rb') as file:
    B = pickle.load(file)
def name_model(record):
    return B(record)

print(name_model("John"))
```
As I execute this code, I get this error:
```
File "test.py", line 13, in <module>
    B = name_score_function(record)
NameError: name 'record' is not defined
```
I highly appreciate your assistance and patience.
| Thank you. It looks like the following can solve the problem.
```
import pandas as pd
names = ["John", "Mary", "Mary", "Suzanne", "John", "Suzanne"]
scores = [80, 90, 90, 92, 95, 100]
records = pd.DataFrame({"name": names, "score": scores})
means = records.groupby('name').mean()
import dill as pickle
with open('name_model.pkl', 'wb') as file:
    pickle.dump(means, file)

with open('name_model.pkl', 'rb') as file:
    B = pickle.load(file)

def name_score_function(record):
    if record in B.index:  # use the loaded object, so this works without 'means' or 'names'
        return B.loc[record, 'score']

print(name_score_function("John"))
```
|
extracting data from json string
I am new to using JavaScript as well as JSON.
I need to extract certain sections from the JSON for processing the data.
{
"status": "SUCCESS",
"status_message": "blah blah blah",
"pri_tag": [
{
"tag_id": 1,
"name": "Tag1"
},
{
"tag_id": 2,
"name": "Tag2"
},
{
"tag_id": 3,
"name": "Tag3"
},
{
"tag_id": 4,
"name": "Tag4"
}
]
}
```
From the above json message I need to extract pri\_tag section so that the extracted json should look like below:
```
[
{name:'Tag1', tag_id:1},
{name:'Tag2', tag_id:2},
{name:'Tag3', tag_id:3},
{name:'Tag4', tag_id:4},
{name:'Tag5', tag_id:5},
{name:'Tag6', tag_id:6}
];
```
How to get this done using javascript? Please help me. Thanks in advance.
Thanks friends. I was able to get this working. Thanks once again.
| try this:
```
var data={
"status": "SUCCESS",
"status_message": "blah blah blah",
"pri_tag": [
{
"tag_id": 1,
"name": "Tag1"
},
{
"tag_id": 2,
"name": "Tag2"
},
{
"tag_id": 3,
"name": "Tag3"
},
{
"tag_id": 4,
"name": "Tag4"
}
]
};
```
if you get the data from an Ajax request as a string, you need to parse it like this:
```
var newData=JSON.parse(data).pri_tag;
```
if not, you don't need to parse that:
```
var newData=data.pri_tag;
```
|
CHMOD - Applying Different Permissions For Files vs. Directories
I've been trying to clean up permissions on a few boxes and have been scouring the chmod man page as well as all the internet documentation that I can handle without any luck -- so here we go.
Basically, I've got a directory with many sub directories and files -- and I'd like to set the following permissions:
For directories: 770 (u+rwx, g+rwx, o-rwx)
For files: 660 (U+rw, g+rw, a-x, o-rw)
I'd like to try and do this with a single recursive chmod if possible -- as to avoid recursing through each directory and setting file-by-file permissions.
I imagine there's got to be a way to do this without writing a shell script of my own -- but I haven't been able to find anything.
I appreciate your help!
| I do find a script useful since it's often useful to change both file and directory permissions in one swoop, and they are often linked. 770 and 660 for shared directories on a file server, 755/644 for web server directories, etc. I keep a script w/ the most commonly used mode for that type of server in root's bin/ and just do the find manually when the common mode doesn't apply.
```
#!/bin/sh
# syntax: setperm.s destdir
#
if [ -z "$1" ] ; then echo "Requires single argument: <directoryname>" ; exit 1 ; fi
destdir=$1
dirmode=0770
filemode=0660
YN=no
printf "\nThis will RECURSIVELY change the permissions for this entire branch:\n "
printf "\t$destdir\n"
printf "\tDirectories chmod = $dirmode\tFiles chmod = $filemode\n"
printf "Are you sure want to do this [$YN]? "
read YN
case $YN in
[yY]|[yY][eE][sS])
# change permissions on files and directories.
find "$destdir" -type f -print0 | xargs -0 chmod $filemode
find "$destdir" -type d -print0 | xargs -0 chmod $dirmode ;;
*) echo "\nBetter safe than sorry I always say.\n" ;;
esac
```
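As a side note, a single recursive `chmod` can get close to this by using the symbolic `X` bit, which grants execute only to directories (and to files that already have some execute bit set), so on a tree with no executable files this yields exactly 770/660:
```
chmod -R u=rwX,g=rwX,o= /path/to/dir
```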
|
How to turn Ecto select queries into structs in Phoenix?
I've got two models, Song and Vote, where songs has many votes. I want to select all songs and count the number of votes for each.
The index action in the SongController, generated using the mix gen task, has been modified to this:
```
def index(conn, _params) do
query = from s in Song, select: %{id: s.id, name: s.name, artist: s.artist}
songs = Repo.all(query)
render(conn, "index.html", songs: songs)
end
```
In this case `songs` contains a list of lists. But in the orginal, generated function, `songs = Repo.all(Song)` it is a list of *Song structs*.
This means that the song\_path functions in the template break with the following error message: `maps cannot be converted to_param. A struct was expected, got: %{artist: "Stephen", id: 3, name: "Crossfire"}`
Of course, what I *really* want to do is to somehow add a `num_votes` field to the select statement, and then somehow make a corresponding field to the Song struct?
| First we should add a [virtual field](https://hexdocs.pm/ecto/Ecto.Schema.html#field/3) to the song schema so that it can be used to store the `num_votes` result:
```
defmodule Song do
use Ecto.Schema
schema "songs" do
field :num_votes, :integer, virtual: true
...
end
end
```
Using a combination of [Ecto.Query.select/3](https://hexdocs.pm/ecto/Ecto.Query.html#select/3), [Ecto.Query.join/5](https://hexdocs.pm/ecto/Ecto.Query.html#join/5) and [Ecto.Query.API.count/1](https://hexdocs.pm/ecto/Ecto.Query.API.html#count/1) (plus a `group_by`, so the aggregate is valid SQL) we can add the counts to the map you are using to select from the query:
```
query = from s in Song,
  left_join: v in assoc(s, :votes),
  group_by: s.id,
  select: %{id: s.id, name: s.name, artist: s.artist, num_votes: count(v.id)}
```
We can then use [Kernel.struct](http://elixir-lang.org/docs/v1.2/elixir/Kernel.html#struct/2) to convert each item to a struct:
```
songs =
query
|> Repo.all()
|> Enum.map(fn(song) -> struct(Song, song) end)
```
This returns a list of song structs that can be used in the view.
|
Android Timeout webview if it takes longer than a certain time to load
I want to time out my webview if it takes a long time to load, showing an error message. I'm using `setWebViewClient` because I need to use `public void onReceivedSslError (WebView view, SslErrorHandler handler, SslError error)`.
I've been looking around and saw I can use the method `onProgressChanged(WebView view, int newProgress)`. Now I can't use this method in `setWebViewClient` and can't figure out how to go about solving this problem. Another issue I have is that the progress bar never goes away once the page is loaded, I can't add a breakpoint to the method `public void onPageFinished(WebView view, String url)` either.
**Web View Settings Method:**
```
public void WebViewSettings(){
webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(true);
webView.getSettings().setBuiltInZoomControls(true);
webView.getSettings().setSupportZoom(true);
webView.getSettings().setLoadWithOverviewMode(true);
webView.canGoBack();
webView.setWebViewClient(new WebViewClient(){
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
if (Uri.parse(url).getHost().equals(urlString)) {
// This is my web site, so do not override; let my WebView load the page
return false;
}
// Otherwise, the link is not for a page on my site, so launch another Activity that handles URLs
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
startActivity(intent);
return true;
}
@Override
public void onLoadResource(WebView view, String url) {
// Check to see if there is a progress dialog
if (progressDialog == null) {
progressDialog = new ProgressDialog(context);
progressDialog.setTitle("Loading...");
progressDialog.setMessage("Please wait.");
//progressDialog.setCancelable(false);
progressDialog.setIndeterminate(true);
progressDialog.show();
}
}
@Override
public void onPageFinished(WebView view, String url) {
// Page is done loading;
// hide the progress dialog and show the webview
if (progressDialog.isShowing()) {
progressDialog.dismiss();
progressDialog = null;
webView.setEnabled(true);
}
}
@Override
public void onReceivedSslError (WebView view, SslErrorHandler handler, SslError error) {
handler.proceed();
}
@Override
public void onReceivedError(WebView view, int errorCod,String description, String failingUrl) {
Toast.makeText(context, "Your Internet Connection May not be active Or " + description , Toast.LENGTH_LONG).show();
}
});
}
```
So the issues I have are: the progress bar doesn't get removed once the web page has loaded, and I need to time out the webview if it takes over a certain amount of time to load. It looks like the progress bar is shown, then it disappears like it should, then starts loading again and won't stop. Thanks
| I have updated your webview settings method; please find the updated code below.
I have added a `loaderTask` class (a `TimerTask`, which requires the `java.util.Timer` and `java.util.TimerTask` imports), added code for the timer, and put in comments for better understanding. Please adapt this code as you need.
```
private boolean isPageLoadedComplete = false; //declare at class level
public void WebViewSettings(){
webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(true);
webView.getSettings().setBuiltInZoomControls(true);
webView.getSettings().setSupportZoom(true);
webView.getSettings().setLoadWithOverviewMode(true);
webView.canGoBack();
/**
 * Timeout logic added here.
 */
Timer myTimer = new Timer();
// Start this timer when you create your task
myTimer.schedule(new loaderTask(), 3000); // 3000 is the delay in milliseconds
webView.setWebViewClient(new WebViewClient(){
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
if (Uri.parse(url).getHost().equals(urlString)) {
// This is my web site, so do not override; let my WebView load the page
return false;
}
// Otherwise, the link is not for a page on my site, so launch another Activity that handles URLs
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
startActivity(intent);
return true;
}
@Override
public void onLoadResource(WebView view, String url) {
// Check to see if there is a progress dialog
if (progressDialog == null) {
progressDialog = new ProgressDialog(context);
progressDialog.setTitle("Loading...");
progressDialog.setMessage("Please wait.");
//progressDialog.setCancelable(false);
progressDialog.setIndeterminate(true);
progressDialog.show();
}
}
@Override
public void onPageFinished(WebView view, String url) {
isPageLoadedComplete = true;
// Page is done loading;
// hide the progress dialog and show the webview
if (progressDialog.isShowing()) {
progressDialog.dismiss();
progressDialog = null;
webView.setEnabled(true);
}
}
@Override
public void onReceivedSslError (WebView view, SslErrorHandler handler, SslError error) {
handler.proceed();
}
@Override
public void onReceivedError(WebView view, int errorCod,String description, String failingUrl) {
Toast.makeText(context, "Your Internet Connection May not be active Or " + description , Toast.LENGTH_LONG).show();
}
});
}
/**
*This class is invoke when you times up.
*/
class loaderTask extends TimerTask {
public void run() {
System.out.println("Times Up");
if(isPageLoadedComplete){
}else{
if (progressDialog.isShowing()) {
progressDialog.dismiss();
progressDialog = null;
webView.setEnabled(true);
}
//show error message as per you need.
}
}
}
```
|
How do intellisense and autocompletions work?
I've been wondering for a while: how do autocompletions work?
**Example:**
In PhpStorm whenever I use a class and type `->` it shows me all properties and methods of that class. It also auto completes namespaces and even functions inside libraries such as jQuery.
Does it run some sort of regex on the files, or does it parse them somehow?
| PhpStorm developer here. I'd like to go through some basics in case it may be helpful for those who want to implement their own plugin.
First of all, a code has to be broken into tokens using a [lexer](http://www.jetbrains.org/intellij/sdk/docs/reference_guide/custom_language_support/implementing_lexer.html). Then AST (an abstract syntax tree) and PSI (a program structure interface) are built using a [parser](http://www.jetbrains.org/intellij/sdk/docs/reference_guide/custom_language_support/implementing_parser_and_psi.html). PhpStorm has its own implementations of the lexer and the parser. This is how a PSI tree looks like for a simple class.
[](https://i.stack.imgur.com/5eMpV.png)
When you type in an editor or explicitly invoke a completion action (`Ctrl`+`Space`) a number of [completion contributors](http://www.jetbrains.org/intellij/sdk/docs/reference_guide/custom_language_support/code_completion.html) are invoked. They're intended to return a list of suggestions based on a cursor's position.
Let's consider a case when completion is invoked inside a field reference.
[](https://i.stack.imgur.com/Zgr2G.png)
PhpStorm knows that at the current position all class members can be suggested. It starts by obtaining a class reference (the `$class` variable in our case) and determining its type. If a variable [resolves](http://www.jetbrains.org/intellij/sdk/docs/reference_guide/custom_language_support/references_and_resolve.html) to a class, its type is the class's FQN (fully qualified name).
To obtain methods and fields of a class its PSI element is needed. A special index is used to map an FQN to an appropriate `PhpClass` tree element. Indices are initially built when a project is opened for the first time and updated for each modified file.
PhpStorm collects all members from the PSI element (including parent's ones), then from its traits. They're filtered depending on a current context (e.g. access scope) and an already typed name's part (`f`).
[](https://i.stack.imgur.com/pangw.png)
Suggestions are shown in a list which is sorted by how well the element's name matches, its type, its position and so on. The list rearranges as you type.
When you press `Enter` to insert an element PhpStorm invokes one more handler. It knows how to properly insert the element into a code. For instance, it can add parentheses for a method or import a class reference. In our case, it's enough to put brackets and place a cursor just after them because the method has no parameters.
[](https://i.stack.imgur.com/f43YJ.png)
That's basically it. It's worth mentioning that the IntelliJ IDEA platform allows a plugin to provide an implementation for each step described above, so completion can be improved or extended for a particular framework or language.
|
Why is setting an app variable a 'bad practice' in angularJS?
I've looked at a number of angularJS tutorials and style guides and have found comments like this ([from Todd Motto](http://toddmotto.com/opinionated-angular-js-styleguide-for-teams/))
```
Bad:
var app = angular.module('app', []);
app.controller();
app.factory();
Good:
angular
.module('app', [])
.controller()
.factory();
```
I first learned the "Bad" technique by example and have since seen a couple of reference (other than this one) that say the "Bad" technique is ...well Bad.
Nobody so far in my searches says WHY it is bad?
**Edit:** *Why is this question different?*
While the differences are subtle between this and the proposed duplicate question, there are two important differences:
1. 'What is the best practice?' is not the same as 'Why is it bad?'...while the accepted answer to the other question elaborates on 'Why', the two questions having the same answer is not sufficient for this one to be branded a duplicate.
2. A vigorous search, using the exact text that I placed as the title to this question, did not reveal the proposed duplicate. Perhaps SE should consider allowing "optional titles" to be added to a question to enhance searchability...but that feature is not in place, and someone else asking the same question as mine will still not find the other question.
| Global variables in general tend to be considered bad practice, although `angular` itself is a global variable so I think that it's honestly not *that* big of a deal as long as you are consistent.
Problem can arise if you do something like this accidentally:
```
app = angular.module("app");
// some other file
app = somethingNotAnAngularModule();
```
External libraries might overwrite the variable `app`, etc. etc.
Instead of using the name `app`, you could also use a name that is specific to your app...
```
dustrModule = angular.module("dustr", []);
```
---
Chaining is one thing, but if you are splitting up components into separate files you can always *get* the module with `.module`
```
// app.js
angular.module("app", []);
// LoginCtrl.js
angular.module("app").controller("LoginCtrl", LoginCtrl);
```
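For what it's worth, the style guides the question references avoid module variables entirely by wrapping each file in an IIFE; a sketch:
```
(function () {
  'use strict';

  angular
    .module('app')
    .controller('LoginCtrl', LoginCtrl);

  function LoginCtrl() { /* ... */ }
})();
```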
|
Azure Cosmos db Gremlin elementMap()
I'm trying to create a Gremlin query in Cosmos DB where the properties of all vertices are flattened.
The best I have achieved is using "valueMap".
**Query**
```
g.V('12345').valueMap(true))
```
**Result**
```
{
"id": "12345",
"label": "product",
"name": [
"product name"
],
"description": [
"productdescription"
],
}
```
**What i am trying to achieve**
```
{
"id": "12345",
"label": "product",
"name": "product name",
"description": "productdescription"
}
```
It looks like `elementMap` is the right way to go, but it doesn't seem to be supported in Cosmos DB.
Is there a reason why this is not supported or does a similar solution exist?
| CosmosDB tends to be a bit behind in supporting all aspects of the Gremlin language. There are workarounds. Prior to `elementMap()` the typical pattern was to use a `by()` modulator to `valueMap()` to `unfold()` the lists:
```
gremlin> g.V().valueMap(true).by(unfold())
==>[id:1,label:person,name:marko,age:29]
==>[id:2,label:person,name:vadas,age:27]
==>[id:3,label:software,name:lop,lang:java]
==>[id:4,label:person,name:josh,age:32]
==>[id:5,label:software,name:ripple,lang:java]
==>[id:6,label:person,name:peter,age:35]
```
I don't know if CosmosDB supports that particular `by()` modulator though. If it does not then it gets a bit ugly:
```
gremlin> g.V().map(valueMap(true).unfold().group().by(keys).by(select(values).unfold()))
==>[id:1,label:person,name:marko,age:29]
==>[id:2,label:person,name:vadas,age:27]
==>[id:3,label:software,name:lop,lang:java]
==>[id:4,label:person,name:josh,age:32]
==>[id:5,label:software,name:ripple,lang:java]
==>[id:6,label:person,name:peter,age:35]
```
or perhaps:
```
gremlin> g.V().map(valueMap(true).unfold().group().by(keys).by(select(values).limit(local,1)))
==>[id:1,label:person,name:marko,age:29]
==>[id:2,label:person,name:vadas,age:27]
==>[id:3,label:software,name:lop,lang:java]
==>[id:4,label:person,name:josh,age:32]
==>[id:5,label:software,name:ripple,lang:java]
==>[id:6,label:person,name:peter,age:35]
```
|
c++ abstract class with nested class. derived class and nested class
I have an assignment to write my own containers `Linked_list` and `Array_list`. I have one interface for them:
```
typedef int value_type;
class Container
{
public:
class Iterator
{
public:
Iterator();
Iterator(value_type* other);
Iterator(const Iterator& other);
Iterator& operator=(const Iterator& other);
...
};
Container();
Container(const Container& other);
~Container();
virtual value_type& front() const=0;
virtual value_type& back() const=0;
virtual Iterator begin() const=0; //
...
};
```
I did derived classes Linked\_list and Array\_list:
```
class Linked_list:public Container
{
public:
long int cur_size;
List elem;
static Link end_;
class Iterator: public Container::Iterator
{
friend Linked_list;
Link *p;
};
Iterator begin() const; //overriding virtual function return type differs ...
...
}
```
I think it's all wrong. Should the nested class `Linked_list::Iterator` be a derived class?
Is it possible to do this if I can't change the interface?
| Taking into account your design constraints *that you cannot use templates*, one thing should change: add an interface `IteratorImpl`. Thus you can make `class Iterator` from the base `class Container` **non virtual**. It needs to be non-virtual since STL-alike iterators should have *value semantics*. See the [pimpl idiom](http://en.wikibooks.org/wiki/C++_Programming/Idioms#Pointer_To_Implementation_.28pImpl.29) for more details on how it works!
Like this:
```
typedef int value_type;
class Container
{
protected:
class IteratorImpl
{
public:
virtual ~IteratorImpl() = default; // implementations are deleted via this base pointer
virtual void next() = 0;
virtual IteratorImpl* clone() const = 0;
virtual value_type& get() const = 0; // a reference, so operator-> has a stable address
virtual bool isEqual(const IteratorImpl& other) const = 0;
};
public:
class Iterator
{
public:
Iterator(IteratorImpl* impl) : impl(impl) {}
~Iterator() { delete impl; }
Iterator(const Iterator& other) : impl(other.impl->clone()) {}
Iterator& operator=(const Iterator& other) {
IteratorImpl* oldImpl = impl;
impl = other.impl->clone();
delete oldImpl;
return *this;
}
bool operator == (const Iterator& other) const
{
return impl->isEqual(*other->impl);
}
Iterator& operator ++ ()
{
impl->next();
return *this;
}
value_type& operator*() const
{
return impl->get();
}
value_type* operator->() const
{
return &impl->get();
}
private:
IteratorImpl* impl; // owned; cloned on copy, deleted in the destructor
};
Container();
Container(const Container& other);
~Container();
virtual value_type& front() const=0;
virtual value_type& back() const=0;
virtual Iterator begin() const=0; //
...
};
```
Then in your derived just implement IteratorImpl:
```
class Linked_list:public Container
{
protected:
class IteratorImpl: public Container::IteratorImpl
{
....
};
public:
Iterator begin() const { return new IteratorImpl(firstNode); }
Iterator end() const { return new IteratorImpl(nodeAfterLastNode); }
...
};
```
These firstNode and nodeAfterLastNode are just my guess - use whatever you need to implement the IteratorImpl interface...
|
Getting correct Image rotation
I have a simple problem: When I load an image to a windows form `PictureBox` some pictures are rotated and others are not.
Basically, a user selects a picture with an `OpenFileDialog` and when the picture is selected:
```
private void OpenFD_FileOk(object sender, CancelEventArgs e)
{
Image image = Image.FromFile(openFD.FileName);
PB_profile.Image = image;
}
```
And yes I checked the original image rotation
EDIT:
I changed the `PictureBox` property `SizeMode` to `StretchImage`
| **If** the pictures contains [exif data](https://msdn.microsoft.com/en-us/library/ms534416.aspx?f=255&MSPPError=-2147217396) the [`PropertyItems`](https://msdn.microsoft.com/en-us/library/ms534413.aspx?f=255&MSPPError=-2147217396) should include the **orientation** tag.
It encodes the rotation/flipping necessary to display the image correctly:
>
> PropertyTagOrientation
>
>
> Image orientation viewed in terms of rows and columns.
>
>
> Tag 0x0112
>
>
> 1 - The 0th row is at the top of the
> visual image, and the 0th column is the visual left side.
>
> 2 - The 0th
> row is at the visual top of the image, and the 0th column is the
> visual right side.
>
> 3 - The 0th row is at the visual bottom of the
> image, and the 0th column is the visual right side.
>
> 4 - The 0th row
> is at the visual bottom of the image, and the 0th column is the visual
> left side.
>
> 5 - The 0th row is the visual left side of the image, and
> the 0th column is the visual top.
>
> 6 - The 0th row is the visual right
> side of the image, and the 0th column is the visual top.
>
> 7 - The 0th
> row is the visual right side of the image, and the 0th column is the
> visual bottom.
>
> 8 - The 0th row is the visual left side of the image,
> and the 0th column is the visual bottom.
>
>
>
Here is a function to retrieve a `PropertyItem`:
```
PropertyItem getPropertyItemByID(Image img, int Id)
{
return
img.PropertyItems.Select(x => x).FirstOrDefault(x => x.Id == Id);
}
```
Here is an example of using the GDI+ `RotateFlip` method to adjust an image on the fly:
```
void Rotate(Bitmap bmp)
{
PropertyItem pi = bmp.PropertyItems.Select(x => x)
.FirstOrDefault(x => x.Id == 0x0112);
if (pi == null) return;
byte o = pi.Value[0];
if (o==2) bmp.RotateFlip(RotateFlipType.RotateNoneFlipX);
if (o==3) bmp.RotateFlip(RotateFlipType.RotateNoneFlipXY);
if (o==4) bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
if (o==5) bmp.RotateFlip(RotateFlipType.Rotate90FlipX);
if (o==6) bmp.RotateFlip(RotateFlipType.Rotate90FlipNone);
if (o==7) bmp.RotateFlip(RotateFlipType.Rotate90FlipY);
if (o==8) bmp.RotateFlip(RotateFlipType.Rotate90FlipXY);
}
```
It changes the image to the correctly rotated version.
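Applied to the question's handler, usage might look like this (a sketch reusing the question's `openFD` and `PB_profile`):
```
Bitmap bmp = new Bitmap(openFD.FileName);
Rotate(bmp);               // applies the EXIF orientation in place
PB_profile.Image = bmp;
```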
I have tested to values with [this nice set of sample images](http://www.galloway.me.uk/2012/01/uiimageorientation-exif-orientation-sample-images/).
**Note**: The code will only work if the images actually contain the orientation tag. If they don't, maybe because they are scans, then it will do **nothing**.
**Note 2** You wrote *I checked the original image rotation.* This is not so simple: The explorer will display the images already rotated, so here they all look right and even inspecting the properties doesn't reveal the orientation!
Usually, when no exif data are present, the `PropertyTagOrientation` tag **is** present but only has the default value of `1`..
**Update:**
If the image **doesn't** have the `PropertyTagOrientation` here is how you can add one:
```
using System.Runtime.Serialization;
..
pi = (PropertyItem)FormatterServices
.GetUninitializedObject(typeof(PropertyItem));
pi.Id = 0x0112; // orientation
pi.Len = 2;
pi.Type = 3;
pi.Value = new byte[2] { 1, 0 };
pi.Value[0] = yourOrientationByte;
yourImage.SetPropertyItem(pi);
```
Kudos to @ne1410s's excellent [answer here!](https://stackoverflow.com/questions/18820525/how-to-get-and-set-propertyitems-for-an-image/25162782#25162782).
Note that adding `PropertyItems` to an image does not add exif data; the two are different tag sets!
|
Gnome3+: How do I remove favorites from Dash via terminal?
I'm guessing I need to edit one of the schemas available in `gsettings`, but I don't know which one, and when I listed all the schemas, there were just too many of them.
| The key you want is `favorite-apps`, the schema ID is `org.gnome.shell`. Now to list your favorite apps you can simply run
```
gsettings get org.gnome.shell favorite-apps
```
or
```
dconf read /org/gnome/shell/favorite-apps
```
These will return an array of strings e.g.
```
['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.Nautilus.desktop', 'org.gnome.gedit.desktop', 'gnome-calculator.desktop']
```
Now, to remove a value from that array you could use text processing tools like `sed`/`awk` to check if an item is in that list and remove it keeping the same format (not that trivial though definitely doable) and once you get it right just write the new settings to the database e.g. assuming you wanted to remove `org.gnome.Nautilus.desktop` you would run (note the double quotes):
```
gsettings set org.gnome.shell favorite-apps "['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.gedit.desktop', 'gnome-calculator.desktop']"
```
or
```
dconf write /org/gnome/shell/favorite-apps "['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.gedit.desktop', 'gnome-calculator.desktop']"
```
Still, it's easier to write your own utility ([using `gsettings` API](https://developer.gnome.org/gio/stable/GSettings.html)) that will accept one or more desktop file names as positional parameters and remove them from favorites; to get you started, here is a very basic example in `python` that accepts one param (run as `script.py firefox.desktop`):
```
#!/usr/bin/env python
from sys import argv
from gi.repository import Gio, GLib

item = argv[1]                      # desktop file name passed as the only argument
gschema = Gio.Settings.new('org.gnome.shell')
gvalues = gschema.get_value('favorite-apps').unpack()
if item in gvalues:
    gvalues.remove(item)
gschema.set_value('favorite-apps', GLib.Variant('as', gvalues))
Gio.Settings.sync()                 # flush the change before the script exits
```
|
Is there a way to use C++ preprocessor stringification on variadic macro arguments?
My guess is the answer to this question is no, but it would be awesome if there was a way. To clarify, assume I have the following macro:
```
#define MY_VARIADIC_MACRO(X...) // Does some stuff here in the macro definition
```
What I would like to do is somehow perform stringification on all the variables of X before passing it to a variadic function; the keyword here is before. I realize there's no way to really access the individual arguments from within the macro definition, but is there a way to stringify all the arguments, with maybe something like the following?
```
#define MY_VARIADIC_MACRO(X...) some_variadic_function("some string", #X)
```
| Okay, I didn't mean to answer my own question here, but I've come up with a decent solution that is somewhat of a combination of Mark Wilkins's answer and the example I gave in the question.
It is possible to stringify the entire variadic set, which then includes the delimiting commas in the string. Here's a quick example:
```
#define MY_VARIADIC_MACRO(X...) printf("%s", #X)  // "%s" guards against stray '%' in the stringified text
```
Using the above macro shows you that the entire set of arguments passed to the macro gets stringified.
You can then define a function to tokenize these arguments on the delimiting commas, thereby getting the set of tokenized strings out of the variadic macro:
```
#define MY_VARIADIC_MACRO(X...) tokenize_my_arguments(#X)
```
Then there's no longer any dependency on having the variadic macro call a variadic function, and I can iterate nicely through my array of constant C strings rather than iterating through `va_arg`.
**New Stuff from Edit Follows**
Per Tim's comment, here are the details of the solution. Please forgive any errors, since it was done in haste and I had to port it from what I'm working on. Also, it's not meant to be a copy/paste solution, since it only outputs the stringification of the arguments to demonstrate the proof of concept, but it should be sufficient to demonstrate the functionality.
Although this solution requires some run-time computation, variadic macros oftentimes call variadic functions that require iterating through `va_args` anyway; here the iteration takes place in finding the tokens instead, so only a bit of performance is sacrificed. However, for maintainability, versatility, and ease of implementation, this seems to be the best option at the moment:
```
#include <cstring>
#include <iostream>

// Usage: VARIADIC_STRINGIFY(a, b, c) prints each argument's text on its own line.
#define VARIADIC_STRINGIFY(_ARGUMENTS_TO_STRINGIFY...) Variadic_Stringification_Without_Variadic_Function(#_ARGUMENTS_TO_STRINGIFY)

void Variadic_Stringification_Without_Variadic_Function (const char* _stringified_arguments)
{
    char converted_arguments[1024];  // strtok modifies its input, so copy into a writable buffer
    strcpy(converted_arguments, _stringified_arguments);
    for(char* token = strtok(converted_arguments, ","); token != 0x0; token = strtok(0x0, ","))
        std::cout << token << std::endl;
}
```
|
When is it worth NOT using a Factory?
I'm employing TDD quite a bit these days and really enjoying myself - everything seems to flow better and be naturally better constructed and organized. However, while writing a bit of IO code utilizing `System.IO.Stream`s, I found myself wondering - when is it ever worth not using a factory? Because in the `Stream` case, it certainly seems better not to use one.
Generally, for more complex types that I have defined, such as a class that controls the authentication of something and interfaces with a database, you would probably use a dependency injection container and have this resolved at runtime without ever needing to *actually* create one.
However, in some circumstances, when you need to create a lot of these instances, you would create a Factory to construct that type - such as:
```
class Foo
{
    public Foo(Bar bar, Foobar foobar, Fuzz fuzz) { /* ... */ }
}
```
Here, because you don't want to expose to the clients that need this object how it is directly created, you expose a Factory that will create instances and inject that factory into the client instead. This has the benefit of allowing you to replace the factory with something else at any time, and it is generally used for creating instances of types that have derived types.
```
class FooFactory
{
    public Foo Create(Bar bar, Foobar foobar, Fuzz fuzz) { /* ... */ }
}
```
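To make that concrete, here is a hedged C# sketch of how such a factory might be injected and used; the `Client` type and its members are illustrative, not part of the original example:
```
// The client creates Foos on demand through the injected factory,
// without knowing how a Foo is wired together internally.
class Client
{
    private readonly FooFactory _factory;

    public Client(FooFactory factory)
    {
        _factory = factory;
    }

    public void DoWork(Bar bar, Foobar foobar, Fuzz fuzz)
    {
        Foo foo = _factory.Create(bar, foobar, fuzz);
        // ... use foo ...
    }
}
```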
However, back to my `Stream` point - would it even be worth creating a Factory for a decorator around a `Stream`? For example, in my project I have `BinaryDataStream`, which reads my data from a `Stream`. The binary data is in a custom format, and the class takes a `Stream` argument in its constructor. Using `new` seems to violate everything I've learned since I started using TDD, **because I am forcing the dependent type to know explicitly how to obtain its collaborator.** However, a factory seems like overkill.
Thoughts?
EDIT: I think I need to clarify. I don't mean avoiding new() all the time, as obviously it has its use in tying up dependencies and the like. **However**, I meant the following situation (copy-pasta from comments):
>
> What I meant was that I have been told it is intrinsically evil to
> use the new() operator to create a collaborator from inside the type
> that uses the collaborator, and that it is better to either a) pass an
> injected type or b) pass a factory (if you need to create an unknown
> number of those types). By using new() inside the consuming type you
> tie how to create that object and where to create it directly to the
> thing that requires it; it is better to inject a preconfigured
> instance into the constructor if you need one instance, or to inject
> a factory into the constructor if you need to create multiple or an
> undetermined number of instances.
>
>
>
|
>
> Here, because you don't want to expose how to directly create this object to the clients that need it, you expose a Factory that will create them and inject that into the client instead.
>
>
>
1. If clients know the object exists, it's usually fine for them to know how to make it. If you have an *interface* then maybe you need some factory to make the concrete instance, but for TDD you're just going to mock it anyways.
2. Your example factory is just a passthru. Your client already knows everything needed to build the object, so replacing the factory is just as hard as replacing the constructor call.
>
> When is it worth NOT using a Factory?
>
>
>
This is the wrong mode of thought.
You should default to not using factories. Factories are only necessary when you need to abstract away object **creation**. Since you should (in general) limit the creation of objects that require such abstraction, you should use factories sparingly. Instead, most of your objects will just take some interface and not care where it came from.
Only add complexity when it is necessary.
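As a hedged illustration of that default - every name below is invented for the example:
```
// The object takes the dependency it needs and stays ignorant
// of how or where that dependency was created; no factory involved.
class ReportReader
{
    private readonly Stream _source;

    public ReportReader(Stream source)
    {
        _source = source;
    }
}

// The caller (or composition root) decides construction:
var reader = new ReportReader(File.OpenRead("data.bin"));
```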
|
GuidAttribute & ProgId, what are they for?
So I am creating an outlook plugin using NetOffice.
On the plugin entry point it has something like this:
`[GuidAttribute("d7066ab2-ac03-431a-bea5-b70d3efab2a5"), ProgId("OutlookPlugin"), ComVisible(true)]`
Now I understand that the `ComVisible` bit sets the library as, well, ComVisible. I assume this is so that I can make individual classes ComVisible rather than the whole library via Assembly Information -> Make assembly COM-Visible.
But I don't understand what the `GuidAttribute` and `ProgId` are used for?
| One important feature of COM is that an application can ask for the class object to be created and COM sorts out what executable implements it and loads it for you. This requires a good way to identify the component.
You'd say: "well, not a problem, just give it a name". Problem is, people are not very good at picking good names. There are a wholeheckofalot of guys called "Hans" and I know of at least one other guy that has my exact name. Lives somewhere in the Netherlands, don't know who he is.
That's a problem: unlike people's names, component name collisions are deadly. You'll get the completely wrong component loaded and your program will crash. So the COM designers decided that the only good solution is a Globally Unique ID, a number that's guaranteed to be unique throughout the known Universe, and beyond. A GUID.
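As an aside, .NET can mint such an identifier for you; a trivial sketch, assuming you just want a fresh value to paste into a `[Guid("...")]` attribute:
```
// Generate a new globally unique identifier.
Console.WriteLine(Guid.NewGuid().ToString());
// prints something like: 1b671a64-40d5-491e-99b0-da01ff1f3341
```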
A COM application uses that number to ask for the object to be created. The underlying api function is [CoCreateInstance](http://msdn.microsoft.com/en-us/library/windows/desktop/ms686615%28v=vs.85%29.aspx), the first argument is the CLSID which is the guid that identifies the class.
People are, however, not very good at remembering very long numbers. So there's a back-up way to identify a component; it is used in scripting languages in particular, the kind of runtime environment where getting that guid value in a reliable way is not so easy. So there's still a name attached to the number: the ProgId. You pass it to a helper function that's typically named CreateObject(). That makes one extra step: it uses the [CLSIDFromProgID()](http://msdn.microsoft.com/en-us/library/windows/desktop/ms688386%28v=vs.85%29.aspx) helper function to map the name to the number, then calls CoCreateInstance. Needless to say, this can and does go wrong sometimes.
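To make the round trip concrete, here is a hedged C# sketch of the late-bound path a ProgId enables; the ProgId string comes from the question above, the rest is illustrative:
```
// Managed equivalent of CLSIDFromProgID + CoCreateInstance:
// map the ProgId to its CLSID via the registry, then create the object.
Type comType = Type.GetTypeFromProgID("OutlookPlugin");
if (comType != null)
{
    object instance = Activator.CreateInstance(comType);
    // ... drive the COM object via a dispatch interface or reflection ...
}
```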
|