How to set default app launcher programmatically?
I am creating a launcher (kiosk) app that will be downloadable through google. When first installing this application the user has the ability of choosing which launcher (mine or the stock) will be the default. I am trying to bring this up manually if the user does not make my application the default launcher. I want the user to be forced into selecting ALWAYS instead of JUST ONCE when that dialog comes up, otherwise the dialog will continue to appear periodically with a friendly message. This is what I have attempted so far.
I created a method to check for if my application is the default
```
/**
 * method checks to see if app is currently set as default launcher
 * @return boolean true means currently set as default, otherwise false
 */
private boolean isMyAppLauncherDefault() {
    final IntentFilter filter = new IntentFilter(Intent.ACTION_MAIN);
    filter.addCategory(Intent.CATEGORY_HOME);

    List<IntentFilter> filters = new ArrayList<IntentFilter>();
    filters.add(filter);

    final String myPackageName = getPackageName();
    List<ComponentName> activities = new ArrayList<ComponentName>();
    final PackageManager packageManager = (PackageManager) getPackageManager();

    packageManager.getPreferredActivities(filters, activities, null);
    for (ComponentName activity : activities) {
        if (myPackageName.equals(activity.getPackageName())) {
            return true;
        }
    }
    return false;
}
```
Then I make the attempt of launching the chooser
```
/**
 * method starts an intent that will bring up a prompt for the user
 * to select their default launcher. It comes up each time it is
 * detected that our app is not the default launcher
 */
private void launchAppChooser() {
    Log.d(TAG, "launchAppChooser()");
    Intent intent = new Intent(Intent.ACTION_MAIN);
    intent.addCategory(Intent.CATEGORY_HOME);
    intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    startActivity(intent);
}
```
When I do this I am not receiving the choice between my app and the stock launcher. I tried using `startActivity(Intent.createChooser(intent, "Please set launcher settings to ALWAYS"));` and I get the choices between my app and the stock launcher, however, I don't get the options ALWAYS or JUST ONCE.
I can create a custom dialog for this instead of launching chooser but I need to know how to set the default app launcher programmatically. Thanks in advance!
| This is actually possible with a little workaround:
Create an empty `Activity` that acts as a launcher called `FakeLauncherActivity`. Add it to your manifest as a **disabled** component:
```
<activity
    android:name="com.path.to.your.FakeLauncherActivity"
    android:enabled="false">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.HOME" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```
Check whether your desired launcher activity is the default one (with the `isMyAppLauncherDefault()` from your question).
If not, offer the user to choose the preferred launcher activity like this:
```
public static void resetPreferredLauncherAndOpenChooser(Context context) {
    PackageManager packageManager = context.getPackageManager();
    ComponentName componentName = new ComponentName(context, com.path.to.your.FakeLauncherActivity.class);
    packageManager.setComponentEnabledSetting(componentName,
            PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
            PackageManager.DONT_KILL_APP);

    Intent selector = new Intent(Intent.ACTION_MAIN);
    selector.addCategory(Intent.CATEGORY_HOME);
    selector.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    context.startActivity(selector);

    packageManager.setComponentEnabledSetting(componentName,
            PackageManager.COMPONENT_ENABLED_STATE_DEFAULT,
            PackageManager.DONT_KILL_APP);
}
```
This method temporarily enables `FakeLauncherActivity`, which leads to a change in the set of available launcher activities, which forces Android to forget its default launcher. You will see something like...
```
521-735/system_process I/PackageManager﹕ Result set changed, dropping preferred activity for Intent { act=android.intent.action.MAIN cat=[android.intent.category.HOME] flg=0x10000000 } type null
```
... in your log.
The method then simply opens a launcher intent where you can see all installed launchers and the buttons "Always" / "Just once".
Finally, the method disables `FakeLauncherActivity` again so that it doesn't display in the list.
You could repeat that as often as you want and only let the user proceed if your desired launcher activity is set as default.
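For completeness, here is a minimal sketch (not from the original answer; the `Toast` message and the exact place where you call it are assumptions) of how the check and the reset could be wired together inside the kiosk activity:
```
@Override
protected void onResume() {
    super.onResume();
    // If another launcher is still the default, nag the user and reopen the chooser.
    if (!isMyAppLauncherDefault()) {
        Toast.makeText(this, "Please set launcher settings to ALWAYS", Toast.LENGTH_LONG).show();
        resetPreferredLauncherAndOpenChooser(this);
    }
}
```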
|
What is the most efficient method for adding/removing classes from DOM elements using Javascript?
I'm iterating over a large number of dom elements in Javascript, and I'd like to add/remove classes as appropriate.
What's the most efficient add/remove class operation I can use?
| - Small class strings: [JSPerf](http://jsperf.com/best-way-to-remove-class).
- Bigger class strings: [JSPerf](http://jsperf.com/best-way-to-remove-class/2).
- Bigger class strings, repeated class names: [JSPerf](http://jsperf.com/best-way-to-remove-class/3).
## Remove
The fastest reliable way (small/medium size):
```
var clss = 'classToRemove';
elem.className = (' '+elem.className+' ').split(' ' + clss + ' ').join(' ');
```
If you know for sure that the string does not contain multiple occurrences of the same class name, you can better use the `string.replace` method (any size):
```
var clss = 'classToRemove';
elem.className = (' '+elem.className+' ').replace(' ' + clss + ' ', ' ');
```
The other alternatives:
- Using a RegExp in combination with replace - Slow in all cases
- Using `indexOf` to find the position and string concatenation (`substring`) to remove the class, repeating this in a loop to remove all possible duplicates; a rough sketch follows below.
**Slowest** for large strings **with repetition**, neither slow nor fast in other cases.
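As a rough illustration of that `indexOf` approach (only a sketch; as noted above, it is not the fast path):
```
function removeClass(elem, clss) {
    var padded = ' ' + elem.className + ' ';
    var needle = ' ' + clss + ' ';
    var pos = padded.indexOf(needle);
    // repeat until no occurrence of the class name is left
    while (pos > -1) {
        padded = padded.substring(0, pos) + ' ' + padded.substring(pos + needle.length);
        pos = padded.indexOf(needle);
    }
    elem.className = padded.replace(/^\s+|\s+$/g, '');
}
```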
## Add (without doubt):
```
var clss = 'classToAdd';
element.className += ' ' + clss;
```
|
glassfish v3 vs tomcat 7
Which one do you recommend for web and why?
| I don't recommend any particular one. If you want just JSP/Servlet support, both suffice. If you want more than that (e.g. *anything* provided by the [Java EE API](http://download.oracle.com/javaee/6/tutorial/doc/), which is *much more* than JSP/Servlet alone), then Tomcat simply doesn't suffice without manually adding a bunch of components on top of it to comply with the complete Java EE API, like [JBoss AS](http://www.jboss.org/jbossas/) is doing.
In (Eclipse) development terms, Tomcat is a quick starter and restarter, taking about 3 seconds. Glassfish is a slow starter, taking 5~10 seconds for a simple webapp, but it is an extremely fast hot-deployer (via [this Glassfish Eclipse plugin](http://glassfishplugins.java.net/)). Hot deployment happens in under a second, while Tomcat usually takes 3 seconds for this (yes, Tomcat is then basically restarting itself).
|
Sharing JS variables in multiple `<script>` blocks
I'm working on a CodeIgniter application.
I have a view, let's call it Calendar, which has a JS/jQuery `<script>` block in it. Looks like this:
```
$(document).ready(function() {
    $("#day_list").fadeIn(600);

    // init
    var current_month = <?= $init['current_month']; ?>;
    var current_year = <?= $init['current_year'] ?>;

    // previous, next month ajax call
    $(".previous, .next").click(function(event) {
        // do stuff to these variables, then ajax call.
        $.ajax({
            // ajax
        });
    });
});
```
In another view, my footer, I have another script block which should use the same variables (current\_month and current\_year). However, it doesn't know about their existence. What would be the quickest, and easiest, way to pass these variables from the first `<script>` block to the other? I can't put the two blocks together because of the way my application is built. Should I just write a function for it which gets and returns these values (and how should I do this? I'm such a newbie) or is there an easier way?
Thanks a lot!
| It's really important to learn to namespace your variables in JavaScript. Scope matters, and it matters a lot. Right now because you're using the "var" keyword, your stuff will be in the local scope.
Some of the other answers here say that you should move them into the global scope. That works, unless something else overwrites them unintentionally. I highly disagree with this approach; globally scoped variables are bad practice in JavaScript.
Namespacing works like this:
```
var foo = foo || {} //Use existing foo or create an empty object.
foo.bar = foo.bar || {}
foo.bar.baz = foo.bar.baz || {}
```
etc. etc.
This may seem like a lot more work, but it also PROTECTS YOUR VARIABLES.
You can also add a simple namespacing function that safely namespaces everything against the window object. (I cribbed this from somewhere ages ago, and I think I modified it a little but maybe didn't).
Put this at the top of your app and you can namespace stuff with $.namespace("myapp.mydata") and then say myapp.mydata.currentYear = ...
```
$.namespace = function() {
    var a = arguments, o = null, i, j, d;
    for (i = 0; i < a.length; i = i + 1) {
        d = a[i].split(".");
        o = window;
        for (j = 0; j < d.length; j = j + 1) {
            o[d[j]] = o[d[j]] || {};
            o = o[d[j]];
        }
    }
    return o;
};
```
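For example (the namespace name `myapp.calendar` is just a placeholder), the Calendar view's script block could publish the values and the footer's script block could read them:
```
// In the Calendar view's <script> block:
$.namespace("myapp.calendar");
myapp.calendar.currentMonth = <?= $init['current_month']; ?>;
myapp.calendar.currentYear  = <?= $init['current_year'] ?>;

// Later, in the footer's <script> block:
var month = myapp.calendar.currentMonth;
var year  = myapp.calendar.currentYear;
```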
Also, if you're new, or want to get hardcore, I recommend reading JavaScript the Good Parts by Crockford.
|
Why does Excel average gives different result?
Here's the table:
[![enter image description here](https://i.stack.imgur.com/Nl9rA.png)](https://i.stack.imgur.com/Nl9rA.png)
Shouldn't they have the same result mathematically (the average of the per-column averages and the average of the per-row averages)?
| The missing cells mean that your cells aren't all weighted evenly.
For example, row 11 has only two cells 82.67 and 90. So for your row average for row 11 they are weighted much more heavily than in your column averages where they are 1/13 and 1/14 of a column instead of 1/2 of a row.
Try filling up all the empty cells with 0 and the averages should match.
Taking a more extreme version of Ruslan Karaev's example:
```
5 5 5 | 5
1     | 1    Average of Average of Rows = (5 + 1 + 0) / 3 = 2
0     | 0
-----
2 5 5
Average of Average of Columns = (2 + 5 + 5) / 3 = 4
```
|
How to select domain name from email address
I have email addresses like `user1@gmail.com`, `user2@ymail.com`, `user3@hotmail.com`, etc.
I want a Mysql `SELECT` that will trim user names and .com and return output as
`gmail`,`ymail`,`hotmail`, etc.
| Assuming that the domain is a single word domain like gmail.com, yahoo.com, use
```
select (SUBSTRING_INDEX(SUBSTR(email, INSTR(email, '@') + 1),'.',1))
```
The inner `SUBSTR` gets the right part of the email address after `@` and the outer `SUBSTRING_INDEX` will cut off the result at the first period.
otherwise if domain is expected to contain multiple words like `mail.yahoo.com`, etc, use:
```
select (SUBSTR(email, INSTR(email, '@') + 1, LENGTH(email) - (INSTR(email, '@') + 1) - LENGTH(SUBSTRING_INDEX(email,'.',-1))))
```
`LENGTH(email) - (INSTR(email, '@') + 1) - LENGTH(SUBSTRING_INDEX(email,'.',-1))` will get the length of the domain minus the TLD `(.com, .biz etc. part)` by using `SUBSTRING_INDEX` with a negative count which will calculate from right to left.
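For instance (the table name `users` is an assumption), the first query applied to the addresses from the question would behave like this:
```
SELECT email,
       SUBSTRING_INDEX(SUBSTR(email, INSTR(email, '@') + 1), '.', 1) AS domain
FROM users;
-- 'user1@gmail.com'   -> 'gmail'
-- 'user2@ymail.com'   -> 'ymail'
-- 'user3@hotmail.com' -> 'hotmail'
```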
|
Why does `std::prev` not fire an error with an iterator of `std::unordered\_set`?
My question is related to the question linked below.
[Bidirectional iterators in unordered\_map?](https://stackoverflow.com/questions/2998452/bidirectional-iterators-in-unordered-map)
Since I did not know that `std::unordered_set` does not support bidirectional iterators, I happened to write code similar to this one.
```
int main(){
    unordered_set<int> y{4};
    std::cout << *(std::prev(y.end())) << std::endl;
}
```
This program COMPILES, but the last line of the code crashes the program. Puzzled by that, I found the linked question. However, I still don't understand why this program compiles instead of producing error messages, while the code in the linked question (which uses `boost::unordered_set`) cannot be compiled. Could you clarify this?
FYI, I am using Mingw64 with g++ 4.8.2 / Windows 7 / 64 bit environment.
| `std::prev` only produces defined behavior for bidirectional iterators.
---
The GNU ISO C++ library (used by GCC 4.8.2) uses `std::advance` to implement `std::prev`, and `std::advance` itself is implemented like this:
- for random access iterators:
```
__i += __n;
```
- for bidirectional iterators:
```
if (__n > 0)
    while (__n--)
        ++__i;
else
    while (__n++)
        --__i;
```
- for all other iterators:
```
while (__n--)
    ++__i;
```
So you can see that for an iterator of `unordered_set`, the function actually does not use the `operator--` which produces the compiler error in the other question you linked.
---
It is your duty to make sure that an iterator passed to `std::prev` is bidirectional. If that is not the case the C++ standard does not give you any guarantees what happens. GCC chooses to just silently ignore it, but it might as well crash your program.
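If you want the compiler to catch this for you, one option is a small wrapper with a `static_assert` on the iterator category (the name `checked_prev` is made up here; this is only a sketch):
```
#include <iterator>
#include <type_traits>
#include <unordered_set>

template <typename It>
It checked_prev(It it, typename std::iterator_traits<It>::difference_type n = 1) {
    static_assert(std::is_base_of<std::bidirectional_iterator_tag,
                      typename std::iterator_traits<It>::iterator_category>::value,
                  "std::prev requires at least a bidirectional iterator");
    return std::prev(it, n);
}

int main() {
    std::unordered_set<int> y{4};
    // checked_prev(y.end());  // fails at compile time instead of crashing at runtime
}
```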
|
Unlimited method arguments without GC
I am trying to make a function that can receive an unlimited amount of arguments without creating GC.
I know that this can be done with the `params` keyword but it creates GC. I also understand that you can pass an array to the function, but I want to know if it is possible to pass unlimited method arguments *without creating GC and without creating an array or list and passing it to the method*.
This is the example with the `params` code:
```
void Update()
{
    GameObject player1 = GameObject.Find("Player1");
    GameObject player2 = GameObject.Find("Player2");
    GameObject enemy1 = GameObject.Find("Enemy1");
    GameObject enemy2 = GameObject.Find("Enemy2");
    GameObject enemy3 = GameObject.Find("Enemy3");

    Vector3 newPos = new Vector3(0, 0, 0);
    moveObjects(newPos, 3f, player1, player2, enemy1, enemy2, enemy3);
}

void moveObjects(Vector3 newPos, float duration, params GameObject[] objs)
{
    for (int i = 0; i < objs.Length; i++)
    {
        //StartCoroutine(moveToNewPos(objs[i].transform, newPos, duration));
    }
}
```
When executed, even with the `StartCoroutine` call commented out, it allocates **80 bytes**. At first, I thought this was happening because I used a `foreach` loop, so I changed that to a `for` loop, but it was still creating GC; then I realized that `params GameObject[]` is causing it. See the Profiler below for more visual information on this:
[![enter image description here](https://i.stack.imgur.com/90x55.png)](https://i.stack.imgur.com/90x55.png)
So, how can I create a method that takes unlimited arguments without generating GC?
*Please ignore the use of `GameObject.Find` function used in the `Update` function. That's just used for an example to get the reference of Objects I want during run-time. I have a script implemented to handle that but not related what's in this question.*
| Yes, it is possible to create a function with unlimited arguments without causing memory allocation.
You can do this with the undocumented `__arglist` keyword and wrapping our unlimited `params` inside it.
Change your `moveObjects(newPos, 3f, player1, player2, enemy1, enemy2, enemy3)` to `moveObjects(newPos, 3f, __arglist(player1, player2, enemy1, enemy2, enemy3))`.
In the `moveObjects` function, replace `params GameObject[] objs` with `__arglist`. Put the `__arglist` in an `ArgIterator`, then loop over it while `ArgIterator.GetRemainingCount` is still greater than *0*.
To obtain each value from the arguments in the loop, use `ArgIterator.GetNextArg` to get the `TypedReference` and then `TypedReference.ToObject` to cast the `object` to the Object type passed in the parameter which is `GameObject` in your example.
The whole changes together:
```
void Update()
{
    GameObject player1 = GameObject.Find("Player1");
    GameObject player2 = GameObject.Find("Player2");
    GameObject enemy1 = GameObject.Find("Enemy1");
    GameObject enemy2 = GameObject.Find("Enemy2");
    GameObject enemy3 = GameObject.Find("Enemy3");

    Vector3 newPos = new Vector3(0, 0, 0);
    moveObjects(newPos, 3f, __arglist(player1, player2, enemy1, enemy2, enemy3));
}

void moveObjects(Vector3 newPos, float duration, __arglist)
{
    //Put the arguments in ArgIterator
    ArgIterator argIte = new ArgIterator(__arglist);

    //Iterate through the arguments in ArgIterator
    while (argIte.GetRemainingCount() > 0)
    {
        TypedReference typedReference = argIte.GetNextArg();
        object tempObj = TypedReference.ToObject(typedReference);
        GameObject obj = (GameObject)tempObj;

        //StartCoroutine(moveToNewPos(obj.transform, newPos, duration));
    }
}
```
While this should solve your problem, it's worth noting that it's an undocumented feature which means that it may stop working someday. If you care about that then an array should be used.
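For reference, the array-based alternative only allocates if you build a new array on every call; a minimal sketch (the buffer field and its size are assumptions) that reuses one pre-allocated array avoids the per-call garbage that `params` generates:
```
// Allocated once and reused every frame.
private GameObject[] objsBuffer = new GameObject[5];

void Update()
{
    objsBuffer[0] = GameObject.Find("Player1");
    objsBuffer[1] = GameObject.Find("Player2");
    objsBuffer[2] = GameObject.Find("Enemy1");
    objsBuffer[3] = GameObject.Find("Enemy2");
    objsBuffer[4] = GameObject.Find("Enemy3");

    moveObjects(new Vector3(0, 0, 0), 3f, objsBuffer);
}

void moveObjects(Vector3 newPos, float duration, GameObject[] objs)
{
    for (int i = 0; i < objs.Length; i++)
    {
        //StartCoroutine(moveToNewPos(objs[i].transform, newPos, duration));
    }
}
```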
EDIT:
Jon Skeet mentioned possible incompatibility on some platforms. I tested again and it works on all the devices I tried it on: both Windows and Android. I also expect it to work on iOS. I was too lazy to switch to a Mac and fiddle with Xcode to test, but there shouldn't be a problem.
**Note you must use .NET>=4.6 to get this working**
*To do that*:
**1.** Go to the Player Settings and change *Scripting Runtime Version* to *"Experimental (.Net 4.6 Equivalent)"*.
**2.** Change *Api Compatibility Level* to *.NET 4.6*.
**3.** Change *Scripting Backend* to Mono instead of IL2CPP. IL2CPP is not supported because Unity did not implement this on it.
|
Javascript navigator.cookieEnabled Browser Compatibility
How well supported is `navigator.cookieEnabled`? Can I safely rely on it for all browsers?
| I know it's present in [at least IE 6 and later](http://msdn.microsoft.com/en-us/library/ms533694(v=vs.85).aspx), [Firefox 1 and later](https://developer.mozilla.org/en/DOM/window.navigator.cookieEnabled), and Dottoro reports that it is [supported by all major browsers](http://help.dottoro.com/ljbwtjwn.php). However, it is not part of any DOM specification and therefore is not guaranteed to be available in or properly implemented by all browsers (for instance, mobile browsers with limited DOM implementations).
As some have discovered, IE returns true for `navigator.cookieEnabled` even if cookies are blocked for the current site. This means that you cannot currently rely on the property at all and you should avoid it completely.
For a complete cross browser cookie support check, you might want to go with something like this:
```
var cookies = ("cookie" in document && (document.cookie.length > 0 ||
(document.cookie = "test").indexOf.call(document.cookie, "test") > -1));
```
Demo: <http://codetester.org/31011785>
This will return `false` in browsers that have cookies disabled or don't support the DOM level 2 property `document.cookie`, which is about as far as you can go in JS.
|
rails ancestry pagination
I've just followed the Railscast tutorial:
<http://railscasts.com/episodes/262-trees-with-ancestry>
Is it possible to paginate results from Ancestry which have been arranged?
eg: Given I have the following in my Message controller:
```
def index
@messages = Message.arrange(:order => :name)
end
```
Then how would I paginate this as it's going to result in a hash?
**Update**
I found that if I use .keys then it will paginate, but only the top level not the children.
```
Message.scoped.arrange(:order => :name).keys
```
**Update**
Each message has a code and some content. I can have nested messages
Suppose I have
code - name
```
1 - Test1
    1 - test1 sub1
    2 - test1 sub2
2 - Test2
    1 - test2 sub1
    2 - test2 sub2
    3 - test2 sub3
```
This is how I want to display the listing, but I also want to paginate this sorted tree.
| It is possible but I've only managed to do it using two database trips.
The main issue stems from not being able to set limits on a node's children, which leads to either a node's children being truncated or children being orphaned on subsequent pages.
An example:
```
id: 105, Ancestry: Null
id: 117, Ancestry: 105
id: 118, Ancestry: 105/117
id: 119, Ancestry: 105/117/118
```
A LIMIT 0,3 (for the sake of the example above) would return the first three records, which will render all but id:119. The subsequent LIMIT 3,3 will return id: 119 which will not render correctly as its parents are not present.
One solution I've employed is using two queries:
1. The first returns root nodes only. These can be sorted and it is this query that is paginated.
2. A second query is issued, based on the first, which returns all children of the paginated parents. You should be able to sort children per level.
In my case, I have a Post model (which has\_ancestry) . Each post can have any level of replies. Also a post object has a replies count which is a cache counter for its immediate children.
In the controller:
```
roots = @topic.posts.roots_only.paginate :page => params[:page]
@posts = Post.fetch_children_for_roots(@topic, roots)
```
In the Post model:
```
named_scope :roots_only, :conditions => 'posts.ancestry is null'

def self.fetch_children_for_roots(postable, roots)
  unless roots.blank?
    condition = roots.select{|r| r.replies_count > 0}.collect{|r| "(ancestry like '#{r.id}%')"}.join(' or ')
    unless condition.blank?
      children = postable.posts.scoped(:from => 'posts FORCE INDEX (index_posts_on_ancestry)', :conditions => condition).all
      roots.concat children
    end
  end
  roots
end
```
Some notes:
- MySQL will stop using the ancestry column index if multiple LIKE statements are used. The FORCE INDEX forces mySQL to use the index and prevents a full table scan
- LIKE statements are only built for nodes with direct children, so that replies\_count column came in handy
- What the class method does is appends children to root, which is a WillPaginate::Collection
Finally, these can be managed in your view:
```
= will_paginate @posts
- Post.arrange_nodes(@posts).each do |post, replies|
  = do stuff here
```
The key method here is **arrange\_nodes** which is mixed in from the ancestry plugin and into your model. This basically takes a sorted Array of nodes and returns a sorted and hierarchical Hash.
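To illustrate the shape of that hash (the ids and nesting below are made up):
```
# Post.arrange_nodes(sorted_posts) returns nested hashes, roughly:
# {
#   #<Post id: 1> => {
#     #<Post id: 2> => {},
#     #<Post id: 3> => { #<Post id: 4> => {} }
#   },
#   #<Post id: 5> => {}
# }
```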
I appreciate that this method does not directly address your question but I hope that the same method, with tweaks, can be applied for your case.
There is probably a more elegant way of doing this but overall I'm happy with the solution (until a better one comes along).
|
SQLAlchemy emitting cross join for no reason
I had a query set up in SQLAlchemy which was running a bit slow, tried to optimize it. The result, for unknown reason, uses an implicit cross join, which is both significantly slower and comes up with entirely the wrong result. I’ve anonymized the table names and arguments but otherwise made no changes. Does anyone know where this is coming from?
To make it easier to find: The differences in new and old emitted SQL are that the new one has a longer SELECT and mentions all three tables in the WHERE before any JOINs.
Original code:
```
cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'

query = (db.session.query(Item.name)
         .join(Project, Customer)
         .filter(Customer.name == cust_name,
                 Project.name == proj_name)
         .distinct(Item.name))
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
```
Original emitted SQL as logged by flask\_sqlalchemy.get\_debug\_queries:
```
QUERY: SELECT DISTINCT ON (items.name) items.name AS items_name
FROM items JOIN projects ON projects.id = items._project_id JOIN customers ON customers.id = projects._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
Parameters: `{'name_2': u'job1', 'state_1': u'blue', 'name_1': u'Bob'}
```
New code:
```
cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'

query = (db.session.query(Item)
         .options(Load(Item).load_only('name', 'color'),
                  joinedload(Item.project, innerjoin=True).load_only('name').
                  joinedload(Project.customer, innerjoin=True).load_only('name'))
         .filter(Customer.name == cust_name,
                 Project.name == proj_name)
         .distinct(Item.name))
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
```
New emitted SQL as logged by flask\_sqlalchemy.get\_debug\_queries:
```
QUERY: SELECT DISTINCT ON (items.nygc_id) items.id AS items_id, items.name AS items_name, items.color AS items_color, items._project_id AS items__project_id, customers_1.id AS customers_1_id, customers_1.name AS customers_1_name, projects_1.id AS projects_1_id, projects_1.name AS projects_1_name
FROM customers, projects, items JOIN projects AS projects_1 ON projects_1.id = items._project_id JOIN customers AS customers_1 ON customers_1.id = projects_1._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
Parameters: `{'state_1': u'blue', 'name_2': u'job1', 'name_1': u'Bob'}
```
In case it matters, the underlying database is PostgreSQL.
The original intent of the query only needs `Item.name`. The optimization attempt is looking less likely to actually be helpful the longer I think about it, but I still want to know where that cross-join came from in case it happens again somewhere that adding `joinedload`, `load_only`, etc. would actually help.
| This is because a `joinedload` is different from a `join`. The `joinedload`ed entities are effectively anonymous, and the later filters you applied refer to different instances of the same tables, so `customers` and `projects` gets joined in twice.
What you should do is to do a `join` as before, but use [`contains_eager`](http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html#sqlalchemy.orm.contains_eager) to make your join look like `joinedload`.
```
query = (session.query(Item)
         .join(Item.project)
         .join(Project.customer)
         .options(Load(Item).load_only('name', 'color'),
                  Load(Item).contains_eager("project").load_only('name'),
                  Load(Item).contains_eager("project").contains_eager("customer").load_only('name'))
         .filter(Customer.name == cust_name,
                 Project.name == proj_name)
         .distinct(Item.name))
```
This gives you the query
```
SELECT DISTINCT ON (items.name) customers.id AS customers_id, customers.name AS customers_name, projects.id AS projects_id, projects.name AS projects_name, items.id AS items_id, items.name AS items_name, items.color AS items_color
FROM items JOIN projects ON projects.id = items._project_id JOIN customers ON customers.id = projects._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
```
|
how to fade out a data bound text block when the property it is bound to is changed, using MVVM
I am using the MVVM design pattern and do not want much code in my code-behind. I am coding in XAML and C#.
When a user saves a new record I would like "record saved" to appear in a TextBlock and then fade away.
This is the sort of thing I would like to work:
```
<TextBlock Name="WorkflowCreated" Text="Record saved">
<TextBlock.Triggers>
<DataTrigger Binding="{Binding Path=NewWorkflowCreated}">
<DataTrigger.EnterActions>
<BeginStoryboard>
<Storyboard>
<DoubleAnimation
Storyboard.TargetName="WorkflowCreated"
Storyboard.TargetProperty="(TextBlock.Opacity)"
From="1.0" To="0.0" Duration="0:0:3"/>
</Storyboard>
</BeginStoryboard>
</DataTrigger.EnterActions>
</DataTrigger>
</TextBlock.Triggers>
```
So when NewWorkflowCreated is changed in the view model it would trigger the animation; unfortunately this does not work. I have also tried this:
```
<TextBlock Name="Message" Text="This is a test.">
<TextBlock.Triggers>
<EventTrigger RoutedEvent="TextBlock.Loaded">
<BeginStoryboard>
<Storyboard>
<DoubleAnimation
Storyboard.TargetName="Message"
Storyboard.TargetProperty="(TextBlock.Opacity)"
From="1.0" To="0.0" Duration="0:0:3"/>
</Storyboard>
</BeginStoryboard>
</EventTrigger>
</TextBlock.Triggers>
</TextBlock>
```
Any help would be much appreciated. Maybe there is a way that requires code in the view model?
| You're using a DataTrigger which needs to be in a style.
```
<Window.DataContext>
    <WpfApplication2:TestViewModel/>
</Window.DataContext>
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition/>
        <RowDefinition/>
    </Grid.RowDefinitions>
    <Grid.Resources>
        <Style x:Key="textBoxStyle" TargetType="{x:Type TextBlock}">
            <Style.Triggers>
                <DataTrigger Binding="{Binding Path=NewWorkflowCreated}" Value="True">
                    <DataTrigger.EnterActions>
                        <BeginStoryboard>
                            <Storyboard>
                                <DoubleAnimation
                                    Storyboard.TargetProperty="(TextBlock.Opacity)"
                                    From="1.0" To="0.0" Duration="0:0:3"/>
                            </Storyboard>
                        </BeginStoryboard>
                    </DataTrigger.EnterActions>
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </Grid.Resources>
    <TextBlock Name="WorkflowCreated" Style="{StaticResource textBoxStyle}" Text="Record saved" />
    <Button Content="press me" Grid.Row="1" Click="Button_Click_1"/>
</Grid>

public class TestViewModel : INotifyPropertyChanged
{
    private bool _newWorkflowCreated;

    public bool NewWorkflowCreated
    {
        get { return _newWorkflowCreated; }
        set
        {
            _newWorkflowCreated = value;
            PropertyChanged(this, new PropertyChangedEventArgs("NewWorkflowCreated"));
        }
    }

    #region Implementation of INotifyPropertyChanged
    public event PropertyChangedEventHandler PropertyChanged;
    #endregion
}
```
|
How to pass formData for POST request in swagger.json?
In my play framework application, I have registered APIs in route file as:
```
POST /api/rmt-create-request controllers.Api.CreateRMTRequestForm
```
On action of controller, I am using following code to access formData submitted with form submit as :
```
public Result CreateRMTRequestForm()
{
Map<String, String[]> params = request().body().asMultipartFormData().asFormUrlEncoded();
```
It's working fine as an API when I submit the form from the frontend application.
I am trying to create API documentation with Swagger UI, for which I have written the following JSON data in the swagger.json file.
```
"paths": {"/api/rmt-create-request": {
"post": {
"tags": [
"RMT APIs"
],
"description" : "Return newly created request data",
"operationId": "create-new-rmt-request",
"consumes": ["application/x-www-form-urlencoded"],
"parameters": [
{
"name": "rootNodeName",
"in": "formData",
"description": "Root node class name for item",
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/rmt-request-data"
}
}
}
},
"default": {
"$ref": "#/components/responses/default"
}
}
}
},
```
While inspecting the RequestHeader data, it's not showing the Content-Type property with value **'multipart/form-data'**, and the form data is not attached, which makes the controller throw a null exception.
Can anyone help whats missing in swagger.json file ?
| You are mixing OpenAPI 2.0 and 3.0 syntax.
In OpenAPI 3.0, request body (including form data) is defined using the [`requestBody`](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#user-content-operationrequestbody) keyword instead of `in: formData` parameters.
Also, OAS3 does not use `consumes`. The media types consumed by the operation are specified inside the `requestBody`.
```
"paths": {
"/api/rmt-create-request": {
"post": {
"tags": [
"RMT APIs"
],
"description": "Return newly created request data",
"operationId": "create-new-rmt-request",
"requestBody": {
"content": {
"multipart/form-data": { // or "application/x-www-form-urlencoded" - depending on what you need
"schema": {
"type": "object",
"properties": {
"rootNodeName": {
"type": "string",
"description": "Root node class name for item"
}
}
}
}
}
}
}
}
}
```
More information: [Describing Request Body](https://swagger.io/docs/specification/describing-request-body/)
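Once the spec is fixed, Swagger UI should send an ordinary multipart form post; for comparison, the equivalent request with curl would look roughly like this (host and value are placeholders):
```
curl -X POST "http://localhost:9000/api/rmt-create-request" \
     -F "rootNodeName=SomeRootClass"
```
The `-F` flag makes curl send `multipart/form-data` with the proper boundary, which is what `request().body().asMultipartFormData()` in the Play controller expects.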
|
How to Implement dynamic routing in routes.js for generated menu items in sidebar in universal react redux boilerplate by erikras
I am currently working on a CMS-based project,
for which I am using the universal react redux boilerplate by erikras.
I really need suggestions on handling dynamic routing.
Let's take a simple scenario from the boilerplate...
In **routes.js**
```
<Route path="about" component={About}/>
<Route path="login" component={Login}/>
<Route path="survey" component={Survey}/>
<Route path="widgets" component={Widgets}/>
```
**data.js**
```
export const data = [
{id: 1, property: 'Dashboard', link: '/'},
{id: 2, property: 'Login', link: '/login'},
{id: 3, property: 'About Us', link: '/About'},
];
```
Now, let's say that on the basis of the user role, the properties in the JSON data will change.
*Let's say the new property is:*
```
{id: 4, property: 'test page', link: '/test'}
```
When React renders the components, how would it know the route link, as it is not defined in routes.js?
I am not finding the right way to implement it.
We need a sidebar made of specific menu content as per the user role.
Let's say we are building a reservation system; there can be different user roles like admin, maintenance mode, and assistant.
So different roles will have different properties, and accordingly we need to generate the menu on the basis of them, as the properties will definitely differ as per user role.
Thanks!!
| It is not clear from your example which component should be rendered for the `/test` url. I suppose it is the value of the `property` key, right?
### First option
You can do is something like this:
```
<Route path="/:page" component={Page}/>
```
It will allow you to render the `Page` component for every url that starts with `/`, and this component will have the page url inside `this.props.routeParams.page`. That allows you to find the needed component inside `Page#render`:
```
render() {
    const url = this.props.routeParams.page;
    const PageComponent = data.find(page => page.link === url).property;
    return <PageComponent />;
}
```
### Second option
You can generate Routes dynamically, but I'm not sure if it works (you can check it). You just should replace this part:
```
<Route path="about" component={About}/>
<Route path="login" component={Login}/>
<Route path="survey" component={Survey}/>
<Route path="widgets" component={Widgets}/>
```
with
```
data.map(page => <Route path={page.link} component={page.property} key={page.id}/>)
```
|
jquery mobile, disable all button when loading overlay is showed
Actually I can call this code
```
$(":input").attr("disabled",true); //Disable all input fields
```
to disable all buttons on my page. But I don't know how good the performance is when I have a lot of buttons on my page.
I saw a trick where we create a loading indicator overlay, which sits above all elements on the page, so the user cannot click on the buttons anymore.
Is there any way to reuse the loading overlay of jQuery Mobile to achieve the above trick? I'm not good at CSS, so hopefully someone can help me.
Thanks
**Edited:**
I ended up using the jQuery.blockUI plugin and it works as expected.
And I added the default div with CSS from jQuery Mobile so that I still have the loading message of jQuery Mobile and the behaviour that I wanted.
[Working sample here](http://jsfiddle.net/peacemoon/rRKRr/3/)
| A simple way I've just found is to use a fixed background with `z-index` and `opacity`:
Add this CSS:
```
.ui-loader-background {
    width: 100%;
    height: 100%;
    top: 0;
    margin: 0;
    background: rgba(0, 0, 0, 0.3);
    display: none;
    position: fixed;
    z-index: 100;
}
.ui-loading .ui-loader-background {
    display: block;
}
```
`ui-loader-background` is a custom class, but `ui-loading` is added by JQM to the `<html>` element when the loading spinner is shown.
And add this element in your body:
```
<div class="ui-loader-background"> </div>
```
example : <http://jsfiddle.net/5GB6B/1/>
|
Reusablity of controller in MVC
In the MVC pattern, the controller is the least reusable, compared to the other two aspects.
Now let's say I have an app (say for Ordering Pizza), which is available both as a web app and a mobile app (say iPhone). So in that case, I think the model (or data) can be reused. The view might not be reusable.
But regarding the controller, is it possible to reuse anything? Let's say if I already have a working web app, can I reuse controller logic for the mobile app as well? Also, what is and where exactly does "business logic" reside in MVC?
| The controller calls a service layer. The service layer uses the model to do business logic. Controller never contains business logic. It should only delegate work to the service layer. I consider the service layer as the part that the domain model exposes, you could say it is the "Model" in MVC.
That said, I don't think the MVC frameworks really care if the controller is reusable or not. The important part is the model, which should not change because the service layer code is reused. Besides, if we write our code correctly, the controller will be a very thin layer and reusability should not be a concern.
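As a rough sketch (all class and method names here are invented for illustration), the controller only translates the request and delegates, while the business rule lives in the service layer that both the web and mobile backends can share:
```
import java.util.List;

// Controller: thin, framework-facing, no business rules.
class OrderController {
    private final OrderService orderService;

    OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    String placeOrder(long customerId, List<String> pizzaNames) {
        long orderId = orderService.placeOrder(customerId, pizzaNames);
        return "redirect:/orders/" + orderId;
    }
}

// Service layer: holds the business logic, reusable from the web app, a mobile API, and tests.
class OrderService {
    long placeOrder(long customerId, List<String> pizzaNames) {
        // validate, price, persist... (business logic lives here)
        return 42L; // id of the newly created order (placeholder)
    }
}
```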
Can you reuse the controller logic from the web app for a mobile application? I think not, but you could use the service layer. I am sceptical if even the view can be used directly from web to mobile apps, the needs are so different.
I suggest you look at Domain driven design if you are interested in application design and learning how to organize business logic.
|
How to copy the data of all of the rows in Task Manager in Windows8?
Currently, you can only select one row, press Ctrl+C, and then press Ctrl+V into Excel.
If there is no way to do it from Task Manager, is there some other ways? Such as from cmd.exe.
Thanks in advance.
![enter image description here](https://i.stack.imgur.com/hpLvs.png)
| Press your Windows key, then type `Powershell` to open a PowerShell session, then type `ps` to get a list of all processes with some standard columns.
To copy the information, use one of the out- cmdlets:
```
ps | Out-Clipboard
```
or
```
ps | Out-File C:\processes.txt
```
to limit the number of processes to show, you can filter:
```
ps | where ProcessName -eq Chrome | Out-Clipboard
```
to show different columns, specify them:
```
ps | where ProcessName -eq Chrome | Select Id, ProcessName, Path | Out-Clipboard
```
to get a list of all available columns you could do:
```
ps | where Id -eq 0 | fl *
```
You can do a lot more filtering, but this should get you started.
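If the end goal is Excel (as in the question), you can also write the rows straight to a CSV file instead of going through the clipboard:
```
ps | Select Id, ProcessName, CPU, WorkingSet | Export-Csv C:\processes.csv -NoTypeInformation
```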
cmd.exe is just there for backwards compatibility, PowerShell is much more powerful and if you want to learn to do stuff on the command line, I recommend using PowerShell.
I never tried PowerShell with simplified Chinese, so I don't know how well that works.
|
C++ header files with no extension
I am using an open source project (Open Scene Graph). I found that all the header file names are in `File` format, which I found to be *File With No Extension*, as mentioned on some websites.
I would like to know why those developers used this convention, rather than the traditional `.h` file extension.
| It seems you are talking about [this repository](https://github.com/openscenegraph/OpenSceneGraph) of C++ code.
It looks like the authors of that code decided to follow the patterns of the C++ standard library. In standard C++, library headers are not supposed to have the `.h` extension. So the following is correct:
```
#include <iostream>
```
With most implementations writing `<iostream.h>` would also work, but the version without an extension is actually correct. The C++ standard library was able to drop extensions in C++98 due to the introduction of namespaces, and introduction of the `std` namespace for the standard library.
The C++ standard neither requires nor forbids an extension for other headers, so it's entirely up to the authors of some software what file extension to use, if any. The most common choices are to use `.h` or `.hpp`, the latter being intended to distinguish C++ headers from C headers.
A quick look at the OpenSceneGraph code shows that they've followed the C++ standard library pattern in their includes. There are no extensions, and everything is in the `osg` namespace, analogous to the `std` namespace of the standard library. So using the OpenSceneGraph libraries is very similar to using the C++ standard library.
```
#include <osg/Camera> // Provides osg::Camera
```
It's the same pattern as:
```
#include <string> //Provides std::string
```
So I think it's safe to say that authors of the OSG wanted to follow the same pattern as in the C++ Standard Library. My personal opinion is that it's better to have a file extension, even if only to be able to search for header files.
|
How to calculate the resulting filesize of Image.resize() in PIL
I have to reduce incoming files to a size of max 1MB. I use `PIL` for image operations and python 3.5.
The filesize of an image is given by:
```
import os
src = 'testfile.jpg'
size = os.path.getsize(src)
print(size)
```
which gives in my case 1531494
If I open the file with PIL I can get only the dimensions:
```
from PIL import Image
src = 'testfile.jpg'
image = Image.open(src)
size = image.size
print(size)
```
which gives in my case (1654, 3968)
Of course I can loop over the file as below with different sizes, save the file, and check its filesize. But there must be a simpler way, because this takes too much time. (If you downsize 1000 files with different sizes.)
```
def resize_image(src, reduceby=1):
    '''
    resizes image by percent given in reduceby
    '''
    print(" process_image:", src, reduceby)
    org = Image.open(src)
    real_size = org.size
    reduced_size = (int(real_size[0] * reduceby / 100), int(real_size[1] * reduceby / 100))
    org.resize(reduced_size, Image.ANTIALIAS)
    reduced_file = src[:-4] + "_" + str(reduceby) + src[-4:]
    org.save(reduced_file, optimize=True)
    print(" reduced_image:", reduced_file)
    reduced_filesize = os.path.getsize(reduced_file)
    return reduced_filesize, reduced_file

def loop_image(src, target_size):
    print("loop_image :", src, target_size)
    file_size = os.path.getsize(src)
    reduced_file = src
    print("source :", src, file_size)
    reduce_by = 1
    while file_size > target_size:
        file_size, reduced_file = resize_image(src, reduce_by)
        print("target :", file_size, reduced_file)
        reduce_by += 1
    return reduced_file
```
This function works, but it reduces too much and takes too much time.
My question is: How can I calculate the resulting filesize **before** I resize it?
Or is there a simpler way?
| Long story short, you do not know how well the image will be compressed, because it depends a lot on what kind of image it is. That said, we can optimize your code.
Some optimizations:
- Approximate the number of bytes per pixel using the memory size and the image width.
- Perform a ratio update based on the new and old memory consumption.
My coding solution applies both of the above methods, because applying them separately didn't seem to result in very stable convergence. The following sections will explain both parts in more depth and show the test cases that I considered.
## Reducing image memory
The following code approximates the new image dimensions based on the difference between the original file size (in bytes) and the preferred file size (in bytes). It will approximate the number of bytes per pixels and then applies the difference between the original bytes per pixel and the preferred bytes per pixel on the image width and height (therefore the square root is taken).
Then I use `opencv-python` (cv2) for the image rescaling, but that can be changed by your code.
```
def reduce_image_memory(path, max_file_size: int = 2 ** 20):
    """
    Reduce the image memory by downscaling the image.

    :param path: (str) Path to the image
    :param max_file_size: (int) Maximum size of the file in bytes
    :return: (np.ndarray) downscaled version of the image
    """
    image = cv2.imread(path)
    height, width = image.shape[:2]

    original_memory = os.stat(path).st_size
    original_bytes_per_pixel = original_memory / np.product(image.shape[:2])

    # perform resizing calculation
    new_bytes_per_pixel = original_bytes_per_pixel * (max_file_size / original_memory)
    new_bytes_ratio = np.sqrt(new_bytes_per_pixel / original_bytes_per_pixel)
    new_width, new_height = int(new_bytes_ratio * width), int(new_bytes_ratio * height)

    new_image = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_LINEAR_EXACT)
    return new_image
```
## Applying ratio
Most of the magic happens in `ratio *= max_file_size / new_memory`, where we calculate our error with respect to the preferred size and correct our ratio with that value.
The program will search for a ratio that satisfies the following condition:
- `abs(1 - max_file_size / new_memory) > max_deviation_percentage`
This means that the new file size has to be relatively close to the preferred file size. You control this closeness by `delta`. The higher the delta, the smaller your file can be (lower than `max_file_size`). The smaller the delta, the closer the new file size will be to `max_file_size`, but it will never be larger.
The trade-off is in time: the smaller the delta, the more time it will take to find a ratio satisfying the condition. Empirical testing shows that values between `0.01` and `0.05` are good.
```
if __name__ == '__main__':
    image_location = "test img.jpg"

    # delta denotes the maximum variation allowed around the max_file_size
    # The lower the delta the more time it takes, but the close it will be to `max_file_size`.
    delta = 0.01
    max_file_size = 2 ** 20 * (1 - delta)
    max_deviation_percentage = delta

    current_memory = new_memory = os.stat(image_location).st_size
    ratio = 1
    steps = 0

    # make sure that the comparison is within a certain deviation.
    while abs(1 - max_file_size / new_memory) > max_deviation_percentage:
        new_image = reduce_image_memory(image_location, max_file_size=max_file_size * ratio)
        cv2.imwrite(f"resize {image_location}", new_image)
        new_memory = os.stat(f"resize {image_location}").st_size
        ratio *= max_file_size / new_memory
        steps += 1

    print(f"Memory resize: {current_memory / 2 ** 20:5.2f}, {new_memory / 2 ** 20:6.4f} MB, number of steps {steps}")
```
## Test cases
For testing I had two different approaches, using randomly generated images and an example from google.
For the random images I used the following code
```
def generate_test_image(ratio: Tuple[int, int], file_size: int) -> Image:
    """
    Generate a test image with fixed width height ratio and an approximate size.

    :param ratio: (Tuple[int, int]) screen ratio for the image
    :param file_size: (int) Approximate size of the image, note that this may be off due to image compression.
    """
    height, width = ratio  # Numpy reverse values
    scale = np.int(np.sqrt(file_size // (width * height)))
    img = np.random.randint(0, 255, (width * scale, height * scale, 3), dtype=np.uint8)
    return img
```
### results
- Using a randomly generated image
```
image_location = "test image random.jpg"
# Generate a large image with fixed ratio and a file size of ~1.7MB
image = generate_test_image(ratio=(16, 9), file_size=1531494)
cv2.imwrite(image_location, image)
```
Memory resize: 1.71, 0.99 MB, number of steps 2
In 2 steps it reduces the original size from 1.7 MB to 0.99 MB.
(before)
[![original randomly generated image of 1.7 MB](https://i.stack.imgur.com/80cLu.jpg)](https://i.stack.imgur.com/80cLu.jpg)
(after)
[![resized randomly generated image of 0.99 MB](https://i.stack.imgur.com/E3QLC.jpg)](https://i.stack.imgur.com/E3QLC.jpg)
- Using a google image
Memory resize: 1.51, 0.996 MB, number of steps 4
In 4 steps it reduces the original size from 1.51 MB to 0.996 MB.
(before)
[![original google image of a lake with waterfalls](https://i.stack.imgur.com/vPagE.jpg)](https://i.stack.imgur.com/vPagE.jpg)
(after)
[![resized google image of a lake with waterfalls](https://i.stack.imgur.com/irgm0.jpg)](https://i.stack.imgur.com/irgm0.jpg)
## Bonus
- It also works for `.png`, `.jpeg`, `.tiff`, etc...
- Besides downscaling it can also be used to upscale images to a certain memory consumption.
- The image ratio is maintained as good as possible.
---
## Edit
I made the code a bit more user friendly, and added the suggestion from `Mark Setchell` to use an `io.BytesIO` buffer; this roughly speeds up the code by a factor of 2. There is also a `step_limit` that prevents endless looping if the delta is very small.
```
import io
import os
import time
from typing import Tuple

import cv2
import numpy as np
from PIL import Image


def generate_test_image(ratio: Tuple[int, int], file_size: int) -> Image:
    """
    Generate a test image with fixed width height ratio and an approximate size.

    :param ratio: (Tuple[int, int]) screen ratio for the image
    :param file_size: (int) Approximate size of the image, note that this may be off due to image compression.
    """
    height, width = ratio  # Numpy reverse values
    scale = np.int(np.sqrt(file_size // (width * height)))
    img = np.random.randint(0, 255, (width * scale, height * scale, 3), dtype=np.uint8)
    return img


def _change_image_memory(path, file_size: int = 2 ** 20):
    """
    Tries to match the image memory to a specific file size.

    :param path: (str) Path to the image
    :param file_size: (int) Size of the file in bytes
    :return: (np.ndarray) rescaled version of the image
    """
    image = cv2.imread(path)
    height, width = image.shape[:2]

    original_memory = os.stat(path).st_size
    original_bytes_per_pixel = original_memory / np.product(image.shape[:2])

    # perform resizing calculation
    new_bytes_per_pixel = original_bytes_per_pixel * (file_size / original_memory)
    new_bytes_ratio = np.sqrt(new_bytes_per_pixel / original_bytes_per_pixel)
    new_width, new_height = int(new_bytes_ratio * width), int(new_bytes_ratio * height)

    new_image = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_LINEAR_EXACT)
    return new_image


def _get_size_of_image(image):
    # Encode into memory and get size
    buffer = io.BytesIO()
    image = Image.fromarray(image)
    image.save(buffer, format="JPEG")
    size = buffer.getbuffer().nbytes
    return size


def limit_image_memory(path, max_file_size: int, delta: float = 0.05, step_limit=10):
    """
    Reduces an image to the required max file size.

    :param path: (str) Path to the original (unchanged) image.
    :param max_file_size: (int) maximum size of the image
    :param delta: (float) maximum allowed variation from the max file size.
        This is a value between 0 and 1, relatively to the max file size.
    :return: an image path to the limited image.
    """
    start_time = time.perf_counter()
    max_file_size = max_file_size * (1 - delta)
    max_deviation_percentage = delta

    new_image = None
    current_memory = new_memory = os.stat(image_location).st_size
    ratio = 1
    steps = 0

    while abs(1 - max_file_size / new_memory) > max_deviation_percentage:
        new_image = _change_image_memory(path, file_size=max_file_size * ratio)
        new_memory = _get_size_of_image(new_image)
        ratio *= max_file_size / new_memory
        steps += 1

        # prevent endless looping
        if steps > step_limit: break

    print(f"Stats:"
          f"\n\t- Original memory size: {current_memory / 2 ** 20:9.2f} MB"
          f"\n\t- New memory size     : {new_memory / 2 ** 20:9.2f} MB"
          f"\n\t- Number of steps {steps}"
          f"\n\t- Time taken: {time.perf_counter() - start_time:5.3f} seconds")

    if new_image is not None:
        cv2.imwrite(f"resize {path}", new_image)
        return f"resize {path}"
    return path


if __name__ == '__main__':
    image_location = "your nice image.jpg"

    # Uncomment to generate random test images
    # test_image = generate_test_image(ratio=(16, 9), file_size=1567289)
    # cv2.imwrite(image_location, test_image)

    path = limit_image_memory(image_location, max_file_size=2 ** 20, delta=0.01)
```
|
React navigation header right button
I want to add a button to the react-native header; the button is to mask and unmask the password on the page. The problem is that on click, when I change the state to change the secureTextEntry value, the icon won't change and keeps its initial value.
The function is working fine but the icon won't change:
```
this.state.secureTextEntry ? "eye" : "eye-slash"
```
this is the main code
```
class ChangePasswordScreen extends Component {
  constructor(props) {
    super(props);
    this.state = {
      newPassword: null,
      currentPassword: null,
      confirmPassword: null,
      errors: [],
      secureTextEntry: true
    };
    this.maskPassword = this.maskPassword.bind(this)
  }

  componentDidMount() {
    this.props.navigation.setParams({
      headerRight: (
        <TouchableOpacity onPress={() => { this.maskPassword(); }}>
          <Icon
            style={styles.eyeIcon}
            name={this.state.secureTextEntry ? "eye" : "eye-slash"}
            size={20}
            color={Colors.WHITE}
          />
        </TouchableOpacity>
      )
    })
  }

  static navigationOptions = ({ navigation }) => {
    return {
      // headerTitle: <LogoTitle />,
      headerRight: navigation.state.params && navigation.state.params.headerRight,
    };
  };

  maskPassword = () => {
    this.setState({
      secureTextEntry: !this.state.secureTextEntry
    })
  }
}
```
| The problem is that **this.setState** will not re-render the header component. If you want to change the header right button then you have to call **setParams** again.
Try this code in **componentDidMount**
```
componentDidMount() {
this.props.navigation.setParams({
headerRight: this.setHeaderRight(this.state.secureTextEntry)
});
}
```
**Set function for header right**
```
setHeaderRight = state => {
//console.log("setHeaderRight", this.state.secureTextEntry);
return (
<TouchableOpacity
onPress={() => {
this.maskPassword();
}}
>
<Icon
style={styles.eyeIcon}
name={state ? "eye" : "eye-slash"}
size={20}
color={Colors.WHITE}
/>
</TouchableOpacity>
);
};
```
**Set header right again when state set**
```
maskPassword = () => {
this.setState({
secureTextEntry: !this.state.secureTextEntry
});
this.props.navigation.setParams({
headerRight: this.setHeaderRight(!this.state.secureTextEntry)
});
};
```
|
How Can I Access the WiFiManager Framework iOS?
I am trying to access the WiFiManager Framework (previously apple80211) and can't find the right information. I understand Apple doesn't allow the use of private frameworks for apps on the app store but I am writing this app for personal use so this is of no concern to me. What I need to know is can I still access the WiFiManager Framework for an app I upload directly to my phone and is there any information out there on how to? Thanks in advance for ANY help.
| See my answer [here](https://stackoverflow.com/questions/2053114/iphone-wi-fi-manager-sdk/2152933#2152933).
```
//IN YOUR APP
notify_post("com.yourcompany.yourapp.yournotification");

//IN YOUR DYLIB
#import <SpringBoard/SBWiFiManager.h>

HOOK(SpringBoard, applicationDidFinishLaunching$, void, id app) {
    //Listen for events via DARWIN NOTIFICATION CENTER
    CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(), NULL,
        &NotificationReceivedCallback, CFSTR("com.yourcompany.yourapp.yournotification"), NULL,
        CFNotificationSuspensionBehaviorCoalesce);
}

//THIS IS WHERE THE MAGIC HAPPENS
static void NotificationReceivedCallback(CFNotificationCenterRef center,
                                         void *observer, CFStringRef name,
                                         const void *object, CFDictionaryRef userInfo)
{
    [[objc_getClass("SBWiFiManager") sharedInstance] setWiFiEnabled:NO];
}
```
|
Can I access the DATA from a required script in Ruby?
Is it possible to access the text after `__END__` in a ruby file other than the "main" script?
For example:
```
# b.rb
B_DATA = DATA.read
__END__
bbb
```
.
```
# a.rb
require 'b'
A_DATA = DATA.read
puts 'A_DATA: ' + A_DATA
puts 'B_DATA: ' + B_DATA
__END__
aaa
```
.
```
C:\Temp>ruby a.rb
A_DATA:
B_DATA: aaa
```
Is there any way to get at the "bbb" from b.rb?
| Unfortunately, the `DATA` global constant is set when the "main" script is loaded. A few things that might help:
You *can* at least get `A_DATA` to be correct. Just reverse the order of the first two operations in `a.rb`:
```
# a.rb
A_DATA = DATA.read
require 'b'
...
```
You can get the `B_DATA` to be correct if you go through a bit of rigamarole:
```
# load_data_regardless_of_main_script.rb
module LoadDataRegardlessOfMainScript
  def self.from(file)
    # the performance of this function could be
    # greatly improved by using a StringIO buffer
    # and only appending to it after seeing __END__.
    File.read(file).sub(/\A.*\n__END__\n/m, '')
  end
end

# b.rb:
require 'load_data_regardless_of_main_script'
B_DATA = LoadDataRegardlessOfMainScript.from(__FILE__)
```
|
The program can't start because mfc120ud.dll is missing from your computer
I'm trying to run an application that I've recently developed and compiled using VS2013 on another computer.
Running it I get:
>
> The program can't start because mfc120ud.dll is missing from your computer. Try reinstalling the program to fix this problem.
>
>
>
I've searched for mfc120ud.dll on the net but without any result. I've copied/pasted this DLL file from the computer on which I developed the app into the System32 folder of the other computer; doing that results in:
>
> C:\Users\u\System32\mfc120ud.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vendor for support.
>
>
>
Knowing that the computer on which I'm developing is a 64-bit one and the other one is 32-bit, and that I've copied/pasted the mfc120ud.dll version from System32, how do I fix this issue?
|
>
> The program can't start because mfc120ud.dll is missing from your computer. Try reinstalling the program to fix this problem.
>
>
>
That is one of the debug libraries for MFC. That's the library that you link against when you build debug builds of your program. It is present on your developer machine, but you cannot redistribute it.
You need to do the following:
1. Build your project for release. This will link against the release versions of any runtime DLLs.
2. Install the MSVC and MFC redistributable dependencies on any machine on which you plan to run your program. An alternative is to install the runtime DLLs in the same directory as your executable.
>
> I've copied/pasted the mfc120ud.dll version from System32
>
>
>
You are not allowed to do that. Retrace your steps and undo that.
|
for( in ) loop index is string instead of integer
Consider following code:
```
var arr = [111, 222, 333];
for(var i in arr) {
if(i === 1) {
// Never executed
}
}
```
It will fail, because `typeof i === 'string'`.
Is there a way around this? I could explicitly convert `i` to an integer, but doing so seems to defeat the purpose of using the simpler `for in` instead of a regular `for` loop.
**Edit:**
I am aware of using `==` in comparison, but that is not an option.
| You have got several options
1. Make conversion to a Number:
```
parseInt(i) === 1
~~i === 1
+i === 1
```
2. Don't compare a type (Use `==` instead of `===`):
```
i == 1 // Just don't forget to add a comment there
```
3. Change the for loop to (I would do this but it depends on what you are trying to achieve):
```
for (var i = 0; i < arr.length; i++) {
if (i === 1) { }
}
// or
arr.forEach(function(item, i) {
if (i === 1) { }
});
```
By the way you should not use `for...in` to iterate through an array. See docs: <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in>
>
> for..in should not be used to iterate over an Array where index order
> is important. Array indexes are just enumerable properties with
> integer names and are otherwise identical to general Object
> properties. There is no guarantee that for...in will return the
> indexes in any particular order and it will return all enumerable
> properties, including those with non–integer names and those that are
> inherited.
>
>
> Because the order of iteration is implementation dependent, iterating
> over an array may not visit elements in a consistent order. Therefore
> it is better to use a for loop with a numeric index (or Array.forEach
> or the non-standard for...of loop) when iterating over arrays where
> the order of access is important.
>
>
>
|
Push notification not received when App is in Background in iOS 10
I'm using FCM(Firebase Cloud Messaging) for sending push notifications in iOS.
I'm able to receive the notification when the app is in the foreground state. But when the app is in the background state, the notification is not received; only when the application comes back to the foreground is the notification received.
My code is:
```
- (void)userNotificationCenter:(UNUserNotificationCenter *)center
willPresentNotification:(UNNotification *)notification
withCompletionHandler:(void (^)(UNNotificationPresentationOptions))completionHandler {
// Print message ID.
NSDictionary *userInfo = notification.request.content.userInfo;
NSLog(@"Message ID: %@", userInfo[@"gcm.message_id"]);
// Print full message.
NSLog(@"%@", userInfo);
if( [UIApplication sharedApplication].applicationState == UIApplicationStateInactive )
{
NSLog( @"INACTIVE" );
completionHandler(UNNotificationPresentationOptionAlert);
}
else if( [UIApplication sharedApplication].applicationState == UIApplicationStateBackground )
{
NSLog( @"BACKGROUND" );
completionHandler( UNNotificationPresentationOptionAlert );
}
else
{
NSLog( @"FOREGROUND" );
completionHandler( UNNotificationPresentationOptionAlert );
    }
}
- (void)applicationDidEnterBackground:(UIApplication *)application {
}
```
When App is in background state:
```
- (void)userNotificationCenter:(UNUserNotificationCenter *)center
willPresentNotification:(UNNotification *)notification
withCompletionHandler:(void (^)(UNNotificationPresentationOptions))completionHandler
```
-- is not called at all.
I enabled push notifications and also remote notifications in Background Modes in App Capabilities. But the app is still not receiving the notification.
I referred to some StackOverflow questions but wasn't able to solve the issue. Is there anything to add in iOS version 10 or any mistake in my code?
| For iOS 10, we need to call the 2 methods below.
**For FOREGROUND state**
```
- (void)userNotificationCenter:(UNUserNotificationCenter *)center willPresentNotification:(UNNotification *)notification withCompletionHandler:(void (^)(UNNotificationPresentationOptions options))completionHandler
{
NSLog( @"Handle push from foreground" );
// custom code to handle push while app is in the foreground
NSLog(@"%@", notification.request.content.userInfo);
completionHandler(UNNotificationPresentationOptionAlert);
}
```
**For BACKGROUND state**
```
- (void)userNotificationCenter:(UNUserNotificationCenter *)center didReceiveNotificationResponse:(UNNotificationResponse *)response withCompletionHandler:(void (^)())completionHandler
{
NSLog( @"Handle push from background or closed" );
// if you set a member variable in didReceiveRemoteNotification, you will know if this is from closed or background
NSLog(@"%@", response.notification.request.content.userInfo);
completionHandler();
}
```
Before that, we must add the `UserNotifications` framework and import it in the `AppDelegate.h` file:
```
#import <UserNotifications/UserNotifications.h>
@interface AppDelegate : UIResponder <UIApplicationDelegate,UNUserNotificationCenterDelegate>
```
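One detail that is easy to miss: these two callbacks are only invoked if the notification center's delegate has actually been assigned. A minimal sketch of that assignment (assuming the usual `didFinishLaunchingWithOptions:` in `AppDelegate.m`; adapt it to your existing launch code):
```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Assign the delegate early, before the app finishes launching.
    [UNUserNotificationCenter currentNotificationCenter].delegate = self;
    return YES;
}
```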
|
URL scheme - Qt and mac
I'm trying to implement a custom URL scheme for my application. I've added the necessary lines to my Info.plist. After calling the specified URL (e.g. myapp://) the application launches.
To handle the URL, I've found these steps:
```
@interface EventHandler : NSObject {
}
@end
@implementation EventHandler
- (id)init {
self = [super init];
if (self) {
NSLog(@"eventHandler::init");
NSNotificationCenter* defaultCenter = [NSNotificationCenter defaultCenter];
[defaultCenter addObserver:self
selector:@selector(applicationDidFinishLaunching:)
// name:NSApplicationWillFinishLaunchingNotification
name:NSApplicationDidFinishLaunchingNotification
object:nil];
}
return self;
}
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
NSAppleEventManager *appleEventManager = [NSAppleEventManager sharedAppleEventManager];
[appleEventManager setEventHandler:self andSelector:@selector(handleGetURLEvent:withReplyEvent:) forEventClass:kInternetEventClass andEventID:kAEGetURL];
}
- (void)handleGetURLEvent:(NSAppleEventDescriptor *)event withReplyEvent:(NSAppleEventDescriptor *)replyEvent
{
NSString* url = [[event paramDescriptorForKeyword:keyDirectObject] stringValue];
NSLog(@"%@", url);
}
@end
```
The above code works if the application is running, but if the URL gets called while the application is terminated, the event is not caught. I think this is because of NSApplicationDidFinishLaunchingNotification.
Changing it to NSApplicationWillFinishLaunchingNotification causes no events to be caught at all. Maybe Qt handles it before me, but I can't find a workaround for the problem.
| I was also trying to get my Qt-based application handle a custom URL scheme on the Mac and went down the same path as the original poster. It turns out that Qt4 already supports URL events on the Mac, and there's no need to write Objective-C code to receive them. This is in fact the reason that you didn't receive any URL events when you set the event handler in response to NSApplicationWillFinishLaunchingNotification: Qt registers its own handler afterward.
When a URL with your custom scheme is triggered, your Qt application will receive a FileOpenEvent. Note that it is the QApplication instance which receives the event. You can catch it by making your application subclass QApplication or by installing an event filter on the standard QApplication. I opted for this second approach.
Here's the eventFilter method of my custom event filter class, FileOpenEventFilter. It just emits the signal urlOpened when the event contains a non-empty URL. It also saves the last opened URL in case my main window isn't completely initialized when the event arrives (which happens in my app when it's not already running when the custom URL is clicked.)
```
bool FileOpenEventFilter::eventFilter(QObject* obj, QEvent* event)
{
if (event->type() == QEvent::FileOpen)
{
QFileOpenEvent* fileEvent = static_cast<QFileOpenEvent*>(event);
if (!fileEvent->url().isEmpty())
{
m_lastUrl = fileEvent->url().toString();
emit urlOpened(m_lastUrl);
}
else if (!fileEvent->file().isEmpty())
{
emit fileOpened(fileEvent->file());
}
return false;
}
else
{
// standard event processing
return QObject::eventFilter(obj, event);
}
}
```
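For completeness, wiring the filter up might look roughly like this (a minimal sketch; the `FileOpenEventFilter` constructor and the signal connection are assumptions based on the snippet above, not code from my actual application):
```
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Install the filter on the QApplication instance, which is the
    // object that receives QFileOpenEvent on the Mac.
    FileOpenEventFilter *urlFilter = new FileOpenEventFilter(&app);
    app.installEventFilter(urlFilter);

    // ... create the main window and connect to urlFilter's urlOpened(QString)
    // signal so late-arriving URLs are handled once the UI is ready ...

    return app.exec();
}
```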
|
Netbeans UI empty in DWM
I'm trying to use the dwm window manager, and everything is fine (1 MB of RAM ;) but when I run NetBeans it loads with a grey and empty interface. (It works fine in Unity or E17.)
Any Idea ?
I have found out this <http://netbeans.org/bugzilla/show_bug.cgi?id=86253>
but the solutions proposed don't work for me
| Perhaps your issue is the same as this xmonad issue?
<http://www.haskell.org/haskellwiki/Xmonad/Frequently_asked_questions#Problems_with_Java_applications.2C_Applet_java_console>
>
> The Java gui toolkit has a hardcoded list of so-called
> "non-reparenting" window managers. xmonad is not on this list (nor are
> many of the newer window managers). Attempts to run Java applications
> may result in `grey blobs' where windows should be, as the Java gui
> code gets confused.
>
>
>
A solution is to export \_JAVA\_AWT\_WM\_NONREPARENTING=1.
Edit:
According to <https://wiki.archlinux.org/index.php/Dwm#Fixing_misbehaving_Java_applications>, you can also use "wmname LG3D" to hack the window manager's name.
|
Any better way to solve Project Euler Problem #5?
Here's my attempt at Project Euler Problem #5, which looks quite clumsy at first sight. Is there any better way to solve this? Or any built-in library that already does some part of the problem?
```
'''
Problem 5:
2520 is the smallest number that can be divided by each of the numbers from 1 to 10
What is the smallest number, that is evenly divisible by each of the numbers from
1 to 20?
'''
from collections import defaultdict
def smallest_number_divisible(start, end):
'''
Function that calculates LCM of all the numbers from start to end
It breaks each number into its prime factorization,
simultaneously keeping track of highest power of each prime number
'''
# Dictionary to store highest power of each prime number.
prime_power = defaultdict(int)
for num in xrange(start, end + 1):
# Prime number generator to generate all primes till num
prime_gen = (each_num for each_num in range(2, num + 1) if is_prime(each_num))
# Iterate over all the prime numbers
for prime in prime_gen:
# initially quotient should be 0 for this prime number
# Will be increased, if the num is divisible by the current prime
quotient = 0
# Iterate until num is still divisible by current prime
while num % prime == 0:
num = num / prime
quotient += 1
# If quotient of this prime in dictionary is less than new quotient,
# update dictionary with new quotient
if prime_power[prime] < quotient:
prime_power[prime] = quotient
# Time to get product of each prime raised to corresponding power
product = 1
# Get each prime number with power
for prime, power in prime_power.iteritems():
product *= prime ** power
return product
def is_prime(num):
'''
Function that takes a `number` and checks whether it's prime or not
Returns False if not prime
Returns True if prime
'''
for i in xrange(2, int(num ** 0.5) + 1):
if num % i == 0:
return False
return True
if __name__ == '__main__':
print smallest_number_divisible(1, 20)
import timeit
t = timeit.timeit
print t('smallest_number_divisible(1, 20)',
setup = 'from __main__ import smallest_number_divisible',
number = 100)
```
I timed the code, and it came out with a somewhat OK result. The output came out to be:
```
0.0295362259729 # average 0.03
```
Any inputs?
| You are recomputing the list of prime numbers for each iteration. Do it just once and reuse it. There are also better ways of computing them than trial division; the [sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) is very simple yet effective, and will get you a long way in Project Euler. Also, a composite `n` always has a prime factor no larger than `n**0.5`, so you can break out earlier from your checks.
So add this before the `num` for loop:
```
prime_numbers = list_of_primes(int(end**0.5))
```
And replace `prime_gen` with :
```
prime_gen = (each_prime for each_prime in prime_numbers if each_prime <= int(num**0.5))
```
The `list_of_primes` function could be like this using trial division :
```
def list_of_primes(n):
    """Returns a list of all the primes up to and including n"""
    ret = []
    for j in xrange(2, n + 1):
        for k in xrange(2, int(j**0.5) + 1):
            if j % k == 0:
                break
        else:
            ret.append(j)
    return ret
```
But you are better off with a very basic [sieve of Eratosthenes](http://numericalrecipes.wordpress.com/2009/03/16/prime-numbers-2-the-sieve-of-erathostenes/):
```
from math import sqrt

def list_of_primes(n):
    sieve = [True for j in xrange(2, n + 1)]
    for j in xrange(2, int(sqrt(n)) + 1):
        if sieve[j - 2]:
            for k in range(j * j, n + 1, j):
                sieve[k - 2] = False
    return [j for j in xrange(2, n + 1) if sieve[j - 2]]
```
---
There is an alternative, better for most cases, definitely for Project Euler #5, way of going about calculating the least common multiple, using the greatest common divisor and [Euclid's algorithm](http://en.wikipedia.org/wiki/Euclidean_algorithm):
```
def gcd(a, b) :
while b != 0 :
a, b = b, a % b
return a
def lcm(a, b) :
return a // gcd(a, b) * b
reduce(lcm, xrange(start, end + 1))
```
On my netbook this gets Project Euler's correct result lightning fast:
```
In [2]: %timeit reduce(lcm, xrange(1, 21))
10000 loops, best of 3: 69.4 us per loop
```
|
pandas pivot\_table with dates as values
let's say I have the following table of customer data
```
df = pd.DataFrame.from_dict({"Customer":[0,0,1],
"Date":['01.01.2016', '01.02.2016', '01.01.2016'],
"Type":["First Buy", "Second Buy", "First Buy"],
"Value":[10,20,10]})
```
which looks like this:
```
Customer | Date | Type | Value
-----------------------------------------
0 |01.01.2016|First Buy | 10
-----------------------------------------
0 |01.02.2016|Second Buy| 20
-----------------------------------------
1 |01.01.2016|First Buy | 10
```
I want to pivot the table by the Type column.
However, the pivoting only gives the numeric Value columns as a result.
I'd desire a structure like:
```
Customer | First Buy Date | First Buy Value | Second Buy Date | Second Buy Value
---------------------------------------------------------------------------------
```
where the missing values are NAN or NAT
Is this possible using pivot\_table? If not, I can imagine some workarounds, but they are quite lengthy. Any other suggestions?
| Use [`unstack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html):
```
df1 = df.set_index(['Customer', 'Type']).unstack()
df1.columns = ['_'.join(cols) for cols in df1.columns]
print (df1)
Date_First Buy Date_Second Buy Value_First Buy Value_Second Buy
Customer
0 01.01.2016 01.02.2016 10.0 20.0
1 01.01.2016 None 10.0 NaN
```
If need another order of columns use [`swaplevel`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.swaplevel.html) and [`sort_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html):
```
df1 = df.set_index(['Customer', 'Type']).unstack()
df1.columns = ['_'.join(cols) for cols in df1.columns.swaplevel(0,1)]
df1.sort_index(axis=1, inplace=True)
print (df1)
First Buy_Date First Buy_Value Second Buy_Date Second Buy_Value
Customer
0 01.01.2016 10.0 01.02.2016 20.0
1 01.01.2016 10.0 None NaN
```
|
Java 8 Stream vs Collection Storage
I have been reading up on Java 8 Streams and the way data is streamed from a data source, rather than having the entire collection to extract data from.
This quote in particular I read on [an article](http://www.drdobbs.com/jvm/lambdas-and-streams-in-java-8-libraries/240166818?pgno=1) regarding streams in Java 8.
>
> No storage. Streams don't have storage for values; they carry values from a source (which could be a data structure, a generating function, an I/O channel, etc) through a pipeline of computational steps.
>
>
>
I understand the concept of streaming data in from a source piece by piece. What I don't understand is if you are streaming from a collection how is there no storage? The collection already exists on the Heap, you are just streaming the data from that collection, the collection already exists in "storage".
What's the difference memory-footprint wise if I were to just loop through the collection with a standard for loop?
| The statement about streams and storage means that a stream doesn't have any storage *of its own*. If the stream's source is a collection, then obviously that collection has storage to hold the elements.
Let's take one of the examples from that article:
```
int sum = shapes.stream()
.filter(s -> s.getColor() == BLUE)
.mapToInt(s -> s.getWeight())
.sum();
```
Assume that `shapes` is a `Collection` that has millions of elements. One might imagine that the `filter` operation would iterate over the elements from the source and create a temporary collection of results, which might also have millions of elements. The `mapToInt` operation might then iterate over that temporary collection and generate its results to be summed.
That's not how it works. There is no temporary, intermediate collection. The stream operations are pipelined, so elements emerging from `filter` are passed through `mapToInt` and thence to `sum` without being stored into and read from a collection.
If the stream source weren't a collection -- say, elements were being read from a network connection -- there needn't be any storage at all. A pipeline like the following:
```
int sum = streamShapesFromNetwork()
.filter(s -> s.getColor() == BLUE)
.mapToInt(s -> s.getWeight())
.sum();
```
might process millions of elements, but it wouldn't need to store millions of elements anywhere.
|
How to make console be able to print any of 65535 UNICODE characters
I am experimenting with Unicode characters and taking Unicode values from the [Wikipedia](http://en.wikipedia.org/wiki/List_of_Unicode_characters) page.
The problem is my console displays all of the **C0 Controls and Basic Latin** Unicode characters, i.e. from U+0000 to U+00FF, but for all other categories like **Latin Extended-B, Cyrillic, other languages, etc.**, the console prints the question mark character (**?**).
My C# code is
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace DataTypes
{
class Program
{
static void Main(string[] args)
{
char ch = '\u0181';
Console.WriteLine("the unicode character is value" + ch);
}
}
}
```
I am working on Windows 7, Visual Studio 2010. What should I do to increase Unicode support?
| There's a lot of history behind that question, so I'll noodle about it for a while first. Console mode apps can only operate with an 8-bit text encoding. This goes back to a design decision made 42 years ago by Ken Thompson et al when they designed Unix. A core feature of Unix was that terminal I/O was done through pipes, and you could chain pipes together to feed the output of one program to the input of another. This feature was also implemented in Windows and is supported by .NET as well with the ProcessStartInfo.RedirectStandardXxxx properties.
A nice feature, but it became a problem when operating systems started to adopt Unicode. Windows NT was the first one that was fully Unicode at its core. Unicode characters must always be encoded; a common choice back then was UCS-2, which later morphed into UTF-16. Now there's a problem with I/O redirection: a program that spits out 16-bit encoded characters is not going to operate well when it is redirected to a program that still uses 8-bit encoded characters.
Credit Ken Thompson as well with finding a solution for this problem, he invented utf-8 encoding.
That works in Windows as well. Easy to do in a console mode app, you have to re-assign the Console.OutputEncoding property:
```
using System;
using System.Text;
class Program {
static void Main(string[] args) {
Console.OutputEncoding = Encoding.UTF8;
Console.WriteLine("Ĥėļŀō ŵŏŗłđ");
Console.ReadLine();
}
}
```
You'll now however encounter another problem, the font selected for the console window is likely to be unable to render the text. Press Alt+Space to invoke the system menu, Properties, Font tab. You'll need to pick a non-raster font. Pickings are very slim, on Vista and up you can choose Consolas. Re-run your program and the accented characters should render properly. Unfortunately, forcing the console font programmatically is a problem, you'll need to document this configuration step. In addition, a font like Consolas doesn't have the full set of possible Unicode glyphs. You are likely to see rectangles appear for Unicode codepoints for which it has no glyphs. All an unsubtle reminder that creating a GUI program is really your best bet.
|
Overloading new operator in the derived class
I have overloaded the `new` operator in the Base class. However, when I add an additional overloaded `new` to the Derived class, the gcc compiler does not find the `new` operator in the Base class. Why?
Best,
Alex
```
#include <stdlib.h>
template <class t> class Base {
public:
Base() {}
void * operator new (size_t size, void *loc) { return loc; }
};
template <class t> class Derived : public Base<t> {
public:
Derived() {}
void * operator new (size_t size, int sz, void *loc) { return loc; }
};
void foo() {
void *loc = malloc(sizeof(Derived<char>));
Derived<char> *d = new (loc) Derived<char>();
}
```
gcc output:
```
new.cpp: In function ‘void foo()’:
new.cpp:17:45: error: no matching function for call to ‘Derived<char>::operator new(sizetype, void*&)’
Derived<char> *d = new (loc) Derived<char>();
^
new.cpp:17:45: note: candidate is:
new.cpp:11:10: note: static void* Derived<t>::operator new(size_t, int, void*) [with t = char; size_t = unsigned int]
void * operator new (size_t size, int sz, void *loc) { return loc; }
^
new.cpp:11:10: note: candidate expects 3 arguments, 2 provided
```
| When you invoke the `operator new` via the placement `new` expression
```
new (loc) Derived<char>();
```
the compiler looks for an overload of `operator new` in the `Derived` class (and not the `Base` class). It finds it, but your overload
```
void * operator new (size_t size, int sz, void *loc) { return loc; }
// ^ additional parameter
```
accepts more parameters, hence the error.
If you ask why the compiler is not smart enough to invoke the `Base`'s overload of `operator new`, it is because of [*name hiding*](https://stackoverflow.com/q/1628768/3093378): the `operator new` overload in the `Derived` class hides the one of the `Base` class. If you want to make the `Base::operator new` overload visible in your `Derived` class, use
```
using Base<t>::operator new;
```
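Placed inside the derived class, the fix might look like this (a sketch based on the code in the question, not a complete program):
```
template <class t> class Derived : public Base<t> {
public:
    Derived() {}
    // Un-hide the inherited placement form so both overloads are candidates.
    using Base<t>::operator new;
    void * operator new (size_t size, int sz, void *loc) { return loc; }
};
```
With the using-declaration in place, `new (loc) Derived<char>()` finds `Base`'s two-argument overload again and compiles.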
|
Kind of load balanced thread pool in java
I am looking for a load balanced thread pool with no success so far. (Not sure whether load balancing is the correct wording).
Let me explain what I try to achieve.
Part 1:
I have jobs, each consisting of 8 to 10 single tasks. On a 6-core CPU I let 8 threads work on these tasks in parallel, which seems to deliver the best performance. When one task is ready, another one can start. Once all ten tasks are finished, the complete job is done. Usually a job is done in 30 to 60 seconds.
Part 2:
Sometimes, unfortunately, a job takes more than two hours. This is correct due to the amount of data that has to be calculated.
The bad thing is that no other job can start while job 1 is running (assuming that all threads have the same duration), because it is using all threads.
My first idea:
Have 12 threads, allow up to three jobs in parallel.
BUT: that means the CPU is not fully utilized when there is only 1 job.
I am looking for a solution that gives full CPU power to job one when there is no other job. But when another job needs to be started while one is already running, I want the CPU power allocated to both jobs. And when a third or fourth job shows up, I want the CPU power allocated fairly to all of them.
I appreciate your answers...
thanks in advance
| One possibility might be to use a standard `ThreadPoolExecutor` with a different kind of task queue
```
public class TaskRunner {
private static class PriorityRunnable implements Runnable,
Comparable<PriorityRunnable> {
private Runnable theRunnable;
private int priority = 0;
public PriorityRunnable(Runnable r, int priority) {
this.theRunnable = r;
this.priority = priority;
}
public int getPriority() {
return priority;
}
public void run() {
theRunnable.run();
}
public int compareTo(PriorityRunnable that) {
return this.priority - that.priority;
}
}
private BlockingQueue<Runnable> taskQueue = new PriorityBlockingQueue<Runnable>();
private ThreadPoolExecutor exec = new ThreadPoolExecutor(8, 8, 0L,
TimeUnit.MILLISECONDS, taskQueue);
public void runTasks(Runnable... tasks) {
int priority = 0;
Runnable nextTask = taskQueue.peek();
if(nextTask instanceof PriorityRunnable) {
priority = ((PriorityRunnable)nextTask).getPriority() + 1;
}
for(Runnable t : tasks) {
exec.execute(new PriorityRunnable(t, priority));
priority += 100;
}
}
}
```
The idea here is that when you have a new job you call
```
taskRunner.runTasks(jobTask1, jobTask2, jobTask3);
```
and it will queue up the tasks in such a way that they interleave nicely with any existing tasks in the queue (if any). Suppose you have one job queued, whose tasks have priority numbers j1t1=3, j1t2=103, and j1t3=203. In the absence of other jobs, these tasks will execute one after the other as quickly as possible. But if you submit another job with three tasks of its own, these will be assigned priority numbers j2t1=4, j2t2=104 and j2t3=204, meaning the queue now looks like
j1t1, j2t1, j1t2, j2t2, etc.
This is not perfect however, because if all threads are currently working (on tasks from job 1) then the first task of job 2 can't start until one of the job 1 tasks is complete (unless there's some external way for you to detect this and interrupt and re-queue some of job 1's tasks). The easiest way to make things more fair would be to break down the longer-running tasks into smaller segments and queue those as separate tasks - you need to get to a point where each individual job involves more tasks than there are threads in the pool, so that some of the tasks will always start off in the queue rather than being assigned directly to threads (if there are idle threads then `exec.execute()` passes the task straight to a thread without going through the queue at all).
|
php - filter\_input - set to default value if GET key not set
I'd like to have a clean, elegant way to set a variable to a GET parameter if said parameter is set (and numeric), and to 0 (or some other default) if it's not set.
Right now I have:
```
if (($get_id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT))) {
$opened_staff['id'] = $get_id;
// some database queries etc.
} else { $opened_staff['id'] = 0; }
```
I tried using a callback function that returns 0 if the value is null or not numeric, but if the GET parameter 'id' isn't set, the callback won't even be called - it just sets `$get_id` to null.
Not a big deal to include the else statement, just thought I might be missing out on some functionality of `filter_input`.
| The [`filter_input`](http://php.net/manual/en/function.filter-input.php) function accepts an `options` parameter. Each filter accepts different options. For example, the `FILTER_VALIDATE_INT` filter can accept `default`, `min_range` and `max_range` options [as described here](http://php.net/manual/en/filter.filters.validate.php).
```
$get_id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT, array("options" => array(
"default" => 0,
"min_range" => 0
)));
var_dump($get_id);
// $get_id = 0 when id is not present in query string, not an integer or negative
// $get_id = <that integer> otherwise
```
|
Why is FAT#2 rarely used?
I read a single-line explanation of FAT#2 from Peter Abel's book *IBM PC Assembly Language and Programming*.
It says:
>
> Although FAT2 is still maintained, its use has never been implemented.
>
>
>
Wikipedia says:
>
> The FAT Region.
>
>
> This typically contains two copies (may vary) of the File Allocation Table for the sake of redundancy checking, although rarely used, even by disk repair utilities.
>
>
>
I could think of two strong reasons to use it
1. All FAT system has one (unless one disables it)
2. It's built-in
I realize that FAT is a very old file system, but why has FAT#2 **never** and **rarely** been implemented?
| Assuming "FAT2" means a second copy of the FAT (File Allocation Table) then the basic problem is that it's of little practical use, but I'm not sure if it's true if its actually never used.
The FAT is a central data structure in the FAT file system, so central that the file system itself is named after it. It's not only a table of which clusters have been allocated or not, it also stores linked lists of the clusters that make up each file. If a single sector in the FAT is damaged, a potentially large number of files could be lost, so someone at some point thought it would be a good idea to have a backup copy of the FAT.
The problem though is if the FAT is corrupt, how do you tell which copy of the FAT is the correct one? This limits the usefulness of the backup copy to cases when reading from the primary FAT results in read errors. So, at least in theory, if when reading a file the OS encountered an error while reading FAT it could try the backup copy.
However, physical disk errors aren't the only way the FAT could be corrupt. In particular, disk repair utilities, like `chkdsk`, weren't really designed to fix file system corruption caused by read errors. They were only meant to fix corruption due to bad data being written to the disk. The most common case would be when the computer was shut down in the middle of writing to the disk. In that case the file system could easily be in an inconsistent state. In particular, if the OS was in the middle of updating the FAT it might have updated the primary copy but not the backup copy, or it might have updated the backup copy but not the primary copy. There's no way to know which.
I'm not sure if operating systems actually do bother to check the backup FAT after read errors. It's hard to tell, because it rarely makes a difference in practice. Single sector read errors are uncommon on hard disks made in the last 20 years or so, since they remap failing sectors before they go bad. Drives tend to have no disk errors until they fail completely. Even on floppies physical disk errors tend to affect entire tracks, which would wipe out both copies of the FAT.
Looking at the source for the Linux and FreeBSD FAT file system implementations it appears neither tries the backup FAT if reading from the primary FAT fails. I don't know what any of Microsoft's three main implementations (MS-DOS, Windows 95 or Windows NT) do.
|
Regexp matching a string - positive lookahead
Regexp: `(?=(\d+))\w+\1`
String: `456x56`
Hi,
I am not getting the concept, how this regex matches "56x56" in the string "456x56".
1. The lookaround, (?=(\d+)), captures 456 and puts it into \1, for (\d+)
2. The word characters, \w+, match the whole string ("456x56")
3. \1, which is 456, should be followed by \w+
4. After backtracking the string, it should not find a match, as there is no "456" preceded by a word character
However the regexp matches 56x56.
| You don't anchor your regex, as has been said. Another problem is that `\w` also matches digits... Now look at how the regex engine proceeds to match with your input:
```
# begin
regex: |(?=(\d+))\w+\1
input: |456x56
# lookahead (first group = '456')
regex: (?=(\d+))|\w+\1
input: |456x56
# \w+
regex: (?=(\d+))\w+|\1
input: 456x56|
# \1 cannot be satisfied: backtrack on \w+
regex: (?=(\d+))\w+|\1
input: 456x5|6
# And again, and again... Until the beginning of the input: \1 cannot match
# Regex engine therefore decides to start from the next character:
regex: |(?=(\d+))\w+\1
input: 4|56x56
# lookahead (first group = '56')
regex: (?=(\d+))|\w+\1
input: 4|56x56
# \w+
regex: (?=(\d+))\w+|\1
input: 456x56|
# \1 cannot be satisfied: backtrack
regex: (?=(\d+))\w+|\1
input: 456x5|6
# \1 cannot be satisfied: backtrack
regex: (?=(\d+))\w+|\1
input: 456x|56
# \1 satified: match
regex: (?=(\d+))\w+\1|
input: 4<56x56>
```
|
Receiving an Arabic datetime error in asp.net
I use ADO disconnected mode to get data from database by filling dataset ds.
All data comes through correctly except the date field.
```
string strDate = ds.Tables[0].Rows[0]["H_DT"].ToString();
```
It throws an exception that says:
>
> Specified time is not supported in this calendar. It should be between
> 04/30/1900 00:00:00 (Gregorian date) and 11/16/2077 23:59:59
> (Gregorian date), inclusive.
>
>
>
I tried to write this code
```
System.Threading.Thread.CurrentThread.CurrentCulture = new CultureInfo("ar-sa");
System.Threading.Thread.CurrentThread.CurrentUICulture = new CultureInfo("ar-sa");
```
to change the culture to Arabic but without any luck.
## Update
The following is a screenshot of a quick watch for the variable:
![enter image description here](https://i.stack.imgur.com/P8ZF3.jpg)
| From [`DateTime.ToString` method](http://msdn.microsoft.com/en-us/library/k494fzbf%28v=vs.110%29.aspx)
>
> The `ToString()` method returns the string representation of the date
> and time in the calendar used by the current culture. If the value of
> the current `DateTime` instance is earlier than
> `Calendar.MinSupportedDateTime` or later than
> `Calendar.MaxSupportedDateTime`, the method throws an
> `ArgumentOutOfRangeException`.
>
>
>
Your `ar-sa` culture's default calendar is the [`UmAlQuraCalendar` calendar](http://msdn.microsoft.com/en-us/library/System.Globalization.UmAlQuraCalendar%28v=vs.110%29.aspx).
```
var culture = CultureInfo.GetCultureInfo("ar-sa");
Console.WriteLine(culture.Calendar); // prints UmAlQuraCalendar
```
And from [`UmAlQuraCalendar.MinSupportedDateTime` Property](http://msdn.microsoft.com/en-us/library/system.globalization.umalquracalendar.minsupporteddatetime%28v=vs.110%29.aspx)
>
> The earliest date and time supported by the UmAlQuraCalendar class,
> which is equivalent to the first moment of **April 30, 1900 C.E. in the
> Gregorian calendar**.
>
>
>
Since your `DateTime` is `1, 1, 1398`, it is entirely expected that it throws an `ArgumentOutOfRangeException`.
You can solve your problem by providing an [`IFormatProvider`](http://msdn.microsoft.com/en-us/library/system.iformatprovider%28v=vs.110%29.aspx) parameter to your `DateTime.ToString()` method, one that uses the [`GregorianCalendar`](http://msdn.microsoft.com/en-us/library/system.globalization.gregoriancalendar%28v=vs.110%29.aspx) by default. You can use [`InvariantCulture`](http://msdn.microsoft.com/en-us/library/system.globalization.cultureinfo.invariantculture%28v=vs.110%29.aspx), for example.
```
string strDate = ds.Tables[0].Rows[0]["H_DT"].ToString(CultureInfo.InvariantCulture);
```
>
> I wrote globalization configuration in web config as `ar-sa` to be
> global in all application but I faced the same error, please clarify
> me, thanks
>
>
>
A `DateTime` is based on the Gregorian calendar *by default*. From the [`DateTime` structure](http://msdn.microsoft.com/en-us/library/system.datetime.aspx);
>
> Each `DateTime` member implicitly uses the Gregorian calendar to perform
> its operation, with the exception of constructors that specify a
> calendar, and methods with a parameter derived from `IFormatProvider`,
> such as `System.Globalization.DateTimeFormatInfo`, that implicitly
> specifies a calendar.
>
>
>
That means your `ds.Tables[0].Rows[0]["H_DT"]` datetime is in the Gregorian calendar by default. But since you are using the `.ToString()` method without any parameter, the method uses your `CurrentCulture`, which is `ar-sa` since you wrote it in your web.config. And that culture has the `UmAlQuraCalendar` calendar by default. Since your datetime is out of range in this calendar, your code throws an exception.
Remember, you have a `DateTime` with `1318` as a year in the Gregorian calendar, **not** `1318` as a year in the `UmAlQuraCalendar` calendar.
As an example;
```
var date = new DateTime(1318, 1, 1);
Console.WriteLine(date.ToString(new CultureInfo("ar-sa")));
```
throws an `ArgumentOutOfRangeException` because it is exactly the same case as yours. This is a DateTime with the year `1318` in the Gregorian calendar, but there is no representation of this datetime in the `UmAlQuraCalendar` calendar, because that calendar's supported range starts at the Gregorian year `1900`.
Take a look at how [`UmAlQuraCalendar` calender implemented](http://referencesource.microsoft.com/#mscorlib/system/globalization/umalquracalendar.cs);
```
////////////////////////////////////////////////////////////////////////////
//
// Notes about UmAlQuraCalendar
//
////////////////////////////////////////////////////////////////////////////
/*
** Calendar support range:
** Calendar Minimum Maximum
** ========== ========== ==========
** Gregorian 1900/04/30 2077/11/17
** UmAlQura 1318/01/01 1500/12/30
*/
```
|
Select from two tables with group by date
I have two tables:
Table t1:
```
id | date_click
1 | 2016-02-31 17:17:23
2 | 2016-03-31 12:11:21
3 | 2016-03-31 13:13:23
```
From this table I want to get the count of the `id` field for each day.
For this I use the following query:
```
SELECT date_format(date_click, '%Y-%m-%d') as date_click_event
, COUNT(id) as count_click
FROM t1
GROUP
BY date_click_event
ORDER
BY date_click_event DESC;
```
It works fine.
The next table is t2.
```
id | count | date_sent
1 | 33 | 2016-02-31 11:12:23
2 | 22 | 2016-03-31 14:11:22
3 | 11 | 2016-03-31 13:12:13
```
To select data by date from this table I use the following query:
```
SELECT date_format(date_sent, '%Y-%m-%d') as date_sent_push
, SUM(count) as count_sent
FROM t2
GROUP
BY date_sent_push
ORDER
BY date_sent_push DESC
LIMIT 100;
```
It also works fine. My goal is to merge these two queries into one `SELECT`, so that in PHP I can output one table with the count of `id` by date from table `t1` and the sum of the `count` field by date from table `t2`.
When I try the following query:
```
SELECT date_format(t2.date_sent, '%Y-%m-%d') AS date_sent_push
, SUM(t2.count) as count_sent
, COUNT(t1.id) as count_click
FROM t2
, t1
WHERE date_format(t2.date_sent, '%Y-%m-%d') = date_format(t1.date_click, '%Y-%m-%d')
GROUP
BY date_sent_push
ORDER
BY date_sent_push
DESC
LIMIT 100;
```
It doesn't work. What am I doing wrong?
| First you should UNION these results and then group by day and select aggregate fields. You could also JOIN these queries, but that can be a problem if some days are missing in one of the two tables:
```
SELECT date_sent_push,
MAX(count_click) as count_click,
MAX(count_sent) as count_sent
FROM
(SELECT date_format(date_click, '%Y-%m-%d') as date_sent_push
, COUNT(id) as count_click
, NULL as count_sent
FROM t1
GROUP BY date_sent_push
UNION ALL
SELECT date_format(date_sent, '%Y-%m-%d') as date_sent_push
, NULL as count_click
, SUM(count) as count_sent
FROM t2
GROUP
BY date_sent_push
) as t3
GROUP BY date_sent_push
```
[SQL fiddle demo](http://sqlfiddle.com/#!9/75ef9b/1)
|
How to create a local read-only variable in bash?
How do I create both `local` and `declare -r` (read-only) variable in bash?
If I do:
```
function x {
declare -r var=val
}
```
Then I simply get a global `var` that is read-only
If I do:
```
function x {
local var=val
}
```
Then I get a local `var` that is not read-only.
If I do:
```
function x {
local var=val
declare -r var
}
```
Then I get a global again (I can access `var` from other functions).
How to combine both local and read-only in bash?
| Even though `help local` doesn't mention it in Bash *3.x*, `local` can accept the same options as `declare` (as of at least Bash 4.3.30, this documentation oversight has been corrected).
Thus, you can simply do:
```
local -r var=val
```
That said, `declare` *inside a function* by default behaves the same as `local`, as @ruakh states in a comment, so your 1st attempt should also have succeeded in creating a *local* read-only variable.
In Bash 4.2 and higher, you can *override* this with `declare`'s `-g` option to create a global variable even from inside a function (Bash 3.x does *not* support this.)
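As a quick illustration (a minimal sketch; the function and variable names are just placeholders):
```
#!/usr/bin/env bash

x() {
  local -r var=val            # local to x() and read-only
  echo "inside x: var=$var"
}

x
echo "outside x: var='${var-unset}'"   # prints 'unset' -> var did not leak out
```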
---
Thanks, [Taylor Edmiston](https://stackoverflow.com/users/149428/taylor-edmiston):
`help declare` shows all options support by *both* `declare` and `local`.
|
How do I adjust a QTableView height according to contents?
In my layout, my dynamically generated QTableViews seem to get resized to only show one row. I want the container of the table views to have a scrollbar instead of the individual table views, which should show their full contents.
| @savolai
Thank you very much for your code, it works well for me. I just do additional checks:
```
void verticalResizeTableViewToContents(QTableView *tableView)
{
int rowTotalHeight=0;
// Rows height
int count=tableView->verticalHeader()->count();
for (int i = 0; i < count; ++i) {
// 2018-03 edit: only account for row if it is visible
if (!tableView->verticalHeader()->isSectionHidden(i)) {
rowTotalHeight+=tableView->verticalHeader()->sectionSize(i);
}
}
// Check for scrollbar visibility
if (!tableView->horizontalScrollBar()->isHidden())
{
rowTotalHeight+=tableView->horizontalScrollBar()->height();
}
// Check for header visibility
if (!tableView->horizontalHeader()->isHidden())
{
rowTotalHeight+=tableView->horizontalHeader()->height();
}
tableView->setMinimumHeight(rowTotalHeight);
}
```
|
Need help parsing string in bash
I have a script that uses sensors to get CPU temps in Ubuntu.
```
IN="`/usr/bin/sensors -f | grep 'Core ' | cut -f2 -d"+" | cut -f1 -d '.'`"
echo $IN
```
It yields results like this
```
96 100 98 102
```
What I need is to be able to call it like cpu1 to get the first, cpu2 to get the second, and so on. I need to split it up so I can monitor core temps using MRTG; there might be a better way, but I haven't found one yet.
| You can convert `$IN` to an array like this:
```
TEMPERATURES=($IN)
```
Then you can index into that array to get a particular temperature; for example:
```
echo ${TEMPERATURES[0]}
```
If you pass a command-line parameter to your script, you can use that as an array index:
```
WHICH=$1 # 1st command line arg; $0 is name of script
TEMPERATURES=($IN)
echo ${TEMPERATURES[$WHICH]}
```
Calls to the script might then look like this (assuming the script is called `cpu`):
```
$ ./cpu 0
96
$ ./cpu 1
100
$ ./cpu 2
98
$ ./cpu 3
102
```
|
Elixir: Modifying value of module attribute
Is it possible to achieve the behavior below, wherein one tries to change the value of a module attribute to alter the behavior of the module methods?
```
defmodule Adder do
@num_to_add 10
def addTo(input), do: input + @num_to_add
end
IO.inspect Adder.addTo(5) # Prints 15
Adder.num_to_add = 20
IO.inspect Adder.addTo(5) # Expect it to print 25
```
It throws below error
```
** (CompileError) hello.exs:8: cannot invoke remote function Adder.num_to_add/0 inside match
(elixir) src/elixir_clauses.erl:26: :elixir_clauses.match/3
```
If this is not possible (as everything in Elixir is supposed to be immutable), is there any Elixir way of achieving similar behavior?
| This is not possible since attributes only exist up until compilation of that specific module. When the module is compiled all the attributes are inlined and forgotten about, so at the point you are able to call functions from that module it is no longer possible to modify the attributes.
This code should show this a bit more clearly:
```
defmodule Test do
@attr 1
@attr 2
def attr do
@attr
end
end
IO.inspect Test.attr # => 2
Module.put_attribute(Test, :attr, 3)
IO.inspect Test.attr # => ** (ArgumentError) could not call put_attribute on module Test because it was already compiled
```
Note that you can change the value of the attribute while the module hasn't been compiled (for example in the module's body) simply by setting it again, like I do here when setting `@attr` to `2`.
Incidentally what you seem to be trying to achieve can be done easily with an `Agent`:
```
defmodule Storage do
def start_link do
Agent.start_link(fn -> 10 end, name: __MODULE__)
end
def add_to(input) do
Agent.get_and_update(__MODULE__, fn (x) -> {x + input, x + input} end)
end
end
Storage.start_link
IO.inspect Storage.add_to(5) # => 15
IO.inspect Storage.add_to(5) # => 20
```
A good rule of thumb in Elixir is that whenever you need to keep track of some mutable state you will need to have a process wrapping that state.
|
Magento EU VAT tax validation fails, customer group change not applied
Our company is VAT registered and removes VAT from EU B2B sales where a valid VAT number has been provided. Magento version is 1.9 which includes support for this tax issue, customer group is (was) automatically assigned once the VAT number is validated.
There was originally a problem where Magento showed two instances of a VAT number entry form, one of these worked and the other didn't so results were unreliable. I subsequently hid the VAT form which was not working and all appeared to be working properly. There was still an issue if the customer didn't read the instruction to remove the country code from the VAT number, this prevents the VAT number from being validated but overall it was working.
Recently, VAT has not been removed for EU VAT reg customers, even manually adjusting the customers account group to the VAT exempt group has not removed VAT.
It seems that checking of the VAT number against the appropriate VAT database is not taking place. We've tried using what we know to be valid VAT numbers and an error "Your Tax ID cannot be validated. If you believe this is an error, please contact us at [email address]"
Presumably everyone who uses the EU VAT rules feature is having the same issue, perhaps something as simple as a hyperlink being changed is the cause, searching has revealed no other recent similar problems though.
Can someone advise where in Magento the code for the VAT check and authorisation is held please?
Thanks in advance for any help.
RobH
| There are 2 fields for the VAT ID inside Magento. The first one is related to the customer entity, which won't be checked against the VAT validation service "VIES". It has only some information character, but is has no impact of the customer group assignment.
The second VAT ID field is related to the customer address entity, which will be checked against the VIES service. If the VAT ID is valid, your customer will be assigned to your defined customer group (e.g. "Valid VAT ID", etc.). Important: Magento doesn't accept VAT IDs with country code prefixes. E.g. "ATU69014309" is not a valid TAX ID for Magento. Instead the VAT ID without country prefix "U69014309" is valid. You can fix this issue easily if you extend the following method of the class Mage\_Customer\_Helper\_Data:
```
/**
* Send request to VAT validation service and return validation result
*
* @param string $countryCode
* @param string $vatNumber
* @param string $requesterCountryCode
* @param string $requesterVatNumber
*
* @return Varien_Object
*/
public function checkVatNumber($countryCode, $vatNumber, $requesterCountryCode = '', $requesterVatNumber = '')
{
// Remove the country code prefix from the vat number
$vatNumber = preg_replace("/^[a-z]{2}/i", "", $vatNumber);
...
```
Also important: The VAT ID which the user enters during the checkout inside the address will be also ignored (known Magento Issue).
Kind regards,
Robert
|
SQL Server Configuration Manager express 2012
I want to enable TCP/IP on my SQL Server Express 2012 but I cannot find SQL Server Configuration Manager. I have windows 8 and I made a search for "SQL Server Configuration Manager" but nothing comes up.
Do I have to install SQL Server Configuration Manager separately or does it come with SQL Server? If it comes within it, how do I start it?
| As is [stated on My Tec Bits](http://www.mytecbits.com/microsoft/sql-server/sql-server-2012-configuration-manager-in-windows-8):
If you have installed SQL Server 2012 on Windows 8, you may not see the Configuration Manager in the app list. SQL Server 2012 Configuration Manager is not a stand-alone program; it is a snap-in for MMC (Microsoft Management Console). Follow the steps below in Windows 8 to open the Configuration Manager of SQL Server 2012 or 2008.
1. Go to Windows 8 Start screen.
2. Start typing in SQLServerManager11.msc if you are looking for SQL
Server 2012 configuration manager. Type in SQLServerManager10.msc if
you are looking for SQL Server 2008 configuration manager.
3. In the result panel you can see the SQLServerConfiguration Manager.
4. Click the icon to launch the SQL Server Configuration manager.
5. The configuration manager will open in MMC.
|
Slick Carousel next and previous buttons showing above/below, rather than left/right
I'm using [Slick Carousel](http://kenwheeler.github.io/slick/) and my "next" and "previous" arrows are appearing above and below my images, rather than on each side. I'm just looking for it to appear the way it does in the Slick docs.
The buttons aren't in the html, they're generated by the slick js.
Here's the html:
```
<div class="albumsCarousel">
<div><img class="slickImage" src=./images/betterQuit.png></div>
<div><img class="slickImage" src=./images/betterVill.png></div>
<div><img class="slickImage" src=./images/casio.jpg></div>
<div><img class="slickImage" src=./images/betterWorried.png></div>
<div><img class="slickImage" src=./images/betterFrost.png></div>
<div><img class="slickImage" src=./images/betterWeird.png></div>
<div><img class="slickImage" src=./images/betterOphelia.png></div>
<div><img class="slickImage" src=./images/betterEnya.png></div>
<div><img class="slickImage" src=./images/betterXiu.png></div>
<div><img class="slickImage" src=./images/betterImpasse.png></div>
<div><img class="slickImage" src=./images/betterV.png></div>
<div><img class="slickImage" src=./images/betterThrone.png></div>
<div><img class="slickImage" src=./images/betterSholi.png></div>
<div><img class="slickImage" src=./images/betterPGirls2.png></div>
</div>
```
here's the JS:
```
$(document).ready(function(){
$('.albumsCarousel').slick({
infinite: true,
slidesToShow: 3,
slidesToScroll: 3,
arrows: true,
cssEase: "ease",
autoplay: true,
autoplaySpeed: 3000,
nextArrow: '<i class="fa fa-arrow-right"></i>',
prevArrow: '<i class="fa fa-arrow-left"></i>'
  });
});
```
and here's the CSS (which I'm pretty sure isn't playing a role here):
```
.slick-prev, .slick-next {
transform: translate3d(0, 0, 0);/* fix for chrome not rendering */
}
.slick-dots {
transform: translate3d(0, 0, 0);
}
```
| I had a similar problem with slick; the navigation arrows were above and under the image, but I solved it with this simple CSS.
```
.nextArrowBtn{
position: absolute;
z-index: 1000;
top: 50%;
right: 0;
color: #BFAFB2;
}
.prevArrowBtn{
position: absolute;
z-index: 1000;
top: 50%;
left: 0;
color: #BFAFB2;
}
```
Now in your slick configuration, add the classes above to the prevArrow and nextArrow attributes.
```
$("#slick-images").slick({
nextArrow: '<i class="icon fa-arrow-right nextArrowBtn"></i>',
prevArrow: '<i class="icon fa-arrow-left prevArrowBtn"></i>'
});
```
You can add other attributes as desired; optionally, you can also style the arrows as you like. This little bit of CSS positions the arrows vertically right in the middle of the images. I hope this helps someone out there. Cheers!
|
Round to nearest MINUTE or HOUR in Standard SQL BigQuery
There are some easy short ways to round to nearest MINUTE for T-SQL as noted [here](https://stackoverflow.com/questions/6666866/t-sql-datetime-rounded-to-nearest-minute-and-nearest-hours-with-using-functions).
I am looking to get the same short syntax for Standard SQL.
| Below is for BigQuery Standard SQL
```
#standardSQL
WITH `project.dataset.table` AS (
SELECT DATETIME '2018-01-01 01:05:56' input_datetime
)
SELECT input_datetime,
DATETIME_TRUNC(input_datetime, MINUTE) rounded_to_minute,
DATETIME_TRUNC(input_datetime, HOUR) rounded_to_hour
FROM `project.dataset.table`
```
with result as
```
Row input_datetime rounded_to_minute rounded_to_hour
1 2018-01-01T01:05:56 2018-01-01T01:05:00 2018-01-01T01:00:00
```
For `TIMESTAMP` or `TIME` data types - you can use respectively - `TIMESTAMP_TRUNC()` or `TIME_TRUNC()`
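For example, a quick sketch of the same idea for those types (note that the `*_TRUNC` functions truncate, i.e. round down, rather than rounding to the nearest unit):
```
#standardSQL
SELECT
  TIMESTAMP_TRUNC(TIMESTAMP '2018-01-01 01:05:56+00', MINUTE) AS ts_to_minute,
  TIME_TRUNC(TIME '01:05:56', HOUR) AS time_to_hour
```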
|
Open XFCE Terminal Window and Run command in same Window
I would like to start 'xfce4-terminal' and then run a command but finish with a prompt to repeat the command.
I use the software 'todo.txt' and like to have it open in a small window which I can refer to and add entries, etc.
At the moment, I have the following command line which works...
```
xfce4-terminal --title=todo &
```
...I then have to **switch to that terminal window** and type...
```
/usr/local/bin/todo.sh -t ls
```
I have tried all of these, but each one 'finishes' the window and will not let me type in it:
```
xfce4-terminal --execute '/usr/local/bin/todo.sh -t ls' --title=todo --hold &
xfce4-terminal --command '/usr/local/bin/todo.sh -t ls' --title=todo --hold &
xfce4-terminal --command='/usr/local/bin/todo.sh -t ls' --title=todo &
```
Can anyone help please?
I would like to open the terminal window, run the command, then leave me with a working prompt.
Thanks.
| Use the `-c` option of the bash command to wrap multiple commands, like this:
```
$ bash -c "ls /var/log/apt; bash"
history.log history.log.4.gz term.log.10.gz term.log.5.gz
history.log.10.gz history.log.5.gz term.log.11.gz term.log.6.gz
history.log.11.gz history.log.6.gz term.log.12.gz term.log.7.gz
history.log.12.gz history.log.7.gz term.log.1.gz term.log.8.gz
history.log.1.gz history.log.8.gz term.log.2.gz term.log.9.gz
history.log.2.gz history.log.9.gz term.log.3.gz
history.log.3.gz term.log term.log.4.gz
username@hostname:~$
```
Then, use that bash command in `xfce4-terminal` command, like this:
```
xfce4-terminal -e 'bash -c "ls /var/log/apt; bash"' -T "Run and ready"
```
whereby the options: `-e` to run the commands, `-T` to set the title.
As a result, the terminal does not close and ready with a new prompt.
|
Why do we need natural log of Odds in Logistic Regression?
I know what odds are. They are the ratio of the probability of some event happening to the probability of it not happening. So, in the context of classification, if the probability that an input feature vector $X$ belongs to class 1 is $p(X)$, then the odds are:
$O = \frac{p(X)}{1-p(X)}$
This is what I don't understand. When we have probability, why do we need Odds here at all?
| I think I figured out the answer myself after doing a bit of reading, so I thought of posting it here. It looks like I got a little confused.
So as per my post
$$O = \frac{P(X)}{1-P(X)}.$$
So I forgot to take into account the fact that $P(X)$ itself is the probability given by the logistic function:-
$$P\_\beta(X) = \frac{e^{\beta^TX}}{1 + e^{\beta^TX} }.$$
So replacing this in in the equation for $O,$ we get
$$O = \frac{\frac{e^{\beta^TX}}{1 + e^{\beta^TX} }}{1-\frac{e^{\beta^TX}}{1 + e^{\beta^TX} }} = e^{\beta^TX}.$$
So $e^{\beta^TX}$ is nothing but the odds of the input feature vector $X$ belonging to the positive class. With further algebraic manipulation, we can obtain a linear form, and the reason for doing this is to be able to interpret the coefficient vector $\beta$ in a precise manner. That algebraic manipulation is basically taking the natural log of the latest form of $O$ ($e^{\beta^TX}$),
i.e.
$$\ln(O) = \ln \left(e^{\beta^TX}\right) =\beta^TX $$
So the expanded form of $\beta^TX$ is:-
$$\ln(O) = \beta\_0+\beta\_1x\_1+\beta\_2x\_2+\cdots+\beta\_nx\_n$$
So the real use of this, as I have understood it, is to be able to interpret the coefficients easily while keeping the linear form just like in multiple linear regression. So looking at the latest expanded form of $\ln(O)$ we can say that a unit increase in $x\_i$ causes the log of Odds to increase by $\beta\_i.$
|
open or create file in python and append to it
how do you do this series of actions in python?
1) Create a file if it does not exist and insert a string
2) If the file exists, search if it contains a string
3) If the string does not exist, append it at the end of the file
I'm currently doing it this way but I'm missing a step
EDIT
With this code, every time I call the function it seems that the file does not exist, and it overwrites the older file
```
def func():
if not os.path.exists(path):
#always take this branch
with open(path, "w") as myfile:
myfile.write(string)
myfile.flush()
myfile.close()
else:
with open(path) as f:
if string in f.read():
print("string found")
else:
with open(path, "a") as f1:
f1.write(string)
f1.flush()
f1.close()
f.close()
```
| Try this:
```
with open(path, 'a+') as file:
file.seek(0)
content = file.read()
if string not in content:
file.write(string)
```
seek will move your pointer to the start, and write will move it back to the end.
Edit:
Also, you don't need to check the path.
Example:
```
>>> f = open('example', 'a+')
>>> f.write('a')
1
>>> f.seek(0)
0
>>> f.read()
'a'
```
The file `example` didn't exist, but when I called open() it was created. [see why](https://www.tutorialspoint.com/python/python_files_io.htm)
|
SVG image inside circle
I want to create a circle which contains an image. I already tried using `pattern` or `filter`, but neither of them gives me the expected result. Below is the code:
```
<svg id="graph" width="100%" height="400px">
<!-- filter -->
<filter id = "born1" x = "0%" y = "0%" width = "100%" height = "100%">
<feImage xlink:href = "https://cdn3.iconfinder.com/data/icons/people-professions/512/Baby-512.png"/>
</filter>
<circle id = "born" class = "medium" cx = "5%" cy = "20%" r = "5%" fill = "white" stroke = "lightblue" stroke-width = "0.5%" filter = "url(#born1)"/>
<!-- pattern -->
<defs>
<pattern id="image" x="0" y="0" height="100%" width="100%">
<image x="0" y="0" xlink:href="https://cdn3.iconfinder.com/data/icons/people-professions/512/Baby-512.png"></image>
</pattern>
</defs>
<circle id = "sd" class = "medium" cx = "5%" cy = "40%" r = "5%" fill = "white" stroke = "lightblue" stroke-width = "0.5%" fill="url(#image)"/>
</svg>
```
My goal is to preserve the circle and give it a background image inside, something like the CSS `background-image` attribute.
| A pattern will work. You just have to give the `<image>` a size. Unlike HTML, SVG images default to width and height of zero.
Also, if you want the image to scale with the circle, then you should specify a `viewBox` for the pattern.
```
<svg id="graph" width="100%" height="400px">
<!-- pattern -->
<defs>
<pattern id="image" x="0%" y="0%" height="100%" width="100%"
viewBox="0 0 512 512">
<image x="0%" y="0%" width="512" height="512" xlink:href="https://cdn3.iconfinder.com/data/icons/people-professions/512/Baby-512.png"></image>
</pattern>
</defs>
<circle id="sd" class="medium" cx="5%" cy="40%" r="5%" fill="url(#image)" stroke="lightblue" stroke-width="0.5%" />
</svg>
```
|
Initialize array holding struct more efficiently
I have the following code:
```
const N: usize = 10000;
const S: usize = 7000;
#[derive(Copy, Clone, Debug)]
struct T {
a: f64,
b: f64,
f: f64
}
fn main() {
let mut t: [T; N] = [T {a: 0.0, b: 0.0, f: 0.0}; N];
for i in 0..N {
t[i].a = 0.0;
t[i].b = 1.0;
t[i].f = i as f64 * 0.25;
}
for _ in 0..S {
for i in 0..N {
t[i].a += t[i].b * t[i].f;
t[i].b -= t[i].a * t[i].f;
}
println!("{}", t[1].a);
}
}
```
I'm unsure why the array `t` must be initialized that way. The first for-loop is intended to initialize the structs contained in the array to their respective values.
When I try to omit the initialization directly with the array:
```
let mut t: [T; N];
```
I get the following error:
>
> error[E0381]: use of possibly uninitialized variable: t
>
>
>
All for-loops are intended to be as such; I just want to know if there is a smarter way to handle the array and its initialization with the first for-loop.
|
>
> I'm unsure why the array `t` must be initialized that way.
>
>
>
Because Rust doesn't let you touch (entirely or partially) uninitialised values. The compiler isn't smart enough to prove that the loop will *definitely* initialise everything, so it just forbids it.
Now, the *optimiser* is a different story. That *can* notice that the initialisation is redundant and skip it... in theory. It doesn't appear to do so with that code and the current compiler. Such is optimisation.
>
> I just want to know if there is a smarter way for the array and it's initialization with the first for-loop.
>
>
>
The smart way is to just leave the code as it is. Statistically speaking, it's unlikely to be a bottleneck. If profiling suggests that it *is* a bottleneck, then you can use [`uninitialized`](https://doc.rust-lang.org/std/mem/fn.uninitialized.html). However, note that doing so can lead to undefined behaviour if you use it wrong. Although this is not an exhaustive list, you should *definitely avoid* using it on any type that is not `Copy`.
If you do *need* to use it, I strongly recommend also adjusting the first loop to make forgetting to initialise an element *or* a field in the structure impossible:
```
let mut t: [T; N] = unsafe { ::std::mem::uninitialized() };
for (i, e) in t.iter_mut().enumerate() {
*e = T {
a: 0.0,
b: 1.0,
f: i as f64 * 0.25,
}
}
```
|
FileStream locking a file for reading and writing
I have the following code block which is giving me a headache.
Logically it should work as I am using the filestream providing the lock within the using statement. When it gets to the line that creates the StreamWriter, it fails saying "the file is not writable".
Now my program is a multithreaded application. Any thread could be trying to write to this file. I need the program to lock the file, read the contents, check the contents, then write back any changes. During that process, no other thread should be able to access that file.
```
using (var fs = File.Open(fileLoc, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
{
var sr = new StreamReader(fs);
var str = sr.ReadToEnd();
var strArray = str.Split(',');
if (strArray.Any(st => st == text))
{
return;
}
sr.Close();
var sw = new StreamWriter(fs);
sw.Write(str + text);
sw.Flush();
sw.Close();
}
```
| The FileShare.None flag does not cause threads to queue; it just locks the file, hence the exception that you get. To provide mutually exclusive access you can lock a shared object prior to writing.
But you say this "Now my program is a multithreaded application. Any thread could be trying to write to this file." Now, do these threads all use exactly the same method to write to the file? Let's assume they do, then this should work ...
Create a static class variable ...
```
private static object lockObject = new object();
```
Use it here ...
```
lock (lockObject)
{
using(var sw = new StreamWriter(fs))
{
sw.Write(str + text);
}
}
```
I have made some assumptions about the threads, so you may have to look up info on synchronization if this doesn't work or provide some more info to us.
Also, please close your `StreamReader` earlier (in case the method returns earlier). Close it immediately after you use it or better yet use `using`.
|
Determine if an app exists and launch that app on iOS
Is there a way to check iOS to see if another app has been installed and then launched? If memory serves me this was not possible in early versions but has this been changed?
| Doable, but tricky.
Launching installed apps, like the FB or Twitter apps, is done using the Custom URL Scheme. These can be used both in other apps as well as on web sites.
[Here's an article about how to do this with your own app](http://iphonedevelopertips.com/cocoa/launching-your-own-application-via-a-custom-url-scheme.html).
Seeing if the URL is there, though, can be tricky. A good example of an app that detects installed apps is [Boxcar](http://boxcar.io). The thing here is that Boxcar has advance knowledge of the custom URLs. I'm fairly (99%) certain that there is a `canOpenURL:` method, so knowing the custom scheme of the app you want to target ahead of time makes this simple to implement.
[Here's a partial list](https://ios.gadgethacks.com/news/always-updated-list-ios-app-url-scheme-names-0184033/) of some of the more popular URL schemes you can check against.
There is a way to find out the custom app URL : <https://www.amerhukic.com/finding-the-custom-url-scheme-of-an-ios-app>
But if you want to scan for apps and deduce their URLs, it can't be done on a non-jailbroken device.
[Here's a blog post](http://www.amitay.us/blog/files/how_to_detect_installed_ios_apps.php) talking about how the folks at Bump handled the problem.
|
Databinding a label in C# with additional text?
Is there an easy way to databind a label AND include some custom text?
Of course I can bind a label like so:
```
someLabel.DataBindings.Add(new Binding("Text", this.someBindingSource, "SomeColumn", true));
```
But how would I add custom text, so that the result would be something like:
```
someLabel.Text = "Custom text " + databoundColumnText;
```
Do I really have to resort to custom code...?
(maybe my head is too fogged from my cold and I can't see a simple solution?)
TIA for any help on this matter.
| You can always use Binding.Format event.
<http://msdn.microsoft.com/en-us/library/system.windows.forms.binding.format.aspx>
>
> The Format event is raised when data
> is pushed from the data source into
> the control. You can handle the Format
> event to convert unformatted data from
> the data source into formatted data
> for display.
>
>
>
Something like...
```
private string _bindToValue = "Value from DataSource";
private string _customText = "Some Custom Text: ";
private void Form1_Load(object sender, EventArgs e)
{
var binding = new Binding("Text",_bindToValue,null);
binding.Format += delegate(object sentFrom, ConvertEventArgs convertEventArgs)
{
convertEventArgs.Value = _customText + convertEventArgs.Value;
};
label1.DataBindings.Add(binding);
}
```
|
remove gradient of a image without a comparison image
Currently I am having much difficulty thinking of a good method of removing the gradient from an image I received.
The image is a picture taken by a microscope camera that has a light glare in the middle. The image has a pattern that goes throughout the image. However, I am supposed to remove the light glare on the image created by the camera light.
Unfortunately, due to the nature of the camera, it is not possible to take a picture on a black background with the light to find the gradient distribution. Nor do I have a comparison image that is without the gradient. (Note: the location of the light glare will always be consistent when the picture is taken.)
In easier terms, it's like having a photo with a flash in it but wanting to get rid of the flash. The only problem is I have no way of obtaining the image without the flash to compare to, or even obtaining a black image with just the flash on it.
My current thought is to conduct edge detection and obtain samples in specific locations away from the edges (due to color difference) and use that to gauge the distribution of the gradient, since those areas are supposed to have relatively identical colors. However, I was wondering if there was an easier and better way to do this.
If needed I will post an example of the image later.
At the moment I have a preference for solving this in C++ using OpenCV, if that makes it easier.
Thanks in advance for any possible ideas for this problem. If there is another link, tutorial, or post that may solve my problem, I would greatly appreciate it.
![enter image description here](https://i.stack.imgur.com/sUqRv.jpg)
As you can tell from the white spot, there is a light that's being shone on the image, and the top is lighter than the bottom due to the light. The color inside the oval is actually different when the picture is taken in color. However, the color between the box and the oval should be consistent. My original idea was to perhaps sample only those areas somehow and build a profile that I can utilize to remove the light, but I am unsure how effective that would be or if there is a better way.
**EDIT :**
Well, I tried out Roger's suggestion and the results were surprisingly good: using a Gaussian blur with a 110 kernel to find the illumination and conducting CLAHE on top of that (both done in OpenCV).
![enter image description here](https://i.stack.imgur.com/ElPsP.jpg)
However, my colleague told me that the image doesn't look perfectly uniform and pointed out that the area around where the light used to be is slightly brighter. He suggested trying a selective Gaussian blur where areas above a certain threshold pixel value are not blurred while the rest of the image is blurred.
Does anyone have opinions regarding this and perhaps a link, tutorial, or an example of something like this being done? Most of the things I find tend to be selective blur for programs like Photoshop and GIMP.
**EDIT2 :**
![enter image description here](https://i.stack.imgur.com/ULdMy.jpg)
It is difficult to tell with just the eyes, but I believe I have achieved relatively close uniformization by using a simple plane-fitting algorithm, ((-A \* x - B \* y) / C) over (x, y, z) where z is the pixel value. I think this could be improved by perhaps utilizing a sine fitting function? I am unsure. But I am relatively happy with the results. Many thanks to Roger for the great ideas.
I believe using a bunch of pictures and getting the average would've been another good method (suggested by Roger), but unfortunately I was not able to implement this since I was not supplied with various pictures, and the machine is under modification so I was unable to use it.
| I have done some work in this area previously and found that a large Gaussian blur kernel can produce a reasonable approximation to the background illumination. I will try to get something working on your example image but, in the meantime, here is an example of your image after Gaussian blur with radius 50 pixels, which may help you decide if it's worth progressing.
![Blurred](https://i.stack.imgur.com/1c7e7.png)
**UPDATE**
Just playing with this image, you can actually get a reasonable improvement using adaptive histogram equalisation (I used [CLAHE](http://en.wikipedia.org/wiki/Adaptive_histogram_equalization#Contrast_Limited_AHE)) - see comparison below - any use?
![CLAHE](https://i.stack.imgur.com/ScOvR.jpg)
I will update this answer with more details as I progress.
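If it helps to reproduce this, below is a rough sketch of the idea in Python with OpenCV (the file name, blur sigma, and CLAHE parameters are placeholders to experiment with; the original discussion assumed C++, but the calls are the same):
```
import cv2
import numpy as np

# Load the microscope image as grayscale (path is a placeholder)
img = cv2.imread("microscope.jpg", cv2.IMREAD_GRAYSCALE)

# Approximate the background illumination with a very large Gaussian blur
background = cv2.GaussianBlur(img, (0, 0), sigmaX=50)

# Divide by the illumination estimate to flatten the lighting, then rescale to 8-bit
flat = img.astype(np.float32) / (background.astype(np.float32) + 1e-6)
flat = cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Contrast Limited Adaptive Histogram Equalization on the flattened image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
result = clahe.apply(flat)

cv2.imwrite("corrected.jpg", result)
```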
|
How can I get rid of the 'remote: ' messages that appear on every line returned by post-receive in git?
I've created a post-receive hook in git. The hook outputs messages to the screen, which are sent back to the git client doing the push and printed there.
How can I get rid of the 'remote: ' text before every single line of output? It's rather distracting. (I know it's possible because I've seen it in practice, I just don't know how it's done.)
| Note: The prefix can be important to avoid mistaking messages from the remote system as messages from the local system.
That said, there is no way to turn off the prefix, but they are all written to stderr. You could redirect/capture/filter the stderr of *git push* to do what you want.
A rough way of doing might be something like this:
```
git push ... 2>&1 | sed -e 's/^remote: //'
```
It sends stdout to a pipe and makes stderr go to the same place. At the other end of the pipe, *sed* reads the combined output and deletes any `remote:` prefixes. This should be okay since we are unlikely to see `remote:` prefixes in the stdout stream. Combining stdout and stderr like this is generally acceptable for interactive use (since they were probably going to the same tty device anyway), but it may not be a good idea for use in automated contexts (e.g. scripts).
|
Flutter - How to display a GridView within an AlertDialog box?
I'm trying to display a GridView of images to allow a user to pick from within a Dialog Box and I'm having rendering issues. My UI fades as if a view was appearing on top of it, but nothing displays. Here is my code...
```
Future<Null> _neverSatisfied() async {
return showDialog<Null>(
context: context,
barrierDismissible: false, // user must tap button!
child: new AlertDialog(
title: new Text(
'SAVED !!!',
style:
new TextStyle(fontWeight: FontWeight.bold, color: Colors.black),
),
content: new GridView.count(
crossAxisCount: 4,
childAspectRatio: 1.0,
padding: const EdgeInsets.all(4.0),
mainAxisSpacing: 4.0,
crossAxisSpacing: 4.0,
children: <String>[
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
].map((String url) {
return new GridTile(
child: new Image.network(url, fit: BoxFit.cover, width: 12.0, height: 12.0,));
}).toList()),
actions: <Widget>[
new IconButton(
splashColor: Colors.green,
icon: new Icon(
Icons.done,
color: Colors.blue,
),
onPressed: () {
Navigator.of(context).pop();
})
],
),
);
}
```
| The issue is that the `AlertDialog` tries to get the intrinsic width of the child. But `GridView`, being lazy, does not provide intrinsic properties. Just try wrapping the `GridView` in a `Container` with some `width`.
Example:
```
Future<Null> _neverSatisfied() async {
return showDialog<Null>(
context: context,
barrierDismissible: false, // user must tap button!
child: new AlertDialog(
contentPadding: const EdgeInsets.all(10.0),
title: new Text(
'SAVED !!!',
style:
new TextStyle(fontWeight: FontWeight.bold, color: Colors.black),
),
content: new Container(
// Specify some width
width: MediaQuery.of(context).size.width * .7,
child: new GridView.count(
crossAxisCount: 4,
childAspectRatio: 1.0,
padding: const EdgeInsets.all(4.0),
mainAxisSpacing: 4.0,
crossAxisSpacing: 4.0,
children: <String>[
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
'http://www.for-example.org/img/main/forexamplelogo.png',
].map((String url) {
return new GridTile(
child: new Image.network(url, fit: BoxFit.cover, width: 12.0, height: 12.0,));
}).toList()),
),
actions: <Widget>[
new IconButton(
splashColor: Colors.green,
icon: new Icon(
Icons.done,
color: Colors.blue,
),
onPressed: () {
Navigator.of(context).pop();
})
],
),
);
}
```
You can read more about how intrinsic properties are calculated for different views [here](https://docs.flutter.io/flutter/rendering/RenderViewportBase/computeMinIntrinsicWidth.html).
Hope that helps!
|
Auto mocking container for Windsor and Rhino
I want to do automocking with Windsor so that I can do something like
```
_controller = _autoMockingContainer.Create<MyControllerWithLoadsOfDepdencies>();
```
There used to be a Windsor auto mocking container in [Ayende's](http://ayende.com/blog/) [Rhino](http://blog.eleutian.com/CommentView,guid,762249da-e25a-4503-8f20-c6d59b1a69bc.aspx) libraries. But that doesn't seem to be maintained any more, so the dependencies are a bit old (it's using Castle Windsor 2, but we need 2.5 to be referenced), therefore causing dll hell.
Are there any viable alternatives? I tried pulling out the relevant classes from Rhino testing, but it's much more involved than I can handle.
| Thanks to @mookid8000's link and help from a colleague, I created this... which seems to do the trick.
```
public abstract class TestBase
{
static readonly WindsorContainer _mockWindsorContainer;
static TestBase()
{
_mockWindsorContainer = new WindsorContainer();
_mockWindsorContainer.Register(Component.For<LazyComponentAutoMocker>());
}
protected static T MockOf<T>() where T : class
{
return _mockWindsorContainer.Resolve<T>();
}
protected static T Create<T>()
{
_mockWindsorContainer.Register(Component.For<T>());
return _mockWindsorContainer.Resolve<T>();
}
}
public class LazyComponentAutoMocker : ILazyComponentLoader
{
public IRegistration Load(string key, Type service, IDictionary arguments)
{
return Component.For(service).Instance(MockRepository.GenerateStub(service));
}
}
```
|
How do you resolve a "The parameters (number[]) don't match the method signature for SpreadsheetApp.Range.setValues" error
I am getting this error:
>
> "The parameters (number[]) don't match the method signature for SpreadsheetApp.Range.setValues."
>
>
>
in my Google Apps Script when I try to write an array of values to a sheet.
Below is a shortened (simplified) version of code. The actual code runs through about 10,000 records.
The error is generated in the last line, when the `setValues` is called.
I know I'm missing something super simple here.
```
function writeArrayToSheet() {
var ss = SpreadsheetApp.openById("Spreadsheet_ID");
var orderSheet = ss.getSheetByName("Sheet_Name");
var vTable = orderSheet.getRange(1,6,5,11).getValues(); //Raw data
var vWriteTable = []; //Data that will be written to sheet
var updateTime = new Date();
var i = 0;
var vSeconds = 0;
while (i < 5 && vTable[i][0] != "") {
//Logic section that calculated the number of seconds between
if (vSeconds == 0) {
vWriteTable.push("");
} else {
if (vTable[i][6] < certain logic) {
vWriteTable.push("Yes");
} else {
vWriteTable.push("");
}
}
i = i + 1;
} // End while
orderSheet.getRange(1,20,vWriteTable.length,1).setValues(vWriteTable);
} //End Function
```
This is what `vWriteTable` looks like when debugging:
[![debug data](https://i.stack.imgur.com/Why0V.png)](https://i.stack.imgur.com/Why0V.png)
| [`setValues`](https://developers.google.com/apps-script/reference/spreadsheet/range#setValues(Object)) accepts(and [`getValues()`](https://developers.google.com/apps-script/reference/spreadsheet/range#getValues()) returns):
- 1 argument of type:
- `Object[][]` a **two** dimensional array of objects
It does **NOT** accept a 1 dimensional array. A range is **always** two dimensional, **regardless of the range height or width or both**.
If A1:A2 is the range, then corresponding values array would be like:
- `[[1],[3]]`
Similarly, A1:B1 would be
- `[[1,2]]`
A1:B2 would be
- `[[1,2],[3,4]]`
Notice how the two dimension provides direction and that it is always a 2D array, even if the height or width of the range is just 1.
### Solution:
Push a 1D array to make the output array 2D.
### Snippet:
```
vWriteTable.push(/*Added []*/["Yes"]);
```
### More information:
For a more detailed explanation of arrays in google sheets, checkout my answer [here](https://stackoverflow.com/a/63720613/).
|
how to mimic search and replace in google apps script for a range
I wanted to automate some text replacement in a Google sheet.
I utilized the record-a-macro functionality while doing a CTRL-H search and replace, but nothing got recorded.
Then I tried this code:
```
spreadsheet.getRange('B:B').replace('oldText','newText');
```
but it does not work; a range has no replace method.
Should I iterate over each cell?
| - You want to replace `oldText` with `newText` for the specific column (in this case, it's the column "B".)
- You want to achieve this using Google Apps Script.
If my understanding is correct, how about this answer? Please think of this as just one of several answers.
Unfortunately, `replace()` cannot be used for the value of `getRange()`. So in this answer, I used TextFinder for achieving your goal.
### Sample script:
```
var oldText = "oldText";
var newText = "newText";
var sheet = SpreadsheetApp.getActiveSheet();
sheet.getRange("B1:B" + sheet.getLastRow()).createTextFinder(oldText).replaceAllWith(newText);
```
- When you run this script, `oldText` in the column "B" of the active sheet is replaced to `newText`.
### References:
- [createTextFinder(findText)](https://developers.google.com/apps-script/reference/spreadsheet/sheet#createtextfinderfindtext)
- [replaceAllWith(replaceText)](https://developers.google.com/apps-script/reference/spreadsheet/text-finder.html#replaceallwithreplacetext)
If I misunderstood your question and this was not the result you want, I apologize.
|
How to access subprocess Popen pass\_fds argument from subprocess?
So the title is a bit long, but it is the only thing I cannot find online with a little bit of searching. How do I access the `pass_fds` argument from a subprocess?
```
# parent.py
import subprocess
subprocess.Popen(['run', 'some', 'program'], pass_fds=(afd, bfd))
# child.py
import subprocess
# need to access pass_fds argument? but how?
```
| You need to explicitly inform the child of the fds passed in some way. The most common/simple mechanisms would be:
1. Via an environment variable set for the child
2. Via an argument passed to the child
3. (Less common, but possible) Written to the child's `stdin`
All of these require the child's cooperation of course; it needs to define an interface to inform it of the fds passed.
`openssl`'s command line tool supports all these mechanisms for a similar purpose (communicating a passphrase to the child without putting it on the command line). You pass `-pass` and a second argument that defines where to look for the password. If the second argument is `stdin`, it reads from `stdin`, if it's `-pass fd:#` (where `#` is the fd number) it reads from an arbitrary file descriptor provided, `-pass env:var` (where `var` is the name of an environment variable) reads from the environment, etc.
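For example, here is a minimal sketch of option 2, where the parent tells the child which fd number it passed via a command-line argument (the file names and the `--fd` flag are just conventions made up for this example):
```
# parent.py
import os
import subprocess
import sys

r_fd, w_fd = os.pipe()  # descriptors we want the child to use

proc = subprocess.Popen(
    [sys.executable, "child.py", "--fd", str(r_fd)],  # pass the fd number explicitly
    pass_fds=(r_fd,),
)
os.write(w_fd, b"hello from parent\n")
os.close(w_fd)
proc.wait()

# child.py
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--fd", type=int, required=True)  # fd number handed over by the parent
args = parser.parse_args()

with os.fdopen(args.fd, "rb") as pipe_in:
    print("child received:", pipe_in.read())
```
The same idea works with an environment variable (e.g. `env={**os.environ, "PASSED_FD": str(r_fd)}`) if you prefer not to touch the argument list.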
|
Homebrew GDB can't open core file on Yosemite 10.10
I installed GDB 7.8.1 and GCC 4.9 through Homebrew.
When I open a core file generated by a GCC-compiled (`gcc-4.9 -g xxx.c -o xxx`) program, it reports:
```
→ gdb ./list_test /cores/core.1176
GNU gdb (GDB) 7.8.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin14.0.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./list_test...
warning: `/var/folders/r1/3sx4x5k1557g_v5by83k4hg00000gn/T//cchuMtAU.o': can't open to read symbols: No such file or directory.
(no debugging symbols found)...done.
"/cores/core.1176": no core file handler recognizes format
```
I googled and found someone who suggested using LLDB instead of GDB.
Is it possible to use GDB to debug the core file? And is it because GDB does not support the binary format on Yosemite?
| Based on [the long GDB developers' discussion thread on this issue](https://cygwin.com/ml/gdb/2014-01/msg00035.html), it seems Apple did not merge their changes back to the official GNU mainline, and instead chose to publish the modified source code on their own site. As a result, the Homebrew GDB install (which uses the stock GDB sources) can't load OS X core files.
At this point, I see three choices:
1. **Give in and learn LLDB.** There's a [GDB to LLDB cheat sheet](http://lldb.llvm.org/lldb-gdb.html) to help.
2. **Install Apple's custom GDB from MacPorts.** I've long forsaken MacPorts, so I can't test it, but if you have MacPorts installed, try the following:
```
$ sudo port install gdb-apple
$ codesign -s <your_GDB_cert_id> /opt/local/bin/gdb-apple
$ /opt/local/bin/gdb-apple ./list_test /cores/core.1176
```
3. **Translate MacPorts' GDB patches and build spec into a Homebrew formula.** It's theoretically possible, but I don't have the time to do it myself.
Personally, I've opted to just learn LLDB. Apple has moved permanently to LLVM, so it's probably just a matter of time before the old patched GDB stops working with the latest-and-greatest Xcode tools.
|
Is it possible to have multiple local strategies in passport implemented with NestJS
I have a scenario where I need to implement an authentication mechanism for admin and for normal users in my application using the Passport local strategy. I implemented the strategy for the normal users as described [here](https://docs.nestjs.com/security/authentication#implementing-passport-strategies). It is working perfectly fine.
However, now I need to implement the same local strategy for the Admin login. I feel like it would have been much easier if both types of users (admin and normal user) were on the same entity/table, because a single validate function would be capable enough to handle the case, but my application design has separate entities for Admins and normal users, and hence separate services.
My local strategy looks something like this:
```
@Injectable()
export class LocalStrategy extends PassportStrategy(Strategy) {
constructor(private userService: UserService) {
super();
}
async validate(username: string, password: string): Promise<any> {
const user = await this.userService.validateUser(username, password);
if (!user) {
throw new UnauthorizedException("Incorrect credentials!");
}
return user;
}
}
```
As I went through the documentation, it is said that a local strategy can have only one validate function (that works as a verify callback). If this is the case, how do I differentiate the logic inside this single validate function to behave differently for the requests coming in from the normal user controller and from the admin controller? Because in the admin login case, I'll be using a different route, something like (admin/login), and for the user it could be something like (user/login).
What's the best approach for this? Do I need to create a separate local strategy for admin? If yes, any hints will be appreciated. Otherwise, how can I incorporate logic inside this single validate function?
One of the alternatives could be checking if data exists in both the tables for every login payload each time. This approach doesn't look quite right to me.
If this provides more insight, the auth guard is simple as this:
```
@Injectable()
export class LocalAuthGuard extends AuthGuard('local') {
}
```
| You could make a second strategy based on `passport-local` but give it a specified name like `admin` by following the [named strategies](https://docs.nestjs.com/security/authentication#named-strategies) part of the docs. Something like
```
@Injectable()
export class LocalAdminStrategy extends PassportStrategy(Strategy, 'admin') {
validate(username: string, password: string) {
return validateAdminInfo({ username, password });
}
}
```
And now you can use this strategy by using `@UseGuards(AuthGuard('admin'))` and add several strategies by giving an array of strategies names to AuthGuard like:
```
@UseGuards(AuthGuard(['admin', 'user']))
```
If one of the strategy is passed, the route access will be accepted.
|
When to convert an ordinal variable to a binary variable?
I have seen some people convert their ordinal variable to a binary one, especially in the public opinion literature. For instance, when there is a four-scale question with responses including "Strongly agree," "Agree," "Disagree," and "Strongly disagree," some authors simply code "Strongly agree" and "Agree" as 1 and the two other responses as 0. Then they use a logistic regression model to analyze their data. I am not sure why they simply don't use their ordinal data with an ordered logistic model.
When do we need to convert our ordinal data to binary one?
| While I am not sure there is ever a time that one needs to convert ordinal data to binary, there are times when it may be more appropriate.
First, the authors may simply choose to opt for a simpler model. That is to say, a logistic model is easier to run and analyze than is an ordinal model. Also, fewer assumptions to be tested. Following from this, it is also easier to explain the results of the logistic model compared to the trying to explain the results for an ordinal model. Of course, one can argue that the better model to fit the data should be run...but I have had editors and reviewers ask me to scale back to an easier model to accommodate the general audience of the journal. So, it is always good to keep the audience in mind.
Second, it may be a matter of cell sizes. For example, if you have very few responses in one or more categories, it may make the model estimation difficult or unstable. One workaround is to collapse neighboring categories into a single category. For example, if there are too few strongly-disagree responses, then combining the D + SD categories into one group may be beneficial for computational purposes. Note, this is a strategy that is often employed in item response theory (IRT) for ordinal data.
Third, there may be a theoretical/conceptual motivation for collapsing the data. For example, if you are working under the assumption that there are response styles present in the data (people responding to the response scale provided in different fashions), then you may argue that the only "true" distinction is between whether someone agrees (to any degree) or disagrees (to any degree). Thus, the research question most likely focusses only on that comparison, and not degrees of difference in the comparison.
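If it helps to see the recoding itself, here is a tiny sketch (in Python/pandas, with made-up values) of collapsing the four levels into a binary agree/disagree indicator:
```
import pandas as pd

responses = pd.Series(["Strongly agree", "Agree", "Disagree", "Strongly disagree", "Agree"])

# Collapse the four ordinal levels into a single agree (1) vs disagree (0) indicator
agree = responses.isin(["Strongly agree", "Agree"]).astype(int)
print(agree.tolist())  # [1, 1, 0, 0, 1]
```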
I hope this is helpful.
|
Arraylist not able to remove duplicate strings
```
public static void main(String[] args) {
List<String> list = new ArrayList();
list.add("AA");list.add("Aw");list.add("Aw");list.add("AA");
list.add("AA");list.add("A45");list.add("AA");
list.add("Aal");list.add("Af");list.add("An");
System.out.println(list);
for(int i=0;i<list.size();i++){
if(list.get(i).equals("AA")){
list.remove(i);
}
}
System.out.println(list);
}
```
I am currently attempting to remove all the elements within the ArrayList that have the value of `"AA"`. However, it's only removing some of them and not all. Can anyone explain to me what I am doing wrong?
Elements within the ArrayList:
```
[AA, Aw, Aw, AA, AA, A45, AA, Aal, Af, An]
```
Output after I've attempted to remove all the strings that have the value of `"AA"`:
```
[Aw, Aw, AA, A45, Aal, Af, An]
```
Why is `AA` still within the output list?
| this is wrong:
```
for (int i = 0; i < list.size(); i++) {
if (list.get(i).equals("AA")) {
list.remove(i);
}
}
```
because your list is changing its size as you remove elements....
you need an iterator:
```
Iterator<String> iter = list.iterator();
while (iter.hasNext()) {
if (iter.next().equals("AA")) {
iter.remove();
}
}
```
---
using java8:
```
List<String> newFilteredList = list.stream().filter(i -> !i.equals("AA")).collect(Collectors.toList());
System.out.println(newFilteredList);
```
|
Python Kafka consumer reading already read messages
Kafka consumer code -
```
def test():
TOPIC = "file_data"
producer = KafkaProducer()
producer.send(TOPIC, "data")
consumer = KafkaConsumer(
bootstrap_servers=['localhost:9092'],
auto_offset_reset='latest',
consumer_timeout_ms=1000,
group_id="Group2",
enable_auto_commit=False,
auto_commit_interval_ms=1000
)
topic_partition = TopicPartition(TOPIC, 0)
assigned_topic = [topic_partition]
consumer.assign(assigned_topic)
consumer.seek_to_beginning(topic_partition)
for message in consumer:
print("%s key=%s value=%s" % (message.topic, message.key, message.value))
consumer.commit()
```
**Expected behavior**
It should read only the last message which is written by the producer. It should just print:
```
file_data key=None value=b'data'
```
**Current behavior**
After running code it prints:
```
file_data key=None value=b'data'
file_data key=None value=b'data'
file_data key=None value=b'data'
file_data key=None value=b'data'
file_data key=None value=b'data'
file_data key=None value=b'data'
```
|
```
from kafka import KafkaConsumer
from kafka import TopicPartition
from kafka import KafkaProducer
def test():
TOPIC = "file_data"
producer = KafkaProducer()
producer.send(TOPIC, b'data')
consumer = KafkaConsumer(
bootstrap_servers=['localhost:9092'],
auto_offset_reset='latest',
consumer_timeout_ms=1000,
group_id="Group2",
enable_auto_commit=False,
auto_commit_interval_ms=1000
)
topic_partition = TopicPartition(TOPIC, 0)
assigned_topic = [topic_partition]
consumer.assign(assigned_topic)
# consumer.seek_to_beginning(topic_partition)
for message in consumer:
print("%s key=%s value=%s" % (message.topic, message.key, message.value))
consumer.commit()
test()
```
This works as per your expectation. Call `seek_to_beginning` only if you want the consumer to start reading from the beginning.
Ref: [seek\_to\_beginning](https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html#kafka.KafkaConsumer.seek_to_beginning)
|
AWS .NET Core 3.1 Mock Lambda Test Tool, cannot read AppSettings.json or App.config
Using the AWS .NET Core 3.1 Mock Lambda Test Tool, I cannot get the lambda function to read from an appsettings.json or even an app.config file.
That is, there are two separate methods, and when I try to get a return value, each method returns null.
In a separate .NET Core 3.1 console app, these same methods work perfectly fine.
So, is there some reason why the 'Mock Lambda Test Tool' at runtime will not allow my code to read from a JSON or App.config file set to copy-always? And does this mean this will not run on AWS when packaged and uploaded to the AWS Lambda console?
My situation does *not* allow me to use Lambda Environment Variables for my local DB connection string. And I cannot store the connection string inside the code, as it has to come from a .json or .config file.
Any ideas or wisdom on this?
THE CODE:
**METHOD 1**
```
// requires: System.Configuration.ConfigurationManager
var connString = System.Configuration.ConfigurationManager.AppSettings["connectionString"];
/*
Reads App.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<add key="connectionString" value="Server=127.0.0.1;Port=0000;Database=some-db;User Id=some-user;Password=some-password;" />
</appSettings>
</configuration>
*/
```
**METHOD 2**
```
// requires: Microsoft.Extensions.Configuration; Microsoft.Extensions.Configuration.Json;
public class DbConfig
{
public string ConnectionString { get; set; }
}
var config = new ConfigurationBuilder().SetBasePath(AppDomain.CurrentDomain.BaseDirectory)
.AddJsonFile("appsettings.json").Build();
var section = config.GetSection(nameof(DbConfig));
var dbClientConfig = section.Get<DbConfig>();
var connString = dbClientConfig.ConnectionString;
/*
Reads appsettings.json:
{
"DbConfig": {
"connectionString": "Server=127.0.0.1;Port=0000;Database=some-db;User Id=some-user;Password=some-password;"
}
}
*/
```
I also used a simpler bare-bones method, that also works in console app but not in the Lambda.
**METHOD 3:**
```
// requires: Microsoft.Extensions.Configuration;
IConfiguration _config = new ConfigurationBuilder().AddJsonFile("appconfig.json", true, true).Build();
var _connString = _config["connectionString"];
/*
Reads appconfig.json:
{
"connectionString": "Server=127.0.0.1;Port=0000;Database=some-db;User Id=some-user;Password=some-password;"
}
*/
```
Again, thanks.
| From this [blog post](https://aws.amazon.com/blogs/compute/announcing-aws-lambda-supports-for-net-core-3-1/)
>
> The test tool is an ASP.NET Core application that loads and executes the Lambda code.
>
>
>
Which means that this web app has its own config file that is different from your application's. And if you put a breakpoint in the startup class, you will see that the configuration holds different fields, rather than those in your appsettings.json or whatever file.
**solution #1**
Register the current directory as a folder in which to search for config files:
```
new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("appsettings.json", optional: true)
```
|
Intuition as to why estimates of a covariance matrix are numerically unstable
It is well known that estimating the covariance matrix by the ML estimator (sample covariance) can be very numerically unstable (in high dimensions), which is why it is preferable to do PCA with the SVD, for example. But I haven't been able to find an intuitive explanation of why this is the case.
For matrix inversions it is clearer where the numerical instability arises, but a covariance matrix for centered and standardized data is just the seemingly innocuous product: **XX'**
EDIT: here is a reference: [Regularized estimation of large covariance matrices](https://www.stat.berkeley.edu/~bickel/BL2008-banding.pdf)
| The reason that the SVD of the original matrix $X$ is preferred over the eigen-decomposition of the covariance matrix $C$ when doing PCA is that the solution of the eigenvalue problem presented by the covariance matrix $C$ (where $C = \frac{1}{N-1}X\_0^T X\_0$, $X\_0$ being the zero-centred version of the original matrix $X$) has a higher [condition number](https://en.wikipedia.org/wiki/Condition_number) than the corresponding problem presented by the original data matrix $X$. In short, the condition number of a matrix quantifies the sensitivity of the solution of a system of linear equations defined by that matrix to errors in the original data. The condition number strongly suggests (but does not fully determine) the quality of the system of linear equations' solution.
In particular as the covariance matrix $C$ is calculated by the cross-product of $X\_0$ with itself, the ratio of the largest singular value of $X\_0$ to the smallest singular value of $X\_0$ is squared. That ratio is the condition number; values that are close to unity or generally below a few hundreds suggest a rather stable system. This is easy to see as follows:
Assume that $X\_0 = USV^T$ where $U$ holds the left singular vectors, $V$ holds the right singular vectors, and $S$ is the diagonal matrix holding the singular values of $X\_0$. Since $C = \frac{1}{N-1}X\_0^TX\_0$, we can write: $C = \frac{1}{N-1} VS^TU^T USV^T = \frac{1}{N-1} V S^T S V^T = \frac{1}{N-1} V \Sigma V^T$. (Remember that the matrix $U$ is orthonormal, so $U^TU = I$.) I.e., the singular values of $X\_0^TX\_0$ represented in $\Sigma$ are the squares of the singular values of $X\_0$ represented in $S$.
As you see while *seemingly innocuous* the cross-product $X\_0^TX\_0$ squares the condition number of the system you try to solve and thus makes the resulting system of equations (more) prone to numerical instability issues.
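A quick numerical illustration of this squaring effect (a sketch using NumPy on random data):
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
X0 = X - X.mean(axis=0)              # zero-centre the columns

C = X0.T @ X0 / (X0.shape[0] - 1)    # sample covariance matrix

print(np.linalg.cond(X0))            # condition number of the data matrix
print(np.linalg.cond(C))             # roughly the square of the above
```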
*Some additional clarification particular to the paper linked*: the estimate of the covariance matrix $C$ is immediately rank-degenerate in cases where $N < p$, which are the main focus of that paper; that's why the authors initially draw attention to the [Marcenko–Pastur law](https://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur_distribution) (about the distribution of singular values) and to regularisation and banding techniques. Without such notions, working with $C$ or the inverse of $C$ (in the form of the Cholesky factor of the inverse of $C$, as the authors do) is numerically unstable. The rationale as to why these covariance matrices are degenerate is exactly the same as above in the case of very large matrices: the condition number is squared. This is even more prominent in the $N < p$ case: an $N\times p$ matrix $X$ has at most $N$ non-zero singular values, and the cross-product of it with itself can also have at most $N$ non-zero singular values, leading to rank-degeneracy (and therefore an "infinite" condition number). The paper presents a way to band the estimated covariance matrix given some particular conditions (the estimated $C$ has a Toeplitz structure, the oracle $k$ representing the banding parameter can be properly estimated, etc.) such that it is numerically stable.
|
R Sort strings according to substring
I have a set of file names like:
```
filelist <- c("filea-10.txt", "fileb-2.txt", "filec-1.txt", "filed-5.txt", "filef-4.txt")
```
and I would like to sort them according to the number after "-".
In Python, for instance, I can use the `key` parameter of the sorting function:
```
filelist <- ["filea-10.txt", "fileb-2.txt", "filec-1.txt", "filed-5.txt", "filef-4.txt"]
sorted(filelist, key=lambda(x): int(x.split("-")[1].split(".")[0]))
> ["filec-1.txt", "fileb-2.txt", "filef-4.txt", "filed-5.txt", "filea-10.txt"]
```
In R, I am playing with `strsplit` and `lapply` with no luck so far.
What is the way to do it in R?
**Edit**:
File names can be many things and may include more numbers. The only fixed pattern is that the number I want to sort by is after the "-". Another (real) example:
```
c <- ("boards10017-51.mp4", "boards10065-66.mp4", "boards10071-81.mp4",
"boards10185-91.mp4", "boards10212-63.mp4", "boards1025-51.mp4",
"boards1026-71.mp4", "boards10309-89.mp4", "boards10310-68.mp4",
"boards10384-50.mp4", "boards10398-77.mp4", "boards10419-119.mp4",
"boards10421-85.mp4", "boards10444-87.mp4", "boards10451-60.mp4",
"boards10461-81.mp4", "boards10463-52.mp4", "boards10538-83.mp4",
"boards10575-62.mp4", "boards10577-249.mp4")"
```
| I'm not sure of the actual complexity of your list of file names, but something like the following might be sufficient:
```
filelist[order(as.numeric(gsub("[^0-9]+", "", filelist)))]
# [1] "filec-1.txt" "fileb-2.txt" "filef-4.txt" "filed-5.txt" "filea-10.txt"
```
---
Considering your edit, you may want to change the `gsub` to something like:
```
gsub(".*-|\\..*", "", filelist)
```
Again, without a few more test cases, it's hard to say whether this is sufficient for your needs.
---
Example:
```
x <- c("boards10017-51.mp4", "boards10065-66.mp4", "boards10071-81.mp4",
"boards10185-91.mp4", "boards10212-63.mp4", "boards1025-51.mp4",
"boards1026-71.mp4", "boards10309-89.mp4", "boards10310-68.mp4",
"boards10384-50.mp4", "boards10398-77.mp4", "boards10419-119.mp4",
"boards10421-85.mp4", "boards10444-87.mp4", "boards10451-60.mp4",
"boards10461-81.mp4", "boards10463-52.mp4", "boards10538-83.mp4",
"boards10575-62.mp4", "boards10577-249.mp4")
x[order(as.numeric(gsub(".*-|\\..*", "", x)))]
## [1] "boards10384-50.mp4" "boards10017-51.mp4" "boards1025-51.mp4"
## [4] "boards10463-52.mp4" "boards10451-60.mp4" "boards10575-62.mp4"
## [7] "boards10212-63.mp4" "boards10065-66.mp4" "boards10310-68.mp4"
## [10] "boards1026-71.mp4" "boards10398-77.mp4" "boards10071-81.mp4"
## [13] "boards10461-81.mp4" "boards10538-83.mp4" "boards10421-85.mp4"
## [16] "boards10444-87.mp4" "boards10309-89.mp4" "boards10185-91.mp4"
## [19] "boards10419-119.mp4" "boards10577-249.mp4"
```
|
Old SQL History in Oracle SQL Developer
In SQL Developer, I was looking for some SQL commands from the previous month but was not able to find them, as it is showing only the records of the last 4-5 days.
Is there any way to find the old SQL commands that are not displayed under the SQL History tab?
Thanks.
| As Oracle has documented, there is a SQL history folder and it is larger (has more SQL queries that go back about a year) than the SQL History tool bar (a couple of months).
Here is the content of my SQL History tool bar:
![SQL History tool bar](https://i.stack.imgur.com/KsIo2.jpg)
With respect to the SQL history folder, release notes cite this location in Windows 7:
C:\Users\your\_user\_name\AppData\Roaming\SQL Developer\SqlHistory
While this folder contains SQL History, it also contains: User-defined reports and user-defined snippets files (e.g. see [3.1 Sql Developer release notes](http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev31-ea-relnotes-487612.html "3.1 Release notes"))
Here is my SQL History folder:
![My SQL History folder](https://i.stack.imgur.com/Z8ZYT.jpg)
I do not see any documentation to adjust the size of either.
|
Copy one column to another for over a billion rows in SQL Server database
Database : SQL Server 2005
Problem : Copy values from one column to another column in the same table with a billion+
rows.
```
test_table (int id, bigint bigid)
```
Things tried 1: update query
```
update test_table set bigid = id
```
fills up the transaction log and rolls back due to lack of transaction log space.
Tried 2 - a procedure along the following lines:
```
set nocount on
set rowcount 500000
declare @rowcount int, @rowsupdated int
set @rowcount = 1
set @rowsupdated = 0
while @rowcount > 0
begin
    update test_table set bigid = id where bigid is null
    set @rowcount = @@rowcount
    set @rowsupdated = @rowsupdated + @rowcount
end
print @rowsupdated
```
The above procedure starts slowing down as it proceeds.
Tried 3 - creating a cursor for update.
This is generally discouraged in the SQL Server documentation, and this approach updates one row at a time, which is too time consuming.
Is there an approach that can speed up the copying of values from one column to another? Basically I am looking for some 'magic' keyword or logic that will allow the update query to rip through the billion rows half a million at a time sequentially.
Any hints, pointers will be much appreciated.
| I'm going to guess that you are closing in on the 2.1billion limit of an INT datatype on an artificial key for a column. Yes, that's a pain. Much easier to fix before the fact than after you've actually hit that limit and production is shut down while you are trying to fix it :)
Anyway, several of the ideas here will work. Let's talk about speed, efficiency, indexes, and log size, though.
## Log Growth
The log blew up originally because it was trying to commit all 2b rows at once. The suggestions in other posts for "chunking it up" will work, *but* that may not totally resolve the log issue.
If the database is in SIMPLE mode, you'll be fine (the log will re-use itself after each batch). If the database is in FULL or BULK\_LOGGED recovery mode, you'll have to run log backups frequently during the running of your operation so that SQL can re-use the log space. This might mean increasing the frequency of the backups during this time, or just monitoring the log usage while running.
## Indexes and Speed
ALL of the `where bigid is null` answers will slow down as the table is populated, because there is (presumably) no index on the new BIGID field. You could, (of course) just add an index on BIGID, but I'm not convinced that is the right answer.
The key (pun intended) is my assumption that the original ID field is probably the primary key, or the clustered index, or both. In that case, let's take advantage of that fact and do a variation of Jess' idea:
```
set @counter = 1
while @counter < 2000000000 --or whatever
begin
update test_table set bigid = id
where id between @counter and (@counter + 499999) --BETWEEN is inclusive
set @counter = @counter + 500000
end
```
This should be extremely fast, because of the existing indexes on ID.
The ISNULL check really wasn't necessary anyway, neither is my (-1) on the interval. If we duplicate some rows between calls, that's not a big deal.
|
Samba 4.9.0 ./configure lmdb error
I'm very new to Linux and installing Samba and I'm trying to make my Centos 7 into a ADDC.
However, whenever I want to configure I get the following message:
>
> Checking for lmdb >= 0.9.16 via header check : not found
>
> Samba AD DC and --enable-selftest requires lmdb 0.9.16 or later
>
>
>
When using yum install lmdb it says it's already installed.
>
> [root@localhost samba-4.9.0]# yum install lmdb
> Loaded plugins:
> fastestmirror Loading mirror speeds from cached hostfile \* base:
> mirrors.standaloneinstaller.com \* epel: mirrors.powernet.com.ru \*
> extras: ftp.rezopole.net \* updates: distrib-coffee.ipsl.jussieu.fr
> Package lmdb-0.9.22-2.el7.x86\_64 already installed and latest version
>
> Nothing to do
>
>
>
| The actual dependency to install ([for Red Hat Enterprise Linux 7 / CentOS 7 / Scientific Linux 7](https://wiki.samba.org/index.php/Package_Dependencies_Required_to_Build_Samba#Red_Hat_Enterprise_Linux_7_.2F_CentOS_7_.2F_Scientific_Linux_7)) is `lmdb-devel`.
Rather than following some random tutorial for a now EOL version of Samba, you might be better off following the official (and up to date) Samba guidance: [Build Samba from Source](https://wiki.samba.org/index.php/Build_Samba_from_Source), [Package Dependencies Required to Build Samba](https://wiki.samba.org/index.php/Package_Dependencies_Required_to_Build_Samba), [Setting up Samba as an Active Directory Domain Controller](https://wiki.samba.org/index.php/Setting_up_Samba_as_an_Active_Directory_Domain_Controller)
To be sure, the tutorial to which you linked DOES NOT list the current Samba dependencies for CentOS 7.
|
Multiple taps on the same row in a table view
I am working on putting a checkbox in every row of a table view. When I tap a row, its value should get saved into an array, on tap of another row, its value should likewise get saved to the same array, and on a tap of the same row, the value should get deleted from the array.
Please suggest to me how to implement this. I was using the following code in the `didSelectRowAtIndexPath` method but I was not able to do it.
```
if([arrData count]==0)
{
strlast = [arrName objectAtIndex:indexPath.row];
[arrData addObject:strlast];
NSLog(@"string checked in arrData %@",strlast);
}
else
{
for(int i = 0 ;i < [arrData count]; i++)
{
NSLog(@"[arrData count]:%d",[arrData count]);
strSelected = [arrName objectAtIndex:indexPath.row];
NSLog(@"strSelected:%@",strSelected);
for(int i = 0 ;i < [arrData count]; i++)
{
if([strSelected caseInsensitiveCompare:[arrData objectAtIndex:i]])
{
[arrData addObject:strSelected];
NSLog(@"arrData:%@",arrData);
}
}
}
}
```
| `list` is the array name which contains all the data that is viewed in the table view; replace it with your own array name.
Suppose tableArray is your array in which values are inserted and deleted.
in .h file
```
NSMutableArray *tableArray;
```
in .m file in view didload
```
tableArray=[[NSMutableArray alloc]init];
```
tableview didselect row method:-
```
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
if ([tableArray count]>0) {
if ([tableArray containsObject:[list objectAtIndex:indexPath.row]]) {
[tableArray removeObject:[list objectAtIndex:indexPath.row]];
NSLog(@"data removed");
NSLog(@"tableArray%@",tableArray);
}
else {
[tableArray addObject:[list objectAtIndex:indexPath.row]];
NSLog(@"data added");
NSLog(@"tableArray%@",tableArray);
}
}
else {
//[tableArray addObject:[NSString stringWithFormat:@"%d",indexPath.row]];
[tableArray addObject:[list objectAtIndex:indexPath.row]];
NSLog(@"data added");
NSLog(@"tableArray%@",tableArray);
}
}
```
release the array in dealloc
I have tested the code; I hope it might help you.
|
Does Google Guava Cache do deduplication when refreshing value of the same key
I implemented a non-blocking cache using Google Guava; there's only one key in the cache, and the value for the key is only refreshed asynchronously (by overriding reload()).
My question is: does Guava cache handle de-duplication if the first reload() task hasn't finished and a new get() request comes in?
```
//Cache is defined like below
this.cache = CacheBuilder
.newBuilder()
.maximumSize(1)
.refreshAfterWrite(10, TimeUnit.MINUTES)
.recordStats()
.build(loader);
//reload is overwritten asynchronously
@Override
public ListenableFuture<Map<String, CertificateInfo>> reload(final String key, Map<String, CertificateInfo> prevMap) throws IOException {
LOGGER.info("Refreshing certificate cache.");
ListenableFutureTask<Map<String, CertificateInfo>> task = ListenableFutureTask.create(new Callable<Map<String, CertificateInfo>>() {
@Override
public Map<String, CertificateInfo> call() throws Exception {
return actuallyLoad();
}
});
executor.execute(task);
return task;
}
```
| Yes, see the documentation for [`LoadingCache.get(K)`](http://google.github.io/guava/releases/snapshot/api/docs/com/google/common/cache/LoadingCache.html#get-K-) (and it sibling, [`Cache.get(K, Runnable)`](http://google.github.io/guava/releases/snapshot/api/docs/com/google/common/cache/Cache.html#get-K-java.util.concurrent.Callable-)):
>
> If another call to `get(K)` or `getUnchecked(K)` is currently loading the value for key, simply waits for that thread to finish and returns its loaded value.
>
>
>
So if a cache entry is currently being computed (or reloaded/recomputed), other threads that try to retrieve that entry will simply wait for the computation to finish - they will not kick off their own redundant refresh.
|
Laravel eager loading vs explicit join
This might sound like an obvious question but I just want to get some reassurance.
Using Laravel's eager loading functionality, from what I understand it will create **two queries** to return a whole list of related results (say if you're working with two tables). However, and correct me if I'm wrong, using a join statement will leave you with only **one query**, which creates one less round trip to the server's database (MySQL) and is a more efficient query.
I know that you can write join queries in Laravel, which is great, so the question is: am I incorrect to assume that when retrieving related data from two or more tables, should I not bother with eager loading and instead just write my own join statements?
**Edit**
Coming back to this one year later, I'd say in my personal opinion, just write the queries, raw, and write them well.
**Edit 2**
Okay now six years later, I keep getting points for this.
Whether I was unclear from the beginning or not, contrary to what I've said above, Eloquent at this point writes great queries. **Use Eloquent** - even if there's a slight query inefficiency, it allows you to **write very clear, maintainable code** which at this point in my career I would argue is more important in most cases. Only write raw queries and optimize in cases where performance enhancements are critical and you can measure the impact.
| You are absolutely right in your understanding. If you write a `join` statement to join two or more tables using `join()` in Laravel, it makes only one query, whereas using an Eloquent model with the eager loading technique requires more than one query.
>
> should I not bother with eager loading and instead just write my own
> join statements
>
>
>
Actually, eager loading is a technique for loading related models easily with the Eloquent ORM. It uses the Query Builder behind the scenes and lets you work with Eloquent model objects without writing the query yourself, and it represents the data differently: with Eloquent you interact directly with model objects that represent rows in the database, with additional features on top. Most importantly, it hides the complexity of SQL and lets you query the database in an OOP fashion using PHP code.
But when you manually call the `join` method, which belongs to the `Illuminate\Database\Query\Builder` class, you are using the Query Builder directly. That requires you to write more code and to know more SQL, because nothing is hidden from you; it helps you build queries more precisely, but you still write the queries yourself.
They are different things and they work differently; a short comparison is sketched below. You may search Google for the term `ORM vs Query Builder`.
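To illustrate the difference, here is a small sketch using hypothetical `Post`/`Comment` models with a standard one-to-many relation (the model names and columns are assumptions, not from the question):

```
// Eager loading (Eloquent ORM): two queries, returns Post model objects
// with their related Comment objects already populated.
$posts = Post::with('comments')->get();
// SELECT * FROM posts;
// SELECT * FROM comments WHERE post_id IN (1, 2, 3, ...);

// Explicit join (Query Builder): one query, returns flat rows rather than
// models with nested relations.
$rows = DB::table('posts')
    ->join('comments', 'comments.post_id', '=', 'posts.id')
    ->select('posts.id', 'posts.title', 'comments.body')
    ->get();
```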
|
Is GCC 4.8.1 C++11 complete?
OS is Windows.
I'll start off by saying that I have no experience with C++, or any other compiled language. I've used CPython a bit and am familiar with that, but, until earlier today, I'd never even glanced at C++ source.
I'm trying to teach myself C++, so I've been playing around with it a bit, and one problem I'm having is the error:
```
error: 'to_string' was not declared in this scope
```
Apparently, `to_string` is a C++11 thing, which *should* be fine. I downloaded the latest MinGW, added it to my path - I have checked, and running
```
g++ -v
```
does indeed tell me that I have version 4.8.1 installed. The IDE I'm working with, Code::Blocks, finds it no problem, but it simply won't use any of the C++11 features, giving me errors such as the one above. Things not exclusive to C++11 compile fine.
There is a section under compiler flags to "follow the C++11 language standard", which I have checked, but, even then, I get the same errors. I'm really not sure what's going on - I've looked this up, and all of the suggestions are to update either the IDE or MinGW (both of which are up to date), or to select that flag, which, as I said, is already selected.
Does anyone with more experience with C++ have any idea what might be going on?
| My understanding is that, other than `regex` support, G++'s C++11 support is largely complete with 4.8.1.
The following two links highlight the status of C++11 support in G++ 4.8.1 and libstdc++:
- [C++11 status in GCC 4.8.x.](http://gcc.gnu.org/gcc-4.8/cxx0x_status.html)
- [C++11 status in libstdc++.](http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.200x) Note that this is for the latest SVN release, not tied to a specific G++ release; therefore expect it to be "optimistic" with respect to G++.
To compile C++11 code, though, you need to include the command line flag `-std=c++11` when you compile.
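If it helps to sanity-check the toolchain outside Code::Blocks, a tiny test file (hypothetical name `to_string_demo.cpp`) like the following should only compile when the flag is actually passed on the command line:

```
// to_string_demo.cpp - minimal check that C++11 library features are visible.
#include <iostream>
#include <string>

int main() {
    std::string s = std::to_string(42);  // std::to_string is C++11-only
    std::cout << s << '\n';
    return 0;
}
```

```
g++ -std=c++11 to_string_demo.cpp -o to_string_demo
```

If this builds from the command line but the same code still fails inside Code::Blocks, the IDE is probably not passing the flag (or is invoking a different compiler) despite the checkbox.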
|