Distribution and Variance of Count of Triangles in Random Graph Consider an *Erdős-Rényi* random graph $G=(V(n),E(p))$. The set of $n$ vertices $V$ is labelled by $V = \{1,2,\ldots,n\}$. The set of edges $E$ is constructed by a random process. Let $p$ be a probability $0<p<1$, then each unordered pair $\{i,j\}$ of vertices ($i \neq j$) occurs as an edge in $E$ with probability $p$, independently of the other pairs. A triangle in $G$ is an unordered triple $\{i,j,k\}$ of distinct vertices, such that $\{i,j\}$, $\{j,k\}$, and $\{k,i\}$ are edges in $G$. The maximum number of possible triangles is $\binom{n}{3}$. Define the random variable $X$ to be the observed count of triangles in the graph $G$. The probability that three links are simultaneously present is $p^3$. Therefore, the expected value of $X$ is given by $E(X) = \binom{n}{3} p^3$. Naively, one may guess that the variance is given by $\operatorname{Var}(X) = \binom{n}{3} p^3 (1-p^3)$, but this is not the case. The following *Mathematica* code simulates the problem: ``` n=50; p=0.6; t=100; myCounts=Table[Length[FindCycle[RandomGraph[BernoulliGraphDistribution[n,p]],3,All]],{tt,1,t}]; N[Mean[myCounts]] // 4216. > similar to expected mean Binomial[n,3]p^3 // 4233.6 N[StandardDeviation[myCounts]] // 262.078 > not similar to "expected" std Sqrt[Binomial[n,3](p^3)(1-p^3)] // 57.612 Histogram[myCounts] ``` What is the variance of $X$?
Let $Y_{ijk}=1$ iff $\{i, j, k\}$ form a triangle. Then $X=\sum_{i, j, k}Y_{ijk}$ and each $Y_{ijk}\sim \mathrm{Bernoulli}(p^3)$. This is what you have used to calculate the expected value. For the variance, the issue is that the $Y_{ijk}$ are not independent. Indeed, write $$X^2=\sum_{i, j, k}\sum_{i', j', k'}Y_{ijk}Y_{i'j'k'}.$$ We need to compute $E[Y_{ijk}Y_{i'j'k'}]$, which is the probability that both triangles are present. There are several cases: - If $\{i,j,k\}=\{i',j',k'\}$ (same 3 vertices) then $E[Y_{ijk}Y_{i'j'k'}]=p^3$. There will be $\binom{n}{3}$ such terms in the double sum. - If the sets $\{i,j,k\}$ and $\{i',j',k'\}$ have exactly 2 elements in common, then we need 5 edges present to get the two triangles, so that $E[Y_{ijk}Y_{i'j'k'}]=p^5$. There will be $12 \binom{n}{4}$ such terms in the sum. - If the sets $\{i,j,k\}$ and $\{i',j',k'\}$ have 1 element in common, then we need 6 edges present, so that $E[Y_{ijk}Y_{i'j'k'}]=p^6$. There will be $30 \binom{n}{5}$ such terms in the sum. - If the sets $\{i,j,k\}$ and $\{i',j',k'\}$ have 0 elements in common, then we need 6 edges present, so that $E[Y_{ijk}Y_{i'j'k'}]=p^6$. There will be $20 \binom{n}{6}$ such terms in the sum. To verify that we have covered all cases, note that the sum adds up to $\binom{n}{3}^{2}$. $$\binom{n}{3} + 12 \binom{n}{4} + 30 \binom{n}{5} + 20 \binom{n}{6} = \binom{n}{3}^{2}$$ Remembering to subtract the square of the expected mean, putting this all together gives: $$E[X^2] - E[X]^2 = \binom{n}{3} p^3 + 12 \binom{n}{4} p^5 + 30 \binom{n}{5} p^6 + 20 \binom{n}{6} p^6 - \binom{n}{3}^2 p^6$$ Using the same numerical values as your example, the following *R* code calculates the standard deviation, which is reasonably close to the value of 262 from your simulation. ``` n=50 p=0.6 sqrt(choose(n, 3)*p^3+choose(n, 2)*(n-2)*(n-3)*p^5+(choose(n, 3)*choose(n-3, 3)+n*choose(n-1, 2)*choose(n-3, 2))*p^6-4233.6^2) 298.7945 ``` The following *Mathematica* code also calculates the standard deviation, which gives the same result. ``` mySTD[n_,p_]:=Sqrt[Binomial[n,3]p^3+12Binomial[n,4]p^5+30 Binomial[n,5]p^6+20Binomial[n,6]p^6-(Binomial[n,3]p^3)^2] mySTD[50,0.6] // gives 298.795 ```
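As a quick cross-check of the closed-form result, here is a minimal Python sketch of the same simulation (assuming `networkx` and `numpy` are available; the sizes mirror the Mathematica example):

```python
import numpy as np
import networkx as nx
from math import comb

n, p, trials = 50, 0.6, 200
counts = []
for _ in range(trials):
    g = nx.gnp_random_graph(n, p)                      # Erdos-Renyi G(n, p)
    # nx.triangles reports, per node, the triangles through that node,
    # so summing over all nodes counts every triangle three times.
    counts.append(sum(nx.triangles(g).values()) // 3)

var_formula = (comb(n, 3) * p**3
               + 12 * comb(n, 4) * p**5
               + (30 * comb(n, 5) + 20 * comb(n, 6)) * p**6
               - (comb(n, 3) * p**3) ** 2)

print("simulated std:", np.std(counts))      # fluctuates run to run
print("formula std:  ", var_formula ** 0.5)  # ~298.8 for n=50, p=0.6
```

The simulated standard deviation is itself a noisy estimate, which is why a single run (like the 262.078 above) can sit some distance from the exact 298.795.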
Difference between two statements What is the difference between these two pieces of code? ``` my $a = (); print $a; ``` ``` print (); ```
The scalar assignment operator evaluates its operands in scalar context. The stub operator (`()`) evaluates to `undef` in scalar context, so ``` my $a = (); # Create $a initialized to undef, then assign undef to $a. ``` is the same as ``` my $a = undef; # Create $a initialized to undef, then assign undef to $a. ``` which simplifies to ``` my $a; # Create $a initialized to undef. ``` --- The `print` operator evaluates its operands in list context. The stub operator (`()`) evaluates to an empty list in list context, so ``` print( () ); # Print no scalars, so print nothing. ``` is quite different than ``` print( $a ); # Print one scalar whose value is undef. ``` and ``` print( undef ); # Print one scalar whose value is undef. ``` --- But you didn't use ``` print( () ); ``` You actually used ``` print (); ``` which is a weird way of writing ``` print(); ``` When you don't specify any arguments, `print` prints `$_`, so ``` print (); # Print one scalar ($_) whose value is quite possibly undef. ``` is equivalent to ``` print($_); # Print one scalar ($_) whose value is quite possibly undef. ``` If warnings had been on, you would have received the following warning: ``` print (...) interpreted as function ```
Filtering a Spark dataframe based on date I have a dataframe of ``` date, string, string ``` I want to select dates before a certain period. I have tried the following with no luck ``` data.filter(data("date") < new java.sql.Date(format.parse("2015-03-14").getTime)) ``` I'm getting an error stating the following ``` org.apache.spark.sql.AnalysisException: resolved attribute(s) date#75 missing from date#72,uid#73,iid#74 in operator !Filter (date#75 < 16508); ``` As far as I can guess the query is incorrect. Can anyone show me what way the query should be formatted? I checked that all entries in the dataframe have values - they do.
The following solutions are applicable since **Spark 1.5**: For lower than: ``` // filter data where the date is less than 2015-03-14 data.filter(data("date").lt(lit("2015-03-14"))) ``` For greater than: ``` // filter data where the date is greater than 2015-03-14 data.filter(data("date").gt(lit("2015-03-14"))) ``` For equality, you can use either `equalTo` or `===`: ``` data.filter(data("date") === lit("2015-03-14")) ``` If your `DataFrame` date column is of type `StringType`, you can convert it using the `to_date` function: ``` // filter data where the date is greater than 2015-03-14 data.filter(to_date(data("date")).gt(lit("2015-03-14"))) ``` You can also filter according to a year using the `year` function: ``` // filter data where year is greater than or equal to 2016 data.filter(year($"date").geq(lit(2016))) ```
Removing whitespaces in a string I wrote this function to remove whitespaces in strings. Please help me improve it. I intended to use the function for a Big Integer ADT. ``` #include <iostream> #include <string> #include <string.h> void rs(char* str){ int i(0); int j(0); while((*(str + i) = *(str + j++)) != '\0') if(*(str + i) != ' ') i++; return; } int main() { std::string str = "Hello World"; char* result = strcpy((char*)malloc(str.length()+1), str.c_str()); rs(result); std::cout << result << std::endl; return 0; } ```
> > > ``` > char* result = strcpy((char*)malloc(str.length()+1), str.c_str()); > > ``` > > This seems more suited for a C program than a C++ one. In fact, with C++11, you don't need to write your own function. Behold: [`std::remove_if`](http://en.cppreference.com/w/cpp/algorithm/remove) **Note:** > > std::remove\_if > > > Removes all elements satisfying specific criteria from the range > [first, last) and returns a past-the-end iterator for the new end of > the range. > > > A call to remove is typically followed by a call to a container's > erase method, which erases the unspecified values and reduces the > physical size of the container to match its new logical size. > > > This is what you would do: ``` str.erase(remove_if(str.begin(), str.end(), ::isspace), str.end()); ``` I am not sure if this is the *safest* or *most efficient method*, but I think it's definitely an improvement to what you have done.
access variable from outside loop I know that this is fundamental JS, but I'd like a simple explanation. From what I've read, if I declare an empty variable outside of my loop, the variable assigned inside the loop should be accessible globally? Or am I totally wrong? I would like to access `randAd` from outside my `for` loop. ``` var mobileAds = [ "mobile/bb.jpg", "mobile/eyeko.jpg", "mobile/farfetch.jpg", "mobile/fsb.jpg" ]; var randNum = (Math.floor(Math.random() * mobileAds.length)); var randAd; var i; for (i = 0; i < mobileAds.length; ++i) { randAd = (mobileAds[randNum]); } ```
If you want to access every element of `randAd` outside the `for` loop, initialize it as an array with `var randAd = [];`. You can then easily access it after your for loop. But if you use it as a simple variable, `var randAd;`, then you'll always get the last value (each iteration overwrites it). So initialize it as an array and push every element inside the loop before outputting it. ``` var mobileAds = [ "mobile/bb.jpg", "mobile/eyeko.jpg", "mobile/farfetch.jpg", "mobile/fsb.jpg" ]; var randNum = (Math.floor(Math.random() * mobileAds.length)); var randAd = []; // see the change here var i; for (i = 0; i < mobileAds.length; ++i) { randAd.push(mobileAds[randNum]); // push every element here } console.log(randAd); ```
IAM Service Account Key vs Google Credentials File I'm writing code to generate and download a private key for a Google Cloud service account. Using the IAM API, I was able to [create a service account](https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts/create), and my call to [generate a key](https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys/create) seems to be working. I get back a Service Account Key as described on the [IAM API create key page](https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys#ServiceAccountKey), like ``` { "privateKeyType": "TYPE_GOOGLE_CREDENTIALS_FILE", "privateKeyData": "random-key-stringkajdkjakjfke", ... } ``` I downloaded this file as a JSON response and am trying to authenticate with it: ``` gcloud auth activate-service-account --key-file=service-account-key-file.json ``` Unfortunately, I get an error stating `The .json key file is not in a valid format.` When I go through the Google Cloud Console flow (IAM & Admin -> Service accounts -> ... -> Create Key -> Create) I get a downloaded JSON file that looks like ``` { "type": "service_account", "private_key": "----BEGIN-PRIVATE-KEY-----", "auth_uri": "https://gaiastaging.corp.google.com/o/oauth2/auth", } ``` This file looks completely different than the response from the IAM API. Explains my error! Unfortunately, this format doesn't seem to be described anywhere. It's mentioned briefly in some [docs](https://cloud.google.com/vision/docs/common/auth). Is it a Google Credentials File? I'd like to take the IAM response file/JSON and convert it to the second credentials file. I've tried writing some code to convert it, but there are some fields like `"auth_provider_x509_cert_url"` that I don't understand. Perhaps converting the file is the wrong approach as well? More generally: How can I generate a file and then use it to authenticate with gcloud? How should I describe/distinguish between both of the above files? Why is each type of file useful?
About the two files: A Google Credentials file and a Service Account Credentials file are the same thing - they're both the second type of file that I downloaded off the Google Cloud Console page. There are no great official docs pages on them, but they're referenced a lot, and they're probably also what's meant by Application Default Credentials. The JSON response from the IAM create call - this is just a response to an API call. It's not useful outside of parsing it with your application code. To generate a Google Credentials file: In the JSON response to the IAM create, there's a field `privateKeyData`. This field actually **contains the entire Google Credentials file**. It's just encoded as a base64 string. I just downloaded the file from HTML as ``` <a href="data:attachment/json;charset=utf-8;base64,THAT-LONG-privateKeyData-base64-string-here" download="service-account-key.json"> Download key </a> ``` Or if you just want to confirm that it contains all the information quickly, copy-paste the base64 `privateKeyData` field into a file `google-credentials` and decode it (on Linux) with: ``` base64 -d google-credentials ``` I was then able to run ``` gcloud auth activate-service-account --key-file=google-credentials.json ``` and got ``` Activated service account credentials for: [service-account-id@project-id.iam.gserviceaccount.com] ```
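The same decoding step in Python, as a small sketch (the `privateKeyData` field name comes from the IAM response above; the helper function itself is hypothetical):

```python
import base64
import json

def save_credentials(iam_response: dict, path: str = "google-credentials.json") -> None:
    """Decode the base64 privateKeyData field of the keys.create response
    and write it out as a credentials file usable by gcloud."""
    key_bytes = base64.b64decode(iam_response["privateKeyData"])
    with open(path, "wb") as f:
        f.write(key_bytes)
    # Sanity check: the decoded payload is itself JSON with a "type" field.
    assert json.loads(key_bytes)["type"] == "service_account"
```

After writing the file, the same `gcloud auth activate-service-account --key-file=google-credentials.json` call shown above should work.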
CIL ANTLR grammar? Is there any .NET CIL (AKA MSIL) ANTLR grammar?
According to [this publication](http://isea.nitk.ac.in/publications/web.pdf), Microsoft provides an MSIL grammar in the Microsoft Visual Studio installation file *asmparse.grammar*. If you do not have MSVS, the grammar is also listed in the publication (see page 79). If I'm not mistaken, that grammar could pretty much just be copied and pasted into an ANTLR grammar file (i.e. it's an *[LL](http://en.wikipedia.org/wiki/LL_parser) grammar*, not an *[LR](http://en.wikipedia.org/wiki/LR_parser)* one). If you're not familiar with the difference between the two, have a look at [this ANTLR article](http://www.antlr.org/wiki/display/ANTLR3/How+to+remove+global+backtracking+from+your+grammar).
Angular: Bind callback function using & and pass-in arguments I have a (simplified) directive ``` angular.module('myApp') .directive('myButton', function () { return { restrict: 'E', scope: { callbackFn: '&' }, template: '<button ng-click="callbackFn($event)"></button>' } }); ``` Now, in some parent controller I have defined a callback function: ``` this.myCallback = function ($event) { this.doIt($event); } ``` and the HTML: ``` <my-button callback-fn="page.myCallback()"></my-button> ``` (I'm using things like `bindToController` and `controllerAs`) The issue is that the `$event` is never passed to `myCallback`, which most likely has to do with how I bind this function (`&`). But on the other hand, inside `myCallback` I would like to use `this`. Is there some way to fix this without doing things like ``` var self = this; this.myCallback = function ($event) { self.doIt($event); } ```
You haven't completely set up your bindings correctly. You can pass back arguments from the directive to the parent controller via a key-value map. According to the [angular docs](https://docs.angularjs.org/api/ng/service/$compile) (emphasis mine): > > `&` or `&attr` - provides a way to execute an expression in the context of the parent scope. If no attr name is specified then the attribute name is assumed to be the same as the local name. Given `<widget my-attr="count = count + value">` and widget definition of `scope: { localFn:'&myAttr'`}, then isolate scope property `localFn` will point to a function wrapper for the `count = count + value` expression. Often it's desirable **to pass data from the isolated scope via an expression to the parent scope, this can be done by passing a map of local variable names and values into the expression wrapper fn. For example, if the expression is `increment(amount)` then we can specify the amount value by calling the `localFn` as `localFn({amount: 22})`.** > > > So that means in your consuming HTML you need to add parameters: ``` <my-button callback-fn="page.myCallback(parentEvent)"></my-button> ``` And then in the directive: ``` ...... restrict: 'E', scope: { callbackFn: '&' }, template: '<button ng-click="ctrl.callbackFn({parentEvent: $event})">Callback</button>' ```
Python: create a list containing dicts How would you turn this string: ``` str='ldap:alberthwang,eeid:67739|ldap:meng,eeid:107,building:CL5' ``` into a list that gives you this: ``` print x[1]['building']=CL5 ``` which would be: ``` x=[{'ldap':'alberthwang','eeid':'67739'},{'ldap':'meng','eeid':'107','building':'CL5'}] ``` I've tried to split the string first and append to a list: ``` sample=[] for s in str.split('|'): sample.append(s) ``` But I'm stuck on how to turn the list items into a dictionary that I can then use to populate another list.
``` text='ldap:alberthwang,eeid:67739|ldap:meng,eeid:107,building:CL5' sample=[ dict(item.split(':') for item in part.split(',')) for part in text.split('|')] print(sample) # [{'eeid': '67739', 'ldap': 'alberthwang'}, {'building': 'CL5', 'eeid': '107', 'ldap': 'meng'}] print(sample[1]['building']) # CL5 ``` 1. [List comprehensions](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) are a very convenient way to construct lists such as this. 2. A [dict can be constructed](http://docs.python.org/tutorial/datastructures.html#dictionaries) from an iterable of key-value pairs. The iterable used above was a [generator expression](http://docs.python.org/tutorial/classes.html#generator-expressions). 3. `str` is a built-in type, so assigning a string to `str` overwrites the builtin. It's better to choose some other variable name to avoid future surprising bugs. --- I read and write list comprehensions backwards: ``` [ expression # (3) for variable in # (2) iterable # (1) ] ``` (1): First, understand the iterable. In the solution above, this is `text.split('|')`. (2): `for variable in` causes `variable` to be assigned to the values in `iterable`, one at a time. (3): Finally, `expression` can be any Python expression, (usually) using `variable`. The syntax for generator expressions is almost the same. The difference between a list comprehension and a generator expression is that a list comprehension returns a list, while a generator expression returns an iterator -- an object that yields its contents on-demand (as it is looped over, or when `next` is called) instead of generating all the items at once as is the case with `list`s. A list can consume a lot of memory if the list is long. A generator expression will consume less memory (and can even be infinite) because not all elements have to exist in memory at the same time.
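If the nested comprehension is hard to read at first, the same parse written with explicit loops looks like this (a sketch purely for illustration):

```python
text = 'ldap:alberthwang,eeid:67739|ldap:meng,eeid:107,building:CL5'

sample = []
for part in text.split('|'):           # one record per '|' separated chunk
    record = {}
    for item in part.split(','):       # each item looks like 'key:value'
        key, value = item.split(':')
        record[key] = value
    sample.append(record)

print(sample[1]['building'])  # CL5
```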
trying to add jquery countdown timer for special price in opencart I am trying to put a jQuery countdown timer on the special price in OpenCart. Since we have a start date and an end date for the special price in the OpenCart admin panel, I want the countdown timer to show the time remaining `(days:Hours:Min:SEC)` for that special price. I got the code for a jQuery countdown and put it in the product template file, but it's not working, and I found no help or code on the internet. Thanks
Excellent question. As you noted, the data you wish to display is already part of the admin/backend of OpenCart, but it is not available on the frontend. I'll show you how to add it. Due to the [MVC](http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) architecture of OpenCart, you'll have to make changes in 3 places: the Model, the View and the Controller. First things first, you will have to get the data from the database. Because we're looking to make changes to the frontend, everything will be contained in the *catalog* directory. If you look at the code, you'll find *catalog/model/catalog/product.php*. This is where we're going to make our first change. The special price is available in the ModelCatalogProduct, but the special price end date is not. You can either modify the existing getProduct() method, or you can create your own method. I am going to show you the latter, while the former is left as an exercise for the user. **catalog/model/catalog/product.php** ``` class ModelCatalogProduct extends Model { ... // Return an array containing special (price, date_start, date_end) // or false if no special price exists. public function getSpecialPriceDates($product_id) { if ($this->customer->isLogged()) { $customer_group_id = $this->customer->getCustomerGroupId(); } else { $customer_group_id = $this->config->get('config_customer_group_id'); } $query = $this->db->query("SELECT price, date_start, date_end FROM " . DB_PREFIX . "product_special WHERE product_id = '" . (int)$product_id . "' AND customer_group_id = '" . (int)$customer_group_id . "' AND ((date_start = '0000-00-00' OR date_start < NOW()) AND (date_end = '0000-00-00' OR date_end > NOW())) ORDER BY priority ASC, price ASC LIMIT 1"); if ($query->num_rows) { return array( 'special' => $query->row['price'], 'date_start' => $query->row['date_start'], 'date_end' => $query->row['date_end'], ); } else { return false; } } ... } ``` Great, now there is a function getSpecialPriceDates() you can call to find out when a product special will end. Let's make this data available to the View. In order to do that, we're going to have to add it to the Controller. Look in the ControllerProductProduct for where the 'special' variable is set. **catalog/controller/product/product.php** ``` ... if ((float)$product_info['special']) { $this->data['special'] = $this->currency->format($this->tax->calculate($product_info['special'], $product_info['tax_class_id'], $this->config->get('config_tax'))); // +++ NEW CODE $special_info = $this->model_catalog_product->getSpecialPriceDates($product_id); if ($special_info) { $this->data['special_date_end'] = $special_info['date_end']; } else { $this->data['special_date_end'] = false; } // +++ END NEW CODE } else { $this->data['special'] = false; } ... ``` The last task is to implement your timer in the product view. This will be located somewhere like *catalog/view/theme/default/template/product/product.tpl* (if you have your own theme, replace *default* with *{your-theme}*). There are a lot of different countdown timer solutions for jQuery, pick your [favorite](http://plugins.jquery.com/project/countdown2 "jQuery Countdown"). **catalog/view/theme/default/template/product/product.tpl** ``` <?php if (!$special) { ?> <?php echo $price; ?> <?php } else { ?> <span class="price-old"><?php echo $price; ?></span> <span class="price-new"><?php echo $special; ?></span> <?php if ($special_date_end): ?> <!-- TIMER CODE HERE --> <div class="timer"></div> <?php endif; ?> <?php } ?> ```
Angular Elements - pass a complicated input to my web component I lately started to play a little bit with Angular Elements, which was released in Angular v6, and I opened a small sandbox project for that purpose. <https://github.com/Slash7GNR/ndv-angular-elements> Now I've tried to add a more complicated input to my web component - I've tried to add an array input as follows: in app.component.ts I've added: ``` @Input() welcomeMessages: string[]; ``` and in the template I've added: ``` <div *ngFor="let message of welcomeMessages; let index = index"> {{index}} - {{message}} </div> ``` Then, I've created a static html file and added the following code: (pls-wrk is a web component generated by the angular elements feature) ``` <pls-wrk></pls-wrk> <script> let arr = ["wow","wow2"]; let elementByTag = document.getElementsByTagName('pls-wrk')[0]; elementByTag.setAttribute("welcome-messages", arr); </script> <script src="ndv-elements.js"></script> ``` I'm getting the following error: > > Uncaught Error: Cannot find a differ supporting object 'wow,wow2' of > type 'string'. NgFor only supports binding to Iterables such as > Arrays. > > > I've tried as well to bind an array via the element itself like this: ``` <pls-wrk welcome-messages="['wow','wow2']"></pls-wrk> ``` But this didn't work either. I've tried as well to bind an input which is an object, but I couldn't see the result in the html. Anyone here who could successfully pass an array / an object as an input to an angular element?
So I found the answer; I had two main problems: 1. I had to add the angular elements script (ndv-elements.js) before initializing the attributes. 2. The thing with HTML attributes is that they are all passed as strings, in contrast to Angular or other frameworks where you can pass objects / arrays. So when using the setAttribute method, the attribute is initialized as a string, which causes a problem with inputs that are objects / arrays. The solution is to set the property directly: elementByTag.welcomeMessages = array; So the full solution will be: ``` <pls-wrk> </pls-wrk> <script src="ndv-elements.js"></script> <script> let arr = ['a1','a2','a3']; let wrk = document.querySelector('pls-wrk'); wrk.welcomeMessages = arr; </script> ``` References: <https://github.com/angular/angular/issues/24415> <https://medium.com/@gilfink/using-complex-objects-arrays-as-props-in-stencil-components-f2d54b093e85>
Is F#'s implementation of monads unique with respect to the amount of keywords available to it? I only know F#. I haven't learned the other functional programming languages. All the examples that I have seen for monads only describe the bind and unit methods. F# has lots of keywords (e.g. `let!`, `do!`, etc.) that allow you to do different things within the same computational expression. This seemingly gives you more power than your basic bind and unit methods. Is this unique to F# or is it common across functional programming languages?
Yes, I think that the F# syntax for computation expressions is unique in that it provides direct syntactic support for different types of computations. It can be used for working with *monoids*, usual *monads* and also *MonadPlus* computations from Haskell. I wrote about these in the introduction of [my Master thesis](http://tomasp.net/academic/theses/events/events.pdf). I believe it is a quite readable part, so you can go to page 27 to read it. Anyway, I'll copy the examples here: A **monoid** is used just for concatenating values using some "+" operation (`Combine`). You can use it for example for building strings (this is inefficient, but it demonstrates the idea): ``` type StringMonoid() = member x.Combine(s1, s2) = String.Concat(s1, s2) member x.Zero() = "" member x.Yield(s) = s let str = new StringMonoid() let hello = str { yield "Hello " yield "world!" };; ``` **Monads** are the familiar example that uses *bind* and *return* operations of computation expressions. For example the *maybe* monad represents computations that can fail at any point: ``` type MaybeMonad() = member x.Bind(m, f) = match m with Some(v) -> f v | None -> None member x.Return(v) = Some(v) let maybe = new MaybeMonad() let rec productNameByID() = maybe { let! id = tryReadNumber() let! prod = db.TryFindProduct(id) return prod.Name } ``` **Additive monads** (aka `MonadPlus` in Haskell) are a combination of the two. It is a bit like a monadic computation that can produce multiple values. A common example is *list* (or *sequence*), which can implement both *bind* and *combine*: ``` type ListMonadPlus() = member x.Zero() = [] member x.Yield(v) = [v] member x.Combine(a, b) = a @ b member x.Bind(l, f) = l |> List.map f |> List.concat let list = new ListMonadPlus() let cities = list { yield "York" yield "Orleans" } let moreCities = list { let! n = cities yield n yield "New " + n } // Creates: [ "York"; "New York"; "Orleans"; "New Orleans" ] ``` There are some additional keywords that do not directly correspond to any theoretical idea. The `use` keyword deals with resources, and `for` and `while` can be used to implement looping. Sequence/list comprehensions actually use `for` instead of `let!`, because that makes much more sense from the syntactic point of view (and `for` usually takes some sequence - although it may be e.g. asynchronous).
What is the default animation easing function in iOS? In iOS animations is the default easing function (`UIViewAnimationOptionCurveEaseInOut`) a quadratic or a cubic? Or what else?
It's a [cubic bézier curve](http://en.wikipedia.org/wiki/B%C3%A9zier_curve#Cubic_B.C3.A9zier_curves). The precise control points aren't documented, so they could change between releases, but you can get them via `CAMediaTimingFunction`: ``` CAMediaTimingFunction *func = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]; for (int i = 0; i < 4; i++) { float *values = malloc(sizeof(float) * 2); [func getControlPointAtIndex:i values:values]; NSLog(@"Control point %i: (%f, %f)", i+1, values[0], values[1]); free(values); } ``` The values I get with this are `(0.0, 0.0)`, `(0.42, 0.0)`, `(0.58, 1.0)`, `(1.0, 1.0)`, which corresponds roughly to this curve: ![Curve](https://i.stack.imgur.com/F0fog.png)
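To see what those control points mean numerically, here is a small Python sketch sampling the cubic Bézier. Note the curve is parametric: `x(t)` is time and `y(t)` is animation progress, so mapping time to progress exactly would require inverting `x(t)`; this just prints parametric samples:

```python
def bezier(t: float, p0: float, p1: float, p2: float, p3: float) -> float:
    """Evaluate one coordinate of a cubic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

# Control points recovered above: (0, 0), (0.42, 0), (0.58, 1), (1, 1)
for i in range(5):
    t = i / 4
    x = bezier(t, 0.0, 0.42, 0.58, 1.0)  # time axis
    y = bezier(t, 0.0, 0.0, 1.0, 1.0)    # progress axis
    print(f"t={t:.2f}  time={x:.3f}  progress={y:.3f}")
```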
sweet-alert display HTML code in text I am using the sweet-alert plugin to display an alert. With a classical config (defaults), everything goes OK. But when I want to add an HTML tag into the TEXT, it displays `<b>...</b>` without making it bold. After searching for the answer, it looks like I don't have the right search term... How do I make sweet alert display the text with HTML code? ``` var hh = "<b>test</b>"; swal({ title: "" + txt + "", text: "Testno sporocilo za objekt " + hh + "", confirmButtonText: "V redu", allowOutsideClick: "true" }); ```
The SweetAlert repo seems to be unmaintained. There's a bunch of Pull Requests without any replies, the last merged pull request was on Nov 9, 2014. I created [SweetAlert2](https://github.com/limonte/sweetalert2) with HTML support in modal and some other options for customization modal window - width, padding, Esc button behavior, etc. ``` Swal.fire({ title: "<i>Title</i>", html: "Testno sporocilo za objekt: <b>test</b>", confirmButtonText: "V <u>redu</u>", }); ``` ``` <script src="https://cdn.jsdelivr.net/npm/sweetalert2@11"></script> ```
Class template argument deduction with partial specialization I'm having some trouble understanding all the limitations of the new C++17 feature that allows template deduction on constructors. In particular, this example compiles correctly: ``` struct B {}; template <typename T, typename = T> struct A { A(T) {} }; int main() { B b; A a(b); // ok } ``` While this one does not: ``` struct B {}; template <typename T, typename = T> struct A; template <typename T> struct A<T> { A(T) {} }; int main() { B b; A a(b); // error } ``` The error in this second case is: ``` main.cpp: In function ‘int main()’: main.cpp:17:14: error: class template argument deduction failed: A a(b); ^ main.cpp:17:14: error: no matching function for call to ‘A(B&)’ main.cpp:4:12: note: candidate: template<class T, class> A(A<T, <template-parameter-1-2> >)-> A<T, <template-parameter-1-2> > struct A; ^ main.cpp:4:12: note: template argument deduction/substitution failed: main.cpp:17:14: note: ‘B’ is not derived from ‘A<T, <template-parameter-1-2> >’ A a(b); ^ ``` Why is this happening?
Class template argument deduction only considers constructors from the **primary** class template in order to do deduction. In the first example, we have one constructor that we synthesize a function template for: ``` template <class T> A<T> __f(T ); ``` The result of `__f(b)` is `A<B>`, and we're done. But in the second example, the primary class template is just: ``` template <typename T, typename = T> struct A; ``` It has no constructors, so we have no function templates to synthesize from them. All we have is a [hypothetical default constructor](http://eel.is/c++draft/over.match.class.deduct#1.2.sentence-1) and the [copy deduction guide](http://eel.is/c++draft/over.match.class.deduct#1.3.sentence-1), which together give us this overload set: ``` template <class T> A<T> __f(); template <class T> A<T> __f(A<T> ); ``` Neither of which are viable for `__f(b)` (the compile error you get is about trying to match the copy deduction guide), so deduction fails. --- If you want this to succeed, you'll have to write a deduction guide: ``` template <class T> A(T ) -> A<T>; ``` Which would allow `A a(b)` to work.
Google Geochart - Unable to set region to a specific province I'm trying to display city markers for cities within the Province of Quebec map. Based on Google documentation, we should be able to set the `resolution` option to `provinces` and set the `region` to the ISO code (e.g. US-GA). When I try with `CA-QC` (found this code [here on Wikipedia](http://en.wikipedia.org/wiki/ISO_3166-2:CA)), the map `<div>` displays this message: **Requested map does not exist** See Fiddle: ``` google.setOnLoadCallback(drawRegionsMap); function drawRegionsMap() { var data = google.visualization.arrayToDataTable([ ['City', 'Popularity'], ['Quebec', 200], ['Montreal', 300], ['Sorel-Tracy', 400], ['Boucherville', 500] ]); var options = { enableRegionInteractivity: 'true',resolution: 'provinces', region:'CA-QC'}; var chart = new google.visualization.GeoChart(document.getElementById('regions_div')); chart.draw(data, options); } ``` ``` <script type="text/javascript" src="https://www.google.com/jsapi?autoload={'modules':[{'name':'visualization','version':'1.1','packages':['geochart']}]}"></script> <div id="regions_div" style="width: 900px; height: 500px;"></div> ``` Is there a way/workaround to do this? Thanks
It does not seem possible to set regions in this country. According to [Visualization: Geochart](https://developers.google.com/chart/interactive/docs/gallery/geochart): > > 'provinces' - Supported only for country regions and US state regions. > Not supported for all countries; please test a country to see whether > this option is supported. > > > But you could consider another option, draw popularity using [circle markers](https://developers.google.com/maps/documentation/javascript/examples/circle-simple) instead of `geocharts` as demonstrated below: ``` function initialize() { var mapOptions = { zoom: 6, center: new google.maps.LatLng(46.579246, -72.024826) }; var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions); displayPopularity(map); } function displayPopularity(map) { var citymap = {}; citymap['Quebec'] = { center: new google.maps.LatLng(46.769492, -71.290357), population: 200 }; citymap['Montreal'] = { center: new google.maps.LatLng(45.510845, -73.567888), population: 300 }; citymap['Sorel-Tracy'] = { center: new google.maps.LatLng(46.421575, -73.118540), population: 400 }; citymap['Boucherville'] = { center: new google.maps.LatLng(45.601591, -73.438919), population: 500 }; for (var city in citymap) { var populationOptions = { strokeColor: '#00FF00', strokeOpacity: 0.8, strokeWeight: 2, fillColor: '#00FF00', fillOpacity: 0.35, map: map, center: citymap[city].center, radius: Math.sqrt(citymap[city].population) * 1000 }; var cityCircle = new google.maps.Circle(populationOptions); (function (cityCircle,city) { //create info window var infoWindow = new google.maps.InfoWindow({ content: city }); google.maps.event.addListener(cityCircle, 'click', function(ev) { infoWindow.setPosition(cityCircle.getCenter()); infoWindow.open(map); }); })(cityCircle,city); } } google.maps.event.addDomListener(window, 'load', initialize); ``` ``` <!DOCTYPE html> <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no"> <meta charset="utf-8"> <title>Popularity map</title> <style> html, body, #map-canvas { height: 100%; margin: 0px; padding: 0px; } </style> <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&signed_in=true"></script> </head> <body> <div id="map-canvas"></div> </body> </html> ```
Constructors versus Initializers in C# > > **Possible Duplicate:** > > [What's the difference between an object initializer and a constructor?](https://stackoverflow.com/questions/740658/whats-the-difference-between-an-object-initializer-and-a-constructor) > > > In C# you can construct an object like: ``` public class MyObject{ int val1; int val2; public MyObject(int val1, int val2){ this.val1 = val1; this.val2 = val2; } } ``` With: ``` MyObject ob = new MyObject(1,2); ``` or with: ``` MyObject ob = new MyObject(){ val1 = 1, val2 = 2 }; ``` What is the difference between these kinds of constructors?
``` MyObject ob = new MyObject(){ val1 = 1, val2 = 2 }; ``` is just syntactic sugar (i.e. shorthand) for ``` MyObject ob = new MyObject(); ob.val1 = 1; ob.val2 = 2; ``` One difference between the two is that you can set [readonly](http://msdn.microsoft.com/en-us/library/acdd6hb7%28v=vs.71%29.aspx) fields from the constructor, but not by using the shorthand. A second difference is that a constructor with parameters forces the client to provide those values. See [Constructor-injection vs. Setter injection](http://misko.hevery.com/2009/02/19/constructor-injection-vs-setter-injection/) for a good bit of background reading.
Django datetime not validating right I'm using the HTML5 `datetime-local` input type to try and get some datetime data into my database. The `ModelForm` `class Meta:` looks like the following: ``` class Meta: model = ScheduleEntry fields = ['calendar', 'title', 'start', 'end', 'assets', 'users'] widgets = { 'calendar': forms.Select(attrs={ 'class': 'fom-control chosen-select' }), 'title': forms.TextInput(attrs={ 'class': 'form-control' }), 'start': forms.DateTimeInput(attrs={ 'type':'datetime-local', 'class':'form-control' }, format='%Y-%m-%dT%H:%M'), 'end': forms.DateTimeInput(attrs={ 'type': 'datetime-local', 'class': 'form-control' }, format='%Y-%m-%dT%H:%M'), 'assets': forms.SelectMultiple(attrs={ 'class': 'form-control chosen-select' }), 'users': forms.SelectMultiple(attrs={ 'class': 'form-control chosen-select', }) } ``` I keep failing on form validation and it's causing me to pull my hair out. [This is](https://docs.djangoproject.com/en/2.1/ref/forms/widgets/#django.forms.DateTimeInput) the documentation page that shows it should work, but it looks like I'm missing something? **EDIT FOR CLARIFICATION:** The error message is for both `start` and `end` and it's `Enter a valid date/time`
## The misconception: To quote the [docs](https://docs.djangoproject.com/en/2.1/ref/forms/widgets/): > > Widgets should not be confused with the form fields. Form fields deal with the logic of input validation and are used directly in templates. Widgets deal with rendering of HTML form input elements on the web page and extraction of raw submitted data. > > > Widgets have no influence on validation. You gave your widgets a format argument, but that does not mean the form field validation will use it - it only sets the initial format the widget's content is rendered with: > > **format**: The format in which this field’s initial value will be displayed. > > > --- ## The solutions Two options: - provide the form field ([`forms.DateTimeField`](https://docs.djangoproject.com/en/2.1/ref/forms/fields/#datetimefield)) with the datetime format you would like to use by passing a `input_formats` argument ``` class MyIdealDateForm(forms.ModelForm): start = forms.DateTimeField( input_formats = ['%Y-%m-%dT%H:%M'], widget = forms.DateTimeInput( attrs={ 'type': 'datetime-local', 'class': 'form-control'}, format='%Y-%m-%dT%H:%M') ) ``` This needs to be done for every form field (and probably by extension even their widgets). What you are doing here is effectively overwriting the settings (see the next point). - Add your datetime format to the [settings](https://docs.djangoproject.com/en/2.1/ref/settings/#datetime-input-formats) as the first item. This will apply globally to all formfields and widgets that use that setting.
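For the second option, a sketch of what the settings entry might look like (`DATETIME_INPUT_FORMATS` is the standard Django setting; note that with `USE_L10N = True`, localized formats can take precedence, so verify the behavior in your setup):

```python
# settings.py -- put the HTML5 datetime-local format first so it is
# tried before the defaults.
DATETIME_INPUT_FORMATS = [
    '%Y-%m-%dT%H:%M',      # what <input type="datetime-local"> submits
    '%Y-%m-%d %H:%M:%S',   # keep Django's usual formats as fallbacks
    '%Y-%m-%d %H:%M',
    '%Y-%m-%d',
]
```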
GotFocus and LostFocus events both called repeatedly; when handling the got- and lost-focus events of multiple textboxes it goes into an infinite loop Following is my code to handle the GotFocus and LostFocus events for all textboxes available on the form. ``` private void Form1_Load(object sender, EventArgs e) { foreach (Control c in this.Controls) { if (c is TextBox) { c.GotFocus += new System.EventHandler(this.txtGotFocus); c.LostFocus += new System.EventHandler(this.txtLostFocus); } } } private void txtGotFocus(object sender, EventArgs e) { TextBox tb = (TextBox)sender; if (tb != null) { tb.BackColor = Color.Silver; tb.BorderStyle = BorderStyle.FixedSingle; } } private void txtLostFocus(object sender, EventArgs e) { TextBox tb = (TextBox)sender; if (tb != null) { tb.BackColor = Color.White; tb.BorderStyle = BorderStyle.Fixed3D; } } ``` It works fine with the first textbox, but when I go to the next textbox by pressing the tab key, both events are called repeatedly and the textbox blinks. After some time an error message appears in the code like: > > A callback was made on a garbage collected delegate of type 'System.Windows.Forms!System.Windows.Forms.NativeMethods+WndProc::Invoke'. This may cause application crashes, corruption and data loss. When passing delegates to unmanaged code, they must be kept alive by the managed application until it is guaranteed that they will never be called. > > > What's wrong with the code? Is there any solution?
``` c.LostFocus += new System.EventHandler(this.txtLostFocus); ``` LostFocus is a *dangerous* event; the MSDN Library article for Control.LostFocus warns about this and strongly recommends using the Leave event instead. This is something you can see in the designer: drop a TextBox on the form and click the lightning bolt icon in the Properties window. Note how both the GotFocus and LostFocus events are *not* visible. You must use the Enter and Leave events instead. Some background on what is going on here. Your program blows up because you assign the BorderStyle property. That's a "difficult" property, one that affects the style flag of the window, the one that's passed to the native CreateWindowEx() function. So changing the border style requires Winforms to create the native window again. This is what causes the flicker you see: the text box is destroyed and re-created, then repainted. But that has side-effects beyond the flicker; it also causes the low-level GotFocus and LostFocus events to fire, because the destroyed window of course also loses the focus. This interacts very poorly with your program since your LostFocus event handler changes the BorderStyle again, forcing Winforms to yet again recreate the window. And fire the GotFocus event, so you change the BorderStyle yet again. This repeats over and over again; you see the textbox rapidly blinking. This doesn't go on endlessly: after 10,000 times of creating the window, the operating system pulls the plug and doesn't let your program create yet another one. The hard crash on the window procedure is the outcome. This is not a problem with the Enter and Leave events; they don't work from the low-level Windows notification, so they don't fire when the textbox window is recreated. You can only get rid of the one-time flicker, if it is still bothering you, by not changing the BorderStyle property.
Openlayers - Projection issues when getting latitude/longitude from a point I'm trying to get the latitude/longitude from a draggable marker with OpenLayers and OSM, but I can't find the right settings for the projection conversion. What am I doing wrong? Here is the code: <http://pastie.org/2300321> (see addMarker l140 & updateTargets l153) and a little [demo test](http://labs.julien-guigner.name/openlayers/). If you submit an address, then drag the marker, the longitude and latitude are wrong. I tested a few different projections but I'm not sure what I have to use…
I think the problem is inside `updateTargets` method: ``` var point = this.feature.geometry; var pixel = new OpenLayers.Pixel(point.x, point.y); var coord = this.map.getLonLatFromPixel(pixel).transform( new OpenLayers.Projection("EPSG:900913"), new OpenLayers.Projection("EPSG:4326") ); ``` this.feature.geometry is already specified in lon/lat coordinates, not in pixels. So I suggest that you skip second line and do the conversion from OpenStreetMap projection to lon/lat directly on geometry object: ``` var coord = this.feature.geometry.transform( new OpenLayers.Projection("EPSG:900913"), new OpenLayers.Projection("EPSG:4326") ); ```
how to host angular 2 website? How do I host an Angular 2 website? I am new to Angular 2 and I made a simple website with no back-end. When I try to open the index.html file directly, it opens with errors, but after the command `"npm start"`, which runs a local server on the computer, it works fine. So, how do I host this website on simple hosting sites (not a dedicated server..!)? I think hosting sites automatically find the index.html file, but here is the problem: index.html doesn't work without the `"npm start"` command. Do I have to start a process for that on the server? Please guide me.
Host your Angular 2 app on **Firebase** using these simple steps. First, create a project with the Angular CLI; get more info here: <https://cli.angular.io/> **Step 1: Build your App** Run the below cmd to build ``` ng build --prod ``` **Step 2: Create a Firebase project and install the Firebase CLI** Open the Firebase console at <https://console.firebase.google.com/> and create a new Firebase project. To install the Firebase command line tools run: ``` npm install -g firebase-tools ``` **Step 3: Deploy to Firebase** Run the below firebase cmd to login: ``` firebase login ``` It will open the browser and ask you for authentication. Login with your Firebase account; thereafter you can close the browser window. On the command line you'll receive the message that the login has been performed successfully. Now run the below cmd: ``` firebase init ``` First of all you're asked which of the Firebase client features you want to use. You should select the option Hosting: Configure and deploy Firebase Hosting site. Next the Firebase client will ask which folder to use for deployment. Type in **dist**. That is important because that is the location where our production build is stored. Next you're asked if this app is a single-page app and if it should rewrite all URLs to index.html. In our case we need to answer yes. The last question is whether Firebase should overwrite the file index.html. The answer to that question is no. Now, run the below cmd to deploy: ``` firebase deploy ``` Firebase will provide a **URL** which you can use to access your application online. ***[Update]*** Now, after you have successfully deployed your app, if you want to make some changes and deploy the code to the same URL, follow the same procedure, but make sure you are pointing to your project. **To list all projects use this command:** ``` firebase list ``` **To make a project the current project use:** ``` firebase use <project_id> ```
When should I prefix ROLE\_ with Spring Security? In Spring Security, when is it appropriate to add the `"ROLE_"` prefix? In examples using `@PreAuthorize("hasRole('ROLE_USER')")`, it does. But in this example, it doesn't: ``` http .httpBasic() .and() .authorizeRequests() .antMatchers(HttpMethod.POST, "/books").hasRole("ADMIN") ``` What about the following? ``` SecurityContext securityContext = new SecurityContextImpl(); final Properties users = new Properties(); users.put("joe","secret,ADMIN,enabled"); <-- here InMemoryUserDetailsManager manager = new InMemoryUserDetailsManager(users); ``` and ``` Collection<GrantedAuthority> grantedAuthorities = new ArrayList<GrantedAuthority>(); grantedAuthorities.add(new SimpleGrantedAuthority("ROLE_ADMIN")); <-- here AnonymousAuthenticationToken anonymousAuthenticationToken = new AnonymousAuthenticationToken("test", manager.loadUserByUsername("joe"), grantedAuthorities); securityContext.setAuthentication(anonymousAuthenticationToken); SecurityContextHolder.setContext(securityContext); ``` Are there any specific rules of the usage?
## Automatic `ROLE_` prefixing As [Spring Security 3.x to 4.x migration guide](http://docs.spring.io/spring-security/site/migrate/current/3-to-4/html5/migrate-3-to-4-jc.html#m3to4-role-prefixing) states: > > Spring Security 4 automatically prefixes **any role** with `ROLE_`. The > changes were made as part of [SEC-2758](https://github.com/spring-projects/spring-security/issues/2984) > > > With that being said, the `ROLE_` prefix in the following annotation is redundant: ``` @PreAuthorize("hasRole('ROLE_USER')") ``` Since you're calling `hasRole` method, the fact that you're passing a **role** is implied. Same is true for the following expression: ``` antMatchers(HttpMethod.POST, "/books").hasRole("ADMIN") ``` But for the: ``` new SimpleGrantedAuthority("ROLE_ADMIN") ``` Since this is an authority, not a role, you should add the `ROLE_` prefix (If your intent is to create a role!). Same is true for calling `public InMemoryUserDetailsManager(Properties users)` constructor, since it's using an authority [internally](https://github.com/spring-projects/spring-security/blob/master/core/src/main/java/org/springframework/security/provisioning/InMemoryUserDetailsManager.java#L64).
A complex condition inside v-if I want to create a complex condition to pass to the `v-if` directive. I have tried the following. ``` <div v-if="(order.order_products[key].statuses[0].id) != 3 || order.status != 3" > ``` Can I add a complex condition in Vue's `v-if`? This is not working. I also tried with `&&` but that wasn't working, either. I haven't found anything in the documentation about this.
Firstly, to answer your question. > > Can I add a complex condition in Vue's `v-if`? > > > You **can** pass an arbitrary JavaScript expression to the `v-if` directive in Vue, including a complex boolean expression which contains operators `||` or `&&`. You can test this on your own. For example, try having the following template. ``` <div v-if="true && false">I am not visible!</div> ``` Of course, you might try out something less trivial, too: ``` <div v-if="1 == 2 || (1 + 2 == 3 && 4 == 4)">I am visible!</div> ``` --- Your expression looks good, but based on the provided information it's impossible to deduce what exactly is wrong. Your problem is something else: maybe the data is not in the format you thought it would be, or maybe your logic has a flaw in it.
Filter nested array using jmes query I have to get the name of companies in which 'John' worked in the 'sales' department. My JSON looks like this: ``` [ { "name" : "John", "company" : [{ "name" : "company1", "department" : "sales" }, { "name" : "company2", "department" : "backend" }, { "name" : "company3", "department" : "sales" } ], "phone" : "1234" } ] ``` And my jmesquery is like this: ``` jmesquery: "[? name=='John'].company[? department=='sales'].{Company: name}" ``` But with this query, I'm getting a `null` array.
This is because your first filter `[?name=='John']` is creating a [projection](https://jmespath.org/tutorial.html#projections), and more specifically a [filter projection](https://jmespath.org/tutorial.html#filter-projections), that you will have to reset in order to further filter it. Resetting a projection can be achieved using [pipes](https://jmespath.org/examples.html#pipes). > > Projections are an important concept in JMESPath. However, there are times when projection semantics are not what you want. A common scenario is when you want to operate of the result of a projection rather than projecting an expression onto each element in the array. > For example, the expression `people[*].first` will give you an array containing the first names of everyone in the people array. What if you wanted the first element in that list? If you tried `people[*].first[0]` that you just evaluate `first[0]` for each element in the people array, and because indexing is not defined for strings, the final result would be an empty array, `[]`. To accomplish the desired result, you can use a pipe expression, `<expression> | <expression>`, to indicate that a projection must stop. > > > *Source: <https://jmespath.org/tutorial.html#pipe-expressions>* So, here would be a first step in your query: ``` [?name=='John'] | [].company[?department=='sales'].{Company: name} ``` --- This said, this still ends in an array of array: ``` [ [ { "Company": "company1" }, { "Company": "company3" } ] ] ``` Because you can end up with multiple people named `John` in a `sales department`. So, one array for the `users` and another for the `companies/departments`. In order to fix this, you can use the [flatten operator](https://jmespath.org/specification.html#flatten-operator): `[]`. So we end with: ``` [?name=='John'] | [].company[?department=='sales'].{Company: name} [] ``` Which gives: ``` [ { "Company": "company1" }, { "Company": "company3" } ] ```
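To experiment with the final expression outside of whatever tool embeds it, here is a quick sketch using the `jmespath` Python package (`pip install jmespath`):

```python
import jmespath

data = [
    {
        "name": "John",
        "company": [
            {"name": "company1", "department": "sales"},
            {"name": "company2", "department": "backend"},
            {"name": "company3", "department": "sales"},
        ],
        "phone": "1234",
    }
]

expr = "[?name=='John'] | [].company[?department=='sales'].{Company: name} []"
print(jmespath.search(expr, data))
# [{'Company': 'company1'}, {'Company': 'company3'}]
```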
How to make a histogram for non-numeric variables in python **Sample data** ``` import pandas as pd import matplotlib.pyplot as plt dummy = {'id': [1,2,3,4,5], 'brand': ['MS', 'Apple', 'MS', 'Google', 'Apple'], 'quarter': ['2017Q2', '2017Q2', '2017Q2', '2016Q1', '2015Q1']} dummyData = pd.DataFrame(dummy, columns = ['id', 'brand', 'quarter']) dummyData # id brand quarter # 0 1 MS 2017Q2 # 1 2 Apple 2017Q2 # 2 3 MS 2017Q2 # 3 4 Google 2016Q1 # 4 5 Apple 2015Q1 ``` Now I want to plot a histogram using matplotlib and pandas; here is the description - X Axis : Quarter - Y Axis : Count of values - Histogram Bin: filled by brand, e.g. 2017Q2 has two color values, for MS and Apple - Legends : Brand Name I have an R background and it's pretty easy using ggplot. I want to do the same in Python, but I am not finding any suitable code and I am getting the below-mentioned error ``` TypeError: Empty 'DataFrame': no numeric data to plot ```
IIUC, you can use `groupby` + `count` + `unstack` + `plot` - ``` plt.style.use('ggplot') dummyData.groupby(['quarter', 'brand'])\ .brand.count().unstack().plot.bar(legend=True) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/gbf3j.png)](https://i.stack.imgur.com/gbf3j.png) For reference, this is what is plotted - ``` brand Apple Google MS quarter 2015Q1 1.0 NaN NaN 2016Q1 NaN 1.0 NaN 2017Q2 1.0 NaN 2.0 ```
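An alternative sketch producing the same bar chart with `pd.crosstab`, which fills missing brand/quarter combinations with 0 instead of NaN:

```python
import pandas as pd
import matplotlib.pyplot as plt

plt.style.use('ggplot')
pd.crosstab(dummyData['quarter'], dummyData['brand']).plot.bar(legend=True)
plt.show()
```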
How to take a screenshot of a view and email it? Can I make my app take a screenshot of the contents of a view and attach it to an email? How?
You can convert your view to an image, then you could create an email with it. This code ([from here](http://iphonedevelopertips.com/email/how-to-send-email-with-attachments-example-using-iphone-camera-to-email-a-photo.html)) will allow you to send an email with an attachment: ``` - (void)emailImageWithImageData:(NSData *)data { MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init]; picker.mailComposeDelegate = self; // Set the subject of email [picker setSubject:@"Picture from my iPhone!"]; // Add email addresses // Notice three sections: "to" "cc" and "bcc" [picker setToRecipients:[NSArray arrayWithObjects:@"emailaddress1@domainName.com", @"emailaddress2@domainName.com", nil]]; [picker setCcRecipients:[NSArray arrayWithObject:@"emailaddress3@domainName.com"]]; [picker setBccRecipients:[NSArray arrayWithObject:@"emailaddress4@domainName.com"]]; // Fill out the email body text NSString *emailBody = @"I just took this picture, check it out."; // This is not an HTML formatted email [picker setMessageBody:emailBody isHTML:NO]; // Attach image data to the email // 'CameraImage.png' is the file name that will be attached to the email [picker addAttachmentData:data mimeType:@"image/png" fileName:@"CameraImage"]; // Show email view [self presentModalViewController:picker animated:YES]; //if you have a navigation controller: use that to present, else the user will not //be able to tap the send/cancel buttons //[self.navigationController presentModalViewController:picker animated:YES]; // Release picker [picker release]; } - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { // Called once the email is sent // Remove the email view controller [self dismissModalViewControllerAnimated:YES]; } ``` To convert your view graphical representation to an image, use the code ([from here](https://stackoverflow.com/questions/2200736/how-to-take-a-screenshot-programmatically)): ``` UIGraphicsBeginImageContext(self.window.bounds.size); [self.window.layer renderInContext:UIGraphicsGetCurrentContext()]; UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); NSData * data = UIImagePNGRepresentation(image); [self emailImageWithImageData:data]; ```
Convert items in List<> to a concatenated and formatted string? Given: ``` protected class Marker { public string Name { get; set; } public string Lat { get; set; } public string Long { get; set; } }; List<Marker> allMarkers = new List<Marker>(); allMarkers.Add(new Marker{Name="Bondi Beach", Lat = "-33.890542", Long = "151.274856"}); allMarkers.Add(new Marker{Name="Coogee Beach", Lat = "-33.923036", Long = "151.259052"}); allMarkers.Add(new Marker{Name="Cronulla Beach", Lat = "-34.028249", Long = "151.157507"}); allMarkers.Add(new Marker{Name="Manly Beach", Lat = "-33.800101", Long = "151.287478"}); allMarkers.Add(new Marker{Name="Maroubra Beach", Lat = "-33.950198", Long = "151.259302"}); ``` I'd like to convert to a string in this format: ``` ['Bondi Beach', -33.890542, 151.274856], ['Coogee Beach', -33.923036, 151.259052], ['Cronulla Beach', -34.028249, 151.157507], ['Manly Beach', -33.800101, 151.287478], ['Maroubra Beach', -33.950198, 151.259302] ``` Is there a one liner way to do this, something similar to string.Join(), or do I have to do it manually via a foreach on the List and use stringbuilder.appendformat()?
If `Marker` is your own class, consider overriding the `ToString()` method to display each line the way you do. Then, you can use a simple `String.Join()` to combine it all together. ``` protected class Marker { public string Name { get; set; } public string Lat { get; set; } public string Long { get; set; } public override string ToString() { return String.Format("['{0}', {1}, {2}]", Name, Lat, Long); } }; ``` Then, to use: ``` List<Marker> allMarkers = new List<Marker>(); allMarkers.Add(new Marker { Name = "Bondi Beach", Lat = "-33.890542", Long = "151.274856" }); allMarkers.Add(new Marker { Name = "Coogee Beach", Lat = "-33.923036", Long = "151.259052" }); allMarkers.Add(new Marker { Name = "Cronulla Beach", Lat = "-34.028249", Long = "151.157507" }); allMarkers.Add(new Marker { Name = "Manly Beach", Lat = "-33.800101", Long = "151.287478" }); allMarkers.Add(new Marker { Name = "Maroubra Beach", Lat = "-33.950198", Long = "151.259302" }); Console.Write(String.Join(",\n", allMarkers)); ``` Note: If you are dealing with a lot of markers and notice bad performance, consider rewriting that `String.Format()` line as: ``` return "['" + Name + "', " + Lat.ToString() + ", " + Long.ToString() + "]"; ``` You may notice this is better (or worse), depending on your use case.
Scheduler task load in TYPO3 6.2 I have configured a scheduler task in one of my extensions. It is also being listed, but when I try to add it, it shows me the following error:

"The selected task class could not be found. You should probably contact the task's developers."
In TYPO3 6.2.x, task classes are namespaced. Suppose we are adding a scheduler task in an extension called "Test", with the default vendor name TYPO3. Create a task class containing an execute method: inside YOUR\_EXT\_NAME/Classes/Task/ it will be Task.php, which contains the `execute` method:

```
<?php
namespace TYPO3\Test\Task;

class Task extends \TYPO3\CMS\Scheduler\Task\AbstractTask {

    /**
     * Function executed from the Scheduler.
     * Sends an email
     *
     * @return boolean
     */
    public function execute() {
        //Your logic to perform
        return TRUE;
    }
}
?>
```

Register this task with the scheduler in ext\_localconf.php in the following way:

```
// Register information for the task
$GLOBALS['TYPO3_CONF_VARS']['SC_OPTIONS']['scheduler']['tasks']['TYPO3\\Test\\Task\\Task'] = array(
    'extension'        => $_EXTKEY,
    'title'            => 'Test task',
    'description'      => 'Test task that performs XYZ functions',
    'additionalFields' => 'TYPO3\\Test\\Task\\TaskAdditionalFieldProvider'
);
```
Reference manuals in R Every package on `CRAN` seems to provide a reference manual but in contrast to `vignettes` they do not seem to get downloaded or built when installing packages. Can this be done so that I can access them from the command line in `R` or can I only access them on `CRAN`?
From a given package's sources, you can build this via `R CMD Rd2pdf`: ``` edd@max:~$ R CMD Rd2pdf --help Usage: R CMD Rd2pdf [options] files Generate PDF output from the Rd sources specified by files, by either giving the paths to the files, or the path to a directory with the sources of a package, or an installed package. Unless specified via option '--output', the basename of the output file equals the basename of argument 'files' if this specifies a package or a single file, and 'Rd2' otherwise. The Rd sources are assumed to be ASCII unless they contain \encoding declarations (which take priority) or --encoding is supplied or if using package sources, if the package DESCRIPTION file has an Encoding field. The output encoding defaults to the package encoding then to 'UTF-8'. Files are listed in the order given: for a package they are in alphabetic order of the \name sections. Options: -h, --help print short help message and exit -v, --version print version info and exit --batch no interaction --no-clean do not remove created temporary files --no-preview do not preview generated PDF file --encoding=enc use 'enc' as the default input encoding --outputEncoding=outenc use 'outenc' as the default output encoding --os=NAME use OS subdir 'NAME' (unix or windows) --OS=NAME the same as '--os' -o, --output=FILE write output to FILE --force overwrite output file if it exists --title=NAME use NAME as the title of the document --no-index do not index output --no-description do not typeset the description of a package --internals typeset 'internal' documentation (usually skipped) --build_dir=DIR use DIR as the working directory The output papersize is set by the environment variable R_PAPERSIZE. The PDF previewer is set by the environment variable R_PDFVIEWER. Report bugs at bugs.r-project.org . edd@max:~$ ```
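For example, assuming you have downloaded a package's source tarball from CRAN (the package name here is just a placeholder), a typical invocation looks like:

```
# unpack the source tarball, then point Rd2pdf at the package directory
tar xzf somePkg_1.0.tar.gz
R CMD Rd2pdf somePkg    # writes somePkg.pdf from the Rd sources
```

As the help text above notes, you can also pass the path to an installed package instead of unpacked sources.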
Confusion with bitwise operations | and << ``` unsigned long long n = 0; for (int i = 0; i <= 64; i+=2) n |= 1ULL << i; //WHAT DOES THIS DO? AH! ``` I'm trying to wrap my head around what the third line of this code actually does. Someone please help clear this up!
That line sets the i-th bit of n.

- `1ULL` is the integer 1 with type unsigned long long.
- `<<` is a bitshift operator. `1ULL << i` is equal to 2^i, or in binary: `100...0` with i zeros.
- `n |= x;` is a compound assignment operator. It is similar to writing `n = n | x;`.
- The operator `|` is the bitwise OR operator.

Wikipedia has an example showing how the bitwise OR operator works in the general case:

```
   0101 (decimal 5)
OR 0011 (decimal 3)
 = 0111 (decimal 7)
```

**Related**

- [What are bitwise shift (bit-shift) operators and how do they work?](https://stackoverflow.com/questions/141525/absolute-beginners-guide-to-bit-shifting)
- [Bitwise operation](http://en.wikipedia.org/wiki/Bitwise_operation)
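So the loop in the question sets every even-numbered bit of `n`. One caveat: the loop condition `i <= 64` is a bug, because shifting a 64-bit value by an amount equal to or greater than its width is undefined behavior in C and C++. A corrected sketch:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long n = 0;

    // set even bits 0, 2, ..., 62; shifting a 64-bit type by 64 would be UB
    for (int i = 0; i <= 62; i += 2)
        n |= 1ULL << i;

    printf("%llx\n", n);  // prints 5555555555555555 (binary ...010101)
    return 0;
}
```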
Using of vector in C++ I'm having trouble with the following code and can't seem to figure out what is wrong ``` #include <iostream> #include <cmath> #include <vector> using namespace std; double distance(int a, int b) { return fabs(a-b); } int main() { vector<int> age; age.push_back(10); age.push_back(15); cout<<distance(age[0],age[1]); return 0; } ``` The error lies at calling of function distance. ``` /usr/include/c++/4.6/bits/stl_iterator_base_types.h: In instantiation of ‘std::iterator_traits<int>’: test.cpp:18:30: instantiated from here /usr/include/c++/4.6/bits/stl_iterator_base_types.h:166:53: error: ‘int’ is not a class, struct, or union type /usr/include/c++/4.6/bits/stl_iterator_base_types.h:167:53: error: ‘int’ is not a class, struct, or union type /usr/include/c++/4.6/bits/stl_iterator_base_types.h:168:53: error: ‘int’ is not a class, struct, or union type /usr/include/c++/4.6/bits/stl_iterator_base_types.h:169:53: error: ‘int’ is not a class, struct, or union type /usr/include/c++/4.6/bits/stl_iterator_base_types.h:170:53: error: ‘int’ is not a class, struct, or union type ```
Your call is colliding with the `std::distance` function; try removing "`using namespace std`" and qualifying `cout` with `std::`:

```
#include <iostream>
#include <cmath>
#include <vector>

double distance(int a, int b)
{
    return fabs(a-b);
}

int main()
{
    std::vector<int> age;
    age.push_back(10);
    age.push_back(15);
    std::cout<< distance(age[0],age[1]);
    return 0;
}
```

`std::distance` counts the number of elements between two iterators. You can find more about it [here](http://en.cppreference.com/w/cpp/iterator/distance).

Or you can rename your distance function if you want to keep the `std::` namespace imported:

```
#include <iostream>
#include <cmath>
#include <vector>

using namespace std;

double mydistance(int a, int b)
{
    return fabs(a-b);
}

int main()
{
    vector<int> age;
    age.push_back(10);
    age.push_back(15);
    cout<<mydistance(age[0],age[1]);
    return 0;
}
```

This will make your code work, but file-scope "using namespace" declarations invite exactly this kind of name collision. Prefer the first option; the second is shown only as an alternative for your code example.
Pythonic way to ensure unicode in python 2 and 3 I'm working on porting a library so that it is compatible with both python 2 and 3. The library receives strings or string-like objects from the calling application and I need to ensure those objects get converted to unicode strings. In python 2 I can do: ``` unicode_x = unicode(x) ``` In python 3 I can do: ``` unicode_x = str(x) ``` However, the best cross-version solution I have is: ``` def ensure_unicode(x): if sys.version_info < (3, 0): return unicode(x) return str(x) ``` which certainly doesn't seem great (although it works). Is there a better solution? I am aware of `unicode_literals` and the `u` prefix but both of those solutions do not work as the inputs come from clients and are not literals in my library.
Don't re-invent the compatibility layer wheel. Use the [`six` compatibility layer](http://pythonhosted.org/six/), a small one-file project that can be included with your own: > > Six supports every Python version since 2.6. It is contained in only one Python file, so it can be easily copied into your project. (The copyright and license notice must be retained.) > > > It includes a [`six.text_type()` callable](http://pythonhosted.org/six/#six.text_type) that does exactly this, convert a value to Unicode text: ``` import six unicode_x = six.text_type(x) ``` In the [project source code](https://bitbucket.org/gutworth/six/src/784c6a213c4527ea18f86a800f51bf16bc1df5bc/six.py?at=default) this is defined as: ``` import sys PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 # ... if PY3: # ... text_type = str # ... else: # ... text_type = unicode # ... ```
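With that, the `ensure_unicode` helper from the question reduces to:

```python
import six

def ensure_unicode(x):
    # six.text_type is unicode on Python 2 and str on Python 3
    return six.text_type(x)
```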
Embed generics inside struct I find difficulties when using Rust traits, so for example which is the correct way to do this? ``` pub struct Cube<R>{ pub vertex_data: [Vertex;24], pub asMesh: gfx::Mesh<R> } ```
You can only use generics when defining the struct, but you can use *trait bounds* on those generics to restrict it to specific types. Here, I've used the `where` clause: ``` trait Vertex {} struct Mesh<R> { r: R, } struct Cube<V, R> where V: Vertex, { vertex_data: [V; 24], mesh: Mesh<R>, } fn main() {} ``` You will also want to use those bounds on any method implementations: ``` impl<V, R> Cube<V, R> where V: Vertex, { fn new(vertex: V, mesh: Mesh<R>) -> Cube<V, R> { ... } } ``` In fact, you frequently will only see the `where` clause on the implementation, not the struct. This is because you normally only access the struct through the methods, and the struct is opaque to the end user. If you have public fields it may be worth leaving the bound in both places though.
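For completeness, here is a sketch of constructing a `Cube`, assuming the `Vertex`, `Mesh`, and `Cube` definitions above. `Point` is a hypothetical vertex type, and the `[p; 24]` array-repeat initializer additionally requires the vertex type to be `Copy`:

```rust
#[derive(Copy, Clone)]
struct Point {
    x: f32,
    y: f32,
}

impl Vertex for Point {}

fn main() {
    let p = Point { x: 0.0, y: 0.0 };
    let cube = Cube {
        vertex_data: [p; 24], // array repetition needs Copy
        mesh: Mesh { r: 0u32 },
    };
}
```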
What arguments are passed into AsyncTask? I don't understand what I am supposed to put in here and where these arguments end up? What exactly should I put, and where exactly will it go? Do I need to include all 3 or can I include 1,2,20?
Google's Android documentation says that:

An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute, doInBackground, onProgressUpdate and onPostExecute.

AsyncTask's generic types:

The three types used by an asynchronous task are the following:

```
Params, the type of the parameters sent to the task upon execution.
Progress, the type of the progress units published during the background computation.
Result, the type of the result of the background computation.
```

Not all types are always used by an asynchronous task. To mark a type as unused, simply use the type Void:

```
private class MyTask extends AsyncTask<Void, Void, Void> { ... }
```

You can refer further to: <http://developer.android.com/reference/android/os/AsyncTask.html>

Or you can get a clearer idea of AsyncTask's role by referring to [Sankar-Ganesh's Blog](http://sankarganesh-info-exchange.blogspot.com/p/need-and-vital-role-of-asynctas-in.html)

## The structure of a typical AsyncTask class goes like this:

```
private class MyTask extends AsyncTask<X, Y, Z>

    protected void onPreExecute(){

    }
```

This method is executed before starting the new thread. There are no input/output values, so just initialize variables or do whatever else you need here.

```
    protected Z doInBackground(X...x){

    }
```

The most important method in the AsyncTask class. You have to place here all the stuff you want to do in the background, in a different thread from the main one. Here we have as an input value an array of objects of type "X" (see the header: we have "...extends AsyncTask<X, Y, Z>", and these are the TYPES of the input parameters), and it returns an object of type "Z".

```
    protected void onProgressUpdate(Y y){

    }
```

This method is called using the method publishProgress(y) and it is usually used when you want to show progress or information on the main screen, like a progress bar showing the progress of the operation you are doing in the background.

```
    protected void onPostExecute(Z z){

    }
```

This method is called after the operation in the background is done. As an input parameter you will receive the output parameter of the doInBackground method.

**What about the X, Y and Z types?**

As you can deduce from the above structure:

```
X – The type of the input values you want to pass to the background process. This can be an array of objects.

Y – The type of the objects you are going to pass to the onProgressUpdate method.

Z – The type of the result from the operations you have done in the background process.
```

How do we call this task from an outside class? Just with the following two lines:

```
MyTask myTask = new MyTask();
myTask.execute(x);
```

Where x is the input parameter of the type X.

Once we have our task running, we can find out its status from "outside" using the getStatus() method.

```
myTask.getStatus();
```

and we can receive the following status:

**RUNNING** - Indicates that the task is running.

**PENDING** - Indicates that the task has not been executed yet.

**FINISHED** - Indicates that onPostExecute(Z) has finished.

***Hints about using AsyncTask***

1. Do not call the methods onPreExecute, doInBackground and onPostExecute manually. This is automatically done by the system.
2. You cannot call an AsyncTask inside another AsyncTask or Thread. The call of the method execute must be done in the UI thread.
3. The method onPostExecute is executed in the UI thread (here you can call another AsyncTask!).
4. The input parameters of the task can be an Object array; this way you can put in whatever objects and types you want.
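To make the three type parameters concrete, here is a minimal sketch: a task that takes `String` URLs, publishes `Integer` progress, and returns a `Long` count. The body of `doInBackground` is just a stand-in for real work:

```java
private class DownloadTask extends AsyncTask<String, Integer, Long> {

    @Override
    protected Long doInBackground(String... urls) {
        long total = 0;
        for (int i = 0; i < urls.length; i++) {
            total += urls[i].length(); // stand-in for downloading urls[i]
            publishProgress((int) ((i + 1) * 100.0 / urls.length));
        }
        return total;
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        // runs on the UI thread; update a progress bar here
    }

    @Override
    protected void onPostExecute(Long result) {
        // runs on the UI thread with doInBackground's return value
    }
}

// Usage, from the UI thread:
new DownloadTask().execute("http://example.com/a", "http://example.com/b");
```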
Incorrect Jacoco code coverage for Kotlin coroutine I am using **Jacoco** for unit test code coverage. Jacoco's generated report shows that **a few branches are missed** in my **Kotlin code**. I noticed that the **coroutine code**, and the code after it, are not properly covered according to Jacoco. I am not sure if this is because of the coroutine or something else.

While running my unit tests with the **IntelliJ Code Coverage**, my Kotlin class shows **100% coverage**. I don't know why Jacoco is showing lower coverage. I have written my unit tests using Spock (Groovy).

Please refer to the images below:

Missed Branches:

[![enter image description here](https://i.stack.imgur.com/W8QlE.png)](https://i.stack.imgur.com/W8QlE.png)

[![enter image description here](https://i.stack.imgur.com/lg5cS.png)](https://i.stack.imgur.com/lg5cS.png)

Original Code:

[![enter image description here](https://i.stack.imgur.com/GH3DH.png)](https://i.stack.imgur.com/GH3DH.png)
Similarly to "[Why is JaCoCo not covering my String switch statements?](https://stackoverflow.com/questions/42642840/why-is-jacoco-not-covering-my-string-switch-statements/42680333#42680333)" : JaCoCo performs **analysis of bytecode, not source code**. Compilation of `Example.kt` with `kotlinc 1.3.10` ``` package example fun main(args: Array<String>) { kotlinx.coroutines.runBlocking { // line 4 } } ``` results in two files `ExampleKt.class` and `ExampleKt$main$1.class`, bytecode of last one (`javap -v -p ExampleKt$main$1.class`) contains method `invokeSuspend(Object)` ``` public final java.lang.Object invokeSuspend(java.lang.Object); descriptor: (Ljava/lang/Object;)Ljava/lang/Object; flags: ACC_PUBLIC, ACC_FINAL Code: stack=3, locals=4, args_size=2 0: invokestatic #29 // Method kotlin/coroutines/intrinsics/IntrinsicsKt.getCOROUTINE_SUSPENDED:()Ljava/lang/Object; 3: astore_3 4: aload_0 5: getfield #33 // Field label:I 8: tableswitch { // 0 to 0 0: 28 default: 53 } 28: aload_1 29: dup 30: instanceof #35 // class kotlin/Result$Failure 33: ifeq 43 36: checkcast #35 // class kotlin/Result$Failure 39: getfield #39 // Field kotlin/Result$Failure.exception:Ljava/lang/Throwable; 42: athrow 43: pop 44: aload_0 45: getfield #41 // Field p$:Lkotlinx/coroutines/CoroutineScope; 48: astore_2 49: getstatic #47 // Field kotlin/Unit.INSTANCE:Lkotlin/Unit; 52: areturn 53: new #49 // class java/lang/IllegalStateException 56: dup 57: ldc #51 // String call to 'resume' before 'invoke' with coroutine 59: invokespecial #55 // Method java/lang/IllegalStateException."<init>":(Ljava/lang/String;)V 62: athrow LineNumberTable: line 4: 3 line 5: 49 ``` which is associated with line 4 of source file and contains branches (`ifeq`, `tableswitch`). While latest as of today JaCoCo version (0.8.2) has filters for various compiler-generated artifacts such as `String` in `switch` statement, bytecode that Kotlin compiler generates for coroutines is not filtered. Changelog can be seen at <https://www.jacoco.org/jacoco/trunk/doc/changes.html> And among others at <https://www.jacoco.org/research/index.html> there is also [presentation about bytecode pattern matching](https://youtu.be/48kp3h10brM?t=112) that shows/explains many compiler-generated artifacts. --- What you see in IntelliJ IDEA as 100% - is only line coverage, so you are trying to compare two completely different things. 
As a proof - here is screenshot of IntelliJ IDEA which shows 100% line coverage, but only one branch of `if` was executed (where `args.size >= 0` evaluates to `true`) [![intellij](https://i.stack.imgur.com/RfMr1.png)](https://i.stack.imgur.com/RfMr1.png) And here is corresponding screenshots of JaCoCo report for execution of the same source file [![jacoco source level](https://i.stack.imgur.com/nzgPj.png)](https://i.stack.imgur.com/nzgPj.png) Going up to the package level you can see 100% line coverage, but 50% branch coverage [![jacoco package level](https://i.stack.imgur.com/4M8Pt.png)](https://i.stack.imgur.com/4M8Pt.png) And then going down to the class level via the first link `ExampleKt.main.new Function2() {...}` you can again see that method `invokeSuspend(Object)` contributes missed branches [![jacoco class level](https://i.stack.imgur.com/G7V3R.png)](https://i.stack.imgur.com/G7V3R.png) --- ## Update (29/01/2019) [JaCoCo version 0.8.3](https://www.jacoco.org/jacoco/trunk/doc/changes.html) has filter for branches added by the Kotlin compiler for suspending lambdas and functions: [![before](https://i.stack.imgur.com/0VI6z.jpg)](https://i.stack.imgur.com/0VI6z.jpg) [![after](https://i.stack.imgur.com/9L5j5.jpg)](https://i.stack.imgur.com/9L5j5.jpg)
center an item in a div by ignoring other elements I want to create a header bar and center my title / logo but it's not centered perfectly. ``` body { margin: 0; background: black; } .link { text-decoration: none; } #header { height: 80px; display: flex; align-items: center; background-color: #000000; } #headerTitleContainer { margin: 0 auto; } #headerTitle { color: #97d700; } #menuBtnContainer { display: inline-block; cursor: pointer; margin-right: 10px; } @media (min-width: 300px) { #menuBtnContainer { margin-left: 20px; } } @media (max-width: 299px) { #menuBtnContainer { margin-left: 5px; } } .menuIconBar { width: 35px; height: 5px; margin: 6px 0 6px 0; transition: 0.4s; background-color: #97d700; } ``` ``` <div id="header"> <div id="menuBtnContainer"> <div class="menuIconBar" id="menuIconBar1"></div> <div class="menuIconBar" id="menuIconBar2"></div> <div class="menuIconBar" id="menuIconBar3"></div> </div> <div id="headerTitleContainer"> <h1><a class="link" id="headerTitle" href="/">MyTitle</a></h1> </div> </div> ``` As you can see `MyTitle` is not centered correctly. How can I do this? I achieve it when taking out my menu button but obviously I need this button. I just want it to be always in the center of the bar. But it should not overlap the menu button. That's why I added a `margin-right: 10px;` to my menu button.
One solution is to make the width of the button 0 and set `overflow: visible` on it:

```
body {
  margin: 0;
  background: black;
}

.link {
  text-decoration: none;
}

#header {
  height: 80px;
  display: flex;
  align-items: center;
  background-color: #000000;
}

#headerTitleContainer {
  margin: 0 auto;
}

#headerTitle {
  color: #97d700;
}

#menuBtnContainer {
  display: inline-block;
  cursor: pointer;
  width: 0;
  overflow: visible;
}

@media (min-width: 300px) {
  #menuBtnContainer {
    margin-left: 20px;
    margin-right:-20px;
  }
}

@media (max-width: 299px) {
  #menuBtnContainer {
    margin-left: 5px;
  }
}

.menuIconBar {
  width: 35px;
  height: 5px;
  margin: 6px 0 6px 0;
  transition: 0.4s;
  background-color: #97d700;
}
```

```
<div id="header">
  <div id="menuBtnContainer">
    <div class="menuIconBar" id="menuIconBar1"></div>
    <div class="menuIconBar" id="menuIconBar2"></div>
    <div class="menuIconBar" id="menuIconBar3"></div>
  </div>
  <div id="headerTitleContainer">
    <h1><a class="link" id="headerTitle" href="/">MyTitle</a></h1>
  </div>
</div>
```

Or simply give the button absolute positioning without changing any other property, and don't forget to make the parent `position: relative` (I prefer this one):

```
body {
  margin: 0;
  background: black;
}

.link {
  text-decoration: none;
}

#header {
  height: 80px;
  display: flex;
  align-items: center;
  background-color: #000000;
  position:relative;
}

#headerTitleContainer {
  margin: 0 auto;
}

#headerTitle {
  color: #97d700;
}

#menuBtnContainer {
  display: inline-block;
  cursor: pointer;
  margin-right: 10px;
  position: absolute;
}

@media (min-width: 300px) {
  #menuBtnContainer {
    margin-left: 20px;
    margin-right:-20px;
  }
}

@media (max-width: 299px) {
  #menuBtnContainer {
    margin-left: 5px;
  }
}

.menuIconBar {
  width: 35px;
  height: 5px;
  margin: 6px 0 6px 0;
  transition: 0.4s;
  background-color: #97d700;
}
```

```
<div id="header">
  <div id="menuBtnContainer">
    <div class="menuIconBar" id="menuIconBar1"></div>
    <div class="menuIconBar" id="menuIconBar2"></div>
    <div class="menuIconBar" id="menuIconBar3"></div>
  </div>
  <div id="headerTitleContainer">
    <h1><a class="link" id="headerTitle" href="/">MyTitle</a></h1>
  </div>
</div>
```

Another solution is to add a third hidden element using `:after`, taking the same width as the button, so the title gets centered:

```
body {
  margin: 0;
  background: black;
}

.link {
  text-decoration: none;
}

#header {
  height: 80px;
  display: flex;
  align-items: center;
  background-color: #000000;
}

#header:after {
  content: "";
  width: 35px;
  margin-left: 10px;
}

#headerTitleContainer {
  margin: 0 auto;
}

#headerTitle {
  color: #97d700;
}

#menuBtnContainer {
  display: inline-block;
  cursor: pointer;
  margin-right: 10px;
}

@media (min-width: 300px) {
  #menuBtnContainer {
    margin-left: 20px;
  }
  #header:after {
    margin-right: 20px;
  }
}

@media (max-width: 299px) {
  #menuBtnContainer {
    margin-left: 5px;
  }
}

.menuIconBar {
  width: 35px;
  height: 5px;
  margin: 6px 0 6px 0;
  transition: 0.4s;
  background-color: #97d700;
}
```

```
<div id="header">
  <div id="menuBtnContainer">
    <div class="menuIconBar" id="menuIconBar1"></div>
    <div class="menuIconBar" id="menuIconBar2"></div>
    <div class="menuIconBar" id="menuIconBar3"></div>
  </div>
  <div id="headerTitleContainer">
    <h1><a class="link" id="headerTitle" href="/">MyTitle</a></h1>
  </div>
</div>
```
Why isn't the empty string an identity of concatenation? I was reading this blog post by Eric Lippert <http://ericlippert.com/2013/06/17/string-concatenation-behind-the-scenes-part-one/#more-1228> and became aware that the empty string is not an identity of concatenation in C#. I haven't run into a situation that made me aware this was the case and had always just assumed it was an identity. I assume there is some good reason why

```
string NullString = null;
NullString = NullString + String.Empty; // results in an empty string, not null
```

results in an empty string rather than `null`; what is that reason? Why is there no identity of string concatenation? Was it made that way for convenience or practicality?
The documentation for [`String.Concat`](http://msdn.microsoft.com/en-us/library/a6d350wd.aspx) explains this behavior:

> 
> An Empty string is used in place of any null argument.
> 
> 

Basically, the `String.Concat` method was designed to exhibit this behavior.

---

> 
> Was it made that way for convenience or practicality?
> 
> 

While only the framework design team could directly answer this, this behavior does have some practical benefits. It allows you to concatenate strings with `null` and not create `null` results, which reduces the number of explicit `null` checks required in most code. Without this behavior, `someString + "abc"` would require null checking, whereas with it, a non-null result is guaranteed.
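A quick demonstration of the guarantee:

```csharp
string nullString = null;

// Concatenation treats null as String.Empty, so the result is never null
string result = nullString + String.Empty;

Console.WriteLine(result == null);         // False
Console.WriteLine(result == String.Empty); // True
Console.WriteLine(nullString + "abc");     // prints "abc", no null check needed
```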
Golang - Using chan slice inside struct I am trying to use a slice chan type inside a struct, similar to the code below. However, when I try to receive at ``` test := <-c.slice ``` The program hangs. Is there a way to do this? ``` package main import "fmt" type blah struct { slice chan [][]int } func main() { slice := make([][]int, 3) c := blah{make(chan [][]int)} slice[0] = []int{1, 2, 3} slice[1] = []int{4, 5, 6} slice[2] = []int{7, 8, 9} go func() { test := <- c.slice test = slice c.slice <- test }() fmt.Println(<-c.slice) } ```
The first line inside the goroutine receives from the channel, and so does the first line in main after the goroutine is created. That means the only two goroutines in the system are both trying to receive from the slice channel, and nobody is trying to send into it. More generally, the issue is that for somebody to receive (consume), somebody else has to concurrently send (produce). If the channel was buffered, which isn't the case in your example, the send might also happen ahead of time in the same goroutine. So this works, for example, because the channel has a buffer of one element, which means the send won't block: ``` ch := make(chan int, 1) ch <- 1 i := <-ch ``` This works as well, because the send is happening concurrently with the receive: ``` ch := make(chan int) go func() { ch <- 1 }() i := <-ch ``` This does *not* work, because both goroutines are trying to send into an unbuffered channel, and nobody is trying to receive: ``` ch := make(chan int) go func() { ch <- 1 }() ch <- 2 ``` This does *not* work either, because both goroutines are trying to receive, and nobody is trying to send: ``` ch := make(chan int) go func() { <-ch }() <-ch ``` That last one is your case.
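Applying that to the question's code, one minimal fix is to have the goroutine *send* first, so that the receive in main has a matching sender (a sketch; giving the channel a buffer would work as well):

```go
go func() {
    c.slice <- slice // send; pairs with the receive in main
}()
fmt.Println(<-c.slice)
```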
Why and how (internally) does Enum.IsDefined search for both name and value? Let's say we have defined the `Planets` enum:

```
public enum Planets
{
    Sun = 0,
    Mercury=5,
    Venus,
    Earth,
    Jupiter,
    Uranus,
    Neptune
}
```

I was using the `Enum.IsDefined` method to find whether a string exists in an enum or not.

```
Enum.IsDefined(typeof(Planets), "Mercury"); // result is true
```

But then I tried this, and it also returned true:

```
Enum.IsDefined(typeof(Planets), 5); // result is true again
```

How come? This method has no overloads. It has only one signature:

```
Enum.IsDefined(Type enumType, object value);
```

Why and how does `Enum.IsDefined` search for both name and value? I'm also really curious why it was designed that way; wouldn't making overloads have been a better choice?
From the [`Enum.IsDefined` method](http://msdn.microsoft.com/en-us/library/system.enum.isdefined%28v=vs.100%29.aspx) documentation:

> 
> The value parameter can be **any of the following**:
> 
> 
> - Any member of type enumType.
> - A variable whose value is an enumeration member of type enumType.
> - The string representation of the name of an enumeration member. The characters in the string must have the same case as the enumeration
> member name.
> - A value of the underlying type of enumType.
> 
> 

I believe that's the reason why it has no overloads and takes `object` as its second parameter. Since this method takes `object` as its second parameter, and `object` is the base class for all .NET types, you can pass a `string`, an `int`, etc.

Here is how this method is [implemented](http://referencesource.microsoft.com/#mscorlib/system/enum.cs#32e813377ac50d28):

```
public static bool IsDefined(Type enumType, Object value)
{
    if (enumType == null)
        throw new ArgumentNullException("enumType");

    return enumType.IsEnumDefined(value);
}
```

And it looks like the virtual [`Type.IsEnumDefined`](http://msdn.microsoft.com/en-us/library/system.type.isenumdefined%28v=vs.110%29.aspx) method handles all of these cases in its [implementation](http://referencesource.microsoft.com/#mscorlib/system/type.cs#f5e6d204f05b157e):

```
public virtual bool IsEnumDefined(object value)
{
    if (value == null)
        throw new ArgumentNullException("value");

    if (!IsEnum)
        throw new ArgumentException(Environment.GetResourceString("Arg_MustBeEnum"), "enumType");
    Contract.EndContractBlock();

    // Check if both of them are of the same type
    Type valueType = value.GetType();

    // If the value is an Enum then we need to extract the underlying value from it
    if (valueType.IsEnum)
    {
        if (!valueType.IsEquivalentTo(this))
            throw new ArgumentException(Environment.GetResourceString("Arg_EnumAndObjectMustBeSameType", valueType.ToString(), this.ToString()));

        valueType = valueType.GetEnumUnderlyingType();
    }

    // If a string is passed in
    if (valueType == typeof(string))
    {
        string[] names = GetEnumNames();
        if (Array.IndexOf(names, value) >= 0)
            return true;
        else
            return false;
    }

    // If an enum or integer value is passed in
    if (Type.IsIntegerType(valueType))
    {
        Type underlyingType = GetEnumUnderlyingType();
        // We cannot compare the types directly because valueType is always a runtime type but underlyingType might not be.
        if (underlyingType.GetTypeCodeImpl() != valueType.GetTypeCodeImpl())
            throw new ArgumentException(Environment.GetResourceString("Arg_EnumUnderlyingTypeAndObjectMustBeSameType", valueType.ToString(), underlyingType.ToString()));

        Array values = GetEnumRawConstantValues();
        return (BinarySearch(values, value) >= 0);
    }
}
```
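So all four forms from the documentation work against the `Planets` enum from the question, for example:

```csharp
Enum.IsDefined(typeof(Planets), Planets.Mercury); // true  - an enum member
Enum.IsDefined(typeof(Planets), "Mercury");       // true  - the name as a string
Enum.IsDefined(typeof(Planets), 5);               // true  - a value of the underlying type
Enum.IsDefined(typeof(Planets), "mercury");       // false - names are case-sensitive
```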
How to lazy load a js file in React (for a multilingual app) I would like to create a multilingual app with React. The way I see it would be to have a js file for each language, for example : **en.js:** ``` module.exports = { langEnglish: 'English', langFrench: 'French', navHome: 'Home', navUsers: 'Users', ... }; ``` **fr.js:** ``` module.exports = { langEnglish: 'Anglais', langFrench: 'Français', navHome: 'Accueil', navUsers: 'Utilisateurs', ... }; ``` As each language file will be quite big and there could be dozens of different languages supported, I would prefer to download only the correct file to use depending on the language chosen in order to minimize loading time (and bandwidth usage). For example I could have a variable in the app state ``` var App = React.createClass({ getInitialState: function () { return { lang: 'en' }; }, ... ``` and some user control to switch this variable between `fr` and `en`. Is it possible to load only the `en.js` file on the initial load, and if the user switches the language to French then load and use the `fr.js` file instead and so on for each language?
Make use of some advanced [**webpack**](https://webpack.github.io/) features, such as code splitting. You can use webpacks [`require.ensure`](https://webpack.github.io/docs/code-splitting.html#commonjs-require-ensure) for async loading your files. Create a file: ### i18n.js ``` var currentTranslation = {}; module.exports = { getTranslation: function() { return currentTranslation; }, loadI18n: function(region, cb) { switch (region) { case 'en': require.ensure([], function(require) { cb(currentTranslation = require('./en')); }, 'i18n-en'); // will create a chunk named 'i18n-en' break; case 'fr': require.ensure([], function(require) { cb(currentTranslation = require('./fr')); }, 'i18n-fr'); // will create a chunk named 'i18n-fr' break; default: require.ensure([], function(require) { cb(currentTranslation = require('./en')); }, 'i18n-en'); } } } ``` ### App.js ``` var i18n = require('./i18n'); ``` and when you need the translation strings to be loaded async you can call: ``` i18n.loadI18n('en', function(texts) { console.log(texts); }); ``` once webpack loads that chunk, you will be able to get the translation texts using the function ``` var texts = i18n.getTranslation(); // call this from anywhere and it will return english texts ``` if you want to switch language, just call ``` i18n.loadI18n('fr', function(texts) { console.log(texts); }); var texts = i18n.getTranslation(); // will return french texts ```
Build a minimal VST3 host in C++ I'm struggling to find a basic example of how to set up a minimal plugin host with the VST 3.x SDK. The official documentation is cryptic and brief, and I can't get anywhere.

I would like to:

1. understand the minimal setup: required headers, interfaces to implement, ...;
2. load a VST3 plugin (no fancy GUI, for now);
3. print out some data (e.g. plugin name, parameters, ...).

That would be a great start :)
Yeah, VST3 is rather mysterious and poorly documented. There are not many good examples partially because not many companies (other than Steinberg) actually care about VST3. But all cynicism aside, your best bet would be to look at the Juce source code to see their implementation of a VST3 host: <https://github.com/julianstorer/JUCE/blob/master/modules/juce_audio_processors/format_types/juce_VST3PluginFormat.cpp> There's a few other VST3-related files in that package which are worth checking out. Anyways, this should at least be enough information to get get you started with a VST3 host. It's worth noting that Juce is GPL (unless you pay for a license), so it's a big no-no to borrow code directly from it unless you are also using the GPL or have a commercial license. Just a friendly reminder to be a responsible programmer when looking at GPL'd code on the net. :)
JSON Search and remove in php? I have a session variable `$_SESSION["animals"]` containing a nested JSON object:

```
$_SESSION["animals"]='{
    "0":{"kind":"mammal","name":"Pussy the Cat","weight":"12kg","age":"5"},
    "1":{"kind":"mammal","name":"Roxy the Dog","weight":"25kg","age":"8"},
    "2":{"kind":"fish","name":"Piranha the Fish","weight":"1kg","age":"1"},
    "3":{"kind":"bird","name":"Einstein the Parrot","weight":"0.5kg","age":"4"}
}';
```

For example, I want to find the entry with "Piranha the Fish" and then remove it (and `json_encode` it again as it was). How can I do this?

I guess I need to search the array resulting from `json_decode($_SESSION["animals"], true)` and find the parent key to remove, but I'm stuck.
`json_decode` will turn the JSON object into a PHP structure made up of nested arrays. Then you just need to loop through them and `unset` the one you don't want. ``` <?php $animals = '{ "0":{"kind":"mammal","name":"Pussy the Cat","weight":"12kg","age":"5"}, "1":{"kind":"mammal","name":"Roxy the Dog","weight":"25kg","age":"8"}, "2":{"kind":"fish","name":"Piranha the Fish","weight":"1kg","age":"1"}, "3":{"kind":"bird","name":"Einstein the Parrot","weight":"0.5kg","age":"4"} }'; $animals = json_decode($animals, true); foreach ($animals as $key => $value) { if (in_array('Piranha the Fish', $value)) { unset($animals[$key]); } } $animals = json_encode($animals); ?> ```
Angular Material table with subrows (nested cells) of different heights I would like to create a table with nested cells imitating subrows. It works fine if content of every cell does not exceed min-height property value of nested cell (A). Unfortunately when this value is exceeded the table breaks down (B). Is there any simple\* way to align nested cells properly to obtain table like (C)? code: <https://stackblitz.com/edit/angular-aejwfm> [![img](https://i.stack.imgur.com/vvg6B.png)](https://i.stack.imgur.com/vvg6B.png) (\*) I know that we can use javascript to set height of all cells representing certain subrow to max(heights of all cells from that subrow) but I would prefer pure HTML/CSS/Angular solution.
One possible solution is to create a directive that finds the cell with the maximum height in each row and applies that height to all the cells of that row. But since you are rendering data column-wise, you need to give the directive some information with which it can identify the cells of a single row.

I have updated your code to achieve the desired result. <https://stackblitz.com/edit/angular-aejwfm-6htvuk>

Here is a brief explanation of the changes I made to your demo:

First, create a directive which accepts the `book` object as input. This book object will help us identify the cells of a single row.

```
@Directive({
  selector: '[matchCellHeight]'
})
export class MatchCellHeightDirective {
  @Input() matchCellHeight: any;
  constructor(private el: ElementRef) {
  }
}
```

Then bind this directive to each cell element in your template:

```
<ng-container matColumnDef="title">
  <mat-header-cell *matHeaderCellDef> Title </mat-header-cell>
  <mat-cell *matCellDef="let genre">
    <!-- bind matchCellHeight to each cell -->
    <mat-cell [matchCellHeight]='book' *ngFor="let book of genre.books">
      {{book.title}}
    </mat-cell>
  </mat-cell>
</ng-container>

<ng-container matColumnDef="pages">
  <mat-header-cell *matHeaderCellDef> Pages </mat-header-cell>
  <mat-cell *matCellDef="let genre">
    <mat-cell [matchCellHeight]='book' *ngFor="let book of genre.books">
      {{book.pages}}
    </mat-cell>
  </mat-cell>
</ng-container>
```

Now we need to store a reference to each cell somewhere so that we can apply our match-height logic to each cell. For that I created a service and injected it into the directive:

```
@Injectable()
export class MatchCellHeightService {

  cells: {el: ElementRef, data: any}[] = [];

  constructor() {
  }

  findUniqueBooks(): any[] {
    return this.cells.map(currCell => currCell.data).reduce(
      (acc, curr) => {
        if (acc) {
          if (acc.indexOf(curr) == -1) {
            acc.push(curr);
          }
        } else {
          acc = [curr];
        }
        return acc;
      }, []
    )
  }

  matchHeight() {
    /* first find all the unique books */
    let books = this.findUniqueBooks();
    for (let i = 0; i < books.length; i++) {
      let currbook = books[i];
      /* for each book, find the cells containing this book object's info
         (in this demo each row contains info about one book, therefore we
         are using the book object to identify the cells of each row) */
      let bookCells: {el: ElementRef, data: any}[] = [];
      for (let j = 0; j < this.cells.length; j++) {
        let currCell = this.cells[j];
        if (currCell.data == currbook) {
          bookCells.push(currCell);
        }
      }
      /* once we have the array of cells belonging to the same row,
         apply the match-height logic */
      bookCells.forEach(val => val.el.nativeElement.style.height = 'initial');
      let maxHeight = Math.max(...bookCells.map(val => val.el.nativeElement.offsetHeight));
      bookCells.forEach(currCell => {
        currCell.el.nativeElement.style.height = maxHeight + 'px';
      })
    }
  }
}
```

I hope this helps. I have also found another implementation of a directive that does the same job. Here is the link for that: <https://scotch.io/tutorials/responsive-equal-height-with-angular-directive>
How to check that template's parameter type is integral? In the description of some std template function I saw something like: > > if the template parameter is of integral type, the behavior is such and such. > > otherwise, it is such and such. > > > How can I do a similar test? Perhaps dynamic\_cast? Since the function I write is for my personal use I can rely on myself to supply only correct parameters, but why miss a chance to learn something? :)
In addition to the other answers, it should be noted that the test can be used at runtime but also at compile-time to select the correct implementation depending on whether the type is integral or not:

Runtime version:

```
// Include either <boost/type_traits/is_integral.hpp> (if using Boost)
// or <type_traits> (if using c++1x)

// In the following, is_integral should be prefixed by either boost:: or std::

template <typename T>
void algorithm(const T & t)
{
    // some code

    if (is_integral<T>::value)
    {
        // operations to perform if T is an integral type
    }
    else
    {
        // operations to perform if T is not an integral type
    }

    // some other code
}
```

However, this solution can be improved when the implementation of the algorithm greatly depends on the test. In this case, we would have the test at the top of the function, then a big `then` block and a big `else` block. A common approach in this case is to overload the function and make the compiler select the correct implementation using SFINAE. An easy way to do this is to use [`boost::enable_if`](http://www.boost.org/doc/libs/1_42_0/libs/utility/enable_if.html):

```
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_integral.hpp>

template <typename T>
typename boost::enable_if<boost::is_integral<T> >::type
    algorithm(const T & t)
{
    // implementation for integral types
}

template <typename T>
typename boost::disable_if<boost::is_integral<T> >::type
    algorithm(const T & t)
{
    // implementation for non integral types
}
```

When invoking the `algorithm` function, the compiler will "select" the correct implementation depending on whether the template parameter is integral or not.
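As a side note, on a C++17 compiler the same compile-time selection can be written inside a single function with `if constexpr`, which discards the untaken branch at compile time (a modern sketch, added here as an alternative to the Boost-era approaches above):

```cpp
#include <type_traits>

template <typename T>
void algorithm(const T& t)
{
    if constexpr (std::is_integral_v<T>)
    {
        // operations to perform if T is an integral type
    }
    else
    {
        // operations to perform if T is not an integral type
    }
}
```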
Are there delimiter bytes for UTF8 characters? If I have a byte array that contains UTF8 content, how would I go about parsing it? Are there delimiter bytes that I can split off to get each character?
Take a look here... <http://en.wikipedia.org/wiki/UTF-8>

If you're looking to identify the boundary between characters, what you need is in the table in "Description". The only bytes with a zero high bit are the ASCII subset 0..127, encoded as single bytes. All the non-ASCII codepoints have "10" in the highest two bits of the 2nd byte onwards. The leading byte of a codepoint never has that: its high bits indicate the number of bytes, but there's some redundancy, so you could equally watch for the next byte that doesn't start with "10" to find the start of the next codepoint.

```
0xxxxxxx : ASCII
10xxxxxx : 2nd, 3rd or 4th byte of code
11xxxxxx : 1st byte of code, further high bits indicating number of bytes
```

A codepoint in Unicode isn't necessarily the same as a character. There are modifier codepoints (such as accents), for instance.
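In code, the boundary test from the table above reduces to checking the top two bits: a byte is a continuation byte exactly when `(b & 0xC0) == 0x80`. A small sketch that counts the codepoints in a UTF-8 buffer using that rule:

```c
#include <stddef.h>

/* Count UTF-8 codepoints by counting the bytes that are NOT
   continuation bytes (i.e., not of the form 10xxxxxx). */
size_t utf8_codepoints(const unsigned char *buf, size_t len)
{
    size_t count = 0;
    for (size_t i = 0; i < len; i++) {
        if ((buf[i] & 0xC0) != 0x80)
            count++;
    }
    return count;
}
```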
nicer way to select the right function? I'm writing a `contains()` utility function and have come up with this. My question is: Is there a nicer way to select the right function to handle the call? ``` template <class Container> inline auto contains(Container const& c, typename Container::key_type const& key, int) noexcept( noexcept(c.end(), c.find(key))) -> decltype(c.find(key), true) { return c.end() != c.find(key); } template <class Container> inline auto contains(Container const& c, typename Container::value_type const& key, long) noexcept( noexcept(c.end(), ::std::find(c.begin(), c.end(), key)) ) { auto const cend(c.cend()); return cend != ::std::find(c.cbegin(), cend, key); } template <class Container, typename T> inline auto contains(Container const& c, T const& key) noexcept( noexcept(contains(c, key, 0)) ) { return contains(c, key, 0); } ```
For the record, you could write: ``` #include "magic.h" template <typename T, typename... Us> using has_find = decltype(std::declval<T>().find(std::declval<Us>()...)); template <class Container, typename T> auto contains(const Container& c, const T& key) { return static_if<detect<has_find, decltype(c), decltype(key)>{}> ( [&] (auto& cont) { return cont.end() != cont.find(key); }, [&] (auto& cont) { return cont.end() != std::find(cont.begin(), cont.end(), key); } )(c); } ``` where `magic.h` contains: ``` #include <type_traits> template <bool> struct tag {}; template <typename T, typename F> auto static_if(tag<true>, T t, F f) { return t; } template <typename T, typename F> auto static_if(tag<false>, T t, F f) { return f; } template <bool B, typename T, typename F> auto static_if(T t, F f) { return static_if(tag<B>{}, t, f); } template <bool B, typename T> auto static_if(T t) { return static_if(tag<B>{}, t, [](auto&&...){}); } template <typename...> using void_t = void; template <typename AlwaysVoid, template <typename...> class Operation, typename... Args> struct detect_impl : std::false_type {}; template <template <typename...> class Operation, typename... Args> struct detect_impl<void_t<Operation<Args...>>, Operation, Args...> : std::true_type {}; template <template <typename...> class Operation, typename... Args> using detect = detect_impl<void, Operation, Args...>; ``` [**DEMO**](http://coliru.stacked-crooked.com/a/72b567e5230a03c4)
How to create Chrome extension that will search for text in source and alter formatting I am new here... I am wondering if anyone could help point me in the right direction here. I am looking to create a Chrome extension that searches a page for a number of different strings (one example: "(410)" or "(1040)" without the quotes) and highlight these so they're easier to see. To explain a little further why I need this: I frequently work out of a queue with other coworkers, and there are specific things I need to focus on but I can ignore the rest of the questions on the page. So it would be helpful if my particular items were highlighted. Thank you! Edit: an example of how the source code works: ``` <td class="col-question">28 (510). <span id="ctl00_PlaceHolderMain_ctl01_ContentCheckList_ctl28_Label1" title=" &lt;p> <td class="col-question">49 (1150). <span id="ctl00_PlaceHolderMain_ctl01_ContentCheckList_ctl49_Label1" title="&lt;p> ``` etc etc etc... there are around 100 numbers in parenthesis I would want highlighted. And probably another 100 that I wouldn't want highlighted.
Okay, to start off with I will show you how to inject the code into the page(s) you want, we will get to selecting the correct numbers in a bit. I will be using [`jQuery`](http://jquery.com/) throughout this example, it isn't strictly necessary, but I feel that it may make it a bit easier. First we declare a [`content script`](http://developer.chrome.com/extensions/content_scripts.html) in our **manifest** as well as [`host permissions`](http://developer.chrome.com/extensions/match_patterns.html) for the page you are injecting into: ``` "content_scripts": [ { "matches": ["http://www.domain.com/page.html"], "js": ["jquery.js","highlightNumbers.js"], "css": ["highlight.css"] }], "permissions": ["http://www.domain.com/*"] ``` This will place our code in the page we are trying to change. Now you said that there are about 100 different numbers you would want to highlight and I will assume that these are specific numbers that don't match any patterns, so the only way to select all of them would be to make an explicit list of numbers to highlight. **highlightNumbers.js** ``` // This array will contain all of the numbers you want to highlight // in no particular order var numberArray = [670,710,820,1000,...]; numberArray.forEach(function(v){ // Without knowing exactly what the page looks like I will just show you // how to highlight just the numbers in question, but you could easily // similarly highlight surrounding text as well var num = "(" + v + ")"; // Select the '<td>' that contains the number we are looking for var td = $('td.col-question:contains('+num+')'); // Make sure that this number exists if(td.length > 0){ // Now that we have it we need to single out the number and replace it var span = td.html().replace(num,'<span class="highlight-num">'+num+'</span>'); var n = td.html(span); } // Now instead of '(1000)' we have // '<span class="highlight-num">(1000)</span>' // We will color it in the css file }); ``` Now that we have singled out all of the numbers that are important, we need to color them. You can, of course, use whatever color you want, but for the sake of the example I will be using a bright green. **highlight.css** ``` span.highlight-num{ background-color: rgb(100, 255, 71); } ``` This should color all of the numbers that you put in the array in the `js` file. Let me know if there are any problems with it as I can't exactly test it.
Does AWS offer inter-region / cross region VPC Peering? **AWS inter-region / cross region VPC Peering** We have been using VPC peering for connecting two VPCs within a region. It works great and eliminates the need for a VPN. As we expand to other regions, we use VPNs to connect the VPCs across regions. VPN scaling and HA are two big issues and the connectivity is not robust. Is it possible to create VPC peering between two regions in AWS?
AWS announced on [Nov 29, 2017, that AWS started supporting inter-region VPC peering](https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/) and on [Feb 20, 2018 added 9 additional regions](https://aws.amazon.com/about-aws/whats-new/2018/02/inter-region-vpc-peering-is-now-available-in-nine-additional-aws-regions/). Amazon EC2 now allows peering relationships to be established between Virtual Private Clouds (VPCs) across different AWS regions. Inter-Region VPC Peering allows VPC resources like: - EC2 instances - RDS databases - Lambda functions **running in different AWS regions to communicate with each other** using private IP addresses, without requiring gateways, VPN connections or separate network appliances. As of Feb 20, 2018, inter-region VPC peering is available in: - us-east-1 (N. Virginia) (added Nov 29, 2017) - us-east-2 (Ohio) (added Nov 29, 2017) - us-west-1 (Northern California) (added Feb 20, 2018) - us-west-2 (Oregon) (added Nov 29, 2017) - ap-south-1 (Mumbai) (added Feb 20, 2018) - ap-southeast-1 (Singapore) (added Feb 20, 2018) - ap-southeast-2 (Sydney) (added Feb 20, 2018) - ap-northeast-1 (Tokyo) (added Feb 20, 2018) - ca-central-1 (Central) (added Feb 20, 2018) - eu-west-1 (Ireland) (added Nov 29, 2017) - eu-west-2 (London) (added Feb 20, 2018) - eu-west-3 EU (Paris) (added Feb 20, 2018) - sa-east-1 (São Paulo) (added Feb 20, 2018) For more information: [Announcing Support for Inter-Region VPC Peering](https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/) - Horizontally scaled - Highly available - Encrypts inter-region traffic - No single point of failure or bandwidth bottleneck - Traffic always stays on the global AWS backbone and never traverses the public internet - Reduces threat vectors, such as common exploits and DDoS attacks
Extract features into a dataset from keras model I use the following code (courtesy to [here](https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py)) which runs CNN for training MNIST images: ``` from __future__ import print_function import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras import backend as K batch_size = 128 num_classes = 10 epochs = 1 # input image dimensions img_rows, img_cols = 28, 28 # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() if K.image_data_format() == 'channels_first': x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols) x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols) input_shape = (1, img_rows, img_cols) else: x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) print(model.save_weights('file.txt')) # <<<<<---- score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` My goal is to use CNN model to extract MNIST features into a dataset that I can use as an input for another classifier. In this example, I don't care about the classification operation since all I need is the features of the trained images. The only method I found is `save_weights` as: `print(model.save_weights('file.txt'))` How can I extract features into a dataset from keras model?
After training or loading the existing trained model, you can create another model that ends at the layer whose output you want:

```
from keras.models import Model

# model.layers[-3] is the Dense(128, ...) layer in the network above
extract = Model(model.inputs, model.layers[-3].output)

# e.g. extract features for the test images
features = extract.predict(x_test)
```

Its `.predict` method returns the vectors produced by that specific layer; in this case every image becomes a vector of shape (128,), the output of the Dense(128, ...) layer.

You can also train these networks jointly with 2 outputs using the [functional API](https://keras.io/getting-started/functional-api-guide/). Follow the guide and you'll see that you can chain models together and have multiple outputs, each possibly with a separate loss. This will allow your model to learn shared features that are useful both for classifying the MNIST images and for your other task at the same time.
Pass dynamic parameters to Application.Run in VBA - Error 449 argument not optional error **Context:** I am trying to dynamically call a method via VBA's `Application.Run` function and pass parameters to the method, dynamically. More of a proof of concept than an actual use case. **Code:** ``` Public Sub Test() Call MethodDynamically("MethodToBeCalled", "This doesnt, work") End Sub Public Sub MethodDynamically(MethodName As String, Params As String) Application.Run MethodName, Params End Sub Public Sub MethodToBeCalled(Param1 As String, Param2 As String) Debug.Print Param1 & " " & Param2 End Sub ``` **Error:** Running the `Test` method I receive `Run-time error '449': Argument not optional` on the `Application.Run` line in the `MethodDynamically` method. **Expectation**: My desire is that running the `Test` method will trigger `MethodToBeCalled` with `This doesnt` and `work` being passed as parameters. The result would be `This doesnt work` in the Immediate Window.
This question already has an answer [here](https://stackoverflow.com/questions/44737114/excel-vba-forward-paramarray-to-application-run?noredirect=1&lq=1) but it's worth considering an example that allows for `MethodDynamically` to call other sub-routines with an arbitrary number of arguments. The solution is to use the `ParamArray` to deal with an unknown number of arguments. For example: ``` Option Explicit Public Sub Test() Call MethodDynamically("MethodToBeCalled1", "This", "works") '<-- 2 args Call MethodDynamically("MethodToBeCalled2", "This", "works", "too") '<-- 3 args Call MethodDynamically("MethodToBeCalled3", "This", "works", "too", "as well") '<-- 4 args Call MethodDynamically("MethodToBeCalled4", "Working", 10, 2, 35) '<-- 4 args; different types End Sub Public Sub MethodDynamically(MethodName As String, ParamArray Params() As Variant) Application.Run MethodName, Params End Sub Public Sub MethodToBeCalled1(Params As Variant) Debug.Print Params(0) & " " & Params(1) End Sub Public Sub MethodToBeCalled2(Params As Variant) Debug.Print Params(0) & " " & Params(1) & " " & Params(2) End Sub Public Sub MethodToBeCalled3(Params As Variant) Debug.Print Params(0) & " " & Params(1) & " " & Params(2) & " " & Params(3) End Sub Public Sub MethodToBeCalled4(Params As Variant) Debug.Print Params(0) & " " & CStr((Params(1) ^ Params(2)) + Params(3)) End Sub ``` Outputs: ``` This works This works too This works too as well Working 135 ```
How to ignore some route while using ASP.NET Friendly URLs? I am using ASP.NET Friendly URLs with success, but I need to ignore route for a particular `Foo.aspx` page (because this page needs POST data and once re-routed the POST data is not available anymore in `Page_Load()`!). It looks like using ASP.NET Friendly URLs discard any attempt to ignore a route. Even the [MSDN example](http://msdn.microsoft.com/en-us/library/dd992982(v=vs.110).aspx) for ignoring route doesn't work once ASP.NET Friendly URLs routing is used: ``` routes.Ignore("{*allaspx}", new {allaspx=@".*\.aspx(/.*)?"}); ``` And to ignore route to `Foo.aspx` the code should look like that, isn't it? ``` routes.Ignore("{*fooaspx}", new { fooaspx = @"(.*/)?foo.aspx(/.*)?" }); ``` The `Global.asax` code looks like: ``` public static void RegisterRoutes(RouteCollection routes) { // This doesn't work whether I put this code before or after ASP.NET Friendly URLs code. routes.Ignore("{*allaspx}", new { allaspx = @".*\.aspx(/.*)?" }); routes.Canonicalize().Lowercase(); var settings = new FriendlyUrlSettings(); settings.AutoRedirectMode = RedirectMode.Permanent; routes.EnableFriendlyUrls(settings); } void Application_Start(object sender, EventArgs e) { RegisterRoutes(RouteTable.Routes); } ``` This question [has been asked on the ASP.NET Friendly URLs codeplex site](https://aspnetfriendlyurls.codeplex.com/workitem/13), but didn't get an answer. Thanks for your help on this :)
Thanks to Damian Edwards comment, I got this issue completely solved, thanks Damian. I just need to derive from `WebFormsFriendlyUrlResolver` to override the method `ConvertToFriendlyUrl()` to make it no-op when the url match the url I don't want to redirect: ``` using Microsoft.AspNet.FriendlyUrls.Resolvers; public class MyWebFormsFriendlyUrlResolver : WebFormsFriendlyUrlResolver { public MyWebFormsFriendlyUrlResolver() { } public override string ConvertToFriendlyUrl(string path) { if (!string.IsNullOrEmpty(path)) { if (path.ToLower().Contains("foo")) { // Here the filter code return path; } } return base.ConvertToFriendlyUrl(path); } } ``` Then in `Global.asax` the code now looks like: ``` public static void RegisterRoutes(RouteCollection routes) { routes.Canonicalize().Lowercase(); var settings = new FriendlyUrlSettings(); settings.AutoRedirectMode = RedirectMode.Permanent; routes.EnableFriendlyUrls(settings, new IFriendlyUrlResolver[] { new MyWebFormsFriendlyUrlResolver() }); } void Application_Start(object sender, EventArgs e) { RegisterRoutes(RouteTable.Routes); } ```
Pandas read text file slicing columns possibly with empty strings/values according to strings/values alignment with header
Imagine a text file that looks like this:

```
Places  Person     Number  Comments
bar     anastasia  75      very lazy
home    jimmy              nothing to say
beach              2
```

Consider the first line as the header containing the names of the columns I want for my pandas data frame. You can see that there are empty cells, and there is a column that has strings with spaces.

There is a readable pattern in this file: columns are separated by at least 2 spaces, and the information of each column can be read from the end point of the column name to the end point of the previous column name, basically. There is no ambiguity in this.

---

If I do

```
df = pd.read_csv('text_file.txt')
```

I will get a 3 x 1 data frame where the only column gets called `"Places Person Number Comments"`. So it fails to understand the table format.

---

If I do

```
df = pd.read_csv('text_file.txt', delim_whitespace = True)
```

it will create the right number of columns but won't be able to understand the spaces in the values in `Comments`; it will split the comments and send the pieces to different cells, like so:

```
          Places   Person Number Comments
bar    anastasia       75   very     lazy
home       jimmy  nothing     to      say
beach          2      NaN    NaN      NaN
```

---

If I do

```
df = pd.read_csv('text_file.txt', sep = '\s{2,}', engine = 'python')
```

it will understand that only two or more spaces separate one column from another. So that's correct. But it won't be able to understand that there are empty cells and will wrongly displace cells from one column to another.

```
  Places     Person          Number   Comments
0    bar  anastasia              75  very lazy
1   home      jimmy  nothing to say       None
2  beach          2            None       None
```

At this point I don't know what to do. Is there an elegant way to do this in Pandas?
You can use [`pd.read_fwf()`](https://pandas.pydata.org/docs/reference/api/pandas.read_fwf.html) to read your file, which is a file of fixed-width formatted lines, into a DataFrame.

```
df = pd.read_fwf('text_file.txt')
```

## Demo

I use `StringIO` for the demo. You can pass your actual file name to the function call instead.

```
import pandas as pd
from io import StringIO

text = """
Places  Person     Number  Comments
bar     anastasia  75      very lazy
home    jimmy              nothing to say
beach              2
"""

df = pd.read_fwf(StringIO(text))

print(df)

  Places     Person  Number        Comments
0    bar  anastasia    75.0       very lazy
1   home      jimmy     NaN  nothing to say
2  beach        NaN     2.0             NaN
```
How do I access Sinatra's Logger outside the Request Scope It seems that Sinatra's logger is only available inside request handlers (See <https://github.com/sinatra/sinatra#logging>), but what if I want to use a logger in other places, for example in helper methods or in the configure hook? Is the Sinatra logger available through other means?
[The docs](http://www.sinatrarb.com/intro#Scopes%20and%20Binding) give some examples on scope, but you can see the `logger` helper within methods defined in the `helpers` block, as the helpers block has application scope. Within `configure` it's not available, but what I tend to do is set up my own logging anyway, within the rackup file as a constant or class instance variable, and then it's available within `configure` (and anywhere else). As an example for just a single application:

```
require 'sinatra'
require 'logger'

configure do
  set :logging, nil
  logger = Logger.new STDOUT
  logger.level = Logger::INFO
  logger.datetime_format = '%a %d-%m-%Y %H%M '
  set :logger, logger
end

helpers do
  def check
    settings.logger.info "I'm inside a helper"
    # if you were using Sinatra's logger, you could just call
    # logger.info "I'm inside a helper"
    # and it would work, but only if you've not done the stuff above
    # in the configure block
  end
end

get "/" do
  check # this would work for the way above, or with the Sinatra logger
  "Hello, World!"
end

get "/another" do
  settings.logger.info "Using the settings helper this time" # this only works
  # when you've defined your own logger
  "Hello again"
end
```

---

An example as a class instance variable as a better "global":

```
class MyLogger
  def self.logger
    if @_logger.nil?
      @_logger = Logger.new STDOUT
      @_logger.level = Logger::INFO
      @_logger.datetime_format = '%a %d-%m-%Y %H%M '
    end
    @_logger
  end
end
```

and then use wherever needed:

```
configure do
  set :logging, nil
  logger = MyLogger.logger
  set :logger, logger
end
```

or in a class:

```
class AnotherClass
  def some_method
    MyLogger.logger.warn "I'm in some method"
  end
end
```

---

Sinatra also comes (since 1.3) with [a helper for logging](http://www.sinatrarb.com/intro.html#Logging), and [here is a recipe](http://recipes.sinatrarb.com/p/middleware/rack_commonlogger) for logging to STDOUT and a file that you may find useful too.
Can I prevent users from closing a web browser window? I have a Windows 8 machine that will be setup in a training room. The training modules run through a web browser. I don't want the users to be able to close the browser or open any other programs, so I'm using group policies to prevent access to other programs, but I can't figure out a way to prevent the Browser from being closed. Is there a way I can accomplish that?
Kiosk mode for [Internet Explorer](http://www.ehow.com/how_6908457_open-ie-explorer-kiosk-mode.html). `Start`, point to `Run`, and then type `iexplore -k <web address>` > > When you run Internet Explorer in Kiosk mode, the Internet Explorer title bar, menus, toolbars, and status bar are not displayed and Internet Explorer runs in Full Screen mode. The Windows taskbar is not displayed, but you can switch to other running programs by pressing ALT+TAB or CTRL+ALT+DEL. Because Internet Explorer is running in Full Screen mode, you cannot access the Windows desktop until you quit Internet Explorer. > > > Key combinations still work, but for the average application, this should work for you.
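If the machine should go straight into the training modules at logon, one minimal sketch is a batch file dropped in the Startup folder. The URL below is a placeholder and the install path is just the usual default:

```
@echo off
rem Launch Internet Explorer in Kiosk mode pointed at the training site.
start "" "C:\Program Files\Internet Explorer\iexplore.exe" -k http://training.local/start
```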
How to update UI on chain request with Rx Java
I would like to update the UI twice during a chained request: first after `getUserByID()` completes, and again after `getParentName()` completes. But when I try to update the UI after the first request, I get an error because the UI is touched from a background thread. I don't know how to solve this problem.

Sample code

```
API.getUserByID("USER ID")
    .flatMap(user -> {
        // How to update UI here?
        return API.getParentName(user.getId());
    })
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(parent -> {
        // It's OK to update the UI here because we're on the main thread.
        parentTextView.setText(parent.getName());
    });
```

Thank you.
You want to update the UI on an emission in the middle of the chain, so you need to specify the observer thread for that part of the chain. Then you can use the `doOnNext` operator, which is invoked on the observer thread:

```
API.getUserByID("USER ID")
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .doOnNext(user -> {
        // Runs on the main thread: do the first UI update here.
    })
    .flatMap(user -> API.getParentName(user.getId())
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread()))
    .subscribe(parent -> {
        // It's OK to update the UI here because we're on the main thread.
        parentTextView.setText(parent.getName());
    });
```
how to know location of return address on stack c/c++
I have been reading about a function that can overwrite its return address.

```
void foo(const char* input)
{
    char buf[10];

    //What? No extra arguments supplied to printf?
    //It's a cheap trick to view the stack 8-)
    //We'll see this trick again when we look at format strings.
    printf("My stack looks like:\n%p\n%p\n%p\n%p\n%p\n%p\n\n"); //%p, i.e., expect pointers

    //Pass the user input straight to secure code public enemy #1.
    strcpy(buf, input);
    printf("%s\n", buf);

    printf("Now the stack looks like:\n%p\n%p\n%p\n%p\n%p\n%p\n\n");
}
```

It was suggested that this is how the stack would look:

Address of foo = **00401000**

**My stack looks like:**

00000000

00000000

7FFDF000

0012FF80

**0040108A <-- We want to overwrite the return address for foo.**

00410EDE

**Questions:**

- Why did the author arbitrarily choose the second-to-last value as the return address of `foo()`?
- Are values added to the stack from the bottom or from the top?
- Apart from the function return address, what are the other values I apparently see on the stack? I.e., why isn't it filled with zeros?

Thanks.
The one above it is the previous EBP (`0012FF80`). The value above the prev-EBP is always the return address. (This obviously assumes a non-FPO binary and 32bit Windows)1. If you recall, the prologue looks like: ``` push ebp ; back up the previous ebp on the stack mov ebp, esp ; set up the new frame pointer ``` and when a function is called, e.g. ``` call 0x00401000 ``` The current EIP is pushed on the stack (used as the return address), so the stack after the prologue looks like: ``` [ebp+0xc] ; contains parameter 1, etc [ebp+0x8] ; contains parameter 0 [ebp+0x4] ; contains return address [ebp] ; contains prev-EBP ``` So for each `%p`, printf uses the next 4 bytes starting from `[ebp+0xc]` (the first `%p` parameter). Eventually you hit the previous EBP value stored on the stack, which is (`0012FF80`) and then the next one is the Return Address. Note these addresses must 'make sense', which they clearly do here (though it may not be 'clear' for all). Re Q2) The stack grows down. So when you `push eax`, 4 is subtracted from `esp`, then the value of eax is moved to [esp], equivalently in code: ``` push eax ; <=> sub esp, 4 mov [esp], eax ``` --- 1. The book is *Writing Secure Code*, yes?
ocaml type over-binding due to specialized recursive use of type I have a parameterized type that recursively uses itself but with a type parameter specialized and when I implement a generic operator, the type of that operator is bound too tightly because of the case that handles the specialized sub-tree. The first code sample shows the problem, and the second shows a workaround that I'd rather not use because the real code has quite a few more cases so duplicating code this way is a maintenance hazard. Here's a minimal test case that shows the problem: ``` module Op1 = struct type 'a t = A | B (* 'a is unused but it and the _ below satisfy a sig *) let map _ x = match x with | A -> A | B -> B end module type SIG = sig type ('a, 'b) t = | Leaf of 'a * 'b (* Here a generic ('a, 'b) t contains a specialized ('a, 'a Op1.t) t. *) | Inner of 'a * ('a, 'a Op1.t) t * ('a, 'b) t val map : ('a -> 'b) -> ('a_t -> 'b_t) -> ('a, 'a_t) t -> ('b, 'b_t) t end module Impl : SIG = struct type ('a, 'b) t = | Leaf of 'a * 'b | Inner of 'a * ('a, 'a Op1.t) t * ('a, 'b) t (* Fails signature check: Values do not match: val map : ('a -> 'b) -> ('a Op1.t -> 'b Op1.t) -> ('a, 'a Op1.t) t -> ('b, 'b Op1.t) t is not included in val map : ('a -> 'b) -> ('a_t -> 'b_t) -> ('a, 'a_t) t -> ('b, 'b_t) t *) let rec map f g n = match n with | Leaf (a, b) -> Leaf (f a, g b) (* possibly because rec call is applied to specialized sub-tree *) | Inner (a, x, y) -> Inner (f a, map f (Op1.map f) x, map f g y) end ``` This modified version of `Impl.map` fixed the problem but introduces a maintenance hazard. ``` let rec map f g n = match n with | Leaf (a, b) -> Leaf (f a, g b) | Inner (a, x, y) -> Inner (f a, map_spec f x, map f g y) and map_spec f n = match n with | Leaf (a, b) -> Leaf (f a, Op1.map f b) | Inner (a, x, y) -> Inner (f a, map_spec f x, map_spec f y) ``` Is there any way to get this to work without duplicating the body of `let rec map`? --- Applying gasche's solution yields the following working code: ``` let rec map : 'a 'b 'c 'd . ('a -> 'b) -> ('c -> 'd) -> ('a, 'c) t -> ('b, 'd) t = fun f g n -> match n with | Leaf (a, b) -> Leaf (f a, g b) | Inner (a, x, y) -> Inner (f a, map f (Op1.map f) x, map f g y) ```
This style of recursion in datatype definitions is called "non-regular": the recursive type `'a t` is reused at an instance `foo t` where `foo` is different from the single variable `'a` used in the definition. Another well-known example is the type of full binary trees (with exactly 2^n leaves):

```
type 'a full_tree =
  | Leaf of 'a
  | Node of ('a * 'a) full_tree
```

Recursive functions that operate on these datatypes typically suffer from the *monomorphic recursion* restriction of languages with type inference. When you do type inference you have to make a guess at what the type of a recursive function may be, before type-checking its body (as it may be used inside). ML languages refine this guess by unification/inference, but only monomorphic types may be inferred. If your function makes polymorphic uses of itself (it calls itself recursively on a different type than what it took as input), this cannot be inferred (it is undecidable in the general case).

```
let rec depth = function
  | Leaf _ -> 1
  | Node t -> 1 + depth t
                        ^
Error: This expression has type ('a * 'a) full_tree
       but an expression was expected of type 'a full_tree
```

Since 3.12, OCaml allows the use of an explicit polymorphic annotation of the form `'a 'b . foo`, meaning `forall 'a 'b. foo`:

```
let rec depth : 'a . 'a full_tree -> int = function
  | Leaf _ -> 1
  | Node t -> 1 + depth t
```

You could do the same in your example. However, I wasn't able to compile the type after using the annotation you have in your module signature, as it appears to be wrong (the `'a_t` are just weird). Here is what I used to make it work:

```
let rec map : 'a 'b . ('a -> 'b) -> ('a Op1.t -> 'b Op1.t)
                      -> ('a, 'a Op1.t) t -> ('b, 'b Op1.t) t
  = fun f g n ->
    match n with
      | Leaf (a, b) -> Leaf (f a, g b)
      | Inner (a, x, y) -> Inner (f a, map f (Op1.map f) x, map f g y)
```
Is the "true" result of >, <, !, &&, || or == defined? When I for instance write `7>1` in C (say C99 if this is not an always-been feature), can I expect the result will be exactly 1 or just some non-zero value? Does this hold for all bool operators?
In C99 §6.5.8 Relational Operators, item 6 (`<`, `>`, `<=` and `>=`):

> Each of the operators < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to) shall yield **1** if the specified relation is true and **0** if it is false. The result has type **int**.

As for equality operators, it's a bit further in §6.5.9 (`==` and `!=`):

> The == (equal to) and != (not equal to) operators are analogous to the relational operators except for their lower precedence. Each of the operators yields **1** if the specified relation is true and **0** if it is false. The result has type **int**. For any pair of operands, exactly one of the relations is true.

The logical AND and logical OR are yet a bit further in §6.5.13 (`&&`)

> The && operator shall yield **1** if both of its operands compare unequal to 0; otherwise, it yields **0**. The result has type **int**.

... and §6.5.14 (`||`)

> The || operator shall yield **1** if either of its operands compare unequal to 0; otherwise, it yields **0**. The result has type **int**.

And the semantics of the unary arithmetic operator `!` are over at §6.5.3.3/4:

> The result of the logical negation operator ! is **0** if the value of its operand compares unequal to 0, **1** if the value of its operand compares equal to 0. The result has type **int**. The expression !E is equivalent to (0==E).

Result type is `int` across the board, with `0` and `1` as possible values. (Unless I missed some.)
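If you want to convince yourself empirically, a quick check compiled as C99 (purely illustrative):

```
#include <stdio.h>

int main(void)
{
    /* Each operator yields an int that is exactly 0 or 1 */
    printf("%d %d %d %d %d\n", 7 > 1, 7 < 1, !0, 2 == 2, 1 && 3);
    /* prints: 1 0 1 1 1 */
    return 0;
}
```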
Give equal height to children of elements within a flex container Right now, my orange squares (the children in the flex container) work as intended and they are all the same height, but I can't make it work for the red ones. I want to make the height of the red items (the children of the children) all have the same height as the highest one. My HTML is the following: ``` .container { display: flex; flex-direction: row; flex-wrap: wrap; justify-content: center; align-content: center; } .col { flex: 1; background:orange; margin: 1px; } .col-item { margin: 15px; } ``` ``` <html> <body> <div class="container"> <div class="col"> <div class="col-item" style="background: red;"> <h2>Column 1</h2> <p>Hello World</p> </div> </div> <div class="col"> <div class="col-item" style="background: red;"> <h2>Column 2</h2> <p>Hello World!</p> <p>Hello World!</p> <p>Hello World!</p> <p>Hello World!</p> </div> </div> <div class="col"> <div class="col-item" style="background: red;"> <h2>Column 3</h2> <p>Some other text..</p> <p>Some other text..</p> </div> </div> </div> </body> </html> ```
The flex properties on the parent container (`container` in your example) don't pass down to the child elements, even if they are also containers. Therefore you also need to make the `col` divs `display:flex`, as follows: ``` .col { flex: 1; display: flex; /* make this a flex container so it has flex properties */ justify-content: center; /* to horizontally center them in this flex display */ /* REST OF YOUR CSS HERE */ } ``` Note that you also need to add `flex: 1;` to the content itself so it doesn't shrink, e.g.: ``` .col-item { flex: 1; /* REST OF YOUR CSS HERE */ } ``` **Working Example** with your code: ``` .container { display: flex; flex-direction: row; flex-wrap: wrap; justify-content: center; align-content: center; } .col { flex: 1; background: orange; margin: 1px; display: flex; justify-content: center; } .col-item { flex: 1; margin: 15px; } ``` ``` <html> <body> <div class="container"> <div class="col"> <div class="col-item" style="background: red;"> <h2>Column 1</h2> <p>Hello World</p> </div> </div> <div class="col"> <div class="col-item" style="background: red;"> <h2>Column 2</h2> <p>Hello World!</p> <p>Hello World!</p> <p>Hello World!</p> <p>Hello World!</p> </div> </div> <div class="col"> <div class="col-item" style="background: red;"> <h2>Column 3</h2> <p>Some other text..</p> <p>Some other text..</p> </div> </div> </div> </body> </html> ```
When is it OK to use exception handling for business logic? I think it is accepted that as a general rule in Java (and perhaps any language with exception handling) one should try to avoid using exception handling to actually handle business logic. In general, if it is expected that a certain situation is supposed to happen, one should check for it and handle it more directly than relying on exception handling to do the checking for you. For example, the following is not considered good practice: ``` try{ _map.put(myKey, myValue); } catch(NullPointerException e){ _map = new HashMap<String, String>(); } ``` Instead lazy initialization should be accomplished more like this: ``` if(_map == null){ _map = new HashMap<String, String>(); } _map.put(myKey, myValue); ``` Of course there could be far more complex logic than simply handling lazy initialization. So given that this type of thing is usually frowned upon...when, if ever, is it a good idea to rely on an exception happening for certain business logic to occur? Would it be accurate to say that any instance where one feels compelled to use this approach is really highlighting a weakness of the API being used?
Whenever the exception can be anticipated but not avoided. Say, if you are relying on an external API of some sort to parse data, and that API offers parse methods but nothing to tell whether a given input can be parsed or not (or if whether the parse can succeed or not depends on factors out of your control, but the API doesn't provide appropriate function calls), and the parsing method throws an exception when the input cannot be parsed. **With a properly designed API, this should boil down to a quantity somewhere in the range "virtually never" to "never".** I can see absolutely no reason to use exception handling as a means of normal flow control in code. It's expensive, it's hard to read (just look at your own first example; I realize it was probably written very quickly, but when `_map` hasn't been initialized, what you end up with is an empty map, throwing away the entry you were trying to add), and it litters the code with largely useless try-catch blocks, which can very well hide *real* problems. Again taking your own example, what if the call to `_map.add()` were to throw a `NullPointerException` for some reason *other* than `_map` being `null`? Suddenly, you are silently recreating an empty map rather than adding an entry to it. Which I'm sure I don't really have to say can lead to any number of bugs in completely unrelated places in the code because of unexpected state... **Edit:** Just to be clear, the above answer is written in the context of Java. Other languages may (and apparently, do) differ in the implementation expense of exceptions, but other points should still hold.
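As a concrete illustration of the rare "anticipated but not avoidable" case: the JDK itself offers no tryParse-style check for integers, so catching the exception is the only option. This is a minimal sketch; the fallback value of -1 is purely illustrative:

```
// Integer.parseInt offers no "canParse" companion, so the
// NumberFormatException is the only signal for bad input.
static int parsePort(String input) {
    try {
        return Integer.parseInt(input);
    } catch (NumberFormatException e) {
        return -1; // illustrative default when input isn't numeric
    }
}
```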
AttributeError: module 'scipy.sparse' has no attribute 'coo\_array'
I'm getting this error in my Jupyter Notebook. What would be the best way to fix this using conda instead of pip?

I've attempted `conda upgrade --all` and that didn't seem to work.
The `scipy.sparse.*_array` functions were introduced with v1.8. The `networkx` package started requiring `scipy >=1.8` with v2.7. So, either upgrade SciPy ``` conda install 'scipy>=1.8' ``` or downgrade NetworkX: ``` conda install 'networkx<2.7' ``` Part of the issue here is that, at [the recommendation of a `networkx` developer](https://github.com/conda-forge/networkx-feedstock/issues/1#issuecomment-1047181328), Conda Forge stopped explicitly requiring `scipy` as a dependency of `networkx`, and therefore there is no longer any constraint. I opened [an issue on the feedstock](https://github.com/conda-forge/networkx-feedstock/issues/42) to revisit coinstallation constraints (`run_constrained` specifications).
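After changing packages, a quick sanity check in the notebook (a sketch; the version numbers printed will vary):

```
import scipy
import networkx

print(scipy.__version__, networkx.__version__)

# succeeds on SciPy >= 1.8, which is what NetworkX >= 2.7 calls into
from scipy.sparse import coo_array
```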
How to integrate Tesseract OCR Library to a C++ program I am trying to use [Tesseract OCR Library](https://code.google.com/p/tesseract-ocr/) in order to create a program to read pictures of elevator floor numbers. I haven't found any example on how to include the Tesseract Library into a C++ file. Something like: ``` #include "tesseract.h" ``` I am using Tesseract v 3.00 on Ubuntu 10.10.
The [PlatformStatus](http://code.google.com/p/tesseract-ocr/wiki/PlatformStatus) page has some comments on how to install it. It has dependencies (leptonica) which also need to be installed. [Another solution](http://paramountideas.com/tesseract-ocr-30-and-leptonica-installation-centos-55-and-opensuse-113) also linked from the above discussion has similar details for other linux distributions.

When it comes to linking with your program, [this post](https://stackoverflow.com/a/5283690/1408646) has some specifics.

There [is also a C wrapper to the underlying API calls](http://code.google.com/p/ocrivist/source/browse/#svn/tessintf); looking at the files included should tell you what to include. [Other wrappers](http://code.google.com/p/tesseract-ocr/wiki/AddOns) are available here.

The documentation of the [base API class is here...](http://fossies.org/dox/tesseract-ocr-3.02.02/classtesseract_1_1TessBaseAPI.html)

A comment from the [Platform Status](http://code.google.com/p/tesseract-ocr/wiki/PlatformStatus) page for the installation.

*Comment by tim.lawr...@gmail.com, Nov 23, 2011 I successfully installed tesseract-ocr on Ubuntu 11.10 64Bit using these commands:*

```
sudo apt-get install libleptonica-dev autoconf automake libtool libpng12-dev libjpeg62-dev libtiff4-dev zlib1g-dev subversion g++
cd
svn checkout http://tesseract-ocr.googlecode.com/svn/trunk/ tesseract-ocr
cd tesseract-ocr
./autogen.sh
./configure
make
sudo make install
sudo ldconfig
cd /usr/local/share/tessdata/
sudo wget http://tesseract-ocr.googlecode.com/files/eng.traineddata.gz
sudo gunzip eng.traineddata.gz
cd ~/tesseract-ocr/
tesseract phototest.tif phototest
cat phototest.txt
```
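Once the library and the English traineddata are installed, a minimal C++ sketch against the base API looks like the following. Header names and link flags varied between 3.0x releases, so treat those as assumptions:

```
#include <cstdio>
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>

int main()
{
    tesseract::TessBaseAPI api;
    if (api.Init(NULL, "eng")) {            // NULL = default tessdata path
        fprintf(stderr, "Could not initialise tesseract\n");
        return 1;
    }

    Pix *image = pixRead("phototest.tif");  // sample image from the tesseract sources
    api.SetImage(image);

    char *text = api.GetUTF8Text();         // caller owns the returned buffer
    printf("%s", text);

    delete [] text;
    api.End();
    pixDestroy(&image);
    return 0;
}
```

Built with something like `g++ ocr.cpp -o ocr -ltesseract -llept`; the library names are assumptions and may differ on a 3.00 install.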
F# compiler keeps dead objects alive
I'm implementing some algorithms which work on large data (~250 MB - 1 GB). For this I needed a loop to do some benchmarking. However, in the process I learned that F# is doing some nasty things, which I hope some of you can clarify.

Here is my code (a description of the problem is below):

```
open System

for i = 1 to 10 do
    Array2D.zeroCreate 10000 10000 |> ignore
    printfn "%d" (GC.GetTotalMemory(true))

Array2D.zeroCreate 10000 10000 |> ignore // should force a garbage collection, and GC.Collect() doesn't help either
printfn "%d" (GC.GetTotalMemory(true))
Array2D.zeroCreate 10000 10000 |> ignore
printfn "%d" (GC.GetTotalMemory(true))
Array2D.zeroCreate 10000 10000 |> ignore
printfn "%d" (GC.GetTotalMemory(true))
Array2D.zeroCreate 10000 10000 |> ignore
printfn "%d" (GC.GetTotalMemory(true))

Console.ReadLine() |> ignore
```

The output will look like this:

```
54000
54000
54000
54000
54000
54000
54000
54000
54000
54000
400000000
800000000
1200000000
Out of memory exception
```

So, in the loop F# discards the result, but outside the loop F# keeps references to the "dead" data (I've looked at the IL, and apparently the class Program gets fields for this data). Why? And can I fix that?

This code is run outside Visual Studio, in Release mode.
The reason for this behavior is that the F# compiler behaves differently in the global scope than in local scope. A variable declared at global scope is turned into a static field. A module declaration is a static class with `let` declarations compiled as fields/properties/methods. The simplest way to fix the problem is to write your code in a function: ``` let main () = Array2D.zeroCreate 10000 10000 |> ignore printfn "%d" (GC.GetTotalMemory(true)) Array2D.zeroCreate 10000 10000 |> ignore printfn "%d" (GC.GetTotalMemory(true)) // (...) Console.ReadLine() |> ignore main () ``` ... but why does the compiler declare fields when you're not using the value and just `ignore` it? This is quite interesting - the `ignore` function is a very simple function that is inlined when you use it. The declaration is `let inline ignore _ = ()`. When inlining the function, the compiler declares some variables (to store the arguments of the function). So, another way to fix this is to omit `ignore` and write: ``` Array2D.zeroCreate 10000 10000 printfn "%d" (GC.GetTotalMemory(true)) Array2D.zeroCreate 10000 10000 printfn "%d" (GC.GetTotalMemory(true)) // (...) ``` You'll get some compiler warnings, because the result of expression is not `unit`, but it will work. However, using some function and writing code in local scope is probably more reliable.
How to alternate a table cell background color in each row and column
I'd like to format a table with CSS as shown in the image: in odd rows the first-column cell, and in even rows the second-column cell, should be formatted with a different background color.

[![example table](https://i.stack.imgur.com/8irly.png)](https://i.stack.imgur.com/8irly.png)

I know how to alternate a whole row or column, like

```
tr {
  border-top: 0px solid $input-border-col;
  &:first-child {
    border-top: none;
  }
  &:nth-child(even) {background: #CCC;}
}
```

but I haven't found a way to alternate within each row.
You can use `odd` and `even` CSS pseudo names. Like this it does not matter how many `<tr>` or `<td>` your `<table>` will have. ``` td { padding: 20px; border: 1px solid black; } table tr:nth-child(odd) td:nth-child(odd), table tr:nth-child(even) td:nth-child(even) { background: orange; } ``` ``` <table> <tr> <td>x</td> <td>x</td> <td>x</td> <td>x</td> </tr> <tr> <td>x</td> <td>x</td> <td>x</td> <td>x</td> </tr> <tr> <td>x</td> <td>x</td> <td>x</td> <td>x</td> </tr> <tr> <td>x</td> <td>x</td> <td>x</td> <td>x</td> </tr> </table> ```
How to find axes by YLabel string?
I am creating a figure with multiple subplots and saving it to a file, like this:

```
fig = figure;
ax1 = subplot(2, 1, 1);
ax2 = subplot(2, 1, 2);
ylabel(ax1, 'First');
ylabel(ax2, 'Second');
savefig('myfigure.fig')
```

Later, I want to copy one of the subplots to a new figure without re-running the code that creates the figure. My current approach is to load the saved figure, locate the axes I want to copy by its YLabel, and then copy it to a new figure:

```
newfig = figure;
oldfig = openfig('myfigure.fig');
ylabel_obj = findobj(oldfig, 'String', 'First'); % This is not giving me what I expect
old_axes_obj = ylabel_obj.Parent;
new_axes_obj = copyobj(old_axes_obj, newfig);
```

The problem is that `findobj` above is not finding the YLabel; it just returns a 0x0 empty GraphicsPlaceholder array. Why isn't `findobj` finding my YLabel? Is there a better way to find the axes I want?
The [handle visibility](https://www.mathworks.com/help/matlab/ref/matlab.graphics.primitive.text-properties.html#budt_bq-1_sep_shared-HandleVisibility) for the label text object is turned off, so it will not show up in the `Children` property of its parent axes, and therefore won't be found when using [`findobj`](https://www.mathworks.com/help/matlab/ref/findobj.html). You can instead use [`findall`](https://www.mathworks.com/help/matlab/ref/findall.html) to get around this limitation: ``` ylabel_obj = findall(oldfig, 'String', 'First'); ``` Alternatively, you can set the [root `ShowHiddenHandles` property](https://www.mathworks.com/help/matlab/ref/matlab.ui.root-properties.html#buc8_0n-ShowHiddenHandles) to `'on'` to list all object handles regardless of their `HandleVisibility` property setting, making `findobj` and `findall` equivalent.
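If you go the `ShowHiddenHandles` route instead, a minimal sketch reusing `oldfig` and `newfig` from the question (`groot` is the graphics root object in R2014b and later; use `0` on older releases):

```
set(groot, 'ShowHiddenHandles', 'on');   % expose hidden handles to findobj
ylabel_obj = findobj(oldfig, 'String', 'First');
old_axes_obj = ylabel_obj.Parent;
new_axes_obj = copyobj(old_axes_obj, newfig);
set(groot, 'ShowHiddenHandles', 'off');  % restore the default
```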
what's the difference between object \* x and object& x in c++

> **Possible Duplicate:**
> [Difference between pointer variable and reference variable in C++](https://stackoverflow.com/questions/57483/difference-between-pointer-variable-and-reference-variable-in-c)

Suppose I'm trying to pass a reference to object `x` to a C++ function. What's the difference between

```
pass(Object * x){
}
```

and

```
pass(Object& x){
}
```

And how would you access the actual object when the pointer/reference is declared using each method? For instance, if I have `Object * x`, how would I access the object referenced by `x`? Same question for `Object& x`.
The first is a pass by pointer. The second is a pass by reference. As for usage, a pointer must be "dereferenced" before it can be used. That is done with the `*` and `->` operators: ``` void do_something(Object * x) { Object & ref = *x; // * returns a reference ref.method(); x->method(); // same as (*x).method() } ``` References have no such restriction: ``` void do_something(Object & x) { x.method(); } ``` However, references can only point to a single object for their whole lifetime, while pointers can change target and (as John mentionned below) point to "nothing" (that is, `NULL`, `0` or, in C++0x, `nullptr`). There is no such thing as a `NULL` reference in C++. Since references are easier to use and less error-prone, prefer them unless you know what you're doing (pointers are a pretty tough subject).
Assert PHPUnit that an object has an integer attribute I'm using PHPUnit and I have to check a `json_decode` result. I have an object containing an integer attribute as you can see in the debugger view : [![Debugger view of my object](https://i.stack.imgur.com/e0Pqv.png)](https://i.stack.imgur.com/e0Pqv.png) When I do this : ``` $this->assertObjectHasAttribute('1507',$object); ``` I get an error : ``` PHPUnit_Framework_Assert::assertObjectHasAttribute() must be a valid attribute name ``` My `$object` is an instance of `stdClass`
A numeric property is abnormal, and [PHPUnit won't accept it as a valid attribute name](https://github.com/sebastianbergmann/phpunit/blob/master/src/Framework/Assert.php#L2647-L2654): ``` private static function isAttributeName(string $string) : bool { return preg_match('/[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/', $string) === 1; } ``` Therefore the best thing to do is *not* test if the object has an attribute, but rather check if an array has a key. ## `json_decode` returns an object *OR* an array [As described in the docs](http://php.net/manual/en/function.json-decode.php): > > `mixed json_decode ( string $json [, bool $assoc = false [, int $depth = 512 [, int $options = 0 ]]] )` > > > ... > > > assoc > > > - When TRUE, returned objects will be converted into associative arrays. > > > An appropriate test method is therefore: ``` function testSomething() { $jsonString = '...'; $array = json_decode($jsonString, true); $this->assertArrayHasKey('1507',$array); } ```
SimpleDateFormat adding 7 hours? I'm trying to do a simple subtraction of dates and getting odd results. For some reason when I format it with SimpleDateFormat, there are 7 extra hours difference. ``` package timedemo; import java.text.SimpleDateFormat; import java.util.Date; public class Timedemo { public static void main(String[] args) { Date start = new Date(); // time right now Date stop = new Date(); long startTime = start.getTime()-1000; // introduce a second of skew long stopTime = stop.getTime(); SimpleDateFormat sdf = new SimpleDateFormat("hh:mm:ss"); // First one shows up prior to 1970 epoch. Okay, except for 5 hour variance. // Probably a timezone thing, as I'm in EST (-5). System.out.println("New date is "+new Date(stopTime - startTime)); System.out.println("Raw Start is "+startTime); // fine System.out.println("Raw Stop is "+stopTime); // fine System.out.println("Raw Difference is "+(stopTime-startTime)); System.out.println("Formatted Start is "+sdf.format(startTime)); System.out.println("Formatted Stop is "+sdf.format(stopTime)); System.out.println("Formatted Difference is "+sdf.format(stopTime-startTime)); } } ``` And the results are: ``` New date is Wed Dec 31 19:00:01 EST 1969 Raw Start is 1418397344360 Raw Stop is 1418397345360 Raw Difference is 1000 Formatted Start is 10:15:44 Formatted Stop is 10:15:45 Formatted Difference is 07:00:01 ``` - I had thought it was a timezone thing, but I'm in EST (-5), not MST (-7). - I would suspect Daylight Savings, but it's 7 hours, not 1. - 12/24 hour difference? 12-7=5 which is my timezone offset... not sure what to make of it though. - Kind of out of ideas at this point. Why the seven hour shift on the last line?
The "12-7 = 5" is definitely related to the problem... or more accurately, it's "12-5=7", i.e. 5 hours before midnight is 7pm. You'll see that if you format it as a full date/time: ``` SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); ``` Heck, you can see that in your first line: "Wed Dec 31 19:00:01 EST 1969" - that 19:00 is 7pm, which you're formatting using `hh` as `07`. Fundamentally, the problem is that you're trying to treat a *difference* in time as if it's a *point* in time. I would very strongly urge you not to do that. If you *absolutely* want to do that (and the difference will always be non-negative but less than 24 hours), then you should set the time zone on `SimpleDateFormat` to UTC, and use `HH` instead of `hh`. But it would be better to use [Joda Time](http://joda.org/joda-time) or `java.time` from Java 8 which can represent a `Duration`, i.e. the difference between two points in time. `Date` is simply not an appropriate data type for that.
How to robustly describe conditional expressions with AND, OR in JSON?
Say I have an expression:

```
( A >= 10 && B == 20 ) || ( C < 30 ) || ( D != 50 )
```

I can suggest the following JSON to store/represent this expression:

```
{
  "filter": [
    [
      { "var": "A", "condition": "ge", "num": 10 },
      { "var": "B", "condition": "e",  "num": 20 }
    ],
    [
      { "var": "C", "condition": "lt", "num": 30 }
    ],
    [
      { "var": "D", "condition": "ne", "num": 50 }
    ]
  ]
}
```

in which "filter" maps to an array of sub-arrays. All elements in each sub-array are associated with AND, while all sub-arrays are associated with OR.

Is there anything I've overlooked in writing the JSON like this?
You're making a couple of assumptions here: 1. Comparisons will be always be between a variable and a number, and never between two variables or two numbers. 2. The variable will always be on the left hand side of the comparison, and the number on the right. Those assumptions may be correct for your particular use case, but a more future-proof approach would be to treat comparisons similarly to functions with arguments: ``` { "ge": ["A", 10] } ``` Also, while your idea of using an array of objects to represent AND and an array of arrays to represent OR is clever, it might not be immediately obvious to a human being tasked with writing code to parse it. Reusing the idea of an object where the key represents a function and its associated value the arguments is more expressive: ``` { "all": [<condition 1>, <condition 2>, ...] } ``` Putting those two ideas together, we get something like this: ``` { "any": [ { "all": [ { "ge": ["A", 10] }, { "eq": ["B", 20] } ]}, { "lt": ["C", 30] }, { "ne": ["D", 50] } ]} ```
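A nice side effect of this shape is how small an evaluator becomes. Here is a minimal sketch in Python; the names are illustrative, and it assumes the first argument of a comparison is a variable name and the second a literal, as in the examples above:

```
import json

OPS = {
    "ge": lambda a, b: a >= b,
    "lt": lambda a, b: a < b,
    "eq": lambda a, b: a == b,
    "ne": lambda a, b: a != b,
}

def evaluate(node, env):
    (key, args), = node.items()                 # every node has exactly one key
    if key == "all":
        return all(evaluate(n, env) for n in args)
    if key == "any":
        return any(evaluate(n, env) for n in args)
    left, right = args                          # variable name, literal number
    return OPS[key](env[left], right)

condition = json.loads("""
{ "any": [
    { "all": [ { "ge": ["A", 10] }, { "eq": ["B", 20] } ] },
    { "lt": ["C", 30] },
    { "ne": ["D", 50] } ] }
""")
print(evaluate(condition, {"A": 10, "B": 20, "C": 50, "D": 50}))  # True
```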
Incorrect datetime conversion from America/New\_York time to UTC and back to America/New\_York I am trying to convert time from America/New York to UTC and then converting it back to the New York time. But I get different results while doing with this with `pytz`. I am doing this: ``` new_date = parser.parse("May 4, 2021") new_date = new_date.replace(tzinfo=pytz.timezone("America/New_York")) date = new_date.astimezone(pytz.timezone("UTC")) ``` Output: ``` datetime.datetime(2021, 5, 4, 4, 56, tzinfo=<UTC>) ``` When I try to reconvert it back to the New York time I get this: ``` date.astimezone(pytz.timezone("America/New_York")) ``` I get: ``` datetime.datetime(2021, 5, 4, 0, 56, tzinfo=<DstTzInfo 'America/New_York' EDT-1 day, 20:00:00 DST>) ``` My Question is why there is 56 minute difference and what can be done to prevent this?
The 56 min difference originates from the fact that the first entry in the database that `pytz` accesses, refers to LMT (local mean time): ``` import pytz t = pytz.timezone("America/New_York") print(repr(t)) # <DstTzInfo 'America/New_York' LMT-1 day, 19:04:00 STD> ``` You can read more in P. Ganssle's [Fastest Footgun](https://blog.ganssle.io/articles/2018/03/pytz-fastest-footgun.html) blog post or in [Weird timezone issue with pytz](https://stackoverflow.com/q/11473721/10197418) here on SO. **tl;dr - Never `replace` a tzinfo with a timezone object from `pytz`!** Use `localize` (or `astimezone`) instead, to adjust the timezone to the year of the datetime object. ***...But:*** since you use `dateutil` already, why don't you use it here as well: ``` import dateutil new_date = dateutil.parser.parse("May 4, 2021") # you can safely replace with dateutil's tz objects: new_date = new_date.replace(tzinfo=dateutil.tz.gettz("America/New_York")) date = new_date.astimezone(dateutil.tz.UTC) # date # datetime.datetime(2021, 5, 4, 4, 0, tzinfo=tzutc()) date = date.astimezone(dateutil.tz.gettz("America/New_York")) # date # datetime.datetime(2021, 5, 4, 0, 0, tzinfo=tzfile('US/Eastern')) print(date) # 2021-05-04 00:00:00-04:00 ```
How to replace a string in all folder and file names How can I recursively replace a string in all folders and files' name with a different string? I am running Red Hat 6 and I can find them with: ``` find . -name \*string\* ``` I've managed to do it for strings within files: ``` find . -type f -exec sed -i 's/string1/string2/g' {} + ``` but how could I replace in a similar way all file names?
Using `find` and `rename`: ``` find . -type f -exec rename 's/string1/string2/g' {} + ``` The `find . -type f` part of the command means to search for all files (`-type f`) in the current directory (`.`). - The `-exec` option tells `find` to execute a command on each file it finds. - The `rename` command is used to rename files, and the syntax used here is for the Perl version of `rename`. - The `'s/string1/string2/g'` is a regular expression that specifies what to search for and what to replace it with. In this case, `string1` is the string to be replaced, and `string2` is the replacement string. The `/g` at the end means to replace all occurrences of `string1` in the filename. - The `{}` symbol is a placeholder for the filename that `find` has found. - The `+` at the end of the command tells `find` to pass multiple filenames at once to the rename command, which is more efficient than invoking rename for each individual file. So, overall, this command searches for all files in the current directory and executes the `rename` command to replace all occurrences of `string1` with `string2` in the filename for each file found.
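Since a bulk rename is destructive, it's worth previewing it first. The Perl flavor of `rename` used in this answer supports `-n` (no-act), which only prints what would be renamed without touching anything:

```
find . -type f -exec rename -n 's/string1/string2/g' {} +
```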
Having jeditable save when focus is lost? (How) can I tweak [jeditable](http://www.appelsiini.net/projects/jeditable) to save the text when focus is lost from the text area? If you don't supply submit/cancel buttons, then when pressing 'Enter' the content is saved ... but I haven't found how to save the content on focus lost.
You can do this by setting the `onblur` option, like this:

```
$('.editable').editable('myPage.php', {
  onblur: 'submit'
});
```

[Inside jEditable](http://www.appelsiini.net/download/jquery.jeditable.js) this is what happens:

```
} else if ('submit' == settings.onblur) {
  input.blur(function(e) {
    t = setTimeout(function() {
      form.submit();
    }, 200);
  });
}
```

There are details for the `onblur` option [at the bottom of the jEditable post here](http://www.appelsiini.net/projects/jeditable); just search for "onblur" in the page.
PowerShell says "execution of scripts is disabled on this system." I am trying to run a `cmd` file that calls a PowerShell script from `cmd.exe`, but I am getting this error: > > `Management_Install.ps1` cannot be loaded because the execution of scripts is disabled on this system. > > > I ran this command: ``` Set-ExecutionPolicy -ExecutionPolicy Unrestricted ``` When I run `Get-ExecutionPolicy` from PowerShell, it returns `Unrestricted`. ``` Get-ExecutionPolicy ``` Output: ``` Unrestricted ``` --- > > cd "C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\Install\Scripts" > powershell .\Management\_Install.ps1 1 > > > WARNING: Running x86 PowerShell... > > > File `C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\Install\Scripts\Management_Install.ps1` cannot be loaded because the execution of scripts is disabled on this system. Please see "`get-help about_signing`" for more details. > > > At line:1 char:25 > > > - `.\Management_Install.ps1` <<<< 1 > > > - CategoryInfo : NotSpecified: (:) [], PSSecurityException > - FullyQualifiedErrorId : RuntimeException > > > C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\Install\Scripts> PAUSE > > > Press any key to continue . . . > > > --- The system is [Windows Server 2008](https://en.wikipedia.org/wiki/Windows_Server_2008) R2. What am I doing wrong?
If you're using [Windows Server 2008](https://en.wikipedia.org/wiki/Windows_Server_2008) R2 then there is an *x64* and *x86* version of PowerShell both of which have to have their execution policies set. Did you set the execution policy on both hosts? As an *Administrator*, you can set the execution policy by typing this into your PowerShell window: ``` Set-ExecutionPolicy RemoteSigned ``` For more information, see *[Using the Set-ExecutionPolicy Cmdlet](https://learn.microsoft.com/powershell/module/microsoft.powershell.security/set-executionpolicy)*. When you are done, you can set the policy back to its default value with: ``` Set-ExecutionPolicy Restricted ``` You may see an error: ``` Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell' is denied. To change the execution policy for the default (LocalMachine) scope, start Windows PowerShell with the "Run as administrator" option. To change the execution policy for the current user, run "Set-ExecutionPolicy -Scope CurrentUser". ``` So you may need to run the command like this (as seen in comments): ``` Set-ExecutionPolicy RemoteSigned -Scope CurrentUser ```
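To see which policy is in effect at each scope (handy when the x86 and x64 hosts disagree), recent PowerShell versions support:

```
Get-ExecutionPolicy -List
```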
mysql regex utf-8 characters
I am trying to get data from a `MySQL` database via `REGEX`, with or without special UTF-8 characters.

Let me explain with an example: if the user enters a word like `sirena`, it should return rows containing words like `sirena`, `siréna`, `šíreňá`, and so on. It should also work the other way around: when he enters `siréná` it should return the same results.

I am trying to search via `REGEX`; my query looks like this:

```
SELECT * FROM `content` WHERE `text` REGEXP '[sšŠ][iíÍ][rŕŔřŘ][eéÉěĚ][nňŇ][AaáÁäÄ0]'
```

It works only when the word `sirena` is in the database, but not when the word is `siréňa`. Is this because of something with `UTF-8` and MySQL? (The collation of the MySQL column is `utf8_general_ci`.)

Thank you!
MySQL's regular expression library does not support utf-8. See [Bug #30241 Regular expression problems](http://bugs.mysql.com/bug.php?id=30241), which has been open since 2007. They will have to change the regular expression library they use before that can be fixed, and I haven't found any announcement of when or if they will do this. The only workaround I've seen is to search for specific HEX strings: ``` mysql> SELECT * FROM `content` WHERE HEX(`text`) REGEXP 'C3A9C588'; +----------+ | text | +----------+ | siréňa | +----------+ ``` --- Re your comment: No, I don't know of any solution with MySQL. You might have to switch to PostgreSQL, because that RDBMS supports `\u` codes for UTF characters in their [regular expression syntax](http://www.postgresql.org/docs/current/static/functions-matching.html).
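If you do use the HEX workaround above, you don't need to work out the byte sequence by hand; on a utf8 connection MySQL can report it for you:

```
SELECT HEX('éň'); -- C3A9C588 on a utf8 connection
```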
Need to Calculate SHA1 hash of file stored in Azure storage in c# I am uploading large files (1-10 GB) to azure storage and need to calculate SHA1 hash value of files when uploaded. Am I able to calculate the SHA1 on the server, without having to download the file?
Azure Blob Storage supports MD5 hash calculation automatically when putting a blob; please see the content below from [`Get Blob Properties`](https://msdn.microsoft.com/en-us/library/azure/dd179394.aspx).

> **Content-MD5**
>
> If the Content-MD5 header has been set for the blob, this response header is returned so that the client can check for message content integrity.
> In version 2012-02-12 and newer, Put Blob sets a block blob’s MD5 value even when the Put Blob request doesn’t include an MD5 header.

So it's not necessary to calculate a SHA1 hash for a blob unless you have a special need for it.

For reference, here is a sample which calculates a SHA1 hash, without downloading to a file, for a blob stored in storage.

## Synchronous

```
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("<StorageAccountConnectionString>");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("<container-name>");
CloudBlob blob = container.GetBlobReference("<blob-name>");
using(Stream blobStream = blob.OpenRead())
{
    using (SHA1 sha1 = SHA1.Create())
    {
        byte[] checksum = sha1.ComputeHash(blobStream);
    }
}
```

## Async:

```
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("<StorageAccountConnectionString>");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("<container-name>");
CloudBlob blob = container.GetBlobReference("<blob-name>");
using(Stream blobStream = await blob.OpenReadAsync().ConfigureAwait(false))
{
    using (SHA1 sha1 = SHA1.Create())
    {
        byte[] checksum = await sha1.ComputeHashAsync(blobStream);
    }
}

// ComputeHashAsync extension method from https://www.tabsoverspaces.com/233439-computehashasync-for-sha1
public static async Task<Byte[]> ComputeHashAsync(this HashAlgorithm algo, Stream inputStream, Int32 bufferSize = 4096)
{
    algo.Initialize();
    var buffer = new byte[bufferSize];
    var streamLength = inputStream.Length;
    while (true)
    {
        var read = await inputStream.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false);
        if (inputStream.Position == streamLength)
        {
            algo.TransformFinalBlock(buffer, 0, read);
            break;
        }
        algo.TransformBlock(buffer, 0, read, default(byte[]), default(int));
    }
    return algo.Hash;
}
```
Reading specific columns from a text file in python
I have a text file which contains a table comprised of numbers, e.g.:

> 5 10 6
>
> 6 20 1
>
> 7 30 4
>
> 8 40 3
>
> 9 23 1
>
> 4 13 6

If, for example, I want only the numbers contained in the second column, how do I extract that column into a list?
``` f=open(file,"r") lines=f.readlines() result=[] for x in lines: result.append(x.split(' ')[1]) f.close() ``` You can do the same using a list comprehension ``` print([x.split(' ')[1] for x in open(file).readlines()]) ``` Docs on `split()` > > `string.split(s[, sep[, maxsplit]])` > > > Return a list of the words of the string `s`. If the optional second argument sep is absent or None, the words are separated by arbitrary strings of whitespace characters (space, tab, newline, return, formfeed). If the second argument sep is present and not None, it specifies a string to be used as the word separator. The returned list will then have one more item than the number of non-overlapping occurrences of the separator in the string. > > > So, you can omit the space I used and do just `x.split()` but this will also remove tabs and newlines, be aware of that.
Upgrading Zimbra and your OS at the same time
I've recently had some problems upgrading Zimbra while also upgrading my OS at the same time. To be more specific: I wanted to upgrade Zimbra Open Source Edition from 8.0.5 to 8.6 and at the same time upgrade Ubuntu Server from 12.04 to 14.04.

The problem I encountered was that I couldn't get it to work: Zimbra threw a lot of Perl-related exceptions, and since the Zimbra forum isn't very communicative I had to roll back to my backup to be up and running again after the weekend.

So what is the correct process?
I felt the need to respond after spending two days getting this all working, performing numerous restores due to upgrade issues. In my situation, it was an update of Ubuntu 12 (Zimbra 8.6) -> Ubuntu 16 (Zimbra 8.8).

Sadly, the instructions from Zimbra's website (`/opt/zimbra/conf/localconfig.xml`) and the post from the OP resulted in the following LDAP error after running `./install -s` on a production version of Zimbra:

**ldap\_url and ldap\_master\_url cannot be the same on an ldap replica**

As a result, the method that worked for me is to follow Labsy's suggestion during the `do-release-upgrade`. To add some in-depth information for other people who may be confused: you will receive the following message during the Ubuntu OS upgrade process

```
Updating repository information

Third party sources disabled 

Some third party entries in your sources.list were disabled. You can 
re-enable them after the upgrade with the 'software-properties' tool 
or your package manager. 
```

At this time, open an SSH session to the server and edit the file `/etc/apt/sources.list.d/zimbra.list`, removing the `#` from the beginning of the lines so that the Zimbra packages will be updated as part of the upgrade process. Follow the OS upgrade normally and agree to all default messages.

Once the OS upgrade is complete, you can reboot the server. You will know that all has gone well if the Zimbra services start up during the boot process. This way you don't need to run `./install -s` on the new OS at all.

For others who have come here due to the message **ldap\_url and ldap\_master\_url cannot be the same on an ldap replica**: the reason you receive the error is that `./install -s` blows away the LDAP configuration stored in `/opt/zimbra/conf/localconfig.xml`. If you do not have a backup of this file, you will have to restore from backup/snapshot and start the upgrade process from the beginning.
Is it possible to make indicator-appmenu ignore a specific application? The new indicator-appmenu in Maverick breaks the LyX menu: the application menu is not shown either in the application window nor in the applet. (See [Bug report](https://bugs.launchpad.net/ubuntu/+source/indicator-appmenu/+bug/619811).) As a workaround while the bug is fixed, is there a way to make an exception for the applet, so that LyX would be ignored and the applet could still be used for everything else? Something akin to Maximus exceptions.
## Run an application

- To start an application (e.g., `gcalctool`) with the menu within the application rather than in the panel, run the following in a terminal:

```
UBUNTU_MENUPROXY= gcalctool
```

To start the application with the menu enabled in the application **and** the panel, run:

```
APPMENU_DISPLAY_BOTH=1 gcalctool
```
- Instead of using the terminal, you can use the `Alt` + `F2` shortcut to start a run dialog, in which you would enter:

```
env UBUNTU_MENUPROXY= gcalctool
```

or

```
env APPMENU_DISPLAY_BOTH=1 gcalctool
```

## Edit application launchers in Ubuntu 10.10

To make it easier to always launch your application with the same appmenu settings, you can edit application launchers in the menu, the gnome-panel, and on the desktop:

- Gnome-panel and desktop: simply right-click the launcher, select "Properties" and prepend `env UBUNTU_MENUPROXY=` or `env APPMENU_DISPLAY_BOTH=1` to the value in the "Command" field:

[![](https://i.stack.imgur.com/hvvUz.png)](https://i.stack.imgur.com/hvvUz.png)
(source: [xrmb2.net](https://img.xrmb2.net/images/302698.png))
- Menu: right-click the menu and select "Edit Menus". In the new window, find the launcher you want to edit and click the "Properties" button on the right. Again, simply prepend the variables like above (don't forget the 'env'), click on "Close" two times and you should be done.

## Edit launchers in Compiz-based Unity in Ubuntu 11.04

- Dirty method: Change the launcher's .desktop file in the `/usr/share/applications` directory:
  - For example, run

```
gksudo gedit /usr/share/applications/gcalctool.desktop
```
  - Now edit the `Exec=`-line to contain either of the two variables from above, e.g.:

```
Exec=env UBUNTU_MENUPROXY= gcalctool
```
  - Save the file, and launching gcalctool from Unity's launcher bar should run it with the menu within the application. Disadvantages of this method: it will change the launcher for all users and will probably be reverted by system updates.
- Better method:
  - If already added, unpin the launcher from the launcher bar.
  - Copy the related .desktop file to your home directory:

```
cp /usr/share/applications/gcalctool.desktop ~/.local/share/applications
```
  - Like in the method above, edit the `Exec=`-line to contain either of the two variables:

```
Exec=env UBUNTU_MENUPROXY= gcalctool
```
  - Make the file executable:

```
chmod +x ~/.local/share/applications/gcalctool.desktop
```
  - Start Nautilus in that folder and double click the .desktop file (which should just read "Calculator" in the example):

```
nautilus ~/.local/share/applications
```
  - Now you should see the launcher icon in the launcher bar - pin it via the quicklist and you are done.

---

**Note:** To make the above work with KDE applications, replace `UBUNTU_MENUPROXY=` with `QT_X11_NO_NATIVE_MENUBAR=1`.
Converting CommonJS to ES modules I'm doing coding as a hobby and currently working on a new NodeJS project. With my limited knowledge I have the feeling that working with ES Modules is the future (correct me if I'm wrong). Therefore I would like to re-write some CommonJS scripts that I have into ES Modules. I'm stuck on the following line trying to convert it: `require('./app/routes/routes')(app)` (I don't understand what the "(app)" part does at the end). **routes.js:** ``` module.exports = app => { const recipe = require('../controllers/recipe-controller.js'); var router = require('express').Router(); // Create a new Recipe router.post('/recipe', recipe.create); app.use('/api/recipes', router); }; ``` **server.js:** ``` import express from 'express' import db from './app/models/index' const app = express(); app.get('/', (req, res) => { res.json({ message: 'Welcome to bezkoder application.' }); }); require('./app/routes/routes')(app); const PORT = process.env.PORT || 8080; app.listen(PORT, () => { console.log(`Server is running on port ${PORT}.`); }); ```
This line:

```
require('./app/routes/routes')(app)
```

is importing a function and then calling it. It is logically the same as:

```
const init = require('./app/routes/routes');
init(app);
```

So, to translate this to ESM, you change your routes module to export a named function. You can then import that named function and call it, passing `app` to it.

routes.js

```
import express from 'express';
import recipe from '../controllers/recipe-controller.js';

export function init(app) {
    // Create a new Recipe
    const router = express.Router();
    router.post('/recipe', recipe.create);
    app.use('/api/recipes', router);
}
```

server.js

```
import express from 'express'
import db from './app/models/index'
import { init } from './app/routes/routes.js';

const app = express();

app.get('/', (req, res) => {
    res.json({ message: 'Welcome to bezkoder application.' });
});

init(app);

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
    console.log(`Server is running on port ${PORT}.`);
});
```

Note that since `init` is a named export, the import uses braces: `import { init } from ...`.
What is the benefit of not allocating a terminal in ssh? Every once in a while I will do something like ``` ssh user@host sudo thing ``` and I am reminded that ssh doesn't allocate a pseudo-tty by default. Why doesn't it? What benefits would I be losing if I aliased `ssh` to `ssh -t`?
The primary difference is the concept of *interactivity*. It's similar to running commands locally inside a script versus typing them out yourself. It's different in that a remote command must choose a default, and non-interactive is the safest (and usually most honest) choice.

## STDIN

- If a PTY is allocated, applications can detect this and know that it's safe to prompt the user for additional input without breaking things. There are many programs that will skip the step of prompting the user for input if there is no terminal present, and that's a good thing. It would cause scripts to hang unnecessarily otherwise.
- Your input will be sent to the remote server for the duration of the command. This includes control sequences. While a `Ctrl-c` break would normally cause a loop on the ssh command to break immediately, your control sequences will instead be sent to the remote server. This results in a need to "hammer" the keystroke to ensure that it arrives when control *leaves* the ssh command, but before the next ssh command begins.

I would caution against using `ssh -t` in unattended scripts, such as crons. A non-interactive shell asking a remote command to behave interactively for input is asking for all kinds of trouble.

You can also test for the presence of a terminal in your own shell scripts. To test STDIN with newer versions of bash:

```
# fd 0 is STDIN
[ -t 0 ]; echo $?
```

## STDOUT

- When aliasing `ssh` to `ssh -t`, you can expect to get an extra carriage return in your line ends. It may not be visible to you, but it's there; it will show up as `^M` when piped to `cat -e`. You must then expend the additional effort of ensuring that this control code does not get assigned to your variables, particularly if you're going to insert that output into a database.
- There is also the risk that programs will assume they can render output that is not friendly for file redirection. Normally if you were to redirect STDOUT to a file, the program would recognize that your STDOUT is not a terminal and omit any color codes. If the STDOUT redirection is from the output of the *ssh client* and there is a PTY associated with the remote end of the connection, the remote programs cannot make such a distinction and you will end up with terminal garbage in your output file. Redirecting output to a file on the *remote end* of the connection should still work as expected.

Here is the same bash test as earlier, but for STDOUT:

```
# fd 1 is STDOUT
[ -t 1 ]; echo $?
```

---

While it's possible to work around these issues, you're inevitably going to forget to design scripts around them. All of us do at some point. Your team members may also not realize/remember that this alias is in place, which will in turn create problems for you when *they* write scripts that use your alias.

Aliasing `ssh` to `ssh -t` is very much a case where you'll be violating the design principle of [least surprise](http://en.wikipedia.org/wiki/Principle_of_least_astonishment); people will be encountering problems they do not expect and may not understand what is causing them.
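A quick way to see the carriage-return difference for yourself — a sketch, substituting a host you can actually reach:

```
$ ssh user@host 'echo hello' | cat -e
hello$
$ ssh -t user@host 'echo hello' | cat -e
hello^M$
```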
What is "swarming"? I've heard **swarming** mentioned in the context of Agile or Extreme Programming. It seems to be a complement to pairing. What exactly is it? When should it be applied? How do you do it well?
**The idea is that everyone on your team works on the same story at the same time.** Instead of everyone focusing on different tasks, everyone focuses on one task at a time until it's completed. Then they move on to the next thing, where they all work together on it.

This helps teams that struggle to complete stories before the end of the sprint. Often teams finish 80% of all the stories, but none are complete. This is less useful than completely finishing 80% of the stories, since unfinished stories have (effectively) no value to an end user. It's easier to get stories completed when everyone on the team is focusing on one story at a time. This is the motivation behind swarming.

There are some difficulties here. For instance, QA can't always test things before they are built (or even designed). In this case, you should establish a design together early on, and then QA can write (initially failing) tests against the design and not the actual implementation.
matplotlib: any way to get existing colorbars?

In matplotlib's object-oriented style you can get the current axes, lines and images in an existing figure:

```
fig.axes
fig.axes[0].lines
fig.axes[0].images
```

But I haven't found a way to get the existing colorbars; instead I have to assign the colorbar a name when first creating it:

```
cbar = fig.colorbar(image)
```

Is there any way to get the colorbar objects in a given figure if I didn't assign them names?
The problem is that the colorbar is added as "just another" axis, so it will be listed with the 'normal' axes.

```
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(6,6)

fig = plt.figure(1)
fig.clf()
ax = fig.add_subplot(1,1,1)

cax = ax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99)
print("Before adding the colorbar:")
print(fig.axes)

fig.colorbar(cax)
print("After adding the colorbar:")
print(fig.axes)
```

For me, this gives the result:

> 
> 
> ```
> Before adding the colorbar:
> [<matplotlib.axes._subplots.AxesSubplot object at 0x00000000080D1D68>]
> After adding the colorbar:
> [<matplotlib.axes._subplots.AxesSubplot object at 0x00000000080D1D68>,
> <matplotlib.axes._subplots.AxesSubplot object at 0x0000000008268390>]
> 
> ```
> 

That is, there are two axes in your figure; the second one is the new colorbar.

Edit: Code is based on the answer given here: <https://stackoverflow.com/a/2644255/2073632>
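If you'd rather identify a colorbar directly instead of guessing by position, newer matplotlib versions attach the Colorbar object back onto the mappable it was created for — a sketch under that assumption:

```
# every mappable with a colorbar attached carries it in its .colorbar attribute
cbars = [im.colorbar
         for ax in fig.axes
         for im in ax.images
         if im.colorbar is not None]
print(cbars)
```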
How to filter json data by date range in javascript

I want to filter the JSON data below by start date and end date; it should return the records that fall between the start date and end date. I tried to achieve this with the code below, but I'm doing something wrong in the filter. I'm new to front-end technologies like JavaScript and jQuery, so I'd appreciate it if someone could point out what I'm doing wrong here:

```
<html>
<head>
<title>Test</title>
</head>
<body>
<script type="text/javascript">
var product_data = [
    {
        "productId": "12",
        "productName": "ProductA",
        "productPrice": "1562",
        "ProductDateCreated": "2015-07-24T12:58:17.430Z",
        "TotalProduct": 294
    },
    {
        "productId": "13",
        "productName": "ProductB",
        "productPrice": "8545",
        "TotalProduct": 294,
        "ProductHits": {
            "2015-08-01T00:00:00Z"
        }
    },
    {
        "productId": "14",
        "productName": "ProductC",
        "productPrice": "8654",
        "TotalProduct": 78,
        "ProductHits": {
            "2015-08-10T00:00:00Z"
        }
    },
    {
        "productId": "15",
        "productName": "ProductD",
        "productPrice": "87456",
        "TotalProduct": 878,
        "ProductHits": {
            "2015-05-12T00:00:00Z"
        }
    }
];

var startDate = "2015-08-04";
var endDate = "2015-08-12";

var resultProductData = product_data.filter( function (a)
{
    return (a.ProductHits) > startDate && (a.ProductHits) < endDate;
});
console.log(resultProductData);
</script>
</body>
</html>
```
```
var startDate = new Date("2015-08-04");
var endDate = new Date("2015-08-12");

var resultProductData = product_data.filter(function (a) {
    var hitDates = a.ProductHits || {};
    // extract all date strings
    hitDates = Object.keys(hitDates);
    // convert strings to Date objects
    hitDates = hitDates.map(function(date) { return new Date(date); });
    // filter these dates by startDate and endDate
    var hitDateMatches = hitDates.filter(function(date) { return date >= startDate && date <= endDate });
    // keep the product if at least one hit date matched; otherwise filter it away
    return hitDateMatches.length > 0;
});
console.log(resultProductData);
```

fiddle: <http://jsfiddle.net/4nz1ahuw/>

---

**UPDATE** as Ates Goral suggests in the comments, the solution above can be optimized by using Array.prototype.some:

```
var startDate = new Date("2015-08-04");
var endDate = new Date("2015-08-12");

var resultProductData = product_data.filter(function (a) {
    var hitDates = a.ProductHits || {};
    // extract all date strings
    hitDates = Object.keys(hitDates);
    // improvement: use .some(). This is an improvement because .map()
    // and .filter() walk through all elements.
    // .some() stops as soon as one item's callback returns true,
    // and returns true for the whole expression
    var hitDateMatchExists = hitDates.some(function(dateStr) {
        var date = new Date(dateStr);
        return date >= startDate && date <= endDate
    });
    return hitDateMatchExists;
});
console.log(resultProductData);
```

thank you for your good advice :)
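Note that this assumes `ProductHits` is an object keyed by date strings — the shape shown in the question (`{ "2015-08-01T00:00:00Z" }`) is not valid JavaScript. A corrected entry would look like this (the hit-count value is made up for illustration):

```
{
    "productId": "13",
    "productName": "ProductB",
    "productPrice": "8545",
    "TotalProduct": 294,
    "ProductHits": { "2015-08-01T00:00:00Z": 294 }
}
```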
Managing multiple sound cards in Kubuntu - more reliable way than KDE phonon GUI? I have two sound cards attached to my PC - an internal Soundblaster and an external DAC. Unfortunately, managing what to output where is a real pain. Just to give an example: When I enter the Phonon interface and set the DAC as the preferred device for ALL categories and then start Dragon Player, a message pops up saying "Reverting to Soundblaster because it has higher priority" (even though it obviously does not). Also, it's really inflexible. The KDE *mixer* even has an option to adjust volume for individual programs using sound devices. I'd like to have similar functionality to switch, e.g. Amarok's output to card A and chromium's to card B. Are there any tools which make this possible in Kubuntu? If all else fails - is the Gnome way of handling this problem more flexible and configurable?
You can do this with [pavucontrol ![Install pavucontrol](https://hostmar.co/software-small)](https://apps.ubuntu.com/cat/applications/pavucontrol). It is a GTK application, but don't worry, it works just fine in KDE. In combination with [paman ![Install paman](https://hostmar.co/software-small)](https://apps.ubuntu.com/cat/applications/paman), it can be a very versatile controller for Pulseaudio. ![enter image description here](https://i.stack.imgur.com/1Kmmi.png) I use it for choosing where to record/play through, etc. Unfortunately, at this time I only have a single card to test on (and no HDMI devices), so a setup similar to yours, I can't test. --- You can also try [veromix ![Install veromix](https://hostmar.co/software-small)](https://apps.ubuntu.com/cat/applications/veromix), which can be used to move streams between devices, add effects, and more.
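If you prefer the command line, PulseAudio's own `pactl` can do the same per-stream moving that pavucontrol does — a sketch (the sink-input index and sink name come from the first two commands and will differ on your machine):

```
pactl list short sink-inputs   # running playback streams and their indices
pactl list short sinks         # available output devices (your two cards)
pactl move-sink-input 12 alsa_output.usb-DAC.analog-stereo
```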
Explain to me how strings are immutable

I have read a lot that String objects are immutable and only string buffers are mutable. But when I tried this program, I got confused. So what's going on in this program?

```
class Stringss
{
    public static void main(String[] args)
    {
        String s="hello";
        String ss=new String("xyz");
        System.out.println(ss);
        System.out.println(s);
        s="do";
        ss=new String("hello");
        System.out.println(s);
        System.out.println(ss);
    }
}
```

Output is

```
xyz
hello
do
hello
```
In your code, `s` is not a String object. It's a *reference* to a String object. Your code makes it reference several different String objects. But the String object itself doesn't change. A String would not be immutable if you could do, for example ``` s.setCharacterAt(3, 'Z'); ``` or ``` s.setValue("foo") ``` But doing ``` s = "a string"; s = "another string"; ``` Doesn't change what the `"a string"` object contains. It just makes s point to another String. To make an analogy, a VHS is mutable. You can replace what is on the band. A DVD is immutable: you can't change what's being written on the disk. But that doesn't prevent the DVD player to play several different DVDs. Putting another DVD inside the DVD player doesn't change what the DVDs contain.
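To see the contrast in code, here's a minimal sketch: the `String` methods hand back new objects, while a `StringBuilder` (the non-thread-safe sibling of `StringBuffer`) really does change in place:

```
public class Mutability {
    public static void main(String[] args) {
        String s = "hello";
        String t = s.toUpperCase(); // returns a NEW String
        System.out.println(s);      // hello  -- the original is untouched
        System.out.println(t);      // HELLO

        StringBuilder sb = new StringBuilder("hello");
        sb.setCharAt(0, 'H');       // mutates the same object in place
        System.out.println(sb);     // Hello
    }
}
```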
How to enumerate enum members using SWIG Can I expose a C++ `enum` to SWIG as a real entity rather than a set of constants so I can enumerate over them in python code?
I faced the same issue. I hope that SWIG soon supports C++11's `enum class`. Here's a hack that convinces SWIG to put enums in a structure: ``` #ifdef SWIG %rename(MyEnum) MyEnumNS; #endif struct MyEnumNS { enum Value { Value1, Value2, Value3 }; }; typedef MyEnumNS::Value MyEnum; ``` In `.cpp` code you now must use `MyEnum::Value1` and in Python code it is `MyEnum.Value1`. Although convoluted, the `typedef` prevents having to change existing code that uses the enum everywhere and the SWIG %rename makes the enum have the same name in the SWIG wrapper. In Python you can enumerate the values with a little code: ``` def values(enum): return [(k,v) for k,v in vars(enum).items() if isinstance(v,int)] ``` It's not pretty, and I'd love to see a better solution.
How do I use the aws cli to set permissions on files in an S3 bucket? I am new to the aws cli and I've spent a fair amount of time in the [documentation](http://docs.aws.amazon.com/cli/latest/reference/s3/index.html#cli-aws-s3) but I can't figure out how to set permissions on files after I've uploaded them. So if I uploaded a file with: `aws s3 cp assets/js/d3-4.3.0.js s3://example.example.com/assets/js/` and didn't set access permissions, I need a way to set them. Is there an equivalent to `chmod 644` in the aws cli? And for that matter is there a way to view access permission? I know I could use the `--acl public-read` flag with `aws s3 cp` but if I didn't, can I set access without repeating the full copy command?
The awscli supports two groups of S3 actions: [s3](http://docs.aws.amazon.com/cli/latest/reference/s3/index.html) and [s3api](http://docs.aws.amazon.com/cli/latest/reference/s3api/). You can use [aws s3api put-object-acl](http://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html) to set the ACL permissions on an existing object. The [logic behind there being two sets of actions](https://aws.amazon.com/blogs/developer/leveraging-the-s3-and-s3api-commands/) is as follows: - `s3`: high-level abstractions with file system-like features such as `ls`, `cp`, `sync` - `s3api`: one-to-one with the low-level S3 APIs such as `put-object`, `head-bucket` In your case, the command to execute is: ``` aws s3api put-object-acl --bucket example.example.com --key assets/js/d3-4.3.0.js --acl public-read ```
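To answer the other half of the question — viewing the current permissions — the matching read call is `get-object-acl`:

```
aws s3api get-object-acl --bucket example.example.com --key assets/js/d3-4.3.0.js
```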
Why does AnimationController needs a vsync? When using [AnimationController](https://api.flutter.dev/flutter/animation/AnimationController-class.html), what is the purpose of that `vsync` parameter? ``` class Example extends StatefulWidget { @override _ExampleState createState() => _ExampleState(); } class _ExampleState extends State<Example> with SingleTickerProviderStateMixin { AnimationController controller; @override void initState() { super.initState(); controller = AnimationController( duration: const Duration(seconds: 2), vsync: this, // Why do we need this? ); } // ... } ```
[AnimationController](https://api.flutter.dev/flutter/animation/AnimationController-class.html)'s `vsync` parameter has one purpose: controlling the progress of the animation based on external factors.

There are typically three main usages:

- devtools like "slow animations", which reduce the speed of [AnimationController](https://api.flutter.dev/flutter/animation/AnimationController-class.html)s by 50%.
- widget testing. Using `vsync` allows tests to skip frames to target a very specific state of the animation. This is both precise and doesn't involve waiting in real time.
- it allows animations to be "muted" when the widget associated with the [SingleTickerProviderStateMixin](https://api.flutter.dev/flutter/widgets/SingleTickerProviderStateMixin-mixin.html) is no longer visible.

The last scenario is the main reason why our widgets need that [SingleTickerProviderStateMixin](https://api.flutter.dev/flutter/widgets/SingleTickerProviderStateMixin-mixin.html). Knowing which widget is associated with the animation matters; we can't just use a [TickerProvider](https://api.flutter.dev/flutter/scheduler/TickerProvider-class.html) obtained from the root widget of our application.

Through that `vsync`, we avoid scenarios where our widget is no longer visible (for example, if another route is pushed on top of it) but the animation is still playing, which would make our screen keep refreshing.

A way of seeing that behavior is by using the "performance overlay" devtool combined with widgets like [CircularProgressIndicator](https://api.flutter.dev/flutter/material/CircularProgressIndicator-class.html), which internally uses [AnimationController](https://api.flutter.dev/flutter/animation/AnimationController-class.html).

If we use `Opacity` to hide our indicator (which doesn't pause animations):

```
Opacity(
  opacity: 0,
  child: CircularProgressIndicator(),
)
```

Then the performance overlay shows that our screen keeps refreshing:

![opacity performance overlay](https://i.stack.imgur.com/CT0l2.gif)

Now, if we add a [TickerMode](https://api.flutter.dev/flutter/widgets/TickerMode-class.html) (implicitly done by widgets like [Visibility](https://api.flutter.dev/flutter/widgets/Visibility-class.html) and [Navigator](https://api.flutter.dev/flutter/widgets/Navigator-class.html)), we can pause the animation, which stops the unnecessary refresh:

```
Opacity(
  opacity: 0,
  child: TickerMode(
    enabled: false,
    child: CircularProgressIndicator(),
  ),
),
```

![muted](https://i.stack.imgur.com/RFICd.jpg)
Why are UI events not thread-safe in Swift/Objective-C? So I'm beginning to learn the basics of Grand Central Dispatch and the whole concept of multithreading with iOS applications. Every tutorial will tell you that you must run UI events on the main thread, but I don't completely understand why. Here's a problem I came across yesterday, and finally fixed it by running a segue on the main thread, but I still don't understand why running it off the main thread was a problem: I had a custom initial VC (barcode scanner) and a segue to a new view controller with a `UIWebView` attached. As soon as the VC found a barcode, it called a handler, and in that closure, I had a `performSegueWithIdentifier`. However, I got a `EXC_BAD_ACCESS` because of this (it didn't happen when the second VC had a label or a `UIImageView`, just with `UIWebView`). I finally realized that for some reason, the closure was called off the main thread, and thus the segue was being performed off the main thread. Why exactly would performing the segue on another thread throw a memory error? Is it because `self` in `self.performSegueWithIdentifier` was somehow nil? And why wouldn't Swift automatically dispatch a segue event on the main thread?
Interesting question! The crash isn't related to UIKit. It's a crash specific to UIWebView. Looking at the stack trace, the exception happens in the `WebCore::FloatingPointEnvironment::saveMainThreadEnvironment` function, which is part of the WebKit init process. Since WebKit manages a threaded execution environment of its own, it makes sense that it needs a definite starting point (i.e. the main thread) to build this environment. UIKit operations (like presenting a view controller) performed on threads other than `main` will not cause an exception, but they will be delayed (depending on the QoS of the dispatching queue). As for why the UIKit operations aren't automatically dispatched on the main queue, I can only speculate that adding extra checks inside the library calls would add too much redundant work that can be avoided simply by following a convention. For a larger discussion on UIKit and the main thread, see this answer: [Why must UIKit operations be performed on the main thread?](https://stackoverflow.com/questions/18467114/why-must-uikit-operations-be-performed-on-the-main-thread) The short answer is that all operations that modify the UI of your app need to come together in one place to be evaluated to generate the next frame at regular intervals (the V-Sync interval specifically). Keeping track of all of the mutated state requires all changes to happen synchronously, and for performance reasons, all of these operations are generally batched up and executed once per frame (while also coordinating with the GPU).
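As for the practical fix, the usual pattern is to hop back to the main queue inside the callback before doing anything UI-related — a sketch in Swift (the segue identifier and handler name here are hypothetical):

```
func barcodeDetected(_ code: String) {
    // the scanner's callback may arrive on a background queue,
    // so bounce the UI work back onto the main queue
    DispatchQueue.main.async {
        self.performSegue(withIdentifier: "showWebView", sender: code)
    }
}
```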
Is it possible to create a docker container that contains one or more containers? I want to create a docker container which contains one or more containers. Is it possible with Docker?
Running Docker inside Docker is definitely possible. The main thing is that you `run` the outer container with [extra privileges](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration) (starting with `--privileged=true`) and then install docker in that container.

Check this blog post for more info: [Docker-in-Docker](http://blog.docker.com/2013/09/docker-can-now-run-within-docker/).

One potential use case for this is described in [this entry](https://dantehranian.wordpress.com/2014/10/25/building-docker-images-within-docker-containers-via-jenkins/). The blog describes how to build docker containers within a Jenkins docker container.

However, Docker inside Docker is not the recommended approach to solve this type of problem. Instead, the recommended approach is to create "sibling" containers as [described in this post](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/)
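The simplest way to get sibling containers is to hand the host's Docker socket to the container, so the Docker client inside talks to the daemon outside — a sketch (note this grants the container full control over the host's Docker, so treat it like root access):

```
# any image with the docker CLI installed will do; the official "docker" image is one
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps
```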
java.lang.ClassCastException: Z cannot be cast to java.lang.String I'm getting an error: `java.lang.ClassCastException: Z cannot be cast to java.lang.String` while trying to run coverage (EclEmma) on a **Junit test**. If I run the test regularly (without coverage) then it passes. This is the code (all the fields in the class are `Strings`): ``` @Override public Map<String, String> getErrors() throws IllegalAccessException, IllegalArgumentException { Map<String, String> errors = new HashMap<String, String>(); for (Field field : this.getClass().getDeclaredFields()) { field.setAccessible(true); String value = (String) field.get(this); if (value.equals("N")) { if (!errors.containsKey(field.getName())) { errors.put(field.getName(), value); } } } return errors; } ```
The problem is that to produce the code coverage EclEmma adds a field `private static final transient boolean[] $jacocoData` to your class. Since this field is only present during code coverage runs, the normal unit test passes, but the code coverage run fails: your original code is not expecting this non-String field. The best solution is to check if the field you are seeing is really a String field and otherwise skipping the test of the field value: ``` for (Field field : this.getClass().getDeclaredFields()) { field.setAccessible(true); if (field.getType() != String.class) { continue; } String value = (String) field.get(this); if (value.equals("N")) { if (!errors.containsKey(field.getName())) { errors.put(field.getName(), value); } } } ```
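Alternatively, since JaCoCo marks its probe field as synthetic, you can skip all compiler- and instrumentation-generated fields in one go — a sketch of that guard:

```
// skips $jacocoData and any other generated members
if (field.isSynthetic()) {
    continue;
}
```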
conflicting distributed object modifiers on return type in implementation of 'release' in Singleton class

I have recently upgraded to Xcode 4.2 and it has started to give me many semantic warnings in my code... one of them is "conflicting distributed object modifiers on return type in implementation of 'release'" in my singleton class. I read somewhere about `- (oneway void)release;` to silence this warning, but once I put that in my code I started getting a compile error: "Duplicate declaration of release". I'm not sure why, and if you try to find the second declaration, it points at this line: SYNTHESIZE\_SINGLETON\_FOR\_CLASS(GlobalClass);

Update: This is the [post](https://stackoverflow.com/questions/7379470/singleton-release-method-produces-warning) where `- (oneway void)release;` is explained.

How do I get rid of this warning "conflicting distributed object modifiers on return type in implementation of release", and why is it happening?
The post you link to contains the solution to the problem in the title and explains why it happened to you. However, from reading your question it appears that your new issue is caused by mis-applying the great advice in that post's answer. I am fairly certain you *added* the line ``` - (oneway void) release {} ``` in your .m file rather than amending your existing ``` - (void) release { ``` line with the extra word "oneway". This would be why you get "Duplicate declaration of release". Yes, this is confusing because it's a duplicate definition that is invisibly creating the duplicate declaration. But I've just tried doing it your wrong way, and I get that "duplicate declaration" message. I get the impression, perhaps wrongly, that you didn't realise you actually had a release method, particularly when you think adding the line will "release this warning". Don't take all errors too literally, and always try to think what someone might really mean as it's often different from what they say, but do try and understand what is in your code, even in the classes you've taken off the shelf. And to address other questions raised, the reason you're overriding release is because it is a singleton which is not usually released. You probably only have a definition in your code, which will suffice. What Jonathan Grynspan has to say about specifying on both the declaration and the definition is broadly valid (and indeed the root of the issue) but it's important to recognise that in this specific case, the declaration is by Apple's foundation code which has changed. So, if it's not clear already, **amend the line that XCode finds problem with to include the word oneway**.
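For clarity, the amended method in the .m file should end up looking like this — a sketch of the usual singleton no-op release (your macro-generated version may contain more than an empty body):

```
- (oneway void)release {
    // a singleton is never deallocated, so this is intentionally a no-op
}
```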
No recent books on MPI: is it dying? I've never used Message Passing Interface (MPI), but I've heard its name thrown about, most recently with Windows HPC Server. I had a quick look on amazon to see if there were any books on it, but they're all dated around 7 or more years ago. Is MPI still a valid technology choice for new applications, or has it been largely superceded by other distributed programming alternatives (e.g. DataSynapse GridServer)? As it's not really an implementation, but rather a standard, what is the likelihood (assuming it's not dead) that learning it will result in better design of distributed programming systems? Is there something else I should be looking at instead?
For what it's good at, MPI is still a good choice. It's just possible that there are no recent books on the topic because the existing ones are good enough and most of us using MPI don't need anything more.

I wouldn't characterise MPI as a distributed programming standard, more a standard for parallel programming on distributed-memory computers -- which covers most of the largest computers in the world right now. If I were betting on it being replaced I'd be looking at [Chapel](http://chapel.cray.com/), [X10](http://x10-lang.org/), or, most likely, Fortran 2008.

What you should be looking at depends on your requirements, but if they include high-performance number-crunching for scientific and engineering codes, Fortran or C/C++ with MPI should be in your sights. I've never heard of DataSynapse GridServer; a quick Google search suggests to me that it's aimed at a completely different class of computational problems.

EDIT: I just checked Amazon for books 'on MPI'. While the Gropp *et al* books are a bit old now, there are still plenty of other books being published which cover (use of) MPI. This is, in part, a reflection of the usage of MPI. It's not terribly interesting to computer scientists so there aren't many books on 'MPI for MPI's sake', but it is of interest to many computational scientists, so there's a steady stream of 'physics with MPI' and 'engineering with MPI' books. If these are outside your sphere of interest, MPI probably is too.
Why is the Ubuntu One server side application proprietary? > > **NOTE:** as of 02-04-14. Canonical has announced that they will be shutting down the ubuntu one service, and in the process, will make the server side application open source as well, as detailed in their [blog post](http://blog.canonical.com/2014/04/02/shutting-down-ubuntu-one-file-services/) > > > I've come across the [Launchpad](https://bugs.launchpad.net/ubuntuone-servers/+bug/375272) bug report which gave me some insight into people's opinions on the matter, but didn't explain why Canonical made this decision. The [Wikipedia](http://en.wikipedia.org/wiki/Ubuntu_One#Criticism) article on Ubuntu One didn't contain any links to announcements that would clear this up. **Question**: Why is the Ubuntu One server side application proprietary?
> Ubuntu One's source code has been OSSed, see: <http://insights.ubuntu.com/2015/08/10/ubuntu-one-file-syncing-code-open-sourced/>

Original answer for posterity:

This was a commercial decision made early on in the project. There were many rationales behind it, one of the predominant ones being that by making the server open source, anyone could set up a competing site with lower prices, effectively making the project hard (impossible?) to sustain. That said, many bits and pieces have been open sourced, and many more are to come.

We are continuously exploring ways to generate revenue that at the same time allow us to shift away from this model. The cost of the infrastructure to give away free storage space and synchronisation to millions of users, as well as a brilliant development team, is hugely expensive, so it is a sensitive subject. Everyone involved in the project is passionate about open source, and the majority of us are deeply involved in many open source communities.
How to change the XAxis used on my chart ShinobiCharts

I'd like the xAxis of my chart to look like this: Jan, Feb, Mar, Apr, ..., Nov, Dec. Right now it's following its default, numbering the xAxis according to the number of data points. How can I achieve this change to the axis?

I've tried using a category axis and setting an NSMutableArray containing these strings ("Jan", "Feb", ...) as categories, with a number range going from 1 to 12, but it didn't work.

```
chart = [[ShinobiChart alloc] initWithFrame:chartEmbaixo.frame withPrimaryXAxisType:SChartAxisTypeCategory withPrimaryYAxisType:SChartAxisTypeNumber];

NSMutableArray * monthNames = [[NSMutableArray alloc] initWithObjects:@"Jan", @"Fev", @"Mar", @"Abr", @"Mai", @"Jun", @"Jul", @"Ago", @"Set", @"Out", @"Nov", @"Dez", nil];

SChartNumberRange * numberRange = [[SChartNumberRange alloc] initWithMinimum:[NSNumber numberWithInt:1]andMaximum:[NSNumber numberWithInt:12]];

SChartCategoryAxis *xAxis = [[SChartCategoryAxis alloc] initWithRange:numberRange];
xAxis.categories = monthNames;

chart.xAxis = xAxis;
```
First, here is how I make my x axis:

```
SChartNumberRange *r1 = [[SChartNumberRange alloc] initWithMinimum:[NSNumber numberWithInt:0] andMaximum:[NSNumber numberWithInt:2]];
SChartCategoryAxis *xAxis = [[SChartCategoryAxis alloc] initWithRange:r1];
xAxis.title = @"";
//xAxis.enableGesturePanning = YES;
xAxis.enableGesturePanning = YES;
xAxis.style.gridStripeStyle.showGridStripes = NO;
xAxis.style.majorGridLineStyle.showMajorGridLines = NO;
```

When you create your data points, the chart should use the xValue as the x axis point, like this:

```
dp.yValue = 1000;
dp.xValue = @"Jan";
```

The xValue is used as the x position for that particular data point. This should work, but if it doesn't, or you want to do something more complex, you can implement this method from the `SChartDelegate` protocol:

```
-(void)sChart:(ShinobiChart *)chart alterTickMark:(SChartTickMark *)tickMark beforeAddingToAxis:(SChartAxis *)axis
```

In this method, `tickMark.tickLabel` is the axis label for that given point, and that is where you can do your editing. Don't forget to verify which axis you're on.

Hope this helps. If not, tomorrow I can post some code from my project (currently I don't have access to it from where I am).

**Edit:** Currently I have this code:

```
- (void)sChart:(ShinobiChart *)chart alterTickMark:(SChartTickMark *)tickMark beforeAddingToAxis:(SChartAxis *)axis {
    if (chart.yAxis == axis )
        return;

    for (UIView *i in tickMark.tickMarkView.subviews)
        [i removeFromSuperview];

    tickMark.tickMarkView.frame = CGRectMake(0, 0, 170, 75);

    //center the marker at the right place because the size was changed
    tickMark.tickMarkX = tickMark.tickMarkX - (tickMark.tickMarkView.frame.size.width/2) ;
    tickMark.tickMarkY = 10;

    //img
    UIImageView *img = [[UIImageView alloc] initWithImage:[UIImage imageNamed: @"graph_bar_tag_2@2x.png"]];
    img.frame = CGRectMake( 0, 0, tickMark.tickMarkView.frame.size.width, tickMark.tickMarkView.frame.size.height);
    [tickMark.tickMarkView addSubview:img];

    //label with the markView's size with 7px padding on the left and on the right
    UILabel *label = [[UILabel alloc] initWithFrame: CGRectMake( 7, 5, tickMark.tickMarkView.frame.size.width-14, 15)];
    label.backgroundColor = [UIColor clearColor];

    //tickMark.tickLabel has a pair of indexes so that I can easily find the data for this particular data point and series.
    label.text = [_dataSource getNameFor: tickMark.tickLabel.text];
    label.textAlignment = UITextAlignmentCenter;

    //color_other_light is a UIColor var
    [label setTextColor: color_other_light];
    [tickMark.tickMarkView addSubview:label];
    ...
}
```
ROS tf.transform cannot find a frame which actually exists (can be traced with rosrun tf tf\_echo)

Has anyone experienced the following behavior with ROS? How is it possible that a `tf.lookupTransform("map", "base_link", ros::Time(0), transform);` tells you that: `"base_link" passed to lookupTransform argument target_frame does not exist` but if I enter:

```
rosrun tf tf_echo base_link map
At time 1549633095.937
- Translation: [-0.005, 0.020, -0.129]
- Rotation: in Quaternion [0.033, 0.063, 0.002, 0.997]
 in RPY (radian) [0.066, 0.127, 0.009]
```

I can see that the frame not only exists but that there is an effective transformation available between `map` and `base_link`? I am totally clueless about this weird behavior. Any help will be very welcome. Indeed, the program works on my laptop but not on an Intel NUC. The complete piece of code is given below (actually I get a segmentation fault during the creation of the costmap\_2d::costmap2DROS, and it looks like it is because the tf transform is failing):

```
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <costmap_2d/costmap_2d_ros.h>

int main(int argc, char **argv) {
    ros::init(argc, argv, "hector_exploration_node");
    ros::NodeHandle nh;

    tf::TransformListener tf;
    tf::StampedTransform transform;
    try{
        tf.lookupTransform("/base_link", "/map", ros::Time(0), transform);
        std::cout << "transform exist\n";
    }
    catch (tf::TransformException ex){
        ROS_ERROR("%s",ex.what());
        ros::Duration(1.0).sleep();
    }

    std::cout << "before costmap\n";
    costmap_2d::Costmap2DROS costmap("global_costmap", tf);
    costmap.start();

    ros::Rate rate(10);
    while (ros::ok())
        ros::spin();

    return 0;
}
```
You try to perform the transformation immediately after you've created your tf listener, which is commonly a bad practice for the following reason:

The listener's buffer, which carries all information about recent transformations, is literally empty. Therefore, any lookup against the buffer does not find the frames it needs.

It is good practice to wait for some time after the listener has been created so that the buffer can fill up. But instead of just sleeping, tf comes with its own implementation to wait for exactly the frames you are asking for: [waitForTransform](http://docs.ros.org/jade/api/tf/html/c++/classtf_1_1Transformer.html#a72dc26fe7bfcb9585123309e092e5c83). It can be used as [explained here](http://wiki.ros.org/tf/Tutorials/tf%20and%20Time%20%28C%2B%2B%29#Wait_for_transforms).

Therefore, you just have to extend your `try` block as follows:

```
try{
    tf.waitForTransform("/base_link", "/map", ros::Time(0), ros::Duration(3.0));
    tf.lookupTransform("/base_link", "/map", ros::Time(0), transform);
    std::cout << "transform exist\n";
}
```
Xamarin.Forms + .resx string resources - System.IO.FileNotFoundException: Invalid Image I am creating a Xamarin.Forms app based on a C# shared code project and another two projects for the actual Android + iOS app. I now want to implement string localization like it is documented [here](https://learn.microsoft.com/en-us/xamarin/xamarin-forms/app-fundamentals/localization/text?tabs=windows). So I created a .net standard 2.0 library project in my solution for the .resx file and referenced this project in my two main projects, and I did all the other steps described in the linked article, of course. When I now want to access any of my string resources from one of my app projects, `ResourceManager.GetString` throws this Exception: `System.IO.FileNotFoundException: Invalid Image` Example code would be that line, but it can be also any other string resource. ``` public static string global_copyright { get { return ResourceManager.GetString("global_copyright", resourceCulture); } } ``` I can confirm that the assembly of this library project is found and loaded correctly, since I can create instances of other classes defined in that project. Now I put this code directly at the beginning of the App() constructor like it is described in that article to debug for such issues: ``` var assembly = typeof(AppResources).GetTypeInfo().Assembly; foreach (var res in assembly.GetManifestResourceNames()) { System.Diagnostics.Debug.WriteLine("found resource: " + res); } var test = AppResources.global_copyright; ``` The first lines are executed fine and it shows me that one resource file is found in that assembly. Its name is also correct. But on the last line it crashes with that exception. What did I wrong? Why is ResourceManager not able to load string resources from that assembly? Visual Studio 2017 (15.9.6) with Xamarin.Forms v3.4
Since there is no real answer yet, I want to present a **workaround** as a preliminary answer until someone finds something better.

First, the problem still exists in Visual Studio 2019 (16.1.2) with Xamarin.Forms v3.6.

I found out that it occurs because, in the Visual Studio Exception Settings, I had checked the option to break on all "Common Language Runtime Exceptions". When I disable that setting, the app can be debugged without any problems and the resource strings are loaded correctly.

Next, I found out that it seems to be an internal exception of the ResourceManager that is, for whatever reason, bubbled up into my own code. Simply press F5 to continue debugging when this exception is raised (this has to be done twice, but only the first time you access your resources) and the app continues running normally.

**So there are three possible workarounds:**

1. Disable breaking on "Common Language Runtime Exceptions" in the Exception Settings [![Screenshot](https://i.stack.imgur.com/rK8O2.png)](https://i.stack.imgur.com/rK8O2.png)
2. In the Exception Settings, leave "Common Language Runtime Exceptions" checked but disable just the "System.IO.FileNotFoundException" break. [![Screenshot 2](https://i.stack.imgur.com/Tg8Ls.png)](https://i.stack.imgur.com/Tg8Ls.png)
3. Just continue debugging with F5 when that exception is thrown
How do I fix a "Can't access Dropbox folder" error? I'm having a rather strange problem with Dropbox that started a few weeks ago. Dropbox will stop syncing with the message "Can't access Dropbox folder", and refuse to sync until I reboot. I've tried restarting Dropbox and logging out, but nothing I seem to do will allow it to sync again short of a reboot. This is not a permissions problem, as the permissions don't change when I suddenly lose access. I've checked lsof for anything related to dropbox that might still be hanging on when I stop it. As far as I can tell nothing else other than Dropbox is accessing it's folders when this happens.
If you put these in `/etc/sysctl.conf`:

```
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 256
```

it will fix the issue. You may have to run `sudo sysctl -p` for these settings to take effect.

Alternatively, if you are not interested in making these settings permanent, you may try the following commands...

```
sudo sysctl fs.inotify.max_user_instances=256
sudo sysctl fs.inotify.max_user_watches=1048576
```

In this case the settings will be lost after a reboot.

The problem is that the system has run out of inotify instances. You can check your syslog (e.g. by using `tail -f /var/log/syslog`) and if you see...

> tail: inotify cannot be used, reverting to polling: Too many open files

then you know this is your issue. For me it was nepomuk.
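You can check the limits currently in effect before and after the change:

```
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```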
select max, group by and display other column that's not in group by clause

To keep it short, I found a tutorial online and followed it up to this point: <http://www.tizag.com/mysqlTutorial/mysqlmax.php>

```
SELECT type, MAX(price) FROM products GROUP BY type
```

My question is: How do I echo which "clothing" is the most expensive (in this case "Blouse")?

**UPDATE:**

---

Sorry guys, my bad. I needed to make myself clearer. What I am looking for is a solution that shows each "name" where it is most expensive:

```
name     type          price
Clothing Blouse        34.97
Toy      Playstation   89.95
Music    Country Tunes 21.55
```
Try the following query:

**Solution #1:**

```
SELECT
products.name,
products.type,
products.price
FROM products
INNER JOIN
(
    SELECT type,MAX(price) max_price
    FROM products
    GROUP BY type
) t
ON products.type = t.type AND products.price = t.max_price;
```

[**Demo Here**](http://sqlfiddle.com/#!9/1637b0/1/0)

**Solution #2:**

```
SELECT
products.name,
products.type,
products.price
FROM products
WHERE (type, price) IN
(
    SELECT type, MAX(price) max_price
    FROM products
    GROUP BY type
)
```

[**See Demo**](http://sqlfiddle.com/#!9/1637b0/2/0)

**EDIT:**

---

**Note:** Both solutions might give you multiple products under the same `type` if they share the same maximum price. If you strictly want **at most one** item from each type then you need to **`group by`** again in the last line. So for both solutions the last line would be:

`GROUP BY products.type, products.price`

[**See Demo of it**](http://sqlfiddle.com/#!9/fa5471/2/0)
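As a side note, if your MySQL version supports window functions (8.0+), the same result can be obtained without the self-join — a sketch:

```
SELECT name, type, price
FROM (
    SELECT name, type, price,
           ROW_NUMBER() OVER (PARTITION BY type ORDER BY price DESC) AS rn
    FROM products
) ranked
WHERE rn = 1;
```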
When using "cv2.drawMatches", an error occurs: "outImg is not a numpy array, neither a scalar"

I have the following code for keyframe matching with ORB:

```
import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread("C:\\Users\\user\\Desktop\\picture\\Pikachu_Libre.png",0)
img2 = cv2.imread("C:\\Users\\user\\Desktop\\picture\\Pikachu_Libre.png",0)

# Initiate STAR detector
orb = cv2.ORB_create()

# find the keypoints with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)

# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,None,matches[:10], flags=2)

plt.imshow(img3),plt.show()
```

After I run it, I get the following error:

```
img3 = cv2.drawMatches(img1,kp1,img2,kp2,None,matches[:10], flags=2)
TypeError: outImg is not a numpy array, neither a scalar
```

Can anyone help me with this?
Notice the prototype of [`cv2.drawMatches()`](https://docs.opencv.org/3.4.1/d4/d5d/group__features2d__draw.html#ga7421b3941617d7267e3f2311582f49e1): ``` cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]) -> outImg ``` So your parameters' order is wrong. --- **From**: ``` img3 = cv2.drawMatches(img1,kp1,img2,kp2,None,matches[:10], flags=2) ``` **To**: ``` img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], None,flags=2) ```
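To make the argument-ordering mistake harder to repeat, you can pass the trailing arguments by keyword — OpenCV's Python bindings accept the parameter names shown in the prototype (a sketch, assuming a reasonably recent OpenCV build):

```
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], outImg=None, flags=2)
```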