Nesting unnamed namespaces Is there a functional difference between these two nested usages of unnamed namespaces: ``` namespace A { namespace { void foo() {/*...*/} }} ``` and ``` namespace { namespace A { void foo() {/*...*/} }} ``` As far as I see it, both `foo`s will get an internal unique identifier per compilation unit and can be accessed with `A::foo` - but is there a subtle or not-so-subtle difference that I'm not seeing?
Exactly as you typed, there is no difference. You can, of course, add declarations at the first level of namespace to both examples, and then there will be a difference. ``` namespace A { int i; // Accessed globally in this file as "A::i". namespace { void foo() {/*...*/} }} namespace { int i; // Accessed globally in this file simply as "i". namespace A { void foo() {/*...*/} }} ``` Note that, although you, the programmer, have no way to distinguish them, for the compiler the namespaces are distinct: ``` unnamed_namespaces.cpp:42:5: error: reference to ‘A’ is ambiguous unnamed_namespaces.cpp:19:17: error: candidates are: namespace A { } unnamed_namespaces.cpp:28:19: error: namespace <unnamed>::A { } ``` Useful: - [Nested anonymous namespace?](https://stackoverflow.com/questions/2094072/nested-anonymous-namespace) - <http://www.codingunit.com/cplusplus-tutorial-namespaces-and-anonymous-namespaces> - <http://publib.boulder.ibm.com/infocenter/comphelp/v8v101/index.jsp?topic=%2Fcom.ibm.xlcpp8a.doc%2Flanguage%2Fref%2Funnamed_namespaces.htm> - <http://www.informit.com/articles/article.aspx?p=31783&seqNum=6> - <http://msdn.microsoft.com/en-us/library/yct4x9k5%28v=vs.80%29.aspx> --- EDIT: With respect to ADL (argument-dependent name lookup), I understand that there will be no precedence difference in overload resolution for another `foo()`, as below: ``` #include <iostream> void foo() { std::cout << "::foo()" << std::endl; } namespace A { namespace { void foo() { std::cout << "A::<unnamed>::foo()" << std::endl; } class AClass { public: AClass( ) { foo( ); } }; } } namespace { namespace B { void foo() { std::cout << "B::<unnamed>::foo()" << std::endl; } using namespace A; class BClass { public: BClass( ) { foo( ); } ~BClass( ) { A::foo( ); } }; } } int main( ) { A::foo( ); B::foo( ); foo( ); A::AClass a; B::BClass b; return 0; } ``` The compiler will prefer the closest `foo( )` unless one is explicitly specified. So the `BClass` constructor calls `B::foo( )` even though it has a `using namespace A` in scope. To call `A::foo( )` in the `BClass` destructor, the call must be explicitly qualified. ``` A::<unnamed>::foo() B::<unnamed>::foo() ::foo() A::<unnamed>::foo() B::<unnamed>::foo() A::<unnamed>::foo() ``` Maybe it becomes clearer if we think in terms of nested *named* namespaces and how argument-dependent lookup would be resolved. The only difference will be an implicit `using` on the unnamed ones, but it won't change the compiler's preference.
How to display only date in GridView when pulling data from a DB? C# I am pulling data from an Access database to show in a GridView control in an ASP.NET project. It works fine, but I want to see if I can format the data that is being pulled. Currently any currency is being truncated from xx.xx to just the dollar amount. Also, the dates are displaying as mm/dd/yyyy hh:mm:ss AM/PM. I tried editing the database itself to the right values (I set the currency field to "Currency" and the date field to "Short Date"), but when I pull that data it still shows up unformatted. **EDIT:** Sorry, had to take the code down. Any ideas? Thank you
In your GridView, add the property called `DataFormatString` to the bound field. **DataFormatString examples:** ``` {0:dd MMMM yyyy} - gives 24 February 2006 {0:MMM dd} - gives Feb 24 (substitute MMM with MMMM for the full month name instead of the abbreviation) {0:dd/MM/yy} - gives 24/02/06 {0:dd/MM/yyyy} - gives 24/02/2006 ``` **Sample Code** ``` <asp:BoundField HeaderText="Date" DataField="SampleDate" DataFormatString="{0:MM/dd/yyyy}" /> ``` # [MSDN BoundField.DataFormatString Property](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.dataformatstring%28v=vs.110%29.aspx)
Stop Apps Script from creating GCP projects I'm seeing a large number of GCP projects being created by *serviceAccount:appsdev-apps-dev-script-auth@system.gserviceaccount.com*, following invocation of Apps Scripts. I'd like to control or block such project creation. What are the right ways to accomplish that?
That's not possible. Quoting the GCP projects [documentation](https://developers.google.com/apps-script/guides/cloud-platform-projects#standard_cloud_platform_projects): > > By default GCP projects have an Identity and Access Management (IAM) > policy with one entry, a Google service account that acts as the owner > of the default project. The Google service account is > **appsdev-apps-dev-script-auth@system.gserviceaccount.com**. > > > Also: > > Every Apps Script project uses the Google Cloud Platform to manage > authorization, Advanced services, and other details. To configure and > manage these settings, every Apps Script project has an associated > Google Cloud Platform project (a GCP project). > > > --- You can use a [default GCP project](https://developers.google.com/apps-script/guides/cloud-platform-projects#default_cloud_platform_projects) or a [standard](https://developers.google.com/apps-script/guides/cloud-platform-projects#standard_cloud_platform_projects) project created by you, but the Apps Script project does need a GCP Project. --- Moreover, since April 8, 2019 **it's not possible to access the default GCP projects** created for Apps Script projects unless they were created before that date.
Programmatically build an F# regular expression with the FsVerbalExpressions library I've been using the library [FsVerbalExpressions](http://verbalexpressions.github.io/FSharpVerbalExpressions/) to write some functions. I'm having a hard time trying to build a regEx programmatically. For example, if I have a string `"Int. Bus. Mach"`, I can remove periods and whitespaces and end up with the array ``` let splitString = [|"Int"; "Bus"; "Mach"|] ``` What I'd like to do is build a regular expression from `splitString` so that its result is: ``` let hardCoded = VerbEx() |> startOfLine |> then' "Int" |> anything |> whiteSpace |> then' "Bus" |> anything |> whiteSpace |> then' "Mach" hardCoded;; val it : VerbEx = ^(Int)(.*)\s(Bus)(.*)\s(Mach) {MatchTimeout = -00:00:00.0010000; Regex = ^(Int)(.*)\s(Bus)(.*)\s(Mach); RegexOptions = None; RightToLeft = false;} ``` My problem is that I don't know how to build this programmatically so that, if the original string is `"This is a much bigger string"`, the entire regEx is built from code rather than hard coded. I can create individual regular expressions with ``` let test = splitString |> Array.map (fun thing -> VerbEx() |> then' thing) |> Array.toList ``` but this is a list of `VerbEx()` rather than a single `VerbEx()` above. Does anyone know how I could build a regEx with `FsVerbalExpressions` programmatically? Thanks in advance for your help!
Think about it like this: you need to start with some initial value, `VerbEx() |> startOfLine`, and then apply to it repeating patterns that have the general shape of `anything |> whitespace |> then' word`. You can also think about it in inductive terms: you're producing a series of values, where each value is expressed as `previousValue |> anything |> whitespace |> then' word` - that is, each next value in the series is previous value with some change applied to it. The very last element of such series is your final answer. Such operation - producing a series of values, where each value is expressed as a modification of the previous one, - is traditionally called `fold`. And sure enough, F# has standard library functions for performing this operation: ``` let applyChange previousValue word = previousValue |> anything |> whitespace |> then' word let initialValue = VerbEx() |> startOfLine let finalAnswer = splitString |> Array.fold applyChange initialValue ``` Or you can roll that all together: ``` let finalAnswer = splitString |> Array.fold (fun previousValue word -> previousValue |> anything |> whitespace |> then' word) (VerbEx() |> startOfLine) ```
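If you want the result to match the hard-coded pattern from the question exactly (`^(Int)(.*)\s(Bus)(.*)\s(Mach)`, with no `(.*)\s` before the first word), you can seed the fold with the first word and fold over the rest. This is only a sketch: it assumes a non-empty array, and the operator spelling (`whiteSpace`, `then'`, `anything`, `startOfLine`) follows the question's working example rather than being checked against the library.

```
let buildVerbEx (words: string[]) =
    // Seed with startOfLine |> then' firstWord, then append (.*)\s(word) for each remaining word.
    words.[1..]
    |> Array.fold (fun ve word -> ve |> anything |> whiteSpace |> then' word)
                  (VerbEx() |> startOfLine |> then' words.[0])

// Works for the original array as well as a freshly split sentence:
let fromSplit = buildVerbEx splitString
let fromSentence = buildVerbEx ("This is a much bigger string".Split(' '))
```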
Keeping HTML footer at the bottom of the window if page is short Some of my webpages are short. In those pages, the footer might end up in the middle of the window and below the footer is whitespace (in white). That looks ugly. I'd like the footer to be at the bottom of the window and the limited content body just gets stretched. However, if the webpage is long and you have to scroll to see the footer (or all of it), then things should behave as normal. What's the proper way to do this with CSS? Do I need Javascript/jQuery to make this happen? I only care about IE9+ and modern versions of other browsers. The height of the footer can change from page to page too, so I'd like to not rely on the height.
Check out [this site](http://matthewjamestaylor.com/blog/keeping-footers-at-the-bottom-of-the-page). He has a good tutorial on how to do this with css. I copied his css just in case Matthew's site is taken down. ``` html, body { margin:0; padding:0; height:100%; } #container { min-height:100%; position:relative; } #header { background:#ff0; padding:10px; } #body { padding:10px; padding-bottom:60px; /* Height of the footer */ } #footer { position:absolute; bottom:0; width:100%; height:60px; /* Height of the footer */ background:#6cf; } ``` **EDIT** Since the height of the footer is different from page to page, you could get the height of the footer and then adjust the #body padding-bottom with javascript. Here is an example using jquery. ``` $(function(){ $('#body').css('padding-bottom', $('#footer').height()+'px'); }); ```
Python Line Sorting - I have the following: ``` line = ['aaaa, 1111, BOB, 7777','aaaa, 1111, BOB, 8888','aaaa, 1111, larry, 7777','aaaa, 1111, Steve, 8888','BBBB, 2222, BOB, 7777'] ``` Is there a way I can sort by (Bob,Larry,Steve) then by (1111,2222)? so... ``` for i in line: i = i.split(' ') pos1 = i[0] pos2 = i[1] pos3 = i[2] pos4 = i[3] ``` So I need to sort by pos3 and then by pos2. Desired output would be: ``` 'aaaa, 1111, BOB, 7777' 'aaaa, 1111, BOB, 8888' 'BBBB, 2222, BOB, 7777' 'aaaa, 1111, larry, 7777' 'aaaa, 1111, Steve, 8888' ```
Leave the splitting to a key function: ``` sorted(line, key=lambda l: l.lower().split(', ')[2:0:-1]) ``` This returns the strings in `line` in lexicographically sorted order, case-insensitive. The `[2:0:-1]` slice returns the third and second columns in reverse order. Demo: ``` >>> line = ['aaaa, 1111, BOB, 7777','aaaa, 1111, BOB, 8888','aaaa, 1111, larry, 7777','aaaa, 1111, Steve, 8888','BBBB, 2222, BOB, 7777'] >>> from pprint import pprint >>> pprint(sorted(line, key=lambda l: l.lower().split(', ')[2:0:-1])) ['aaaa, 1111, BOB, 7777', 'aaaa, 1111, BOB, 8888', 'BBBB, 2222, BOB, 7777', 'aaaa, 1111, larry, 7777', 'aaaa, 1111, Steve, 8888'] ``` If your 'lines' are not as neatly comma + space separated, you may need to strip whitespace too.
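Continuing the demo above, a sketch of that variant: strip each field before lowercasing, so stray spaces around the commas don't affect the sort key.

```
>>> pprint(sorted(line, key=lambda l: [f.strip().lower() for f in l.split(',')][2:0:-1]))
['aaaa, 1111, BOB, 7777',
 'aaaa, 1111, BOB, 8888',
 'BBBB, 2222, BOB, 7777',
 'aaaa, 1111, larry, 7777',
 'aaaa, 1111, Steve, 8888']
```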
Equivalent of "Dim As String \* 1" VB6 to VB.NET I have some VB6 code that needs to be migrated to VB.NET, and I wanted to inquire about this line of code, and see if there is a way to implement it in .NET ``` Dim strChar1 As String * 1 ``` Intellisense keeps telling me that an end of statement is expected.
That's known as a "fixed-length" string. There isn't an exact equivalent in VB.NET. > > **Edit**: Well, OK, there's **[VBFixedStringAttribute](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.vbfixedstringattribute.aspx)**, but I'm pretty sure that exists solely so that automated migration tools can more easily convert VB6 code to VB.NET for you, and it's not really the ".NET way" to do things. Also see the warnings in the article for details on why this still isn't exactly the same thing as a fixed-length string in VB6. > > > Generally, fixed-length strings are only used in VB6 if you are reading fixed-size records from a file or over the network (i.e. parsing headers in a protocol frame). For example, you might have a file that contains a set of fixed-length records that all have the format **(integer, 1-character-string, double)**, which you could represent in VB6 as a user-defined type: ``` Public Type Record anInteger As Integer aSingleCharacter As String * 1 aDouble As Double End Type ``` This way, VB6 code that reads from the file containing records in this format can read each fixed-sized record stored in the file, and in particular, it will only read 1 byte for `aSingleCharacter`. Without the `* 1`, VB6 would have no idea how many characters to read from the file, since a `String` can normally have any number of characters. In VB.NET, you can do one of the following, depending on your needs: - If the length matters (i.e. you need to read exactly one byte from some data source, for example) consider using an array instead, such as `Dim aSingleByteArray(1) As Byte` - Alternatively, you could use one of the **[Stream](http://msdn.microsoft.com/en-us/library/system.io.stream.aspx)** classes. In particular, if you are reading data from a network socket or a file, consider using **[NetworkStream](http://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream.aspx)** or **[FileStream](http://msdn.microsoft.com/en-us/library/system.io.filestream.aspx)**, respectively. A **Stream** is meant for byte-for-byte access (i.e. raw binary access). **[StreamReader](http://msdn.microsoft.com/en-us/library/system.io.streamreader.aspx)** is a related class that simplifies reading data when it is text-based, so that might be good if you are reading a text file, for example. Otherwise (if dealing with binary data), stick with one of the **Stream** classes. - If the length doesn't matter, you could just use a "normal" `String`. That is to say: `Dim aNormalString As String` Which answer is "correct" really depends on why it was declared that way in the original VB6 code.
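For the fixed-size record case mentioned earlier, here is a minimal sketch (the file name and layout are assumptions, not taken from the original code) of reading the VB6 `Record` layout with a `BinaryReader` instead of a fixed-length string; note that a VB6 `Integer` is 16 bits, so `ReadInt16` matches the old layout:

```
Imports System.IO

Module RecordReader
    Sub Main()
        ' Read one record laid out as (16-bit integer, 1-byte character, double).
        Using reader As New BinaryReader(File.OpenRead("records.dat"))
            Dim anInteger As Short = reader.ReadInt16()
            Dim aSingleCharacter As Char = Convert.ToChar(reader.ReadByte())
            Dim aDouble As Double = reader.ReadDouble()
            Console.WriteLine("{0} {1} {2}", anInteger, aSingleCharacter, aDouble)
        End Using
    End Sub
End Module
```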
Basic questions re Objective-C property syntax I have a few basic questions regarding the syntax and usage of properties in Objective C: Assume the following declaration in the header: ``` @interface TestObject : NSObject { NSArray *myArray; } @property (nonatomic, retain) NSArray *myArray; ``` In the implementation: 1. Can I use `myArray` and `self.myArray` interchangeably for setting and getting purposes? 2. Is `self.myArray = nil` equivalent to `[myArray release]`? If so, is there ever a reason to use `self.myArray = nil` rather than `[myArray release]`?
1. `myArray` and `self.myArray` are actually different. `myArray` is accessing the variable directly, whereas `self.myArray` (which is equivalent to `[self myArray]`) is calling an accessor method. Most people agree that you should use `self.myArray` (or `[self myArray]`) all the time, and never use `myArray` directly. This is because the accessor might have side effects; for example, KVO won't work if you set your variable directly, and memory management won't be handled for you. 2. Your property is declared with `retain`, so `self.myArray = anArray` (which is the same as `[self setMyArray:anArray]`) does the following: 1. Retain anArray, which will soon be the new myArray. 2. Release the old myArray, which will soon no longer be myArray. 3. Change the pointer myArray so it now points to anArray. Therefore, when you do `self.myArray = nil`, one of the steps (#2) *is indeed* releasing the old array. (And since the new one is `nil`, we don't have to worry about *its* memory management, even though we retained it.) So yes, `self.myArray = nil` is a valid way of releasing `myArray`. HOWEVER, if you're talking about releasing `myArray` in `dealloc`, it's generally a good idea to use `[myArray release]`, because calling `self.myArray = nil` will have side effects if any other objects are observing `myArray` through KVO. So while it **does** follow memory management standards, it's **not** a good idea to write your `dealloc` method using `self.myArray = nil`.
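To make the setter steps under point 2 concrete, this is roughly what a `retain` setter does under manual reference counting (a sketch of the idea only; the code `@synthesize` actually generates may differ in details):

```
- (void)setMyArray:(NSArray *)anArray {
    [anArray retain];   // 1. retain the incoming array
    [myArray release];  // 2. release the old array (a no-op if it was nil)
    myArray = anArray;  // 3. point the ivar at the new array
}
```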
Where is VirtualBox's virtual hard disks repository? A colleague shared a VirtualBox VM with me. When double-clicking the VDI file, VirtualBox opens but I get a few errors (the errors were related to incorrect paths and UUIDs in the VBOX file). So I had to RTFM... I found [Importing a VDI in VirtualBox](https://blogs.oracle.com/oswald/entry/importing_a_vdi_in_virtualbox) from Oracle's blog. The blog states: > > First copy your VDI file into VirtualBox's virtual hard disks repository. On Mac OS X it's $HOME/Library/VirtualBox/HardDisks/. > > > I seem to have VDI's scattered all about, but all under a common root folder of `"VirtualBox VMs"`. And I don't have a folder `"HardDisks"`. So it's not clear to me where the repository is, or where I'm supposed to put the VDI. Where is VirtualBox's virtual hard disks repository in Windows 8.1?
On Microsoft Windows, the default storage location is `%HOMEDRIVE%%HOMEPATH%\VirtualBox VMs` which typically expands out to `C:\Users\[username]\VirtualBox VMs` The subfolders will be named with the `Name:` field contents entered in the **Create Virtual Machine** dialog where you're asked to enter Name and OS information. If you've decided to do a little more organization, the Machine Name subfolders will be under the Machine Group subfolder. The full layout may look like the following: `C:\Users\[username]\VirtualBox VMs\[vm group]\[vm name]` The VDI may exist elsewhere if you used the option to **Use an existing virtual hard drive**. It's easy to find where the VDI disks reside by using **File=>Virtual Media Manager** ![enter image description here](https://i.stack.imgur.com/9tGWP.png)
Return promise from the function I have a shallow understanding of JavaScript Promises and promise chains. Say I have a method as shown below. It's written in TypeScript, but could be modified to match JavaScript ES6 ``` private InsertPersonInDB(person : Person) { return this.db.find({ //<- would this return? selector: {objType: 'Person'}, fields: ['_id'], sort: ['_id'] }).then( result => { let allpersondIds : string[] = []; (result.docs).forEach(rec => { allpersondIds.push(rec._id); }); return allpersondIds; }).then ( allpersonIdsInDB => { var id = this.getIdfromPersonName(person.personName, allpersonIdsInDB); person._id = id; return this.db.post(person) //<- or would this return? }) } //Calling function for(let person of this.persons) { InsertPersonInDB(person).then(result => { console.log(result) //Some UI updates }).catch(err => { console.log(err) //Some UI updates notifying user about failure }); } ``` Here, I have two returns; the first is > > return this.db.find > > > where the *find* function is a promise, and the end of the then chain returns > > return this.db.post(person) > > > where the *post* function is also a promise. Here I have three questions: 1) When this function executes, what would be returned? 2) If the function immediately returns the promise, when would the chained *then*s execute? 3) What are better approaches to refactoring the promise chain in a layered application? E.g. a few of the chained *then*s need to be executed in the service and others need to be executed in the UI; how do I structure my promise code?
First of all, I think you can easily test out your questions by just making some small examples for yourself. When I am unclear about how things work, I find it very useful to create a small example to just try out and see what happens. Let's do that for this question as well (see also <https://plnkr.co/edit/K18o4Lp2jtUincjyG5wi?p=preview> for the working version; open the console to see the results!): ``` function test() { return returnPromise().then((value) => { console.log('1st then, inside test(): ' + value); return 'Hello'; }).then((value) => { console.log('2nd then, inside test(): ' + value); return 'world'; }); } function returnPromise() { return new Promise(function(resolve, reject) { resolve('start of new Promise'); }); } test().then((value) => { console.log('3rd then, after calling test: ' + value); }); ``` For your questions: 1. You return the Promise together with all the chained `then` functions. If you add another `then` to the returned Promise, it will be added at the end of the chain. That is what you see when we are doing `test().then(...)`. 2. A Promise tells you that it will execute at some point in time, without telling you when. The then chain will execute whenever the Promise resolves. You can see that in more detail in `returnPromise`. Here we return a new Promise. The body of the Promise calls the `resolve` method when it is done (in this case that is instantly), triggering the Promise to resolve and execute all `then` methods chained to the Promise. Usually the Promise won't resolve instantly, but will perform an async task (e.g. retrieving data from a server) first. 3. That really depends on the type of application and what you are looking for. Your current approach is not bad in itself, as long as the responsibilities are clearly defined.
How do I use structural annotations to set SQL type to Date in model first approach Is it possible to set the type to just date (NOT datetime) via the Entity Framework designer? I had a look around and the only answer that I've found is a post from an MSDN forum from a year ago... <http://social.msdn.microsoft.com/Forums/en/adodotnetentityframework/thread/28e45675-f64b-41f0-9f36-03b67cdf2e1b> I'm very new here and I don't really understand the instructions where they talk about structural annotations... I can go through the generated SQL script and change each line but I'd rather not do that...
Structural annotation - nice. It is the first time I have heard about this feature, but it works. I just tried it. I will try to explain it a little bit. Structural annotations are just arbitrary xml added to the EDMX file. The EDMX file is in fact just XML which has 4 parts - CSDL, MSL, SSDL and a part related to positioning elements in the designer. - CSDL describes entities and associations among entities (defined in the designer) - SSDL describes tables and relations - MSL describes the mapping between CSDL and SSDL If you start with model first (you want to generate the database from your model), you have only the CSDL part; both SSDL and MSL will be generated by an automatic process (T4 templates executed in a workflow), and once the SSDL is created another T4 template will generate the SQL script for database creation. The structural annotation described in the linked MSDN forum thread is a hint. You place the structural annotation into the CSDL part of the EDMX (you must open the EDMX as XML - click on the file in Solution Explorer and choose Open With). My test CSDL describes a single User entity with three properties (the entity is visible on the screenshot later in the answer): ``` <!-- CSDL content --> <edmx:ConceptualModels> <Schema xmlns="http://schemas.microsoft.com/ado/2008/09/edm" xmlns:cg="http://schemas.microsoft.com/ado/2006/04/codegeneration" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns:custom="http://tempuri.org/custom" Namespace="Model" Alias="Self" > <EntityContainer Name="ModelContainer" annotation:LazyLoadingEnabled="true"> <EntitySet Name="UsersSet" EntityType="Model.User" /> </EntityContainer> <EntityType Name="User"> <Key> <PropertyRef Name="Id" /> </Key> <Property Type="Int32" Name="Id" Nullable="false" annotation:StoreGeneratedPattern="Identity" /> <Property Type="String" Name="Login" Nullable="false" /> <Property Type="DateTime" Name="CreatedAt" Nullable="false"> <custom:SqlType edmx:CopyToSSDL="true">Date</custom:SqlType> </Property> </EntityType> </Schema> </edmx:ConceptualModels> ``` I have added a custom namespace definition in the `Schema` element: `xmlns:custom="http://tempuri.org/custom"` and defined a custom structural annotation for the `CreatedAt` property: ``` <Property Type="DateTime" Name="CreatedAt" Nullable="false"> <custom:SqlType edmx:CopyToSSDL="true">Date</custom:SqlType> </Property> ``` The names of the namespace and element used for the structural annotation are not important - it is absolutely up to you what names you use. The only important thing is the `edmx:CopyToSSDL="true"` attribute. This attribute is recognized by the T4 template used for SSDL creation, which just takes this element and places it in the SSDL.
The generated SSDL looks like: ``` <Schema Namespace="Model.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2008" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/02/edm/ssdl"> <EntityContainer Name="ModelStoreContainer"> <EntitySet Name="UsersSet" EntityType="Model.Store.UsersSet" store:Type="Tables" Schema="dbo" /> </EntityContainer> <EntityType Name="UsersSet"> <Key> <PropertyRef Name="Id" /> </Key> <Property Name="Id" Type="int" StoreGeneratedPattern="Identity" Nullable="false" /> <Property Name="Login" Type="nvarchar(max)" Nullable="false" /> <Property Name="CreatedAt" Type="datetime" Nullable="false"> <custom:SqlType xmlns:custom="http://tempuri.org/custom">Date</custom:SqlType> </Property> </EntityType> </Schema> ``` The only point was moving the structural annotation to the SSDL. All annotations are accessible in metadata through a name-value collection. Now you need to modify the T4 template responsible for SQL script generation to recognize this annotation and use the value defined in the annotation instead of the type defined in the property. You can find the template in: ``` C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen\SSDLToSQL10.tt ``` Copy the template file to a new location (so that you don't modify the original one) and replace the default table creation with this: ``` -- Creating table '<#=tableName#>' CREATE TABLE <# if (!IsSQLCE) {#>[<#=schemaName#>].<#}#>[<#=tableName#>] ( <# for (int p = 0; p < entitySet.ElementType.Properties.Count; p++) { EdmProperty prop = entitySet.ElementType.Properties[p]; #> [<#=Id(prop.Name)#>] <# if (prop.MetadataProperties.Contains("http://tempuri.org/custom:SqlType")) { MetadataProperty annotationProperty = prop.MetadataProperties["http://tempuri.org/custom:SqlType"]; XElement e = XElement.Parse(annotationProperty.Value.ToString()); string value = e.Value.Trim(); #> <#=value#> <# } else { #> <#=prop.ToStoreType()#> <# } #> <#=WriteIdentity(prop, targetVersion)#> <#=WriteNullable(prop.Nullable)#><#=(p < entitySet.ElementType.Properties.Count - 1) ? "," : ""#> <# } #> ); GO ``` Now the last point is changing the template used for SQL script generation. Open the EDMX file in the designer and go to the model's properties (just click somewhere in the designer while you have the Properties window open). Change DDL Generation Template to the template you modified. ![enter image description here](https://i.stack.imgur.com/Rtyq7.png) Run *Generate Database from Model* and it will create a SQL script containing: ``` -- Creating table 'UsersSet' CREATE TABLE [dbo].[UsersSet] ( [Id] int IDENTITY(1,1) NOT NULL, [Login] nvarchar(max) NOT NULL, [CreatedAt] Date NOT NULL ); GO ``` This is probably the most advanced and hidden feature of EDMX I have seen yet. Annotations together with custom T4 templates can get you a lot of control over both class and SQL generation. I can imagine using this to define, for example, database indexes or unique keys when using model first, or to selectively add some custom attributes to generated POCO classes. The reason why this is so hidden is that there is no out-of-the-box tooling support in VS to use this.
How to align view to the end of constraint layout along with a start constraint I have a constraint layout with one text view on left side and other on the right side. I want the right text view to be aligned to the end. Right text view should also be able to take the available space on left. It should ellipsize at end if space is not enough. I have tried this using following xml code: ``` <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" xmlns:app="http://schemas.android.com/apk/res-auto" android:padding="@dimen/dimen_16dp"> <TextView android:id="@+id/title" android:layout_width="wrap_content" android:layout_height="wrap_content" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" android:text="Title" android:textStyle="bold" android:layout_marginEnd="@dimen/dimen_24dp"/> <TextView android:id="@+id/subtitle" android:layout_width="wrap_content" android:layout_height="wrap_content" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintTop_toTopOf="parent" android:ellipsize="end" android:maxLines="1" android:text="Subtitle"/> </androidx.constraintlayout.widget.ConstraintLayout> ``` This works great for small text in the right text view. But if the **text in the right text view** is large it overlaps the left text view. I tried adding `app:layout_constraintStart_toEndOf="@id/title"` to the right text view, but it causes two problems: - If the text is small, text in right text view is centered in the available space but I want it to be right aligned. - If the text is large, it does not ellipsize properly How can this be achieved **without changing parent layout to linear layout**?
> > 1st: make the ***subtitle*** textview width = 0dp, give its start constraint to the end of the ***title*** textview, and set ***android:textAlignment="textEnd"*** > > > > > 2nd: constrain the ***title*** textview's end to the start of the ***subtitle*** textview > > > ``` <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" android:padding="16dp"> <TextView android:id="@+id/title" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginEnd="24dp" android:text="Title" android:textStyle="bold" app:layout_constraintEnd_toStartOf="@+id/subtitle" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <TextView android:id="@+id/subtitle" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_marginStart="5dp" android:ellipsize="end" android:maxLines="1" android:text="Subtitle Subtitle Subtitle Subtitle Subtitle Subtitle Subtitle" android:textAlignment="textEnd" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toEndOf="@+id/title" app:layout_constraintTop_toTopOf="parent" /> </androidx.constraintlayout.widget.ConstraintLayout> ``` [![enter image description here](https://i.stack.imgur.com/Kxtt1.png)](https://i.stack.imgur.com/Kxtt1.png)
What is wrong with this Decimal.TryParse? Code: ``` Decimal kilometro = Decimal.TryParse(myRow[0].ToString(), out decimal 0); ``` I get "some arguments are not valid"?
`out decimal 0` is not a valid parameter - `0` is not a valid variable name. ``` decimal output; kilometro = decimal.TryParse(myRow[0].ToString(), out output); ``` By the way, the return value will be a `bool` - from the name of the variable, your code should probably be: ``` if(decimal.TryParse(myRow[0].ToString(), out kilometro)) { // success - can use kilometro } ``` Since you want to return `kilometro`, you can do: ``` decimal kilometro = 0.0m; // Not strictly required, as the default value is 0.0 decimal.TryParse(myRow[0].ToString(), out kilometro); return kilometro; ```
Trouble compiling helloworld.cu While compiling this hello world sample in Ubuntu 10.10 This is from [CUDA by Example](https://rads.stackoverflow.com/amzn/click/com/0131387685), chapter 3 (No compile instructions provided >:@) ``` #include <iostream> __global__ void kernel (void){ } int main(void){ kernel <<<1,1>>>(); printf("Hellow World!\n"); return 0; } ``` I got this: > > $ nvcc -lcudart hello.cu hello.cu(11): error: identifier "printf" is > undefined > > > 1 error detected in the compilation of > "/tmp/tmpxft\_00007812\_00000000-4\_hello.cpp1.ii". > > > Why? How should this code be compiled?
You need to include `stdio.h` or `cstdio`, not `iostream` (which is for `std::cout` stuff), for `printf` (see `man 3 printf`). I found the source code for the book [here](https://github.com/jiekebo/CUDA-By-Example/blob/master/common/book.h). `chapter03/hello_world.cu` is actually: ``` /* * Copyright 1993-2010 NVIDIA Corporation. All rights reserved. * * NVIDIA Corporation and its licensors retain all intellectual property and * proprietary rights in and to this software and related documentation. * Any use, reproduction, disclosure, or distribution of this software * and related documentation without an express license agreement from * NVIDIA Corporation is strictly prohibited. * * Please refer to the applicable NVIDIA end user license agreement (EULA) * associated with this source code for terms and conditions that govern * your use of this NVIDIA software. * */ #include "../common/book.h" int main( void ) { printf( "Hello, World!\n" ); return 0; } ``` Where `../common/book.h` includes `stdio.h`. The `README.txt` file details how to compile the examples: ``` The vast majority of these code examples can be compiled quite easily by using NVIDIA's CUDA compiler driver, nvcc. To compile a typical example, say "example.cu," you will simply need to execute: > nvcc example.cu ```
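For reference, here is the question's snippet with just that include swapped in; it should compile with a plain `nvcc hello.cu`:

```
#include <cstdio>

__global__ void kernel(void) {
}

int main(void) {
    kernel<<<1,1>>>();
    printf("Hello, World!\n");
    return 0;
}
```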
Javascript Date object returns 'invalid date' for my date string I want to create a Date object in Javascript using this string `04/21/2014 12:00p` When passed to the constructor (`new Date('04/21/2014 12:00p')`), it returns `Invalid Date`. I've seen other posts which manipulate the string in order to fulfill the requirements of a valid dateString; however, that is not what I want. I want Javascript to recognize my date format (`m/dd/yy h:mmt`). In Java, something like that is simple, so I imagine there would be a similar way in JavaScript. How can I get the Date object to recognize my format?
This is trivial only when using a library like [moment.js](http://momentjs.com): ``` var dt = moment("04/21/2014 12:00p","MM/DD/YYYY h:mma").toDate(); ``` Otherwise, you would have considerable string manipulation to do. Also you would have to account for users in parts of the world that use d/m/y or other formatting instead of the m/d/y formatting of your input string. If this string is being sent from some back-end process, you might consider changing the format to a standard interchange format like [ISO-8601](http://en.wikipedia.org/wiki/ISO_8601) instead. Ex. `"2014-04-21T12:00:00"`
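If pulling in a library isn't an option, a hand-rolled parser is a sketch of the string manipulation mentioned above. It only handles strings shaped exactly like `04/21/2014 12:00p`, and the function name is just illustrative:

```
function parseCustom(s) {
  var m = /^(\d{1,2})\/(\d{1,2})\/(\d{4}) (\d{1,2}):(\d{2})([ap])$/.exec(s);
  if (!m) { return null; } // not in the expected format
  var hours = (+m[4]) % 12; // "12" maps to 0 before the am/pm adjustment
  if (m[6] === "p") { hours += 12; }
  return new Date(+m[3], +m[1] - 1, +m[2], hours, +m[5]);
}

parseCustom("04/21/2014 12:00p"); // Mon Apr 21 2014 12:00:00 (local time)
```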
How to add all titles to the queue of HandBrake automatically? When I open a DVD in HandBrake, it's scanned and the main title is found by the tool. The output name is also automatically adjusted. Then I may add it to the queue. But I need all titles to be added to the queue automatically, with proper names specifying their title number. Right now, I need to name them, select each one from the title combobox, and add them one after another. I didn't find a HandBrake forum, so I decided to ask here.
I don't believe you can do this automatically, but if adding them simultaneously would serve, here's an excerpt from the [Handbrake Team Documentation](https://handbrake.fr/docs/en/latest/advanced/queue.html) **Adding multiple encodes simultaneously** When using the Queue, you may find it beneficial to create multiple Jobs at one time. If your Source is a Blu-ray, DVD, or other disc or disc-like format with multiple Titles, you may open it with HandBrake and add multiple Titles to the Queue simultaneously. Additionally, you may open a folder of multiple single-Title videos (excluding discs and disc-like formats) as one virtual Source with multiple Titles using the Open Source dialog, and then add multiple Titles to the Queue. On Linux, select Add Multiple from the Queue menu, then select the Titles you wish to encode using the presented dialog. Be sure to give each a unique file name. You may open additional Sources and add them to the Queue in the same manner. **EDIT:** As an alternative, you could write a script that loops through all the files in a given directory (or all the titles on a disc) and runs the same conversion command on each of them utilizing [handbrake-cli](https://packages.ubuntu.com/search?suite=all&section=all&arch=any&keywords=handbrake-cli&searchon=names), as sketched below.
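For the disc-title case from the question, such a script might look like the following untested sketch; the device path, title count, and preset name are assumptions you will need to adjust (run `HandBrakeCLI -i <source> -t 0` first to list the titles):

```
#!/bin/bash
# Rip every title from a disc into its own numbered file.
INPUT=/dev/sr0          # or a path to a VIDEO_TS folder / ISO
OUTDIR="$HOME/Videos"
TITLES=12               # number of titles reported by the scan above

for t in $(seq 1 "$TITLES"); do
    HandBrakeCLI -i "$INPUT" -t "$t" -o "$OUTDIR/title_${t}.mkv" --preset "Fast 1080p30"
done
```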
Is it bad to not use a DB but use in memory objects? I've been tasked to write a small app to be used by a single user. This app will pull in ~500 employee names/departments from our master employee DB. Then the user will enter like 5 fields for each employee. Those 5 fields will typically only change once a year, but could be once a month worst case. I am only supposed to keep track of 2 years' worth at any given time. I've looked at SQLite and SQL CE and I'm just not thrilled by either of them. SQL CE doesn't want to allow the data file to reside on a network share. (Only a single user, but they store all their documents on their private share that is backed up daily.) SQLite seems like it would fit the bill better, but it doesn't integrate as well into Visual Studio without wrappers or anything. The other thing to consider is that our people are versed in MS' SQL Server and little else, so having something that they understand vs SQLite will be an important thing to my boss. So my question is: What if I store the data in objects in memory and serialize them to disk when saving? I've done a quick test, and 10k people (our use will only be 500-1000 max) with 10 years each (or 10 months if they update their data every month, highly unlikely) only caused my demo app to use 30MB of memory. Also, populating that data was instantaneous even with using GUIDs to randomly fill all the strings. Is this a bad idea? It's a fairly simple app and in this case it seems ok to me.
**I see a few issues with the idea of persisting business data using object serialization:** *These aren't necessarily show-stoppers for the idea, but rather something to think about...* 1. The data can't be queried, reported or inspected. It's entirely opaquely captured by the application. 2. Debugging serialized data is harder than being able to view the corresponding data in a database, or even a format like CSV. 3. There's no atomicity - it's possible to corrupt your entire "database" with one power failure or application crash. 4. If the data model changes, updating the existing persisted entities requires a version of the app that can read both the old and new format. With a database, you can just add a column (or sub table). 5. There's no clean way to implement concurrent access. What happens if more than one user wants to view or edit the data? **One thing I've learned is that small apps tend to grow and become "large apps".** When organizations guess incorrectly about the potential value of an application, they tend to incur the costs of this kind of unexpected, organic growth later. You also mention that you looked at SQLite and didn't like it. What is it that you didn't like? What kinds of problems did you anticipate? If you're just looking for a way to "cut corners" to get this done quicker - that may be ok in the short term - but be careful - *these kinds of decisions have a way of coming back to bite you*.
when adding 2+ buttons to east layout, only 1 shows When adding 2+ buttons to east layout, only 1 shows. I am trying to test a layout that uses tabbed panes. For some reason when I try to add multiple buttons to the east region, it only shows 1 button. It just so happens the button displayed is the last one added to the east region, the rest are ignored. I am thinking maybe they are just hidden underneath the last button. ``` public void createPage1() { { panel1 = new JPanel(); panel1.setLayout( new BorderLayout() ); panel1.add( new JButton( "North" ), BorderLayout.EAST ); panel1.add( new JButton( "South" ), BorderLayout.EAST ); panel1.add( new JButton( "East" ), BorderLayout.EAST ); panel1.add( new JButton( "West" ), BorderLayout.EAST ); panel1.add( new JButton( "Center" ), BorderLayout.EAST ); } } ```
I don't know how you want your UI to look, but try it this way: ``` public void createPage1() { //This will be the main panel. //We are going to put several buttons only in the "EAST" part of it. panel1 = new JPanel(); panel1.setLayout( new BorderLayout() ); //We create a sub-panel. Notice that we don't use any layout manager, //because we want it to use the default FlowLayout. JPanel subPanel = new JPanel(); subPanel.add( new JButton( "1" )); subPanel.add( new JButton( "2" )); subPanel.add( new JButton( "3" )); //Now we simply add it to your main panel. panel1.add(subPanel, BorderLayout.EAST); } ```
CocoaPods block dependency installation I haven't found the answer to this within the Podfile docs, so I'm not sure if it's possible. I want to install a CocoaPods project which has 3 dependencies. I add it to my Podfile: `pod 'IDMPhotoBrowser'` and run install: ``` $ pod install Installing DACircularProgress (2.1.0) … Installing IDMPhotoBrowser (1.2) … Installing SVProgressHUD (0.9) ``` However, I have a hacked up version of SVProgressHUD in my project which contains code not in the current repo. Additionally, SVProgressHUD 0.9 is from January, and there are months of additional commits since then. I would like to use my manually added version instead. Can I specify in my Podfile that `SVProgressHUD` should *not* be installed, so that my manually added version is used? Or do I just need to delete it by hand every time I run `pod install`? ### Alternatives I know I could upload my fork to github and do something like: ``` pod 'SVProgressHUD', :git => '<my git repo>', :commit => '<my sha>' ``` but I'm hoping to not need to upload code just to get Cocoapods to do what I want.
It's not so much about blocking the dependency as it is overriding it with your own. This means that CocoaPods needs to find your local copy of `SVProgressHUD` before it activates `IDMPhotoBrowser` and looks for `SVProgressHUD` in the master spec repo. You can achieve the setup you want by declaring your version of `SVProgressHUD` first in your Podfile using a [local podspec](https://github.com/CocoaPods/CocoaPods/wiki/Dependency-declaration-options#working-on-a-pod): 1. Your custom version needs to be in a subdirectory of your project, with a valid podspec at the root of that directory, e.g., `External/SVProgressHUD/SVProgressHUD.podspec`. 2. Update your Podfile like this: ``` pod 'SVProgressHUD', :path => 'External/SVProgressHUD' # this needs to be declared first pod 'IDMPhotoBrowser' # your custom pod will be used as the dependency here ``` If you don't have a local podspec, you should be able to take a copy of the 0.9 version of `SVProgressHUD` (and if necessary modify it to compile any new code you've added).
Recursive functions in computation expressions Some background first. I am currently learning some stuff about monadic parser combinators. While trying to transfer the 'chainl1' function from [this paper](http://www.cs.nott.ac.uk/~gmh/monparsing.pdf) (p. 16-17), I came up with this solution: ``` let chainl1 p op = parser { let! x = p let rec chainl1' (acc : 'a) : Parser<'a> = let p' = parser { let! f = op let! y = p return! chainl1' (f acc y) } p' <|> succeed acc return! chainl1' x } ``` I tested the function with some large input and got a StackOverflowException. Now I am wondering, is it possible to rewrite a recursive function that uses a computation expression so that it uses tail recursion? When I expand the computation expression, I cannot see how it would be possible in general. ``` let chainl1 p op = let b = parser b.Bind(p, (fun x -> let rec chainl1' (acc : 'a) : Parser<'a> = let p' = let b = parser b.Bind(op, (fun f -> b.Bind(p, (fun y -> b.ReturnFrom(chainl1' (f acc y)))))) p' <|> succeed acc b.ReturnFrom(chainl1' x))) ```
In your code, the following function isn't tail-recursive, because - in every iteration - it makes a choice between either `p'` or `succeed`: ``` // Renamed a few symbols to avoid breaking SO code formatter let rec chainl1Util (acc : 'a) : Parser<'a> = let pOp = parser { let! f = op let! y = p return! chainl1Util (f acc y) } // This is done 'after' the call using 'return!', which means // that the 'cahinl1Util' function isn't really tail-recursive! pOp <|> succeed acc ``` Depending on your implementation of parser combinators, the following rewrite could work (I'm not an expert here, but it may be worth trying this): ``` let rec chainl1Util (acc : 'a) : Parser<'a> = // Succeeds always returning the accumulated value (?) let pSuc = parser { let! r = succeed acc return Choice1Of2(r) } // Parses the next operator (if it is available) let pOp = parser { let! f = op return Choice2Of2(f) } // The main parsing code is tail-recursive now... parser { // We can continue using one of the previous two options let! cont = pOp <|> pSuc match cont with // In case of 'succeed acc', we call this branch and finish... | Choice1Of2(r) -> return r // In case of 'op', we need to read the next sub-expression.. | Choice2Of2(f) -> let! y = p // ..and then continue (this is tail-call now, because there are // no operations left - e.g. this isn't used as a parameter to <|>) return! chainl1Util (f acc y) } ``` In general, the pattern for writing tail-recursive functions inside computation expressions works. Something like this will work (for computation expressions that are implemented in a way that allows tail-recursion): ``` let rec foo(arg) = id { // some computation here return! foo(expr) } ``` As you can check, the new version matches this pattern, but the original one did not.
Java: Safe to "leak" this-reference in constructor for final class via \_happens-before\_ relation? Section 3.2.1 of Goetz's "Java Concurrency in Practice" contains the following rule: > > Do not allow the `this` reference to escape during construction > > > I understand that, in general, allowing `this` to escape can lead to other threads seeing incompletely constructed versions of your object and violate the initialization safety guarantee of `final` fields (as discussed e.g. [here](https://stackoverflow.com/questions/2513597/what-is-an-incompletely-constructed-object)) **But is it ever possible to safely leak `this`? In particular, if you establish a `happen-before` relationship prior to the leakage?** For example, the [official Executor Javadoc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Executor.html) says > > Actions in a thread prior to submitting a `Runnable` object to an `Executor` *happen-before* its execution begins, perhaps in another thread > > > My naive reading understanding of the Java memory model this is that something like the following should be safe, even though it's leaking `this` prior to the end of the constructor: ``` public final class Foo { private final String str1; private String str2; public Foo(Executor ex) { str1 = "I'm final"; str2 = "I'm not"; ex.execute(new Runnable() { // Oops: Leakage! public void run() { System.out.println(str1 + str2);} }); } } ``` That is, even though we have leaked `this` to a potentially malicious `Executor`, the assignments to `str1` and `str2` *happen-before* the leakage, so the object is (for all intents and purposes) completely constructed, even though it has not been "completely initialized" per JLS 17.5. Note that I also am requiring that the class be `final`, as any subclass's fields would be initialized after the leakage. Am I missing something here? Is this actually guaranteed to be well-behaved? It looks to me like an legitimate example of "Piggybacking on synchronization" (16.1.4) In general, I would greatly appreciate any pointers to additional resources where these issues are covered. **EDIT**: I am aware that, as @jtahlborn noted, I can avoid the issue by using a public static factory. I'm looking for an answer of the question directly to solidify my understanding of the Java memory model. **EDIT #2**: [This answer](https://stackoverflow.com/a/2513624/3297537) alludes to what I'm trying to get at. That is, following the rule from the JLS cited therein is **sufficient** for guaranteeing visibility of all `final` fields. But is it necessary, or can we make use of other *happen-before* mechanisms to ensure our own visibility guarantees?
You are correct. In *general*, the Java memory model does not treat constructors in any special way. Publishing an object reference before or after a constructor exit makes very little difference. The only exception is, of course, regarding `final` fields. The exit of a constructor where a final field is written to defines a "freeze" action on the field; if `this` is published after the `freeze`, even without happens-before edges, other threads will read the field properly initialized; but not if `this` is published before the `freeze`. Interestingly, if there is constructor chaining, `freeze` is defined on the smallest scope; e.g. ``` -- class Bar final int x; Bar(int x, int ignore) { this.x = x; // assign to final } // [f] freeze action on this.x public Bar(int x) { this(x, 0); // [f] is reached! leak(this); } ``` Here `leak(this)` is safe w.r.t. `this.x`. See my other [answer](https://stackoverflow.com/questions/35167777/are-immutable-objects-immune-to-improper-publication/35169705#35169705) for more details on `final` fields. --- If `final` seems too complicated, it is. My advice is -- forget it! Do not ever rely on `final` field semantics to publish unsafely. If your program is properly synchronized, you don't need to worry about `final` fields or their delicate semantics. Unfortunately, the current climate is to push `final` fields as much as possible, creating an undue pressure on programmers.
MonoMac - how can I execute apple script I am trying to create a simple app with MonoMac and need to execute AppleScript to access playlist information from different media players (see my [previous question](https://stackoverflow.com/questions/5617609/ruby-on-rails-get-mediaplayer-information-itunes-traktor-cog-current-song)). I decided to give MonoMac a try because I am very familiar with C# and .net development and there is a nice Websocket implementation. Unfortunately there seems to be no working wrapper for NSAppleScript. I tried [Frederic Forjan's implementation](http://forjan.fr/) and tried to use [monobjc](http://api.monobjc.net/html/T_Monobjc_Cocoa_NSAppleScript.htm). Frederic's wrapper class does not even compile. When I use his code I get the following exception: ``` Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'MonoMac, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. File name: 'MonoMac, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' ``` when creating a new `NSString` instance. However, `new NSDictionary()`, which is located in the same assembly, works fine. When using monobjc I get the following error: ``` Unhandled Exception: System.TypeInitializationException: An exception was thrown by the type initializer for Monobjc.ObjectiveCRuntime ---> System.EntryPointNotFoundException: monobjc_install_bridge at (wrapper managed-to-native) Monobjc.NativeMethods:InstallBridge () at Monobjc.ObjectiveCRuntime..cctor () [0x00000] in :0 --- End of inner exception stack trace --- at CocoaApplication1.Program.Main () [0x00000] in :0 ``` Can anyone suggest a simple and working way to execute AppleScript in a Mono environment?
You cannot use the Monobjc assemblies directly without running them with the Monobjc runtime binary. Whether you want to build a Cocoa-based or console application, [there are tutorials](http://www.monobjc.net/index.php?page=tutorials) for that on the Monobjc website to help you get started. The easiest way to achieve inter-application communication is to use the ScriptingBridge framework. Many applications provide scripting definitions that can then be used in an object-oriented way: [take a look at this guide](http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ScriptingBridgeConcepts/Introduction/Introduction.html) for more details. Note that the scripting bridge will only work with scripting-enabled applications (like iTunes, iCal, iMovie, Finder, etc). Monobjc supports the ScriptingBridge framework; there are even two sample applications ([SBSetFinderComment](http://www.monobjc.net/index.php?page=sbsetfindercomment) and [ScriptingBridgeiCal](http://www.monobjc.net/index.php?page=scriptingbridgeical)) that show how to integrate it.
Using local private key via putty -> shell on remote server -> github Is this scenario possible: 1. Using putty/pageant to connect via SSH to a remote linux server (ubuntu)? 2. In the shell on the remote linux server, doing "git push origin master" to github (url = git@github.com:username/repo.git) and authenticating with the key I used to log in to the linux server (avoiding having the key in `~/.ssh/id_dsa/id_rsa`)?
Yes, that is possible and called [*SSH Agent Forwarding*](https://help.github.com/articles/using-ssh-agent-forwarding). > > The nifty thing is, you can selectively let remote servers access your local ssh-agent as if it was running on the server. This is sort of like asking a friend to enter their password so you can use their computer. > > > - In \*nix, this would be as simple as adding the following to your `~/.ssh/config` file: ``` Host example.com ForwardAgent yes ``` Obviously, you have to replace `example.com` with the hostname of the server you want to forward your local key to. - With PuTTY and Pageant, all you have to do is [set up a session where agent forwarding is allowed](http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter4.html#config-ssh-agentfwd) in the *Auth* panel. ![](https://i.stack.imgur.com/INkYx.png) See here about [using Pageant for agent forwarding](http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html#pageant-forward) in general.
hibernate does not reconnect once the database is bounced I've found that after we've bounced (stopped and started) our database (postgresql 8.3), our apps which use hibernate (3.2.6) fail to re-acquire connections, instead getting a SocketException with the message "broken pipe". I believe we're configured to use the built-in connection pooling. How can I make hibernate re-acquire connections after a DB restart without restarting the app? p.
What you want is a feature called *connection testing* provided by connection pools - the connection pool should run a quick query to verify that the connection it is about to hand out is not stale. Unfortunately [DriverManagerConnectionProvider](http://docs.jboss.org/hibernate/core/3.6/javadocs/org/hibernate/connection/DriverManagerConnectionProvider.html), Hibernate's default connection pooling class, does not support this feature. The Hibernate team [strongly discourages](http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html_single/#configuration-programmatic) the use of this connection pool in production code. > > Hibernate's own connection pooling > algorithm is, however, quite > rudimentary. It is intended to help > you get started and is not intended > for use in a production system, or > even for performance testing. You > should use a third party pool for best > performance and stability. > > > My recommendation is that you switch to some other connection pool implementation. If you switch to C3P0 (which is shipped with Hibernate), connection testing can be configured as explained [here](http://www.mchange.com/projects/c3p0/index.html#configuring_connection_testing). If you use Apache DBCP, it lets you set a validationQuery as explained [here](http://commons.apache.org/dbcp/configuration.html).
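As a rough sketch of what the C3P0 route can look like (the property names are from memory for Hibernate 3.x-era setups and should be double-checked against the documentation linked above):

```
# hibernate.properties
hibernate.connection.provider_class=org.hibernate.connection.C3P0ConnectionProvider
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=300
hibernate.c3p0.idle_test_period=60

# c3p0.properties (settings Hibernate does not pass through itself)
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
```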
std::ostringstream operator overload search order? I have the following class: ``` namespace { class MimeLogger : public std::ostringstream { public: MimeLogger() {} ~MimeLogger() { LOGEVENT( logModuleWSE, logEventDebug, logMsgWSETrace1, str() ); } }; } ``` When I do this: ``` MimeLogger() << "Hello " << "World"; ``` The first `"Hello "` string is treated as a `void*`. If I debug the code, `"Hello "` is passed into `std::basic_ostream::operator<< (void const*)` and prints as a pointer value, not a string. The second string, `"World"` is properly passed into the global overloaded << operator that takes a `char const*`. I expect both usages of the << operator to resolve to the same overload, but this does not happen. Can someone explain, and maybe propose a fix? Thanks in advance. ## Update I neglected to mention that I'm stuck with C++03, but I'm glad that some people covered both the C++03 and C++11 cases.
C++03: For the expression `MimeLogger() << "Hello "`, the template function ``` template <typename charT, class traits> std::basic_ostream<charT, traits>& std::operator<< ( std::basic_ostream<charT, traits>& os, const char* cstr); ``` is not considered during overload resolution because the temporary `MimeLogger()` may not be bound to a non-const reference. The member function overloads do not have this problem because the rules for the implicit parameter do allow binding to a temporary. If you can use a compiler with support for C++11 rvalue-references, this should work as you intended, because the C++11 library provides an additional overload ``` template <typename charT, class traits, typename T> std::basic_ostream<charT, traits>& std::operator<< ( std::basic_ostream<charT, traits>&& os, const T& x ); // { os << x; return os; } ``` which allows temporary streams to be used left of `<<` as though they were not temporary. (I did try a test program with g++ and got different results without and with -std=c++0x.) If you cannot use a C++11 friendly compiler, adding this to the public section of `class MimeLogger` is a workaround that will do what you want with C++03: ``` template<typename T> MimeLogger& operator<<(const T& x) { static_cast<std::ostringstream&>(*this) << x; return *this; } using std::ostringstream::operator<<; ``` The using-declaration makes sure the member overloads from the standard library are also visible from `MimeLogger`. In particular, without it manipulators like `std::endl` don't work with the template operator, since `std::endl` is itself a function template, and that's too much template type deduction to expect from C++. But things are fine as long as we're sure not to hide the `ostream` member that makes the function manipulators work (27.7.3.6.3): ``` namespace std { template <typename charT, class traits> class basic_ostream : /*...*/ { public: basic_ostream<charT, traits>& operator<<( basic_ostream<charT,traits>& (*pf)(basic_ostream<charT,traits>&)); }; } ```
Why does UTF8 encoding change/corrupt bytes as opposed to Base64 and ASCII, when writing to file? I am writing an application which receives an encrypted byte array, consisting of a file name and file bytes, with the following protocol: `file_name_and_extension|bytes`. After the byte array is decrypted, passing it into `Encoding.UTF8.GetString(decrypted_bytes)` would be preferable, because I would like to trim `file_name_and_extension` from the received bytes and save the actual file bytes into a file named `file_name_and_extension`. I simplified my application to only receive file `bytes`, which are then passed into `Encoding.UTF8.GetString()` and back into a byte array with `Encoding.UTF8.GetBytes()`. After that, I am trying to write a zip file, but the file is invalid. It works when using `ASCII` or `Base64`.

```
private void Decryption(byte[] encryptedMessage, byte[] iv)
    {
        using (Aes aes = new AesCryptoServiceProvider())
        {
            aes.Key = receiversKey;
            aes.IV = iv;
            // Decrypt the message
            using (MemoryStream decryptedBytes = new MemoryStream())
            {
                using (CryptoStream cs = new CryptoStream(decryptedBytes, aes.CreateDecryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(encryptedMessage, 0, encryptedMessage.Length);
                    cs.Close();
                    string decryptedBytesString = Encoding.UTF8.GetString(decryptedBytes.ToArray()); //corrupts the zip
                    //string decryptedBytesString = Encoding.ASCII.GetString(decryptedBytes.ToArray()); //works
                    //String decryptedBytesString = Convert.ToBase64String(decryptedBytes.ToArray()); //works

                    byte[] fileBytes = Encoding.UTF8.GetBytes(decryptedBytesString);
                    //byte[] fileBytes = Encoding.ASCII.GetBytes(decryptedBytesString);
                    //byte[] fileBytes = Convert.FromBase64String(decryptedBytesString);
                    File.WriteAllBytes("RECEIVED\\received.zip", fileBytes);
                }
            }
        }
    }
```
Because one shouldn't try to interpret raw bytes as symbols in some encoding unless one actually knows/can deduce the encoding used.

If you receive some nonspecific raw bytes, then [process them as raw bytes](https://stackoverflow.com/questions/472906/how-to-get-a-consistent-byte-representation-of-strings-in-c-sharp-without-manual).

### But why does it work/not work?

Because:

1. `Encoding.ASCII` seems to ignore values greater than 127 and return them as they are. So no matter the encoding/decoding done, the raw bytes will be the same.
2. Base64 is a straightforward encoding that won't change the original data in any way.
3. UTF8 - theoretically, since those bytes are not a proper UTF-8 string, we may have some conversion data loss (though it would more likely result in an exception). But the most probable reason is a [BOM being added](https://stackoverflow.com/questions/2915182/how-do-i-ignore-the-utf-8-byte-order-marker-in-string-comparisons) during the `Encoding.UTF8.GetString` call that would remain there after `Encoding.UTF8.GetBytes`.

In any case, I repeat - do not encode/decode anything unless it is actually string data / the required format.
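For the code in the question, following that advice just means dropping the string round-trip and writing the decrypted bytes to disk as they are. A sketch of the question's method with only that change (everything else left as posted):

```
private void Decryption(byte[] encryptedMessage, byte[] iv)
{
    using (Aes aes = new AesCryptoServiceProvider())
    {
        aes.Key = receiversKey;
        aes.IV = iv;

        using (MemoryStream decryptedBytes = new MemoryStream())
        using (CryptoStream cs = new CryptoStream(decryptedBytes, aes.CreateDecryptor(), CryptoStreamMode.Write))
        {
            cs.Write(encryptedMessage, 0, encryptedMessage.Length);
            cs.Close();

            // No Encoding round-trip: the decrypted bytes are written as-is.
            File.WriteAllBytes("RECEIVED\\received.zip", decryptedBytes.ToArray());
        }
    }
}
```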
How to enable built-in VPN in OperaDriver? The opera browser has a built-in VPN which allows you to hide your IP while browsing. My question is can the VPN be turned on while using OperaDriver with selenium in python? *Attempt and problem in detail:* I have this script that goes to a website to display my IP address. ``` from selenium import webdriver from selenium.webdriver.opera.options import Options from time import sleep driver = webdriver.Opera(executable_path=r'/path/to/operadriver') driver.get('https://whatismyipaddress.com') sleep(10) driver.quit() ``` When I go to this site on the opera browser with VPN enabled, my IP is masked and some other IP address is shown. But my script opens up the browser to display my real IP address. I have searched almost all questions on OperaDriver on SO as well as on other sites. There seems to be absolutely no documentation or any other questions related to this anywhere. The closest I got was [this feature request on github](https://github.com/operasoftware/operachromiumdriver/issues/23). The OP says that he was able to make it work by using OperaOptions to load a custom profile. The code posted in the link is ``` OperaOptions operaOptions = new OperaOptions(); operaOptions.addArguments("user-data-dir", "~/Library/Application Support/com.operasoftware.Opera"); driver = new OperaDriver(operaOptions); ``` I tried to do this in python and nothing worked out. If it is of any concern I use Ubuntu 16.04, and OperaDriver is downloaded from the [official github page](https://github.com/operasoftware/operachromiumdriver/releases). Python version is `3.6.7` and Opera version is `57.0.3098.116` for `Ubuntu 16.04 LTS (x86_64; Unity)`.
You are trying to use OperaOptions not ChromeOptions, from <https://seleniumhq.github.io/selenium/docs/api/py/webdriver_opera/selenium.webdriver.opera.webdriver.html> > > options: this takes an instance of ChromeOptions > > > As kaqqao says > > "enable VPN from the GUI and the setting got saved in the active > profile." > > > ``` from selenium import webdriver from time import sleep # The profile where I enabled the VPN previously using the GUI. opera_profile = '/home/dan/.config/opera' options = webdriver.ChromeOptions() options.add_argument('user-data-dir=' + opera_profile) driver = webdriver.Opera(options=options) driver.get('https://whatismyipaddress.com') sleep(10) driver.quit() ``` Results: ``` First try IPv6: 2001:67c:2660:425:2:0:0:3f8 IPv4: 77.111.247.26 Second try IPv6: 2001:67c:2660:425:1a:0:0:1a0 IPv4: 77.111.247.66 Third try IPv4: 77.111.247.133 IPv6: Not detected Forth try IPv6: 2001:67c:2660:425:1c:0:0:1fe IPv4: 77.111.247.68 ``` None of which are my IP and the VPN icon is showing next to the address bar. **UPDATED** in response to question. From <https://techdows.com/2016/08/opera-profile-location.html> > > Simple way to know the profile path of Opera is just type > about://about in address bar and check for the Profile line under > paths. > > > On Windows 10 the code looks like this. ``` from selenium import webdriver from time import sleep # The profile where I enabled the VPN previously using the GUI. opera_profile = r'C:\\Users\\dan\\AppData\\Roaming\\Opera Software\\Opera Stable' options = webdriver.ChromeOptions() options.add_argument('user-data-dir=' + opera_profile) options._binary_location = r'C:\\Users\\dan\\AppData\\Local\\Programs\Opera\\58.0.3135.114\\opera.exe' driver = webdriver.Opera(executable_path=r'C:\\operadriver_win64\\operadriver.exe',options=options) driver.get('https://whatismyipaddress.com') sleep(10) driver.quit() ```
Isn't \d redundant in [\w\d]? I am reading a book and see tons of examples like this:

```
(?P<email>
 [\w\d.+-]+       # username
 @
 ([\w\d.]+\.)+    # domain name prefix
 (com|org|edu)    # limit the allowed top-level domains
 )
```

Since `\w` means `[a-zA-Z0-9_]` and `\d` means `[0-9]`, `\d` is a subset of `\w`. So, aren't those `\d`s redundant? Can someone please confirm that my understanding is correct, as this is driving me nuts.
Yes, this is redundant, and plain `\w` would work just as well. See <https://docs.python.org/2/library/re.html> > > `\d` > > > When the `UNICODE` flag is not specified, matches any decimal digit; this is equivalent to the set `[0-9]`. With `UNICODE`, it will match whatever is classified as a decimal digit in the Unicode character properties database. > > > `\w` > > > When the `LOCALE` and `UNICODE` flags are not specified, matches any alphanumeric character and the underscore; this is equivalent to the set `[a-zA-Z0-9_]`. With `LOCALE`, it will match the set `[0-9_]` plus whatever characters are defined as alphanumeric for the current locale. If `UNICODE` is set, this will match the characters `[0-9_]` plus whatever is classified as alphanumeric in the Unicode character properties database. > > >
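A quick check that dropping the `\d` makes no difference for patterns like the one in the question:

```
import re

print(re.findall(r'[\w\d.+-]+', 'user123+tag'))  # ['user123+tag']
print(re.findall(r'[\w.+-]+',   'user123+tag'))  # ['user123+tag'] - same result, \w already covers the digits
```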
Writing a JUnit test to check SharedPreferences data I am new to unit testing in Android and my attempt is to `assertTrue` that the data is successfully passed to a method and saved in `SharedPreferences`. This is my test so far:

```
public class AuthTest {

    Authorization authorization = new Authorization();

    @Before
    public void init() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void test_IfAuthIsSaved() {
        //test if the auth object is saved as a json string in SharedPreferences
        Auth auth = mock(Auth.class);

        authorization.saveAuth(auth);

        //test if the auth is received and saved to sharedpreferences
    }
}
```

**saveAuth method:**

```
public void saveAuth(Auth auth) {
    editor.putString("USER_AUTH", new Gson().toJson(auth));
    editor.commit();
}
```

What would the assertion look like for this?
You are mocking `Auth`, which does not interact with anything in your code, so you can't do any assertions on it. You need to change your testing approach:

### 1st Approach

- Mock `SharedPreferences.Editor` and inject it inside `Authorization`.
- Instantiate a new `Auth` object and invoke `authorization.saveAuth(auth)`.
- Assert that `editorMock.putString()` is invoked with the expected JSON.
- Assert that `editorMock.commit()` is invoked.

(A minimal sketch of this approach is shown at the end of this answer.)

This approach has some drawbacks:

- your test is coupled with the implementation
- if you decide to store the `Auth` data in some other kind of form you would need to change the test
- you are not really testing behavior (which you actually want to do)

### 2nd Approach

- Create a fake implementation of `SharedPreferences.Editor` and inject it inside `Authorization`.
- Create a new `Auth` object and invoke `authorization.saveAuth(auth)`.
- Retrieve auth after saving it by invoking `authorization.getAuth()` and assert that it is the same `Auth` that you saved.

Drawbacks:

- you need to create a fake implementation of `SharedPreferences.Editor` for test purposes that simulates the same behavior

Advantages:

- your test is not coupled with the implementation
- you are free to change the implementation without changing the test
- you are testing behavior, not methods

Some references to back up the second approach:

> *Now, from a technical point of view, retrieval of a stored object is really a subset of creation, since ...*
>
> Domain Driven Design by Eric Evans
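A minimal sketch of the first approach using Mockito. It assumes a hypothetical `Authorization(SharedPreferences.Editor)` constructor so the editor can be injected, and a hypothetical `Auth` constructor - adjust both to your real classes:

```
@Test
public void test_IfAuthIsSaved() {
    SharedPreferences.Editor editorMock = mock(SharedPreferences.Editor.class);
    Authorization authorization = new Authorization(editorMock);

    Auth auth = new Auth("someUser", "someToken"); // hypothetical constructor

    authorization.saveAuth(auth);

    // verify the expected JSON was written under the expected key and committed
    verify(editorMock).putString("USER_AUTH", new Gson().toJson(auth));
    verify(editorMock).commit();
}
```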
Linear Regression in R: "Error in eval(expr, envir, enclos) : object not found" I'm trying to do a simple least-squares regression in R and have been getting errors constantly. This is really frustrating, can anyone point out what I am doing wrong? First I attach the dataset (17 variables, 440 observations, each observation on a single line, no column titles). Here, I get a "masked" error. From what I've read, the "masked" error happens when objects overlap. However here I am not using any packages but the default, and I loaded a new workspace image before this. Not sure what this error refers to? ``` > cdi=read.table("APPENC02.txt", header=FALSE) > attach(cdi) The following objects are masked from cdi (position 3): V1, V10, V11, V12, V13, V14, V15, V16, V17, V2, V3, V4, V5, V6, V7, V8, V9 ``` Next, since the data set does not come with headings, I use the `colnames()` command to add column names, then check my work with the `head()` command: ``` colnames(cdi)<- c("IDnmbr","Countynm","Stateabv","LandArea","totpop","youngpct","oldpct","actphy","hspbed","srscrime","hsgrad","BAgrad","povpct","unempct","pcincome","totincome","georegion") > head(cdi) IDnmbr Countynm Stateabv LandArea totpop youngpct oldpct actphy hspbed srscrime hsgrad BAgrad povpct unempct pcincome totincome georegion 1 1 Los_Angeles CA 4060 8863164 32.1 9.7 23677 27700 688936 70.0 22.3 11.6 8.0 20786 184230 4 2 2 Cook IL 946 5105067 29.2 12.4 15153 21550 436936 73.4 22.8 11 etcetc(manually truncated) ``` Now the most annoying part: I can't get the lm() function to work! ``` > model1=lm(actphy~totpop) Error in eval(expr, envir, enclos) : object 'actphy' not found ``` It's not a upper/lowercase issue, and i've tried `"actphy"` and `actphy`. What gives? Also, the manual i'm following suggests using the `attach()` function but I've read a few posts discouraging it. What would be a better solution in this case? Thanks!
As @joran comments, `attach` is a dangerous thing. Just see, for example, this simple set of code: ``` > x <- 2:1 > d <- data.frame(x=1:2, y=3:4) > lm(y~x) Error in eval(expr, envir, enclos) : object 'y' not found > lm(y~x, data=d) Call: lm(formula = y ~ x, data = d) Coefficients: (Intercept) x 2 1 > attach(d) The following object is masked _by_ .GlobalEnv: x > lm(y~x, data=d) Call: lm(formula = y ~ x, data = d) Coefficients: (Intercept) x 2 1 > lm(y~x) Call: lm(formula = y ~ x) Coefficients: (Intercept) x 5 -1 ``` Using `attach` puts the data.frame on the search path, which allows you to cheat in `lm` by not specifying a `data` argument. However, this means that if there are objects in your global environment that have names conflicting with objects in your data.frame, weird stuff can happen, like in the last two results in the code shown above.
What should be the HTTP status code for a credentials-expired error/exception? I am developing RESTful APIs and have implemented token-based authentication, where the token digest is prepared using a timestamp. Now when a request comes to the API server, I check whether the supplied timestamp is invalid (i.e. a date-time from the future/past is specified) and then throw an error message indicating "future token detected" or "token has expired". I need to attach an HTTP status code, but I am confused about which status code is suitable for this situation. I have gone through the status codes available ([ref1](http://www.restapitutorial.com/httpstatuscodes.html), [ref2](http://robertlathanh.com/2012/06/http-status-codes-401-unauthorized-and-403-forbidden-for-authentication-and-authorization-and-oauth/)); so far I think using 400 'Bad Request' will be more suitable here than the 401 'Unauthorized' and 403 'Forbidden' status codes. What do you think, guys?
As the timestamp is invalid, I think the token is invalid. So the client is not authenticated anymore. So I would throw a 401 Unauthorized.

You're free to add extra context data as an HTTP header with the X- prefix, or to add a response body encoded according to the Accept request header (json, text, etc.). Like:

```
{
  "error": {
     "status": 401,
     "details": {
        "code": "401.3",
        "description": "The timestamp provided must not be in the future."
     }
  }
}
```

It is not a 403 Forbidden: 403 means "the client is authenticated but does not have the right to send this request". In your case, I think the client is not authenticated any more.
Difference between "perl" and "perl -w"? I am learning Perl and am a very new user. May I know what's the difference between these two Perl lines?

```
#!/usr/bin/perl
```

&

```
#!/usr/bin/perl -w
```
That is not Perl code, it's a [shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29), which is used in a Linux/Unix environment as a way to tell the shell what program should be used to run the program file. It has no effect on Windows, but it does activate any switches used.

The `-w` part is a switch for Perl, telling it to activate warnings. You can learn more about Perl's command-line switches by typing `perl -h` at the command prompt. Read more in [`perldoc perlrun`](http://perldoc.perl.org/perlrun.html).

`-w` is the older version of the `use warnings` pragma; the pragma is the preferred form nowadays. While `-w` is global, `use warnings` is lexical, and can be activated selectively.
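A minimal example of the modern, lexical form (the uninitialized variable is only there to trigger a warning):

```
#!/usr/bin/perl
use strict;
use warnings;   # lexical equivalent of the global -w switch

my $count;
print $count + 1;   # warns: "Use of uninitialized value $count in addition ..."
```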
Ternary operator inside an object in JS I'm creating a React app, and I have a code segment that looks like this: ``` import React, { Component } from 'react'; import { RaisedButton } from 'material-ui'; let isZero = false; class Button extends Component { render() { const { value } = this.props; isZero = false; if (value === 0) { isZero = true; } // removed for brevity } } const styles = { otherStyles: { minWidth: isZero ? '120px' : '60px', margin: '5px 5px 5px 0', lineHeight: isZero ? '120px' : '60px', }, }; export default Button; ``` But, apparently the conditional statement is not applied inside the object, as when `value` is 0, I still get the `60px` instead of `120px`. Any ideas?
Your styles are defined outside of render and hence are not recalculated every time the variable isZero changes; define them inside render if you want them to be dynamic:

```
import React, { Component } from 'react';
import { RaisedButton } from 'material-ui';

class KeyPadButton extends Component {
  isZero = false;

  render() {
    const { click, disabled, value } = this.props;

    // compute the flag first, then build the styles from it
    this.isZero = value === 0;

    const styles = {
      buttonStyles: {
        float: 'left',
        width: this.isZero ? '120px' : '60px',
        height: this.isZero ? '120px' : '60px',
        border: '1px solid #f9f9f9',
        borderRadius: '5px',
      },
      otherStyles: {
        minWidth: this.isZero ? '120px' : '60px',
        margin: '5px 5px 5px 0',
        lineHeight: this.isZero ? '120px' : '60px',
      },
    };

    if (this.isZero) {
      console.log(value);
      console.log(this.isZero);
      console.log(styles.buttonStyles);
      console.log(styles.otherStyles);
    }

    return (
      <RaisedButton
        label={value}
        buttonStyle={styles.buttonStyles}
        style={styles.otherStyles}
        disabled={disabled}
        onClick={() => click(value)}
      />
    );
  }
}

export default KeyPadButton;
```
finding element of numpy array that satisfies condition One can use `numpy`'s `extract` function to match an element in an array. The following code matches an element `'a.'` exactly in an array. Suppose I want to match all elements containing `'.'`, how would I do that? Note that in this case, there would be two matches. I'd also like to get the row and column number of the matches. The method doesn't have to use `extract`; any method will do. Thanks. ``` In [110]: x = np.array([['a.','cd'],['ef','g.']]) In [111]: 'a.' == x Out[111]: array([[ True, False], [False, False]], dtype=bool) In [112]: np.extract('a.' == x, x) Out[112]: array(['a.'], dtype='|S2') ```
You can use the [string operations](http://docs.scipy.org/doc/numpy/reference/routines.char.html#string-information): ``` >>> import numpy as np >>> x = np.array([['a.','cd'],['ef','g.']]) >>> x[np.char.find(x, '.') > -1] array(['a.', 'g.'], dtype='|S2') ``` **EDIT:** As per request in the comments... If you want to find out the indexes of where the target condition is true, use [numpy.where](http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html): ``` >>> np.where(np.char.find(x, '.') > -1) (array([0, 1]), array([0, 1])) ``` or ``` >>> zip(*np.where(np.char.find(x, '.') > -1)) [(0, 0), (1, 1)] ```
How to debug 'npm ERR! 403 In most cases, you or one of your dependencies are requesting a package version that is forbidden by your security policy.' I am currently trying to set up a Jenkins and a private npm repository (Sonatype Nexus). I get the following error when I try to publish to the repository within a Jenkins build pipeline. ``` + npm publish --registry https://<my-private-registry>/repository/npm-private/ npm notice npm notice package: ts-acoustics@0.0.0 npm notice === Tarball Contents === npm notice 2.4kB Jenkinsfile ... ('notice' level info about the files) ... npm notice === Tarball Details === npm notice name: ts-acoustics npm notice version: 0.0.0 npm notice package size: 13.8 kB npm notice unpacked size: 47.5 kB npm notice shasum: 554b6d2b41321d78e00f6a309bb61c9181a2e3d6 npm notice integrity: sha512-QtExdu6IqZ+lH[...]r+HXolo4YCFPg== npm notice total files: 17 npm notice npm ERR! code E403 npm ERR! 403 403 Forbidden - PUT https://<my-private-registry>/repository/npm-private/ts-acoustics npm ERR! 403 In most cases, you or one of your dependencies are requesting npm ERR! 403 a package version that is forbidden by your security policy. ``` I find no further info about why it is forbidden in the Nexus logs and [this open GitHub bug](https://github.com/npm/cli/issues/622) tells me that the above error text is leading in the wrong direction in most of the cases?! Any idea of how to proceed to make publishing work?! --- **Update 1: I just saw that I have the same problem when I try to publish it manually!** So Jenkins is out of the equation for simplicity reasons. **Update 2:** I can do `npm adduser --registry...` and npm tells me ``` Logged in as <my-user> on https://<my-private-registry>/repository/npm-private/. ``` When I do `npm whoami --registry...` it displays the correct user name. When I do `npm publish --registry...` in the project, it shows the 403 Error
***How to debug this:*** As you can see by all the answers, there are a lot of things that result in the same failure message. Here is how you can find your root cause:

In the Nexus Repository Manager -> **menu entry "Logging"**

There you can simply change the log level for each Java package Nexus consists of at runtime. Change all log levels for packages including "security" or "rest" to TRACE and trigger your request again.

In the **LogViewer** (also part of Nexus) you can hopefully see all the necessary information to understand the problem now.

---

In my case, I had to add the `nx-repository-view-*-*-edit` privilege to the role I had created for the user that Jenkins uses to log in to Nexus. I thought `nx-repository-view-*-*-add` was enough to publish.

Hope it helps!
Showing the same AlertDialog again I was testing the behavior of AlertDialog to integrate in a bigger component. I am unable to show the same dialog again. Here is the test code: ``` public class MainActivity extends AppCompatActivity { private AlertDialog alertDialogACCreationRetry; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); alertDialogACCreationRetry = new AlertDialog.Builder(this) .setTitle("Account creation failed") .setMessage("There was a problem connecting to the Network. Check your connection.") .setPositiveButton("Retry", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { } }) .setNegativeButton("Cancel", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { } }).create(); alertDialogACCreationRetry.show(); alertDialogACCreationRetry.show(); } } ``` I have tried putting the `alertDialogACCreationRetry.show();` inside the Retry button but it still won't show. I have also tried putting `alertDialogACCreationRetry.dismiss();` inside the Retry button and then calling `alertDialogACCreationRetry.show();` outside, it still doesn't show. More so it is frightening that it doesn't give me an exception on reshowing it if that is not supposed to be allowed. So, my question is this: **Will I have to create a new Dialog every time after once it is dismissed(automatically) on pressing a button?**
``` public void showAlertDialog() { final AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setTitle("Time"); builder.setPositiveButton("Retry", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int i) { dialog.cancel(); // call function show alert dialog again showAlertDialog(); } }); builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }); final AlertDialog alert = builder.create(); alert.show(); } ``` [![enter image description here](https://i.stack.imgur.com/nzjS3.gif)](https://i.stack.imgur.com/nzjS3.gif)
Replicating data from GAE data store We have an application that we're deploying on GAE. I've been tasked with coming up with options for replicating the data that we're storing in the GAE data store to a system running in Amazon's cloud. Ideally we could do this without having to transfer the entire data store on every sync. The replication does not need to be in anything close to real time, so something like a once or twice a day sync would work just fine. Can anyone with some experience with GAE help me out here with what the options might be? So far I've come up with:

1. Use the Google-provided bulkloader.py to export the data to CSV and somehow transfer the CSV to Amazon and process it there
2. Create a Java app that runs on GAE, reads the data from the data store and sends the data to another Java app running on Amazon.

Do those options work? What would be the gotchas with those? What other options are there?
You could use logic similar to what the App Engine HRD migration or backup tools are doing:

1. Mark modified entities with a child entity marker
2. Run a MapperPipeline using the [App Engine mapreduce library](http://code.google.com/p/appengine-mapreduce/) iterating over those entities using a Datastore Input Reader
3. In your map function fetch the parent entity, serialize it to Google Storage using a File Output Writer and remove the marker
4. Ping the remote host to import those entities from the Google Storage URL

As an alternative to 3 and 4, you could make multiple urlfetch(POST) calls to send each serialized entity to the remote host directly, but it is more fragile as a single failure could compromise the integrity of your data import.

You could look at the [datastore admin source code](http://code.google.com/p/googleappengine/source/browse/#svn/trunk/python/google/appengine/ext/datastore_admin) for inspiration.
Why is it recommended to use concat then uglify when the latter can do both? I keep seeing the recommendation for making JS files ready for production to be concat then uglify. For example [here](https://github.com/yeoman/grunt-usemin#transformation-flow), in on of Yeoman's grunt tasks. > > By default the flow is: concat -> uglifyjs. > > > Considering UglifyJS can do both concatenation and minification, why would you ever need both at the same time? Thanks.
Running a basic test to see if there is a performance difference between executing `concat` and then `uglify` vs. just `uglify`. **package.json** ``` { "name": "grunt-concat-vs-uglify", "version": "0.0.1", "description": "A basic test to see if we can ditch concat and use only uglify for JS files.", "devDependencies": { "grunt": "^0.4.5", "grunt-contrib-concat": "^0.5.0", "grunt-contrib-uglify": "^0.6.0", "load-grunt-tasks": "^1.0.0", "time-grunt": "^1.0.0" } } ``` **Gruntfile.js** ``` module.exports = function (grunt) { // Display the elapsed execution time of grunt tasks require('time-grunt')(grunt); // Load all grunt-* packages from package.json require('load-grunt-tasks')(grunt); grunt.initConfig({ paths: { src: { js: 'src/**/*.js' }, dest: { js: 'dist/main.js', jsMin: 'dist/main.min.js' } }, concat: { js: { options: { separator: ';' }, src: '<%= paths.src.js %>', dest: '<%= paths.dest.js %>' } }, uglify: { options: { compress: true, mangle: true, sourceMap: true }, target: { src: '<%= paths.src.js %>', dest: '<%= paths.dest.jsMin %>' } } }); grunt.registerTask('default', 'concat vs. uglify', function (concat) { // grunt default:true if (concat) { // Update the uglify dest to be the result of concat var dest = grunt.config('concat.js.dest'); grunt.config('uglify.target.src', dest); grunt.task.run('concat'); } // grunt default grunt.task.run('uglify'); }); }; ``` In `src`, I've put a bunch of JS files, including the uncompressed source of jQuery, copied several times, spread around into subfolders. Much more than what a normal site/app usually has. Turns out the time it takes to concat and compress all of these files is essentially the same in both scenarios. **Except** when using the `sourceMap: true` option on `concat` as well (see below). On my computer: ``` grunt default : 6.2s (just uglify) grunt default:true : 6s (concat and uglify) ``` It's worth noting that the resulting `main.min.js` is the same in both cases. Also, `uglify` automatically takes care of using the proper separator when combining the files. The only case where it does matter is when adding `sourceMap: true` to the `concat` `options`. This creates a `main.js.map` file next to `main.js`, and results in: ``` grunt default : 6.2s (just uglify) grunt default:true : 13s (concat and uglify) ``` But if the production site loads only the `min` version, this option is useless. I did found a major **disadvantage** with using `concat` before `uglify`. When an error occurs in one of the JS files, the `sourcemap` will link to the concatenated `main.js` file and not the original file. Whereas when `uglify` does the whole work, it **will** link to the original file. **Update:** We can add 2 more options to `uglify` that will link the `uglify` sourcemap to `concat` sourcemap, thus handling the "disadvantage" I mentioned above. ``` uglify: { options: { compress: true, mangle: true, sourceMap: true, sourceMapIncludeSources: true, sourceMapIn: '<%= paths.dest.js %>.map', }, target: { src: '<%= paths.src.js %>', dest: '<%= paths.dest.jsMin %>' } } ``` But it seems highly unnecessary. ### Conclusion I think it's safe to conclude that we can ditch `concat` for JS files if we're using `uglify`, and use it for other purposes, when needed.
How to implement a simple TCP protocol using Akka Streams? I took a stab at implementing a simple TCP-based protocol for exchanging messages with Akka Streams (see below). However, it seems like the *incoming* messages are not processed immediately; that is, in the scenario where two messages are sent one after another from the client, the first message is only printed *after* something is sent from the server: ``` At t=1, on [client] A is entered At t=2, on [client] B is entered At t=3, on [server] Z is entered At t=4, on [server] A is printed At t=5, on [server] Y is entered At t=6, on [server] B is printed ``` What I expected/want to see: ``` At t=1, on [client] A is entered At t=2, on [server] A is printed At t=3, on [client] B is entered At t=4, on [server] B is printed At t=5, on [server] Z is entered At t=6, on [client] Z is printed At t=7, on [server] Y is entered At t=8, on [client] Y is printed ``` What am I missing? Perhaps I need to, somehow, make the sinks at both ends eager? Or is each sink somehow blocked by the corresponding source (while the source is waiting for input from the command line)? ``` import java.nio.charset.StandardCharsets.UTF_8 import akka.actor.ActorSystem import akka.stream.ActorMaterializer import akka.stream.scaladsl.{BidiFlow, Flow, Sink, Source, Tcp} import akka.util.ByteString import com.typesafe.config.ConfigFactory import scala.io.StdIn object AkkaStreamTcpChatter extends App { implicit val system = ActorSystem("akka-stream-tcp-chatter", ConfigFactory.defaultReference()) implicit val materializer = ActorMaterializer() type Message = String val (host, port) = ("localhost", 46235) val deserialize:ByteString => Message = _.utf8String val serialize:Message => ByteString = message => ByteString(message getBytes UTF_8) val incoming:Flow[ByteString, Message, _] = Flow fromFunction deserialize val outgoing:Flow[Message, ByteString, _] = Flow fromFunction serialize val protocol = BidiFlow.fromFlows(incoming, outgoing) def prompt(s:String):Source[Message, _] = Source fromIterator { () => Iterator.continually(StdIn readLine s"[$s]> ") } val print:Sink[Message, _] = Sink foreach println args.headOption foreach { case "server" => server() case "client" => client() } def server():Unit = Tcp() .bind(host, port) .runForeach { _ .flow .join(protocol) .runWith(prompt("S"), print) } def client():Unit = Tcp() .outgoingConnection(host, port) .join(protocol) .runWith(prompt("C"), print) } ```
I think the problem is that Akka Stream does [operator fusion](http://doc.akka.io/docs/akka/2.4.2/scala/stream/stream-flows-and-basics.html#Operator_Fusion). This means that the complete flow handling runs on a single actor. When it blocks for reading your messages it cannot print out anything. The solution would be to add an async boundary, after your source. See an example below. ``` def server(): Unit = Tcp() .bind(host, port) .runForeach { _ .flow .join(protocol) .runWith(prompt("S").async, print) // note .async here } def client(): Unit = Tcp() .outgoingConnection(host, port) .join(protocol).async .runWith(prompt("C").async, print) // note .async here ``` When you add an async boundary, then the fusion does not happen across the boundary, and the `prompt` runs on another actor, thus is not blocking `print` from showing anything.
Is it possible to activate a thread every 1us with python? For my application, a need a function that records an Ethernet Frame every 1us. Is it possible to do it with python/threading ? The maximum delay I got with threading.Timer is close to 10ms.
No. 1µs is well below the granularity regular operating systems offer, which is usually measured in **milli**seconds (i.e. thousands of µs). See [this answer for a discussion about Linux time slices](https://stackoverflow.com/questions/16401294/how-to-know-linux-scheduler-time-slice) as well as [this one](https://stackoverflow.com/questions/55671406/how-to-measure-the-length-of-time-slices-on-os-such-as-windows-or-linux). Needless to say, if the operating system cannot offer such granularity, then there's no hope for anything running in user space.

If you actually need µs-level precision, you need to be looking at real-time systems. And since you have the `ethernet` tag in your question, you might also want to look at [network processors](https://en.wikipedia.org/wiki/Network_processor).
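You can see the granularity problem for yourself by measuring how long a requested 1 µs sleep actually takes on a regular OS:

```
import time

requested = 1e-6  # 1 microsecond
samples = []
for _ in range(1000):
    start = time.perf_counter()
    time.sleep(requested)
    samples.append(time.perf_counter() - start)

print(f"requested: {requested * 1e6:.0f} us")
print(f"shortest actual sleep: {min(samples) * 1e6:.1f} us")
print(f"average actual sleep:  {sum(samples) / len(samples) * 1e6:.1f} us")
```

On a typical desktop the actual sleeps come back orders of magnitude longer than 1 µs, which is exactly the scheduler granularity described above.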
WinSCP and PuTTY drop out constantly, on another computer they don't? This has been bothering me for a long time. When I ssh or sftp into my server via PuTTY or WinSCP, it drops out after 2 minutes, then has to reconnect. PuTTY will just lock up and not let me type... it won't say inactive, just a pure lockup. WinSCP will say *"host not communicating"*, then I press reconnect and it reconnects... This happens within 3-5 minutes! It's driving me **mad**! On my other PC, connecting to the same server with the same version of both programs, **it will stay connected for 24 hours**! I even reformatted my PC and still have the same issue. Do you have any ideas to solve this?
This means your server has some connection timeout (take a look at its `sshd_config` file or equivalent). You can try to play with the "keep alive" options in PuTTY or WinSCP. Here are my options for a similar connection:

**PuTTY**:

![enter image description here](https://i.stack.imgur.com/lOwC4.png)

**WinSCP**

Open the following dialog by going to your site, clicking *Edit* and *Advanced* and going to the [*Connection* page](https://winscp.net/eng/docs/ui_login_connection):

[![enter image description here](https://i.stack.imgur.com/CfJYy.png)](https://i.stack.imgur.com/CfJYy.png)
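If you also administer the server, the server-side timeout/keep-alive behaviour is usually controlled by directives like these in `sshd_config` (the values below are only illustrative):

```
# /etc/ssh/sshd_config
ClientAliveInterval 60    # send a keep-alive probe after 60 s of inactivity
ClientAliveCountMax 3     # disconnect after 3 unanswered probes
TCPKeepAlive yes
```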
Mysql can't create table errno 121 Why am I getting this error? I don't have any foreign keys ``` drop table if exists t_issue; SET foreign_key_checks = 0;SET storage_engine=INNODB; CREATE TABLE `t_issue` ( `id_issue` int(11) NOT NULL PRIMARY KEY AUTO_INCREMENT, `fk_project` int(11) DEFAULT NULL, `subject` varchar(255) DEFAULT NULL, `estimated_due_date` date DEFAULT NULL, `due_date` date DEFAULT NULL, `done_ratio` int(11) DEFAULT NULL, `fk_status` int(11) DEFAULT NULL, `fk_assigned_to` int(11) DEFAULT NULL, `fk_owner` int(11) DEFAULT NULL ) ENGINE=innodb DEFAULT CHARSET=latin1 ```
# Mysql can't create table errno 121 You will get this message if you are trying to add a constraint with a name that is already used somewhere else. To check constraints, use the following SQL query: ``` SELECT constraint_name, table_name FROM information_schema.table_constraints WHERE constraint_type = 'FOREIGN KEY' AND table_schema = DATABASE() ORDER BY constraint_name; ``` Reference: <https://dba.stackexchange.com/questions/425/error-creating-foreign-key-from-mysql-workbench> See also: [SQL - error code 1005 with error number 121](https://stackoverflow.com/questions/4412693/sql-error-code-1005-with-error-number-121)
What's the difference between thread\_posixs and thread\_win32 in gcc port of Windows? I wanted to download the latest available version of `gcc 4.7.2` compiler for Windows. When I got to this [page](http://sourceforge.net/projects/mingwbuilds/files/host-windows/releases/4.7.2/32-bit/) where I was supposed to see a download link, I confronted with two categories: 1. threads-posix 2. threads-win32 What's the difference between these two? Are they thread implementations only? I mean are they only different in how they are implemented and thus the ending result (classes, how to use them, etc) remains the same? Or do they impose a specific coding style?
So, the link you provided leads to builds of the standalone gcc 4.7.2 for windows, a.k.a mingw64. In order to build this compiler, a set of scripts are used, which help define the options of compilations. The scripts are simply called MinGW-builds, and can be found in different places: - [google code](https://code.google.com/p/mingw-builds) - [github fork](https://github.com/niXman/mingw-builds) The scripts have an option which specify which thread model is to be used for the std::threads part of the C++11 standard (this is allowed for MinGW thanks to an [experimental patch](http://tehsausage.com/mingw-std-thread-gcc-4-7) applied on that version of GCC). In one case, the win32 thread API is used, and in the other case it's the posix API which is used. *Note that Windows doesn't support all the POSIX API out of the box, so some external emulation library needs to be used (winpthreads).* The GCC source configure script has an option to specify that API (--enable-threads=), and that's what is used in the build scripts. In short, for this version of mingw, the threads-posix release will use the posix API and **allow the use of std::thread**, and the threads-win32 will use the win32 API, and **disable the std::thread part of the standard**.
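The practical consequence is easy to check: a trivial `std::thread` program like the one below should compile with the threads-posix build, while a threads-win32 build of that GCC version will reject it because the `<thread>` support is disabled.

```
#include <iostream>
#include <thread>

int main()
{
    std::thread t([] { std::cout << "hello from std::thread\n"; });
    t.join();
    return 0;
}
```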
How to find a lower bound in a sorted vector I'm pretty new to C++ and do not understand all the concepts of the STL library, so bear with me. I wrote the following code snippet (pasted below) to find the lower\_bound in a sorted vector. Although this code works fine in Release mode, it asserts in debug mode (VStudio-8). I believe this is because `less_equal<int>` is not a strictly weak ordering . From the following thread: [stl ordering - strict weak ordering](https://stackoverflow.com/questions/1293231/stl-ordering-strict-weak-ordering) I do sort of understand that a weak ordering is imposed by the STL, but I'm still not very clear why? In my case below I need to use `less_equal<int>` since I'm trying to find the nearest element to a given value in a sorted vector. Is the code snippet below even valid? Also, is there a better way to do it? Also any insights/references to what exactly is weak and partial ordering would be helpful. ``` int main() { vector<int> dest; for(int i = 0;i <6;i++) { dest.push_back(i); } vector<int>::iterator i = std::lower_bound(dest.begin(),dest.end(),4,less_equal< int >()); return 1; } ```
The STL uses strict weak orderings because given a SWE (let's denote it `<`), you can define all six of the relational operators:

```
x < y iff x < y
x <= y iff !(y < x)
x == y iff !(x < y || y < x)
x != y iff (x < y || y < x)
x >= y iff !(x < y)
x > y iff y < x
```

As for the problem you're trying to solve, if you want the value as close as possible to the target value, you really don't need to use `less_equal` here. Rather, use `lower_bound` to get an iterator to the smallest element at least as large as the value you're looking for (using the default `<` comparison on integers), then compare that value to the value before it (assuming, of course, that both these values exist!) The value from `lower_bound` is the smallest element at least as large as x and the element before that value is the largest value smaller than x, so one of the two must be the closest.

As for why the program was asserting, it's quite possible that it's due to the fact that `<=` is not a strict weak ordering, but I can't be certain about that. Switching to the above approach should fix it unless the problem is from some other source.
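A minimal sketch of that closest-element idea (it assumes the vector is sorted and non-empty, and sticks to C++03 syntax):

```
#include <algorithm>
#include <vector>

// Returns an iterator to the element of the sorted, non-empty vector v
// that is closest to x.
std::vector<int>::const_iterator closest(const std::vector<int>& v, int x)
{
    std::vector<int>::const_iterator hi =
        std::lower_bound(v.begin(), v.end(), x);   // smallest element >= x
    if (hi == v.begin()) return hi;                // every element is >= x
    if (hi == v.end())   return hi - 1;            // every element is < x
    std::vector<int>::const_iterator lo = hi - 1;  // largest element < x
    return (x - *lo) <= (*hi - x) ? lo : hi;       // pick the nearer one
}
```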
JSF 2.0 app does not find resource I am new to developing on JSF (and new to web development in general) and I am trying to put an image on a page. I'm developing my JSF app in Eclipse and running it on Glassfish 3.1.2. I have registered Glassfish as a server in Eclipse, and running the app via Eclipse. I use the following markup on my xhtml page: ``` <h:graphicImage library="images" name="logo.png"/> ``` I copied the image in META-INF/resources/images/logo.png . The image does not appear on the page and when I view the page source I see the element ``` <img src="RES_NOT_FOUND" /> ``` indicating that the image is not found. When I export my app to a war file and deploy it onto Glassfish via the autodeploy folder, I get the same results - the page displays, but the image does not appear. Can anyone advise why the image resource is not found?
> > *I copied the image in META-INF/resources/images/logo.png* > > > This location will only work if the project represents a standalone module JAR which ultimately ends up in `/WEB-INF/lib` of the WAR. This does not seem to the case in your particular case. Nothing in your question indicates that you're indeed developing a module JAR file. Apparently you've placed it straight in the WAR this way. This is not right. You need to put the `/resources` folder straight in the public web content, not in the `META-INF` folder. ``` WebContent |-- META-INF |-- WEB-INF | |-- faces-config.xml | `-- web.xml |-- resources | `-- images | `-- logo.png `-- index.xhtml ``` ### See also: - [Structure for multiple JSF projects with shared code](https://stackoverflow.com/questions/8320486/structure-for-multiple-jsf-projects-with-shared-code/8320738#8320738) - [Why some resource files are put under META-INF directory](https://stackoverflow.com/questions/5609272/why-some-resource-files-are-put-under-meta-inf-directory/5609325#5609325) --- **Unrelated** to the concrete problem, using `images` as library name does not look entirely right. Just use ``` <h:graphicImage name="images/logo.png"/> ``` ### See also: - [What is the JSF resource library for and how should it be used?](https://stackoverflow.com/questions/11988415/what-is-the-jsf-resource-library-for-and-how-should-it-be-used)
Difference between this and .this? What is the difference between `this` and `.this` when calling functions? And, what happens when `this` or `this.` is used? Example: ``` class reference { public void object() { reference obj = new reference(); this.obj(); } } ```
The `Class.this` syntax is useful when you have a non-static nested class that needs to refer to its enclosing class's instance. It is only used in cases where there is an inner class, and one needs to refer to the enclosing class.

Within an instance method or a constructor, `this` is a reference to the current object — the object whose method or constructor is being called. You can refer to any member of the current object from within an instance method or a constructor by using `this`.

A good example:

```
public class TestForThis {
    String name;

    public void setName(String name){
        this.name = name;
    }

    public String getName() {
        return name;
    }

    class TestForDotThis {
        String name ="in";
        String getName() {
            return TestForThis.this.name;
        }
    }

    public static void main(String[] args) {
        TestForThis t = new TestForThis();
        t.setName("out");
        System.out.println(t.getName());

        TestForThis.TestForDotThis t1 = t.new TestForDotThis();
        System.out.println(t1.getName());
    }
}
```

Output will be

```
out
out
```
Build MVC structure on top of Sinatra I'm learning Sinatra and I was wondering if someone knows a good way to make an MVC structure for a project with Sinatra. I've got some ideas but they seems too much cumbersome to me.
Sinatra is already "VC" - you have views separated from your routes (controllers). You can choose to break it into multiple files if you like; for more on that, see this answer (mine): [Using Sinatra for larger projects via multiple files](https://stackoverflow.com/questions/5015471/using-sinatra-for-larger-projects-via-multiple-files/5030173#5030173) To add an "M" (model), pick a database framework. Some people like [ActiveRecord](http://ar.rubyonrails.org/). Some people like [DataMapper](http://datamapper.org/). There are many more from which you might choose. I personally love and highly recommend [Sequel](http://sequel.rubyforge.org). My answer linked above also suggests a directory structure and shell for including the models. Once you distribute appropriate logic between your models and controllers, you have your "MVC". Note that MVC is not about separate files, but separation of concerns. If you set up a Sinatra application as I suggest above, but have your views fetching data from your models, or you have your routes directly generating HTML (not through a *"helper"*), then you don't really have MVC. Conversely, you can do all of the above in a single file and still have an MVC application. Just put your data-integrity logic in your models (and more importantly, in the database itself), your presentation logic in your views and reusable helpers, and your mapping logic in your controllers.
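A minimal sketch of how the pieces can fit together with Sequel (the `posts` table, the `blog.db` file and the `views/post.erb` template are assumed to exist already):

```
require 'sinatra'
require 'sequel'

DB = Sequel.connect('sqlite://blog.db')

# Model: data-integrity logic lives here (and in the database itself)
class Post < Sequel::Model
end

# Controller: mapping logic
get '/posts/:id' do
  @post = Post[params[:id]]
  erb :post               # View: presentation logic in views/post.erb
end
```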
How to visually horizontally center an emoji in Chrome? It is surprisingly hard to visually horizontally center an emoji in Google Chrome, as there appears to be whitespace to the right of the emoji where there shouldn't be. An example: ``` .avatar { width: 30px; padding: 10px; background-color: #eee; border-radius: 50%; display: grid; justify-items: center; align-items: center; } ``` ``` <div class="avatar"> <div>&#x1F436;</div> </div> ``` <https://codepen.io/tommedema/pen/xxbXBRe> In Chrome 79.0.3945.79 on MacOS Catalina 10.15.2 this renders as: [![Chrome 79.0.3945.79](https://i.stack.imgur.com/61znr.png)](https://i.stack.imgur.com/61znr.png) Clearly it's not visually horizontally centered. Yet in other browsers like Safari and Firefox 71 it is: [![Firefox 71](https://i.stack.imgur.com/8mDyf.png)](https://i.stack.imgur.com/8mDyf.png) Regarding Carol's answer of using font-size and box-sizing, the result is still the same. I've selected the emoji/text so you can more clearly see the issue of there being whitespace to the right of the emoji, but only on Chrome and not on other browsers: [![Chrome with box-sizing](https://i.stack.imgur.com/UWEM8.png)](https://i.stack.imgur.com/UWEM8.png)
This appears to be an old Chromium rendering bug that specifically affects Retina devices. That might explain why some other posters are suggesting solutions that don't work for you! See the bug report here: <https://bugs.chromium.org/p/chromium/issues/detail?id=551420>. There's no ETA on a fix, of course... I have stumbled across something interesting playing around with font sizes though. At larger font sizes (approx 22px in my testing, but this might be dependent on a variety of factors), the problem goes away entirely. Therefore, my suggested fix is a bit of a workaround, but should be safe for other browsers too. Double the font size, but scale it down again using `transform`: ``` .avatar { font-size: 30px; /* double the size you wanted */ ... } .avatar div { transform: scale(0.5); /* reduce size by 50%, back to originally intended size */ } ```
Why does Elasticsearch Cluster JVM Memory Pressure keep increasing? The JVM Memory Pressure of my AWS Elasticsearch cluster has been increasing consistently. The pattern I see for the last 3 days is that it adds 1.1% every 1 hour. This is for one of the 3 master nodes I have provisioned. All other metrics seem to be in the normal range. The CPU is under 10% and there are barely any indexing or search operations being performed. I have tried clearing the cache for `fielddata` for all indices as mentioned in [this document](https://aws.amazon.com/premiumsupport/knowledge-center/high-jvm-memory-pressure-elasticsearch/) but that has not helped. Can anyone help me understand what might be the reason for this? [![Step Pattern](https://i.stack.imgur.com/5L3fE.png)](https://i.stack.imgur.com/5L3fE.png)
Got this answer from AWS Support > > I checked the particular metric and can also see the JVM increasing from the last few days. However, I do not think this is an issue as JVM is expected to increase over time. Also, the garbage collection in ES runs once the JVM reaches 75% (currently its around 69%), after which you would see a drop in the JVM metric of your cluster. If JVM is being continuously > 75 % and not coming down after GC's is a problem and should be investigated. > > > The other thing which you mentioned about clearing the cache for > fielddata for all indices was not helping in reducing JVM, that is > because the dedicated master nodes do not hold any indices data and > their related caches. Clearing caches should help in reducing JVM on > the data nodes. > > >
Downloading + caching + playing audio at the same time Is there a beautiful way to download an MP3 file via HTTP and cache it, playing it at the same time? I tried to write downloaded bytes to a file and play that file using MediaPlayer. But when playing speed reaches downloading speed, playback stops. I don't think I'm thinking in a right direction. Writing into a file and reading it it at the same time seems to be quite strange, or am I wrong?
> But when playing speed reaches downloading speed, playback stops.

This is the key: you cannot play what you have not downloaded yet, no matter how you handle files, bytes, etc. on the client side.

Most applications that try to play "live" have a buffer long enough to compensate for fluctuations in download speed. And if the download speed is actually slower than the playback speed (which is unusual for an audio stream, but possible) your goal may not be achievable at all - you may have to download the whole song before playing it in order to ensure there will be no pauses in the playback.
Dereference may produce NullPointerException when using getter I'm using a custom View class with an instance reference which serves as an Editor. View is only used in a fragment. I need the instance reference so I can always get custom parameters of the custom View. ``` public static StoryView instance; private Story story; public static Story getCurrentStory(){ if(instance == null) return null; else return instance.story; } ``` However, when I'm using this getter method to change the contents of Navigation Drawer, I'm getting a warning: [![enter image description here](https://i.stack.imgur.com/vw5d1.png)](https://i.stack.imgur.com/vw5d1.png) In here: ``` private static IDrawerItem[] drawerEditorItems(){ Story s = StoryView.getCurrentStory(); SectionDrawerItem section_editor = new SectionDrawerItem() .withName(str("placeholder_story_by", s.name, s.author)) .withDivider(false); return new IDrawerItem[]{ section_editor }; } ``` `str(String id, Object... args)` is a static method that basically formats i18n strings. --- My guess is that the reference `s` is getting destroyed at the end of the function scope maybe by assigning `s = null`? And maybe that might destroy the actual `instance.story` from my custom View?
When you call

```
public static Story getCurrentStory(){
    if(instance == null) return null;
    else return instance.story;
}
```

you check to make sure instance isn't null. If it is, you return null. What may be the case here is that instance is always null (never initialized). Meaning you have to ensure instance is initialized before calling it if you want to get the current story.

Also, this is technically not necessary: returning a null instance is equivalent to checking if it's null and then returning null. You can also use `@NotNull` and `@Nullable` to help the compiler, yourself, and anyone else working on the code/interacting with it.

Further, it may still return null in some cases, so you want to add a check to ensure it isn't null. This can be done using an if-statement:

```
if(s != null){
    //Do whatever
}
```

But the reason you are getting that warning is (in my experience) that it is almost guaranteed you will get an exception. Take this, for instance:

```
TextView v = null;
v.setText("");
```

That shows the exact same warning as you are getting. So, most likely, your method will return null no matter what. So you have to make sure `instance` is initialized, and have an if-statement to make sure the app doesn't crash if it is null.

Initializing `instance` is a way to ensure you get a reference that isn't null.
How to you add a parameter to an executable in Windows 10? How to you add a parameter to an executable in Windows 10? Once you select *Properties* by right-clicking on a program's icon (or its shortcut), there is no field to do this. I am using a non-privileged account.
You can only add parameters to shortcuts, not to the properties of a normal exe.

So select the exe, right-click and select Copy, go to the desktop, then right-click and select `Paste shortcut`.

[![enter image description here](https://i.stack.imgur.com/TrEJI.png)](https://i.stack.imgur.com/TrEJI.png)

Now right-click the shortcut and select Properties. In the `Target` field you must add the parameters after the exe name.

[![enter image description here](https://i.stack.imgur.com/WGZL3.png)](https://i.stack.imgur.com/WGZL3.png)

In this demo I created an `Explorer.exe` shortcut and added `/n,/e,C:\` to start Explorer directly in `C:\`.
How to load selected list items in multiple-select-listbox in update view in yii? I have a `multiple select-list-box` for `Staff` in `Create-Service-Form`, used to select multiple staff when creating a new service. This way I can assign multiple staff to a single service. I saved the `staff_id` field as: `$model->staff_id = serialize($model->staff_id);`

Here is the **update-view** code for the multiple-select-list-box:

```
<div class="row">
    <?php echo $form->labelEx($model,'staff_id'); ?>
    <?php
        $data = array('1' => 'Sam', '2' => 'john', '3' => 'addy');
        $htmlOptions = array('size' => '5', 'prompt'=>'Use CTRL to Select Multiple Staff', 'multiple' => 'multiple');
        echo $form->ListBox($model,'staff_id', $data, $htmlOptions);
    ?>
    <?php echo $form->error($model,'staff_id'); ?>
</div>
```

The problem is, when I load the form for updating a service, how do I select those staff which were previously saved in the database?

I tried [this](http://www.yiiframework.com/doc/api/1.1/CHtml#dropDownList-detail) dropDownList attribute, but it is not working:

$select | string | the selected value

If someone has a solution, please suggest it. Thanks All Mates...
Here's some quick code I wrote for you; it's an example that will help you understand how it works.

```
<div class="row">
    <?php echo $form->labelEx($model,'staff_id'); ?>
    <?php
        $data = array('101' => 'Faraz Khan', '102' => 'Depesh Saini', '103' => 'Nalin Gehlot', '104' => 'Hari Maliya');
        $selected = array(
            '102' => array('selected' => 'selected'),
            '103' => array('selected' => 'selected'),
        );
        $htmlOptions = array('size' => '5', 'prompt'=>'Use CTRL to Select Multiple Staff', 'multiple' => 'true', 'options' => $selected);
        echo $form->listBox($model,'staff_id', $data, $htmlOptions);
    ?>
    <?php echo $form->error($model,'staff_id'); ?>
</div>
```

Have Fun Ya !!!
Showing all index values when using multiIndexing in Pandas I would like that when viewing my DataFrame I will see all values of the multiIndex, including when subsequent rows have the same index for one of the levels. Here is an example: ``` arrays = [['20', '50', '20', '20'],['N/A', 'N/A', '10', '30']] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples, names=['Jim', 'Betty']) pd.DataFrame([np.random.rand(1)]*4,index=index) ``` The output is: ``` 0 Jim Betty 20 N/A 0.954973 50 N/A 0.954973 20 10 0.954973 30 0.954973 ``` I would like to have a 20 also in the south-west corner. That is, I would like my DataFrame to be: ``` 0 Jim Betty 20 N/A 0.954973 50 N/A 0.954973 20 10 0.954973 20 30 0.954973 ``` Is Pandas capable of doing that?
You need set `display.multi_sparse` to `False`: ``` #if need temporary use option with pd.option_context('display.multi_sparse', False): print (df) 0 Jim Betty 20 N/A 0.201643 50 N/A 0.201643 20 10 0.201643 20 30 0.201643 ``` If this display option is required throughout a notebook, the option can be set once and for all as follows: ``` # if permanent use import pandas as pd pd.options.display.multi_sparse = False ``` [Documentation](http://pandas.pydata.org/pandas-docs/stable/options.html#available-options): > > *display.multi\_sparse* > > **True** > > “Sparsify” MultiIndex display (don’t display repeated elements in outer levels within groups) > > >
Using ScrollView Programmatically in Swift 3 I have searched other questions and seem to still have some trouble creating my scrollView programmatically with autolayout in swift 3. I am able to get my scrollview to show up as shown in the picture below, but when I scroll to the bottom my other label does not show up and the 'scroll top' label does not disappear. [![ScrollView](https://i.stack.imgur.com/W6QJ2.png)](https://i.stack.imgur.com/W6QJ2.png) Hoping someone can help review my code below! ``` import UIKit class ViewController: UIViewController { let labelOne: UILabel = { let label = UILabel() label.text = "Scroll Top" label.backgroundColor = .red label.translatesAutoresizingMaskIntoConstraints = false return label }() let labelTwo: UILabel = { let label = UILabel() label.text = "Scroll Bottom" label.backgroundColor = .green label.translatesAutoresizingMaskIntoConstraints = false return label }() override func viewDidLoad() { super.viewDidLoad() let screensize: CGRect = UIScreen.main.bounds let screenWidth = screensize.width let screenHeight = screensize.height var scrollView: UIScrollView! scrollView = UIScrollView(frame: CGRect(x: 0, y: 120, width: screenWidth, height: screenHeight)) scrollView.contentSize = CGSize(width: screenWidth, height: 2000) scrollView.addSubview(labelOne) scrollView.addSubview(labelTwo) view.addSubview(labelOne) view.addSubview(labelTwo) view.addSubview(scrollView) // Visual Format Constraints view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|[v0]|", options: NSLayoutFormatOptions(), metrics: nil, views: ["v0": labelOne])) view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-100-[v0]", options: NSLayoutFormatOptions(), metrics: nil, views: ["v0": labelOne])) // Using iOS 9 Constraints in order to place the label past the iPhone 7 view view.addConstraint(NSLayoutConstraint(item: labelTwo, attribute: .top, relatedBy: .equal, toItem: labelOne, attribute: .bottom, multiplier: 1, constant: screenHeight + 200)) view.addConstraint(NSLayoutConstraint(item: labelTwo, attribute: .right, relatedBy: .equal, toItem: labelOne, attribute: .right, multiplier: 1, constant: 0)) view.addConstraint(NSLayoutConstraint(item: labelTwo, attribute: .left, relatedBy: .equal, toItem: labelOne, attribute: .left, multiplier: 1, constant: 0) } } ```
It is easy to use constraints to define the scroll content size - so you don't have to do any manual calculations. Just remember: 1. The *content elements* of your scroll view must have left / top / width / height values. In the case of objects such as labels, they have intrinsic sizes, so you only have to define the left & top. 2. The *content elements* of your scroll view ***also*** define the bounds of the scrollable area - the `contentSize` - but they do so with the bottom & right constraints. 3. Combining those two concepts, you see that you need a "continuous chain" with at least one element defining the top / left / bottom / right extents. Here is a simple example, that will run directly in a Playground page: ``` import UIKit import PlaygroundSupport class TestViewController : UIViewController { let labelOne: UILabel = { let label = UILabel() label.text = "Scroll Top" label.backgroundColor = .red label.translatesAutoresizingMaskIntoConstraints = false return label }() let labelTwo: UILabel = { let label = UILabel() label.text = "Scroll Bottom" label.backgroundColor = .green label.translatesAutoresizingMaskIntoConstraints = false return label }() let scrollView: UIScrollView = { let v = UIScrollView() v.translatesAutoresizingMaskIntoConstraints = false v.backgroundColor = .cyan return v }() override func viewDidLoad() { super.viewDidLoad() // add the scroll view to self.view self.view.addSubview(scrollView) // constrain the scroll view to 8-pts on each side scrollView.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 8.0).isActive = true scrollView.topAnchor.constraint(equalTo: view.topAnchor, constant: 8.0).isActive = true scrollView.rightAnchor.constraint(equalTo: view.rightAnchor, constant: -8.0).isActive = true scrollView.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -8.0).isActive = true // add labelOne to the scroll view scrollView.addSubview(labelOne) // constrain labelOne to left & top with 16-pts padding // this also defines the left & top of the scroll content labelOne.leadingAnchor.constraint(equalTo: scrollView.leadingAnchor, constant: 16.0).isActive = true labelOne.topAnchor.constraint(equalTo: scrollView.topAnchor, constant: 16.0).isActive = true // add labelTwo to the scroll view scrollView.addSubview(labelTwo) // constrain labelTwo at 400-pts from the left labelTwo.leadingAnchor.constraint(equalTo: scrollView.leadingAnchor, constant: 400.0).isActive = true // constrain labelTwo at 1000-pts from the top labelTwo.topAnchor.constraint(equalTo: scrollView.topAnchor, constant: 1000).isActive = true // constrain labelTwo to right & bottom with 16-pts padding labelTwo.rightAnchor.constraint(equalTo: scrollView.rightAnchor, constant: -16.0).isActive = true labelTwo.bottomAnchor.constraint(equalTo: scrollView.bottomAnchor, constant: -16.0).isActive = true } } let vc = TestViewController() vc.view.backgroundColor = .yellow PlaygroundPage.current.liveView = vc ``` --- **Edit** - since this answer still gets occasional attention, I've updated the code to use more modern syntax, to respect the safe-area, and to use the scroll view's `.contentLayoutGuide`: ``` class TestViewController : UIViewController { let labelOne: UILabel = { let label = UILabel() label.text = "Scroll Top" label.backgroundColor = .yellow label.translatesAutoresizingMaskIntoConstraints = false return label }() let labelTwo: UILabel = { let label = UILabel() label.text = "Scroll Bottom" label.backgroundColor = .green label.translatesAutoresizingMaskIntoConstraints = false return label }() 
let scrollView: UIScrollView = { let v = UIScrollView() v.translatesAutoresizingMaskIntoConstraints = false v.backgroundColor = .cyan return v }() override func viewDidLoad() { super.viewDidLoad() // add the scroll view to self.view self.view.addSubview(scrollView) // add labelOne to the scroll view scrollView.addSubview(labelOne) // add labelTwo to the scroll view scrollView.addSubview(labelTwo) // always a good idea to respect safe area let safeG = view.safeAreaLayoutGuide // we want to constrain subviews to the scroll view's Content Layout Guide let contentG = scrollView.contentLayoutGuide NSLayoutConstraint.activate([ // constrain the scroll view to safe area with 8-pts on each side scrollView.topAnchor.constraint(equalTo: safeG.topAnchor, constant: 8.0), scrollView.leadingAnchor.constraint(equalTo: safeG.leadingAnchor, constant: 8.0), scrollView.trailingAnchor.constraint(equalTo: safeG.trailingAnchor, constant: -8.0), scrollView.bottomAnchor.constraint(equalTo: safeG.bottomAnchor, constant: -8.0), // constrain labelOne to leading & top of Content Layout Guide with 16-pts padding // this also defines the left & top of the scroll content labelOne.topAnchor.constraint(equalTo: contentG.topAnchor, constant: 16.0), labelOne.leadingAnchor.constraint(equalTo: contentG.leadingAnchor, constant: 16.0), // constrain labelTwo leading at 400-pts from labelOne trailing labelTwo.leadingAnchor.constraint(equalTo: labelOne.trailingAnchor, constant: 400.0), // constrain labelTwo top at 1000-pts from the labelOne bottom labelTwo.topAnchor.constraint(equalTo: labelOne.bottomAnchor, constant: 1000), // constrain labelTwo to trailing & bottom of Content Layout Guide with 16-pts padding // this also defines the right & bottom of the scroll content labelTwo.trailingAnchor.constraint(equalTo: contentG.trailingAnchor, constant: -16.0), labelTwo.bottomAnchor.constraint(equalTo: contentG.bottomAnchor, constant: -16.0), ]) } } ```
In MATLAB, how can you have a callback execute while a slider is being dragged? I have created a MATLAB GUI using GUIDE. I have a slider with a callback function. I have noticed that this callback, which is supposed to execute 'on slider movement', in fact only runs once the slider has been moved and the mouse released. Is there a way to get a script to run as the slider is being dragged, for live updating of a plot? There would I presume need to be something to stop the script being run too many times.
Even though the callback of the slider isn't being called as the mouse is moved, the [`'Value'` property](https://www.mathworks.com/help/matlab/ref/matlab.ui.control.uicontrol-properties.html#bt6ck7c-1-Value) of the slider uicontrol **is** being updated. Therefore, you could create a listener using [`addlistener`](http://www.mathworks.com/help/matlab/ref/handle.addlistener.html) that will execute a given callback when the `'Value'` property changes. Here's an example: ``` hSlider = uicontrol('Style', 'slider', 'Callback', @(s, e) disp('hello')); hListener = addlistener(hSlider, 'Value', 'PostSet', @(s, e) disp('hi')); ``` As you move the slider you should see `'hi'` being printed to the screen (the listener callback), and when you release the mouse you will see `'hello'` printed (the uicontrol callback).
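To tie this back to live-updating a plot, the listener callback can simply modify the plot data every time the `'Value'` property changes. A minimal sketch (the line data and the scaling applied here are made up purely for illustration):

```
% Scale a line's y-data live while the slider is dragged
hAx = axes;
hLine = plot(hAx, 1:10, 1:10);
hSlider = uicontrol('Style', 'slider', 'Min', 0.1, 'Max', 5, 'Value', 1);
hListener = addlistener(hSlider, 'Value', 'PostSet', ...
    @(s, e) set(hLine, 'YData', (1:10) * get(hSlider, 'Value'))); % updates on every drag step
% delete(hListener) removes the listener once it is no longer needed
```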
Qt: send QPixmap in QDrag's QMimeData? I create a drag object from a `QListWidgetItem`. I can send text as mime data in this drag object. How can I send a pixmap and retrieve it from the mime data? Would it even be possible to create a `QGraphicsItem` and retrieve it? I try to drag & drop from the `QListWidget` into a `QGraphicsView`.
There are multiple ways to send a `QPixmap` through `QMimeData`: 1. by encoding it into a file format such as PNG and sending that with mime-type `image/png` (`QMimeData` has built-in support for that, cf. `QMimeData::imageData()`). 2. by serialising the `QPixmap` into a `QByteArray` using a `QDataStream` and sending the serialisation under an app-specific mime-type `application/x-app-name`. 3. by writing the image data to a file on disk and sending a file-URL for it with mime-type `text/uri-list` (`QMimeData` has built-in support for this, cf. `QMimeData::urls()`). This allows to drag these images onto a file manager or the desktop. 4. similar to (2) above, you can also create a `QGraphicsItem`, stuff its address into a `QByteArray` and send that under an app-specific mime-type. This doesn't work if the drag ends in another process, of course (the receiving site can test, because `QDragEvent::source()` returns `0` in that case), and it requires special care to handle the graphic item's lifetime. Seeing as `QMimeData` allows you to pass several formats at once, these options are non-exclusive. You should, however, sort the formats you return from your reimplementation of `QMimeData::formats()` in order of decreasing specificity, i.e. your app-private formats come first, and `text/uri-list` comes last.
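As a rough sketch of option (2), serialising the pixmap into the mime data on the drag side and reading it back on the drop side might look like this (the `pixmap`, `listWidget` and `event` names are placeholders from the question's context, and `application/x-app-name` is just an example mime type):

```
// Drag source: serialise the QPixmap into the QMimeData
QByteArray bytes;
QDataStream out(&bytes, QIODevice::WriteOnly);
out << pixmap;                                  // QPixmap provides QDataStream operators

QMimeData *mimeData = new QMimeData;
mimeData->setData("application/x-app-name", bytes);

QDrag *drag = new QDrag(listWidget);
drag->setMimeData(mimeData);
drag->exec(Qt::CopyAction);

// Drop target, e.g. in the QGraphicsView's dropEvent():
QByteArray stored = event->mimeData()->data("application/x-app-name");
QDataStream in(&stored, QIODevice::ReadOnly);
QPixmap dropped;
in >> dropped;                                  // rebuild the pixmap on the receiving side
```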
Why no volatile? I have a discussion with a colleague about this code: ``` public final class ShutdownHookRegistration { /** * Global shutdown flag */ private static boolean isServerShutdown = false; private ShutdownHookRegistration() { // empty } /** * Returns the current value of the global shutdown flag * * @return */ public static boolean isServerShutdown() { return isServerShutdown; } /** * Registration if shutdown hooks */ public static void registerShutdownHooks() { /** * 1. Shutdown hook to set the shutdown flag */ Runtime.getRuntime().addShutdownHook(setGlobalShutdownFlag()); } /** * Sets the global static is shutdown flag which can be checked by other processes. * * @return */ private static Thread setGlobalShutdownFlag() { return new Thread() { @Override public void run() { isServerShutdown = true; System.out.println(Thread.currentThread().getName() + ":shutdown set"); } }; } public static void main(String[] args) throws InterruptedException { System.out.println(Thread.currentThread().getName() + " Flag set:" + ShutdownHookRegistration.isServerShutdown); Thread t1 = ShutdownHookRegistration.setGlobalShutdownFlag(); Thread t2 = new Thread() { public void run() { while (!ShutdownHookRegistration.isServerShutdown) { System.out.println(Thread.currentThread().getName() + " Flag set:" + ShutdownHookRegistration.isServerShutdown); } } }; t2.start(); t1.start(); } ``` Output: ``` Thread-1 Flag set:false Thread-1 Flag set:false Thread-1 Flag set:false Thread-1 Flag set:false [..] Thread-0:shutdown set ``` I thought without volatile this code would run in an infinite loop, but somehow it will always terminate. Can someone explain why here no volatile is necessary?
In short there are two reasons: your loop has a memory barrier, and even if it didn't, it's not run long enough to be optimised/compiled in a manner which needs `volatile`. The key is here

```
while (!ShutdownHookRegistration.isServerShutdown) {
    System.out.println(Thread.currentThread().getName() + " Flag set:" + ShutdownHookRegistration.isServerShutdown);
}
```

System.out.println is `synchronized` which means there is a read/write barrier in each iteration.

```
// from the java 6 source
public void println(Object x) {
    String s = String.valueOf(x);
    synchronized (this) {
        print(s);
        newLine();
    }
}
```

In the x64 JVM, once you lock one object you place a memory barrier which protects all memory accesses. Additionally, this slows down your code by 10,000x or more, so that it doesn't run long enough to get compiled (and optimised in a manner which would require `volatile`).

It would take looping in the order of 10,000 times before this code is compiled.
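If the loop did no synchronised work at all (say, the `println` were removed), the usual fix would be to declare the flag `volatile`, so the write made by the shutdown hook is guaranteed to become visible to the reading thread. A minimal sketch of that change:

```
public final class ShutdownHookRegistration {

    // volatile guarantees cross-thread visibility of writes to this flag,
    // without relying on the memory barrier of a synchronized println
    private static volatile boolean isServerShutdown = false;

    public static boolean isServerShutdown() {
        return isServerShutdown;
    }
}
```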
Classpath trouble using JUnit with both Eclipse and Maven In a JUnit test I'm using this code to load in a test-specific config file: ``` InputStream configFile = getClass().getResourceAsStream("config.xml"); ``` When I run the test through eclipse, it requires the xml file to be in the same directory as the test file. When I build the project with maven, it requires the xml to be in `src/test/resources`, so that it gets copied into `target/test-classes`. How can I make them both work with just one file?
Place the config.xml file in src/test/resources, and add src/test/resources as a source folder in Eclipse. The other issue is how `getResourceAsStream("config.xml")` works with packages. If the class that's calling this is in the `com.mycompany.whatever` package, then `getResourceAsStream` is also expecting config.xml to be in the same path. However, this is the same path in the *classpath* not the file system. You can either place file in the same directory structure under src/test/resources - src/test/resources/com/mycompany/whatever/config.xml - or you can add a leading "/" to the path - this makes `getResourceAsStream` load the file from the base of the classpath - so if you change it to `getResourceAsStream("/config.xml")` you can just put the file in src/test/resources/config.xml
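A small sketch of the two options, assuming the test class lives in the `com.mycompany.whatever` package:

```
// Option 1: file at src/test/resources/com/mycompany/whatever/config.xml
InputStream configFile = getClass().getResourceAsStream("config.xml");

// Option 2: file at src/test/resources/config.xml (base of the classpath)
InputStream configFromRoot = getClass().getResourceAsStream("/config.xml");
```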
CBCentralManager scanForPeripheralsWithServices:nil is not returning any results I am trying to display all available BLE beacons. I got some Estimote and Kontakt.io beacons with me and for some reason the below BLE scanning code does not find any of them. I went through all the possible SO questions relating to BLE discovery and that codes is exactly as in other places. [App source code here](https://www.dropbox.com/s/tefyp8jszpx8ay1/BTTest.zip?dl=0) ``` - (void)viewDidLoad { [super viewDidLoad]; self.manager = [[CBCentralManager alloc] initWithDelegate:self queue:nil]; } /* Request CBCentralManager to scan for all available services */ - (void) startScan { NSLog(@"Start scanning"); NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], CBCentralManagerScanOptionAllowDuplicatesKey, nil]; [self.manager scanForPeripheralsWithServices:nil options:options]; } ``` This delegate method never gets called ``` /* Invoked when the central discovers bt peripheral while scanning. */ - (void) centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)aPeripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI { NSLog(@"THIS NEVER GETS CALLED"); } ```
iBeacon's aren't accessed as peripherals - they are beacons. They are handled through the [Core Location](https://developer.apple.com/library/ios/documentation/userexperience/Conceptual/LocationAwarenessPG/RegionMonitoring/RegionMonitoring.html#//apple_ref/doc/uid/TP40009497-CH9-SW1) framework, not the Core Bluetooth framework. There may be vendor specific services advertised by the beacons that can be detected by Core Bluetooth. Your code doesn't wait until the `CBCentralManager` is in the powered on state. I made the following changes and it worked on both iOS 7.1 and iOS8 - ``` - (void)viewDidLoad { [super viewDidLoad]; self.manager = [[CBCentralManager alloc] initWithDelegate:self queue:nil]; } - (void) startScan { NSLog(@"Start scanning"); NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], CBCentralManagerScanOptionAllowDuplicatesKey, nil]; [self.manager scanForPeripheralsWithServices:nil options:options]; } - (BOOL) isLECapableHardware { NSString * state = nil; switch ([self.manager state]) { case CBCentralManagerStateUnsupported: state = @"The platform/hardware doesn't support Bluetooth Low Energy."; break; case CBCentralManagerStateUnauthorized: state = @"The app is not authorized to use Bluetooth Low Energy."; break; case CBCentralManagerStatePoweredOff: state = @"Bluetooth is currently powered off."; break; case CBCentralManagerStatePoweredOn: [self startScan]; return TRUE; case CBCentralManagerStateUnknown: default: return FALSE; } NSLog(@"Central manager state: %@", state); UIAlertView *alert = [[UIAlertView alloc] init]; [alert setMessage:state]; [alert addButtonWithTitle:@"OK"]; [alert show]; return FALSE; } ```
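If the goal is to detect the Estimote / Kontakt.io beacons themselves rather than generic BLE peripherals, Core Location is the framework to use. A rough sketch, assuming a `locationManager` property on a class that adopts `CLLocationManagerDelegate`, and using an example proximity UUID that you would replace with your beacon vendor's value (on iOS 8+ you also need `NSLocationWhenInUseUsageDescription` in Info.plist):

```
- (void) startBeaconRanging {
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;
    [self.locationManager requestWhenInUseAuthorization];

    NSUUID *uuid = [[NSUUID alloc] initWithUUIDString:@"B9407F30-F5F8-466E-AFF9-25556B57FE6D"];
    CLBeaconRegion *region = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                                identifier:@"com.example.beacons"];
    [self.locationManager startRangingBeaconsInRegion:region];
}

/* Called roughly once per second with the beacons currently in range */
- (void) locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)beacons inRegion:(CLBeaconRegion *)region {
    NSLog(@"Found %lu beacons", (unsigned long)[beacons count]);
}
```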
onLoadEnd is not fired in react-native Image Hi, I am trying to load a remote image. `onLoadStart` is being hit, but not `onLoadEnd`.

```
<View style={{ paddingTop: 60, paddingBottom: 10 }}>
  {this.state.loading ? (
    <DotIndicator size={25} color={"white"} />
  ) : (
    <Image
      resizeMode={this.resizeMode}
      style={[styles.imageStyle, this.tintStyle]}
      onLoadStart={e => {
        this.setState({ loading: true });
      }}
      onLoadEnd={e => this.setState({ loading: false })}
      // defaultSource={NoProfile}
      // loadingIndicatorSource={require("@images/profile_placeholder.png")}
      source={this.userImageUri}
      onError={error => {
        this.tintStyle = { tintColor: "lightgray" };
        this.resizeMode = "contain";
        this.userImageUri = NoProfile;
      }}
    />
  )}
</View>
```

**EDIT 1** *onLoadStart* is hit. *onLoad* is also never called. Does anyone have a clue? I am new to React. Any help is appreciated. Thanks in advance.

**SOLUTION** As Vignesh and hong mentioned, the image is never rendered while the loader is shown, so its `onLoadEnd` will never be called. So instead of rendering either the image or the loader, I render the loader on top of the image. Posting this here as it may be useful for someone at some time. Once again, thanks to Vignesh and hong.

```
<View
  style={{
    padding: 10,
    width: WIDTH - 50,
    height: WIDTH - 25,
    alignSelf: "center"
  }}
>
  {this.state.loading ? (
    <MaterialIndicator
      size={50}
      color={"red"}
      style={{ marginTop: WIDTH / 2, alignSelf: "center" }}
    />
  ) : null}
  <Image
    resizeMode={this.resizeMode}
    style={[styles.imageStyle, this.tintStyle]}
    onLoadStart={e => {
      this.setState({ loading: true });
    }}
    onLoad={e => {
      this.setState({ loading: false });
    }}
    onLoadEnd={e => this.setState({ loading: false })}
    // defaultSource={NoProfile}
    // loadingIndicatorSource={require("@images/profile_placeholder.png")}
    source={this.userImageUri}
    onError={error => {
      this.tintStyle = { tintColor: "lightgray" };
      this.resizeMode = "contain";
      this.userImageUri = NoProfile;
    }}
  />
</View>
```
Let's say that the value of `this.state.loading` was `false` before the first render. When the first render happens, `this.state.loading ?` returns the `Image` component, `onLoadStart` is triggered and `this.state.loading` is set to `true`. When the second render happens, `this.state.loading` is found to be `true` and `this.state.loading ?` returns the `DotIndicator` component. All the hard work done by the `Image` component during the previous the render is lost. In fact, `Image` component was never present in that context. Hence, `onLoadingEnd` will never be triggered, because `Image` component never appeared in the second render. And the `DotIndicator` will forever go round and round and round... Waiting for it's lost love..
Image changes color when saved with Java When I save this image:

![Holiday Doodle](https://www.google.at/logos/doodles/2013/holiday-series-2013-3-4504416610680832-hp.jpg)

with this method:

```
private final static Path ROOT_PATH = Paths.get("C:/images");

private static void saveImageFromWebSimple(final String url) {
    URL u = null;
    try {
        u = new URL(url);
    } catch (MalformedURLException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    String file = url.substring(url.indexOf("//") + 2);
    Path filePath = ROOT_PATH.resolve(file);
    try {
        Files.createDirectories(filePath.getParent());
        BufferedImage img = ImageIO.read(u);
        ImageIO.write(img, "jpg", filePath.toFile());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```

this is my result:

![Result](https://i.stack.imgur.com/W8WUE.jpg)

This doesn't happen with all pictures though. Can you tell me why?
According to @uckelman's [**comment**](https://stackoverflow.com/questions/9340569/jpeg-image-with-wrong-colors#comment21166430_9395544) on [this post](https://stackoverflow.com/questions/9340569/jpeg-image-with-wrong-colors), Java's decoder makes a different assumption about the format of the image than most other renders when the image is missing the JFIF header: > > I believe the answer to your question of how to detect the bad JPEGs > is found [here](http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4712797) and [here](https://stackoverflow.com/questions/7676701/java-jpeg-converter-for-odd-image-types). What you have is a JPEG with no JFIF marker. > All other image loaders assume that the data is YCbCr in that case, > except for ImageIO, which assumes that it is RGB when channels 1 and 2 > are not subsampled. So, check whether the first 4 bytes are FF D8 FF > E1, and if so, whether channels 1 and 2 are subsampled. That's the > case where you need to convert. > > >
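A rough sketch of the detection described in the quote, i.e. checking whether the file starts with `FF D8 FF E1` (an EXIF APP1 segment with no JFIF APP0 marker before it); it uses `java.io.DataInputStream` in addition to the imports already in the question, and only flags the suspect files - converting their colours is a separate step:

```
private static boolean startsWithExifApp1(URL url) throws IOException {
    byte[] header = new byte[4];
    try (InputStream in = url.openStream()) {
        new DataInputStream(in).readFully(header); // read exactly the first 4 bytes
    }
    return (header[0] & 0xFF) == 0xFF && (header[1] & 0xFF) == 0xD8
        && (header[2] & 0xFF) == 0xFF && (header[3] & 0xFF) == 0xE1;
}
```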
Analyzing logistic regression coefficients Here is a list of logistic regression coefficients (the first one is an intercept)

```
-1059.61966694592
-1.23890500515482
-8.57185269220438
-7.50413155570413
0
1.03152408392552
1.19874787949191
-4.88083274930613
-5.77172565873336
-1.00610998453393
```

I find it weird that the intercept is so low and that I have a coefficient that is exactly equal to 0. I am not fully sure how to interpret this. Does the 0 indicate that the specific variable has no effect at all on the model? And the intercept, which is made by inputting a column of ones, is suddenly really important? Or is my data just crap and the model is unable to fit it properly?
You are getting some very good information in the comments, in my opinion. I wonder if some basic facts about logistic regression would help make these things more comprehensible, so with that in mind, let me state a couple of things. In logistic regression, coefficients are on the *logistic* scale (hence the name...). If you were to plug in your covariate values for an observation, multiply them by the coefficients, and sum them, you would get a *logit*. $$ \text{logit}=\beta\_0+\beta\_1x\_1+\beta\_2x\_2+...+\beta\_kx\_k $$ A logit is a number that makes no intuitive sense to anyone, so it is very difficult to know what to do with a number looks funny (e.g., very high or very low). The best way to understand these things is to convert them from their original scale (logits) to one that you can understand, specifically probabilities. To do that, you take your logit and exponentiate it. That means you take the number [e](http://en.wikipedia.org/wiki/E_%28mathematical_constant%29) ($e\approx 2.718281828$) and raise it to the power of the logit. Imagine your logit were 2: $$ e^2=7.389056 $$ This will give you the odds. You can convert the odds to a probability by dividing the odds by one plus the odds: $$ \frac{7.389056}{1+7.389056}=0.880797 $$ People typically find the probability much easier to deal with. For your model, imagine you had an observation in which the value of all of your variables is exactly 0, then all of your coefficients would drop out and you would be left with only your intercept value. If we exponentiate your value, we get 0 as the odds (if it were -700, the odds would be $9.8\times 10^{-305}$, but I can't get my computer to give me a value for -1060, it is too small given the numerical limits of my software). Converting those odds to a probability, ( $0/(1+0)$ ), gives us 0 again. Thus, what your output is telling you is that your event (whatever it is) simply does not occur when all of your variables are equal to 0. Of course, it depends on what we are talking about, but I find nothing too remarkable about this. A standard logistic regression equation (say, without a squared term, for instance) necessarily assumes that the relationship between a covariate and the probability of success is either [monotonically increasing or monotonically decreasing](http://en.wikipedia.org/wiki/Monotonic). That means that it always gets larger and larger (or smaller and smaller), and so, if you go far enough in one direction, you will get to numbers so small my computer can't tell them apart from 0. That is just the nature of the beast. As it happens, for your model, going really far is going to where your covariate vales equal 0. As for the coefficient of 0, it does mean that that variable has no effect, as you suggest. Now, it is quite reasonable that a variable will not have an effect, nonetheless, you will basically never get a coefficient of exactly 0. I don't know why it occurred in this case; the comments offer some possible suggestions. I can offer another, which is that there may be no variation in that variable. For example, if you had a variable that coded for sex, but only women in your sample. I don't know if that's the real answer (R, for example, returns `NA` in that case, but software differ)--it's just another suggestion.
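If you want to check these conversions yourself, R does them directly (`plogis()` is the built-in logistic function). For example, for a logit of 2:

```
logit <- 2
exp(logit) / (1 + exp(logit))   # 0.8807971 -- odds converted to a probability
plogis(logit)                   # same result via the logistic CDF
```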
Break X Axis in R I want to get a broken X-axis in my plot. In the x-axis I like to insert a broken-axis symbol `< // >` [starting from 2 and ended in end 8 which means 2-8 will be hidden in `< // >` symbol] so the other values can be emphasized. In Matlab, this task is performed by using [BreakXAxis](http://www.mathworks.com/matlabcentral/fileexchange/3683-breakxaxis). In R, **[plotrix](http://cran.r-project.org/web/packages/plotrix/plotrix.pdf)** library helps only to plugin a break-axis symbol, that's all. ``` x <- c(9.45, 8.78, 0.93, 0.47, 0.24, 0.12) y <- c(10.72, 10.56, 10.35, 10.10, 9.13, 6.72) z <- c(7.578, 7.456, 6.956, 6.712, 4.832, 3.345) plot(x, y, col='blue', pch=16, xlab= 'x', ylab='y, z') points(x, z, col='red', pch=17) library(plotrix) axis.break(1,2,style="slash") ```
``` xgap <- ifelse(x > 8, x-6, x) #Possibly you'd want to check if there are values between 2 and 8. plot(xgap, y, col='blue', pch=16, xlab= 'x', ylab='y, z', xaxt="n") points(xgap, z, col='red', pch=17) xat <- pretty(xgap) xat <- xat[xat!=2] xlab <- ifelse(xat>2, xat+6, xat) axis(1,at=xat, labels=xlab) library(plotrix) axis.break(1,2,style="slash") ``` ![enter image description here](https://i.stack.imgur.com/bTsWh.png) Don't do this. `gap.plot` provides a slightly better alternative, but I would probably use facets, e.g., with ggplot2.
angular service unit test DoneFn I'm following the official Angular docs and I can see this code:

```
it("#getObservableValue should return value from observable", (done: DoneFn) => {
  service.getObservableValue().subscribe(value => {
    expect(value).toBe("observable value");
    done();
  });
});
```

I'm wondering where `DoneFn` comes from, because there is no error and no import for the typing.
If you follow the Interface definition you will see that it is under: `node_modules/@types/jasmine/index.d.ts` ``` /** Action method that should be called when the async work is complete */ interface DoneFn extends Function { (): void; /** fails the spec and indicates that it has completed. If the message is an Error, Error.message is used */ fail: (message?: Error | string) => void; } ``` You don't need to import or use it, it's mostly for reference. I am not sure how `@types` exactly work but I suppose if there is a `@types` typing within the project, node knows how to find the definition as they are all indexed in this one folder. **UPDATE:** I found out this is being configured by `tsconfig.json` ``` "typeRoots": [ "node_modules/@types" ], ```
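One place the interface's `fail` member is handy is reporting errors from the observable instead of letting the spec time out. A small sketch based on the snippet from the question (the service name comes from there):

```
it("#getObservableValue should return value from observable", (done: DoneFn) => {
  service.getObservableValue().subscribe(
    value => {
      expect(value).toBe("observable value");
      done();
    },
    error => done.fail(error) // fails the spec immediately instead of timing out
  );
});
```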
Nested subroutines and Scoping in Perl I've been writing Perl for quite some time now and am always discovering new things, and I just ran into something interesting that I don't have an explanation for, nor could I find one on the web.

```
sub a {
    sub b {
        print "In B\n";
    }
}

b();
```

How come I can call `b()` from outside its scope and it works? I know it's bad practice to do it, and I don't do it (I use closures and such for these cases), but I just noticed this.
Subroutines are stored in a global namespace at compile time. In your example `b();` is shorthand for `main::b();`. To limit the visibility of a function to a scope you need to assign an anonymous subroutine to a variable.

Both named and anonymous subroutines can form closures, but since named subroutines are only compiled once, if you nest them they don't behave as many people expect.

```
use warnings;
sub one {
    my $var = shift;
    sub two {
        print "var: $var\n";
    }
}
one("test");
two();
one("fail");
two();
__END__
output:
Variable "$var" will not stay shared at -e line 5.
var: test
var: test
```

Nesting named subroutines is allowed in Perl but it's almost certainly a sign that the code is doing something incorrectly.
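For comparison, here is a minimal sketch of the lexically scoped alternative mentioned above: an anonymous subroutine assigned to a `my` variable, so each call to `one()` builds a fresh closure over its own `$var`:

```
use strict;
use warnings;

sub one {
    my $var = shift;
    my $two = sub {              # anonymous sub: a new closure on every call
        print "var: $var\n";
    };
    $two->();                    # only reachable through the lexical variable
}

one("test");   # prints: var: test
one("fail");   # prints: var: fail
```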
Unable to profile app on device with iOS 9.0.1 using Xcode 7, 7.0.1 or 7.1 beta I have been trying unsuccessfully to profile my device (via Instruments) using the latest version of Xcode 7.0.1 (7A1001 released 9/28), as well as the previous version of Xcode 7 (7A218), as well as Xcode 7.1 Beta 2 (7B75). My device is an iPhone 6+ with iOS 9.0.1 installed - the latest GM release of iOS9. I am able to run / debug applications on this device without issues. In the screenshots below you can see that my device is disabled (greyed out) in all screenshots in all versions. I am able to profile other devices running iOS 8.4.1 without any issues. Does the current version of Xcode not support profiling against iOS 9.0.1 or is there some kind of configuration setting or known work around for this? **Xcode 7.0.1:** [![Xcode 7.0.1](https://i.stack.imgur.com/qTPVR.png)](https://i.stack.imgur.com/qTPVR.png) **Xcode 7.0:** [![Xcode 7.0](https://i.stack.imgur.com/CWM6Y.png)](https://i.stack.imgur.com/CWM6Y.png) **Xcode 7.1 beta 2:** [![Xcode 7.1 beta 2](https://i.stack.imgur.com/l4RUD.png)](https://i.stack.imgur.com/l4RUD.png)
**TL;DR** - Perform a complete reboot of your device; restart Xcode & instruments; select "Open Xcode" if prompted to enable the device for development. **Update 3/31/2016:** I haven't encountered any issues with the latest version(s) of Xcode (7.2.x, 7.3), so it seems that the stability here has been improved. --- I believe I may have finally gotten this to work properly. Detailed steps: 1. Unplug the device from your Mac & power down the device completely (hold the power button for several seconds; slide to power off). 2. Close Xcode and Instruments. 3. Restart the device & once it has booted completely re-connect it to your Mac. 4. Re-launch Xcode. Here, my device showed as disabled and Xcode indicated that the device was not available for use. 5. Open your project; clean (Shift+Command+K), Build (Command+B), Profile (Command+I). 6. After Instruments launched I noticed that the device was enabled. Upon selecting it, a message was displayed with the title "Enable this device for development?" and message "This will open Xcode and enable this device for development." (Note that this only happened to me the first time I went through this process even though I had already been using the device for development - whereas some users have also reported that they are not presented with this dialogue.) [![Enable this device for development?](https://i.stack.imgur.com/ciAwX.png)](https://i.stack.imgur.com/ciAwX.png) 7. Click "Open Xcode". Here Xcode did not prompt me for anything nor was anything displayed - no additional messages indicating anything had been done or that the device was or was not available for development. Opening the Devices window, the device appeared to be available. (I have not been presented with this option for subsequent occurrences.) 8. Now I was able to select the device in Instruments and profile it. As a side note, I was also again able to delete installed apps from the Devices window (I realized that this was not possible to do previously). I'm unsure how my device ended up in this state however I will be on the lookout to see if this continues to occur. Please note that this was done using **Xcode 7.0.1**. --- **Update:** My device seems to lapse back into not being able to be used for profiling some time after performing these steps - I've had to reboot my device again in order for it to be available for profiling. Not sure what is triggering this behavior but I will file a Radar for this.
Creating list of individual list items multiplied n times I'm fairly new to Python, and think this should be a fairly common problem, but can't find a solution. I've already looked at [this page](https://stackoverflow.com/questions/3459098/create-list-of-single-item-repeated-n-times-in-python) and found it helpful for one item, but I'm struggling to extend the example to multiple items without using a 'for' loop. I'm running this bit of code for 250 walkers through Emcee, so I'm looking for the fastest way possible. I have a list of numbers, `a = [x,y,z]` that I want to repeat `b = [1,2,3]` times (for example), so I end up with a list of lists: ``` [ [x], [y,y], [z,z,z] ] ``` The 'for' loop I have is: ``` c = [ ] for i in range (0,len(a)): c.append([a[i]]*b[i]) ``` Which does exactly what I want, but means my code is excruciatingly slow. I've also tried naively turning a and b into arrays and doing `[a]*b` in the hopes that it would multiply element by element, but no joy.
You can use `zip` and a list comprehension here: ``` >>> a = ['x','y','z'] >>> b = [1,2,3] >>> [[x]*y for x,y in zip(a,b)] [['x'], ['y', 'y'], ['z', 'z', 'z']] ``` or: ``` >>> [[x for _ in xrange(y)] for x,y in zip(a,b)] [['x'], ['y', 'y'], ['z', 'z', 'z']] ``` `zip` will create the whole list in memory first, to get an iterator use `itertools.izip` In case `a` contains mutable objects like lists or lists of lists, then you may have to use `copy.deepcopy` here because modifying one copy will change other copies as well.: ``` >>> from copy import deepcopy as dc >>> a = [[1 ,4],[2, 5],[3, 6, 9]] >>> f = [[dc(x) for _ in xrange(y)] for x,y in zip(a,b)] #now all objects are unique >>> [[id(z) for z in x] for x in f] [[172880236], [172880268, 172880364], [172880332, 172880492, 172880428]] ``` `timeit` comparisons(ignoring imports): ``` >>> a = ['x','y','z']*10**4 >>> b = [100,200,300]*10**4 >>> %timeit [[x]*y for x,y in zip(a,b)] 1 loops, best of 3: 104 ms per loop >>> %timeit [[x]*y for x,y in izip(a,b)] 1 loops, best of 3: 98.8 ms per loop >>> %timeit map(lambda v: [v[0]]*v[1], zip(a,b)) 1 loops, best of 3: 114 ms per loop >>> %timeit map(list, map(repeat, a, b)) 1 loops, best of 3: 192 ms per loop >>> %timeit map(list, imap(repeat, a, b)) 1 loops, best of 3: 211 ms per loop >>> %timeit map(mul, [[x] for x in a], b) 1 loops, best of 3: 107 ms per loop >>> %timeit [[x for _ in xrange(y)] for x,y in zip(a,b)] 1 loops, best of 3: 645 ms per loop >>> %timeit [[x for _ in xrange(y)] for x,y in izip(a,b)] 1 loops, best of 3: 680 ms per loop ```
Setup postgres in Github Actions for Django I'm currently working on a website right now on Django. On my computer, I am running it on Docker with a postgres database. Here's the docker-compose file I have: ``` version: '3' services: db: image: postgres environment: - POSTGRES_DB=postgres - POSTGRES_USER=postgres - POSTGRES_PASSWORD=postgres web: build: . volumes: - .:/usr/src/app ports: - "8000:8000" ``` And here's the relevant part in settings.py ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'postgres', 'USER': 'postgres', 'PASSWORD': 'postgres', 'HOST': 'db', 'PORT': 5432, } } ``` When I run my tests in the docker container with this setup, it works find and the tests run. However, in github actions, it doesn't work. Here's my workflow file: ``` name: Django CI on: push jobs: build: runs-on: ubuntu-latest strategy: max-parallel: 4 matrix: python-version: [3.7, 3.8] services: db: image: postgres env: POSTGRES_DB: postgres POSTGRES_USER: postgres POSTGRES_PASSWORD: postgres ports: - 5432:5432 steps: - uses: actions/checkout@v2 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v1 with: python-version: ${{ matrix.python-version }} - name: Install Dependencies run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: Run Tests run: | python manage.py test ``` When this runs in github actions, I get the following error: ``` django.db.utils.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution ``` Could someone please help me with this, and please let me know if you need anymore code.
The reason this is not working is that the settings `HOST` is set to `db`. When you're working with a Github Action that only has services, this is called "running all steps on the virtual machine" or VM. You know this is the case because your `steps` are at the same indentation level as `services`. In this situation, you do not specify the service's container name (in this case `db`) as the host. Instead, services running in `services` as shown in the question are actually running on `localhost` which is the same place the steps are taking place. When the `run tests` step is reached, we can see that django is in fact attempting to start up, because the `django.db.utils.OperationalError...` error *is* a Django runtime error. It is showing Django's attempt to reach out and talk to the database. This means that the settings configuration is wrong. To specifically answer this question, simply update the settings to: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'postgres', 'USER': 'postgres', 'PASSWORD': 'postgres', 'HOST': 'localhost', 'PORT': 5432, } } ``` Note a very similar question and resolving answer [is shown here](https://github.community/t/cant-connect-to-postgres-service/17946/5). --- ## If this doesn't fix it If this is still not working for you, most likely the settings.py that the Github Actions is running is not the same one you are editing. As I mention [here](https://stackoverflow.com/questions/41573313/docker-compose-with-django-could-not-translate-host-name-db-to-address-name-o#comment112012742_41574217), ensure first that your setting is **not overridden** and that your Github Actions job is pulling the **correct branch** that you are editing the `settings.py`. This can be especially hard because of the delay of waiting for Github Actions to build causes something of a disconnect between what you're doing and the result. You may think you're working on relevant code / pushing to the right place / setting the correct environment variables when you are not. And you do not detect it possibly because it is easy to get distracted waiting to see if your fix worked. :) One more note on this, Github has an [example simple service using postgres](https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml#L71). Look at it carefully. The second example shows the one asked about in this question, where all steps are run on the virtual machine.
Create a Virtual Host in XAMPP on Windows 10 Last night I updated my Windows 7 to Windows 10. The result is a struggle trying to run, on Windows 10, the local Apache server that was running fine on Windows 7. I tried uninstalling and installing other versions of XAMPP, and then figured out that I have to change Apache's default port just to make it run. I changed httpd.conf from **`Listen 80`** to **`Listen 1234`** AND **`ServerName localhost:80`** to **`ServerName localhost:1234`**, and in the **xampp control panel Config->Service and Port Settings** I also changed the Main Port

[![enter image description here](https://i.stack.imgur.com/87An2.png)](https://i.stack.imgur.com/87An2.png)

Now I can access phpMyAdmin using **`localhost:1234/phpmyadmin`**. My problem now is creating the virtual host, so I added this to my hosts (C:\Windows\System32\drivers\etc\hosts) file

```
127.0.0.1 sample.local
127.0.0.1 anothersample.local
```

And my vhost (D:\xampp\apache\conf\extra\httpd-vhosts.conf) file

```
<VirtualHost *:1234>
    DocumentRoot "D:/xampp/htdocs/sample"
    ServerName sample.local
</VirtualHost>

<VirtualHost *:1234>
    DocumentRoot "D:/xampp/htdocs/anothersample"
    ServerName anothersample.local
</VirtualHost>
```

I did make sure the vhost file above was included

[![enter image description here](https://i.stack.imgur.com/6bWKu.png)](https://i.stack.imgur.com/6bWKu.png)

I already restarted Apache, but it seems like my vhost is not working. Can anyone point out what I missed?
Thank you @ShamSUP and @maytham-ɯɐɥıλɐɯ. I was able to solve my problem by uninstalling XAMPP and then following the instructions [here](https://stackoverflow.com/a/32259668/4179779). I will just list the steps I took here.

1. Press Windows+R, type `appwiz.cpl`, and use Turn Windows features on or off to install the IIS Management Console by expanding `Internet Information Services`->`Web Management Tools`->then checking `IIS Management Console`
2. Press Windows+R, type `InetMgr.exe` and press Enter, then expand `Site`, right-click it, then click `Edit Bindings`
3. Change the **http** port from **80** to **8080**

After that I installed XAMPP and configured the virtual host

**host (C:\Windows\System32\drivers\etc\hosts) file**

```
127.0.0.1 sample.local
127.0.0.1 anothersample.local
```

**vhost (D:\xampp\apache\conf\extra\httpd-vhosts.conf) file**

```
<VirtualHost *:80>
    DocumentRoot "D:\xampp\htdocs\sample"
    ServerName sample.local
</VirtualHost>

<VirtualHost *:80>
    DocumentRoot "D:/xampp/htdocs/anothersample"
    ServerName anothersample.local
</VirtualHost>
```

And by default on Windows the virtual hosts include is uncommented

[![enter image description here](https://i.stack.imgur.com/6bWKu.png)](https://i.stack.imgur.com/6bWKu.png)

After restarting Apache and MySQL, the virtual host is running now. I hope this may help someone in the future.
No RTTI but still virtual methods C++ code can be compiled with run-time type information disabled, which disables `dynamic_cast`. But, virtual (polymorphic) methods still need to be dispatched based on the run-time type of the target. Doesn't that imply the type information is present anyway, and `dynamic_cast` should be able to always work?
Disabling RTTI kills `dynamic_cast` and `typeid` but has no impact on virtual functions. Virtual functions are dispatched via the "vtable" of classes which have any virtual functions; if you want to avoid having a vtable you can simply not have virtual functions. Lots of C++ code in the wild can work without `dynamic_cast` and almost all of it can work without `typeid`, but relatively few C++ applications would survive without any virtual functions (or more to the point, functions they expected to be virtual becoming non-virtual). A virtual table (vtable) is just a per-instance pointer to a per-type lookup table for all virtual functions. You only pay for what you use (Bjarne loves this philosophy, and initially resisted RTTI). With full RTTI on the other hand, you end up with your libraries and executables having quite a lot of elaborate strings and other information baked in to describe the name of each type and perhaps other things like the hierarchical relations between types. I have seen production systems where disabling RTTI shrunk the size of executables by 50%. Most of this was due to the massive string names that end up in some C++ programs which use templates heavily.
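A small sketch of the distinction, assuming GCC/Clang's `-fno-rtti` flag (MSVC's equivalent is `/GR-`): virtual dispatch keeps working through the vtable, while `dynamic_cast` to a polymorphic type no longer compiles.

```
// build with: g++ -fno-rtti example.cpp
#include <iostream>

struct Base {
    virtual ~Base() {}
    virtual const char* name() const { return "Base"; }
};

struct Derived : Base {
    const char* name() const override { return "Derived"; }
};

int main() {
    Base* b = new Derived;
    std::cout << b->name() << "\n";            // prints "Derived": vtable dispatch still works
    // Derived* d = dynamic_cast<Derived*>(b); // error with -fno-rtti: dynamic_cast needs RTTI
    delete b;
}
```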
UI Router and query parameters I built a small search app using Angular, UI Router and Elasticsearch and I'm trying to get UI Router query parameters in the url on the results page. I'm trying to achieve this ``` domain.com/search?user_search_terms ``` with this ``` .state('search', { url: '/search?q', ``` and I init searchTerms and $stateParams like this in my controller ``` vm.searchTerms = $stateParams.q || ''; ``` and then in my search function in my controller I have this ``` vm.search = function() { $state.go('search', {q: vm.searchTerms}); ... ``` Everything works fine, until I try to implement the UI Route query parameters. I can still get search suggestions, transition from state to state but search breaks. I thought I needed to implement Angular $http get params within a config {}, but then I realized I'm JUST trying to get query parameters using UI Router. It seems I have everything setup right with UI Router to get query parameters but ... what am I doing wrong?
For query parameters, you should use **`$state.params`** instead of `$stateParams` **STATE CONFIG:** ``` stateProvider.state({ name: 'search', url: '/search?q', //... } ``` **CONTROLLER FROM:** ``` function fromCtrl ($state) { $state.go('search', {q: vm.searchTerms}); } ``` **OR TEMPLATE/HTML LINK FROM:** ``` <a ui-sref="search({q:'abc'})">my Link</a> ``` **CONTROLLER TO:** ``` function toCtrl ($state) { this.searchTerms = $state.params.q; } ``` --- **UPDATE:** use `$transition$` for new versions >= 1.0.0 ([PLUNKER DEMO](https://plnkr.co/edit/ZFGntE4FORWPEzZQaoZH?p=preview)) The code above is the same for both versions, only you need to change the `toCtrl`... ``` function toCtrl ($transition$) { this.myParam = $transition$.params().q; } ``` --- *If your searchTerms is an object, you can use `JSON.stringify()` and `JSON.parse()`* Check these posts if you still have any doubts: [How to extract query parameters with ui-router for AngularJS?](https://stackoverflow.com/questions/19053991/how-to-extract-query-parameters-with-ui-router-for-angularjs) [AngularJS: Pass an object into a state using ui-router](https://stackoverflow.com/questions/20632255/angularjs-pass-an-object-into-a-state-using-ui-router)
Adding custom locator shortcuts in Protractor In Protractor, there are [`$` and `$$` convenient shortcuts](https://stackoverflow.com/questions/31881583/protractor-by-css-vs-dollar-sign-vs-bling-bling) for CSS selectors: ``` $(".myclass") // means: element(by.css(".myclass")) $$(".myclass") // means: element.all(by.css(".myclass")) ``` Is it possible to *introduce custom shortcuts* for other locators? --- To be more specific, what if we want to have a `$r` and `$$r` shortcuts for "by repeater" calls. To be able to write: ``` $r("item in items") $$r("item in items") ``` instead of: ``` element(by.repeater("item in items")) element.all(by.repeater("item in items")) ```
To create a shortcut, add the custom locator on the global namespace and on the prototype of `ElementFinder` and `ElementArrayFinder`: ``` global.$r = function(selector) { return protractor.element(protractor.by.repeater(selector)); }; global.$$r = function(selector) { return protractor.element.all(protractor.by.repeater(selector)); }; ElementFinder.prototype.$$r = function(selector) { return this.all(protractor.by.repeater(selector)); }; ElementFinder.prototype.$r = function(selector) { return this.element(protractor.by.repeater(selector)); }; ElementArrayFinder.prototype.$$r = function(selector) { return this.all(protractor.by.repeater(selector)); }; ``` Usage: ``` $r("item in items") $$r("item in items") $("#id").$r("item in items") $("#id").$$r("item in items") ```
Sanitizing a Vaadin RichTextArea Vaadin framework has this useful RichTextArea component. It is however possible for users to insert harmful javascript for example into this field so the value of the field should be sanitized before saving. What would be the Vaadin way of doing this? Book of Vaadin only mentions that the field "should be sanitized" yet doesn't give a hint of how to actually do it. Asking in the forums a week ago didn't get any replies. I don't want to add anymore libraries to the project for this purpose. How would one go on about making his own RichTextArea sanitizer in Java with or without Vaadin?
The easiest approach is to use JSoup, which comes with Vaadin 7 (`vaadin-server` depends on it). E.g.: ``` Jsoup.clean(richTextArea.getValue(), Whitelist.simpleText()) ``` See [`Jsoup.clean`](http://jsoup.org/apidocs/org/jsoup/Jsoup.html#clean-java.lang.String-org.jsoup.safety.Whitelist-) > > > ``` > public static String clean(String bodyHtml, Whitelist whitelist) > > ``` > > Get safe HTML from untrusted input HTML, by parsing input HTML and filtering it through a white-list of permitted tags and attributes. > > > Parameters: > > > `bodyHtml` - input untrusted HTML (body fragment) > > > `whitelist` - white-list of permitted HTML elements > > > Returns: > > > safe HTML (body fragment) > > > and [`Whitelist`](http://jsoup.org/apidocs/org/jsoup/safety/Whitelist.html) > > > ``` > public class Whitelist extends Object > > ``` > > Whitelists define what HTML (elements and attributes) to allow through the cleaner. Everything else is removed. > > > Start with one of the defaults: > > > - `none()` > - `simpleText()` > - `basic()` > - `basicWithImages()` > - `relaxed()` > > >
How to parse aggregate in a group by statement [SNOWFLAKE] SQL How do you rewrite this code correctly in Snowflake? ``` select account_code, date, sum(box_revenue_recognition_amount) as box_revenue_recognition_amount , sum(case when box_flg = 1 then box_sku_quantity end) as box_sku_quantity , sum(box_revenue_recognition_refund_amount) as box_revenue_recognition_refund_amount , sum(box_discount_amount) as box_discount_amount , sum(box_shipping_amount) as box_shipping_amount , sum(box_cogs) as box_cogs , max(invoice_number) as invoice_number , max(order_number) as order_number , min(box_refund_date) as box_refund_date , first (case when order_season_rank = 1 then box_type end) as box_type , first (case when order_season_rank = 1 then box_order_season end) as box_order_season , first (case when order_season_rank = 1 then box_product_name end) as box_product_name , first (case when order_season_rank = 1 then box_coupon_code end) as box_coupon_code , first (case when order_season_rank = 1 then revenue_recognition_reason end) as revenue_recognition_reason from dedupe_sub_user_day group by account_code, date ``` I have tried to apply window rule has explained in [first\_value Snowflake documentation](https://docs.snowflake.com/en/sql-reference/functions/first_value.html) to no avail with the SQLCompilation Error: ... is not a valid group by expression ``` select account_code, date, first_value(case when order_season_rank = 1 then box_type end) over (order by box_type ) as box_type first_value(case when order_season_rank = 1 then box_order_season end) over (order by box_order_season ) as box_order_season, first_value(case when order_season_rank = 1 then box_product_name end) over (order by box_product_name ) as box_product_name, first_value(case when order_season_rank = 1 then box_coupon_code end) over (order by box_coupon_code ) as box_coupon_code, first_value(case when order_season_rank = 1 then revenue_recognition_reason end) over (order by revenue_recognition_reason ) as revenue_recognition_reason , sum(box_revenue_recognition_amount) as box_revenue_recognition_amount , sum(case when box_flg = 1 then box_sku_quantity end) as box_sku_quantity , sum(box_revenue_recognition_refund_amount) as box_revenue_recognition_refund_amount , sum(box_discount_amount) as box_discount_amount , sum(box_shipping_amount) as box_shipping_amount , sum(box_cogs) as box_cogs , max(invoice_number) as invoice_number , max(order_number) as order_number , min(box_refund_date) as box_refund_date from dedupe_sub_user_day group by 1,2 ```
FIRST_VALUE is not an aggregate function but a window function, thus you get an error when you use it in relation to a GROUP BY. If you want to use it with a GROUP BY, put an ANY_VALUE around it.

Here is some data I will use below in a CTE:

```
with data(id, seq, val) as (
    select * from values
        (1, 1, 10),
        (1, 2, 11),
        (1, 3, 12),
        (1, 4, 13),
        (2, 1, 20),
        (2, 2, 21),
        (2, 3, 22)
)
```

So, to show that FIRST_VALUE is a window function, we can just use it:

```
select *
    ,first_value(val)over(partition by id order by seq) as first_val
from data
```

| ID | SEQ | VAL | FIRST\_VAL |
| --- | --- | --- | --- |
| 1 | 1 | 10 | 10 |
| 1 | 2 | 11 | 10 |
| 1 | 3 | 12 | 10 |
| 1 | 4 | 13 | 10 |
| 2 | 1 | 20 | 20 |
| 2 | 2 | 21 | 20 |
| 2 | 3 | 22 | 20 |

So if we GROUP BY id, to avoid an error we have to wrap the FIRST_VALUE in an aggregate; given the values are all equal, ANY_VALUE is a good pick, and it seems it needs to be in another layer of SQL:

```
select id
    ,count(*) as count
    ,any_value(first_val) as first_val
from (
    select *
        ,first_value(val)over(partition by id order by seq) as first_val
    from data
)
group by 1
order by 1;
```

| ID | COUNT | FIRST\_VAL |
| --- | --- | --- |
| 1 | 4 | 10 |
| 2 | 3 | 20 |

Now MAX can be fun to use when paired with ROW_NUMBER() to pick the best value:

```
select id
    ,count(*) as count
    ,max(first_val) as first_val
from (
    select *
        ,row_number() over (partition by id order by seq) as rn
        ,iff(rn=1, val, null) as first_val
    from data
)
group by 1
order by 1;
```

This is almost more complex than the ANY_VALUE solution, but I feel the performance would be better; if they have the same magnitude of performance, I would always choose what is readable to you and your team over a small performance difference.
Physical operators in SQL Server execution plans: what are rebinds, rewinds and number of executions? I'm trying to understand physical operators in SQL Server execution plans. This page is quite helpful: <http://technet.microsoft.com/en-us/library/ms191158.aspx> SSMS 2008 shows some physical operator properties that are not displayed in SSMS 2005: `Estimated Number of Executions` and `Number of Executions`. But what do these actually mean, and how are they related to rebinds and rewinds? `Estimated Number of Executions` is particularly interesting because it doesn't seem to be stored in the XML. So how is it calculated? It seems to be equal to `Estimated Rebinds + Estimated Rewinds + 1.0`. But if that is the case why is `Number of Executions` not equal to `Actual Rebinds + Actual Rewinds + 1`? Thanks.
That description is a little off. Rewind and Rebind are only applicable within a loop join. They refer to the use of the probe values from the outer part of the loop on the inner part of the loop and are only reflected in certain operators (ones that are costly, so it's worth knowing how often they're getting recalled). Rebinds and Rewinds should correlate directly to the number of executions, not the number + 1.

Number of executions, estimated or actual, is the number of times that the operator is called. This can be affected by a lot of things. For example, in a loop join you'll see multiple calls to the operators in the inner branch, corresponding directly to the number of rows in the outer branch.

You will absolutely see differences between the actual and the estimated. In the case of a loop (great example to beat on) you'll see an estimated value of one in the inner loop, but the actual number of executions will be, as already stated, equal to the number of rows in the outer.
Implementing Interfaces That "Inherit" (Implement) A Common Interface? ``` interface ITurtle { void Fight(); void EatPizza(); } interface ILeonardo : ITurtle { void UseKatana(); } interface IRaphael : ITurtle { void UseSai(); } interface IDonatello : ITurtle { void UseBo(); } interface IMichelangelo : ITurtle { void UseNunchuku(); } ``` What if I want to create a grand turtle that can do all 4? I want to code: ``` class GrandTurtle : IMichelangelo, IDonatello, IRaphael, ILeonardo { // Implementation not shown for brevity. } ``` Is this possible? Because right now, it seems like I'd have to implement `Fight()` and `EatPizza()` 4 times each. But I think those two common functions will resolve and would only need to be implemented once, right?

I could have created the 4 intermediate interfaces without inheriting `ITurtle`, and then have `GrandTurtle` implement `ITurtle`. This solves the interface inheritance issue but now it looks semantically wrong because it makes `ITurtle` look like a 5th brother, which it's not. Plus, I want to be able to create turtle-specific classes, for example, `class BostonLeonardo : ILeonardo`.

I've read about this in many places and it seems like an endless debate: some say "inheritance within interfaces" is perfectly fine, and as for those who say it's not, either I don't understand their explanation or they just say it's bad practice without explaining why.
You can only implement the methods `Fight` and `EatPizza` once because only one of the interfaces defines them. If you had `Fight` and `EatPizza` on each of the `ILeonardo` etc. interfaces, you could choose to implement them once OR **use explicit interface implementations to change the behavior of those methods per interface signature**. I'll do an example because I love TMNT: ``` interface ILeonardo { void Fight(); void EatPizza(); void UseKatana(); } interface IRaphael { void Fight(); void EatPizza(); void UseSai(); } interface IDonatello { void Fight(); void EatPizza(); void UseBo(); } interface IMichelangelo { void Fight(); void EatPizza(); void UseNunchuku(); } class GrandTurtle : IMichelangelo, IDonatello, IRaphael, ILeonardo { // Code that fires when Fight is called on ILeonardo turtle = new GrandTurtle() void ILeonardo.Fight() { UseKatana(); } // Code that fires when Fight is called on IRaphael turtle = new GrandTurtle() void IRaphael.Fight() { UseSai(); } // Code that fires for all other turtles public void Fight() { UseThatCrappyStickThingTheOldActionFiguresCameWith(); } // Implement EatPizza() and such here... } ``` These explicit interface implementations would take effect **only when the type signature of GrandTurtle is the appropriate interface**.
Creating requirements.txt in pip compatible format in a conda virtual environment I have created a conda virtual environment on a Windows 10 PC to work on a project. To install the required packages and dependencies, I am using `conda install <package>` instead of `pip install <package>` as per the best practices mentioned in <https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#using-pip-in-an-environment> In order to distribute my software, I choose to create an environment.yml and a requirements.txt file targeting the conda and non-conda users respectively. I am able to export the current virtual environment into a yml file, so the conda users are taken care of. But, for the non-conda users to be able to replicate the same environment, I need to create and share the requirements.txt file. This file can be created using `conda list --export > requirements.txt` but this format is not compatible with pip and other users can't use `pip install -r requirements.txt` on their systems. Using `pip freeze > requiremens.txt` is a solution that is mentioned [here](https://stackoverflow.com/questions/48787250/set-up-virtualenv-using-a-requirements-txt-generated-by-conda) and [here](https://stackoverflow.com/questions/50777849/from-conda-create-requirements-txt-for-pip3). This means that non-conda users can simply execute `pip install -r requirements.txt` inside a virtual environment which they may create using virtualenv in the absence of conda. However, if you generate a requiremets.txt file in the above style, you will end up with a requirements.txt file that has symbolic links. This is because we tried to create a requirements.txt file for packages that are installed using `conda install` and not `pip install`. For example, the requirements.txt file that I generated in a similar fashion looks like this. ``` certifi==2020.6.20 cycler==0.10.0 kiwisolver==1.2.0 matplotlib @ file:///C:/ci/matplotlib-base_1603355780617/work mkl-fft==1.2.0 mkl-random==1.1.1 mkl-service==2.3.0 numpy @ file:///C:/ci/numpy_and_numpy_base_1596215850360/work olefile==0.46 pandas @ file:///C:/ci/pandas_1602083338010/work Pillow @ file:///C:/ci/pillow_1602770972588/work pyparsing==2.4.7 python-dateutil==2.8.1 pytz==2020.1 sip==4.19.13 six==1.15.0 tornado==6.0.4 wincertstore==0.2 ``` These symbolic links will lead to errors when this file is used to install the dependencies. Steps I took that landed me to the above requirements.txt file: 1. Created a new conda virtual environment using `conda create -n myenv python=3.8` 2. Activated the newly created conda virtual environment using `conda activate myenv` 3. Installed pip using `conda install pip` 4. Installed pandas using `conda intall pandas` 5. Installed matplotlib using `conda install matplotlib` 6. generated a pip compatible requrements.txt file using `pip freeze > requirements.txt` So, my question is how do you stick to the best practice of using `conda install` instead of `pip install` while still being able to distribute your software package to both conda and non-conda users?
The best solution I've found for the above is the combination I will describe below. For `conda`, I would first export the environment list as `environment.yml` and omit the package build numbers, which is often what makes it hard to reproduce the environment on another OS: ``` conda env export > environment.yml --no-builds ``` Output: ``` name: myenv channels: - defaults - conda-forge dependencies: - blas=1.0 - ca-certificates=2020.10.14 - certifi=2020.6.20 ... ``` For `pip`, what you describe above is apparently a [well-known issue](https://github.com/pypa/pip/issues/8176) in more recent versions of pip. The workaround to get a "clean" `requirements.txt` file, is to export as such: ``` pip list --format=freeze > requirements.txt ``` Output: ``` certifi==2020.6.20 cycler==0.10.0 kiwisolver==1.2.0 matplotlib==3.3.2 mkl-fft==1.2.0 ... ``` Notice that the above are different between `pip` and `conda` and that is most likely because `conda` is more generic than `pip` and includes not only Python packages. Personally, I have found that for distributing a package, it is perhaps more concise to determine the minimum set of packages required and their versions by inspecting your code (what imports do you make?), instead of blindly exporting the full `pip` or `conda` lists, which might end up (accidentally or not) including packages that are not really necessary to use the package.
Custom .NET Data Providers Is it possible to use a custom .NET data provider without installing it in the GAC? Can I reference a custom DLL and register it inside my configuration file?
**Yes**, you can register an implementation of the [DbProviderFactory](http://msdn.microsoft.com/en-us/library/system.data.common.dbproviderfactory.aspx) class by adding the [following section](http://msdn.microsoft.com/en-us/library/dd0w4a2z%28v=vs.100%29.aspx) in your configuration file: ``` <system.data> <DbProviderFactories> <add name="My Custom Data Provider" invariant="MyCustomDataProvider" description="Data Provider for My Custom Store" type="MyNamespace.MyCustomProviderFactory, MyCustomDataProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=" /> </DbProviderFactories> </system.data> ``` The `MyCustomDataProvider` assembly doesn't have to be registered in the GAC but can be deployed together with the application as a [private assembly](http://msdn.microsoft.com/en-us/library/windows/desktop/ff951638%28v=vs.85%29.aspx). You can refer to the registered data provider programmatically by using the value specified in the `invariant` attribute. For example you could tell ADO.NET to use the `MyNamespace.MyCustomProviderFactory` by specifying `MyCustomDataProvider` as the `providerName` in the connection string: ``` <connectionStrings> <add name="ConnString" providerName="MyCustomDataProvider" connectionString="MyCustomConnectionString" /> </connectionStrings> ``` In code you can use the same provider name with the [DbProviderFactories.GetFactory](http://msdn.microsoft.com/en-us/library/system.data.common.dbproviderfactories.getfactory.aspx) method: ``` DbProviderFactory factory = DbProviderFactories.GetFactory("MyCustomDataProvider"); ``` where `factory` will be an instance of the `MyNamespace.MyCustomProviderFactory` class.
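Once the factory is registered, the rest of the data access code can stay provider-agnostic. A minimal sketch (the connection string and SQL below are placeholders):

```
using System.Data.Common;

DbProviderFactory factory = DbProviderFactories.GetFactory("MyCustomDataProvider");

using (DbConnection connection = factory.CreateConnection())
{
    connection.ConnectionString = "MyCustomConnectionString";
    connection.Open();

    using (DbCommand command = factory.CreateCommand())
    {
        command.Connection = connection;
        command.CommandText = "SELECT 1";
        object result = command.ExecuteScalar();
    }
}
```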
Why is bool not represented using float? In C (before C99), booleans are usually represented as ``` typedef int bool; #define true 1 #define false 0 ``` Why is it represented as 'int' rather than 'float'? This is an interview question; even I wonder why such a question is asked! Any convincing answers?
`bool` values are mostly used in comparisons, and using the `int` type uses the [integer ALU](http://en.wikipedia.org/wiki/Arithmetic_logic_unit) for these comparisons. It is very fast, as it's in the CPU's normal pipeline. If you were to use the `float` type, then it would have to use the floating-point unit, which would take more cycles. Also, if you wanted to support using your `bool` type in mathematical expressions, i.e.: ``` x = (4 * !!bool1) + (2 * !bool1); ``` so as to avoid unnecessary branching, your use of the integer ALU would also be faster than using the floating point unit. The above code is equivalent to the following branching code: ``` if (bool1) { x = 4; } else { x = 2; } ```
How to make good use of stack trace (from kernel or core dump)? If you are lucky, when your kernel module crashes you get an oops with a log containing a lot of information, such as values in the registers etc. One such piece of information is the stack trace (the same is true for core dumps, but I had originally asked this for kernel modules). Take this example: ``` [<f97ade02>] ? skink_free_devices+0x32/0xb0 [skin_kernel] [<f97aba45>] ? cleanup_module+0x1e5/0x550 [skin_kernel] [<c017d0e7>] ? __stop_machine+0x57/0x70 [<c016dec0>] ? __try_stop_module+0x0/0x30 [<c016f069>] ? sys_delete_module+0x149/0x210 [<c0102f24>] ? sysenter_do_call+0x12/0x16 ``` My guess is that the `+<number1>/<number2>` has something to do with the offset from the function in which the error occurred. That is, by inspecting this number, perhaps by looking at the assembly output, I should be able to find the line (better yet, the instruction) in which this error occurred. Is that correct? My question is, what are these two numbers exactly? How do you make use of them?
``` skink_free_devices+0x32/0xb0 ``` This means the offending instruction is `0x32` bytes from the start of the function `skink_free_devices()` which is `0xB0` bytes long in total. If you compile your kernel with `-g` enabled, then you can get the line number inside functions where the control jumped using the tool `addr2line` or our good old `gdb` Something like this ``` $ addr2line -e ./vmlinux 0xc01cf0d1 /mnt/linux-2.5.26/include/asm/bitops.h:244 or $ gdb ./vmlinux ... (gdb) l *0xc01cf0d1 0xc01cf0d1 is in read_chan (include/asm/bitops.h:244). (...) 244 return ((1UL << (nr & 31)) & (((const volatile unsigned int *) addr)[nr >> 5])) != 0; (...) ``` So just give the address you want to inspect to `addr2line` or `gdb` and they shall tell you the line number in the source file where the offending function is present See [this](http://ds9a.nl/symoops.html) article for full details **EDIT:** `vmlinux` is the uncompressed version of the kernel used for debugging and is generally found @ `/lib/modules/$(uname -r)/build/vmlinux` provided you have built your kernel from sources. `vmlinuz` that you find at `/boot` is the compressed kernel and may not be that useful in debugging
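For a symbol that lives in a module rather than in `vmlinux` (like `skink_free_devices+0x32/0xb0 [skin_kernel]` from the question), you can point `gdb` at the module object file instead, provided the module was built with debug info (`-g` / `CONFIG_DEBUG_INFO`). A sketch:

```
$ gdb skin_kernel.ko
(gdb) list *(skink_free_devices+0x32)
```

gdb resolves the symbol-plus-offset expression to the matching source line, just as it does for an absolute address in `vmlinux`.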
How can I specify the order that before\_filters are executed? Does rails make any guarantees about the order that before filters get executed with either of the following usages: ``` before_filter [:fn1, :fn2] ``` or ``` before_filter :fn1 before_filter :fn2 ``` I'd appreciate any help.
If you refer <http://api.rubyonrails.org/v2.3.8/classes/ActionController/Filters/ClassMethods.html>, there is a subheading called "Filter chain ordering", here is the example code from that: ``` class ShoppingController < ActionController::Base before_filter :verify_open_shop class CheckoutController < ShoppingController prepend_before_filter :ensure_items_in_cart, :ensure_items_in_stock ``` According to the explanation: > > The filter chain for the `CheckoutController` is now > `:ensure_items_in_cart`, `:ensure_items_in_stock`, > `:verify_open_shop.` > > > So you can explicitly give the order of the filter chain like that.
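As far as ordering of plain (non-prepended) filters goes, they are appended to the chain in declaration order, whether you register them one per call or list several methods in one call, so in both of the question's usages `fn1` runs before `fn2`:

```
before_filter :fn1
before_filter :fn2
# chain: fn1, then fn2 — same ordering as: before_filter :fn1, :fn2
```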
How do I install Fish Shell on AWS Linux I can't seem to install Fish shell on an AWS Linux instance. I keep getting the following error and can't seem to find a fix for it. ``` [root@ip-172-31-20-125 ec2-user]# yum -y install fish Loaded plugins: priorities, update-motd, upgrade-helper Repository shells_fish_release_2 is listed more than once in the configuration Resolving Dependencies --> Running transaction check ---> Package fish.x86_64 0:2.5.0-1.2 will be installed --> Processing Dependency: hostname for package: fish-2.5.0-1.2.x86_64 --> Finished Dependency Resolution Error: Package: fish-2.5.0-1.2.x86_64 (shells_fish_release_2) Requires: hostname You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest ```
Same way as on Centos 6. [from:](https://software.opensuse.org/download.html?project=shells%3Afish%3Arelease%3A2&package=fish) ``` cd /etc/yum.repos.d/ wget http://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo yum install fish ``` --- 2021 update: If you're running a newer version of AWS Linux - run `rpm -E %{rhel}` to see the RHEL version, and then use [one of the links here](https://software.opensuse.org/download.html?project=shells%3Afish%3Arelease%3A3&package=fish) for the `wget` command. For instance, if the RHEL version is 7, run as root: ``` cd /etc/yum.repos.d/ wget --no-check-certificate https://download.opensuse.org/repositories/shells:fish:release:3/CentOS_7/shells:fish:release:3.repo yum install fish ```
java.time: Does the CET time zone consider daylight saving time? I use the new `java.time` implementation of Java 8 and wonder about the result of a UTC to [CET](http://en.wikipedia.org/wiki/Central_European_Time) time conversion. ``` ZonedDateTime utcTime = ZonedDateTime.of(2014, 7, 1, 8, 0, 0, 0, ZoneId.of("UTC")); ZonedDateTime cetTime = ZonedDateTime.ofInstant(utcTime.toInstant(), ZoneId.of("CET")); System.out.println("Summer-UTC-Time: " + utcTime); System.out.println("Summer-CET-Time: " + cetTime); System.out.println(); utcTime = ZonedDateTime.of(2014, 1, 1, 8, 0, 0, 0, ZoneId.of("UTC")); cetTime = ZonedDateTime.ofInstant(utcTime.toInstant(), ZoneId.of("CET")); System.out.println("Winter-UTC-Time: " + utcTime); System.out.println("Winter-CET-Time: " + cetTime); ``` I expected that the CET time would always be +1 from the UTC time, but instead I got: ``` Summer-UTC-Time: 2014-07-01T08:00Z[UTC] Summer-CET-Time: 2014-07-01T10:00+02:00[CET] -> +2 **Unexpected** Winter-UTC-Time: 2014-01-01T08:00Z[UTC] Winter-CET-Time: 2014-01-01T09:00+01:00[CET] -> +1 Expected ``` So apparently I have to deal with daylight saving time, which I did not expect when using CET. Is the `java.time` CET in truth [CEST](http://en.wikipedia.org/wiki/Central_European_Summer_Time)? And if so, what zone should I use if I need CET?
The [IANA definition of CET](http://www.iana.org/time-zones) is that it follows the time-zone rules of Central Europe, which includes both winter and summer time. The rules can be seen [here](https://github.com/eggert/tz/blob/master/europe#L627), which shows that "CET" is based on ["C-Eur"](https://github.com/eggert/tz/blob/master/europe#L564) which includes summer time. In `java.time` you can also see the full set of rules: ``` ZoneId zone = ZoneId.of("CET"); System.out.println(zone); System.out.println(zone.getRules()); for (ZoneOffsetTransition trans : zone.getRules().getTransitions()) { System.out.println(trans); } for (ZoneOffsetTransitionRule rule : zone.getRules().getTransitionRules()) { System.out.println(rule); } ``` which prints: ``` CET ZoneRules[currentStandardOffset=+01:00] Transition[Gap at 1916-04-30T23:00+01:00 to +02:00] Transition[Overlap at 1916-10-01T01:00+02:00 to +01:00] Transition[Gap at 1917-04-16T02:00+01:00 to +02:00] Transition[Overlap at 1917-09-17T03:00+02:00 to +01:00] Transition[Gap at 1918-04-15T02:00+01:00 to +02:00] Transition[Overlap at 1918-09-16T03:00+02:00 to +01:00] Transition[Gap at 1940-04-01T02:00+01:00 to +02:00] Transition[Overlap at 1942-11-02T03:00+02:00 to +01:00] Transition[Gap at 1943-03-29T02:00+01:00 to +02:00] Transition[Overlap at 1943-10-04T03:00+02:00 to +01:00] Transition[Gap at 1944-04-03T02:00+01:00 to +02:00] Transition[Overlap at 1944-10-02T03:00+02:00 to +01:00] Transition[Gap at 1945-04-02T02:00+01:00 to +02:00] Transition[Overlap at 1945-09-16T03:00+02:00 to +01:00] Transition[Gap at 1977-04-03T02:00+01:00 to +02:00] Transition[Overlap at 1977-09-25T03:00+02:00 to +01:00] Transition[Gap at 1978-04-02T02:00+01:00 to +02:00] Transition[Overlap at 1978-10-01T03:00+02:00 to +01:00] Transition[Gap at 1979-04-01T02:00+01:00 to +02:00] Transition[Overlap at 1979-09-30T03:00+02:00 to +01:00] Transition[Gap at 1980-04-06T02:00+01:00 to +02:00] Transition[Overlap at 1980-09-28T03:00+02:00 to +01:00] Transition[Gap at 1981-03-29T02:00+01:00 to +02:00] Transition[Overlap at 1981-09-27T03:00+02:00 to +01:00] Transition[Gap at 1982-03-28T02:00+01:00 to +02:00] Transition[Overlap at 1982-09-26T03:00+02:00 to +01:00] Transition[Gap at 1983-03-27T02:00+01:00 to +02:00] Transition[Overlap at 1983-09-25T03:00+02:00 to +01:00] Transition[Gap at 1984-03-25T02:00+01:00 to +02:00] Transition[Overlap at 1984-09-30T03:00+02:00 to +01:00] Transition[Gap at 1985-03-31T02:00+01:00 to +02:00] Transition[Overlap at 1985-09-29T03:00+02:00 to +01:00] Transition[Gap at 1986-03-30T02:00+01:00 to +02:00] Transition[Overlap at 1986-09-28T03:00+02:00 to +01:00] Transition[Gap at 1987-03-29T02:00+01:00 to +02:00] Transition[Overlap at 1987-09-27T03:00+02:00 to +01:00] Transition[Gap at 1988-03-27T02:00+01:00 to +02:00] Transition[Overlap at 1988-09-25T03:00+02:00 to +01:00] Transition[Gap at 1989-03-26T02:00+01:00 to +02:00] Transition[Overlap at 1989-09-24T03:00+02:00 to +01:00] Transition[Gap at 1990-03-25T02:00+01:00 to +02:00] Transition[Overlap at 1990-09-30T03:00+02:00 to +01:00] Transition[Gap at 1991-03-31T02:00+01:00 to +02:00] Transition[Overlap at 1991-09-29T03:00+02:00 to +01:00] Transition[Gap at 1992-03-29T02:00+01:00 to +02:00] Transition[Overlap at 1992-09-27T03:00+02:00 to +01:00] Transition[Gap at 1993-03-28T02:00+01:00 to +02:00] Transition[Overlap at 1993-09-26T03:00+02:00 to +01:00] Transition[Gap at 1994-03-27T02:00+01:00 to +02:00] Transition[Overlap at 1994-09-25T03:00+02:00 to +01:00] Transition[Gap at 1995-03-26T02:00+01:00 to +02:00] 
Transition[Overlap at 1995-09-24T03:00+02:00 to +01:00] Transition[Gap at 1996-03-31T02:00+01:00 to +02:00] Transition[Overlap at 1996-10-27T03:00+02:00 to +01:00] Transition[Gap at 1997-03-30T02:00+01:00 to +02:00] Transition[Overlap at 1997-10-26T03:00+02:00 to +01:00] TransitionRule[Gap +01:00 to +02:00, SUNDAY on or after MARCH 25 at 02:00 STANDARD, standard offset +01:00] TransitionRule[Overlap +02:00 to +01:00, SUNDAY on or after OCTOBER 25 at 02:00 STANDARD, standard offset +01:00] ``` The key here is to understand that the time-zone identifier and the ["short name"](http://docs.oracle.com/javase/8/docs/api/java/time/ZoneId.html#getDisplayName-java.time.format.TextStyle-java.util.Locale-) of that identifier are two different elements. The identifier is always fixed as "CET", but the name changes between "CET" and "CEST".
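If what you actually want is a fixed +01:00 offset with no summer-time shift (rather than the Central European rules), use a `ZoneOffset` instead of a region-based zone. A small sketch based on the question's code:

```
ZonedDateTime utcTime = ZonedDateTime.of(2014, 7, 1, 8, 0, 0, 0, ZoneId.of("UTC"));
ZonedDateTime fixedPlusOne = utcTime.withZoneSameInstant(ZoneOffset.ofHours(1));
System.out.println(fixedPlusOne); // 2014-07-01T09:00+01:00 — no DST applied
```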
error : The given path's format is not supported I am getting the error `The given path's format is not supported.` at this line ``` System.IO.Directory.CreateDirectory(visit_Path); ``` Where am I making a mistake in the code below? ``` void Create_VisitDateFolder() { this.pid = Convert.ToInt32(db.GetPatientID(cmbPatientName.SelectedItem.ToString())); String strpath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location); String path = strpath + "\\Patients\\Patient_" + pid + "\\"; string visitdate = db.GetPatient_visitDate(pid); this.visitNo = db.GetPatientID_visitNo(pid); string visit_Path = path +"visit_" + visitNo + "_" + visitdate+"\\"; bool IsVisitExist = System.IO.Directory.Exists(path); bool IsVisitPath=System.IO.Directory.Exists(visit_Path); if (!IsVisitExist) { System.IO.Directory.CreateDirectory(path); } if (!IsVisitPath) { System.IO.Directory.CreateDirectory(visit_Path);\\error here } } ``` I am getting this value for `visit_Path`: ``` C:\Users\Monika\Documents\Visual Studio 2010\Projects\SonoRepo\SonoRepo\bin\Debug\Patients\Patient_16\visit_4_16-10-2013 00:00:00\ ```
In general always use [`Path.Combine`](http://msdn.microsoft.com/en-us/library/system.io.path.combine.aspx) to create paths: ``` String strPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location); String path = Path.Combine(strPath,"Patients","Patient_" + pid); string visitdate = db.GetPatient_visitDate(pid); this.visitNo = db.GetPatientID_visitNo(pid); string fileName = string.Format("visit_{0}_{1}", visitNo, visitdate); string visit_Path = Path.Combine(path, fileName); bool IsVisitExist = System.IO.Directory.Exists(path); bool IsVisitPath=System.IO.Directory.Exists(visit_Path); ``` To replace invalid characters from a filename you could use this loop: ``` string invalidChars = new string(Path.GetInvalidFileNameChars()) + new string(Path.GetInvalidPathChars()); foreach (char c in invalidChars) { visit_Path = visit_Path.Replace(c.ToString(), ""); // or with "." } ```
How do you split reading a large csv file into evenly-sized chunks in Python? In a basic script, I had the following process. ``` import csv reader = csv.reader(open('huge_file.csv', 'rb')) for line in reader: process_line(line) ``` See this related [question](https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python). I want to send the lines to `process_line` in batches of 100 rows, to implement batch sharding. The problem with implementing the related answer is that the csv object is not subscriptable and does not support `len`. ``` >>> import csv >>> reader = csv.reader(open('dataimport/tests/financial_sample.csv', 'rb')) >>> len(reader) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type '_csv.reader' has no len() >>> reader[10:] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '_csv.reader' object is unsubscriptable >>> reader[10] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '_csv.reader' object is unsubscriptable ``` How can I solve this?
Just make your `reader` subscriptable by wrapping it into a `list`. Obviously this will break on really large files (see alternatives in the *Updates* below): ``` >>> reader = csv.reader(open('big.csv', 'rb')) >>> lines = list(reader) >>> print lines[:100] ... ``` Further reading: [How do you split a list into evenly sized chunks in Python?](https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python) --- **Update 1** (list version): Another possible way would be to just process each chunk as it arrives, while iterating over the lines: ``` #!/usr/bin/env python import csv reader = csv.reader(open('4956984.csv', 'rb')) chunk, chunksize = [], 100 def process_chunk(chunk): print len(chunk) # do something useful ... for i, line in enumerate(reader): if (i % chunksize == 0 and i > 0): process_chunk(chunk) del chunk[:] # or: chunk = [] chunk.append(line) # process the remainder process_chunk(chunk) ``` --- **Update 2** (generator version): I haven't benchmarked it, but maybe you can increase performance by using a chunk *generator*: ``` #!/usr/bin/env python import csv reader = csv.reader(open('4956984.csv', 'rb')) def gen_chunks(reader, chunksize=100): """ Chunk generator. Take a CSV `reader` and yield `chunksize` sized slices. """ chunk = [] for i, line in enumerate(reader): if (i % chunksize == 0 and i > 0): yield chunk del chunk[:] # or: chunk = [] chunk.append(line) yield chunk for chunk in gen_chunks(reader): print chunk # process chunk # test gen_chunk on some dummy sequence: for chunk in gen_chunks(range(10), chunksize=3): print chunk # process chunk # => yields # [0, 1, 2] # [3, 4, 5] # [6, 7, 8] # [9] ``` There is a minor gotcha, as [@totalhack](https://stackoverflow.com/users/10682164/totalhack) [points out](https://stackoverflow.com/questions/4956984/how-do-you-split-reading-a-large-csv-file-into-evenly-sized-chunks-in-python/4957046?noredirect=1#comment103177531_4957046): > > Be aware that this yields the same object over and over with different contents. This works fine if you plan on doing everything you need to with the chunk between each iteration. > > >
How to make Windows always use the latest CLR for .Net assemblies? Days ago I saw a very useful registry value that instructs Windows to always use the latest CLR version installed; this way the user does not need to have older versions of .NetFx installed on the system to run older applications targeting those older .Net Framework versions, like the always annoying .NetFx 3.5. I tested it and it works as expected. The problem is that I lost the value and the website where I saw it. I don't remember it, and I have put effort into finding it again on Google, around the MSDN docs and random Windows personalization forums, with no luck. My question is to learn again the details of the registry value I referred to.
It should be as simple as doing the following: > > 1. Open `Regedit` > 2. Navigate to `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework` (64-bit) > > > 2b. Navigate to `HKEY_LOCAL_MACHINE\SOFTWARE\wow6432node\Microsoft\.NETFramework` (32-bit) > 3. Create a new `DWORD` value > 4. Set the name to `OnlyUseLatestCLR` and set value as `1` (decimal) > > > - [Google Search](https://www.google.com/#safe=off&q=force+windows+to+Use+always+last+CLR+registry+value) - [Relevant Source](http://www.overclockers.com/forums/archive/index.php/t-695707.html) - [Relevant Stackoverflow Question](https://stackoverflow.com/questions/2094694/how-can-i-run-powershell-with-the-net-4-runtime)
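If you prefer to script it rather than edit the registry by hand, the same value can be set from an elevated command prompt (64-bit and 32-bit hives respectively):

```
reg add "HKLM\SOFTWARE\Microsoft\.NETFramework" /v OnlyUseLatestCLR /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\.NETFramework" /v OnlyUseLatestCLR /t REG_DWORD /d 1 /f
```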
Backbone bootstrapped collection doesn't initialize correctly I have an issue that was really hard to notice, because for the most part everything works. It was only when I tried to manipulate my data in my collection's initialize function that I found a problem. The backbone docs at <http://backbonejs.org/#Collection-constructor> say "If you define an initialize function, it will be invoked when the collection is created." so I interpreted that as: my initialize function won't run until after my models are set. "That sounds ideal," said I, but then I ran into this. My bootstrap code is as follows: ``` new MyCollection(<?php if ($data) {echo json_encode($data);} ?>); ``` My collection: ``` var MyCollection = Backbone.Collection.extend({ model: MyModel, initialize: function() { console.log(this); console.log(this.length); this.each(function(model) { console.log(model); }); } }); ``` I got strange results. The first `console.log(this);` was a collection object as expected: ``` { .... models: [3], length: 3 .... } ``` and the second `console.log(this.length);` printed out the number `0`; the `console.log` inside `this.each()` didn't show up. What's happening?
The Collection constructor [looks like this](https://github.com/documentcloud/backbone/blob/master/backbone.js#L597): ``` var Collection = Backbone.Collection = function(models, options) { //... this._reset(); this.initialize.apply(this, arguments); //... this.reset(models, {silent: true, parse: options.parse}); //... }; ``` Step by step: 1. The [`this._reset()`](https://github.com/documentcloud/backbone/blob/master/backbone.js#L896) call does a `this.length = 0`. 2. The `this.initialize.apply(...)` is the call to your `initialize` method. 3. The [`this.reset(...)`](https://github.com/documentcloud/backbone/blob/master/backbone.js#L741) will call [`add`](https://github.com/documentcloud/backbone/blob/master/backbone.js#L633) to add the models. The `add` call will update the collection's `models` and `length` properties. So, when `initialize` is called, you'll have `this.length == 0` and `this.models` will be an empty array since only `_reset` will have been called here. Now we can easily see why `this.each` doesn't do anything and why `console.log(this.length)` says `0`. But why does `console.log(this)` tell us that we have a populated collection? Well, `console.log` doesn't happen right away, it just grabs references to its arguments and logs something to the console a little bit later; by the time `console.log` gets around to putting something in the console, you'll have gotten through **(3)** above and that means that you'll have the `this.models` and `this.length` that you're expecting to see. If you say ``` console.log(this.toJSON()); ``` or: ``` console.log(_(this.models).clone()) ``` you'll see the state of things when `console.log` is called rather than the state of things when `console.log` writes to the console. The documentation isn't exactly explicit about what is supposed to be ready when `initialize` is called so you're stuck tracing through the source. This isn't ideal but at least the Backbone source is clean and straight forward. You'll notice that `initialize` is called like this: ``` this.initialize.apply(this, arguments); ``` The [`arguments`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/arguments) in there means that `initialize` will receive the same arguments as the constructor so you could look in there if you wanted: ``` initialize: function(models, options) { // The raw model data will be in `models` so do what // needs to be done. } ```
Practical meaning of std::strong\_ordering and std::weak\_ordering I've been reading a bit about C++20's [consistent comparison](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0515r3.pdf) (i.e. `operator<=>`) but couldn't understand the practical difference between `std::strong_ordering` and `std::weak_ordering` (the same goes for the `_equality` versions, for that matter). Other than being very descriptive about the substitutability of the type, does it actually affect the generated code? Does it add any constraints on how one could use the type? Would love to see a real-life example that demonstrates this.
> > Does it add any constraints for how one could use the type? > > > ~~One very significant constraint (which wasn't intended by the original paper) was the adoption of the significance of `strong_ordering` by [P0732](https://wg21.link/p0732) as an indicator that a class type can be used as a non-type template parameter. `weak_ordering` isn't sufficient for this case due to how template equivalence has to work.~~ This is no longer the case, as non-type template parameters no longer work this way (see [P1907R0](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1907r0.html) for explanation of issues and [P1907R1](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1907r1.html) for wording of the new rules). Generally, it's possible that some algorithms simply require `weak_ordering` but other algorithms require `strong_ordering`, so being able to annotate that on the type might mean a compile error (insufficiently strong ordering provided) instead of simply failing to meet the algorithm's requirements at runtime and hence just being undefined behavior. But all the algorithms in the standard library and the Ranges TS that I know of simply require `weak_ordering`. I do not know of one that requires `strong_ordering` off the top of my head. > > Does it actually affect the generated code? > > > Outside of the cases where `strong_ordering` is required, or an algorithm explicitly chooses different behavior based on the comparison category, no.
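To make the distinction tangible, here is a small sketch (the type names are illustrative): a defaulted comparison over an `int` yields `std::strong_ordering`, because equal values are fully substitutable, while a case-insensitive string wrapper can honestly promise only `std::weak_ordering` — "abc" and "ABC" compare equivalent but remain distinguishable:

```
#include <algorithm>
#include <cctype>
#include <compare>
#include <string>

struct Price {                       // strong: a == b means the values are interchangeable
    int cents;
    auto operator<=>(const Price&) const = default;   // std::strong_ordering
};

struct CaseInsensitiveName {         // weak: equivalent, but not substitutable
    std::string value;

    std::weak_ordering operator<=>(const CaseInsensitiveName& other) const {
        const std::size_t n = std::min(value.size(), other.value.size());
        for (std::size_t i = 0; i < n; ++i) {
            const int a = std::tolower(static_cast<unsigned char>(value[i]));
            const int b = std::tolower(static_cast<unsigned char>(other.value[i]));
            if (a != b)
                return a < b ? std::weak_ordering::less : std::weak_ordering::greater;
        }
        return value.size() <=> other.value.size();   // strong_ordering converts to weak_ordering
    }
    bool operator==(const CaseInsensitiveName& other) const {
        return (*this <=> other) == 0;
    }
};
```

Algorithms that only need an ordering (e.g. `std::sort`) are happy with either; the category mainly documents whether "equivalent" may be treated as "identical".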
Hide submit button until form is valid I'm fairly new to Jquery, so this might be a simple problem, but is there a way to hide the submit button on a form until all the fields have been validated. The validation would need to be an 'as you type' solution. Basically I have 3 fields - first name, last name and e-mail. I'd like the submit button to stay hidden until the two 'name' fields have been filled in and a valid e-mail address has been entered into the e-mail field. The form itself uses AJAX to input the form data into a database. The form is in a lightbox which should automatically close once the submit button is clicked. You can see an example here: <http://testing.xenongroupadmin.com/whatis/pfi> Ignore the 'Close this Window' link - that's just there for my convenience and will be removed in the final version. Below is the HTML code for the form, followed by the JQuery/AJAX submission code: ``` <form id="registerform" action="thanks.php" method="POST"> <ul id="inputform"> <li> <label for="firstname" id="firstnamelabel">First Name</label> <input type="text" name="first_name" id="fname" class="registerboxes" /> </li> <li> <label for="lastname" id="lastnamelabel">Last Name</label> <input type="text" name="last_name" id="lname" class="registerboxes" /> </li> <li> <label for="email" id="emaillabel">E-mail Address</label> <input type="text" name="emailbox" id="email" class="registerboxes" /> </li> </ul> <input type="submit" value="Submit" id="emailbutton" /> </form> ``` And the Jquery: ``` $(document).ready(function(){ $("form#registerform").submit(function() { var fname = $('#fname').attr('value'); var lname = $('#lname').attr('value'); var email = $('#email').attr('value'); $.ajax({ type: "POST", url: "post.php", data: "fname="+ fname +"& lname="+ lname +"& email="+ email, success: function(){ $('div#register-panel, div#lightbox').fadeOut(300); } }); return false; }); }); ``` Thanks!
<http://jsfiddle.net/nickaknudson/KnZaq/> ``` $(document).ready(function() { $('#emailbutton').hide(); $('input').change(function(e) { if ($('#fname').val() && $('#lname').val() && $('#email').val() && validateEmail($('#email').val())) { $('#emailbutton').show(); } }); }); var validateEmail = function(email) { var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/; return re.test(email); };​ ``` **UPDATE** Sorry, it may have been buggy on that version of the fiddle. Try again? <http://jsfiddle.net/nickaknudson/KnZaq/3/> **RESOURCES** - [Validate email address in JavaScript?](https://stackoverflow.com/questions/46155/validate-email-address-in-javascript)
another switch case with functions Diving deeper into the interesting Python language: there is no switch construct in the language, so using dictionaries is the first-choice replacement, according to Learning Python, first edition. So I tried something like: ``` cases = { 2 : readt3(e,t,off, partElems, partsNodes), # to read the triangular elements 3 : readq4(e,t,off, partElems, partsNodes), # to read the quadrangular elements 5 : readh8(e,t,off, partElems, partsNodes), # to read the hexa elements } # define functions def readt3( e, t, off, partElems, partsNodes, partPnt ): partsElems[partPnt].append(e) nods = t[offset: offset+3]; for n in nods: partsNodes[partPnt].append(n) return ``` And got the error "readt3 is undefined". I thought I got this because it is not defined before the cases dict, so I moved the function definitions up above `cases`, but then I got another error, this time "e is not defined". I could not understand this: `e` is a function parameter, so why do I get a definition-related error on `e`? Where should the function definitions be placed when emulating a switch-case in this situation?
When you are doing something like this: ``` ... 2 : readt3(e,t,off, partElems, partsNodes) ... ``` you are actually evaluating (calling) the function `readt3` with the arguments `e,t,off, partElems, partsNodes`. What I think you want to do is something like this (to emulate a switch-case statement): ``` def readt3( e, t, offset, partsElems, partsNodes, partPnt ): partsElems[partPnt].append(e) nods = t[offset: offset+3]; for n in nods: partsNodes[partPnt].append(n) return # And of course all your function definitions should be here, before the cases dict. cases = { 2 : readt3, # to read the triangular elements 3 : readq4, # to read the quadrangular elements 5 : readh8, # to read the hexa elements } ``` and now you can call your cases given a `case` argument like this: ``` case = 2 cases[case](e, t, offset, partsElems, partsNodes, partPnt) ```
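One more detail worth noting when emulating switch-case this way: a key that is missing from the dict raises `KeyError`, so if you want a "default" branch, look the handler up with `dict.get`. A small sketch (variable names follow the question):

```
handler = cases.get(case)
if handler is None:
    # the "default" branch of the emulated switch
    raise ValueError("unsupported element type: %d" % case)
handler(e, t, offset, partsElems, partsNodes, partPnt)
```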
NextJS getStaticProps() never called I am making a simple website and I would like to fetch data from an API and display it on my component. The problem is that the `getStaticProps()` method is never called. Here is the code of the component : ``` import React from "react"; import {GetStaticProps, InferGetStaticPropsType} from "next"; type RawProject = { owner: string; repo: string; link: string; description: string; language: string; stars: number; forks: number; } function Projects({projects}: InferGetStaticPropsType<typeof getStaticProps>) { console.log(projects); return ( <section id="projects" className="bg-white p-6 lg:p-20"> <h1 className="sm:text-4xl text-2xl font-medium title-font mb-4 text-gray-900 pb-6 text-center"> Quelques de mes projets </h1> {/* <div className="container px-5 mx-auto"> <div className="flex flex-wrap"> {rawProjects.map((project: RawProject) => ( <ProjectCard title={project.repo} language={project.language} description={project.description} imageUrl="https://dummyimage.com/720x400" repoUrl={project.link} /> ))} </div> </div> */} </section> ); } export const getStaticProps: GetStaticProps = async () => { console.log("getStaticProps()"); const res = await fetch("https://gh-pinned-repos-5l2i19um3.vercel.app/?username=ythepaut"); const projects: RawProject[] = await res.json(); return !projects ? {notFound: true} : { props: {projects: projects}, revalidate: 3600 }; } export default Projects; ``` The full code can be found here : <https://github.com/ythepaut/webpage/tree/project-section> I am not sure if the problem is caused by the fact that I use typescript, or that I use a custom `_app.tsx` I tried the solutions from : - <https://github.com/vercel/next.js/issues/11328> - [How to make Next.js getStaticProps work with typescript](https://stackoverflow.com/questions/65078245/how-to-make-next-js-getstaticprops-work-with-typescript) but I couldn't make it work. Could someone help me please ? Thanks in advance.
`getStaticProps()` is only allowed in pages. Your code at the moment is: ``` import Hero from "../sections/Hero"; import Contact from "../sections/Contact"; import Projects from "../sections/Projects"; // you cannot call getStaticProps() in this component function HomePage(): JSX.Element { return ( <div className="bg-gray-50"> <Hero /> <Projects /> <Contact /> </div> ); } export default HomePage; ``` Instead, call `getStaticProps()` inside `index.tsx` and pass the props down to the component, something like this: ``` import Hero from "../sections/Hero"; import Contact from "../sections/Contact"; import Projects from "../sections/Projects"; function HomePage({projects}): JSX.Element { return ( <div className="bg-gray-50"> <Hero /> <Projects projects={projects} /> <Contact /> </div> ); } export const getStaticProps: GetStaticProps = async () => { console.log("getStaticProps()"); const res = await fetch("https://gh-pinned-repos-5l2i19um3.vercel.app/?username=ythepaut"); const projects: RawProject[] = await res.json(); return !projects ? {notFound: true} : { props: {projects: projects}, revalidate: 3600 }; } export default HomePage; ```
How to disable DTD fetching using JAXB2.0 I'm trying to use JAXB to unmarshal some XML which I used xjc to create in the first place. I don't want to do any validation on the unmarshalling, and even though I have disabled validation according to the JAXB documentation with `u.setSchema(null);`, this hasn't prevented a `FileNotFoundException` being thrown when it runs and can't find the schema. ``` JAXBContext jc = JAXBContext.newInstance("blast"); Unmarshaller u = jc.createUnmarshaller(); u.setSchema(null); return u.unmarshal(blast) ``` I've seen similar questions about disabling validation in SAX parsing by setting the apache property `http://apache.org/xml/features/validation/schema` to `false`, but I can't get the Unmarshaller to use my own SAX parser.
Below is sample code that demonstrates how to get a [**JAXB (JSR-222)**](http://jcp.org/en/jsr/detail?id=222) implementation to use your SAX parser: ``` import java.io.FileReader; import javax.xml.XMLConstants; import javax.xml.bind.JAXBContext; import javax.xml.bind.Unmarshaller; import javax.xml.parsers.SAXParserFactory; import javax.xml.transform.sax.SAXSource; import org.xml.sax.InputSource; import org.xml.sax.XMLReader; public class Demo { public static void main(String[] args) throws Exception { JAXBContext jc = JAXBContext.newInstance(Foo.class); SAXParserFactory spf = SAXParserFactory.newInstance(); spf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true); XMLReader xmlReader = spf.newSAXParser().getXMLReader(); InputSource inputSource = new InputSource(new FileReader("input.xml")); SAXSource source = new SAXSource(xmlReader, inputSource); Unmarshaller unmarshaller = jc.createUnmarshaller(); Foo foo = (Foo) unmarshaller.unmarshal(source); System.out.println(foo.getValue()); } } ```
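If the goal is specifically to stop the parser from fetching the external DTD (not just to skip validation), you can also switch that off on the factory before creating the reader. A sketch — note the second feature string is Xerces-specific, so this assumes the default JDK/Xerces parser is in use:

```
SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(true); // JAXB generally needs a namespace-aware reader
spf.setFeature("http://xml.org/sax/features/validation", false);
spf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
XMLReader xmlReader = spf.newSAXParser().getXMLReader();
```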
What do the f and t commands do in Vim? What do the `f` and `t` commands do in Vim, and how exactly do they work?
Your first stop with questions like these should be vim's internal help, `:h f` and `:h t`. However, in this case, those entries are a bit cryptic without an example. Suppose we had this line (`^` = cursor position): ``` The quick brown fox jumps over the lazy dog. ^ ``` These commands find characters on a line. So `fb` would place the cursor here: ``` The quick brown fox jumps over the lazy dog. ^ ``` `t` is like `f` but places the cursor on the preceding character. So `tb` would give you: ``` The quick brown fox jumps over the lazy dog. ^ ``` You can remember these commands as `f`ind and `t`ill. Also, you can prepend the commands with a number to move to the nth occurrence of that character. For example, `3fb` would move to the third b to the right of the cursor. My example sentence only has one b though, so the cursor wouldn't move at all.
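A couple of related motions make these even more useful in practice: `;` repeats the last `f`/`t` search forward, `,` repeats it in the opposite direction, and the capital variants `F`/`T` search to the left of the cursor. They also combine with operators:

```
df,   " delete from the cursor up to and including the next comma
dt,   " delete up to but not including the next comma
2fo   " jump to the second 'o' to the right of the cursor
```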
Net::HTTP.post\_form throws EOFError? I am trying to get transaction details from PayPal without creating a form, just by sending a POST to the URL with the necessary information. This is a snippet of my code; I have tried doing the same through a form and it works. ``` <form method=post action="https://www.sandbox.paypal.com/cgi-bin/webscr"> <input type="hidden" name="cmd" value="_notify-synch"> <input type="hidden" name="tx" value="<%= subscription.tx %>"> <input type="hidden" name="at" value="<%= @identity_token %>"> <input type="submit" value="View Details"> </form> ``` The Ruby counterpart throws an EOFError and I don't know why it's doing it ``` paypal_uri = URI.parse('https://www.sandbox.paypal.com/cgi-bin/webscr') @post_request = Net::HTTP.post_form(paypal_uri , {:tx => @subscription.tx, :at => IDENTITY_TOKEN, :cmd => "_notify-sync"}) ```
The URL is https, so you need to enable SSL on your Net::HTTP. ``` require 'openssl' paypal_uri = URI.parse('https://www.sandbox.paypal.com/cgi-bin/webscr') req = Net::HTTP::Post.new(paypal_uri.path) req.set_form_data({:tx => @subscription.tx, :at => IDENTITY_TOKEN, :cmd => "_notify-sync"}) sock = Net::HTTP.new(paypal_uri.host, 443) sock.use_ssl = true store = OpenSSL::X509::Store.new store.add_cert OpenSSL::X509::Certificate.new(File.new('paypal.pem')) store.add_cert OpenSSL::X509::Certificate.new(File.new('paypal2.pem')) sock.cert_store = store sock.start do |http| response = http.request(req) end ``` To get the CA certificates paypal.pem and paypal2.pem, simply browse to the PayPal URL manually, I'll describe it for FireFox. Click on the green icon on the left of your address bar, open the dialog, view certificate, Details, then export the two **VeriSign** certificates as paypal.pem and paypal2.pem. Put them into the same folder than your script. That should cure your problems!
SPARQL concat plus group\_concat on multiple fields I have the following RDF structure that I cannot change: [![enter image description here](https://i.stack.imgur.com/XXhyF.png)](https://i.stack.imgur.com/XXhyF.png) Multiple Assignments can be associated to each employee (Manager). The output I'd like would be (including the word "in" and "&): ``` Employee Name | Assignment Name 1 | Assignment1 in Location1 & Assignment2 in Location2 &.... Name 2 | Assignment1 in Location2 & Assignment3 in Location1 &.... ``` Is there a way to do this in `Sparql`? This is what I have so far: ``` select ?name group_concat(DISTINCT ?description; separator("&")) where { ?employee :hasName ?name { select concat(?name, "In", ?location) ?employee ^:hasManager/:hasAsstName ?name ?employee ^:hasManager/:hasLocation ?location } } ``` This gives me empty employee name and lots of ?Descriptions. It does not seem to reflect what I was expecting.
Assuming the nested query is fine, you should assign a variable there to group-concatenate, and then group the results by all the non-concatenated variables. The query should look something like this: ``` select ?name (group_concat(DISTINCT ?description; separator = " & ") as ?descriptions) where { ?employee :hasName ?name { select (concat(?name, " in ", ?location) AS ?description) where { ?employee ^:hasManager/:hasAsstName ?name . ?employee ^:hasManager/:hasLocation ?location } } } GROUP BY ?name ``` Note the syntax for `GROUP_CONCAT`. If you remove the subquery, it will be much faster. As I don't have your data, here's a very similar query on DBpedia, not using a subquery: ``` SELECT ?name (GROUP_CONCAT(DISTINCT ?SpouseInfo; separator = " & ") AS ?SpousesInfo) { ?name a foaf:Person; dbo:spouse ?spouse. ?spouse dbo:residence/rdfs:label ?residence; rdfs:label ?spouse_name BIND (CONCAT(?spouse_name, " lives in ",?residence) AS ?SpouseInfo) } GROUP BY ?name ORDER BY ?name LIMIT 100 ``` Here's [the result](http://live.dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fname+%28GROUP_CONCAT%28DISTINCT+%3FSpouseInfo%3B+separator+%3D+%22+%26+%22%29+AS+%3FSpousesInfo%29%0D%0A%0D%0A%7B%0D%0A%3Fname+a+foaf%3APerson%3B%0D%0Adbo%3Aspouse+%3Fspouse.%0D%0A%3Fspouse+dbo%3Aresidence%2Frdfs%3Alabel+%3Fresidence%3B%0D%0Ardfs%3Alabel+%3Fspouse_name%0D%0A%0D%0ABIND+%28CONCAT%28%3Fspouse_name%2C+%22+lives+in+%22%2C%3Fresidence%29+AS+%3FSpouseInfo%29%0D%0A%0D%0A%7D%0D%0AGROUP+BY+%3Fname%0D%0AORDER+BY+%3Fname%0D%0ALIMIT+100%0D%0A&format=text%2Fhtml&timeout=30000&debug=on).
How do I use Dialog resources in Win32? Without resources I can create my UI with a complex array of `CreateWindow()` and `CreateWindowEx()` calls, and a `WndProc()` to process my events. I noticed that if I right-click in the resource view and click "Add resource", I can draw a dialog box with all the controls. This would save me a huge amount of time if I could draw the interface like I normally do in C#. After I've drawn the interface with the resource editor, how do I then create the window from code? Can someone provide a very simple example with a button, and show how to handle a WM\_COMMAND event on that button please? Also, is this generally how people create the GUI? Is there any loss of flexibility in doing it this way? Even in C# I often have to supplement designer-generated UI with my own code-generated UI, but the majority of the time I'm quite happy to use the designer.
After creating the dialog in the resource editor, call `CreateDialog` (modeless dialog; you need to dispatch the messages manually, just like when you use `CreateWindow`) or `DialogBox` (modal dialog; the function does not return until you close the dialog, and it does the dispatching for you) to make the dialog show up. Just like you pass the window proc to RegisterClass, you pass the dialog proc to those functions as the dialog callback. An example of a DialogProc looks like this: ``` BOOL DialogProc( HWND hDlg, UINT iMessage, WPARAM wParam, LPARAM lParam ){ switch( iMessage ){ case WM_COMMAND: switch( LOWORD( wParam ) ){ case BTOK: MessageBox( hDlg, "Hello, World!", NULL, NULL ); return TRUE; break; } break; } return FALSE; } ``` This is a basic way of creating a dialog. More sophisticated methods would normally involve OOP, usually wrapping each resource (button, window, etc.) as a C++ object, or using MFC.
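To actually show the dialog from code, pass the dialog template's resource ID to `DialogBox` (the ID below, `IDD_MAIN`, stands in for whatever name you gave the dialog in the resource editor):

```
// e.g. in WinMain, for a modal dialog
DialogBox( hInstance, MAKEINTRESOURCE( IDD_MAIN ), NULL, (DLGPROC)DialogProc );
```

Inside `DialogProc`, handling `WM_CLOSE` with `EndDialog( hDlg, 0 );` is what makes the `DialogBox` call return.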
What will be the best practice for having 'reviewed' source code in a source control repository? What will be the best way to manage reviewed source code in a source control repository? Should the source code go through a review process before getting checked in, or should the code review happen after the code is committed? If the review happens after the code is checked in to the repository, then how should that be tracked?
Google has the best code review practices of any place I have ever seen. Everyone I met there is in complete agreement on how to do code reviews. The mantra is "review early and often". Suppose you use a process that looks like what Graham Lee suggested. (Which is a process I'd previously used myself.) The problem is that reviewers are being asked to look at big chunks of code. That is a lot more effort, and it is harder to get reviewers to do it. And when they do do it, it is harder to get them to do a thorough job of it. Furthermore when they notice design issues, it is harder to get developers to go back and redo all of their working code to make it better. You still catch stuff, and it is still valuable, but you won't notice that you are missing over 90% of the benefit. By contrast Google has code review on **every single commit** before it can go into source control. Naively many people think that this would be a heavy-weight process. But it doesn't work out that way in practice. It turns out to be massively easier to review small pieces of code in isolation. When issues are found, it is much less work to change the design because you have not written a bunch of code around that design yet. The result is that it is much easier to do thorough code review, and much easier to fix the issues that are found. If you wish to do code review like Google does (which I really, really recommend), there is software to help you do so. Google has released their tool integrated with Subversion as [Rietveld](http://code.google.com/appengine/articles/rietveld.html). Go (the language) is developed with a version of Rietveld which is modified for use with Mercurial. There is a rewrite for people who use git named [Gerrit](http://code.google.com/p/gerrit/). I have also seen two commercial tools recommended for this, [Crucible](http://www.atlassian.com/software/crucible/) and [Review Board](http://www.reviewboard.org/). The only one I have used is Google's internal version of Rietveld, and I was very pleased with it.
iPhone - Draw transparent rectangle on UIView to reveal view beneath I currently have two UIViews: one of a red background and the other blue. The blue view is a subview of the red view. What I would like to do is be able to "cut" out rectangles on the blue view so that the red view can be visible. How do you go about doing this?
You have to override the top view's `drawRect` method. So, for example, you might create a `HoleyView` class that derives from `UIView` (you can do that by adding a new file to your project, selecting Objective-C subclass, and setting "Subclass of" to `UIView`). In `HoleyView`, `drawRect` would look something like this: ``` - (void)drawRect:(CGRect)rect { // Start by filling the area with the blue color [[UIColor blueColor] setFill]; UIRectFill( rect ); // Assume that there's an ivar somewhere called holeRect of type CGRect // We could just fill holeRect, but it's more efficient to only fill the // area we're being asked to draw. CGRect holeRectIntersection = CGRectIntersection( holeRect, rect ); [[UIColor clearColor] setFill]; UIRectFill( holeRectIntersection ); } ``` If you're using Interface Builder, make sure to change the holey view's class to `HoleyView`. You can do that by selecting the view in Interface Builder and selecting the "Identity" pane in the inspector (it's the one on the far right, with the "i" icon). You also have to set the top view to be non-opaque, either with the following code snippet, or by un-checking the `Opaque` checkbox in the view's properties in Interface Builder (you'll find it in the View section of the view's attributes) and setting its background color's opacity to 0% (background color is set in the same section). ``` topView.opaque = NO; topView.backgroundColor = [UIColor clearColor]; ``` If you want to do circles, you have to use `Core Graphics` (aka Quartz 2D). You'll probably want to read the programming guide, which is available [here](http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/). To draw an ellipse instead of the rectangle, your `drawRect` would look something like this: ``` - (void)drawRect:(CGRect)rect { // Get the current graphics context CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetFillColorWithColor( context, [UIColor blueColor].CGColor ); CGContextFillRect( context, rect ); if( CGRectIntersectsRect( holeRect, rect ) ) { CGContextSetFillColorWithColor( context, [UIColor clearColor].CGColor ); CGContextFillEllipseInRect( context, holeRect ); } } ```
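To change where the hole is at runtime (assuming `holeRect` is exposed as a property on `HoleyView`), set it and ask the view to redraw:

```
holeyView.holeRect = CGRectMake( 50.0f, 50.0f, 100.0f, 80.0f );
[holeyView setNeedsDisplay];
```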
Add Test Case to ITestSuiteBase in TFS API I'm working with the TFS API and have run into a problem with ITestSuiteBase and IRequirementTestSuite. I've managed to easily create a new test case within an IStaticTestSuite: ``` IStaticTestSuite workingSuite = this.WorkingSuite as IStaticTestSuite; testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description); workingSuite.Entries.Add(testCase); this.Plan.Save(); ``` However, this solution doesn't work for requirements test suites or ITestSuiteBase. The method that I would assume would work is: ``` ITestcase testCase = null; testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description); this.WorkingSuite.AllTestCases.Add(testCase); this.WorkingSuite.TestCases.Add(testCase); this.Plan.Save(); ``` But this method doesn't actually add the test case to the suite. It does, however, add the test case to the plan. I can query the created test case but it doesn't show up in the suite as expected - even immediately in the code afterwards. Refreshing the working suite has no benefit. Additional code included below: ``` public static ITestCase CreateTestCase(ITestManagementTeamProject project, string title, string desc = "", TeamFoundationIdentity owner = null) { // Create a test case. ITestCase testCase = project.TestCases.Create(); testCase.Owner = owner; testCase.Title = title; testCase.Description = desc; testCase.Save(); return testCase; } ``` Has anyone been able to successfully add a test case to a requirements test suite or an ITestSuiteBase?
Giulio's link proved to be the best way to do this ``` testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description); if (this.BaseWorkingSuite is IRequirementTestSuite) TFS_API.AddTestCaseToRequirementSuite(this.BaseWorkingSuite as IRequirementTestSuite, testCase); else if (this.BaseWorkingSuite is IStaticTestSuite) (this.BaseWorkingSuite as IStaticTestSuite).Entries.Add(testCase); this.Plan.Save(); ``` And the important method: ``` public static void AddTestCaseToRequirementSuite(IRequirementTestSuite reqSuite, ITestCase testCase) { WorkItemStore store = reqSuite.Project.WitProject.Store; WorkItem tfsRequirement = store.GetWorkItem(reqSuite.RequirementId); tfsRequirement.Links.Add(new RelatedLink(store.WorkItemLinkTypes.LinkTypeEnds["Tested By"], testCase.WorkItem.Id)); tfsRequirement.Save(); reqSuite.Repopulate(); } ```