SPARQL concat plus group\_concat on multiple fields

I have the following RDF structure that I cannot change:

[![enter image description here](https://i.stack.imgur.com/XXhyF.png)](https://i.stack.imgur.com/XXhyF.png)

Multiple Assignments can be associated with each employee (Manager). The output I'd like would be (including the words "in" and "&"):

```
Employee Name | Assignment
Name 1        | Assignment1 in Location1 & Assignment2 in Location2 & ...
Name 2        | Assignment1 in Location2 & Assignment3 in Location1 & ...
```

Is there a way to do this in `SPARQL`? This is what I have so far:

```
select ?name group_concat(DISTINCT ?description; separator("&"))
where {
  ?employee :hasName ?name
  {
    select concat(?name, "In", ?location)
    ?employee ^:hasManager/:hasAsstName ?name
    ?employee ^:hasManager/:hasLocation ?location
  }
}
```

This gives me an empty employee name and lots of ?Descriptions, which is not what I was expecting.
Assuming the nested query is fine, you should assign the concatenated value to a variable in the subquery, and then group the outer results by every non-aggregated variable. The query should look something like this:

```
select ?name (group_concat(DISTINCT ?description; separator = " & ") as ?descriptions)
where {
  ?employee :hasName ?name
  {
    select ?employee (concat(?name, " in ", ?location) AS ?description)
    where {
      ?employee ^:hasManager/:hasAsstName ?name .
      ?employee ^:hasManager/:hasLocation ?location .
    }
  }
}
GROUP BY ?name
```

Note the syntax for `GROUP_CONCAT`. (The subquery also needs its own `where { }` block and should project `?employee` so it joins with the outer pattern.) If you remove the subquery, it will be much faster. As I don't have your data, here's a very similar query on DBpedia that does not use a subquery:

```
SELECT ?name (GROUP_CONCAT(DISTINCT ?SpouseInfo; separator = " & ") AS ?SpousesInfo)
{
  ?name a foaf:Person;
        dbo:spouse ?spouse.
  ?spouse dbo:residence/rdfs:label ?residence;
          rdfs:label ?spouse_name
  BIND (CONCAT(?spouse_name, " lives in ", ?residence) AS ?SpouseInfo)
}
GROUP BY ?name
ORDER BY ?name
LIMIT 100
```

Here's [the result](http://live.dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fname+%28GROUP_CONCAT%28DISTINCT+%3FSpouseInfo%3B+separator+%3D+%22+%26+%22%29+AS+%3FSpousesInfo%29%0D%0A%0D%0A%7B%0D%0A%3Fname+a+foaf%3APerson%3B%0D%0Adbo%3Aspouse+%3Fspouse.%0D%0A%3Fspouse+dbo%3Aresidence%2Frdfs%3Alabel+%3Fresidence%3B%0D%0Ardfs%3Alabel+%3Fspouse_name%0D%0A%0D%0ABIND+%28CONCAT%28%3Fspouse_name%2C+%22+lives+in+%22%2C%3Fresidence%29+AS+%3FSpouseInfo%29%0D%0A%0D%0A%7D%0D%0AGROUP+BY+%3Fname%0D%0AORDER+BY+%3Fname%0D%0ALIMIT+100%0D%0A&format=text%2Fhtml&timeout=30000&debug=on).
How do I use Dialog resources in Win32?

Without resources I can create my UI with a complex array of `CreateWindow()` and `CreateWindowEx()` calls, and a `WndProc()` to process my events. I noticed that if I right-click in the resource view and click "add resource", I can draw a dialog box with all the controls. This would save me a huge amount of time if I could draw the interface the way I normally do in C#. After I've drawn the interface with the resource editor, how do I then create the window from code? Can someone provide a very simple example with a button, and show how to handle a WM\_COMMAND event on that button please? Also, is this generally how people create their GUIs? Is there any loss of flexibility in doing it this way? Even in C# I often have to supplement designer-generated UI with my own code-generated UI, but the majority of the time I'm quite happy to use the designer.
After creating the dialog in the resource editor, call `CreateDialog` (modeless dialog; you need to dispatch the messages manually, just like when you use `CreateWindow`) or `DialogBox` (modal dialog; the function does not return until you close the dialog, and it does the dispatching for you) to make the dialog show up. Just like you pass the window proc to `RegisterClass`, you pass the dialog proc to those functions as the dialog callback. An example of a DialogProc looks like this:

```
INT_PTR CALLBACK DialogProc( HWND hDlg, UINT iMessage, WPARAM wParam, LPARAM lParam )
{
    switch( iMessage ) {
    case WM_COMMAND:
        switch( LOWORD( wParam ) ) {
        case BTOK:  /* the button's control ID from the resource editor */
            MessageBox( hDlg, "Hello, World!", "Greetings", MB_OK );
            return TRUE;
        }
        break;
    case WM_CLOSE:
        EndDialog( hDlg, 0 );  /* lets a modal DialogBox call return */
        return TRUE;
    }
    return FALSE;
}
```

This is a basic way of creating a dialog. A more sophisticated approach would normally involve OOP, usually wrapping each resource (button, window, etc.) in a C++ object, or using MFC.
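To tie it together, here is a minimal sketch of showing the dialog modally. `IDD_MAIN` is a hypothetical resource ID; use whatever ID the resource editor assigned in your `.rc` file and header:

```
#include <windows.h>
#include "resource.h"   /* defines IDD_MAIN, BTOK, ... */

INT_PTR CALLBACK DialogProc( HWND, UINT, WPARAM, LPARAM );  /* as above */

int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance,
                    LPSTR lpCmdLine, int nCmdShow )
{
    /* Blocks until DialogProc calls EndDialog; DialogBox returns
       whatever value was passed to EndDialog. */
    DialogBox( hInstance, MAKEINTRESOURCE( IDD_MAIN ), NULL, DialogProc );
    return 0;
}
```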
What is the best practice for having 'reviewed' source code in a source control repository?

What is the best way to manage reviewed source code in a source control repository? Should the source code go through a review process before getting checked in, or should the code review happen after the code is committed? If the review happens after the code is checked in to the repository, how should that be tracked?
Google has the best code review practices of any place I have ever seen. Everyone I met there is in complete agreement on how to do code reviews. The mantra is "review early and often".

Suppose you use a process that looks like what Graham Lee suggested. (Which is a process I'd previously used myself.) The problem is that reviewers are being asked to look at big chunks of code. That is a lot more effort, and it is harder to get reviewers to do it. And when they do do it, it is harder to get them to do a thorough job of it. Furthermore, when they notice design issues, it is harder to get developers to go back and redo all of their working code to make it better. You still catch stuff, and it is still valuable, but you won't notice that you are missing over 90% of the benefit.

By contrast, Google has code review on **every single commit** before it can go into source control. Naively, many people think that this would be a heavy-weight process. But it doesn't work out that way in practice. It turns out to be massively easier to review small pieces of code in isolation. When issues are found, it is much less work to change the design, because you have not written a bunch of code around that design yet. The result is that it is much easier to do thorough code review, and much easier to fix the issues that are found.

If you wish to do code review like Google does (which I really, really recommend), there is software to help you do so. Google has released their tool integrated with Subversion as [Rietveld](http://code.google.com/appengine/articles/rietveld.html). Go (the language) is developed with a version of Rietveld which is modified for use with Mercurial. There is a rewrite for people who use git named [Gerrit](http://code.google.com/p/gerrit/). I have also seen two commercial tools recommended for this, [Crucible](http://www.atlassian.com/software/crucible/) and [Review Board](http://www.reviewboard.org/). The only one I have used is Google's internal version of Rietveld, and I was very pleased with it.
iPhone - Draw transparent rectangle on UIView to reveal view beneath

I currently have two UIViews: one with a red background and the other blue. The blue view is a subview of the red view. What I would like to do is be able to "cut" out rectangles on the blue view so that the red view can be visible. How do you go about doing this?
You have to override the top view's `drawRect` method. So, for example, you might create a `HoleyView` class that derives from `UIView` (you can do that by adding a new file to your project, selecting Objective-C subclass, and setting "Subclass of" to `UIView`). In `HoleyView`, `drawRect` would look something like this:

```
- (void)drawRect:(CGRect)rect {
    // Start by filling the area with the blue color
    [[UIColor blueColor] setFill];
    UIRectFill( rect );

    // Assume that there's an ivar somewhere called holeRect of type CGRect
    // We could just fill holeRect, but it's more efficient to only fill the
    // area we're being asked to draw.
    CGRect holeRectIntersection = CGRectIntersection( holeRect, rect );

    [[UIColor clearColor] setFill];
    UIRectFill( holeRectIntersection );
}
```

If you're using Interface Builder, make sure to change the holey view's class to `HoleyView`. You can do that by selecting the view in Interface Builder and selecting the "Identity" pane in the inspector (it's the one on the far right, with the "i" icon).

You also have to set the top view to be non-opaque, either with the following code snippet or by un-checking the `Opaque` checkbox in the view's properties in Interface Builder (you'll find it in the View section of the view's attributes) and setting its background color's opacity to 0% (the background color is set in the same section).

```
topView.opaque = NO;
topView.backgroundColor = [UIColor clearColor];
```

If you want to do circles, you have to use `Core Graphics` (aka Quartz 2D). You'll probably want to read the programming guide, which is available [here](http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/). To draw an ellipse instead of the rectangle, your `drawRect` would look something like this:

```
- (void)drawRect:(CGRect)rect {
    // Get the current graphics context
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor( context, [UIColor blueColor].CGColor );
    CGContextFillRect( context, rect );

    if( CGRectIntersectsRect( holeRect, rect ) ) {
        CGContextSetFillColorWithColor( context, [UIColor clearColor].CGColor );
        CGContextFillEllipseInRect( context, holeRect );
    }
}
```
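Since both snippets assume a `holeRect` ivar, a minimal sketch of the class declaration might look like the following. The property name is an assumption for illustration (only `setNeedsDisplay` is actual UIKit API); the custom setter just forces a redraw whenever the hole moves:

```
// Hypothetical interface for the HoleyView described above
@interface HoleyView : UIView {
    CGRect holeRect;
}
@property (nonatomic, assign) CGRect holeRect;
@end

@implementation HoleyView
@synthesize holeRect;

- (void)setHoleRect:(CGRect)rect {
    holeRect = rect;
    [self setNeedsDisplay];  // redraw with the new hole position
}
@end
```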
Add Test Case to ITestSuiteBase in TFS API

I'm working with the TFS API and have run into a problem with ITestSuiteBase and IRequirementTestSuite. I've managed to easily create a new test case within an IStaticTestSuite:

```
IStaticTestSuite workingSuite = this.WorkingSuite as IStaticTestSuite;
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
workingSuite.Entries.Add(testCase);
this.Plan.Save();
```

However, this solution doesn't work for requirements test suites or ITestSuiteBase. The method that I would assume would work is:

```
ITestcase testCase = null;
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
this.WorkingSuite.AllTestCases.Add(testCase);
this.WorkingSuite.TestCases.Add(testCase);
this.Plan.Save();
```

But this method doesn't actually add the test case to the suite. It does, however, add the test case to the plan. I can query the created test case, but it doesn't show up in the suite as expected, even immediately afterwards in the code. Refreshing the working suite has no benefit. Additional code included below:

```
public static ITestCase CreateTestCase(ITestManagementTeamProject project, string title, string desc = "", TeamFoundationIdentity owner = null)
{
    // Create a test case.
    ITestCase testCase = project.TestCases.Create();
    testCase.Owner = owner;
    testCase.Title = title;
    testCase.Description = desc;
    testCase.Save();
    return testCase;
}
```

Has anyone been able to successfully add a test case to a requirements test suite or an ITestSuiteBase?
Giulio's link proved to be the best way to do this:

```
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);

if (this.BaseWorkingSuite is IRequirementTestSuite)
    TFS_API.AddTestCaseToRequirementSuite(this.BaseWorkingSuite as IRequirementTestSuite, testCase);
else if (this.BaseWorkingSuite is IStaticTestSuite)
    (this.BaseWorkingSuite as IStaticTestSuite).Entries.Add(testCase);

this.Plan.Save();
```

And the important method:

```
public static void AddTestCaseToRequirementSuite(IRequirementTestSuite reqSuite, ITestCase testCase)
{
    WorkItemStore store = reqSuite.Project.WitProject.Store;
    WorkItem tfsRequirement = store.GetWorkItem(reqSuite.RequirementId);

    tfsRequirement.Links.Add(new RelatedLink(store.WorkItemLinkTypes.LinkTypeEnds["Tested By"], testCase.WorkItem.Id));
    tfsRequirement.Save();

    reqSuite.Repopulate();
}
```
knockoutJS execute callback after foreach finishes rendering

In my code I want to execute a function or callback just after the KnockoutJS `foreach` binding finishes rendering all the items. I know I can do this simply by checking if I'm at the last element (I found that here: [execute code after the last item has been rendered](https://stackoverflow.com/questions/16385679/knockout-execute-code-after-the-last-item-has-been-rendered)). But using this, my callback function will be called each time a new element or record is rendered. I want to execute my callback function only once (for performance).

**UPDATE**

Another solution is here: [success callback after knockout.js finishes rendering all the elements](https://stackoverflow.com/questions/14254317/success-callback-after-knockout-js-finishes-rendering-all-the-elements?rq=1). But again, using this my callback function will be called each time a new element is rendered.
I think that a good solution for this type of issue is to use a custom binding. It would be something like:

```
ko.bindingHandlers.doSomething = {
    update: function(element, valueAccessor) {
        ko.utils.unwrapObservable(valueAccessor()); //grab a dependency to the obs array
        //do something based on "element" (the container)
    }
};
```

You would use it like:

```
<ul data-bind="foreach: items, doSomething: items">
    <li>...</li>
</ul>
```

The `doSomething` binding needs to grab its own dependency to `items`, as `foreach` updates inside of its own computed observable, and in KO 3.0 bindings will be independent. You could also pass options to `doSomething` and then grab a dependency by accessing the observableArray through `allBindingsAccessor().foreach` (the third arg), if you always couple it with `foreach`.

Here is a sample that randomizes the background color of each element in the observableArray once on each change to the observableArray: <http://jsfiddle.net/rniemeyer/SCqaS/>
CSS Dynamic Navigation with Hover - How Do I make it work in iOS Safari?

In my site I use a CSS-only dynamic menu. This is fine in desktop browsers, but not on iOS (iPhone, iPad, etc.) because the touch interface does not support the `:hover` selector. My question is: what is the best way of supporting this on iOS? (Ideally either by patching with some CSS, or Javascript that will make the existing code work, rather than doing the whole thing over just to support iOS.)

My html looks like this:

```
<ul id="nav">
    <li>
        Item 1
        <ul>
            <li><a href=''>sub nav 1.1</a></li>
            <li><a href=''>sub nav 1.2</a></li>
        </ul>
    </li>
    <li>
        Item 2
        <ul>
            <li><a href=''>sub nav 2.1</a></li>
            <li><a href=''>sub nav 2.2</a></li>
        </ul>
    </li>
    <li>
        Item 3
        <ul>
            <li><a href=''>sub nav 3.1</a></li>
            <li><a href=''>sub nav 3.2</a></li>
        </ul>
    </li>
</ul>
```

And the CSS is this:

```
#nav li {
    float: left;
    padding: 0 15px;
}
#nav li ul {
    position: absolute;
    width: 10em;
    left: -999em;
    margin-left: -10px;
}
#nav li:hover ul {
    left: auto;
}
```

I have done a jsfiddle of this here: <http://jsfiddle.net/NuTz4/>
Check this article, perhaps it's a solution for you ;) <http://www.usabilitypost.com/2010/05/12/css-hover-controls-on-iphone/>

Also a JS solution, taken from <http://www.evotech.net/blog/2008/12/hover-pseudoclass-for-the-iphone/>:

```
var nav = document.getElementById('nav');
var els = nav.getElementsByTagName('li');

for(var i = 0; i < els.length; i++){
    els[i].addEventListener('touchstart', function(){ this.className = "hover"; }, false);
    els[i].addEventListener('touchend', function(){ this.className = ""; }, false);
}
```

In jQuery:

```
$('#nav li').bind('touchstart', function(){
    $(this).addClass('hover');
}).bind('touchend', function(){
    $(this).removeClass('hover');
});
```

css:

```
li:hover, li.hover {
    /* whatever your hover effect is */
}
```
Building lxml for Python 2.7 on Windows

I am trying to build lxml for Python 2.7 on a Windows 64-bit machine. I couldn't find an lxml egg for Python 2.7, so I am compiling it from sources. I am following the instructions on this site <http://lxml.de/build.html> under the static linking section. I am getting this error:

```
C:\Documents and Settings\Administrator\Desktop\lxmlpackage\lxml-2.2.6\lxml-2.2.6>python setup.py bdist_wininst --static
Building lxml version 2.2.6.
NOTE: Trying to build without Cython, pre-generated 'src/lxml/lxml.etree.c' needs to be available.
ERROR: 'xslt-config' is not recognized as an internal or external command,
operable program or batch file.

** make sure the development packages of libxml2 and libxslt are installed **

Using build configuration of libxslt
Building against libxml2/libxslt in one of the following directories:
..\libxml2-2.7.6--win32--w2k--x64\lib
..\libxslt-1.1.26--win32--w2k--x64--0002\lib
..\zlib-1.2.4--win32--w2k--x64
..\iconv-1.9.1--win32--w2k--x64-0001\lib
running bdist_wininst
running build
running build_py
running build_ext
building 'lxml.etree' extension
error: Unable to find vcvarsall.bat
```

Can anyone help me with this? I tried setting the path to include Microsoft Visual Studio, and I can run vcvarsall.bat from the command line, but Python is still having problems.
I bet you're not using VS 2008 for this :)

There's a [def find\_vcvarsall(version):](http://hg.python.org/releasing/2.7.6/file/ba31940588b6/Lib/distutils/msvc9compiler.py#l219) function (guess what, it looks for vcvarsall.bat) in distutils with the following comment:

> At first it tries to find the productdir of VS 2008 in the registry. If that fails it falls back to the VS90COMNTOOLS env var.

If you're not using VS 2008, then you have neither the registry key nor a suitable environment variable, and that's why distutils can't find the vcvarsall.bat file. It does **not** check whether the bat file is reachable through the PATH environment variable.

The solution is to define the VS90COMNTOOLS variable to point to the Tools directory of Visual Studio.

That being said, take a look at the [11.4. distutils.msvccompiler — Microsoft Compiler](http://docs.python.org/distutils/apiref.html#module-distutils.msvccompiler) section in Python's docs, which states:

> Typically, extension modules need to be compiled with the same compiler that was used to compile Python.

Martin v. Loewis, in the email titled [Download Visual Studio Express 2008 now](http://mail.python.org/pipermail/python-list/2010-April/573606.html) on the python-list mailing list, states the same:

> Python 2.6, 2.7, and 3.1 are all built with that release (i.e. 2008). Because of another long tradition, Python extension modules must be built with the same compiler version (more specifically, CRT version) as Python itself. So to build extension modules for any of these releases, you need to have a copy of VS 2008 or VS 2008 Express.

In the light of the above statements you **should** use VS 2008 if you want to build lxml for Python 2.7, so although setting VS90COMNTOOLS takes care of finding the vcvarsall.bat file, it's not **the** solution.

That being said :) people do try to use an older CRT with a newer compiler:

- [Can I use Visual Studio 2010's C++ compiler with Visual Studio 2008's C++ Runtime Library?](https://stackoverflow.com/questions/2484511/)
- [How to Enforce C++ compiler to use specific CRT version?](https://stackoverflow.com/questions/730134/)
- [VS 2008 - Link against older C runtime](https://stackoverflow.com/questions/693250/)

I'd like to thank Kev Dwyer (for pointing out the importance of the VS version used) and Stefan Behnel (for pointing me to distutils as the place dealing with the compiler's configuration) in the thread [Problem building lxml under Windows - error: Unable to find vcvarsall.bat](http://thread.gmane.org/gmane.comp.python.lxml.devel/5865) on the lxml mailing list. I'd also like to thank *agronholm* from the freenode #distutils IRC channel for confirming that distutils does contain code which looks for the vcvarsall.bat file.
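For reference, if VS 2008 is installed but the variable is somehow missing from your environment, defining it in the shell before building is enough. The path below is a typical default install location (an assumption; adjust it for your machine):

```
set "VS90COMNTOOLS=C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\"
python setup.py bdist_wininst --static
```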
PHP: Run function on multiple strings

I am currently using `rtrim()` to remove trailing newlines from strings. Is there a more efficient way to run the following commands?

```
$a = rtrim($a);
$b = rtrim($b);
...
$z = rtrim($z);
```

(Note: `$a` .. `$z` are just placeholders, they're not the real variable names.)

These variables are being created through the following loop:

```
foreach ($xml->xpath('//hardware') as $hwprofile) {
    $machine .= $hwprofile->machine . "\n";
    $count .= $hwprofile->ProcessorInfo->count . "\n";
    $speed .= $hwprofile->ProcessorInfo->speed . "\n";
    $type .= $hwprofile->ProcessorInfo->type . "\n";
    $arch .= $hwprofile->ProcessorInfo->architecture . "\n";
}
```

I want to trim those variables because they all have trailing newlines.
This should work for you: just fill the array `$items` with your variable names. It then loops through all the variables and rtrims them using [`variable variables`](http://php.net/manual/en/language.variables.variable.php):

```
$items = range("a", "z"); //your variable names e.g. $items = ["machine", "count"]

foreach($items as $item) {
    $$item = rtrim($$item);
}
```

Another variant, since you said you need a new line between every line except the last one: just build arrays like this:

```
foreach ($xml->xpath('//hardware') as $hwprofile) {
    $machine[] = $hwprofile->machine;
    $count[] = $hwprofile->ProcessorInfo->count;
    $speed[] = $hwprofile->ProcessorInfo->speed;
    $type[] = $hwprofile->ProcessorInfo->type;
    $arch[] = $hwprofile->ProcessorInfo->architecture;
}
```

And when you need the joined string, just use:

```
implode(PHP_EOL, $machine);
```
Why can you call the base version of a method from the override version of the same method?

I'm confused why C# is letting me do this:

**base class**

```
public virtual void OnResultExecuted(ResultExecutedContext filterContext)
{
}
```

**derived class**

```
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
    base.OnResultExecuted(filterContext);
}
```

This code works with no problem. But how is it calling base.OnResultExecuted when that is the method I am overriding?
Why this is useful is pretty obvious. "How?" is less obvious, but also interesting.

The MSIL encoding in which .NET code is stored has two instructions for a method call:

- `call`
- `callvirt`

The difference is that when `callvirt` is used with a virtual method, it doesn't call the method indicated. Instead, it maps the method indicated to a slot in the object class's vtable, finds the actual implementation belonging to the object's class, and calls that version. (For non-virtual methods, `callvirt` just adds a null check and then directly calls the indicated method.)

The `call` instruction doesn't use the vtable. It simply calls the method named in the MSIL.

When you use the `base` keyword in C#, the compiler generates a `call` instruction, so that the exact method provided by the base class is used, and not the overriding method linked in the vtable. This behavior is [documented on MSDN for the `call` opcode](http://msdn.microsoft.com/en-us/library/system.reflection.emit.opcodes.call.aspx):

> It is valid to call a virtual method using `call` (rather than `callvirt`); this indicates that the method is to be resolved using the class specified by *method* rather than as specified dynamically from the object being invoked.
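A small sketch (not from the original post) makes the two dispatch paths visible:

```
class Base
{
    public virtual string Describe() { return "Base"; }
}

class Derived : Base
{
    public override string Describe()
    {
        // Compiled as a non-virtual `call` to Base.Describe, so this
        // does not re-dispatch to the override (no infinite recursion).
        return base.Describe() + ", extended by Derived";
    }
}

// new Derived().Describe() is compiled as `callvirt`, lands in the
// override, and returns "Base, extended by Derived".
```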
How does Calendar.set(Calendar.MONTH, ?) work?

I am writing a method that can advance the date by a given number of weeks. Here is my code:

```
public class Date {

    int year;
    int month;
    int day;

    public Date (int year, int month, int day){
        this.year = year;
        this.month = month;
        this.day = day;
    }

    public void addWeeks (int weeks){
        int week = weeks * 7;
        DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy");
        Calendar calendar = Calendar.getInstance();
        calendar.set(Calendar.DAY_OF_MONTH, this.day);
        calendar.set(Calendar.MONTH, this.month);
        calendar.set(Calendar.YEAR, this.year);
        calendar.add(Calendar.DAY_OF_MONTH, week);
        System.out.println();
        System.out.println("Date after adding " + weeks + " weeks is: " + dateFormat.format(calendar.getTime()));
    }
}
```

So if I pass today's date (03/08/2019) as year, month, and day, and then call addWeeks(1), the date should advance to 03/15/2019, but it gives me 04/15/2019. For some reason the month is always 1 more than what I enter: if I enter 2 for the month it gives 3, and if I enter 3 it gives 4.
Here's why:

> `public static final int MONTH`: Field number for get and set indicating the month. This is a calendar-specific value. The first month of the year in the Gregorian and Julian calendars is JANUARY which is 0; the last depends on the number of months in a year.

So, you need:

```
calendar.set(Calendar.MONTH, this.month - 1);
```

The full month numbering is:

```
Jan: 0   Feb: 1   Mar: 2   Apr: 3
May: 4   Jun: 5   Jul: 6   Aug: 7
Sep: 8   Oct: 9   Nov: 10  Dec: 11
```
Node.js, Express, Mongoose - input validation - within route or model?

I have a REST API resource that accepts a JSON post. Example:

```
{
  "location": {
    "coordinates": [
      -122.41941550000001,
      37.7749295
    ]
  }
}
```

The coordinates are then collected from the request by Express:

```
module.exports.create = function(req, res, next) {
  var coordinates = req.body.location.coordinates;
  ....
```

These are then submitted to a Mongoose model. I am writing tests against this where `location.coordinates` is missing, e.g.:

```
{
  "foo": {
    "bar": [
      -122.41941550000001,
      37.7749295
    ]
  }
}
```

This then fails within the validation section of the model with:

```
locationSchema.path('location.coordinates').validate(function(coordinates){
                 ^
TypeError: Cannot call method 'validate' of undefined
```

**So my question is: how would I validate that the input is correct? Should this be done in the route before getting to the model, or should it be done in the model? Any examples of how would also be appreciated.**

For reference, the Mongoose model looks something like:

```
var locationSchema = new Schema({
  userid: { type: Number, required: true },
  location: {
    type: [{
      type: "String",
      required: true,
      enum: ['Point', 'LineString', 'Polygon'],
      default: 'Point'
    }],
    required: true,
    coordinates: { type: [Number], required: true }
  },
  create_date: { type: Date, default: Date.now }
});

locationSchema.path('location.coordinates').validate(function(coordinates){
  ...
}, 'Invalid latitude or longitude.');
```
My typical approach is to introduce a service layer in between the routes and the model, and that's where the validation happens. Don't think "service" in the "web service" sense; it simply provides an abstraction level around a given domain. This has the following benefits:

- It gives you a common abstraction for dealing with persisted and/or external data. That is, whether you're interacting with data from Mongoose or an external web service, all of your route logic can simply interact with a consistent interface.
- It provides sound encapsulation around persistence details, allowing you to swap out the implementation without affecting all of your routes.
- It allows you to re-use code with non-route consumers (such as an integration test suite).
- It provides a good layer for mocking (for use with unit tests, for example).
- It provides a very clear "validation and business logic happens here" layer, even when your data is spread across several different databases and/or backend systems.

Here's a simplified example of what that might look like:

**`location-service.js`**

```
var locationService = module.exports = {};

locationService.saveCoordinates = function saveCoordinates(coords, cb) {
  if (!isValidCoordinates(coords)) {
    // your failed validation response can be whatever you want, but I
    // like to reserve actual `Error` responses for true runtime errors.
    // the result here should be something your client-side logic can
    // easily consume and display to the user.
    return cb(null, {
      success: false,
      reason: 'validation',
      validationError: { /* something useful to the end user here */ }
    });
  }

  yourLocationModel.save(coords, function(err) {
    if (err) return cb(err);
    cb(null, { success: true });
  });
};
```

**`some-route-file.js`**

```
app.post('/coordinates', function(req, res, next) {
  var coordinates = req.body.location.coordinates;

  locationService.saveCoordinates(coordinates, function(err, result) {
    if (err) return next(err);
    if (!result.success) {
      // check result.reason, handle validation logic, etc.
    } else {
      // woohoo, send a 201 or whatever you need to do
    }
  });
});
```

I've applied this structure to 3 or 4 different web apps and APIs at this point, and have grown quite fond of it.
How would auto&& extend the life-time of the temporary object?

The code below illustrates my concern:

```
#include <iostream>

struct O {
    ~O() { std::cout << "~O()\n"; }
};

struct wrapper {
    O const& val;
    ~wrapper() { std::cout << "~wrapper()\n"; }
};

struct wrapperEx // with explicit ctor
{
    O const& val;
    explicit wrapperEx(O const& val) : val(val) {}
    ~wrapperEx() { std::cout << "~wrapperEx()\n"; }
};

template<class T>
T&& f(T&& t) {
    return std::forward<T>(t);
}

int main() {
    std::cout << "case 1-----------\n";
    {
        auto&& a = wrapper{O()};
        std::cout << "end-scope\n";
    }

    std::cout << "case 2-----------\n";
    {
        auto a = wrapper{O()};
        std::cout << "end-scope\n";
    }

    std::cout << "case 3-----------\n";
    {
        auto&& a = wrapper{f(O())};
        std::cout << "end-scope\n";
    }

    std::cout << "case Ex-----------\n";
    {
        auto&& a = wrapperEx{O()};
        std::cout << "end-scope\n";
    }
    return 0;
}
```

See it live [here](http://coliru.stacked-crooked.com/a/6d0f8f66f5f2b942).

It's said that `auto&&` will extend the life-time of the temporary object, but I can't find the standard wording on this rule, at least not in N3690. The most relevant part may be section 12.2.5 about temporary objects, but it's not exactly what I'm looking for.

So, would the `auto&&` life-time extension rule apply to *all* the temporary objects involved in the expression, or only the final result? More specifically, is `a.val` guaranteed to be valid (non-dangling) before we reach the end of scope in case 1?

**Edit:** I updated the example to show more cases (3 & Ex). You'll see that only in case 1 the lifetime of O is extended.
In the same way that a reference to `const` does:

```
const auto& a = wrapper{O()};
```

or

```
const wrapper& a = wrapper{O()};
```

or also

```
wrapper&& a = wrapper{O()};
```

> More specifically, is `a.val` guaranteed to be valid (non-dangling) before we reach the end of scope in case 1?

Yes, it is.

There's (almost) nothing particularly important about `auto` here. It's just a *place holder* for the correct type (`wrapper`) which is deduced by the compiler. The main point is the fact that the temporary is bound to a reference.

For more details see [A Candidate For the "Most Important const"](http://herbsutter.com/2008/01/01/gotw-88-a-candidate-for-the-most-important-const/), which I quote:

> Normally, a temporary object lasts only until the end of the full expression in which it appears. However, C++ deliberately specifies that binding a temporary object to a reference to const on the stack lengthens the lifetime of the temporary to the lifetime of the reference itself

The article is about C++03 but the argument is still valid: a temporary can be bound to a reference to `const` (but not to a reference to non-`const`). In C++11, a temporary can **also** be bound to an rvalue reference. In both cases, the lifetime of the temporary is extended to the lifetime of the reference.

The relevant parts of the C++11 Standard are exactly those referred to in the OP, that is, 12.2 p4 and p5:

> 4 - There are two contexts in which temporaries are destroyed at a different point than the end of the full expression. The first context is [...]
>
> 5 - The second context is when a reference is bound to a temporary. [...]

(There are some exceptions in the bullet points following these lines.)

**Update**: (Following texasbruce's comment.)

The reason why the `O` in case 2 has a short lifespan is that we have `auto a = wrapper{O()};` (see, there's no `&` here) and then the temporary is **not** bound to a reference. The temporary is, actually, copied into `a` using the compiler-generated copy constructor. Therefore, the temporary doesn't have its lifetime extended and dies at the end of the full expression in which it appears.

There's a danger in this particular example because `wrapper::val` is a reference. The compiler-generated copy constructor of `wrapper` will bind `a.val` to the same object that the temporary's `val` member is bound to. This object is also a temporary, but of type `O`. Then, when this latter temporary dies, we see `~O()` on the screen and `a.val` dangles!

Contrast case 2 with this:

```
std::cout << "case 3-----------\n";
{
    O o;
    auto a = wrapper{o};
    std::cout << "end-scope\n";
}
```

The output is (when compiled with gcc using option `-fno-elide-constructors`):

```
case 3-----------
~wrapper()
end-scope
~wrapper()
~O()
```

Now the temporary `wrapper` has its `val` member bound to `o`. Notice that `o` is not a temporary. As I said, `a` is a copy of the `wrapper` temporary and `a.val` also binds to `o`. Before the scope ends, the temporary `wrapper` dies and we see the first `~wrapper()` on the screen. Then the scope ends and we get `end-scope`. Now `a` and `o` must be destroyed in the reverse order of construction, hence we see `~wrapper()` when `a` dies and finally `~O()` when it's `o`'s turn. This shows that `a.val` doesn't dangle.
(Final remark: I've used `-fno-elide-constructors` to prevent a optimization related to copy-construction that would complicate the discussion here but this is another [story](https://stackoverflow.com/a/19792864/1137388).)
Can eBPF modify the return value or parameters of a syscall?

To simulate some behavior I would like to attach a probe to a syscall and modify the return value when certain parameters are passed. Alternatively, it would also be enough to modify the parameters of the function before they are processed. Is this possible with BPF?
I believe that attaching eBPF to kprobes/kretprobes gives you read access to function arguments and return values, but that you cannot tamper with them. I am NOT 100% sure; good places to ask for confirmation would be the IO Visor project [mailing list](http://lists.iovisor.org/pipermail/iovisor-dev/) or IRC channel (#iovisor at irc.oftc.net).

As an alternative solution, I know you can at least change the return value of a syscall with [strace](https://strace.io/), with the `-e` option. Quoting [the manual page](http://man7.org/linux/man-pages/man1/strace.1.html):

> ```
> -e inject=set[:error=errno|:retval=value][:signal=sig][:when=expr]
>            Perform syscall tampering for the specified set of syscalls.
> ```

Also, there was [a presentation](https://fosdem.org/2017/schedule/event/failing_strace/) on this, and fault injection, at FOSDEM 2017, if it is of any interest to you. Here is one example command from the slides:

```
strace -P precious.txt -efault=unlink:retval=0 unlink precious.txt
```

**Edit:** As stated by Ben, eBPF on kprobes and tracepoints is definitively read-only, for tracing and monitoring use cases. I also got confirmation about this on IRC.
Scanning resolution

At <http://www.scantips.com/basics07.html> it is written:

*The motor in the 600x1200 dpi scanner can step in 1/1200 inch steps vertically. If we select 300 dpi, it will move four steps at a time vertically, and resample to 50% horizontally, to give a 300x300 dpi image.*

Can someone explain this? How does resampling to 50% horizontally give a 300x300 dpi image?
The CCD is actually a single-line device. In the example, the single-line CCD scans at a horizontal density of 600 samples per inch. We can call this a scanline. The hardware always captures 600 samples per inch (referred to as its "optical density"). It moves vertically by an amount based on the motor and the settings. In the example, the motor moves 1/1200th of an inch for each increment, allowing for 1200 scanlines per inch vertically.

When you set the scanner to 300x300, it still captures a row of samples at a density of 600 samples per inch. It then uses software to resample (i.e., average or interpolate the original values to produce new ones). The result is a scanline at a density of 300 samples per inch, by 1 scanline high. Then the motor moves the CCD and takes another row. By moving 4 increments of 1/1200 inch, the next scanline is 1/300 inch away. After moving the CCD 4 increments 300 times, taking a scanline each time, you now have a 300x300 image.
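To make the arithmetic concrete, assuming a hypothetical 8.5-inch-wide scan area:

```
600 samples/inch x 8.5 inches        = 5100 raw samples per scanline
resampled to 50%                     = 2550 samples, i.e. 300 per inch
4 motor steps x 1/1200 inch per step = 1/300 inch between scanlines, i.e. 300 lines per inch
```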
How Can I Share Visual Studio 2017 Code Style and Formatting?

My team wants to share our code styling and formatting settings. We've been using ReSharper for this and wanted to take advantage of the new native features in VS2017. What is the best way to do this?
[.editorconfig](http://editorconfig.org/)!

Visual Studio 2017 will now [respect settings from a .editorconfig](https://learn.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options) file if it exists on disk, up to the project root. Currently this is supported on a *per-project* basis, not on a solution level (I believe R# may have supported this in the solution folder).

Almost every editor in VS should support basic editorconfig options, such as:

- indent\_style
- indent\_size
- tab\_width
- end\_of\_line

Additionally, some languages also provide support for language-specific style guidelines. For .NET, see [here](https://learn.microsoft.com/en-us/visualstudio/ide/editorconfig-code-style-settings-reference).

Specifically for C# or VB, if you've configured your settings in VS Tools -> Options, there is also an option to generate an .editorconfig file capturing those settings:

[![Tools -> Options -> Text Editor -> C# -> Code Style, showing the button to generate .editorconfig settings](https://i.stack.imgur.com/aehlU.png)](https://i.stack.imgur.com/aehlU.png)
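For reference, a minimal `.editorconfig` exercising the basic options above might look like this (the values are illustrative, not a recommendation):

```
# top-most EditorConfig file for the repository
root = true

[*]
indent_style = space
indent_size = 4
end_of_line = crlf

[*.cs]
# one of the .NET/C# style rules from the reference linked above
csharp_new_line_before_open_brace = all
```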
AWS EC2: generating private key file out of cert-\*\*\*.pem for SSH terminal access

I have my access key, secret key, and the downloaded cert pem file. I understand SSH requires a private key file in order to establish a terminal SSH connection to my instance. What I did find through Google is that I need to use puttygen to convert my cert pem file into a private key file: <http://www.techrepublic.com/blog/datacenter/connect-to-amazon-ec2-with-a-private-key-using-putty-and-pageant/5085>

However, all the instructions I found are based on the GUI puttygen, and I only have the CL version because I am running Linux Fedora 16 on my laptop. I installed PuTTY via yum, which came with a GUI, but puttygen only has a CL version. What I tried to do in puttygen is as follows:

```
>puttygen cert-***.pem -o default.pem
puttygen: error loading `cert-***.pem': file does not begin with OpenSSH key header
```

Can anybody point out what I am doing wrong, and how I can use CL puttygen to convert my cert file to a private key file that is usable by SSH to connect to my instances? Thanks in advance.
The ssh private key is completely different from the X.509 certificate and private key. It's a little confusing because folks often save the ssh private key generated by EC2 in a ".pem" file, just like the certificate and private key use. You can't convert or use the X.509 certificate or private key for ssh as you tried to do.

You can generate an ssh private key (sometimes called a "keypair") through Amazon EC2 on the [AWS console](https://console.aws.amazon.com/ec2/home#s=KeyPairs) or through the AWS command line tools (`ec2-add-keypair`). If you are using Putty, you may still need to convert to PPK format as your referenced article describes, but you are converting the ssh key .pem file, not the X.509 private key or certificate.

If you know how to generate your own ssh key locally (or have already done so), then I recommend doing this and [uploading the ssh public key to EC2](http://alestic.com/2010/10/ec2-ssh-keys). When you run an EC2 instance, you then specify the ssh keypair name so that EC2 makes the ssh public key available to the instance, letting you securely ssh in to your new server.
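If you go the generate-locally route, the flow is roughly as follows (the key path and instance user below are placeholder assumptions; on Amazon Linux the default user is typically `ec2-user`):

```
# Generate a keypair locally; the .pub half is what gets registered with EC2
ssh-keygen -t rsa -f ~/.ssh/my-ec2-key

# Register my-ec2-key.pub with EC2 (console or command line tools, as linked
# above), launch the instance with that keypair name, and then connect:
ssh -i ~/.ssh/my-ec2-key ec2-user@your-instance-hostname
```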
Builder pattern - borrowed value does not live long enough

I'm trying to implement a simple builder but am struggling with lifetimes. The following gives `error: borrowed value does not live long enough`. [This question](https://stackoverflow.com/questions/28469667/borrowed-value-does-not-live-long-enough-when-using-the-builder-pattern) seems similar. If I store `t` in a mutable variable and then call `s` and `finalize` it works, but I want to get the one-liner to work. What am I doing wrong?

```
struct Type<'a> {
    s: &'a String,
}

struct TypeBuilder {
    s: String,
}

impl TypeBuilder {
    fn new() -> TypeBuilder {
        TypeBuilder { s: "".to_string() }
    }

    fn s(&mut self, s: String) -> &mut TypeBuilder {
        self.s = s;
        self
    }

    fn finalize(&self) -> Type {
        Type { s: &self.s }
    }
}

fn main() {
    let t = TypeBuilder::new()
        .s("a".to_string())
        .finalize();
    println!("string: {}", t.s);
}
```
The problem is that you're creating `Type` with a string slice based on a `String` from `TypeBuilder`, but the `TypeBuilder` instance created with `new()` is destroyed immediately in the same `let` statement, so if this were allowed, the string slice would become dangling. And that's why it works when you store `TypeBuilder` in a variable first.

The problem with your approach to the builder is that the builder is the owner of the data for the value it builds: `Type` references the contents of `TypeBuilder`. This means that `Type` instances are always tied to `TypeBuilder` instances, and you just cannot create a `Type` and drop the `TypeBuilder`. However, this is really unnatural: builders are usually transient objects which are only necessary during construction. Consequently, in order for the builder pattern to work correctly, your `Type` must become the owner of the data:

```
struct Type {
    s: String,
}
```

Then the builder should be passed by value and then consumed by `finalize()`:

```
impl TypeBuilder {
    fn new() -> TypeBuilder {
        TypeBuilder { s: "".to_string() }
    }

    fn s(mut self, s: String) -> TypeBuilder {
        self.s = s;
        self
    }

    fn finalize(self) -> Type {
        Type { s: self.s }
    }
}
```

This way your building code should work exactly as it is.
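With the consuming builder in place, the one-liner from the question compiles as-is:

```
fn main() {
    // Each call moves the builder; finalize() hands ownership of the
    // String over to Type, so nothing borrows from a dead builder.
    let t = TypeBuilder::new()
        .s("a".to_string())
        .finalize();
    println!("string: {}", t.s);
}
```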
How can I compare a string's size / length in Jekyll's Liquid templates?

I am using Jekyll on GitHub Pages to build a blog and want to get the length of the `page.title` string passed to the Liquid template in the YAML front matter of each post. I have not been able to figure out an easy way to do this. Looking at the [Liquid for Designers guide](https://github.com/Shopify/liquid/wiki/Liquid-for-Designers) I was able to see that it supports two types of markup:

- **Output Markup** - Delimited by double curly braces `{{ }}`, you can output variables that are passed to your template, either in the YAML front matter, such as `page.title` in Jekyll, or the global site-level variables in `_config.yml`. In order to output the title of the post or page you would use `{{ page.title }}`.
- **Tag Markup** - Delimited by curly braces and percents `{% %}`, these are used for logic in your templates: if statements, loops, that type of thing.

Apparently there are lots of filters you can use with the output markup, and you can output the length of a string passed to the template by using `{{ page.title | size }}`. However, what I would like to do in my template is render the title of the page using either an `<h1>`, `<h2>`, or `<h3>` header depending on the length of the title. I cannot figure out any way to mix the tag markup and the output markup. I can output the size of `page.title` onto the page with `{{ page.title | size }}`; I cannot, however, figure out how to use the length in an if statement. This also returns a string representation and not a number. Does anyone with more experience with Liquid know how to do this? Ideally, what I would like to do is something along the lines of this:

```
{% if page.title | size > 5 %}
```
I am going to post this solution that I found on someone's blog. It is the only way that I have found so far to safely get the length of a passed-in string and compare using anything other than straight equality. To make the comparison, you must do a subtraction and use the difference. The method is outlined in this [**blog post written by Ben Dunlap**](http://ecommerce.shopify.com/c/ecommerce-design/t/comparing-numbers-with-string-variables-in-liquid-37229). It is still kind of a workaround, but it's clever and it seems like it will always work. It might not be as efficient if you wanted to do an if/elsif/else with multiple clauses, but you could still take multiple differences and make it work. Basically, in my case, you would do this:

```
{% capture difference %}{{ page.title | size | minus:20 }}{% endcapture %}

{% unless difference contains '-' %}
  // 20 characters or less
  <h3>{{ page.title }}</h3> // show smaller header
{% else %}
  // More than 20 characters
  <h2>{{ page.title }}</h2> // show larger header
{% endunless %}
```

Kind of clever, but also kind of a hack. If anyone comes up with a better way, let me know.
Django template filters, tags, simple\_tags, and inclusion\_tags

This is more of a general question about the distinctions between these four different kinds of Django tags. I just read the documentation page on template tags: <http://docs.djangoproject.com/en/dev/howto/custom-template-tags/>

But I'm finding it difficult to know when I should use one variation over another. For example, what can a template tag do that a simple\_tag cannot? Is a filter limited to manipulating strings only, and is that why the documentation says that template tags are more powerful because they can "do anything"?

Here is my perception of the distinctions:

- template filters: only operate on strings and return strings. No access to models?
- template tags: access to anything you can access in a view, compiled into nodes with a specified render function (it seems like the only advantage is that you can add variables to the context?)
- simple\_tags: take strings and template variables and return a string; you are passed the value of the template variable rather than the variable itself (when would you ever want the variable itself over the value?)
- inclusion tags: allow you to render arbitrary extra templates

Can someone give an example outlining when I would want to use one of these over another? Thanks.
**Template filters** can operate on any object (and at most two at once). They're just functions that take one or two arguments. e.g.

```
# filter implementation (note: filters must be registered on a
# template Library, hence the decorator below)
from django import template

register = template.Library()

@register.filter
def myfilter(arg1, arg2):
    ....

# usage in template
{{ arg1|myfilter:arg2 }}
```

They are limited in that they cannot access the template context, and can only accept a limited number of arguments.

*Use case*: You want to modify one of the variables in the context slightly before printing it.

**Template tags** can change the way the rest of the template is parsed, and have access to anything in the context in which they are used. They're very powerful. For example, I wrote a template tag that subclasses `{% extends %}` and allows a template to extend different templates based on the current user.

You can easily recognise template tags when they are used, because they are surrounded by `{%` and `%}`.

*Use case*: You want to perform some logic that requires Python code and access to the template context.

**Inclusion tags** are still template tags, but Django provides some helpers (i.e. the `@inclusion_tag` decorator) to make it easy to write template tags of this kind.

*Use case*: You want to render one template into another. For example, you may have an advertisement on your site that you want to use in different places. It might not be possible to use template inheritance to achieve what you want, so rather than copy/paste the HTML for the ad multiple times, you would write an inclusion tag. The reason why you would use an inclusion tag over the existing `{% include %}` template tag is that you may want to render the template with a different context to the one you are in. Perhaps you need to do some database queries to select the correct ad to display. This is not possible with `{% include %}`.

**Simple tags**, like inclusion tags, are still template tags, but they have limited functionality and are written in a simplified manner. They allow you to write a template tag that accepts any number of arguments (e.g. `{% mytag "some str" arg2 arg3 %}` etc.) and require you to only implement a function that can accept these arguments (and optionally a `context` variable to give you access to the template context). Essentially they're an upgrade from template filters, because instead of accepting only 1 or 2 arguments, you can accept as many as you like (and you can also access the template context).
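As a sketch of the simple-tag flavor (the tag name and arguments here are made up for illustration):

```
from django import template

register = template.Library()

@register.simple_tag
def multiply(a, b):
    # Arguments arrive already resolved to their values.
    return a * b
```

which, after `{% load %}`-ing the tag library, is used as `{% multiply price quantity %}`.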
Partitioned key space for StackExchange Redis

When developing a component that uses Redis, I've found it a good pattern to prefix all keys used by that component so that it does not interfere with other components. Examples:

- A component managing users might use keys prefixed by `user:`, and a component managing a log might use keys prefixed by `log:`.
- In a multi-tenancy system I want each customer to use a separate key space in Redis to ensure that their data do not interfere. The prefix would then be something like `customer:<id>:` for all keys related to a specific customer.

Using Redis is still new stuff for me. My first idea for this partitioning pattern was to use separate database identifiers for each partition. However, that seems to be a bad idea because the number of databases is limited, and it seems to be a feature that is about to be deprecated.

An alternative would be to give each component an `IDatabase` instance and a `RedisKey` that it shall use to prefix all keys. (I'm using [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis).)

I've been looking for an `IDatabase` wrapper that automatically prefixes all keys, so that components can use the `IDatabase` interface as-is without having to worry about its keyspace, but I didn't find anything. So my question is: **What is a recommended way to work with partitioned key spaces on top of StackExchange.Redis?**

I'm now thinking about implementing my own `IDatabase` wrapper that would prefix all keys. I think most methods would just forward their calls to the inner `IDatabase` instance. However, some methods would require a bit more work, for example [SORT](http://redis.io/commands/sort) and [RANDOMKEY](http://redis.io/commands/randomkey).
I've created an `IDatabase` wrapper now that provides *key space partitioning*. The wrapper is created by using an extension method to `IDatabase`:

```
ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
IDatabase fullDatabase = multiplexer.GetDatabase();
IDatabase partitioned = fullDatabase.GetKeyspacePartition("my-partition");
```

Almost all of the methods in the partitioned wrapper have the same structure:

```
public bool SetAdd(RedisKey key, RedisValue value, CommandFlags flags = CommandFlags.None)
{
    return this.Inner.SetAdd(this.ToInner(key), value, flags);
}
```

They simply forward the invocation to the inner database and prepend the key space prefix to any `RedisKey` arguments before passing them on.

The `CreateBatch` and `CreateTransaction` methods simply create wrappers for those interfaces, but with the same base wrapper class (as most methods to wrap are defined by `IDatabaseAsync`).

The `KeyRandomAsync` and `KeyRandom` methods are not supported. Invocations will throw a `NotSupportedException`. This is not a concern for me, and to quote @Marc Gravell:

> I can't think of any sane way of achieving that, but I suspect NotSupportedException("RANDOMKEY is not supported when a key-prefix is specified") is entirely reasonable (this isn't a commonly used command anyway)

I have not yet implemented `ScriptEvaluate` and `ScriptEvaluateAsync`, because it is unclear to me how I should handle the `RedisResult` return value. The input parameters to these methods accept `RedisKey`, which should be prefixed, but the script itself could **return** keys, and in that case I think it would make (most) sense to *unprefix* those keys. For the time being, those methods will throw a `NotImplementedException`...

The sort methods (`Sort`, `SortAsync`, `SortAndStore` and `SortAndStoreAsync`) have special handling for the `by` and `get` parameters. These are prefixed as normal unless they have one of the special values: `nosort` for `by` and `#` for `get`.

Finally, to allow prefixing `ITransaction.AddCondition` I had to use a bit of reflection:

```
internal static class ConditionHelper
{
    public static Condition Rewrite(this Condition outer, Func<RedisKey, RedisKey> rewriteFunc)
    {
        ThrowIf.ArgNull(outer, "outer");
        ThrowIf.ArgNull(rewriteFunc, "rewriteFunc");

        Type conditionType = outer.GetType();
        object inner = FormatterServices.GetUninitializedObject(conditionType);

        foreach (FieldInfo field in conditionType.GetFields(BindingFlags.NonPublic | BindingFlags.Instance))
        {
            if (field.FieldType == typeof(RedisKey))
            {
                field.SetValue(inner, rewriteFunc((RedisKey)field.GetValue(outer)));
            }
            else
            {
                field.SetValue(inner, field.GetValue(outer));
            }
        }

        return (Condition)inner;
    }
}
```

This helper is used by the wrapper like this:

```
internal Condition ToInner(Condition outer)
{
    if (outer == null)
    {
        return outer;
    }
    else
    {
        return outer.Rewrite(this.ToInner);
    }
}
```

There are several other `ToInner` methods for different kinds of parameters that contain `RedisKey`, but they all more or less end up calling:

```
internal RedisKey ToInner(RedisKey outer)
{
    return this.Prefix + outer;
}
```

---

I have now created a pull request for this: <https://github.com/StackExchange/StackExchange.Redis/pull/92>

The extension method is now called `WithKeyPrefix`, and the reflection hack for rewriting conditions is no longer needed, as the new code has access to the internals of the `Condition` classes.
Mongodb Trying to get selected fields to return from aggregate

I'm having trouble with my aggregate function. I'm trying to get the user's most common orders from the database, but I'm only returning the name and the count. I've tried using the `$project` operator, but I can't seem to make it return anything other than what's in the `$group` statement. Here is my current aggregate function:

```
OrderModel.aggregate(
    {$unwind: "$products"},
    {$match: { customerID: customerID }},
    {$group: { _id: "$products.name", count: {$sum:1}}},
    {$project: {name: "$_id", _id:0, count:1, active:1}},
    {$sort: {"count" : -1}},
    {$limit: 25 })
```

This just produces an output as follows: `{"count":10, "name": "foo"}`, whereas I want to return the whole object, embedded docs and all. Any ideas where I'm going wrong?

Edit - Added example document and expected output.

Document:

```
{
    "charge": {
        "captured": true,
        "refunds": [],
        "balance_transaction": "txn_104Ics4QFdqlbCVHAdV1G2Hb",
        "failure_message": null,
        "failure_code": null,
        "amount_refunded": 0,
        "customer": "cus_4IZMPAIkEdiiW0",
        "invoice": null,
        "dispute": null,
        "statement_description": null,
        "receipt_email": null
    },
    "total": 13.2,
    "userToken": "cus_4IZMPAIkEdiiW0",
    "customerID": "10152430176375255",
    "_id": "53ad927ff0cb43215821c649",
    "__v": 0,
    "updated": 20140701082928810,
    "created": 20140627154919216,
    "messageReceived": false,
    "ready": true,
    "active": false,
    "currency": "GBP",
    "products": [
        {
            "name": "Foo",
            "active": true,
            "types": [
                {
                    "variants": [
                        {
                            "name": "Bar",
                            "isDefault": false,
                            "price": 13.2
                        }
                    ]
                }
            ]
        }
    ]
}
```

Expected outcome:

```
[
    {
        "name": "Foo",
        "active": true,
        "types": [
            {
                "variants": [
                    {
                        "name": "Bar",
                        "isDefault": false
                    }
                ]
            },
            {
                "variants": [
                    {
                        "name": "Something else",
                        "isDefault": false
                    }
                ]
            }
        ],
        "quantity": 10
    },
    {
        "name": "Another product",
        "active": true,
        "types": [
            {
                "variants": [
                    {
                        "name": "Bar",
                        "isDefault": false
                    }
                ]
            }
        ],
        "quantity": 7
    }
]
```

Thanks!
Largely speaking here, [**`$project`**](http://docs.mongodb.org/manual/reference/operator/aggregation/project/) relies on the "absolute path" to the field property in the document on the "right hand" side. Shortcuts such as `1` are just for where that element is actually the top level of the document.

Also, you need to be able to retain fields when you [**`$group`**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/), so this is where you use various grouping operators such as [**`$first`**](http://docs.mongodb.org/manual/reference/operator/aggregation/first/) and [**`$addToSet`**](http://docs.mongodb.org/manual/reference/operator/aggregation/addToSet/) or [**`$push`**](http://docs.mongodb.org/manual/reference/operator/aggregation/push/) to keep the information you are pulling from the inner array.

And you must [**`$unwind`**](http://docs.mongodb.org/manual/reference/operator/aggregation/unwind/) twice here as well, since you are combining "types" across documents, and you do not want just the [**`$first`**](http://docs.mongodb.org/manual/reference/operator/aggregation/first/) in this case.

```
OrderModel.aggregate([
    { "$unwind": "$products" },
    { "$unwind": "$products.types" },
    { "$group": {
        "_id": "$products.name",
        "active": { "$first": "$products.active" },
        "types": { "$addToSet": "$products.types" },
        "quantity": { "$sum": 1 }
    }},
    { "$project": {
        "_id": 0,
        "name": "$_id",
        "active": 1,
        "types": 1,
        "quantity": 1
    }}
], function(err, results) {

});
```
Is there a way to write this in F#?

```
let is_sum_greater_than_10 list =
    list
    |> Seq.filter (filter)
    |> Seq.sum
    |> (10 >)
```

This does not compile. Looking at the last line, `|> (10 >)`, is there a way to write this such that the left side is pipelined to the right for binary operators?

Thanks
You can use a partial application of the `<` operator, using the *(operator-symbol)* syntax:

```
let is_sum_greater_than_10 list =
    list
    |> Seq.filter filter
    |> Seq.sum
    |> (<) 10
```

You can also see this as an equivalent of a lambda application:

```
let is_sum_greater_than_10 list =
    list
    |> Seq.filter filter
    |> Seq.sum
    |> (fun x y -> x < y) 10
```

or just a lambda:

```
let is_sum_greater_than_10 list =
    list
    |> Seq.filter filter
    |> Seq.sum
    |> (fun y -> 10 < y)
```
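Since the operand order of a partially applied comparison is easy to get backwards, a quick sanity check (values picked arbitrarily):

```
[5; 6] |> Seq.sum |> (<) 10   // true:  evaluates 10 < 11
[5; 6] |> Seq.sum |> (>) 10   // false: evaluates 10 > 11
```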
Multiple GitHub accounts on the same computer? Trying to work on both my actual "work" repos, and my repos on GitHub, from my computer. The work account was set up first, and everything works flawlessly. My account, however, cannot seem to push to my repo, which is set up under a different account/email. I've tried copying my work key up to my account, but that throws an error because of course a key can only be attached to one account. How can I push/pull to and from both accounts with their respective GitHub credentials?
All you need to do is configure your SSH setup with multiple SSH keypairs.

- This link is easy to follow (Thanks Eric): <http://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts--net-22574>
- Generating SSH keys (Win/msysgit): <https://help.github.com/articles/generating-an-ssh-key/>

**Relevant steps from the first link:**

1. Generate an SSH-key:

   ```
   ssh-keygen -t ed25519 -C "john@doe.example.com"
   ```

   Follow the prompts and decide a name, e.g. `id_ed25519_doe_company`.

2. Copy the SSH public-key to GitHub from `~/.ssh/id_ed25519_doe_company.pub` and tell ssh about the key:

   ```
   ssh-add ~/.ssh/id_ed25519_doe_company
   ```

3. Create a `config` file in `~/.ssh` with the following contents:

   ```
   Host github-doe-company
       HostName github.com
       User git
       IdentityFile ~/.ssh/id_ed25519_doe_company
   ```

4. Add your remote:

   ```
   git remote add origin git@github-doe-company:username/repo.git
   ```

   or change using:

   ```
   git remote set-url origin git@github-doe-company:username/repo.git
   ```

---

Also, if you're working with multiple repositories using different personas, you need to make sure that your individual repositories have the user settings overridden accordingly:

Setting user name, email and GitHub token – Overriding settings for individual repos

<https://help.github.com/articles/setting-your-commit-email-address-in-git/>

**Note:** Some of you may require different emails to be used for different repositories; from git **2.13** you can set the email on a directory basis by editing the global config file found at `~/.gitconfig`, using conditionals like so:

```
[user]
    name = Default Name
    email = defaultemail@example.com

[includeIf "gitdir:~/work/"]
    path = ~/work/.gitconfig
```

And then your work-specific config `~/work/.gitconfig` would look like this:

```
[user]
    name = Pavan Kataria
    email = pavan.kataria@example.com
```

Thank you [@alexg](https://stackoverflow.com/users/920920) for informing me of this in the comments.
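One extra sanity check worth doing before pointing any remotes at the alias from step 3: ask GitHub which account the alias authenticates as (it should greet you with the username tied to that key):

```
ssh -T github-doe-company
```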
In jquery, is there a way to iterate over a jQuery array without $(this)?

I know the standard way of doing it:

```
$('div').each(function(){
    // here `this` is bound to DOM Element
    // use $(this) to access the jQuery wrapper
});
```

But this is a little bit cumbersome, because we need to use `$(this)` everywhere, and this causes a performance penalty, as shown by <http://jsperf.com/jquery-each-this>.

I am looking for a way to iterate over a jQuery array/selector with `this` bound to the jQuery wrapper instead of to the DOM element.
You can use an ordinary loop, and `.slice(i, i + 1)` to get the corresponding jQuery object. [`.eq()`](http://api.jquery.com/eq/) does the same thing and is implemented in terms of [`.slice()`](http://api.jquery.com/slice/), so calling `.slice(i, i + 1)` directly is slightly more efficient than `.eq(i)`.

```
var $divs = $('div');
for (var i = 0; i < $divs.length; i++) {
    var $div = $divs.slice(i, i + 1);
}
```

Notice that the closure is not present. If you want to use closures, create a temporary function:

```
function eachMethod(index, $elem) { /* ... */ }

var $divs = $('div');
for (var i = 0; i < $divs.length; i++) {
    eachMethod(i, $divs.slice(i, i + 1));
    // Or, if you even want to preserve `this`
    // eachMethod.call($divs[i], i, $divs.slice(i, i + 1));
}
```

## Update:

A [jQuery plugin](http://docs.jquery.com/Plugins/Authoring) to achieve your desired "each-syntax":

```
(function($) {
    $.fn._each = function(method) {
        // `this` points to the jQuery collection
        for (var i = 0; i < this.length; i++) {
            method.call(this[i], i, this.slice(i, i + 1));
        }
        return this;
    };
})(jQuery);

// Usage:
$('div')._each(function(index, $elem) {
   // this points to the DOM element
   // index to the index
   // $elem to the jQuery-wrapped DOM element
});
```
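A side benefit of the plain loop over `.each()` is early exit: you can simply `break` instead of returning `false` from a callback (the class names below are just for illustration):

```
var $divs = $('div');
for (var i = 0; i < $divs.length; i++) {
    var $div = $divs.slice(i, i + 1);
    if ($div.hasClass('stop-here')) {
        break; // no callback bookkeeping required
    }
    $div.addClass('processed');
}
```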
Is there an EDI Segment that can contain more than 256 characters? Is there an EDI x12 segment that has no character limit? We often use the MSG segment for open text fields but this is capped at 256 characters, so we’re looking for an alternative that can handle 500+ characters.
# The short answer

The `MTX` Text segment allows you to send messages of up to 4096 characters long, which is the longest available in X12.

You can’t just swap out an `MSG` segment for an `MTX` segment, though. You can only use `MTX` if it’s included in the transaction set, and that depends on which X12 'release' (version) you're using.

For [the `005010` release](https://www.stedi.com/edi/x12-005010/segment/MTX) (one of the more popular ones), here are the transaction sets that `MTX` appears in:

- `105` Business Entity Filings
- `113` Election Campaign and Lobbyist Reporting
- `150` Tax Rate Notification
- `155` Business Credit Report
- `179` Environmental Compliance Reporting
- `194` Grant or Assistance Application
- `251` Pricing Support
- `274` Healthcare Provider Information
- `284` Commercial Vehicle Safety Reports
- `500` Medical Event Reporting
- `620` Excavation Communication
- `625` Well Information
- `650` Maintenance Service Order
- `805` Contract Pricing Proposal
- `806` Project Schedule Reporting
- `814` General Request, Response or Confirmation
- `832` Price/Sales Catalog
- `836` Procurement Notices
- `840` Request for Quotation
- `843` Response to Request for Quotation
- `850` Purchase Order
- `855` Purchase Order Acknowledgment
- `860` Purchase Order Change Request - Buyer Initiated
- `865` Purchase Order Change Acknowledgment/Request - Seller Initiated

# Some additional clarification

- Technically, character limits don't apply to X12 *segments* – what you're referring to is an X12 *element*. A segment is just a container for elements, and the element you're referring to is the element referenced in [MSG01](https://www.stedi.com/edi/x12-005010/element/933) (the first *element* of the `MSG` *segment*).
- Each X12 element references an ID number. For each element, the ID number points to a dictionary that specifies the name, description, type, minimum length, and maximum length. In the case of `MSG01`, it points to data element [`933`](https://www.stedi.com/edi/x12-005010/element/933).
- Data element `933` – the one you're currently using – actually has a character limit of 264 characters (more than 256 characters, but not by much). *Note: the link above is to the `005010` X12 release, but I checked back to `003010` and up to `008030` and it seems to be 264 characters all the way through.*

Now, back to your original question: is there a data element that allows for a larger character payload? The answer is that there are 8 data elements that accept a payload larger than 264 characters.

Two of them are binary data types, which we can likely eliminate off the bat:

- [785](https://www.stedi.com/edi/x12-005010/element/785). **Binary Data**. A string of octets which can assume any binary pattern from hexadecimal 00 to FF. Note: The maximum length is dependent upon the maximum data value that can be entered in DE 784, which value is 999,999,999,999,999. *Max characters: 999999999999999.*
- [1700](https://www.stedi.com/edi/x12-005010/element/1700). **Transformed Data**. Binary or filtered data having one or more security policy options applied; transformed data may represent compressed, encrypted, or compressed and encrypted plaintext. *Max characters: 10000000000000000.*

The rest are strings, which is promising:

- [364](https://www.stedi.com/edi/x12-005010/element/364). **Communication Number**. Complete communications number including country or area code when applicable. *Max characters: 2048.*
- [1565](https://www.stedi.com/edi/x12-005010/element/1565). **Look-up Value**.
Value used to identify a certificate containing a public key. *Max characters: 4096.*
- [1566](https://www.stedi.com/edi/x12-005010/element/1566). **Keying Material**. Additional material required for decrypting the one-time key. *Max characters: 512.*
- [1567](https://www.stedi.com/edi/x12-005010/element/1567). **One-time Encryption Key**. Hexadecimally filtered encrypted one-time key. *Max characters: 512.*
- [1573](https://www.stedi.com/edi/x12-005010/element/1573). **Encoded Security Value**. Encoded representation of the Security Value specified by the Security Value Qualifier. *Max characters: 1.00E+16.*

And, last but not least:

- [1551](https://www.stedi.com/edi/x12-005010/element/1551). **Textual Data**. To transmit large volumes of message text. *Max characters: 4096.*

Looks like a winner! Note that element 1551 appears in only one segment: [MTX](https://www.stedi.com/edi/x12-005010/segment/MTX), which was introduced [in the `003060` X12 release](https://www.stedi.com/edi/x12-003060/segment/MTX). And in the initial `003060` release, it was only included in one X12 Transaction Set: `194 Grant or Assistance Application` (which makes sense – a longer field was needed for grant applications).

It seems that as new releases were developed, the `MTX` segment made its way into more and more transaction sets – likely for exactly the reason you're asking. [In `003070`](https://www.stedi.com/edi/x12-003070/segment/MTX), it was included in 5 transaction sets; [in `004010`](https://www.stedi.com/edi/x12-004010/segment/MTX), 15; [in `005010`](https://www.stedi.com/edi/x12-005010/segment/MTX), 24, and so on.

The `MTX` segment uses element `1551` in both `MTX02` and `MTX03`, so you can get double the length by using both of them. Note that there's a 'relational condition': `If MTX-03 is present, then MTX-02 is required` (in other words, you can't use `MTX03` if you don't use `MTX02` first). And depending on the transaction set, the `MTX` segment may be able to be repeated as well.

Long story short: if the `MTX` segment is in the transaction set / release you're using, you're likely in luck. Hope this helps.
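As a postscript, purely for illustration: a hypothetical `MTX` segment using both text elements might look like this, with `*` as the element separator, `~` as the segment terminator, and `MTX01` omitted (the content is made up):

```
MTX**First chunk of up to 4096 characters of free text*Second chunk of up to 4096 more~
```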
Populate word table using python-docx

I have a table to populate: [table to populate](https://i.stack.imgur.com/1HFPq.png). I am quite new to python-docx. I have tried to populate it using render, but it only outputs one server:

```
for i in serverJson:
    doc.render(i)
```

where serverJson is the list of servers entered by the user. For example:

```
for i in appserver:
    server_1={"component":"Tomcat","comp_version":"7","server":i, "app_port":"5000","db_sid":" ","db_port":"200"}
    server_2={"component":"Apache","comp_version": "2.4","server":i, "app_port":" ","db_sid":" ","db_port":"200"}
    serverJson.append(server_1)
    serverJson.append(server_2)
```

My question is: how do I populate the table shown in the link with the number of servers entered by the user?
So, what you are actually doing with this block of code:

```
for i in serverJson:
    doc.render(i)
```

is rendering the same doc multiple times, but only with the **single** set of variables you provided each time. Instead, you need to provide a jinja `for` statement inside the template itself, to allow it to dynamically create rows and columns. You will have to work on both your `docx` file and your Python code.

Firstly, create a table and make your `docx` file look like:

[![enter image description here](https://i.stack.imgur.com/EVqDi.png)](https://i.stack.imgur.com/EVqDi.png)

Above, we are using some jinja2 `for` loops to achieve the following:

- Generate as many columns in the headers as we need
- Generate as many rows as the servers in the list
- Generate as many columns containing the data of the server in question

In order to populate the above template with the correct context, take a look at the below code:

```
from docxtpl import DocxTemplate
import os,sys

#Just change these according to your needs
inputFileName = "i.docx"
outputFileName = "o.docx"

#This is done to obtain the absolute paths to the input and output documents,
#because it is more reliable than using the relative path
basedir = os.path.dirname(sys.argv[0])
path = os.path.join(basedir, "", inputFileName)
outpath = os.path.join(basedir, "", outputFileName)

template = DocxTemplate(path)

#Specify all your headers in the headers column
context = {
    'headers' : ['Component', 'Component Version', 'Server FQDN', 'Application port', 'DB SID', 'DB Port', 'Infos'],
    'servers': []
}

#Fictitious appserver list
appserver = ['a','b']

#Add data to servers 1 and 2 using a list and not a dict, remember to add
#an empty string for the Infos, as well, otherwise the border won't be drawn
for i in appserver:
    server_1= ["Tomcat",7,i,5000," ",200,""]
    server_2= ["Apache",2.4,i," "," ",200,""]
    context['servers'].append(server_1)
    context['servers'].append(server_2)

template.render(context)
template.save(outpath)
```

The above will produce `o.docx`, which will look like:

[![enter image description here](https://i.stack.imgur.com/MoBPz.png)](https://i.stack.imgur.com/MoBPz.png)
Adding metadata to PDF I need to add metadata to a PDF which I am creating using [prawn](http://rubygems.org/gems/prawn). That meta-data will be extracted later by, probably, [pdf-reader](http://rubygems.org/gems/pdf-reader). This metadata will contain internal document numbers and other information needed by downstream tools. It would be convenient to associate meta-data with each page of the PDF. [The PDF specification](http://wwwimages.adobe.com/www.adobe.com/content/dam/Adobe/en/devnet/pdf/pdfs/PDF32000_2008.pdf) claims that I can store per-page private data in a "Page-Piece Dictionary". Section 14.5 states: > > A page-piece dictionary (PDF 1.3) may be used to hold private > conforming product data. The data may be associated with a page or > form XObject by means of the optional PieceInfo entry in the page > object (see Table 30) or form dictionary (see Table 95). Beginning > with PDF 1.4, private data may also be associated with the PDF > document by means of the PieceInfo entry in the document catalogue > (see Table 28). > > > How can I set a "page-piece dictionary" with prawn? I'm using prawn 0.12.0. If that's not possible, how else can I achieve my goal of storing metadata about each page, either at the page level, or at the document level?
One way is to do *none of the above*; that is, don't attach the metadata as a page-piece dictionary, and don't attach it with prawn. Instead, attach the metadata as a file attachment using the [pdftk](http://www.pdflabs.com/tools/pdftk-server/) command-line tool.

To do it this way, create a file with the metadata. For example, the file *metadata.yaml* might contain:

```
---
- :document_id: '12345'
  :account_id: 10
  :page_numbers:
  - 1
  - 2
  - 3
- :document_id: '12346'
  :account_id: 24
  :page_numbers:
  - 4
```

After you are done creating the pdf file with prawn, then use *pdftk* to attach the metadata file to the pdf file:

```
$ pdftk foo.pdf attach_files metadata.yaml output foo-with-attachment.pdf
```

Since *pdftk* will not modify a file in place, the output file must be different from the input file.

You may be able to extract the metadata file using pdf-reader, but you can certainly do it with pdftk. This command unpacks *metadata.yaml* into the *unpacked-attachments* directory.

```
$ pdftk foo-with-attachment.pdf unpack_files output unpacked-attachments
```
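If you are already generating everything from Ruby, writing the metadata file is a short step (a sketch; the field names simply mirror the YAML above):

```
require 'yaml'

records = [
  { document_id: '12345', account_id: 10, page_numbers: [1, 2, 3] }
]
File.write('metadata.yaml', records.to_yaml)
```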
Will Windows 8 Metro apps require any change to the code in order to work on ARM based devices?

As far as I understand, the Metro part of the runtime is not dependent on the x86 processor architecture or native code, and will work without any change to the code on, say, an x86 tablet or an ARM tablet. Is this correct?

Also, how easy would it be to port apps from Windows Phone metro to Windows 8 metro? Can we hope that no change to code at all is a possibility?

Similarly, what about XBOX? Is there any chance Windows 8/Windows Phone metro apps can be easily ported to the new XBOX metro environment?
Windows 8 has [something of a split personality](http://www.scottlogic.co.uk/blog/colin/2011/09/windows-8-an-os-of-two-halves/), with the architecture shown below:

![enter image description here](https://i.stack.imgur.com/RWFAg.jpg)

The left hand side is the newer metro-style / WinRT architecture, whilst the right-hand side is the older Win32 / .NET architecture. It has been [widely reported](http://www.slashgear.com/windows-8-arm-tablet-apps-metro-only-tips-insider-02199538/) that ARM tablets will only support the Metro / WinRT architecture. There has been no indication that ARM devices will require different code, and this seems quite unlikely based on the fact that they will have the same architecture.

> Also how easy would it be to port apps from Windows Phone metro to Windows 8 metro? Can we hope that no change to code at all is a possibility?

It is slightly easier to port WP7 apps to Win8 due to the similarities in their architecture, i.e. a similar application lifecycle and a similar restricted set of APIs. However, there certainly are code changes required; see this article, which presents a [simple cross-platform Win8-WinRT / Silverlight](http://www.scottlogic.co.uk/blog/colin/2011/09/tweetsearch-a-cross-platform-metro-ui-winrt-and-silverlight-application/) application. The XAML UI elements are in different namespaces, which has an impact on all of your UI code; the XAML namespace mapping syntax is also different, so you cannot share XAML. It is a bit of a mess, really.

There are rumours that WP8 will use the [same WinRT architecture](http://mobile.dzone.com/articles/windows-phone-8-and-winrt), which would make code sharing possible. However, I think this is highly unlikely; Microsoft already introduced significant architectural changes from WP6.5 to WP7, and doing it again would alienate developers.
What are the differences between bsdtar and GNU tar? I've always used GNU `tar`. However, all GNU/Linux distributions that I've seen ship `bsdtar` in their repositories. I've even seen it installed by default in some, IIRC. I know for sure that Arch GNU/Linux requires it as a part of `basedevel` (maybe `base`, but I'm not sure), as I've seen it in PKGBUILDs. Why would you want to use `bsdtar` instead of GNU `tar`? What are the advantages? Note that I am the person who asked [What are the main differences between BSD and GNU/Linux userland?](https://unix.stackexchange.com/questions/79355/what-are-the-main-differences-between-bsd-and-gnu-linux-userland).
The Ubuntu `bsdtar` is actually the tar implementation bundled with `libarchive`; and that should be differentiated from classical `bsdtar`. Some BSD variants do use `libarchive` for their tar implementation, eg FreeBSD. `GNUtar` does support the [other tar variants](http://www.gnu.org/software/tar/manual/html_section/Formats.html#SEC132) and automatic compression detection. As *visualication* pasted the blurb from Ubuntu, there are a few things in there that are specific to `libarchive`: 1. `libarchive` is by definition a library, and different from both classical `bsdtar` and `GNUtar` in that way. 2. `libarchive` cannot read some older obscure GNU tar variations, most notable was encoding of some headers in base64, so that the tar file would be 7-bit clean ASCII (this was the case for 1.13.6-1.13.11 and changed in 1.13.12, that code was only officially in tar for 2 weeks) 3. `libarchive`'s `bsdtar` will read non-tar files (eg zip, iso9660, cpio), but classical bsdtar will not. Now that we've gotten `libarchive` out of the way, it mostly comes down to what is supported in classical `bsdtar`. You can see the manpages yourself here: - [GNU tar(1)](http://www.gnu.org/software/tar/manual/) - [FreeBSD tar(1)](http://www.freebsd.org/cgi/man.cgi?query=tar&sektion=1) - libarchive-based - [NetBSD tar(1)](http://netbsd.gw.com/cgi-bin/man-cgi?tar++NetBSD-current) - [OpenBSD tar(1)](https://man.openbsd.org/tar) - [Standard/Schily tar(1)](http://schilytools.sourceforge.net/man/man1/star.1.html) - the oldest free tar implementation, no heritage to any other - [busybox (1)](https://busybox.net/downloads/BusyBox.html#tar) - Mini tar implementation for BusyBox, common in embedded systems In your original question, you asked what are the advantages to the classical `bsdtar`, and I'm not sure there are really any. The only time it really matters is if you're trying to writing shell scripts that need to work on all systems; you need to make sure what you pass to `tar` is actually valid in all variants. `GNUtar`, `libarchive`'s `bsdtar`, classical `bsdtar`, `star` and `BusyBox`'s `tar` are certainly the tar implementations that you'll run into most of the time, but I'm certain there are others out there (early QNX for example). `libarchive`/`GNUtar`/`star` are the most feature-packed, but in many ways they have long deviated from the original standards (possibly for the better).
How can a `<label>` completely fill its parent `<td>`?

Here is the relevant code (doesn't work):

```
<html>
  <head>
    <title>testing td checkboxes</title>
    <style type="text/css">
      td { border: 1px solid #000; }
      label { border: 1px solid #f00; width: 100%; height: 100% }
    </style>
  </head>
  <body>
    <table>
      <tr>
        <td>Some column title</td>
        <td>Another column title</td>
      </tr>
      <tr>
        <td>Value 1<br>(a bit more info)</td>
        <td><label><input type="checkbox" /> &nbsp;</label></td>
      </tr>
      <tr>
        <td>Value 2</td>
        <td><input type="checkbox" /></td>
      </tr>
    </table>
  </body>
</html>
```

The reason is that I want a click anywhere in the table cell to check/uncheck the checkbox.

Edits: By the way, no JavaScript solutions please, for accessibility reasons. I tried using `display: block;`, but that only works for the width, not for the height.
I have only tested this in IE 6, 7, 8 and FF 3.6.3.

```
<html>
  <head>
    <title>testing td checkboxes</title>
    <style type="text/css">
      tr { height: 1px; }
      td { border: 1px solid #000; height: 100%; }
      label {
        display: block;
        border: 1px solid #f00;
        min-height: 100%; /* for the latest browsers which support min-height */
        height: auto !important; /* for newer IE versions */
        height: 100%; /* the only height-related attribute that IE6 does not ignore */
      }
    </style>
  </head>
  <body>
    <table>
      <tr>
        <td>Some column title</td>
        <td>Another column title</td>
      </tr>
      <tr>
        <td>Value 1<br>(a bit more info)</td>
        <td><label><input type="checkbox" /> &nbsp;</label></td>
      </tr>
    </table>
  </body>
</html>
```

The main trick here is to define the height of the rows so we can use a 100% height on their children (the cells) and in turn, a 100% height on the cells' children (the labels). This way, no matter how much content there is in a cell, it will forcibly expand its parent row, and its sibling cells will follow. Since the label has a 100% height of its parent, which has its height defined, it will also expand vertically.

The second and last trick (but just as important) is to use a CSS hack for the min-height attribute, as explained in the comments.
Inter applet communication

In my understanding, applets are independent entities. Is it possible to have inter-applet communication? If so, how could it be achieved?

Thanks

J
Yes. It is possible to achieve inter-applet communication if the applets are on the same page and originate from the same domain. You will have to name the applets in the page first, using the attribute "name=value", like:

```
<applet code="FirstApplet.class" name="firstApplet" width=nn height=nn></applet>

<applet code="SecondApplet.class" name="secondApplet" width=nn height=nn></applet>
```

With the above in place, in FirstApplet.java, use the following to access SecondApplet:

```
SecondApplet secondApplet = (SecondApplet)getAppletContext().getApplet("secondApplet");
//invoke a method on secondApplet here
```

Similarly, you can access the FirstApplet in SecondApplet.java.
readonly data view (not copy) of attribute subset I want a readonly view (not a copy) of my data for selected attributes. I understand that this is possible to solve with a descriptor / or property but so far I could not figure out how. In case there is a better way / pattern to solve this I would be happy to learn about it. ``` class Data: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class View: def __init__(self, data, attributes): self.attributes = attributes self.data = data for a in attributes: #setattr(self, a, lambda: getattr(data, a)) setattr(self, a, property(lambda: getattr(data, a))) #@property #def b(self): # return self.data.b def __getattr__(self, item): if item in self.attributes: return getattr(self.data, item) raise AttributeError("can't get attribute") def test_view(): data = Data(1, 2, 3) mydata = View(data, ['b', 'c']) # but not a! assert mydata.b == 2 data.b = 9 assert mydata.b == 9 with pytest.raises(AttributeError, match="can't set attribute"): mydata.b = 10 ```
> I understand that this is possible to solve with a descriptor / or property but so far I could not figure out how.

That's incorrect, actually. Descriptors only work when found *on the class*, not on instances (properties are a type of descriptor so are no different here). Because your views define attributes *as instance data*, you can't generate properties for those attributes and stick them on your `View` instance. So `setattr(self, a, property(lambda: getattr(data, a)))` doesn't work, no. This is not a problem to be solved with descriptors.

Stick with `__getattr__` doing the lookups, and a corresponding `__setattr__` method to prevent adding attributes to the view:

```
class View:
    def __init__(self, source, *attrs):
        self._attrs = set(attrs)
        self._source = source

    def __getattr__(self, name):
        if name in self._attrs:
            return getattr(self._source, name)
        raise AttributeError(f"{type(self).__name__!r} object has no attribute {name!r}")

    def __setattr__(self, name, value):
        # special case setting self._attrs, as it may not yet exist
        if name == "_attrs" or name not in self._attrs:
            return super().__setattr__(name, value)
        raise AttributeError("can't set attribute")

    def __dir__(self):
        return self._attrs
```

I made a few more alterations here. The attributes are stored as a set, so testing whether a name is part of the attributes forming the view is efficient. I also implemented [`__dir__`](https://docs.python.org/3/reference/datamodel.html#object.__dir__) so `dir(mydata)` returns the available attributes.

Note that I also altered the API *slightly* there, making `View()` take an arbitrary number of arguments to define the attribute names. This would make your test look like this:

```
data = Data(1, 2, 3)
mydata = View(data, 'b', 'c')  # but not a!
assert mydata.b == 2
data.b = 9
assert mydata.b == 9
with pytest.raises(AttributeError, match="can't set attribute"):
    mydata.b = 10
```

In fact, there is no way you can generate descriptors on the fly for this, even using a metaclass, as attribute lookups for an instance do not consult `__getattribute__` or `__getattr__` on the metaclass (it's an optimization, see [*Special method lookup*](https://docs.python.org/3/reference/datamodel.html#special-method-lookup)). Only `__getattribute__` or `__getattr__` defined on the class remain as hook points, and generating a property object in either of those methods just to bind it is more indirection than is needed here.

If you are creating a lot of these `View` objects, you probably do want to use [`__slots__`](https://docs.python.org/3/reference/datamodel.html#slots). If you use this, no `__dict__` descriptor is created for your class, and so instances don't have arbitrary attributes. Instead, you name each attribute it *can* have, and Python will create special descriptors for those and reserve space for their values. Because the `__dict__` dictionaries required to give instances arbitrary attributes take more memory space than the fixed space reserved for a known number of attributes, this saves memory.
It also has the side effect you can't add any new attributes to your `View` instances, making the `__setattr__` method unnecessary: ``` class View: __slots__ = ("_attrs", "_source") def __init__(self, source, *attrs): self._attrs = set(attrs) self._source = source def __getattr__(self, name): if name in self._attrs: return getattr(self._source, name) raise AttributeError(f"{type(self).__name__!r} object has no attribute {name!r}") def __dir__(self): return self._attrs ``` However, without a `__setattr__` the message on the `AttributeError` thrown when you try to set attributes does change somewhat: ``` >>> mydata.b = 10 Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'View' object has no attribute 'b' ``` so you may want to keep it anyway.
What should be in .gitignore for a wearable native C/C++ Tizen Studio project?

So, what should be in `.gitignore` for a wearable native C/C++ Tizen Studio project? There are solutions for some separate files like [this](https://developer.tizen.org/ko/forums/sdk-ide/what-.rds_delta-file?langswitch=ko), but there seems to be no complete `.gitignore` sample for a wearable native C/C++ application.
You may follow these `.gitignore` from the open source Tizen projects - [VLC Tizen](http://github.com/TizenTeam/vlc-tizen/blob/master/.gitignore): > > > ``` > /.cproject > /.checkers > /.rds_delta > /.sdk_delta.info > /.sign > /crash-info/ > /res/res.xml > /vlc/ > /Debug/ > /Release/ > /lib/*.so* > /lib/*.a > /emotion/ > /libvlcpp/ > /medialibrary/ > *.tmp > *~ > res/edje/*.edj > > ``` > > - [Kodi](http://github.com/cisco-open-source/kodi/blob/master/.gitignore)
Manipulate alpha bytes of Java/Android color int

If I have an int in Java that I'm using as an Android color (for drawing on a Canvas), how do I manipulate just the alpha component of that int? For example, how can I use an operation to do this:

```
int myOpaqueColor = 0xFFFFFF;
float factor = 0;
int myTransparentColor = operationThatChangesAlphaBytes(myOpaqueColor, factor);
//myTransparentColor should now = 0x00FFFFFF;
```

Ideally, it would be nice to multiply that first alpha byte by whatever `factor` is, rather than just setting it to a static value.
Check out the [Color](http://developer.android.com/reference/android/graphics/Color.html) class. Your code would look a bit something like this.

```
int color = 0xFFFFFFFF;
int transparent = Color.argb(0, Color.red(color), Color.green(color), Color.blue(color));
```

So wrapping it in a method might look like:

```
@ColorInt
public static int adjustAlpha(@ColorInt int color, float factor) {
    int alpha = Math.round(Color.alpha(color) * factor);
    int red = Color.red(color);
    int green = Color.green(color);
    int blue = Color.blue(color);
    return Color.argb(alpha, red, green, blue);
}
```

And then call it to set the transparency to, let's say, 50%:

```
int halfTransparentColor = adjustAlpha(0xFFFFFFFF, 0.5f);
```

I think using the provided Color class is a little bit more self-documenting than just doing the bit manipulation yourself, plus it's already done for you.
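That said, if you do want the raw bit manipulation the question asked about, a bitwise equivalent of the method above could look something like this (hypothetical name, same semantics):

```
@ColorInt
public static int adjustAlphaBitwise(@ColorInt int color, float factor) {
    int alpha = Math.round((color >>> 24) * factor); // unsigned shift isolates the alpha byte
    return (alpha << 24) | (color & 0x00FFFFFF);     // splice the scaled alpha back onto the RGB bytes
}
```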
Keeping a lone semicolon in code is allowed in Java. What happens behind the scenes? How do the compiler and JVM treat them?

```
public static void main(String[] args) {
    File dir = new File("dir");
    dir.mkdir();
    File file = new File(dir,"file.txt");;;;;
    ;
    ;
    ;
    try {
        file.createNewFile();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        ;
        ;
        ;
        ;
    }
}
```

The compiler shows no error for the extra semicolons, and the code runs as if nothing unusual happened. I want to know what happens behind the scenes. Do such semicolons consume more stack memory, and thus require more processor cycles to run?
> I want to know what happens behind the scenes.

An extra `;` turns up as a skip statement in the AST. They are typically used instead of empty bodies in, for instance, while loops:

```
while (expression)
    ;
```

> Do such semicolons consume more stack memory, and thus require more processor cycles to run?

No, they do not show up in the byte code. (Which is why they typically [can't be used as breakpoints](https://stackoverflow.com/a/28304273/276052) when debugging.)

---

**Some trivia:**

You may actually have skip statements *outside* of classes:

```
class MyClass {
    // ...
}
;;;;;
```

These are ignored, and allowed simply to not annoy people coming from C++ who are used to putting `;` after a class:

```
class MyClass {
    // ...
};
```

Source: I'm a compiler dev at Oracle
Hidden field in spring MVC

I want to use the Spring MVC hidden tag in the code below. Is this possible here? What do I have to write in my controller to do that, or is what I am doing correct?

```
<c:forEach var="record" items="${records}">
<tr>
    <td>
        <form:form id="myForm" action="list.html" method="post">
            <input type="hidden" name="record" value="${record}" />
            <a href="#" onclick="document.getElementById('myForm').submit();">Submit</a>
        </form:form>
    </td>
</tr>
</c:forEach>
```

Any help will be highly appreciated.
You are on the right track [depending on what your backing bean is], but in order to bind an ID as a hidden field *on submission* automatically to a "Person" bean (in this example), you would do something like: ``` <c:forEach var="person" items="${persons}" varStatus="status"> <tr> <c:set var="personFormId" value="person${status.index}"/> .... <form id="${personFormId}" action="${deleteUrl}" method="POST"> <input id="id" name="id" type="hidden" value="${person.id}"/> </form> <td>${person.firstName}</td> <td>${person.lastName}</td> .... </tr> </c:forEach> ``` In case you'd like to *render* a hidden field, you would use a `form:hidden` tag: ``` <form:hidden path="id" /> ``` Take a look at [Hidden Input Tag](http://static.springsource.org/spring/docs/current/spring-framework-reference/html/view.html#view-jsp-formtaglib-hiddeninputtag) section of the Spring docs.
Cassandra Error - Clustering column cannot be restricted (preceding column is restricted by a non-EQ relation)

We are using Cassandra as the data historian for our fleet management solution. We have a table in Cassandra which stores the details of journeys made by the vehicle. The table structure is as given below:

```
CREATE TABLE journeydetails(
   bucketid text,
   vehicleid text,
   starttime timestamp,
   stoptime timestamp,
   travelduration bigint,
   PRIMARY KEY (bucketid,vehicleid,starttime,travelduration)
);
```

Where:

1. bucketid :- partition key which is a combination of month and year
2. vehicleid :- unique id of the vehicle
3. starttime :- start time of the journey
4. stoptime :- end time of the journey
5. travelduration :- duration of travel in milliseconds

We would like to run the following query - **get all the travels of a vehicle - 1234567 between 2015-12-1 and 2015-12-3 whose travel duration is greater than 30 minutes**

When I run this query:

```
select * from journeydetails where bucketid in('2015-12') and vehicleid in('1234567')
and starttime > '2015-12-1 00:00:00' and starttime < '2015-12-3 23:59:59' and travelduration > 1800000;
```

I get this result:

```
InvalidRequest: code=2200 [Invalid query] message="Clustering column "travelduration" cannot be restricted (preceding column "starttime" is restricted by a non-EQ relation)
```

Does anyone have a recommendation on how to fix this issue?
```
select * from journeydetails where bucketid in('2015-12') and vehicleid in('1234567')
and starttime > '2015-12-1 00:00:00' and starttime < '2015-12-3 23:59:59' and travelduration > 1800000;
```

That's not going to work. The reason goes back to how Cassandra stores data on-disk. The idea with Cassandra is that it is very efficient at returning a single row with a precise key, or at returning a continuous range of rows from the disk.

Your rows are partitioned by `bucketid`, and then sorted on disk by `vehicleid`, `starttime`, and `travelduration`. Because you are already executing a range query (non-EQ relation) on `starttime`, you cannot restrict the key that follows. This is because the `travelduration` restriction may disqualify some of the rows in your range condition. This would result in an inefficient, non-continuous read. Cassandra is designed to protect you from writing queries (such as this) which may have unpredictable performance.

Here are two alternatives:

1- If you could restrict all of your key columns prior to `travelduration` (with an equals relation), then you could apply your greater-than condition:

```
select * from journeydetails where bucketid='2015-12' and vehicleid='1234567'
and starttime='2015-12-1 00:00:00' and travelduration > 1800000;
```

Of course, restricting on an exact `starttime` may not be terribly useful.

2- Another approach would be to omit `travelduration` altogether, and then your original query would work.

```
select * from journeydetails where bucketid='2015-12' and vehicleid='1234567'
and starttime > '2015-12-1 00:00:00' and starttime < '2015-12-3 23:59:59';
```

Unfortunately, Cassandra does not offer a large degree of query flexibility. Many people have found success using a solution like [Spark](http://spark.apache.org) (alongside Cassandra) to achieve this level of reporting.

And just a side note, but don't use `IN` unless you have to. Querying with `IN` is similar to using a secondary index, in that Cassandra has to talk to several nodes to satisfy your query. Calling it with a single item probably isn't too big of a deal. But `IN` is one of those old RDBMS habits that you should really break before getting too deep into Cassandra.
Input string was not in a correct format when parsed with TimeSpan.ParseExact

I have a `DataTable` `dtTest`. I want to parse one cell from this table, from row 2 and column 2. This cell can have the format `hh:mm:ss` or `h:mm:ss`. I want to parse it and convert it to the format `h.mm` or `hh.mm`. Here I check which symbol is at position 2:

```
string typeTime = dtTest.Rows[2][2].ToString().Substring(1, 1);
```

Now I parse it:

```
TimeSpan.ParseExact(dtTest.Rows[2][2].ToString(), typeTime == "." ? "h'.'mm" : "hh'.'mm", CultureInfo.InvariantCulture);
```

Parsing fails with the error "**Input string was not in a correct format**".
You are trying to parse a string *exactly* - your string contains a seconds component that does not exist in your format strings. Your string also contains `:` where you specify `.`. The following should work: ``` TimeSpan.ParseExact(dtTest.Rows[2][2].ToString(), "h':'mm':'ss", CultureInfo.InvariantCulture); ``` Note that the `h` format specifier will correctly understand `8` or `08`. Additionally, you can simply use one of the [standard `TimeSpan` format strings](http://msdn.microsoft.com/en-us/library/ee372286.aspx) - specifically `g` OR `c` instead of the custom format string: ``` TimeSpan.ParseExact(dtTest.Rows[2][2].ToString(), "g", CultureInfo.InvariantCulture); TimeSpan.ParseExact(dtTest.Rows[2][2].ToString(), "c", CultureInfo.InvariantCulture); ```
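If the cell contents may occasionally be malformed, `TimeSpan.TryParseExact` avoids the exception entirely (same format specifiers as above):

```
TimeSpan duration;
if (TimeSpan.TryParseExact(dtTest.Rows[2][2].ToString(), "c",
                           CultureInfo.InvariantCulture, out duration))
{
    // duration holds the parsed value
}
```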
How do I represent a loop in an activity diagram? I'd like to represent a loop in a UML activity diagram. Here's my situation: - For each folder, I check each document within that folder - For each document I check its content: - If it's invalid (based on keyword searching), do action X and pass to next document. - When all document are verified, continue to the next folder. Can anyone show me what this should look like?
There are 3 different notations that you can use.

As your loop is based on some elements (folders, documents), the most convenient way is to use an Expansion Region (of iterative type).

[![Expansion Region example](https://i.stack.imgur.com/GfMvA.png)](https://i.stack.imgur.com/GfMvA.png)

The second option, which is a preferred choice when you have some guard-based loop, is a Loop Node.

[![Loop Node example](https://i.stack.imgur.com/ycOEW.png)](https://i.stack.imgur.com/ycOEW.png)

The last possibility is to simply build a correctly structured decision/merge structure.

[![Decision/Merge example](https://i.stack.imgur.com/CgzSU.png)](https://i.stack.imgur.com/CgzSU.png)

The benefits of the first two are that they are compact and clear. It is also easy to have nested loops. Neither of these is true with the last option. Yet if you present your diagram to someone who is not familiar with UML (especially if you have no chance to explain the meaning of a particular structure), the last approach is usually the most widely recognized and understood.
nested multiline comments in java

While writing experimental code, I find it very useful to comment out entire blocks of code at a time. However, I can't find a reasonable way to do this in Java, because I frequently end up with nested blocks being commented out.

In C or C#, this is easily achieved using `#if 0` or `#if false`. I can't seem to find any such equivalent in Java -- any help would be appreciated.

Thanks!
I'm presuming you want to comment out nested code for debugging or testing purposes, right? Because leaving huge blocks of commented-out code in production code is generally considered very bad style.

Nested comments are not natively a feature of Java. So what can you do? Here are some different options:

**Slash Slash comment nesting**

The best I've been able to come up with is to use an editor that has a hotkey to slash-slash comment out an entire block of code, as //'s can be nested. Eclipse and NetBeans both have this support, though there are minor differences in how it works between IDEs, as far as when it decides to toggle the comments vs nest them. Usually nesting // comments can be done so long as the selections are not identical.

**third party preprocessor**

That said, J2ME uses a pre-processor in Eclipse, but I don't think that would help with desktop Java. There do appear to be [other preprocessors](http://boldinventions.com/index.php?option=com_content&view=article&id=81%3ajavapreprocessorusingeclipse&catid=34%3acategory-electronics-articles&Itemid=53) written for or compatible with Eclipse if you search for them. However, that would definitely hinder portability, as that's not a standard Java feature.

**modify comment**

Another thing I've often done for a quick nested-comment need (more so in CSS, which doesn't do // comments, than in Java) is to add a space between the final two characters ending the inner comment `*/`, so that Java will not end the outer comment at the inner comment's end. Not too difficult (but slightly manual) to restore the original comment. The nested `/*` is usually ignored by the parser based on how it does its pattern matching. E.g. `/* code /* nested comment * / more code */`

**use version control**

You could also easily use most any version control system to save a copy of your code with the code to remove, and then later use the version control restore/diff/merge to get your removed code back. Generally this solution is considered the best style, especially if this commented-out code will need to be left that way for long amounts of time.

**if (false)**

If the code you want to mass-prevent from running both compiles and is within a single function, you can easily disable it by adding an if statement around it whose condition will never be true, such as `if (false) { ... }`. Obviously, this would never pass any sort of lint-type code inspection tool, but it's quick and easy ;-)
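To illustrate that last option (the method name here is made up): the key difference from a real comment is that the disabled block must still be valid, compilable code. The JLS's unreachable-statements rules deliberately exempt `if (false)` to permit exactly this kind of conditional compilation:

```
if (false) {                    // constant-folded away at compile time
    runExpensiveDiagnostics();  // never runs and emits no bytecode, but must still compile
}
```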
Why should you include Podfile.lock under version control?

First of all, I'd like to mention that I've read the CocoaPods guide <https://guides.cocoapods.org/using/pod-install-vs-update.html>

Still, it seems somewhat unclear why we should commit Podfile.lock in a setup where everyone uses only the **pod install** command and we have all versions specified strictly in the Podfile. The .lock file seems redundant then.

Let's say we have a project that uses ReactiveSwift. ReactiveSwift has a dependency on the Result pod in its podspec, as follows:

```
s.dependency 'Result', '~> 3.2'
```

My assumption is that I shouldn't really care what ReactiveSwift depends on, since I'm just going to do a pod install of my strictly specified ReactiveSwift version. For the pods that I develop myself, I can influence their Podfile and podspec to strictly specify the one version that I would like to use.

So the *simplified* flow in my project without Podfile.lock would be:

1. Develop a feature; if it needs a change in a dependency version - just directly specify it in the Podfile, without ever committing the Podfile.lock
2. Merge the feature to the master branch; the CI then runs a pod install command with the new Podfile
3. Now the CI has all the correct versions of the pods it's using and can correctly build my app

Would the Podfile.lock be needed in that scenario?
Your `Podfile` specifies the versions of your direct dependencies, but it can allow a bit of ambiguity. For instance, maybe you need version 2.2 of a library, but you don't care if you get 2.2.1 or 2.2.2. The first time you do a `pod install`, Cocoapods will just get the latest version of 2.2.x. However, maybe 2.2.2 has a bug that you unknowingly depend on in your app. The maintainer releases 2.2.3, and if you haven't checked in your lock file one of your co-workers or maybe your CI system builds a crashing version of the app and causes all sorts of confusion. (It still works on your machine!) In addition to locking down the exact versions of your direct dependencies, locking down your transitive dependencies is just as important. In summary, `Podfile.lock` makes sure you don't accidentally upgrade the libraries you pull in, while keeping your `Podfile` concerned only with your direct dependencies.
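To make the 2.2.x example concrete (the pod name and versions are hypothetical):

```
# In the Podfile, this asks for "any 2.2.x":
pod 'SomeNetworkingLib', '~> 2.2.0'

# Podfile.lock then records what `pod install` actually resolved, e.g.:
#
#   PODS:
#     - SomeNetworkingLib (2.2.2)
#
# and everyone gets exactly 2.2.2 until someone deliberately runs `pod update`.
```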
Firebase function (written in NodeJS) to delete file from Cloud Storage when an object is removed from Realtime Database

I am new to NodeJS, and I am trying to write the following method for Cloud Functions for Firebase.

**What I am trying to achieve:**

1. The function should be triggered when the user removes a Photo object from the Firebase DB;
2. The code should remove the file object from Storage corresponding to the Photo object.

This is my Firebase DB structure:

*photos/{userUID}/{photoUID}*

```
{
   "dateCreated": "2017-07-27T16:40:31.000000Z",
   "isProfilePhoto": true,
   "isSafe": true,
   "uid": "{photoUID}",
   "userUID": "{userUID}"
}
```

And the Firebase Storage format:

*photos/{userUID}/{photoUID}.png*

And the NodeJS code that I am using:

```
const functions = require('firebase-functions')
const googleCloudStorage = require('@google-cloud/storage')({keyFilename: 'firebase_admin_sdk.json' })
const admin = require('firebase-admin')
const vision = require('@google-cloud/vision')();

admin.initializeApp(functions.config().firebase)

exports.sanitizePhoto = functions.database.ref('photos/{userUID}/{photoUID}')
  .onDelete(event => {
    let photoUID = event.data.key
    let userUID = event.data.ref.parent.key

    console.log(`userUID: ${userUID}, photoUID: ${photoUID}`);

    if (typeof photoUID === 'undefined' || typeof userUID === 'undefined') {
      console.error('Error while sanitize photo, user uid or photo uid are missing');
      return
    }

    console.log(`Deleting photo: ${photoUID}`)

    googleCloudStorage.bucket(`photos/${userUID}/${photoUID}.png`).delete().then(() => {
      console.log(`Successfully deleted photo with UID: ${photoUID}, userUID : ${userUID}`)
    }).catch(err => {
      console.log(`Failed to remove photo, error: ${err}`)
    });
  });
```

When I run it I get the following error: "ApiError: Not found"

[![enter image description here](https://i.stack.imgur.com/wlgI5.png)](https://i.stack.imgur.com/wlgI5.png)

I think this part of the code is the one that causes the issue:

```
googleCloudStorage.bucket(`photos/${userUID}/${photoUID}.png`).delete()
```

Thanks in advance for your support and patience.
Found the issue; here is the code that works for me:

```
const functions = require('firebase-functions');
// same storage client as in the question
const googleCloudStorage = require('@google-cloud/storage')({keyFilename: 'firebase_admin_sdk.json' });
```

Realtime Database:

```
exports.sanitizePhoto = functions.database.ref('photos/{userUID}/{photoUID}').onDelete(event => {
  let photoUID = event.data.key
  let userUID = event.data.ref.parent.key

  console.log(`userUID: ${userUID}, photoUID: ${photoUID}`);

  if (typeof photoUID === 'undefined' || typeof userUID === 'undefined') {
    console.error('Error while sanitize photo, user uid or photo uid are missing');
    return
  }

  console.log(`Deleting photo: ${photoUID}`)

  const filePath = `photos/${userUID}/${photoUID}.png`
  const bucket = googleCloudStorage.bucket('myBucket-12345.appspot.com')
  const file = bucket.file(filePath)

  file.delete().then(() => {
    console.log(`Successfully deleted photo with UID: ${photoUID}, userUID : ${userUID}`)
  }).catch(err => {
    console.log(`Failed to remove photo, error: ${err}`)
  });
});
```

Also here is the same code but for Firestore (not sure if it works, as I am not a NodeJS developer and didn't actually test it):

```
exports.sanitizePhoto = functions.firestore.document('users/{userUID}/photos/{photoUID}').onDelete((snap, context) => {
  const deletedValue = snap.data();

  let photoUID = context.params.photoUID
  let userUID = context.params.userUID

  console.log(`userUID: ${userUID}, photoUID: ${photoUID}`);

  if (typeof photoUID === 'undefined' || typeof userUID === 'undefined') {
    console.error('Error while sanitize photo, user uid or photo uid are missing');
    return
  }

  console.log(`Deleting photo: ${photoUID}`)

  const filePath = `photos/${userUID}/${photoUID}.png`
  const bucket = googleCloudStorage.bucket('myBucket-12345.appspot.com')
  const file = bucket.file(filePath)

  file.delete().then(() => {
    console.log(`Successfully deleted photo with UID: ${photoUID}, userUID : ${userUID}`)
  }).catch(err => {
    console.error(`Failed to remove photo, error: ${err}`)
  });
});
```

Also you can notice that my path changed from:

```
photos/{userUID}/{photoUID}
```

to:

```
users/{userUID}/photos/{photoUID}
```
Set RewriteBase to the current folder path dynamically

Is there any way to set RewriteBase to the path of the current folder (the folder the .htaccess file is in) relative to the host root?

I have a CMS, and if I move it to a directory on my host it does not work unless I set RewriteBase to the path of the directory relative to the root of the host. I would like my CMS to work with only copy and paste, without changing any code in the .htaccess.

## Update:

For example:

```
webroot
    - sub_directory
        - cms
            - .htaccess
```

In this case I should write in the .htaccess: `RewriteBase /`, and if I move the .htaccess inside sub_directory I should change RewriteBase to: `RewriteBase /sub_directory/`

So I want something like `RewriteBase /%{current_folder}/`
Here is one way one can grab the `RewriteBase` in an environment variable, which you can then use in your other rewrite rules:

```
RewriteCond %{REQUEST_URI}::$1 ^(.*?/)(.*)::\2$
RewriteRule ^(.*)$ - [E=BASE:%1]
```

Then you can use `%{ENV:BASE}` in your rules to denote `RewriteBase`, i.e.:

```
#redirect in-existent files/calls to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule . %{ENV:BASE}/index.php [L]
```

### Explanation:

This rule works by comparing the `REQUEST_URI` to the URL path that `RewriteRule` sees, which is the `REQUEST_URI` with the leading `RewriteBase` stripped away. The difference is the `RewriteBase` and is put into `%{ENV:BASE}`.

- In a `RewriteCond`, the LHS (test string) can use back-reference variables, e.g. `$1`, `$2` or `%1`, `%2` etc., but the RHS (the condition string) **cannot use** these `$1`, `$2` or `%1`, `%2` variables.
- Inside the RHS condition part, the only back-references we can use are **internal back-references**, i.e. the groups we have captured in this condition itself. They are denoted by `\1`, `\2` etc.
- In the `RewriteCond`, the first captured group is `(.*?/)`. It will be represented by the internal back-reference `\1`.
- As you can make out, this rule basically finds `RewriteBase` dynamically by comparing `%{REQUEST_URI}` and `$1`. An example of `%{REQUEST_URI}` would be `/directory/foobar.php`, and an example of `$1` for the same URI would be `foobar.php`. `^(.*?/)(.*)::\2$` puts the difference in the 1st captured group `%1` or `\1`. For our example it will populate `%1` and `\1` with the value `/directory/`, which is used later in setting up the env variable `%{ENV:BASE}` in `E=BASE:%1`.
MVVM and async tasks with lifetime longer than page's lifetime

The MVVM approach is nice and well established. However, picture the scene: you have an app page where the user can initiate some long-running task, like synchronization of local and remote databases. This task can be long and should only be interrupted gracefully. Then the user leaves the page by going to some details page. It doesn't make sense to cancel that long async operation, because the app is still running. But then suddenly the user receives a phone call, so the app is deactivated.

In my (maybe too primitive) understanding of MVVM, the View Model should be used to control interactions with the Model (that long operation particularly). But the View Model doesn't need to know about application lifetime events, since that would limit code reusability (on Windows 8 there's no such class as PhoneApplicationService). See a contradiction here? The VM initiates the operation, but should not be used to cancel it.

Of course, the View can take on the responsibility of handling lifetime events, so that the app-deactivation event propagates like this: `View -> ViewModel -> (cancels long operation) -> Model`. But if the user has navigated away from the View, and some of the operations initiated in that View are still running, there's no way of cancelling them anymore - the View can be disposed of at any time.

I've come up with only one idea: handling app lifetime events in View Models. But, as I said before, I dislike this approach, because it limits View Models' portability. Could anyone offer a better solution?
> > I've came up with only one idea, that is handling app lifetime events in View Models. But, as I said before, I dislike this approach, because it limits View Models' portability. Could anyone offer a better solution? > > > I actually do not see a problem here. In MVVM, the ViewModel is traditionally the "glue" that ties the View to the Model. Having a small amount of custom ViewModel code for each platform doesn't necessarily limit the portability of the rest of the ViewModel, especially if this is abstracted and contained within its own project for each platform. > > VM initiates operation, but should not be used to cancel it. > > > This strongly suggests that the VM *should* be the one to cancel it. If the VM creates these operations, it effectively has ownership of them, which suggests that it should manage their lifecycle, as well.
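As a sketch of what that might look like (all names here are illustrative, not a prescribed API): the portable ViewModel depends on a tiny lifetime abstraction, and each platform project wires its own events into it.

```
using System;
using System.Threading;

// Portable project: no platform types in sight.
public interface IAppLifetime
{
    event EventHandler Deactivating;
}

public class SyncViewModel
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    public SyncViewModel(IAppLifetime lifetime)
    {
        // The ViewModel owns the long-running operation, so it owns cancellation too,
        // without ever referencing PhoneApplicationService or its Windows 8 counterpart.
        lifetime.Deactivating += (s, e) => _cts.Cancel();
    }
}

// Each platform project then raises Deactivating from its own lifetime event,
// e.g. the phone glue subscribes to PhoneApplicationService.Deactivated.
```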
How is this sizeof expression evaluated, and why is it written that way?

I came across this code in an `std::optional` implementation:

```
template <class T, class U>
struct is_assignable
{
    template <class X, class Y>
    constexpr static bool has_assign(...) { return false; }

    template <class X, class Y, size_t S = sizeof((std::declval<X>() = std::declval<Y>(), true)) >
    // the comma operator is necessary for the cases where operator= returns void
    constexpr static bool has_assign(bool) { return true; }

    constexpr static bool value = has_assign<T, U>(true);
};
```

The part that I can't understand, in terms of how it works or how it is evaluated, is `size_t S = sizeof((std::declval<X>() = std::declval<Y>(), true))`.

I know that if the assign operation fails it will fall back to the first definition of has_assign that returns false, but I don't know why it has the `, true)` part.

I did some tests with structs that return void from the assignment operator, and removing the `, true` part in `sizeof` gives me the same results.
In order to apply `sizeof()`, you need a complete type. But returning a complete type isn't a requirement of assignability, hence:

```
sizeof((std::declval<X>() = std::declval<Y>(), true))
       ~~~~~~~~~~~~~~~~~~ expr ~~~~~~~~~~~~~~~~~~~~~
```

If the assignment is valid for those two types, then we have `sizeof(expr)` where the type of `expr` is `bool` (because of the `true`). So if the assignment is valid, we get some real size. Otherwise, substitution failure.

---

But this is an unnecessarily cryptic way of writing this code. Moreover, it's not even correct, because I could write a type like:

```
struct Evil {
    template <class T>
    Evil operator=(T&& ); // assignable from anything

    void operator,(bool); // mwahahaha
};
```

and now your `sizeof()` still doesn't work.

Instead, prefer simply:

```
class = decltype(std::declval<X>() = std::declval<Y>())
```

This accomplishes the same result - either substitution failure or not - without needing to care at all about what the type of the result is or having to handle special cases.
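Either way, exercising the trait looks like this (the two types are made up for illustration):

```
struct A {};
struct B { B& operator=(const A&); };

static_assert(is_assignable<B, A>::value,  "B = A is well-formed");
static_assert(!is_assignable<A, B>::value, "A = B is not");
```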
Can Redshift SQL perform a case insensitive regular expression evaluation?
The documentation says regexp\_instr() and ~ are a case-sensitive POSIX-evaluating function and operator. Is there a POSIX syntax for case-insensitive matching, or a plug-in for a PCRE-based function or operator?

Here is an example of PCRE tried in a Redshift query that doesn't work as desired because of its POSIX'ness.

```
select A.target , B.pattern , regexp_instr(A.target, B.pattern) as rx_instr_position , A.target ~ B.pattern as tilde_operator , regexp_instr(A.target , 'm/'||B.pattern||'/i') as rx_instr_position_icase from ( select 'AbCdEfffghi' as target union select 'Chocolate' as target union select 'Cocoa Latte' as target union select 'coca puffs, delivered late' as target ) A , ( select 'choc.*late' as pattern union select 'coca.*late' as pattern union select 'choc\w+late' as pattern union select 'choc\\w+late' as pattern ) B 
```
To answer your question: No Redshift-compatible syntax or plugins that I know of. In case you could live with a workaround: We ended up using `lower()` around the strings to match: ``` select A.target , B.pattern , regexp_instr(A.target, B.pattern) as rx_instr_position , A.target ~ B.pattern as tilde_operator , regexp_instr(A.target, 'm/'||B.pattern||'/i') as rx_instr_position_icase , regexp_instr(lower(A.target), B.pattern) as rx_instr_position_icase_by_lower from ( select 'AbCdEfffghi' as target union select 'Chocolate' as target union select 'Cocoa Latte' as target union select 'coca puffs, delivered late' as target ) A , ( select 'choc.*late' as pattern union select 'coca.*late' as pattern union select 'choc\w+late' as pattern union select 'choc\\w+late' as pattern ) B ```
If SQL variable exists
I'm writing some code that uses a significant quantity of dynamic SQL and, as a result, there are some instances when a third party application may or may not have a SQL variable declared. Is there a way to test whether a variable has been declared? Some pseudo code would be:

```
IF OBJECT_ID(N'@var' ) IS NOT NULL BEGIN DECLARE @var AS varchar(max) END ```

Also, is there a way to list all of the variables currently declared, like a *local watch* window?
No. T-SQL declares variables per-batch - they do not have any scoping. You need to ensure all the variables that are actually used are declared from the outside - by the time the batch runs, it's too late. It's not clear what you're trying to accomplish, and what kind of constraints you can impose on the code fragments. One alternative would be to use something other than variables - for example, a common table variable that would be used for all the other "pseudo-variables". Something like this: ``` declare @parameters table ( Id varchar(20), Value sql_variant ); -- This is the code generated by the 3rd party; update might be better than insert insert into @parameters values ('MyVar', 42); -- Using the variable - you get NULL or the actual value declare @MyVar int; select @MyVar = cast(Value as int) from @parameters where Id = 'MyVar'; ``` All the variables are then declared by your part of the code (or not at all), and the 3rd party only has an option to change them from their defaults. Of course, this may be entirely useless for your use-case - it's not clear what kind of scenarios you're actually expecting to occur. In general, slapping together pieces of T-SQL is tricky. Since there's no scoping, there's no way of preventing one fragment from destroying the whole batch. If you can afford any checks at all, they need to be on a different layer - and you may have to change the names of the variables in the fragments to avoid collisions. Another option might be to prohibit the 3rd party from declaring any variables at all, and instead requiring them to register a variable from your side - that would allow you to choose names that avoid conflicts.
Calling unsafe method using expression trees
I need to call an `unsafe` method that takes raw pointers. For that I need to construct an `Expression` that represents a pointer to the value represented by a `VariableExpression` or `ParameterExpression`. How do I do that?
My usual approach to `Expression` stuff is to get the C# compiler to build the `Expression` for me, with its wonderful lambda-parsing ability, then inspect what it makes in the debugger. However, with the scenario you describe, we run into a problem almost straight away: New project, set 'Allow unsafe' on. Method that takes raw pointers: ``` class MyClass { public unsafe int MyMethod(int* p) { return 0; } } ``` Code that builds an expression: ``` class Program { unsafe static void Main(string[] args) { var mi = typeof (MyClass).GetMethods().First(m => m.Name == "MyMethod"); int q = 5; Expression<Func<MyClass, int, int>> expr = (c, i) => c.MyMethod(&i); } } ``` My intent was to run this and see what `expr` looked like in the debugger; however, when I compiled I got > > error CS1944: An expression tree may not contain an unsafe pointer operation > > > Reviewing [the docs for this error](http://msdn.microsoft.com/en-us/library/bb546089%28v=vs.90%29.aspx), it looks like your "need to construct Expression that represents pointer to value" can never be satisfied: > > An expression tree may not contain an unsafe pointer operation > > > Expression trees do not support pointer types because the > `Expression<TDelegate>.Compile` method is only allowed to produce > verifiable code. See comments. [there do not appear to be any > comments!] > > > To correct this error > > > - Do not use pointer types when you are trying to create an expression tree. > > >
How to join result from two tables with same field into one field?
I have tables like this:

```
Table1            Table2
name1 | link_id   name2 | link_id
text    1         text    2
text    2         text    4
```

And I want this result:

```
name1 name2 link_id
text  text  1
text  text  2
text  text  4
```

How can I do this?

ADD: Sorry, my English is not good. I have device, device\_model and device\_type tables, each with a counter\_set\_id field. I want to select the values of counter\_set\_id from all of them; I need to fetch values only from the counter\_set\_id fields.

Now I have this query:

```
SELECT `dev`.`counter_set_id`, `mod`.`counter_set_id`, `type`.`counter_set_id`
FROM `device` AS `dev`
LEFT JOIN `device_model` AS `mod` ON `dev`.`device_model_id` = `mod`.`id`
LEFT JOIN `device_type` AS `type` ON `mod`.`device_type_id` = `type`.`id`
WHERE `dev`.`id` = 4;
```

This returns 3 columns, but I need all values in one column.

This is the final variant, I think:

```
SELECT `dev`.`counter_set_id`
FROM `device` AS `dev`
LEFT OUTER JOIN `device_model` AS `mod` ON `dev`.`device_model_id` = `mod`.`id`
WHERE `dev`.`id` = 4 AND `dev`.`counter_set_id` IS NOT NULL

UNION

SELECT `mod`.`counter_set_id`
FROM `device_model` AS `mod`
LEFT OUTER JOIN `device` AS `dev` ON `mod`.`id` = `dev`.`device_model_id`
WHERE `mod`.`counter_set_id` IS NOT NULL;
```
Based on the sample tables and desired output you provided, it sounds like you might want a FULL OUTER JOIN. Not all vendors implement this, but you can simulate it with a LEFT OUTER join and a UNION to an EXCEPTION join with the tables reversed like this: ``` Select name1, name2, A.link_id From table1 A Left Outer Join table2 B on A.link_id = B.link_id Union Select name1, name2, link_id From table2 C Exception Join table1 D on C.link_id = D.link_id ``` then your output would be like this: ``` NAME1 NAME2 LINK_ID ===== ===== ======= text <NULL> 1 text text 2 <NULL> text 4 ```
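If your database is MySQL (the backticks in your query suggest it is), note that MySQL has neither `FULL OUTER JOIN` nor DB2's `EXCEPTION JOIN`. A sketch of the same idea in portable SQL, replacing the exception join with a `LEFT OUTER JOIN ... IS NULL` filter:

```
Select name1, name2, A.link_id
From table1 A
Left Outer Join table2 B on A.link_id = B.link_id
Union
Select name1, name2, C.link_id
From table2 C
Left Outer Join table1 D on C.link_id = D.link_id
Where D.link_id Is Null
```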
How to preview qml documents referencing c++ types? I frequently use types defined at runtime in c++ in QML documents. It works well, but not with the design view in Qt Creator or with the external preview tools. For example, in c++: ``` qmlRegisterType<CustomVideoSource>("MyModule", 1, 0, "CustomVideoSource"); ``` And in QML: ``` CustomVideoSource { id: customSource } VideoOutput { source: customSource; anchors.fill: ... } ``` The "qmlscene" external preview tool quits with the error 'module "MyModule" is not installed'. The design view is usable as a preview in simple cases, unusable in complex cases, but in any case slow and I can't edit code and see the preview at the same time. I'm aware of the "dummy context" concept but 1) don't see how it applies in this case and 2) have never had much luck getting it to actually work in other cases when it should. Does anyone have a good workflow? Maybe I shouldn't be doing things this way at all? BTW, I'm aware of the Qt forums and I'll probably ask there, too. If I can catch them when they're not down/broken. Update/clarification: I'm aware of the options for implementing an extension to QML in C++. My question is not about the mechanics of doing so but about how best to deal with the situation above, e.g., I'd like to register a type at runtime but still have quick previews for UI work. I'm considering doing a fake plugin purely for preview purposes and passing it via -I to qmlscene. Also modifying qmlscene itself.
This is the strategy I ended up with, which has worked well so far: 1. To address the "MyModule" issue, I created a qml module with a qml/javascript dummy implementation of MyModule in a "dummyModules" subdirectory. I run the qmlscene preview tool with "-I dummyModules" to make them visible for prototyping. The modules are, of course, not included in release distributions. 2. I also have several context properties set from C++. To make these work I use the "dummydata" feature of qmlscene. 3. I modified (hacked) qmlscene to reload the scene on ctrl-r. This 1) solves the "MyModule" problem without creating a c++ plugin (impractical in my case) and 2) gives me side-by-side editing and previews. I suspect this may work well with the Creator "Design" module, too. IIRC it uses an external program called "qmlpuppet" which is probably similar to qmlscene. But I haven't tried it.
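For step 1, a minimal sketch of what such a dummy module could look like for the `MyModule` example from the question (the file names and contents here are assumptions; the dummy only has to satisfy the import and expose whatever properties the UI actually touches):

```
# dummyModules/MyModule/qmldir
module MyModule
CustomVideoSource 1.0 CustomVideoSource.qml
```

```
// dummyModules/MyModule/CustomVideoSource.qml
import QtQuick 2.0

// Stand-in for the C++ type registered with qmlRegisterType;
// add dummy properties/signals as the UI needs them.
Item { }
```

Previewing is then `qmlscene -I dummyModules Main.qml`, so the import of `MyModule` resolves to the dummy instead of failing.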
Is there a reason why padding adds to the size of an element? I was very surprised when I found that a `<div>` with a size of - say - 200px becomes 220px wide if you give it 10px padding. It just makes no sense to me, the external size should not change when an internal setting does. It forces you to adjust the size every time you tweak the padding. Am I doing something wrong, or is there a reason for this behavior? EDIT: I know this is how it's supposed to work, my question is why? Is it logical in a way I don't understand? Does this give any advantage over the opposite approach of keeping size and padding separate?
There are two different so-called "box models": one adds the padding (and border) to the specified `width`, while the other does not. With the advent of CSS3, you can luckily switch between the two models. More precisely, the behaviour you are looking for can be achieved by specifying

```
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
-ms-box-sizing: border-box;
box-sizing: border-box;

width: 200px;
```

in your div's CSS. Then, in modern browsers, the div will always stay 200 px wide no matter what. For further details and a list of supported browsers, see [this guide](http://www.quirksmode.org/css/box.html).

**Edit:** WRT your edit as to *why* the traditional box model is as it is, Wikipedia actually [offers some insight](http://en.wikipedia.org/wiki/Internet_Explorer_box_model_bug#Background):

> 
> Before HTML 4 and CSS, very few HTML elements supported both border and padding, so the definition of the width and height of an element was not very contentious. However, it varied depending on the element. The HTML width attribute of a table defined the width of the table including its border. On the other hand, the HTML width attribute of an image defined the width of the image itself (inside any border). The only element to support padding in those early days was the table cell. Width for the cell was defined as "the suggested width for a cell content in pixels excluding the cell padding."
> 
> 
> CSS introduced margin, border and padding for many more elements. It adopted a definition of width in relation to content, border, margin and padding similar to that for a table cell. This has since become known as the W3C box model.
> 
> 
Computing `AB⁻¹` with `np.linalg.solve()` I need to compute `AB⁻¹` in Python / Numpy for two matrices `A` and `B` (`B` being square, of course). I know that `np.linalg.inv()` would allow me to compute `B⁻¹`, which I can then multiply with `A`. I also know that `B⁻¹A` is actually [better](https://stackoverflow.com/questions/31256252/why-does-numpy-linalg-solve-offer-more-precise-matrix-inversions-than-numpy-li) computed with `np.linalg.solve()`. Inspired by that, I decided to rewrite `AB⁻¹` in terms of `np.linalg.solve()`. I got to a formula, based on the [identity](https://en.wikipedia.org/wiki/Transpose) `(AB)ᵀ = BᵀAᵀ`, which uses `np.linalg.solve()` and `.transpose()`: ``` np.linalg.solve(a.transpose(), b.transpose()).transpose() ``` that seems to be doing the job: ``` import numpy as np n, m = 4, 2 np.random.seed(0) a = np.random.random((n, n)) b = np.random.random((m, n)) print(np.matmul(b, np.linalg.inv(a))) # [[ 2.87169378 -0.04207382 -1.10553758 -0.83200471] # [-1.08733434 1.00110176 0.79683577 0.67487591]] print(np.linalg.solve(a.transpose(), b.transpose()).transpose()) # [[ 2.87169378 -0.04207382 -1.10553758 -0.83200471] # [-1.08733434 1.00110176 0.79683577 0.67487591]] print(np.all(np.isclose(np.matmul(b, np.linalg.inv(a)), np.linalg.solve(a.transpose(), b.transpose()).transpose()))) # True ``` and also comes up much faster for sufficiently large inputs: ``` n, m = 400, 200 np.random.seed(0) a = np.random.random((n, n)) b = np.random.random((m, n)) print(np.all(np.isclose(np.matmul(b, np.linalg.inv(a)), np.linalg.solve(a.transpose(), b.transpose()).transpose()))) # True %timeit np.matmul(b, np.linalg.inv(a)) # 100 loops, best of 3: 13.3 ms per loop %timeit np.linalg.solve(a.transpose(), b.transpose()).transpose() # 100 loops, best of 3: 7.71 ms per loop ``` --- My question is: does this identity **always** stand correct *or* there are some corner cases I am overlooking?
In general, [`np.linalg.solve(B, A)`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.solve.html) is equivalent to `B⁻¹A`. The rest is just math.

In all cases, `(AB)ᵀ = BᵀAᵀ`: <https://math.stackexchange.com/q/1440305/295281>.

Not necessary for this case, but for invertible matrices, `(AB)⁻¹ = B⁻¹A⁻¹`: <https://math.stackexchange.com/q/688339/295281>.

For an invertible matrix, it is also the case that `(A⁻¹)ᵀ = (Aᵀ)⁻¹`: <https://math.stackexchange.com/q/340233/295281>.

From that it follows that `(AB⁻¹)ᵀ = (B⁻¹)ᵀAᵀ = (Bᵀ)⁻¹Aᵀ`. As long as `B` is invertible, you should have no issues with the transformation you propose in any case.
Bash will not auto-complete (Tab) with files Auto-completion in bash (with Tab) works on folders, but **not** with files. - I'm running Ubuntu 13.10 - I haven't touched my *.bashrc* file This is how the bottom part of the *.bashrc* file looks, with the *bash-completion* part installed and updated: ``` # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). if ! shopt -oq posix; then if [ -f /usr/share/bash-completion/bash_completion ]; then . /usr/share/bash-completion/bash_completion elif [ -f /etc/bash_completion ]; then . /etc/bash_completion fi fi ``` Any ideas?
The third party "bash\_completion" package (not to be confused with bash or its native completion) can sometimes be hard to predict.

1. Some commands are specifically set up to never complete files, like `cd`
2. Some commands will refuse to complete certain filenames, because bash\_completion doesn't realize the program handles them, like `mplayer`.
3. Some commands are just buggy, especially when paths contain spaces and other characters, like for `scp`.

If you're ever in a situation where bash\_completion isn't being helpful, you can use `M-/` (aka `Alt + /`) to use bash's native filename completion instead.

If a command is frequently giving you trouble, you can disable bash\_completion for this command using `complete -r thatcommand` at the end of your `.bashrc`.
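For example, to turn off a troublesome handler such as `scp`'s, a sketch for the end of `~/.bashrc` (the `2>/dev/null` is a guard in case no spec is loaded yet, since newer bash\_completion versions load completions lazily):

```
# Remove bash_completion's handler for scp; bash's plain filename completion remains
complete -r scp 2>/dev/null
```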
How to translate emails when sending them asynchronously with Symfony Messenger?
I configured the Symfony mailer to send emails with messenger. <https://symfony.com/doc/current/mailer.html#sending-messages-async>

I have my emails in two languages and I rely on the request to detect the language, but now the emails are not translated. How can I get the messages to be translated into the language detected in the request?

In my controller:

```
$mailer->send( $user->email, $this->translator->trans('mails.recover.subject'), 'email/client/password-recovery.html.twig', compact('user', 'hash', 'target') );
```

Template:

```
{% extends 'email/base.html.twig' %} {% block content %} <h2>{{ 'mails.recover.header' | trans({'%name%': user.name}) }}</h2> <p style="margin: 25px 0;"> {{ 'mails.recover.text1' | trans({'%url%': url('default')}) | raw }} </p> // More code 
```

Messenger config:

```
framework: messenger: # Uncomment this (and the failed transport below) to send failed messages to this transport for later handling. # failure_transport: failed transports: # https://symfony.com/doc/current/messenger.html#transport-configuration async: '%env(MESSENGER_TRANSPORT_DSN)%' # failed: 'doctrine://default?queue_name=failed' # sync: 'sync://' routing: # Route your messages to the transports # 'App\Message\YourMessage': async 'Symfony\Component\Mailer\Messenger\SendEmailMessage': async 
```

*Looking closer: the subject of the mail is translated correctly, but the body of the mail is not.*

If I remove the line

```
'Symfony\Component\Mailer\Messenger\SendEmailMessage': async 
```

in the messenger config, translation works.
The problem you have is that the Symfony Translator component gets the user's locale from the incoming **request**, and when sending your mails asynchronously, by the time the mail is actually sent the request is long finished and gone; the context is then the message consumer (command line), where there is no request locale information.

There are two solutions for this:

### First Option:

You pass the values **already translated** to the template (which is what you are doing with the email subject). E.g. something like this:

```
$mailer->send(
    $user->email,
    $this->translator->trans('mails.recover.subject'),
    'email/client/password-recovery.html.twig',
    [
        'user'   => $user,
        'hash'   => $hash,
        'target' => $target,
        'texts'  => [
            'header' => $this->translator
                ->trans('mails.recover.header', ['%name%' => $user->getName()]),
            'text1'  => $this->translator
                ->trans('mails.recover.text1', ['%url%' => $defaultUrl])
        ]
    ]
);
```

And then in your template you use the values directly:

```
{% extends 'email/base.html.twig' %}

{% block content %}
    <h2>{{ texts.header }}</h2>
    <p style="margin: 25px 0;">{{ texts.text1 }}</p>
{% endblock %}
```

This would be my preferred approach, since it makes the template as dumb as possible and easy to reuse in different contexts. The templates themselves do not need to know anything not pertaining to the actual rendering of their content.

### Second Option:

Let the templating system know what user locale you want to translate to:

```
$mailer->send(
    $user->email,
    $this->translator->trans('mails.recover.subject'),
    'email/client/password-recovery.html.twig',
    [
        'user'   => $user,
        'hash'   => $hash,
        'target' => $target,
        'requestLocale' => $locale // get the locale from the request
                                   // (https://symfony.com/doc/current/translation/locale.html)
    ]
);
```

Then you use the received locale in the filter you are using, as described [here](https://symfony.com/doc/current/translation/templates.html#using-twig-filters):

```
<h2>{{ 'mails.recover.header' | trans({'%name%': user.name}, 'app', requestLocale) }}</h2>
```

---

While I prefer the first one, playing with either option should let you get your desired results.
What's wrong in creating/printing this array?
To find the average time taken to create certain files, I'm using this minutes array; then I would simply use bash arithmetic to find the average. However, I'm unable to get the difference except for the first pair of elements. Here `l` is meant to be the array of differences between elements `i+1` and `i`; what's wrong?

```
MMarray=(`ls -lrt /some/location/ |tail -57|head -55|tr -s " "|cut -d" " -f8|cut -c4,5`)
arrLen=`echo ${#MMarray[@]}`
for((i=0;i<$arrLen;i++))
do
   x=$(($i+1))
   j=${MMarray[x]#0}
   k=${MMarray[i]#0}
   l=($(($j-$k)))
   echo ${l[$i]}
done
```

Also, how would `02-59` subtraction be handled?
If you know times are not much over one hour, you can simply add 60 if the result is negative. However, this is not the way to do it anyway. There are several points I'd like to raise:

1. NEVER parse the output of ls, especially not the time part. It depends on the locale and can give completely unpredictable results.
2. Why assume times are around an hour and just look at the minutes, if you can simply subtract dates? Use the unix timestamp - the number of seconds since the beginning of the world (1970). That's the standard way of storing time.
3. Just write `let i++` and `let l=j-k`, don't use millions of parentheses
4. You are making `$l` an array of length 1, and then accessing its `$i` th member. This is why it only works for the first pair.
5. You are using a very C-like programming style. Use a loop of the form `for i in "${MMarray[@]}"; do...` and just save the previous element.
6. You are using too many arrays and loops; do everything in a stream of dates, it will be much better. So pretty much everything can be rewritten.
7. You do realize that the average time is simply the last minus the first, divided by the number of files minus 1 (the number of differences)? You don't need to subtract and then add together again...

What to do if you want to get all the time differences in seconds, one per line:

```
find /some/location -type f -name '*your filter which files you want*' -printf '%T@\n' | sort -n | awk 'NR>1 { print $1-previous } {previous = $1}' 
```

What does it do? `find` finds the files and prints the unix timestamps. If you don't specify `-name` it lists everything. `-type f` means only files, no directories. `sort -n` sorts the times numerically. `awk` simply computes and prints differences.

For the average, just keep the first and the last date. For instance, for all the files in the current directory:

```
find . -type f -printf '%T@\n' | sort -n | awk 'NR==1 { first = $1 } END{ print ($1-first)/(NR-1) }' 
```

The results are decimal numbers in seconds, but you can easily just write `int(($1-first)/(NR-1))` for an integer or `int(($1-first)/(NR-1))/60` for minutes.
What is best way to perform jQuery .change() I have 2 separate scripts that essentially do the same thing. I built them over time and just discovered I am using a couple different means to get to the same result. I want to standardize and use the best practices method in both cases. One way I test a change event is this: ``` $('input[name="status"]').change(function() {}); ``` Another way I am testing a change event is this: ``` $("#email").bind("change", function(e) {}); ``` Which way is best? What is the difference between the 2? Thanks for helping me understand this.
Before jQuery 1.7, `change()` was [simply a short cut](https://github.com/jquery/jquery/blob/1.6.4/src/event.js#L1178-1197) for `bind("change")`. As of 1.7 however, [`on()`](https://github.com/jquery/jquery/blob/1.7.2/src/event.js#L974-976) was introduced, and is preferred to `bind()`. That now means `change()` is a shortcut for `on("change")`, and in fact all `bind()` calls will now call [`on()`](https://github.com/jquery/jquery/blob/1.7.2/src/event.js#L974-976) internally. In short, they do the same thing. I find the explicit use of `on()` (or `bind()`) preferable, but as long as you're consistent throughout your code base, I don't see any real differences. One could argue that using `change()` over `on("change")` is "better", as a typo in the word "change" would throw a parse error in the first instance ("undefined is not a function"), but would fail silently with `on()`... but obviously your unit tests would catch that, right? ;).
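To make the equivalence concrete, these three registrations (on jQuery 1.7+) all attach the same handler:

```
// preferred since jQuery 1.7
$('input[name="status"]').on('change', function(e) { /* ... */ });

// shortcut for on("change")
$('input[name="status"]').change(function(e) { /* ... */ });

// older style; forwards to on() internally as of 1.7
$('input[name="status"]').bind('change', function(e) { /* ... */ });
```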
javascript function modification
I am trying to write a logger object which logs messages to the screen. Here is my code:

<http://github.com/huseyinyilmaz/javascript-logger>

In every function that needs to log something, I am writing loggerstart and loggerEnd calls at the start and end of my functions. But I want that code to run automatically for every function. Is there a way to modify the Function prototype so that every function call runs it automatically? (I am not using any javascript framework.)
`EDIT: Rewritten the function to make it more modular`

Well, this is a creepy way to do it, but I use this way sometimes when I need to override some functions. It works well, allows any kind of customization, and is easy to understand (still creepy). However, you will need to have all your functions stored in some kind of global object. See the example for details.

```
// Calls func with up to 20 positional arguments (a pre-apply() era trick;
// func.apply(null, fp) would achieve the same).
function dynamic_call_params(func, fp) {
    return func(fp[0],fp[1],fp[2],fp[3],fp[4],fp[5],fp[6],fp[7],fp[8],fp[9],fp[10],fp[11],fp[12],fp[13],fp[14],fp[15],fp[16],fp[17],fp[18],fp[19]);
}

// Saves the original function under a prefixed name, then replaces it
// with a closure that routes every call through wrapperFunction.
function attachWrapperToFunc(object, funcName, wrapperFunction) {
    object["_original_function_"+funcName] = object[funcName];
    object[funcName] = function() {
        return wrapperFunction(object, object["_original_function_"+funcName], funcName, arguments);
    }
}

// Wraps every function-valued member of object.
function attachWrapperToObject(object, wrapperFunction) {
    for (varname in object) {
        if (typeof(object[varname]) == "function") {
            attachWrapperToFunc(object, varname, wrapperFunction);
        }
    }
}
```

And some usage example:

```
var myProgram = new Object();

myProgram.function_one = function(a,b,c,d) {
    alert(a+b+c+d);
}
myProgram.function_two = function(a,b) {
    alert(a*b);
}
myProgram.function_three = function(a) {
    alert(a);
}

function loggerWrapperFunction(functionObject, origFunction, origFunctionName, origParams) {
    alert("start: "+origFunctionName);
    var result = dynamic_call_params(origFunction, origParams);
    alert("end: "+origFunctionName);
    return result;
}

attachWrapperToObject(myProgram,loggerWrapperFunction);

myProgram.function_one(1,2,3,4);
myProgram.function_two(2,3);
myProgram.function_three(5);
```

Output will be: `start,10,end,start,6,end,start,5,end`

So generally it allows you to wrap each function in some object automatically with a custom written wrapper function.
Does `lm` return `model` for reasons other than `predict` `lm` sets `model = TRUE` by default, meaning the entire dataset used for learning is copied and returned with the fitted object. This is used by `predict` but creates memory overhead (example below). I am wondering, is the copied dataset used for any reason other than `predict`? Not essential to answer, but I'd also like to know of models that store data for reasons other than `predict`. ### Example ``` object.size(lm(mpg ~ ., mtcars)) #> 45768 bytes object.size(lm(mpg ~ ., mtcars, model = FALSE)) #> 28152 bytes ``` Bigger dataset = bigger overhead. ### Motivation To share my motivation, the [twidlr](https://github.com/drsimonj/twidlr) package forces users to provide data when using `predict`. If this makes copying the dataset when learning unnecessary, it seems reasonable to save memory by defaulting to `model = FALSE`. I've opened a relevant issue [here](https://github.com/drsimonj/twidlr/issues/27). A secondary motivation - you can easily fit many models like `lm` with [pipelearner](https://github.com/drsimonj/pipelearner), but copying data each time creates massive overhead. So finding ways to cut down memory needs would be very handy!
I think the model frame is returned as a protection against non-standard evaluation. Let's look at a small example.

```
dat <- data.frame(x = runif(10), y = rnorm(10))
FIT <- lm(y ~ x, data = dat)
fit <- FIT; fit$model <- NULL
```

What is the difference between

```
model.frame(FIT)
model.frame(fit)
```

? Checking `methods(model.frame)` and `stats:::model.frame.lm` shows that in the first case, the model frame is efficiently extracted from `FIT$model`, while in the second case it will be reconstructed from `fit$call` by `model.frame.default`. Such a difference also results in a difference between

```
# depends on `model.frame`
model.matrix(FIT)
model.matrix(fit)
```

as the model matrix is built from a model frame. If we dig further, we will see that these are different, too:

```
# depends on `model.matrix`
predict(FIT)
predict(fit)

# depends on `predict.lm`
plot(FIT)
plot(fit)
```

Note that this is where the problem could be. If we deliberately remove `dat`, we cannot reconstruct the model frame, so all of these will fail:

```
rm(dat)
model.frame(fit)
model.matrix(fit)
predict(fit)
plot(fit)
```

while using `FIT` will work.

---

And it can get worse. The following example under non-standard evaluation is really bad!

```
fitting <- function (myformula, mydata, keep.mf = FALSE) {
   b <- lm(formula = myformula, data = mydata, model = keep.mf)
   par(mfrow = c(2,2))
   plot(b)
   predict(b)
   }
```

Now let's create a data frame again (we removed it earlier)

```
dat <- data.frame(x = runif(10), y = rnorm(10))
```

Can you see that

```
fitting(y ~ x, dat, keep.mf = TRUE)
```

works but

```
fitting(y ~ x, dat, keep.mf = FALSE)
```

fails?

Here is a question I answered / investigated a year ago: [R - model.frame() and non-standard evaluation](https://stackoverflow.com/q/37364571/4891738) It was asked about the `survival` package. That example is really extreme: even if we provide `newdata`, we would still get an error. Retaining the model frame is the only way to proceed!

---

Finally, on your observation of memory costs: in fact, `$model` is not the main contributor to a potentially large `lm` object. `$qr` is, as it has the same dimensions as the model matrix. Consider a model with lots of factors, or nonlinear terms like `bs`, `ns` or `poly`: the model frame is much smaller compared with the model matrix. So omitting the model frame from the return value does not help reduce the `lm` object size. This is actually one motivation behind the development of `biglm`.

---

Since I inevitably mentioned `biglm`, I would emphasize again that this method only helps reduce the final model object size, not RAM usage during model fitting.
Access webcam using OpenCV (Python) in Docker? I'm trying to use Docker for one of our projects which uses OpenCV to process webcam feed (Python). But I can't seem to get access to the webcam within docker, here's the code which I use to test webcam access: ``` python -c "import cv2;print(cv2.VideoCapture(0).isOpened())" ``` And here's what I tried so far, ``` docker run --device=/dev/video0 -it rec bash docker run --privileged --device=/dev/video0 -it rec bash sudo docker run --privileged --device=/dev/video0:/dev/video0 -it rec bash ``` All of these return `False`, what am I doing wrong?
The Dockerfile in the link you provided doesn't specify how opencv was installed; can you provide the Dockerfile you used, or say how you installed opencv? VideoCapture(0) won't work if you install opencv via pip. You're using `--device=/dev/video0:/dev/video0` correctly.

EDIT: This thread is more than five years old. Perhaps things have changed, but I can confirm that installing opencv via pip works fine. I stumbled upon this thread because I was facing a similar issue where my app could not access the camera when running inside a docker container. In my case, the issue was with the user account I was using to run the Python application inside the container. I had the following lines in my Dockerfile:

```
RUN adduser -u 5678 --disabled-password --gecos "" appuser && \
    chown -R appuser /app
USER appuser
```

Adding `appuser` to the `video` group as shown below fixed the issue:

```
RUN adduser -u 5678 --disabled-password --gecos "" appuser && \
    adduser appuser video && \
    chown -R appuser /app
USER appuser
```

By the way, there is no need for the `--privileged` flag in this case. Keep in mind that my setup runs on devices running a Ubuntu/Debian-based Linux OS.
Random restarts caused by a machine check exception
My laptop restarts randomly about twice a day. It shows the following error log before the restart.

![error log](https://i.stack.imgur.com/53vgG.jpg)

Unfortunately I have no idea how to decode the Machine Check Exception (MCE). `mcelog --ascii` outputs nothing. Is there a chance that this is a software problem?

The laptop is a Samsung NP900X3C with the Intel Core i5-3317U processor. I use Arch Linux with the 3.13.5 kernel.
This issue has to do with a hardware failure; specifically, it looks like the memory in bank 4 (DIMM 4, I would assume) is faulty. The [MCE facility (Machine Check Events)](http://www.mcelog.org/) is not widely known about, but I've answered several questions on the site related to it.

- [Does kernel: EDAC MC0: UE page 0x0 point to bad memory, a driver, or something else?](https://unix.stackexchange.com/questions/83257/does-kernel-edac-mc0-ue-page-0x0-point-to-bad-memory-a-driver-or-something-e)
- [OS errors : kernel: EDAC k8 MC0: extended error code: ECC chipkill x4 [duplicate]](https://unix.stackexchange.com/questions/91714/os-errors-kernel-edac-k8-mc0-extended-error-code-ecc-chipkill-x4/91719#91719)

Additionally you can write your own rules for MCE in this U&L Q&A titled: [Writing triggers for mcelog](https://unix.stackexchange.com/questions/76307/writing-triggers-for-mcelog).

Also, if you go through the MCE's FAQ, item #6 shows you how to make use of the `mcelog --ascii` command, titled: [How do I "run through mcelog --ascii"?](http://www.mcelog.org/faq.html#6). Basically you're supposed to save the panic message in a text file and then run it through the `mcelog` command like so:

```
$ mcelog --ascii < file
```

### How can I fix this?

*Option #1*

You essentially have 3 options. I won't go into describing the first, which is to replace the RAM DIMM in slot 4.

*Option #2*

The second option would be to further diagnose the issue and confirm that it's actually a faulty DIMM. You can use [memtest86+](http://www.memtest.org/) to do this. Along with performing this test, I would also take a minute and re-seat the DIMMs to make sure they're making good contact within their slots on your motherboard, if you feel comfortable doing such a thing. It's actually quite trivial to do this.

*Option #3*

The third option would be to attempt to blacklist the location, assuming it's isolated to a specific couple of addresses within the DIMM. Believe it or not, you can actually blacklist specific memory addresses. I've also explained how to do this on this site as well, titled: [How to blacklist a correct bad RAM sector according to MemTest86+ error indication?](https://unix.stackexchange.com/questions/75059/how-to-blacklist-a-correct-bad-ram-sector-according-to-memtest86-error-indicati).
How can I make TMUX be active whenever I start a new shell session? Instead of having to type `tmux` every time, **how could I have `tmux` always be used for new session windows**? So if I have no terminal windows open and then I open one, how can that first session be in `tmux`? Seems like a `.bashrc` sort of thing perhaps?
*warning* **this can now 'corrupt' (make it unable to open a terminal window - which is not good!) your Ubuntu logins. Use with extreme caution and make sure you have a second admin account on the computer that you can log into in case you have the same problems I did. See my *other* answer for more details and a different approach.**

Given that warning, the simplest solution can be to append the `tmux` invocation to the end of your `.bashrc`, e.g.

```
alias g="grep"
alias ls="ls --color=auto"
# ...other stuff...

if [[ ! $TERM =~ screen ]]; then
    exec tmux
fi
```

Note that the `exec` means that the bash process which starts when you open the terminal is *replaced* by `tmux`, so `Ctrl-B D` (i.e. disconnect from tmux) actually closes the window, instead of returning to the original bash process, which is probably the behaviour you want?

Also, the `if` statement is required (it detects if the current bash window is in a tmux process already); otherwise each time you start tmux, the contained bash process will attempt to start its own tmux session, leading to an infinite number of nested tmuxen which can be, err, quite annoying (that said, it looks cool).

---

However, there is a very small risk this can make `bash` behave in a way that other programs don't expect, since running bash can possibly cause it to turn into a tmux process, so it might be better to modify how you start your terminal emulator. I use a small executable shell script `~/bin/terminal` (with `~/bin` in `$PATH`, so it is found automatically) that looks a bit like:

```
#!/bin/sh
exec gnome-terminal -e tmux
```

(I don't use gnome-terminal, so you might have to remove the `exec`, I'm not sure.)

Now whenever you run the `terminal` script you have a terminal with tmux. You can add this to your menu/desktop/keyboard shortcuts to replace the default terminal. (This approach also allows you to more easily customise other things about the terminal emulator later, if you ever desire.)
How to inject javascript into Chrome DevTools itself
Ok, so just the other day I learned that you can inspect the devtools if it is in its own window (explained [here](https://stackoverflow.com/questions/12291138/how-do-you-inspect-the-web-inspector-in-chrome)). I also learned that you can style the devtools with your own css by editing the Custom.css file in your profile on your computer (more on that [here](http://darcyclarke.me/design/skin-your-chrome-inspector/)). What I want to do is not only add css, but also javascript, via a chrome extension. I am very aware of [devtools pages](http://developer.chrome.com/extensions/devtools.html), but those do not do what I want. Pretty much I want to get a content script to run on the devtools inspector itself. I found one extension that does exactly this, but for the life of me I have not been able to replicate it (even when copy-pasting the code!!). The extension is the "Discover DevTools Companion extension" from Code School ([on the webstore](https://chrome.google.com/webstore/detail/discover-devtools-compani/angkfkebojeancgemegoedelbnjgcgme)). They even [explain how it works](http://discover-devtools.codeschool.com/faq#extension-access), but I still have had no luck. That was the only extension I have found that does what I want. So I guess what I'm really asking is if it's just me who cannot get it to work, or if others who try are having trouble too.
Usually, you cannot create a Chrome extension which injects code in a devtools page. The "Discover DevTools Companion" extension (from now on referred to as DDC) ~~is allowed to do this, because this extension is [whitelisted in the source code of Chromium](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/common/extensions/extension.cc&q=angkfkebojeancgemegoedelbnjgcgme&sq=package:chromium&type=cs&l=92):~~ (this is no longer the case)

```
// Whitelist "Discover DevTools Companion" extension from Google that
// needs the ability to script DevTools pages. Companion will assist
// online courses and will be needed while the online educational programs
// are in place.
scripting_whitelist_.push_back("angkfkebojeancgemegoedelbnjgcgme");
```

If you want to publish an extension in the Chrome Web Store with these capabilities, give up. If you want to create such an extension for personal / internal use, read further.

# Method 1: Impersonate ~~the DDC~~ a whitelisted extension

The **easiest** way to create an extension with such permissions is to create an extension with the extension ID of a whitelisted extension (e.g. ChromeVox). This is achieved by copying the [`"key"`](https://developer.chrome.com/extensions/manifest/key) key of its manifest file to your extension's manifest (see also: [How to get the key?](https://stackoverflow.com/questions/21497781/how-to-change-chrome-packaged-app-id-or-why-do-we-need-key-field-in-the-manifest/21500707#21500707)). This is a minimal example:

### `manifest.json`

```
{
  // WARNING: Do NOT load this extension if you use ChromeVox!
  // WARNING: Do NOT load this extension if you use ChromeVox!
  // WARNING: This is a REALLY BIG HAMMER.
  "content_scripts": [{
      "js": [ "run_as_devtools.js" ],
      "matches": [ "<all_urls>" ]
  }],
  // This is the key for kgejglhpjiefppelpmljglcjbhoiplfn (ChromeVox)
  "key": "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEGBi/oD7Yl/Y16w3+gee/95/EUpRZ2U6c+8orV5ei+3CRsBsoXI/DPGBauZ3rWQ47aQnfoG00sXigFdJA2NhNK9OgmRA2evnsRRbjYm2BG1twpaLsgQPPus3PyczbDCvhFu8k24wzFyEtxLrfxAGBseBPb9QrCz7B4k2QgxD/CwIDAQAB",
  "manifest_version": 2,
  "name": "Elevated Devtools extension",
  "version": "1.0"
}
```

### `run_as_devtools.js`

```
if (location.protocol === 'chrome-devtools:') (function() {
    'use strict';
    // Whatever you want to do with the devtools.
})();
```

Note: This method is truly a hack. Since the extension shares the same ID as [ChromeVox](https://chrome.google.com/webstore/detail/chromevox/kgejglhpjiefppelpmljglcjbhoiplfn), both extensions cannot co-exist. And if Chrome decides to remove the whitelisted extension, then your permissions will evaporate.

Instead of filtering via the content script, you can also use the [`include_globs` key](https://developer.chrome.com/extensions/content_scripts#include_globs) to restrict the content script to devtools only.

# Method 2: Modify `resources.pak`

I suggest going with method 1 if possible. When method 1 fails (e.g. because the extension is no longer whitelisted), use the next method.

1. Get `paktools.py`, `unpack.py` and `pack.py` from [DennisKehrig/patch\_devtools](https://github.com/DennisKehrig/patch_devtools) (on Github).
2. Locate your Chrome directory containing `resources.pak`.
3. Run `python2 unpack.py resources.pak`, which creates a directory `resources` containing all files (all file names are numbers).
4. Locate the file containing a script which runs in the context of the developer tools. Add your desired code there.
5. Remove `resources.pak`
6. Run `python2 pack.py resources` to create the new `resources.pak` file.

Note: `resources.pak` may be replaced when Chrome is updated, so I suggest creating a script which automates my described algorithm. That shouldn't be too difficult. If you're interested, you can look up the `.pak` file format in [`ui/base/resource/data_pack_literal.cc`](https://code.google.com/p/chromium/codesearch#search/&q=ui/base/resource/data_pack_literal.cc&sq=package:chromium&type=cs) ([description in human language](https://groups.google.com/a/chromium.org/forum/#!msg/chromium-dev/agGjTt4Dmcw/uekdoo_cAxkJ)).
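A sketch of that automation (paths are assumptions; run it from the directory that contains `resources.pak` together with the unpack/pack helpers):

```
#!/bin/sh
# Re-pack resources.pak after patching a devtools script.
python2 unpack.py resources.pak    # unpacks into ./resources/
# ...apply your edits to the numbered file that holds the devtools code,
# e.g. with patch(1) so the step survives Chrome updates...
rm resources.pak
python2 pack.py resources          # writes a fresh resources.pak
```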
Avoid else condition in 'When' in kotlin
As per the documentation of [When](https://kotlinlang.org/docs/reference/control-flow.html#when-expression) in Kotlin, else is not mandatory if the compiler knows all the values are covered. This works for enums or sealed classes, but how do you do it for ints from 1 to 5 (startRating)?

```
private fun starMapping(startRating: Int): String {
    return when (startRating) {
        1 -> "Perfect"
        2 -> "Great"
        3 -> "Okay"
        4 -> "Bad"
        5 -> "Terrible"
        // don't want to add else as I believe it is prone to errors.
    }
}
```

Something similar to this:

```
return when (AutoCompleteRowType.values()[viewType]) {
    AutoCompleteRowType.ITEM -> ItemView(
        LayoutInflater.from(parent.context).inflate(R.layout.item_venue_autocomplete_item_info, parent, false))
    AutoCompleteRowType.SECTION -> SectionView(
        LayoutInflater.from(parent.context).inflate(R.layout.item_venue_autocomplete_section, parent, false)
    )
}
```
Using a `when` statement, it is impossible to exclude the `else` clause when matching on ints, because the compiler doesn't know what to return if `startRating` is not in the 1..5 range. You can, for example, throw an `IllegalStateException` if the value is not in the required range:

```
private fun starMapping(startRating: Int): String {
    return when (startRating) {
        1 -> "Perfect"
        2 -> "Great"
        3 -> "Okay"
        4 -> "Bad"
        5 -> "Terrible"
        else -> throw IllegalStateException("Invalid rating param value")
    }
}
```

Or you can do something like this:

```
return when {
    startRating <= 1 -> "Perfect"
    startRating == 2 -> "Great"
    startRating == 3 -> "Okay"
    startRating == 4 -> "Bad"
    else -> "Terrible"
}
```

But the `else` clause is required.
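If you truly want an exhaustive `when` without `else`, one sketch - only viable if you can change the parameter type at the call sites - is to model the rating as an enum, which the compiler can check for exhaustiveness (the enum name here is made up for the example):

```
enum class StarRating { PERFECT, GREAT, OKAY, BAD, TERRIBLE }

private fun starMapping(rating: StarRating): String = when (rating) {
    StarRating.PERFECT -> "Perfect"
    StarRating.GREAT -> "Great"
    StarRating.OKAY -> "Okay"
    StarRating.BAD -> "Bad"
    StarRating.TERRIBLE -> "Terrible"
    // no else needed: the compiler knows every value is covered
}
```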
Can I override Convert.ToDateTime()?
Can I override `Convert.ToDateTime()`? I don't want to check 100 times or more whether a string is null and, only if it is not, convert it to DateTime. Can I override this function so that it returns null when the string is null and otherwise converts it?
No, you can't override static methods. But you can write your *own* static method: ``` // TODO: Think of a better class name - this one sucks :) public static class MoreConvert { public static DateTime? ToDateTimeOrNull(string text) { return text == null ? (DateTime?) null : Convert.ToDateTime(text); } } ``` Note that the return type has to be `DateTime?` because `DateTime` itself is a non-nullable value type. You might also want to consider using [`DateTime.ParseExact`](http://msdn.microsoft.com/en-us/library/system.datetime.parseexact.aspx) instead of `Convert.ToDateTime` - I've never been terribly fond of its lenient, current-culture-specific behaviour. It depends where the data is coming from though. Do you know the format? Is it going to be in the user's culture, or the invariant culture? (Basically, is it user-entered text, or some machine-generated format?)
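Usage would then look something like this (the sample values are just for illustration):

```
using System;

class Demo
{
    static void Main()
    {
        DateTime? none = MoreConvert.ToDateTimeOrNull(null);          // no exception
        DateTime? some = MoreConvert.ToDateTimeOrNull("2012-06-18");  // parsed per current culture

        Console.WriteLine(none.HasValue);  // False
        Console.WriteLine(some.Value);     // e.g. 18/06/2012 00:00:00
    }
}
```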
iOS: How to check if In-App purchase enabled?
There are functions to check whether the user has enabled certain features, like location or the camera roll. But how do I check whether In-App Purchase is enabled?
this may help you:

### Swift

```
if SKPaymentQueue.canMakePayments() {
    // you can, so start adding the payment to the payment queue
} else {
    // you may not, handle the rejection
}
```

### ObjC

```
if ([SKPaymentQueue canMakePayments]) {
    // you can, so start adding the payment to the payment queue
} else {
    // you may not, handle the rejection
}
```

---

This is from the [`SKPaymentQueue`](https://developer.apple.com/documentation/storekit/skpaymentqueue) class reference, about the `canMakePayments()` class method:

> 
> `true` if the user is allowed to authorize payment. `false` if they do not have permission.
> 
> 

and further explanation for the same method:

> 
> An iPhone can be restricted from accessing the Apple App Store. For example, parents can restrict their children’s ability to purchase additional content. Your application should confirm that the user is allowed to authorize payments before adding a payment to the queue. Your application may also want to alter its behavior or appearance when the user is not allowed to authorize payments.
> 
> 
When Declaring a Reference to an Array of Ints, why must it be a reference to a const-pointer?
Note: I am using the g++ compiler (which I hear is pretty good and supposed to be pretty close to the standard).

---

Let's say you have declared an array of ints:

```
int a[3] = { 4, 5, 6 };
```

Now let's say you really want to declare a reference to that array (nevermind why, other than that Bjarne says the language supports it).

Case 1 -- If you try:

```
int*& ra = a;
```

then the compiler balks and says:

```
"invalid initialization of non-const reference of type `int*&' from a temporary of type `int*'"
```

First things first, why is 'a' a temporary variable (i.e. doesn't it have a place in memory?)... Anyway, fine, whenever I see a non-const error, I try to throw in a const...

Case 2 -- if you try:

```
int*const&rca = a; //wish I knew where the spaces should go (but my other post asking about this sort of protocol got a negative rank while many of the answers got ranked highly -- aha! there are stupid questions!)
```

Then everything is cool, it compiles, and you get a reference to the array.

Case 3 -- Now here is another thing that will compile:

```
int* justSomeIntPointer = a;       //LINE 1
int*& rpa = justSomeIntPointer;    //LINE 2
```

This also gives you a reference to the original array.

So here is my question: At what point does the name of a statically declared array become a const-pointer? I seem to remember that the name of an array of ints is also a pointer-to-int, but I don't remember it ever being a const-pointer-to-int...

It seems like Case 1 fails because the reference declared (ra) is not to a const-pointer, which may mean that 'a' was already a const-pointer-to-int to begin with.

It seems like Case 2 works because the reference declared (rca) is already a const-pointer-to-int.

Case 3 also works, which is neat, but why? At what point does the assumed pointer-to-int (i.e. the array name 'a') become a const-pointer? Does it happen when you assign it to an int* (LINE 1), or does it happen when you assign that int* to an int*& (LINE 2)?

Hope this makes sense. Thanks.
```
int*& ra = a;
```

`int*` is a pointer type, not an array type. So that's why it won't bind to `a`, which has type `int[3]`.

```
int* const& ra = a;
```

works, because it is equivalent to

```
int* const& ra = (int*)a;
```

That is, a temporary *pointer* is conceptually created on the right-hand side of the assignment and this temporary is then bound to `ra`. So in the end, this is no better than:

```
int* ra = a;
```

where `ra` is in fact a pointer to the first element of the array, not a reference to the array.

Declaring a reference to an array the easy way:

```
typedef int array_type[3];
array_type& ra = a;
```

The not-as-easy way:

```
int (&ra)[3] = a;
```

The C++11-easy way:

```
auto& ra = a;
```

---

> 
> At what point does the name of a statically declared array become a const-pointer? I seem to remember that the name of an array of ints is also a pointer-to-int, but I don't remember it ever being a const-pointer-to-int...
> 
> 

This is the right question to ask! If you understand when array-to-pointer decay happens, then you're safe. Simply put, there are two things to consider:

- decay happens when any kind of 'copying' is attempted (because C doesn't allow arrays to be copied directly)
- decay is a kind of conversion and can happen anytime a conversion is allowed: when the types don't match

The first kind typically happens with templates. So given `template<typename T> void pass_by_value(T);`, then `pass_by_value(a)` will actually pass an `int*`, because the array of type `int[3]` can't be copied in.

As for the second one, you've already seen it in action: this happens in your second case when `int* const&` can't bind to `int[3]`, but can bind to a temporary `int*`, so the conversion happens.
Get current date and time in GitHub workflows I have a GitHub workflow for releasing nightly snapshots of the repository. It uses [the create-release action](https://github.com/actions/create-release). This is how the workflow file looks right now: ``` name: Release Nightly Snapshot on: schedule: - cron: "0 0 * * *" jobs: build: name: Release Nightly Snapshot runs-on: ubuntu-latest steps: - name: Checkout master Branch uses: actions/checkout@v2 with: ref: 'master' - name: Create Release id: nightly-snapshot uses: actions/create-release@latest env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: tag_name: 'nightly snapshot' release_name: 'nightly snapshot' draft: false prerelease: false ``` I want `tag_name` and `release_name` to use the current date and time, instead of hard-coded values. However, I couldn't find any documentation on it. How should I do it?
From [this post](https://github.community/t5/GitHub-Actions/How-can-I-set-an-expression-as-an-environment-variable-at/m-p/41804/highlight/true#M4751) you can create a step that [sets its output](https://help.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-output-parameter) with the value `$(date +'%Y-%m-%d')`, and then use this output via `${{ steps.date.outputs.date }}`.

The following shows an example for environment variables and for inputs:

```
on: [push, pull_request]

name: build

jobs:
  build:
    name: Example
    runs-on: ubuntu-latest
    steps:
      - name: Get current date
        id: date
        run: echo "::set-output name=date::$(date +'%Y-%m-%d')"
      - name: Test with environment variables
        run: echo $TAG_NAME - $RELEASE_NAME
        env:
          TAG_NAME: nightly-tag-${{ steps.date.outputs.date }}
          RELEASE_NAME: nightly-release-${{ steps.date.outputs.date }}
      - name: Test with input
        uses: actions/hello-world-docker-action@master
        with:
          who-to-greet: Mona-the-Octocat-${{ steps.date.outputs.date }}
```

Outputs:

```
* Test with environment variables
nightly-tag-2020-03-31 - nightly-release-2020-03-31

* Test with input
Hello Mona-the-Octocat-2020-03-31
```
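Note: GitHub has since deprecated the `::set-output` workflow command. On current runners the same step writes to the `$GITHUB_OUTPUT` file instead, and the `${{ steps.date.outputs.date }}` references stay exactly the same:

```
- name: Get current date
  id: date
  run: echo "date=$(date +'%Y-%m-%d')" >> "$GITHUB_OUTPUT"
```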
Saving an array with update\_user\_meta I have added more fields to the user information, but now I'm struggling with saving an array of values. How I display the values: ``` <tr> <th><label for="days">days</label></th> <td> <input type="checkbox" name="days" value="<?php echo esc_attr( get_the_author_meta( 'monday', $user->ID ) ); ?>"> Monday<br> <input type="checkbox" name="days" value="<?php echo esc_attr( get_the_author_meta( 'tuesday', $user->ID ) ); ?>"> Tuesday<br> <input type="checkbox" name="days" value="<?php echo esc_attr( get_the_author_meta( 'wednesday', $user->ID ) ); ?>"> Wednesday<br> <input type="checkbox" name="days" value="<?php echo esc_attr( get_the_author_meta( 'thursday', $user->ID ) ); ?>"> Thursday<br> <input type="checkbox" name="days" value="<?php echo esc_attr( get_the_author_meta( 'friday', $user->ID ) ); ?>"> Friday<br> </td> </tr> ``` How I'm saving the values: ``` update_user_meta($user_id, 'days', $_POST['monday, tuesday, wednesday, thursday, friday']); ``` I know I'm doing it wrong, but I can't find the right way to do it. I hope someone here can help me.
Change `name="days"` to `name="days[]"` for all input fields and use

```
update_user_meta( $user_id, 'days', $_POST['days'] );
```

You will be storing a numeric array with true/false values depending on whether those fields were checked or not.

To pre-set the state of a checkbox, you can't simply set its value to what you desire. The right attribute for that is `checked="checked"`. Here's an example of how you could go about this in its entirety:

```
$values = array_combine(
    array( 'monday', 'tuesday', 'wednesday', 'thursday', 'friday' ),
    $_POST['days']
);
update_user_meta( $user_id, 'days', $values );
```

For display:

```
<?php
$days = get_user_meta( $user_id, 'days', true );
foreach( $days as $day => $checked ):
?>
    <input type="checkbox" name="days[]" <?php if( $checked ) echo 'checked="checked"'; ?>>
<?php endforeach; ?>
```

But in fact, in your case, using an array for the field name is just clumsy. You could just name each field "monday", "tuesday" and so on, retrieve the value with `$_POST['monday']` for example, and then store `array( 'monday' => $_POST['monday'], ... )`. With this in the database, you can then replace `name="days[]"` with `name="<?php echo $day; ?>"` in the foreach loop.

Notice I'm using `get_user_meta` instead of `get_the_author_meta` - they are not interchangeable!

The WordPress API may be a small hurdle for you, but I think that what you really need to study up on is just how to build forms and how to process them with PHP. Hopefully this can be a start, but there is a lot more material on the Internet.
How to create an Observable in Android?
What I want to do is to create a simple in-memory cache just to try Observables out. However, I got stuck because I don't understand how to create an observable. This is the code I have gotten so far:

```
public class MovieCache { MovieWrapper movieWrapper; public Observable<MovieWrapper> getMovies() { //How to create and return an Observable<MovieWrapper> here? } public void setCache(MovieWrapper wrapper) { movieWrapper = wrapper; } public void clearCache() { movieWrapper = null; } } ```

In the `getMovies()` method I want to create an Observable and return my local field movieWrapper to the subscriber. How can I do this? I tried using `new Observable.just(movieWrapper)` but it results in a null exception.
Take a look at [this tutorial](http://blog.danlew.net/2015/07/23/deferring-observable-code-until-subscription-in-rxjava/) as it does exactly what you are looking for. Basically you use `defer()` to make sure you always get the latest version of your cached object: ``` public class MovieCache { MovieWrapper movieWrapper; public Observable<MovieWrapper> getMovies() { return Observable.defer(new Func0<Observable<MovieWrapper>>() { @Override public Observable<MovieWrapper> call() { return Observable.just(movieWrapper); } }); } public void setCache(MovieWrapper wrapper) { movieWrapper = wrapper; } public void clearCache() { movieWrapper = null; } } ``` `defer()` makes sure that you will get the object upon *subscription* to the `Observable` not on *creation*. Note however that, according to the author of the post: > > The only downside to defer() is that it creates a new Observable each > time you get a subscriber. create() can use the same function for each > subscriber, so it's more efficient. As always, measure performance and > optimize if necessary. > > >
How can I use the Homebrew Python version with Homebrew MacVim on Snow Leopard? > > **Note:** For Mountain Lion, see: [How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?](https://superuser.com/questions/461845/how-can-i-use-the-homebrew-python-with-homebrew-macvim-on-mountain-lion?lq=1) > > > I've installed Python 2.7 with Homebrew on Mac OS X Snow Leopard. When I install MacVim using `brew install macvim`, it compiles with Python support, but is compiled against the system's Python installation. This can be seen by running the command: ``` :python print(sys.version) 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] ``` However, Vim seems to be using the Python executable that's in my path: ``` :python print(sys.executable) /usr/local/bin/python ``` This causes problems for Vim scripts using [vim\_bridge](http://pypi.python.org/pypi/vim_bridge), such as [vim-rst-tables](https://github.com/nvie/vim-rst-tables) (can't import module "re"). How can I compile Vim against my Homebrewed Python version?
First, remove MacVim if you've already installed it: `brew remove macvim` Next, edit MacVim's Formula with the command: `brew edit macvim`. Find the arguments list (begins with `args = %W[ ...`), and modify this line: ``` --enable-pythoninterp ``` Change it to these two lines: ``` --enable-pythoninterp=dynamic --with-python-config-dir=/usr/local/lib/python2.7/config ``` (this config dir should be symlinked to `/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config`) Now when you reinstall MacVim with `brew install macvim`, it will use your Python 2.7 installation. ``` :python print(sys.version) 2.7.3 (default, Apr 16 2012, 23:20:02) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)] ```
Hardcoding on frontend I have an API which returns a list of articles. An article can have three statuses: Approved, Pending and Rejected. Now the front-end needs to hit the API in the following scenarios:

1. Get all the articles irrespective of status.
2. Get all articles added by a user.
3. Get all articles added by a user which are pending/rejected.
4. Get all articles which are pending.

There are two buttons on the front-end: one for fetching all pending articles so that they can be approved (only available for admin users); another for fetching all rejected and pending articles for the logged-in user so that he can see his pending/rejected articles. These two buttons are available on the list page of articles, i.e. the API has already been hit to fetch all articles and the buttons are more like filters now. Is it good to return the following dict along with the list of all articles on the first API call:

```
{
    "admin_approval": ["Pending"],
    "self_view": ["Pending", "Rejected"]
}
```

so that the front-end can know which statuses need to be passed in the query params for filtering on subsequent button hits? Or should the logic which decides which filters to apply be hard-coded on the front-end? P.S. Using Django for the back-end and Angular for the front-end.
The answer to this depends on how much you want the UI to cope with future requirement changes, such as additional buttons for different kinds of filtering. For trivial apps, most people will hard-code the filtering object into each button. However, if you want the system to be extensible, and you have total control over the REST API, I'd suggest that you add another API endpoint that returns all supported filtering modes. So, basically, you'll have

- REST endpoint for fetching all articles: like `http://your-machine/api/v1/articles`
- REST endpoint for fetching all supported filtering modes: like `http://your-machine/api/v1/filters`

The second endpoint will then return an array that has enough information for you to render buttons, and know which filters to apply when any of the buttons is clicked:

```
[
  {
    "name": "wait_for_approval",
    "buttonText": "Await Approval",
    "filtersBy": [ ... ]
  },
  {
    "name": "self_pending_rejected",
    "buttonText": "Pending or Rejected",
    "filtersBy": [ ... ]
  },
  ...
]
```

With this approach, it's

- **future-proof** as your endpoint can return a bigger array when requirements change.
- **clean** as it also prevents you from polluting the first REST endpoint.
- **flexible**: if you have users with different roles, the REST endpoint simply needs to check credentials and return different arrays depending on which user makes a request.
When can it be useful to use the `IWebHost.Start()` method? ASP.NET Core 2 MVC. The `Microsoft.AspNetCore.Hosting.IWebHost` interface [contains](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.hosting.iwebhost.start?view=aspnetcore-2.0) the `Start()` method. Also, the `Microsoft.AspNetCore.Hosting.WebHostExtensions` class [defines](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.hosting.webhostextensions.run?view=aspnetcore-2.0) the `Run()` extension method for the `IWebHost` interface. The `Run()` method runs a web application and blocks the calling thread until host shutdown. At the same time, the `Start()` method doesn't block the calling thread until host shutdown. In this case the application exits before the browser can show information to the user. Hmm... When can it be useful to use the `IWebHost.Start()` method?
Not all hosting is performed in a classic serving-pages-over-the-internet scenario. For example, you may want to serve content from your desktop app or a Windows service. In this situation you probably don't want the call to block - your app will have other things to do. For example, let's say you have a Windows Forms app and you want to serve content from it; you could simply extend the `Main` method:

```
private static IWebHost _webHost;

static void Main(string[] args)
{
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);

    //Create the host
    _webHost = WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .Build();

    //We want to start, not run, because we need the rest of the app to run
    _webHost.Start();

    //Run the app as normal
    Application.Run(new MainForm());

    //We're back from the app now, we can stop the host
    //...
}
```
CRTP and c++1y return type deduction I was recently playing with CRTP when I came across something that surprised me when used with c++1y functions whose type is deduced. The following code works:

```
template<typename Derived>
struct Base
{
    auto foo() {
        return static_cast<Derived*>(this)->foo_impl();
    }
};

struct Derived:
    public Base<Derived>
{
    auto foo_impl() -> int {
        return 0;
    }
};

int main()
{
    Derived b;
    int i = b.foo();
    (void)i;
}
```

I assumed that the return type from `Base<Derived>::foo` was a `decltype` of the expression returned, but if I modify the function `foo` like this:

```
auto foo() -> decltype(static_cast<Derived*>(this)->foo_impl()) {
    return static_cast<Derived*>(this)->foo_impl();
}
```

This code does not work anymore; I get the following error (from GCC 4.8.1):

```
||In instantiation of 'struct Base<Derived>':|
|required from here|
|error: invalid static_cast from type 'Base<Derived>* const' to type 'Derived*'|
||In function 'int main()':|
|error: 'struct Derived' has no member named 'foo'|
```

My questions are: Why doesn't it work? What could I possibly write to get the correct return type without relying on automatic return type deduction? And, well... here is a [live example](http://coliru.stacked-crooked.com/a/d4c39f214bf59436).
### Why does the first example work (return type deduction)? The definition of a member function of a class template is only implicitly instantiated when odr-used (or explicitly instantiated). That is, by deriving from `Base<Derived>`, you do *not* implicitly instantiate the function body. Hence, the return type is still *not deduced yet*. At the (\*) point of instantiation, `Derived` is complete, `Derived::foo_impl` is declared, and the return type deduction can succeed. (\*) not "the", but "certain points of instantiation". There are several. --- ### Why doesn't the second example work (trailing-return-type)? > > I assumed that the return type from `Base<Derived>::foo` was a `decltype` > of the expression returned, but if I modify the function `foo` like this: > > > The *trailing-return-type* is part of the declaration of the member function; hence, it is part of the definition of the surrounding class, which is required to be instantiated when deriving from `Base<Derived>`. At this point, `Derived` is still incomplete, specifically `Derived::foo_impl` has not been declared yet. --- > > What could I possibly write to get the correct return type without > relying on automatic return type deduction? > > > Now this is tricky. I'd say this is not very clearly defined in the Standard, e.g. see [this question](https://stackoverflow.com/q/17478621/420683). Here's an example that demonstrates that clang++3.4 does not find the members of `Derived` inside `Base<Derived>`: ``` template<typename Derived> struct Base { auto foo() -> decltype( std::declval<Derived&>().foo_impl() ) { return static_cast<Derived*>(this)->foo_impl(); } }; ``` `declval` doesn't require a complete type, so the error message is that there's no `foo_impl` in `Derived`. --- There's a hack, but I'm not sure if it's compliant: ``` template<typename Derived> struct Base { template<class C = Derived> auto foo() -> decltype( static_cast<C*>(this)->foo_impl() ) { static_assert(std::is_same<C, Derived>{}, "you broke my hack :("); return static_cast<Derived*>(this)->foo_impl(); } }; ```
Fastest way to read/write a Bitmap from/to file? I'm currently writing Bitmaps to a PNG file and also reading them back to a Bitmap. I'm looking for ways to improve the speed at which writing and reading happens. The images need to be lossless since I'm reading them back to edit them. The place where I see the worst performance is the actual `BitmapFactory.decode(...)`. ***A few questions:*** 1. Is there a faster solution to read/write from file to a Bitmap using the NDK? 2. Is there a better library to decode a Bitmap faster? 3. What is the best way to store and read a Bitmap?
Trying to resolve the best/fastest possible way to read/write an image to file came down to using plain old `BitmapFactory`. I tried using the NDK to do the encoding/decoding but that really didn't make a difference. Essentially the format to use was lossless PNG since I didn't want to lose any quality after editing an image. The main thing I needed to understand was how long encoding took versus decoding. The encoding numbers were in the 300-600ms range, depending on image size, and decoding was just fast, around 10-23ms. After understanding all that, I just created a worker thread that I passed images needing encoding to and let it do the work without affecting the user experience. The image was kept cached in memory just in case it was needed right away, before it was completely encoded and saved to file.
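To illustrate that worker-thread approach, here is a minimal sketch (the class and method names are mine, not from the original code) using a single-threaded executor so encodes queue up off the UI thread while the in-memory Bitmap stays usable:

```
import android.graphics.Bitmap;
import android.util.Log;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BitmapSaver {
    // One background thread so encodes happen in order and off the UI thread
    private final ExecutorService encoder = Executors.newSingleThreadExecutor();

    public void saveAsync(final Bitmap bitmap, final File file) {
        encoder.execute(new Runnable() {
            @Override
            public void run() {
                try (FileOutputStream out = new FileOutputStream(file)) {
                    // PNG is lossless; the quality argument is ignored for PNG
                    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                } catch (IOException e) {
                    Log.e("BitmapSaver", "Failed to encode bitmap", e);
                }
            }
        });
    }
}
```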
Azure blob storage throttling We are trying to move some data from one of our blob storage accounts and we are getting throttled. Initially, we were getting 9 Gbps but soon after we got throttled down to 1.1 Gbps. We also started receiving errors saying that Azure forcibly closed the connection and we were getting network timeouts. Has anyone experienced this or have any knowledge about increasing these limits?
According to the official document [`Storage limits`](https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits#storage-limits) of `Azure subscription and service limits, quotas, and constraints`, there are some limits relevant to your scenario which cannot be worked around, as below.

> 1. Maximum request rate1 per storage account: 20,000 requests per second
> 2. Max egress:
> - for general-purpose v2 and Blob storage accounts (all regions): 50 Gbps
> - for general-purpose v1 storage accounts (US regions): 20 Gbps if RA-GRS/GRS enabled, 30 Gbps for LRS/ZRS 2
> - for general-purpose v1 storage accounts (Non-US regions): 10 Gbps if RA-GRS/GRS enabled, 15 Gbps for LRS/ZRS 2
> 3. Target throughput for single blob: Up to 60 MiB per second, or up to 500 requests per second

When downloading data to a local environment, besides your network bandwidth and stability, you have to keep the number of concurrent requests per blob under `500` and the total number of requests under `20,000` per second if you want to move data programmatically. So high-concurrency control is the key point. If you are just moving data inside Azure, or not doing it programmatically, the best way is to use the official data transfer tools: `AzCopy` (for [Windows](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json) or [Linux](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-linux?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json)) and [Azure Data Factory](https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json). Then you will not need to worry about these limits and can just wait for the move to finish. If you have any concerns, please feel free to let me know.
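To illustrate the AzCopy option, a minimal v10-style invocation for a recursive, service-side container-to-container copy might look like this (account names, container names and SAS tokens below are placeholders):

```
azcopy copy "https://sourceaccount.blob.core.windows.net/sourcecontainer?<source-SAS>" "https://destaccount.blob.core.windows.net/destcontainer?<dest-SAS>" --recursive
```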
Natural Language Processing - Converting Text Features Into Feature Vectors So I've been working on a natural language processing project in which I need to classify different styles of writing. Assuming that semantic features from texts have already been extracted for me, I plan to use Weka in Java to train SVM classifiers using these features that can be used to classify other different texts. The part I'm having trouble with is that to train an SVM, the features must be converted into a feature vector. I'm not sure how you would be able to represent features such as vocabulary richness, n-grams, punctuation, number of paragraphs, and paragraph length as numbers in a vector. If somebody could point me in the right direction, that would be greatly appreciated.
I'm not sure what values your attributes can take on, but perhaps this example will help you: Suppose we are conducting a supervised learning experiment to try to determine if a period marks the end of a sentence or not, `EOS` and `NEOS` respectively. The training data came from normal sentences in a paragraph style format, but were transformed to the following vector model:

- Column 1: Class: End-of-Sentence or Not-End-of-Sentence
- Columns 2-8: The +/- 3 words surrounding the period in question
- Columns 9,10: The number of words to the left/right, respectively, of the period before the next reliable sentence delimiter (e.g. ?, ! or a paragraph marker).
- Column 11: The number of spaces following the period.

Of course, this is not a very complicated problem to solve, but it's a nice little introduction to Weka. We can't just use the words as features (really high dimensional space), but we can take their POS (part of speech) tags. We can also extract the length of words, whether or not the word was capitalized, etc. So, you could feed anything as testing data, so long as you're able to transform it into the vector model above and extract the features used in the .arff. The following (very small portion of a) .arff file was used for determining whether a period in a sentence marked the end of it or not:

```
@relation period

@attribute minus_three {'CC', 'CD', 'DT', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNPS', 'NNS', 'NP', 'PDT', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP','WRB', 'NUM', 'PUNC', 'NEND', 'RAND'}
@attribute minus_three_length real
@attribute minus_three_case {'UC','LC','NA'}
@attribute minus_two {'CC', 'CD', 'DT', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNPS', 'NNS', 'NP', 'PDT', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP','WRB', 'NUM', 'PUNC', 'NEND', 'RAND'}
@attribute minus_two_length real
@attribute minus_two_case {'UC','LC','NA'}
@attribute minus_one {'CC', 'CD', 'DT', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNPS', 'NNS', 'NP', 'PDT', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP','WRB', 'NUM', 'PUNC', 'NEND', 'RAND'}
@attribute minus_one_length real
@attribute minus_one_case {'UC','LC','NA'}
@attribute plus_one {'CC', 'CD', 'DT', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNPS', 'NNS', 'NP', 'PDT', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP','WRB', 'NUM', 'PUNC', 'NEND', 'RAND'}
@attribute plus_one_length real
@attribute plus_one_case {'UC','LC','NA'}
@attribute plus_two {'CC', 'CD', 'DT', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNPS', 'NNS', 'NP', 'PDT', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP','WRB', 'NUM', 'PUNC', 'NEND', 'RAND'}
@attribute plus_two_length real
@attribute plus_two_case {'UC','LC','NA'}
@attribute plus_three {'CC', 'CD', 'DT', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNPS', 'NNS', 'NP', 'PDT', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP','WRB', 'NUM', 'PUNC', 'NEND', 'RAND'}
@attribute plus_three_length real
@attribute plus_three_case {'UC','LC','NA'}
@attribute left_before_reliable real
@attribute right_before_reliable real
@attribute spaces_follow_period real
@attribute class {'EOS','NEOS'}

@data
VBP, 2, LC,NP, 4, UC,NN, 1, UC,NP, 6, UC,NEND, 1, NA,NN, 7, LC,31,47,1,NEOS
NNS, 10, LC,RBR, 4, LC,VBN, 5, LC,?, 3, NA,NP, 6, UC,NP, 6, UC,93,0,0,EOS
VBD, 4, LC,RB, 2, LC,RP, 4, LC,CC, 3, UC,UH, 5, LC,VBP, 2, LC,19,17,2,EOS
```

As you can see, each attribute can take on whatever you want it to:

- `real` denotes a real number
- I made up `LC` and `UC` to denote upper case and lower case, respectively
- Most of the other values are `POS` tags

You need to figure out exactly what your features are, and what values you'll use to represent/classify them. Then, you need to transform your data into the format defined by your .arff. To touch on your punctuation question, let's suppose that we have sentences that all end in `.` or `?`. You can have an attribute called punc, which takes two values:

```
@attribute punc {'p','q'}
```

I didn't use `?` because that is what is (conventionally) assigned when a data point is missing. Or you could have boolean attributes that indicate whether a character or what have you was present (with 0, 1 or false, true). Another example, but for quality:

```
@attribute quality {'great','good', 'poor'}
```

How you determine said classification is up to you, but the above should get you started. Good luck.
params does not contain POST Body in Elixir / Phoenix I'm trying to build a very simple REST API. It does not include a database or models. Here's my router:

```
defmodule Zentonies.Router do
  use Zentonies.Web, :router

  pipeline :browser do
    plug :accepts, ["html"]
    plug :fetch_session
    plug :fetch_flash
    plug :protect_from_forgery
    plug :put_secure_browser_headers
  end

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/v1/events/", Zentonies do
    pipe_through :api
    post "/call", PageController, :call
  end
end
```

Here is the Controller:

```
defmodule Zentonies.PageController do
  require Logger
  import Joken
  use Zentonies.Web, :controller

  def index(conn, _params) do
    render conn, "index.html"
  end

  def call(conn, params) do
    Logger.debug inspect(params)

    conn
    |> put_status(200)
    |> text("Response.")
  end
end
```

Now, if I HTTP POST to this endpoint, `inspect(params)` does not return the JSON body of my POST request. Instead it returns `:call`. Any help is greatly appreciated!
A `call/2` function is [defined by Phoenix](https://github.com/phoenixframework/phoenix/blob/7fd784a8fe92798fd212c49d1d114ec99e65693c/lib/phoenix/controller/pipeline.ex#L87-L104) for its own use to dispatch to the correct action in every Phoenix Controller. By creating a function with that name you're overriding the builtin functionality. You'll have to use a different name for the action. Check out the section "Controllers are plugs" in the documentation of [`Phoenix.Controller.Pipeline`](https://hexdocs.pm/phoenix/Phoenix.Controller.Pipeline.html): > > ## Controllers are plugs > > > Like routers, controllers are plugs, but they are wired to dispatch to a particular function which is called an action. > > > For example, the route: > > > > ``` > get "/users/:id", UserController, :show > > ``` > > will invoke `UserController` as a plug: > > > > ``` > UserController.call(conn, :show) > > ``` > > which will trigger the plug pipeline and which will eventually invoke the inner action plug that dispatches to the `show/2` function in the `UserController`. > > >
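For example, renaming the action to something like `create` (any name other than `call` works; `create` is just a conventional choice) resolves the clash:

```
# in the router
post "/call", PageController, :create

# in the controller
def create(conn, params) do
  Logger.debug inspect(params)

  conn
  |> put_status(200)
  |> text("Response.")
end
```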
"TypeError: Cannot read property 'name' of undefined" when using useState and useEffect I'm new to using useState and useEffect and I'm trying to fetch users from randomuser api and display them. I'm following the documentation and I can see my users array logged into the console. What can I change so it doesn't tell me "Cannot read property 'name' of undefined". Would appreciate the help. ``` import axios from "axios"; import { Layout, Loading } from "../components"; // import config from '../config.json'; const Overview = () => { const [loading, setLoading] = useState(true); const [searchQuery, setSearchQuery] = useState(""); const [devices, setDevices] = useState([]); useEffect(() => { async function callApi() { const response = await axios.get("https://randomuser.me/api/?results=10"); const devices = response?.data?.results; setDevices(devices); // const response = await axios.get(`${config.serverAPI}/devices`); // const devices = response?.data?.status === 'success' && response?.data?.devices; console.log("Devices", devices); await localStorage.setItem("devices", JSON.stringify(devices)); setLoading(false); } callApi(); }, []); // eslint-disable-line react-hooks/exhaustive-deps return ( <Layout> <div className="overview-page-wrapper"> {loading && <Loading />} <h2>Title</h2> <div>{devices[0].name.first}</div> </div> </Layout> ); }; export default Overview; ``` ``` import ReactGA from "react-ga"; import { withRouter } from "react-router"; import Disclaimer from "./Disclaimer"; import Header from "./Header"; import Sidebar from "./Sidebar"; const Layout = ({ children, match }) => { ReactGA.pageview(match.url); console.log(111, match.url); return ( <div className="page-wrapper"> <Sidebar /> <div className="main-section"> <Header /> <div className="content"> {children} </div>{" "} </div>{" "} <Disclaimer /> </div> ); }; export default withRouter(Layout); ```
The problem is with your

```
return (
    <Layout>
      <div className="overview-page-wrapper">
        {loading && <Loading />}
        <h2>Title</h2>
        <div>{devices[0].name.first}</div>
      </div>
    </Layout>
  );
```

This is because the render happens first and your useEffect hook runs after it; until then `devices` is an empty array, so `devices[0]` is `undefined`, and when you do `devices[0].name` you are essentially doing `undefined.name`. To overcome this you should do something like

```
return (
    <Layout>
      <div className="overview-page-wrapper">
        {loading && <Loading />}
        <h2>Title</h2>
        {devices.length > 0 && (<div>{devices[0].name.first}</div>)}
      </div>
    </Layout>
  );
```
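Since your fetch code already uses optional chaining (`response?.data?.results`), your toolchain presumably supports it, so an equivalent guard would be:

```
<div>{devices[0]?.name?.first}</div>
```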
how to update a child component's property using props in Vue.js? I have an addToCart component (child) on the foodList component (parent), and there is another component, Cart. I want to reset the addToCart component's counter value to 0 whenever I empty my cart.

**App.vue**

```
data() {
    return {
      msg: "Welcome to Your Food Ordering App",
      foodData:[],
      cart:[],
      reset:false
    };
  },
methods: {
    emptyCart:function(){
      this.reset = true;
      this.cart = [];
    }
  }
```

**foodList.vue**

```
export default {
    props:['foods','reset'],
    data() {
        return {
        };
    }
}

<addToCart :reset="reset"></addToCart>
```

**addToCart**

```
export default {
    props:['food','reset'],
    data(){
        return {
            counter:0
        }
    },
    beforeMount() {
        if(this.reset) {
            this.counter = 0;
        }
    }
```

In App.vue I'm modifying the reset property to true and then passing it to foodList.vue, which then passes it to addToCart.vue. In addToCart.vue I'm checking if the reset prop is true and then setting the counter to 0. But this is not working. Let me know what I am missing. Please refer to this link for the complete code. [food ordering app](https://github.com/spratap124/Food-Ordering)
So basically you want to pass the `state` over multiple components. There are multiple ways to achieve this. These are my three recommended ones.

## Centralized State management

In order to handle `state` more easily, you can make use of a centralized state management tool like `vuex`: <https://github.com/vuejs/vuex> This is what I recommend, especially when it comes to bigger applications, where you need to pass the state over multiple levels of components. Trust me, this makes your life a lot easier.

## Property binding

The most basic way to communicate with your child components is property binding. But especially when it comes to multi-level communication it can get quite messy. In this case, you would simply add `counter` to both of your child components' `props` arrays like this:

**foodList.vue (1. Level Child Component)**

```
export default {
    props:['foods','reset', 'counter'],
    // ... your stuff
}
```

And include the component like this:

```
<foodList :counter="counter"></foodList>
```

**addToCart.vue (2. Level Child Component)**

```
export default {
    props:['food','reset', 'counter'],
    // ... your stuff
}
```

And finally include the component like this:

```
<addToCart :reset="reset" :counter="counter"></addToCart>
```

As a last step, you can specify `counter` in the `data` object of your root component and then modify it on a certain `event`. The `state` will be passed down.

**App.vue**

```
data() {
    return {
      // ... your stuff
      counter: 0,
    };
  },
methods: {
    emptyCart:function(){
      // ... your stuff
      this.counter = 0; // reset the counter from your parent component
    }
  }
```

## Event Bus

As a third option, you could make use of Vue's event bus. This is the option I personally choose for applications that get too messy with simple property binding but are still too small to make use of `Centralized State management`. To get started create a file called `event-bus.js` and then add the following code to it:

```
import Vue from 'vue';
export const EventBus = new Vue();
```

Now you can simply trigger events from your parent Component like this:

**App.vue**

```
import { EventBus } from './event-bus.js'; // check the path

export default {
    // ... your stuff
    methods: {
        emptyCart:function(){
            // ... your stuff
            EventBus.$emit('counter-changed', 0); // trigger counter-changed event
        }
    }
}
```

And then listen to the `counter-changed` event in your child component.

**addToCart.vue**

```
import { EventBus } from './event-bus.js';

export default {
    // ... your stuff
    created() {
        EventBus.$on('counter-changed', newCounter => {
            this.counter = newCounter;
        });
    }
}
```

> Learn more about the event bus: <https://alligator.io/vuejs/global-event-bus/>
The function evaluation requires all threads to run - MVC The following error is occurring when passing values from a model to a parameter inside an if statement. [![QuickWatch Error Description](https://i.stack.imgur.com/ZDeJY.png)](https://i.stack.imgur.com/ZDeJY.png) [![Threads](https://i.stack.imgur.com/I6kFp.png)](https://i.stack.imgur.com/I6kFp.png) This is the code where the issue is occurring; I'm pretty sure it's not the *ValidateUserPassword* method.

```
if (PSFNetSystem.ValidateUserPassword(model.Server, model.Username, model.Password) < 0)
{
    ModelState.AddModelError("Password", "Failed to login");
    return View(model);
}
```

Any help is appreciated, thanks.
**Short answer:** You can click on the "thread" icon to the right to force the evaluation. **Long answer:** When you evaluate a method in the debugger, the debugger/CLR sets the context of the current thread to the method being evaluated, sets a guard breakpoint, freezes all threads except the current thread, then continues the process. When the breakpoint is hit, the debugger restores the thread to its previous state and uses the return value to populate the window. Because only one thread is running, it's possible to create deadlock situations if the evaluation thread takes a lock that's already held by another thread. If the CLR detects a possible deadlock it aborts the evaluation and the debugger ultimately shows that message. Clicking the button to allow all threads to run means that we don't freeze the other threads when retrying the evaluation. This will allow the evaluation to proceed, but has the disadvantage of breakpoints on other threads being ignored. BTW, if you are writing code that you know will likely deadlock if it's evaluated, you can call Debugger.NotifyOfCrossThreadDependency. This will cause the behavior you are seeing.
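A minimal sketch of that last point (the property and field names here are made up for illustration):

```
public string ConnectionState
{
    get
    {
        // Tell any attached debugger that evaluating this getter requires
        // other threads to run (it takes a lock another thread may hold),
        // so it should not be evaluated implicitly.
        System.Diagnostics.Debugger.NotifyOfCrossThreadDependency();

        lock (_syncRoot)
        {
            return _state;
        }
    }
}
```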
Why doesn't my schema add default values in mongoose arrays? I have a schema like this:

```
var CustomUserSchema = new Schema({
   role: [],
   permissions: [],
});
```

The `permissions` field stores an array of strings that looks like this:

```
["Delete", "Show","Create"]
```

whereas the `role` field stores an array of objects that looks like this:

```
[
  {
     name:"admin",
     priority:10,
     permissions: ["Delete", "Show" , "update"]
  },
  {
     name:"user",
     priority:5,
     permissions: ["Delete", "Show"]
  }
]
```

Now, my requirement is to be able to store "Show" as the default value for the `permissions` field in the schema, and to store 'user' as the default value for `name` inside the `role` field, 0 for `priority` inside the `role` field and 'Show' for `permissions` inside the `role` field. Trying myself, I came up with this:

```
var CustomUserSchema = new Schema({
   role: [{
        name: {type: String, default: 'user'},
        priority:{ type: Number, default: 0 } ,
        permissions: [{type:String, default:'Show'}]
   }],
   permissions: [{type:String, default:'Show'}]
});
```

But it does not assign the default values to the fields at all and gives an array of size 0 to the fields. What seems wrong with the above schema? How do I store these as default values?
Default values really don't work with arrays, unless of course it is a document within the array and you want to set a default property for that document when added to the array. Therefore an array is always initialized as "empty" unless of course you deliberately put something in it. In order to do what you want to achieve, then add a [pre save hook](http://mongoosejs.com/docs/middleware.html) that checks for an empty array and then otherwise places a default value in there: ``` var async = require('async'), mongoose = require('mongoose'), Schema = mongoose.Schema; mongoose.connect('mongodb://localhost/authtest'); var userSchema = new Schema({ permissions:[{ "type": String, "enum": ["Delete","Show","Create","Update"], }] }); userSchema.pre("save",function(next) { if (this.permissions.length == 0) this.permissions.push("Show"); next(); }); var User = mongoose.model( 'User', userSchema ); var user = new User(); user.save(function(err,user) { if (err) throw err; console.log(user); }); ``` Which creates the value where empty: ``` { __v: 0, _id: 55c2e3142ac7b30d062f9c38, permissions: [ 'Show' ] } ``` If of course you initialize your data or manipulate to create an entry in the array: ``` var user = new User({"permissions":["Create"]}); ``` Then you get the array you added: ``` { __v: 0, _id: 55c2e409ec7c812b06fb511d, permissions: [ 'Create' ] } ``` And if you wanted to "always" have "Show" present in permissions, then a similar change to the hook could enforce that for you: ``` userSchema.pre("save",function(next) { if (this.permissions.indexOf("Show") == -1) this.permissions.push("Show"); next(); }); ``` Which results in: ``` var user = new User({"permissions":["Create"]}); { __v: 0, _id: 55c2e5052219b44e0648dfea, permissions: [ 'Create', 'Show' ] } ``` Those are the ways you can control defaults on your array entries without needing to explicitly assign them in your code using the model.
Why don't property initializers call a custom setter? From the [Kotlin documentation](https://kotlinlang.org/docs/reference/properties.html#getters-and-setters), custom setters are allowed: ``` class Test { var stringRepresentation: String get() = field set(value) { setDataFromString(value) } init { stringRepresentation = "test" } private fun setDataFromString(value: String) { } } ``` But you cannot have a custom setter without a custom getter (and initialize from the `init` block): ``` class Test { // Compilation error: "Property must be initialized" var stringRepresentation: String set(value) { setDataFromString(value) } init { stringRepresentation = "test" } private fun setDataFromString(value: String) { } } ``` Although you can have a custom getter without a custom setter, no problem here: ``` class Test { var stringRepresentation: String get() = field init { stringRepresentation = "test" } private fun setDataFromString(value: String) { } } ``` So why can't you use a custom setter with a property initialized from within the `init` block, and why does the `init` block invoke the custom setter while the property initializer assigns directly, bypassing the custom setter? ``` class Test { var stringRepresentation: String = "" // Does not call custom setter set(value) { setDataFromString(value) } init { stringRepresentation = "test" // Calls custom setter } private fun setDataFromString(value: String) { } } ```
Property initializers don't call the custom setter because their purpose is to provide the default value. Unlike in Java, in Kotlin not only local variables must be initialized before their first access, but class properties as well. In Java this is valid.

```
public class Test {
    public String str;

    public static void main(String[] args) {
        System.out.println(new Test().str);
    }
}
```

In Kotlin this is not.

```
class Parent {
    var str: String?
}

fun main(args: Array<String>) {
    Parent().str
}
```

For this reason a custom setter needs its property to be initialized either by a property initializer or by the constructor. Take a look at the following example.

```
class Test {
    var stringRepresentation: String = "a" // Default value. Does not call custom setter
        get() = field
        set(value) {
            println("Setting stringRepresentation property to %s. Current value is %s.".format(value, field))
            field = setDataFromString(value)
        }

    init {
        this.stringRepresentation = "b" // Calls custom setter
    }

    private fun setDataFromString(value: String): String {
        println("Setting stringRepresentation property to %s.".format(value))
        return value
    }
}

fun main(args: Array<String>) {
    Test().stringRepresentation = "c" // Calls custom setter
}
```

The property stringRepresentation is initialized to **"a"** upon instantiation of its class without calling the setter. Then the init block is called and sets the value to **"b"** using the setter. Then to **"c"** using the setter.
Android/ORMLite Insert Row with ID I'm currently using ORMLite to work with a SQLite database on Android. As part of this I am downloading a bunch of data from a backend server and I'd like to have this data added to the SQLite database in the exact same format it is on the backend server (ie the IDs are the same, etc). So, my question to you is if I populate my database entry object (we'll call it Equipment), including Equipment's generatedId/primary key field via setId(), and I then run a DAO.create() with that Equipment entry will that ID be saved correctly? I tried it this way and it seems to me that this was not the case. If that is the case I will try again and look for other problems, but with the first few passes over the code I was not able to find one. So essentially, if I call DAO.create() on a database object with an ID set will that ID be sent to the database and if it is not, how can I insert a row with a primary key value already filled out? Thanks!
@Femi is correct that an object can either be a generated-id *or* an id, but *not* both. The issue is more than just how ORMLite stores the object; the object also has to match the schema that the database was generated with. [ORMLite](http://ormlite.com/) supports an `allowGeneratedIdInsert=true` option to the `@DatabaseField` annotation that allows this behavior. This is not supported by some database types (Derby for example) but works under Android/SQLite. For posterity, you can also create 2 objects that share the same table -- one with a generated-id and one without. Then you can insert using the generated-id Dao to get that behavior and the other Dao to take the id value set by the caller. Here's [another answer talking about that](https://stackoverflow.com/questions/5216663/can-i-temporarily-suspend-auto-generated-id-in-ormlite). The issue for you is that this will create a lot of extra DAOs. The only other solution is to *not* use the id for your purposes. Let the database generate the id and then have an additional field that you use that is set externally for your purposes. Forcing the database-id in certain circumstances seems to me to be a bad pattern.
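For reference, a minimal sketch of the `allowGeneratedIdInsert` option on the entity (the field layout here is illustrative):

```
public class Equipment {
    // generatedId lets the database assign the id on a normal create(),
    // while allowGeneratedIdInsert lets dao.create() keep an id that you
    // have already set yourself via setId() (e.g. from the backend server)
    @DatabaseField(generatedId = true, allowGeneratedIdInsert = true)
    private int id;

    // ... other fields ...
}
```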
Git error when pushing - update\_ref failed I have a problem when I try to push my local commits; it probably happened when Android Studio crashed. Here is the error:

> update\_ref failed for ref 'refs/remotes/origin/master': cannot lock
> ref 'refs/remotes/origin/master': unable to resolve reference
> refs/remotes/origin/master: Invalid argument

`git stash` says that `HEAD` is now at my latest commit. I'm still a beginner with git and I have no idea how to fix it. What can I do?
Your `refs/remotes/origin` directory—this is a directory within in your local repository—has some sort of problem. (It's not clear to me what exactly went wrong, but your guess that it happened when Android Studio crashed seems likely to me as well.) To fix it, you can *probably* use the sequence of commands below. Note that I don't recommend it in general—your Git service, whether that's Android Studio or ordinary command-line Git, is not supposed to crash and leave you with a broken repository, so you should never *have* to do this, it's just an emergency repair, something like putting duct tape all over your car1 after a tree fell on it, just enough to get you to a proper repair / replacement vehicle later: ``` $ cd $(git rev-parse --show-toplevel) # if necessary $ rm -rf .git/refs/remotes/origin # remove all origin/* $ mkdir .git/refs/remotes/origin # create empty origin/ $ git fetch origin # repopulate origin/* ``` In any case it's likely that your `git push` has actually succeeded at this point, since what is failing is the update to your `origin/master` remote-tracking branch. Your Git has just finished talking to another Git on `origin`, giving them (the users of `origin`) updates for their `master`, and `origin` has accepted those updates, and your Git is now recording the fact that, the last time it talked with `origin`, *their* `master` was set to some particular commit hash—the one you just pushed. (Remember that your `origin/*` remote-tracking branches are just recording, for you, what your Git saw that *their* Git has for its branches. Your Git updates these when you run `git fetch`, where your Git talks to their Git and gets all of its branches. Your git also updates some—not all—on `git push`, when your Git talks to their Git and asks them to *set* one or more branches, to the hashes your Git hands them.) --- 1Except that as a Git mechanic, instead of just duct tape, baling wire, and chewing gum, my recommended parts are actually proper body panels, belts and hoses, and clamps. :-)
Similar technology to Chrome's Native Client Messaging in Firefox? We want to replace a custom NPAPI interface between a browser based web application and an client side daemon process. Is there a similar technology to Chrome's Native Client Messaging in Firefox?
js-ctypes[1] is probably the closest alternative for Mozilla. <https://developer.mozilla.org/en-US/docs/Mozilla/js-ctypes> I have a C++ module that I compile as a binary executable for native-messaging or as a library for js-ctypes. The difference between the two is primarily that native-messaging calls a binary executable and performs stdin/stdout data exchange, and js-ctypes opens a static/shared library (via dlopen) and calls exposed methods of your library which can return compatible data types[2] and optionally call a passed JavaScript callback method. [1] <https://developer.mozilla.org/en-US/docs/Mozilla/js-ctypes> [2] <https://developer.mozilla.org/en-US/docs/Mozilla/js-ctypes/js-ctypes_reference/ctypes#Predefined_data_types>
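As a rough illustration, the js-ctypes side could look something like this (the library path and the exported C function are hypothetical; substitute whatever your daemon's client library actually exposes):

```
Components.utils.import("resource://gre/modules/ctypes.jsm");

// Open the daemon's client library and declare one exported C function:
//   const char *send_message(const char *msg);
let lib = ctypes.open("/usr/lib/libdaemonclient.so");
let sendMessage = lib.declare("send_message",
                              ctypes.default_abi,
                              ctypes.char.ptr,   // return type
                              ctypes.char.ptr);  // argument type

let reply = sendMessage("hello from the browser").readString();
lib.close();
```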
Lifetime differences between references to zero sized types I came across an interesting case while playing with zero sized types (ZSTs). A reference to an empty array will mold to a reference with any lifetime: ``` fn mold_slice<'a, T>(_: &'a T) -> &'a [T] { &[] } ``` I thought about how that is possible, since basically the "value" here lives on the stack frame of the function, yet the signature promises to return a reference to a value with a longer lifetime (`'a` contains the function call). I came to the conclusion that it is because the empty array `[]` is a ZST which basically only exists statically. The compiler can "fake" the value the reference refers to. So I tried this: ``` fn mold_unit<'a, T>(_: &'a T) -> &'a () { &() } ``` and then the compiler complained: ``` error: borrowed value does not live long enough --> <anon>:7:6 | 7 | &() | ^^ temporary value created here 8 | } | - temporary value only lives until here | note: borrowed value must be valid for the lifetime 'a as defined on the block at 6:40... --> <anon>:6:41 | 6 | fn mold_unit<'a, T>(_: &'a T) -> &'a () { | ^ ``` It doesn't work for the unit `()` type, and it also does not work for an empty struct: ``` struct Empty; // fails to compile as well fn mold_struct<'a, T>(_: &'a T) -> &'a Empty { &Empty } ``` Somehow, the unit type and the empty struct are treated differently from the empty array. Are there any additional differences between those values besides just being ZSTs? Do the differences (`&[]` fitting any lifetime and `&()`, `&Empty` not) nothing to do with ZSTs at all? [Playground example](https://play.rust-lang.org/?gist=b9e82d92875a6a53fe2e38bdd540390c&version=stable&backtrace=0)
It's not that `[]` is zero-sized (though it is), it's that `[]` is a constant, compile-time literal. This means the compiler can store it in the executable, rather than having to allocate it dynamically on the heap or stack. This, in turn, means that pointers to it last as long as they want, because data in the executable isn't going anywhere. Annoyingly, this doesn't extend to something like `&[0]`, because Rust isn't *quite* smart enough to realise that `[0]` is *definitely* constant. You can work around this by using something like: ``` fn mold_slice<'a, T>(_: &'a T) -> &'a [i32] { const C: &'static [i32] = &[0]; C } ``` This trick also works with *anything* you can put in a `const`, like `()` or `Empty`. Realistically, however, it'd be simpler to just have functions like this return a `&'static` borrow, since that can be coerced to any *other* lifetime automatically. **Edit**: the previous version noted that `&[]` is not zero sized, which was a little tangential.
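As an aside, the same `const` trick makes the unit version from the question compile; a sketch:

```rust
fn mold_unit<'a, T>(_: &'a T) -> &'a () {
    const U: &'static () = &();
    U // a &'static () coerces to &'a () for any 'a
}
```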
hover this + after and this (n+1) and before [jsfiddle.net/rqJAY/](http://jsfiddle.net/rqJAY/) HTML:

```
<table class="table_database_edit">
    <tr><td>Magazyn wycieraczek</td><td>Edytuj</td><td>Usuń</td></tr>
    <tr class="more" id=""><td colspan="3">aa</td></tr>
    <tr><td>test</td><td>Edytuj</td><td>Usuń</td></tr>
    <tr class="more" id=""><td colspan="3">aa</td></tr>
</table>
```

CSS:

```
tr.more{
    #display: none;
}

table.table_database_edit{
    width: 100%;
    border-collapse:collapse;
    border-spacing: 0;
}

table.table_database_edit tr:nth-child(4n+3), table.table_database_edit tr:nth-child(4n+4){
    background-color: #EFF0F1;
}

table.table_database_edit tr:hover:nth-child(n) + tr:nth-child(n):after{
    background-color: #FFFFCC;
}

table.table_database_edit tr:hover:nth-child(n+1) + tr:nth-child(n+1):before{
    background-color: #FFFFCC;
}
```

I have a table. Every two rows is a group. The groups alternate background color. Rows 1 and 2 are white. Rows 3 and 4 are gray. Rows 5 and 6 are white. Etc. I want to set the background-color to yellow when you hover over a group. How can I do it?
What you're looking for is `tbody`. The `tbody` element is similar to `colgroup`, but used for grouping rows. From there, the CSS is simple: ``` <table class="table_database_edit"> <tbody> <tr><td>Magazyn wycieraczek</td><td>Edytuj</td><td>Usuń</td></tr> <tr class="more" id=""><td colspan="3">aa</td></tr> </tbody> <tbody> <tr><td>test</td><td>Edytuj</td><td>Usuń</td></tr> <tr class="more" id=""><td colspan="3">aa</td></tr> </tbody> </table> ``` CSS: ``` tr.more{ #display: none; } table.table_database_edit{ width: 100%; border-collapse:collapse; border-spacing: 0; } table.table_database_edit tbody:nth-child(odd) tr { background-color: #EFF0F1; } table.table_database_edit tbody:hover tr { background-color: #FFFFCC; } ``` <http://jsfiddle.net/rqJAY/13/>
Google Maps API using Retrofit GET call I want to pass in the latitude and longitude values to the `location` attribute of the Google Maps Autocomplete API call but I have no idea how to form the GET call in Retrofit. The URL should ultimately look like this: ``` https://maps.googleapis.com/maps/api/place/autocomplete/json?&types=address&input=user_input&location=37.76999,-122.44696&radius=50000&key=API_KEY ``` What I currently have in my Retrofit interface: ``` public interface GooglePlacesAutoCompleteAPI { String BASE_URL = "https://maps.googleapis.com/maps/api/place/autocomplete/"; String API_KEY = "mykey"; //not the actual key obviously //This one works fine @GET("json?&types=(cities)&key=" + API_KEY) Call<PlacesResults> getCityResults(@Query("input") String userInput); //This is the call that does not work @GET("json?&types=address&key=" + API_KEY) Call<PlacesResults> getStreetAddrResults(@Query("input") String userInput, @Query("location") double latitude, double longitude, @Query("radius") String radius); } ``` My error is: `java.lang.IllegalArgumentException: No Retrofit annotation found. (parameter #3) for method GooglePlacesAutoCompleteAPI.getStreetAddrResults` So how can I correctly setup the GET method for `getStreetAddrResults()`? Also, are my data types correct for latitude/longitude and radius? Thanks for any help!
Your interface should look like this: ``` public interface API { String BASE_URL = "https://maps.googleapis.com"; @GET("/maps/api/place/autocomplete/json") Call<PlacesResults> getCityResults(@Query("types") String types, @Query("input") String input, @Query("location") String location, @Query("radius") Integer radius, @Query("key") String key); } ``` And use it like this: ``` Retrofit retrofit = new Retrofit.Builder() .baseUrl(API.BASE_URL) .addConverterFactory(GsonConverterFactory.create()) .build(); API service = retrofit.create(API.class); service.getCityResults(types, input, location, radius, key).enqueue(new Callback<PlacesResults>() { @Override public void onResponse(Call<PlacesResults> call, Response<PlacesResults> response) { PlacesResults places = response.body(); } @Override public void onFailure(Call<PlacesResults> call, Throwable t) { t.printStackTrace(); } }); ``` Of course you should give a value to the parameters.
Drools - how to find out which rules were matched? I have one .DRL file which has, say, 10 rules. Once I insert a fact, some rules may be matched. How do I find out which rules were matched programmatically?
*Note that this answer is valid for versions of Drools up to 5.x. If you have moved on to 6 or above, then take a look at the modified answer from @melchoir55. I haven't tested it myself, but I'll trust that it works.* To keep track of rule activations, you can use an AgendaEventListener. Below is an example, as found here: <https://github.com/gratiartis/sctrcd-payment-validation-web/blob/master/src/main/java/com/sctrcd/drools/util/TrackingAgendaEventListener.java> You just need to create such a listener and attach it to the session like so: ``` ksession = kbase.newStatefulKnowledgeSession(); AgendaEventListener agendaEventListener = new TrackingAgendaEventListener(); ksession.addEventListener(agendaEventListener); //... ksession.fireAllRules(); //... List<Activation> activations = agendaEventListener.getActivationList(); ``` Note that there is also WorkingMemoryEventListener which enables you to do the same with tracking insertions, updates and retractions of facts. Code for a tracking & logging AgendaEventListener: ``` package com.sctrcd.drools.util; import java.util.ArrayList; import java.util.List; import java.util.Map; import org.drools.definition.rule.Rule; import org.drools.event.rule.DefaultAgendaEventListener; import org.drools.event.rule.AfterActivationFiredEvent; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * A listener that will track all rule firings in a session. * * @author Stephen Masters */ public class TrackingAgendaEventListener extends DefaultAgendaEventListener { private static Logger log = LoggerFactory.getLogger(TrackingAgendaEventListener.class); private List<Activation> activationList = new ArrayList<Activation>(); @Override public void afterActivationFired(AfterActivationFiredEvent event) { Rule rule = event.getActivation().getRule(); String ruleName = rule.getName(); Map<String, Object> ruleMetaDataMap = rule.getMetaData(); activationList.add(new Activation(ruleName)); StringBuilder sb = new StringBuilder("Rule fired: " + ruleName); if (ruleMetaDataMap.size() > 0) { sb.append("\n With [" + ruleMetaDataMap.size() + "] meta-data:"); for (String key : ruleMetaDataMap.keySet()) { sb.append("\n key=" + key + ", value=" + ruleMetaDataMap.get(key)); } } log.debug(sb.toString()); } public boolean isRuleFired(String ruleName) { for (Activation a : activationList) { if (a.getRuleName().equals(ruleName)) { return true; } } return false; } public void reset() { activationList.clear(); } public final List<Activation> getActivationList() { return activationList; } public String activationsToString() { if (activationList.size() == 0) { return "No activations occurred."; } else { StringBuilder sb = new StringBuilder("Activations: "); for (Activation activation : activationList) { sb.append("\n rule: ").append(activation.getRuleName()); } return sb.toString(); } } } ```
How to create reusable Components in .NET MAUI? I have just recently started using .NET MAUI. But now I'm wondering how to use a piece of code, e.g. a self-made navigation bar, on all my pages, because it doesn't make sense to write the same code on all 10 pages. I'd like to know if there is a way to create a component that can be reused like in React or Angular. PS: This question is not specific to a navigation bar but about the general reuse of code in .NET MAUI. I have so far watched various videos & articles on this topic; however, they are more about custom controls and did not help me. Most articles corresponded to what was conveyed in [this](https://learn.microsoft.com/en-us/events/dotnetconf-focus-on-maui/code-reuse-with-dotnet-maui) video. I also came across [this](https://stackoverflow.com/questions/73948641/is-it-possible-to-create-global-objects-in-net-maui) article, but it didn't help me either. Thanks for your help :)
First, you can create a new .xaml file named `Name.xaml`. You can write some code in it.

```
<?xml version="1.0" encoding="utf-8" ?>
<ContentView xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="CusComponents.Name">
  <ContentView.Content>
      <StackLayout Padding="10">
          <Label Text="Name" FontAttributes="Bold" />
          <Label Text="First name" />
          <Entry x:Name="FirstName" Placeholder="First name" />
          <Label Text="Last name" />
          <Entry x:Name="LastName" Placeholder="Last name" />
      </StackLayout>
  </ContentView.Content>
</ContentView>
```

Second, you can use it in the page you want like this. You need to add an xmlns reference to the top of the XML file - this is like a using statement in a C# file. Using the namespace structure for the sample project, this will be `xmlns:custom_components="clr-namespace:CusComponents"`.

```
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:custom_components="clr-namespace:CusComponents"
             x:Class="CusComponents.MainPage">
  <custom_components:Name />
</ContentPage>
```

Here is the view of the code: [![enter image description here](https://i.stack.imgur.com/IpfqQ.png)](https://i.stack.imgur.com/IpfqQ.png)
Determine reverse order of data given X/Y coordinates Imagine an electrical connector. It has pins. Each pin has a corresponding X/Y location in space. I am trying to figure out how to mirror, or 'flip' each pin on the connector given their X/Y coordinate. note: I am using pandas version 23.4 [![Rough Paint Sketch Visual](https://i.stack.imgur.com/xo3Un.jpg)](https://i.stack.imgur.com/xo3Un.jpg) We can assume that x,y, and pin are not unique but connector is. Connectors can be any size, so two rows of 5, 3 rows of 6, etc. ``` x y pin connector 1 1 A 1 2 1 B 1 3 1 C 1 1 2 D 1 2 2 E 1 3 2 F 1 1 1 A 2 2 1 B 2 3 1 C 2 1 2 D 2 2 2 E 2 3 2 F 2 ``` The dataframe column, 'flip', is the solution I am trying to get to. Notice the pins that would be in the same row are now in reverse order. ``` x y pin flip connector 1 1 A C 1 2 1 B B 1 3 1 C A 1 1 2 D F 1 2 2 E E 1 3 2 F D 1 1 1 A C 2 2 1 B B 2 3 1 C A 2 1 2 D F 2 2 2 E E 2 3 2 F D 2 ```
IIUC, try using `[::-1]`, a reversing slice, and `groupby` with `transform`:

```
df['flip'] = df.groupby(['connector','y'])['pin'].transform(lambda x: x[::-1])
```

Output:

```
    x  y pin  connector flip
0   1  1   A          1    C
1   2  1   B          1    B
2   3  1   C          1    A
3   1  2   D          1    F
4   2  2   E          1    E
5   3  2   F          1    D
6   1  1   A          2    C
7   2  1   B          2    B
8   3  1   C          2    A
9   1  2   D          2    F
10  2  2   E          2    E
11  3  2   F          2    D
```
bash: nano: command not found at Windows git bash I am using git version 2.7.0.windows.1 on a Windows PC, and I used the following command:

```
$ nano README
```

which gives me:

```
bash: nano: command not found
```

Now how can I install the nano text editor in Git Bash?
A little modification of the previous solution (@Simopaa) worked for me on Windows 10 (without Chocolatey):

1. Download [nano-git](https://www.nano-editor.org/dist/win32-support/)
2. Move the `nano-git-xxx.exe` to (for example) `C:\Program Files\Git\bin`.
3. Modify the `.gitconfig` file with the following (**single** and **double quotes** are important):

```
[core]
    editor = "winpty '/c/Program Files/Git/bin/nano-git-0d9a7347243.exe'"
```

4. (Optional step) Make `nano` available for editing in general: Create or edit one of the startup scripts of bash (e.g. `~/.bash_profile`) with the following:

```
export PATH="/c/Program Files/Git/bin:$PATH"
alias nano="winpty nano"
```
REMOTE\_USER variable without @DOMAIN suffix I administer an application that enables single sign-on with valid AD users and currently runs on IIS. For a performance increase, I have a task to migrate the web layer to Apache/PHP on Linux. I have an AD on a Win2012 Server and Apache on CentOS. I have successfully joined the domain (TEST.COM), and can log in to CentOS with Windows user accounts. I have also configured Kerberos and Samba and the SSO works, with one problem. The users from AD are imported into the application without the domain name prefix/suffix. So, if my user is TEST\myUser in the AD, in the application the user is just myUser. The application reads the username from the REMOTE\_USER variable, but the username is appended with the @DOMAIN string, resulting in the full username being myUser@TEST.COM. Naturally the application thinks it is not a valid user, since it expects it to be just myUser. If I add a new user in the application called myUser@TEST.COM, then SSO works fine. Is there a way to discard the @DOMAIN attribute in the REMOTE\_USER variable? How would you do it and which files need to be configured?
mod\_auth\_kerb is Kerberos-specific: it implements Kerberos authentication via HTTP Negotiate. In the REMOTE\_USER environment variable, it therefore reports the Kerberos identification ("principal name") of the authenticated client. Kerberos principal names are written foo/bar/baz/...@REALM; the leading components are called "instances" (one most often sees only one or two), and the "realm" is a trust domain within the Kerberos system, a built-in federation mechanism. In AD, the Kerberos realm is the same as the AD "domain" name, in upper case. mod\_auth\_kerb (a new enough version) has a feature called KrbLocalUserMapping. This calls the Kerberos library function krb5\_aname\_to\_localname() to translate a principal name to a "local name;" that is, something meaningful on the local host. What this function does depends on the Kerberos implementation. In MIT Kerberos, you can customize the mapping with "auth\_to\_local" rules in krb5.conf. The default rule just translates foo@[default realm] -> foo, which is sufficient in simple situations in which there's a single realm and your usernames are the same as your Kerberos principal names. However, you might want more complex rules. For example, we have a convention whereby Windows administrators have a "user-admin" account with domain administrator rights, in addition to their "user" accounts. When logged into their "admin" accounts, they would get rejected when going to authenticated web services running on Unix, since "user-admin" was not recognized. We just added a mapping so that user-admin@REALM gets mapped to "user" just as user@REALM does, and this was immediately fixed transparently for all web apps. The other nice thing about doing it this way is that it works for any kerberized service which uses krb5\_aname\_to\_localname(), as opposed to doing it with mod\_map\_user which would only apply to Apache. Some people suggested just blanket mapping all user@REALM names to "user", regardless of the realm (this is what the suggested mod\_map\_user solution would do). Note that this is a potential security problem: if you have multiple Kerberos realms connected by cross-realm trust, then the realm portion becomes meaningful; it is part of the user identification. If you just strip it, that means an administrator in another realm can impersonate a local user to Apache just by creating an account with the same name.
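To make that concrete: with mod\_auth\_kerb you would set `KrbLocalUserMapping On` in the Apache config, and with MIT Kerberos the "user-admin" convention described above could be expressed in krb5.conf along these lines (a sketch using the question's TEST.COM realm; double-check the auth\_to\_local rule syntax against your MIT Kerberos version):

```
[realms]
    TEST.COM = {
        # map user-admin@TEST.COM to "user", same as user@TEST.COM
        auth_to_local = RULE:[1:$1](^.*-admin$)s/-admin$//
        # keep the default user@TEST.COM -> user mapping
        auth_to_local = DEFAULT
    }
```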
Create table that lists worksheet visibility Following my previous question [Create a table that lists macros in a workbook or worksheet](https://codereview.stackexchange.com/questions/156255/create-a-table-that-lists-macros-in-a-workbook-or-worksheet), here's my Sub to determine worksheet visibility in a workbook. This arises from updating code that used extremely hard-to-understand logic and several disparate NamedRanges to subsequently hide/reveal sheets.

- Is there a better/optimal way to create a string as opposed to what I use: `join(Array(param1, param2,...,paramN), DELIMIT)`? I only have brief exposure to the [StringBuilder Class](https://msdn.microsoft.com/en-us/library/system.text.stringbuilder(v=vs.110).aspx) and would like to know how best to do this.

---

```
Public Sub ListWorksheetVisibilityInActiveWORKBOOK()
    Const DELIMIT As String = "|", COLSPAN As Long = 2
    Dim HEADER As String
    Dim inputCell As Range
    Dim Rw As Long, Col As Long
    Dim Ws As Worksheet
    Dim ASU As Boolean
    Dim TableName As String
    HEADER = join(Array("SheetName", "Visibility"), DELIMIT)

    On Error Resume Next 'Error handling to allow for cancelation
    Set inputCell = GetInputCell("Select where you want the table to go")
    If inputCell Is Nothing Then GoTo CleanExit
    On Error GoTo 0 'Clear error handling

    TableName = Application.InputBox("Table name", Default:="WorksheetVisibility")
    If TableName = "False" Then
        MsgBox "Table name not entered. No table has been created."
        GoTo CleanExit
    End If

    'Check to avoid overwriting information below
    Dim tblVisibility As Range, rngFormulas As Range, rngConstants As Range
    Set tblVisibility = inputCell.Resize(ActiveWorkbook.Worksheets.count + 1, COLSPAN)
    On Error Resume Next 'If no cells are found error wont cause issue
    Set rngConstants = tblVisibility.SpecialCells(xlCellTypeConstants)
    Set rngFormulas = tblVisibility.SpecialCells(xlCellTypeFormulas)
    On Error GoTo 0 'Clears error handling
    If Not rngConstants Is Nothing Or Not rngFormulas Is Nothing Then
        Dim Msg As String
        Msg = "Some cells below will be overwritten. Overwrites cannot be undone..." & vbNewLine & vbNewLine & "Do you wish to proceed?"
        If MsgBox(Msg, vbYesNo + vbCritical, "Your attention please!") = vbNo Then End
    End If

    ASU = Application.ScreenUpdating
    Application.ScreenUpdating = False

    inputCell.Value2 = HEADER
    Rw = inputCell.row + 1
    Col = inputCell.Column
    Dim Value As String
    For Each Ws In ActiveWorkbook.Worksheets
        Value = join(Array(Ws.Name, Ws.Visible), DELIMIT)
        Cells(Rw, Col).Value2 = Value
        Rw = Rw + 1
    Next

    tblVisibility.Columns(1).TextToColumns DataType:=xlDelimited, Other:=True, OtherChar:=DELIMIT
    ActiveSheet.ListObjects.Add(xlSrcRange, tblVisibility, XlListObjectHasHeaders:=XlYesNoGuess.xlYes, Destination:=inputCell).Name = TableName

CleanExit:
    Application.ScreenUpdating = ASU
End Sub

Private Function GetInputCell(ByVal Prompt As String) As Range
    On Error GoTo ErrorHandler
    Set GetInputCell = Application.InputBox(Prompt, Type:=8)
    Exit Function
ErrorHandler:
    MsgBox "User Cancelled"
    Set GetInputCell = Nothing
End Function
```
I don't *think* there is a StringBuilder() class in VBA, only some tricks using `Mid`.

> ```
> Const DELIMIT As String = "|", COLSPAN As Long = 2
> Dim HEADER As String
> ```

This is a little confusing: UPPERCASE should indicate a constant, which it does with `DELIMIT` - but `HEADER` is not (and cannot be) a constant. And that leaves me without a `Dim` or a `Const` for `COLSPAN`. Try to be a little more consistent with that - it will be much easier to tell which variables are which.

> ```
> Dim ASU as Boolean
> ASU = Application.ScreenUpdating
> Application.ScreenUpdating = False
> Application.ScreenUpdating = ASU
> ```

Now, I *know* `ASU` can't be a constant. Maybe `screenIsUpdating`? But then again, using a variable to store this is overkill unless you are trying to preserve the user's settings - which is what this does:

```
Dim screenIsUpdating as Boolean
screenIsUpdating = Application.ScreenUpdating
Application.ScreenUpdating = False
Application.ScreenUpdating = screenIsUpdating
```

This way you store the user's setting, but still turn it off for your procedure. These variables could use better names, even if just `i` and `j` -

> ```
> Dim Rw As Long, Col As Long
> Dim Ws As Worksheet
> ```

`Ws` works, but I don't recommend it; it will start to look pretty messy once you have a lot going on. Also, local variables should start with a lowercase letter [Standard VBA naming conventions](https://msdn.microsoft.com/en-us/library/1s46s4ew(v=vs.140).aspx).

> ```
> Dim tblVisibility As Range, rngFormulas As Range, rngConstants As Range
> ```

I see `tblVisibility` and think "oh, must be a boolean" - but it's a range. And `rngFormulas` and `rngConstants` seem to have the same issue, which is why they are prefixed with `rng` - yeah?

```
tableRange
formulaRange
constantRange
```

But, what is `constantRange`? If it's constant, it doesn't need a range.

> ```
> Cells(Rw, Col).Value2 = Value
> ```

You did a good job qualifying most things, but this `Cells` isn't qualified - it should be `inputCell.Parent.Cells` - or just give that target sheet a variable.

> ```
> If MsgBox(Msg, vbYesNo + vbCritical, "Your attention please!") = vbNo Then End
> ```

Here's an `End` again; try to avoid those. Also I think `Msg` (as well as some other fixed strings) could be a `Const`.

> ```
> Rw = inputCell.Row + 1
> Col = inputCell.Column
> Dim Value As String
> For Each Ws In ActiveWorkbook.Worksheets
>     Value = Join(Array(Ws.Name, Ws.Visible), DELIMIT)
>     Cells(Rw, Col).Value2 = Value
>     Rw = Rw + 1
> Next
> ```

This loop is pretty confusing to me. You are iterating up the rows, but have a loop for the sheets?

```
For index = 1 To ThisWorkbook.Worksheets.Count
    targetSheet.Cells(index + 1, tableColumn) = Join(Array(Worksheets(index).Name, Worksheets(index).Visible), DELIMITER)
Next
```

But, for that `Join` string, I would probably do it a different way -

```
Dim index As Long
Dim tableArray() As String
Dim sheetCount As Long

sheetCount = ThisWorkbook.Worksheets.Count
ReDim tableArray(1 To sheetCount, 1 To 2)
For index = LBound(tableArray) To UBound(tableArray)
    tableArray(index, 1) = ThisWorkbook.Worksheets(index).Name
    tableArray(index, 2) = ThisWorkbook.Worksheets(index).Visible
Next
```

Arrays are faster and you can just `Transpose` it into your table range. Or *maybe* just [convert the array into a table](https://stackoverflow.com/a/32827905/1161309). 
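As a sketch of that last suggestion (hypothetical names; `inputCell` and `sheetCount` are borrowed from the snippets above): because `tableArray` is already dimensioned rows-by-columns, it can be written to the sheet in a single assignment, with no cell-by-cell loop and no `Transpose` needed for this shape.

```
Dim targetRange As Range
' one row per sheet, two columns, starting just below the header cell
Set targetRange = inputCell.Offset(1, 0).Resize(sheetCount, 2)
targetRange.Value2 = tableArray
```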
Oh, and your procedure name > > > ``` > Public Sub ListWorksheetVisibilityInActiveWORKBOOK() > > ``` > > Good job on being descriptive, but it's a bit much. `CreateSheetVisibilityTable()` maybe?
SQL Server 2005 ROW\_NUMBER() without ORDER BY I am trying to insert from one table into another using

```
DECLARE @IDOffset int;
SELECT @IDOffset = MAX(ISNULL(ID,0))
FROM TargetTable

INSERT INTO TargetTable(ID, FIELD)
SELECT [Increment] + @IDOffset
      ,FieldValue
FROM SourceTable
WHERE [somecondition]
```

TargetTable.ID is not an identity column, which is why I have to find a way to auto-increment it myself. I know I can use a cursor, or create a table variable with an identity column and a FieldValue field, populate that, then use it in my `insert into...select`, but that is not very efficient. I tried using the ROW\_NUMBER function to increment, but I really don't have a legitimate ORDER BY field in the SourceTable that I can use, and would like to keep the original order of the SourceTable (if possible). Can anyone suggest anything?
You can avoid specifying an explicit ordering as follows:

```
INSERT dbo.TargetTable (ID, FIELD)
SELECT
   Row_Number() OVER (ORDER BY (SELECT 1))
      + Coalesce(
         (SELECT Max(ID) FROM dbo.TargetTable WITH (TABLOCKX, HOLDLOCK)),
         0
      ),
   FieldValue
FROM dbo.SourceTable
WHERE {somecondition};
```

However, please note that this is merely a way to avoid specifying an ordering and **does NOT guarantee** that any original data ordering will be preserved. There are other factors that can cause the result to be ordered, such as an `ORDER BY` in the outer query. To fully understand this, one must realize that the concept "not ordered (in a particular way)" is not the same as "retaining original order" (which IS ordered in a particular way!). I believe that from a pure relational database perspective, the latter concept **does not exist**, *by definition* (though there may be database implementations that violate this, SQL Server is not one of them). The reason for calculating the `Max` in the query and for adding the lock hints is to prevent errors due to a concurrent process inserting using the same value you plan to use, in between the parts of the query executing. The only other semi-reasonable workaround would be to perform the Max() and INSERT in a loop some number of times until it succeeds (still far from an ideal solution). Using an identity column is **far superior**. It's not good for concurrency to exclusively lock entire tables, and that is an understatement. Note: Many people use `(SELECT NULL)` to get around the "no constants allowed in the ORDER BY clause of a windowing function" restriction. For some reason, I prefer `1` over `NULL`. What you use is up to you.
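To illustrate the identity-column alternative, here is a hypothetical sketch (assuming the schema can be changed; table and column names follow the question):

```
-- With an IDENTITY column there is no MAX()-plus-table-lock dance at all:
-- the engine hands out the next value atomically on each insert.
CREATE TABLE dbo.TargetTable
(
    ID    int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    FIELD varchar(100) NOT NULL
);

INSERT dbo.TargetTable (FIELD)
SELECT FieldValue
FROM dbo.SourceTable
WHERE {somecondition};  -- ID values are generated automatically
```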
How to grow filesystem to use unallocated space in partition? I duplicated a hard disk to a new, larger one using the method suggested in [Full DD copy from hdd to hdd](https://unix.stackexchange.com/questions/144172/full-dd-copy-from-hdd-to-hdd). After doing that, `df -h` reports the original, smaller partition sizes from the original disk, and `gparted` highlights the disparity and offers to fix them, though it seems unwise as they are mounted. If you look closely at the image you can see that `Used + Unused < Size` for the partitions with the yellow warning signs. What command-line tools can be used to fix the issue, and will it be safe for `gparted` to do it on a mounted partition live? Ideally I should have done this before switching over to the target disk and rebooting from it. [![Gparted Snapshot](https://i.stack.imgur.com/7bbXT.png)](https://i.stack.imgur.com/7bbXT.png) Below is the information dialog from gparted about the discrepancy; I edited the title to describe it better. [![Gparted information dialog](https://i.stack.imgur.com/F49dA.png)](https://i.stack.imgur.com/F49dA.png)
- If `gparted` only has to extend the partition or filesystem into unused space (immediately following the partition), then it should be safe to let it extend the partition and/or fs.
- If, however, it has to MOVE any partitions around to make space for resizing, you'll have to boot with a [gparted Live CD](http://gparted.org/livecd.php)
- See the man page for `resize2fs` (which is the command-line tool gparted will use to grow an ext2, ext3, and ext4 filesystem) for more details about resizing those filesystems. For ext2/3/4, growing a filesystem is generally not a problem and can safely be done while the fs is mounted. Shrinking a filesystem, however, is more troublesome and should be done while the fs is unmounted. If it's the rootfs, that means booting to a rescue CD/USB/PXE etc.

---

BTW, both `dd` and `cat` are amongst the worst ways to copy a Linux system to another hard disk. Use [Clonezilla](http://clonezilla.org/), that's what it's for.
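To answer the command-line part concretely, here is a hedged sketch assuming an ext4 filesystem on `/dev/sda2` with free space directly after it (your device names will differ):

```
# 1. Grow the partition to fill the free space after it.
#    growpart comes from the cloud-utils package; parted's resizepart works too.
growpart /dev/sda 2

# 2. Grow the ext2/3/4 filesystem to fill the partition.
#    With no size argument, resize2fs grows to the partition size.
#    Growing is safe while the filesystem is mounted.
resize2fs /dev/sda2

# 3. Verify
df -h /dev/sda2
```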
How to assign data returned from $promise to global variable I've two services, `service1` and `service2`, and I would like to invoke a method of `service1` from `service2`. Ideally, I would like to assign the data returned from `service1.getMethod()` to a global variable declared as `var result = []`. Here is the code:

## service1

```
.factory('service1', function (dependencies...) {
    var getMethod1 = function () {
        ...//making http get call
        return deferred.promise;
    };
    return {
        getMethod1 : getMethod1
    };
});
```

## service2

```
.factory('service2', function (dependencies...) {
    var result = [];
    var getMethod2 = function () {
        Service1.getMethod1().then(function (data) {
            result = data;
        });
    };

    //RUN METHOD
    getMethod2();

    //Here ideally, I would like result to contain `[object, object, object]`
    console.log(result); //instead it prints `undefined`
});
```

So ideally, I would like to use what will be in `result` in `service2`'s other functions, i.e. `result[0].name` etc. Not sure if what I'm doing is the right approach. Please provide a plunker demo or code snippet example, and if you're not sure about something, please write a comment below. Thanks!
You can't work with asynchronous code the way you are attempting: the `result` variable is simply not yet populated at the moment you try to use it. Instead you should make `getMethod2` return a promise too, and use its `then` method:

```
.factory('service2', function (dependencies...) {
    var getMethod2 = function () {
        return Service1.getMethod1();
    };

    // RUN METHOD
    getMethod2().then(function(result) {
        console.log(result);
    });
});
```

You can also cache returned data:

```
.factory('service2', function (dependencies...) {
    var result;
    var getMethod2 = function () {
        return result ? $q.when(result) : Service1.getMethod1().then(function(data) {
            result = data;
            return result;
        });
    };

    // RUN METHOD
    getMethod2().then(function(result) {
        console.log(result);
    });
});
```
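If other functions in `service2` need the data (the `result[0].name` case from the question), they should also consume the promise rather than reach for a shared variable. A hypothetical sketch building on the cached `getMethod2` above:

```
var getFirstName = function () {
    return getMethod2().then(function (result) {
        // assumes the resolved data is a non-empty array of objects with a name property
        return result[0].name;
    });
};
```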
Seaborn scatterplot matrix - adding extra points with custom styles I'm doing a k-means clustering of activities on some open source projects on GitHub and am trying to plot the results together with the cluster centroids using a [Seaborn Scatterplot Matrix](http://stanford.edu/~mwaskom/software/seaborn/examples/scatterplot_matrix.html). I can successfully plot the results of the clustering analysis (example tsv output below)

```
user_id issue_comments issues_created pull_request_review_comments pull_requests category
1   0.14936519790888722 2.0100502512562812  0.0 0.60790273556231    Group 0
1882    0.11202389843166542 0.5025125628140703  0.0 0.0 Group 1
2   2.315160567587752   20.603015075376884  0.13297872340425532 1.21580547112462    Group 2
1789    36.8185212845407    82.91457286432161   75.66489361702128   74.46808510638297   Group 3
```

The problem I'm having is that I'd like to also plot the *centroids* of the clusters on the matrix plot. Currently my plotting script looks like this:

```
import seaborn as sns
import pandas as pd
from pylab import savefig

sns.set()

# By default, Pandas assumes the first column is an index
# so it will be skipped. In our case it's the user_id
data = pd.DataFrame.from_csv('summary_clusters.tsv', sep='\t')
grid = sns.pairplot(data, hue="category", diag_kind="kde")
savefig('normalised_clusters.png', dpi = 150)
```

This produces the expected output: [![matrix plot](https://i.stack.imgur.com/Nwqh7.png)](https://i.stack.imgur.com/Nwqh7.png) I can think of two ways to do this:

1. Create a new 'CENTROID' category and just plot this together with the other points.
2. Manually add extra points to the plots after calling `sns.pairplot(data, hue="category", diag_kind="kde")`.

If (1) is the solution then I'd like to be able to customise the marker (perhaps a star?) to make it more prominent. If (2) I'm all ears. I'm pretty new to Seaborn and Matplotlib so any assistance would be very welcome :-)
`pairplot` isn't going to be all that well suited to this sort of thing, but it's possible to make it work with a few tricks. Here's what I would do. ``` import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.cluster import KMeans sns.set_color_codes() # Make some random iid data cov = np.eye(3) ds = np.vstack([np.random.multivariate_normal([0, 0, 0], cov, 50), np.random.multivariate_normal([1, 1, 1], cov, 50)]) ds = pd.DataFrame(ds, columns=["x", "y", "z"]) # Fit the k means model and label the observations km = KMeans(2).fit(ds) ds["label"] = km.labels_.astype(str) ``` Now comes the non-obvious part: you need to create a dataframe with the centroid locations and then combine it with the dataframe of observations while identifying the centroids as appropriate using the `label` column: ``` centroids = pd.DataFrame(km.cluster_centers_, columns=["x", "y", "z"]) centroids["label"] = ["0 centroid", "1 centroid"] full_ds = pd.concat([ds, centroids], ignore_index=True) ``` Then you just need to use `PairGrid`, which is a bit more flexible than `pairplot` and will allow you to map other plot attributes by the hue variable along with the color (at the expense of not being able to draw histograms on the diagonals): ``` g = sns.PairGrid(full_ds, hue="label", hue_order=["0", "1", "0 centroid", "1 centroid"], palette=["b", "r", "b", "r"], hue_kws={"s": [20, 20, 500, 500], "marker": ["o", "o", "*", "*"]}) g.map(plt.scatter, linewidth=1, edgecolor="w") g.add_legend() ``` [![enter image description here](https://i.stack.imgur.com/mVRB1.png)](https://i.stack.imgur.com/mVRB1.png) An alternate solution would be to plot the observations as normal then change the data attributes on the `PairGrid` object and add a new layer. I'd call this a hack, but in some ways it's more straightforward. ``` # Plot the data g = sns.pairplot(ds, hue="label", vars=["x", "y", "z"], palette=["b", "r"]) # Change the PairGrid dataset and add a new layer centroids = pd.DataFrame(km.cluster_centers_, columns=["x", "y", "z"]) g.data = centroids g.hue_vals = [0, 1] g.map_offdiag(plt.scatter, s=500, marker="*") ```
Does a union always have default value of zero? Please consider the following code:

```
#include <iostream>
using namespace std;
union{
    int i;
}u;
int main(){
    int k=5;
    cout<<k+u.i<<endl;
    system("PAUSE");
    return EXIT_SUCCESS;
}
```

This code shows me the output 5, which suggests to me that the variable i in the union has a default value of 0, but the same code on ideone.com shows a warning like this:

```
prog.cpp:6: warning: non-local variable ‘<anonymous union> u’ uses anonymous type
```

and then prints 5 as well. Lastly, the core of this problem comes from the algorithm that calculates the reciprocal of the square root; here is the code:

```
#include<iostream>
#include<math.h>
using namespace std;
float invsqrt(float x){
    float xhalf=0.5f*x;
    union{
        float x;
        int i;
    }u;
    u.x=x;
    u.i=0x5f3759df-(u.i>>1);
    x=u.x*(1.5f-xhalf*u.x*u.x);
    return x;
}
int main(){
    float x=234;
    cout<<invsqrt(x)<<endl;
    return 0;
}
```

It shows me output as well, but my question is: is this code good? I mean, since `int i` is not initialized, can I rely on any compiler treating its value as zero? I am curious, so please tell me something about this. Also, if something in my question is not clear, tell me; I am not a native English speaker.
> Does a union always have default value of zero?

The language standard says this:

> If an object that has static or thread storage duration is not initialized explicitly, then:
>
> - if it has pointer type, it is initialized to a null pointer;
> - if it has arithmetic type, it is initialized to (positive or unsigned) zero;
> - if it is an aggregate, every member is initialized (recursively) according to these rules, and any padding is initialized to zero bits;
> - if it is a union, the first named member is initialized (recursively) according to these rules, and any padding is initialized to zero bits;

So, in your first code sample, `u.i` will be initialised to zero. I'm not sure about the second code sample at all. I cannot see the point of the `union` there. I rather suspect that you meant to use a `struct` rather than a `union`. But note that the two code examples are very different because the `union` in the first has static storage duration and in the second the `union` has automatic storage duration. This results in completely different semantics for uninitialized variables.
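To make the storage-duration distinction concrete, here is a minimal sketch (the names are made up for illustration):

```
#include <iostream>

union U { int i; float f; };

U g; // static storage duration: g.i is zero-initialized before main() runs

int main() {
    U local; // automatic storage duration: local.i is indeterminate
    std::cout << g.i << std::endl; // guaranteed to print 0
    // std::cout << local.i;       // reading this would be undefined behavior
    return 0;
}
```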